Feb 18 2008

Reverse Engineering the Brain

The annual meeting of the American Association for the Advancement of Science (AAAS) concludes today in Boston. Unfortunately my schedule did not allow me to attend, but the press releases and news reports are flowing from Beantown. Among the announcements were the 14 grand challenges for engineering facing humanity in the 21st century (produced by a panel of experts put together by the National Academy of Engineering). On the list was the task of reverse engineering the human brain. (You can see the full list in this article.)

To me it seems highly likely that this is a goal we will achieve in the 21st century – 92 years seems like more than enough time. In fact, we have already begun to reverse engineer the brain. If we separate this task into two broad categories – hardware and software – both have progressed very far but still have a long way to go. By “hardware” I mean how the neurons and other brain cells work together to create memory, information processing, and sensory perception – how the brain physically works. By “software” I mean what information is actually in the brain and how that information and the different processing areas work together to produce the net effects of mood, thought, and behavior.

Our understanding of the hardware of the brain is already highly detailed, but it is also clear that there is at least a layer of complexity that we have not yet drilled down to. We know how neurons conduct signals, how those signals affect the firing of other neurons, how neurotransmitters work, how neuronal function is modulated by other neurons and other brain cells (like astrocytes), and how the strength of neuronal connections relates to memory. However, the more we look, the more layers of complexity we find in all of these things. And we have only begun to explore how patterns of neuronal activity relate to specific cognitive functions.
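To make that hardware picture concrete, here is a toy sketch in Python of two of the ideas above: a neuron that leakily integrates its inputs and fires past a threshold, and a Hebbian-style rule that strengthens a synapse whenever its input helps trigger a spike. Every number here is invented for illustration – nothing is biophysically calibrated.

```python
# Toy leaky integrate-and-fire neuron with a Hebbian-style weight update.
# All constants are illustrative, not biophysically calibrated.

def simulate(inputs, weight=0.5, threshold=1.0, leak=0.9, learn_rate=0.05):
    """Drive one neuron with a spike train; strengthen the synapse
    whenever a presynaptic spike helps trigger a postsynaptic spike."""
    potential = 0.0
    spikes = []
    for pre_spike in inputs:
        potential = potential * leak + weight * pre_spike  # leaky integration
        if potential >= threshold:                         # fire and reset
            spikes.append(1)
            potential = 0.0
            if pre_spike:                                  # Hebb: "fire together, wire together"
                weight += learn_rate
        else:
            spikes.append(0)
    return spikes, weight

train = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
out, w = simulate(train)
print("output spikes:", out)
print("final synaptic weight:", round(w, 3))
```

Real neurons are vastly more complicated than this, of course – that is precisely the layer of complexity we have not yet drilled down to.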

Understanding the software of the brain is a harder nut to crack, but recent progress has been accelerating, due largely to improved technology, like functional MRI scanning. How, exactly, do our brains store the number 3 or the word “pterodactyl”? How do they process different types of information, and how do the different parts of the brain interact with each other? Progress on all these fronts is steady, but neuroscientists today are like astronomers who just recently turned their telescopes to the sky and are trying to make sense of all the complexity of the universe.

Already we have learned a great deal, as fMRI studies are mapping out which parts of the brain do what. We are well on our way to reverse engineering the brain.

We are also at the relative beginning of another avenue to reverse engineering the brain, and that is using computers to model the brain. Ray Kurzweil, author of The Singularity Is Near and an expert on artificial intelligence and future technology, who was one of the experts on the NAE panel, believes that we will succeed in creating human-level artificial intelligence by 2029 – 21 years from now. He extrapolates from current progress that we will have both powerful enough computers and sophisticated enough software to create artificial intelligence in about two decades. Kurzweil has clearly thought in great detail about this question – the 2029 figure is not a casual guess. But even if his estimates are off by a factor of two, that would still mean we achieve this goal by 2050, only halfway through the 21st century.

The hardware extrapolation seems to be the easier of the two – computing power has been growing at a steady and predictable rate for decades, with transistor density doubling roughly every two years (a phenomenon known as Moore’s Law). Extending this for another couple of decades does not seem like much of a stretch. Sure, we may run into unexpected technological hurdles, but so far we have been able to develop new approaches to computing technology to keep blasting through them and keep Moore’s Law on track. So while there is always uncertainty in predicting future technology, this level of computer advancement can at least be considered highly probable.
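The arithmetic behind that extrapolation is simple enough to check. A minimal sketch, assuming the commonly cited doubling period of about two years (the exact period is debated):

```python
# Back-of-the-envelope Moore's Law extrapolation: computing capacity
# doubling roughly every two years (the commonly cited figure).

def growth_factor(years, doubling_period_years=2):
    return 2 ** (years / doubling_period_years)

# 21 years matches Kurzweil's 2029 target; 42 years matches 2050.
for years in (21, 42):
    print(f"{years} years -> about {growth_factor(years):,.0f}x today's capacity")
```

That works out to roughly a thousandfold increase in raw capacity by 2029 and around a millionfold by 2050 – which is why the hardware side of the prediction worries few people.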

The software extrapolation is, I think, more difficult to make, as conceptual hurdles may be harder to overcome and may stall progress for an undetermined amount of time. So I am less confident in this prediction, but I still give it a high probability based upon the steady progress that has been made so far. Also, I cannot think of any theoretical reason why artificial intelligence should not become a reality.

The work of MIT researcher Tomaso Poggio is at the core of this question. He has been engaged in two parallel lines of research – using computer software to model brain function, and using our understanding of brain function to improve computer software. At the AAAS meeting this past weekend he explained how he is now combining these two lines of research into one. He came to this conclusion after a computer model of the human visual system he was working on turned out to function as an artificially intelligent visual recognition system superior to previous such computer systems. He is quoted as saying:

“My perspective changed in a dramatic way. It meant that we may be closer to understanding how the visual cortex recognizes objects and scenes than I ever thought possible.”

Computers are now powerful enough that we can run virtual simulations of our current models of brain organization and function. This is therefore a new tool to test our hypotheses about brain function. And at the same time, as a bonus, these computer models can actually function on their own as computer systems. This is similar to using a computer to model how a grandmaster plays chess and in so doing creating a computer program that can actually play chess.
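To give a flavor of what such a model looks like, here is a minimal one-dimensional sketch in the spirit of hierarchical feedforward models of the visual cortex like Poggio’s: “simple” units do template matching, and “complex” units pool over neighboring positions with a max. The tiny “image,” the edge template, and the pooling size are all invented for illustration; real models of this kind stack many such stages in two dimensions.

```python
# Minimal sketch of a two-stage feedforward visual hierarchy:
# "simple" units do template matching, "complex" units max-pool
# over position. All data below is invented for illustration.

def match(patch, template):
    """Simple-cell-like response: similarity of a patch to a template."""
    return sum(p * t for p, t in zip(patch, template))

def s_layer(signal, template):
    """Slide the template across the signal (1-D for simplicity)."""
    w = len(template)
    return [match(signal[i:i + w], template) for i in range(len(signal) - w + 1)]

def c_layer(responses, pool=3):
    """Complex-cell-like max pooling over neighboring positions."""
    return [max(responses[i:i + pool]) for i in range(0, len(responses), pool)]

edge_template = [-1, 1]                    # crude "edge detector"
image_row     = [0, 0, 1, 1, 1, 0, 0, 1, 0]

s1 = s_layer(image_row, edge_template)     # where the edges are, position-specific
c1 = c_layer(s1)                           # edge presence, position-tolerant
print("S1 responses:", s1)
print("C1 responses:", c1)
```

The max pooling step is the design choice doing the interesting work: it trades positional precision for tolerance to where a feature appears, a property the complex cells of the visual cortex are thought to have.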

Fast forward 21 years and (Ray Kurzweil predicts) we will have accurate computer models of brain structure and function that will not only simulate brain function – they will be functional artificial brains. Reverse engineering the brain and creating human-level artificial intelligence are tasks that will progress in parallel – each playing off the other. In fact (as with Poggio’s research) these two tasks will be one and the same research program.

This is all very exciting stuff. But so far I have only been discussing the basic science – creating the computer models. What about the applications? This is a topic for another day.
