NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Institute of Medicine (US) Forum on Neuroscience and Nervous System Disorders. From Molecules to Minds: Challenges for the 21st Century: Workshop Summary. Washington (DC): National Academies Press (US); 2008.


Grand Challenge: How Does the Human Brain Work and Produce Mental Activity?

How does the brain work and produce mental activity? How does physical activity in the brain give rise to thought, emotion, and behavior?

We envision our brains taking in data, running those data through some unknown processes, and then somehow telling us how to act, feel, or behave. “What are the algorithmic principles that the brain uses?” Blakemore asked. “Are there some which are nonalgorithmic? How can we approach the modeling of those principles?”

In the deepest sense, we do not know how information is processed, stored, or recalled; how motor commands emerge and become effective; how we experience the sensory world; how we think or feel or empathize. This is because explanations ultimately must be integrated across levels of analysis (molecular, cellular, synaptic, circuit, systems, computational, and psychological), and until now the field has not been mature enough to integrate information across all of these levels.

These are some of the most compelling questions in the world, said Olsen in the opening session of the workshop.

Of course there is another reason—or rather, many millions of reasons—why we do not have a working theory of the brain. As Blakemore pointed out, there are more neurons in the brain than there are stars in the galaxy, and we form more than 1 million new connections among these neurons each day. Simply put, the scope of the challenge is awesome.

Still, the feeling among many at the workshop was that there was hope in meeting this challenge.

The reason? Major technological advances during the past few years are allowing neuroscientists to do the kind of research and tackle the kind of challenges they have always dreamed of, starting, according to many at the workshop, with drawing up the wiring diagram of the human brain.

Mapping the Human Brain

The idea of mapping the human brain is not new. The “father of neuroscience,” Santiago Ramón y Cajal, argued at the turn of the 20th century that the brain was made up of neurons woven together in a highly specific way. We have been trying to map this exquisite network ever since.

In fact, scientists in other settings have called the wiring diagram a Grand Challenge of neuroscience in and of itself. It appears on the Grand Challenges of the Mind and Brain list for the National Science Foundation (NSF, 2006), on the Grand Challenges list of the National Academy of Engineering (NRC, 2008), and on the wish lists of at least a half-dozen major scientific fields, from genetics to computer science.

If we are interested in how the mind works, then we definitely need to know the physical instantiation of brains and function, remarked Jeffrey Lichtman, professor of molecular and cellular biology, Harvard University. This effort will require some mechanism to obtain the connectional maps that will integrate anatomy, neuronal activity, and function. Until those are available, the field will not be able to move forward to its full potential.

The challenge is similar, in many ways, to mapping the human genome: We might not know exactly what we will learn, but we have a strong belief that we will learn a lot, commented Leshner.

So why has it not happened?

Because neurons are very small and the human brain is exquisitely complex and hard to study. Eve Marder, professor of neuroscience at Brandeis University and president of the Society for Neuroscience, noted that scientists have been working on circuit analysis for nearly 40 years, primarily with smaller organisms, particularly invertebrates, because their simpler neurological systems are more amenable to study and analysis.

The classic approach, in place since the 1960s, has been simple: Define behaviors, identify neurons involved in those behaviors, determine the connectivity between those neurons, and then excite individual neurons to understand their role in influencing behavior. This approach is called “circuit dynamics,” and it has been tremendously helpful in understanding how these simple neurological systems work.

But as you move from sponges and anemones to primates and humans, each step of that analytical process becomes vastly more challenging.

As Marder noted, the impediments, until today, to understanding larger circuits and vertebrate brains include difficulty in identifying neurons, difficulty in perturbing individual classes of neurons in isolation, and difficulty in recording from enough of the neurons at the same time with enough spatial and temporal resolution.

In other words, difficulty arose in every step of the circuit dynamics process.

But the key words in Marder’s statement are “until today.” If you look at the three things Marder identified as stumbling blocks, major technological breakthroughs over the past few years have solved or are close to solving each one, starting with a new technique born from the lab of Lichtman: “the Brainbow.”

Technological Advance: The Brainbow

Mapping the brain is not easy. Neurons and the connections between them are so small and complex that tracing their path through the brain has been nearly impossible.

For more than a century, the best method available to researchers has been the “Golgi stain.” Developed in 1873 (and little improved on since), the Golgi method uses a stain of silver chromate salt to trace the path of individual neurons, right down to the axons and dendrites.

The Golgi method works quite well, but it comes with two major flaws that limit its use in studying complex connections among neurons in a single network. First, the method stains everything the same color, grey, making it very difficult to study multiple neurons at once or to envision how different neurons link together. Second, it is difficult to target specific cells for staining: the neurons that take up the stain do so in a largely random pattern.

Over the years, researchers have improved on the Golgi stain. For example, geneticists found ways to “tag” different neurons with genes that naturally produce fluorescent colors, so that the neurons themselves could be made to glow red, blue, or yellow. This advance allowed researchers to study a handful of neurons at once.

The neurosciences have now matured to the point where scientific knowledge and technological advances are converging to bring new capabilities. For example, in 2007 Harvard University researcher Jean Livet, working out of Lichtman’s lab, published a paper showing how fluorescent-coding genes from jellyfish and coral could be combined to force different neurons to express hundreds of different colors (Livet et al., 2007). This Brainbow technique relies on three genes—coding for red, blue, and yellow—which are combined in different levels to produce all the different tones. A cell might have three red genes, two blue, and one yellow, for instance.

The result? Researchers can, for the first time, identify and map hundreds of neurons at once, seeing how they wrap and interact with one another, tracing the map of the brain in greater detail than was possible just 1 or 2 years earlier.
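The combinatorial logic behind Brainbow is simple to illustrate. In the minimal sketch below (the copy-number mechanics, the 0–3 range, and the normalization are illustrative assumptions, not the published construct), each neuron randomly acquires some number of copies of each of the three color genes, and the normalized mix gives that neuron its tone:

```python
import random

def brainbow_color(max_copies=3, rng=random):
    """Draw random copy numbers for three fluorophore genes and
    mix them into a normalized RGB-like tone for one neuron."""
    copies = [rng.randint(0, max_copies) for _ in range(3)]  # red, blue, yellow
    total = sum(copies) or 1  # avoid division by zero for an all-zero draw
    return tuple(c / total for c in copies)

# Each neuron ends up with its own blend, so neighboring neurons
# become visually distinguishable under the microscope.
random.seed(0)
palette = {brainbow_color() for _ in range(1000)}
print(len(palette))  # dozens of distinct tones from only three genes
```

The point of the sketch is that a handful of genes, combined at random levels, yields far more distinguishable colors than the genes alone, which is what lets researchers tell hundreds of tangled neurons apart.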

Technological Advance: Neuronal “Light Switch”

Marder’s second impediment was the challenge of perturbing individual neurons. Even if you can see the connections between the actual cells, if you want to see how one neuron connects to and influences another, and most importantly what impact that has on behavior, you must be able to “excite” those neurons to find out. Over and over and over again.

The classical method uses electrodes to stimulate neurons, but it is neither precise nor particularly sophisticated. Neurons are so small and make so many connections—an individual neuron can make well over 100 separate connections with other neurons—that it is extremely difficult to precisely activate a single neuron, let alone a specific neuronal connection, in an in-vitro model system, and even more so in an in-vivo vertebrate nervous system.

In 2005, however, researchers at Stanford University and the Max Planck Institute of Biophysics in Germany developed a neuronal “light switch” that allows them to turn individual neurons or neuronal connections on or off by exposing them to light (Boyden et al., 2005). The science behind the study is impressive. Researchers discovered a protein from green algae that switches the electrical state of a cell when exposed to blue light. By inserting the gene for this protein into rat neurons, researchers were able to gain control over those neurons and consequently their connections, turning them on and off with the flip of a switch. As an added bonus, researchers paired this protein with one that glows when exposed to green light, allowing them to both identify and control individual neurons. Under green light, researchers can view the neurons that make the protein; by switching the beam to blue, they can excite a neuron and investigate its effects.

The applications and implications of this new technique are many. From a research perspective, being able to turn individual neurons on and off allows advanced study of the function of individual neurons in the brain. From a clinical perspective, the ability to modulate neurons using something as simple and noninvasive as light opens up opportunities for extremely targeted therapies for diseases such as Parkinson’s, depression, and more.

Technological Challenge: Spatial and Temporal Resolution

Marder’s third challenge—the difficulty in recording from enough of the neurons at the same time with enough spatial and temporal resolution—remains a major challenge for the field. Both imaging and electrode recording capabilities have come a long way in recent years, but multiple researchers expressed the need for more.

Multichannel Microelectrode Recording Arrays

The development of multichannel microelectrode recording arrays allows researchers to accurately measure the activity of many neurons simultaneously. Advances in photonics, electronic circuitry, and engineering have made it possible to shrink these arrays substantially, dramatically increasing the number of neurons that can be monitored directly. Moreover, researchers believe the devices can now be implanted in the brain, or elsewhere in the nervous system, suggesting we could measure the output of individual neurons over long periods of time (Kelly et al., 2007).

If we are going to get a real map of the functional wiring diagram of the human, we need to be able to do it noninvasively and on a widespread basis.

“Brain functions are encoded in a distributed network in the brain,” said Bin He, professor of biomedical engineering, electrical engineering, and neuroscience, University of Minnesota, so it is important to image brain connectivity and network dynamics not only beyond localized circuits, but throughout the entire network.

Functional Magnetic Resonance Imaging

Functional Magnetic Resonance Imaging (fMRI) allows researchers to noninvasively measure blood flow and blood oxygenation in the brain. Because blood flow and oxygenation are closely linked with brain activity, researchers can see which areas of the brain are active when volunteers (or research animals) are performing an assigned task.

A circuit map that does not correlate back to activity is of limited value; fMRI is one technique used to tie anatomy back to function, allowing this correlation. Unfortunately, fMRI readings are not perfect. Spatial resolution has only recently advanced to the millimeter level, and the measurements are not in real time: there is a delay of about a second between brain activity and the associated changes in blood flow and oxygenation that fMRI can detect. Researchers, however, need to measure activity in real time, on a millisecond-by-millisecond basis, and at a much smaller spatial scale. As a result, they are now working on ways to combine fMRI readings with faster techniques such as electroencephalography (EEG) and magnetoencephalography (MEG).

“Can we develop a technique which can noninvasively image the neural activity at millimeter spatial resolution and millisecond temporal resolution?” asked He, in a comment echoed by others at the workshop. But even this resolution is coarse relative to the size of a neuron: a cubic millimeter of brain cortex contains 10⁴ to 10⁵ neurons.
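The gap He describes can be made concrete with back-of-envelope arithmetic (the 1 mm isotropic voxel is an illustrative assumption based on the resolution figures above):

```python
# Back-of-envelope: how many neurons does one fMRI voxel average over?
neurons_per_mm3 = (1e4, 1e5)   # cortical density range cited in the text
voxel_mm = 1.0                 # assumed ~1 mm isotropic voxel resolution
voxel_volume = voxel_mm ** 3   # cubic millimeters

low, high = (density * voxel_volume for density in neurons_per_mm3)
print(f"one voxel pools roughly {low:.0f} to {high:.0f} neurons")
```

Even at millimeter resolution, every measurement is an average over tens of thousands of neurons, which is why single-neuron questions remain out of reach for whole-brain imaging.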

Computer Science and Learning Algorithms

Even with all these advances in collecting data, the challenges of mapping the brain remain enormous. The human genome project would not have been possible until the turn of the 21st century, as the genetics field simply did not have the automated techniques or the computer power to tackle the project. The amount of data involved in mapping the structure of the brain is likely to be an order of magnitude greater than was required for mapping the genome, and will require enormous computing capacity. This is where computer science comes in.

One example of using computational methods to link neural activity to psychological states was provided by Tom Mitchell, chair of the Machine Learning Department at Carnegie Mellon University, who described how machine learning methods can decode from fMRI data which words or pictures a person is viewing or thinking about. Such algorithms, which have been adopted by researchers studying brain-wide neural representations, provide a direct link between the biology of neural activity and abstract mental states such as thinking about an object.
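The decoding idea can be sketched in a few lines. This toy example uses synthetic activity patterns and a nearest-centroid classifier; the state names, voxel count, and noise levels are invented for illustration and are not Mitchell's actual data or methods:

```python
import numpy as np

rng = np.random.default_rng(42)
n_voxels = 50

# Synthetic "activity patterns": two mental states, each with a
# characteristic mean pattern across voxels, plus trial-to-trial noise.
means = {"tool": rng.normal(0, 1, n_voxels),
         "building": rng.normal(0, 1, n_voxels)}

def simulate_trials(state, n=20, noise=0.8):
    return means[state] + rng.normal(0, noise, (n, n_voxels))

# "Train" by averaging labeled trials into one centroid per state.
train = {s: simulate_trials(s) for s in means}
centroids = {s: x.mean(axis=0) for s, x in train.items()}

def decode(pattern):
    """Nearest-centroid decoding: which state's mean pattern is closest?"""
    return min(centroids, key=lambda s: np.linalg.norm(pattern - centroids[s]))

# Evaluate on fresh, unlabeled-at-decode-time trials.
test_trials = [("tool", t) for t in simulate_trials("tool", 10)] + \
              [("building", t) for t in simulate_trials("building", 10)]
accuracy = np.mean([decode(x) == label for label, x in test_trials])
print(f"decoding accuracy: {accuracy:.0%}")
```

The design choice mirrors the text's claim: the decoder never models any single "voxel" in isolation; the mental state is read out from the whole distributed pattern at once.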

In addition, the work of Sebastian Seung’s lab at the Massachusetts Institute of Technology was highlighted. Seung and colleagues have developed a machine-learning algorithm that can help trace the path of individual neurons through the brain (Jain et al., 2006). In Seung’s program, a machine “watches” as humans map individual neurons. It then examines how the human researchers did this work and derives parameters to follow the same pattern, potentially providing a tool that would dramatically decrease the number of person-hours required for some of the work.

To localize proteins and other chemicals efficiently and construct the neurochemical microcircuitry of the brain will require the equivalent of the automated sequencers that drove, with increasing rapidity, the sequencing of the human genome, said Joseph Coyle, professor of psychiatry and neuroscience at Harvard Medical School.

There is no way that a human mind or a collection of human minds could effectively and efficiently sift through the tremendous amount of data. Rather, it is going to require automated procedures running on computers that have proved themselves in one domain being applied to this domain, added Read Montague, professor of neuroscience at the Human Neuroimaging Lab, Baylor College of Medicine.

Lichtman stressed that this is big science. No single laboratory can do this. Rather it can only be done through a multilaboratory, national, even an international effort.

All of these advances have researchers like Lichtman and Marder very excited.

“I would say, today, 2008, 2009, we are right at a historical cusp, because we have revolutionary opportunities for circuit analysis in the next decade,” said Marder.

“Is this a possibility?” asked Lichtman, who used the word “connectome” to refer to the wiring diagram of the brain. “Can we get connectomes? I would argue that we can. Finally, there are the necessary techniques to do this.”

The Importance of Neural Networks

The connectome, of course, is just one step, a way of breaking the brain down into understandable pieces. New research shows that the brain is significantly more than the sum of its parts, and that a network-level view is critical to understanding how it functions.

When information comes in from the outside world—say, when you look at the Mona Lisa—the sensory input is transformed in the brain into a series of electrical spikes. It is not that one or two neurons fire; entire regions of the brain (and perhaps the entire brain itself) light up, with a complexity of pathways that tells us a simple circuit map cannot fully account for activity in the brain.

William Bialek, a professor at the Joseph Henry Laboratories of Physics and the Lewis-Sigler Institute of Integrative Genomics, Princeton University, described this series of spikes at the workshop as “the language in which the nervous system does its business.”

“Although much of the history of neuroscience is about understanding the responses of individual neurons,” said Bialek, “in fact, almost all of our experiences are based on the activity of many, many neurons.”

He put forward the human retina as an example. If you measure the correlations among different neurons processing information from the retina, you find that the correlations are very weak. Therefore, it is tempting to assume that it is the individual neurons that matter, and not the whole. But Bialek says some order is hiding in the code.

Although all the correlations among neurons are weak, nearly all pairs are correlated. Intriguingly, this is reminiscent of models for how collective opinions form in societies, but it is also reminiscent of earlier models in statistical physics, where, in fact, surprisingly dramatic collective effects can be hiding in these weak correlations.

John Hopfield proposed just such a model of neural networks in 1982, and the model has been supported by the research in many ways. Bialek explained, for instance, that these networks have a tendency to fall into different “states,” or general patterns of electrical spikes, which are more consistent than the individual firing of single neurons. If you play a movie to the retina twice, for instance, the exact neurons that fire will change each time. The overall pattern of brain activity, however, will be retained and reproduced.
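Hopfield's model is easy to sketch. The toy network below (the network size, seed, and corruption level are arbitrary choices for illustration) stores two patterns with a Hebbian learning rule and recovers one of them from a corrupted version, showing how a stored "state" of the whole network is more stable than the firing of any individual unit:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # binary units with states in {-1, +1}

# Store two random patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(2, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=10):
    """Synchronous updates: the network falls into a stored attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# Flip 15% of one pattern's units, then let the dynamics clean it up.
noisy = patterns[0].copy()
flip = rng.choice(n, size=15, replace=False)
noisy[flip] *= -1

recovered = recall(noisy)
overlap = float(np.mean(recovered == patterns[0]))
print(f"fraction of units matching the stored pattern: {overlap:.2f}")
```

This is the property Bialek's retina example points at: the exact units that flip can vary from trial to trial, but the collective dynamics pull the network back to the same overall pattern.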

We have already made great strides in being able to understand these codes, according to some at the workshop. Theodore Berger, professor of engineering at the University of Southern California, noted that multisite recording array technologies and new advances in computer algorithms, including nonlinear dynamic models, have made it much easier to understand the representations of the outside world in the brain. Berger strongly suggested that these technological developments would rapidly translate into substantial breakthroughs.

In the past decade or two, we have achieved a great deal in brain mapping and localization per se, but today the need is to move from brain localization to connectivity imaging, remarked He.

Others thought that even more surprising patterns may emerge—patterns we cannot even imagine today.

Montague argued that the field of neuroscience brings psychological concepts of behavior to the table, working with assumptions that the brain works in a particular way and that these assumptions influence how we study the brain.

“When we look for neural correlates—we go look for the neural correlates of learning and memory or we go look for the neural correlates of scratch-pad memory or long-term memory—maybe there are some hidden concepts there that a more agnostic approach on the outside and the inside would reveal,” said Montague.

Montague called for more rigorous definitions of behavior and a more agnostic approach to research, using the power of modern computing technology to search for patterns we cannot even imagine. The time is ripe for a bottom-up analysis in which one can move away from psychological space to computational space, with good quantification of behavioral endpoints.

The Way Forward

A true theory of the brain, in some ways, is the ultimate goal: understanding how the physical processes in our neurons turn into behaviors and perceptions of the outside world.

As the above discussions demonstrated, and as summarized by the session chair and Provost of Harvard University, Steven Hyman, we are still in the early stages of answering that question, or even figuring out what that question might look like. There was widespread support in the room for the importance of mapping the physical circuitry of the brain, but there was also a feeling that a physical map alone would not be sufficient to explain how it actually works. There were suggestions to focus on neural networks and the language of electrical activity in the brain, as well as efforts to drive agnostic data crunching to search for patterns that we cannot even imagine.

Panelists generally agreed that great technological breakthroughs have made this effort more possible now than ever before, but that additional breakthroughs—particularly in imaging and computer learning—were needed.

In the end, the payoff from this kind of research would be huge. Not only is developing a viable theory of the brain’s capabilities one of the great intellectual challenges in mankind’s history, but this research would also have tremendous applications for curing disease, guiding education policies, and maintaining health.

We have reached a technical point where it becomes feasible to imagine approaching an understanding of the way the brain is constructed at a level of detail, granularity, and rigor such that we could imagine reaching a theory of the mind and the brain at some point, commented Dennis Choi, former president of the Society for Neuroscience and Director of the Comprehensive Neuroscience Center at Emory University. All that remains is to do it.

Copyright © 2008, National Academy of Sciences.
Bookshelf ID: NBK50989
