On April 2nd 2013 President Obama formally unveiled the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative, which was previously known as the Brain Activity Map project (my initial reaction to this project can be found here). The full text of the President’s remarks can be found here, and a video question and answer session with NIH Director Francis Collins and DARPA director Arati Prabhakar can be found here (the comments section is, as always, entertaining). NIH has published a website that outlines some of the details here.
Why do we need a new BRAIN initiative? It’s not like we haven’t mapped the brain before.
Using the Golgi method, Santiago Ramón y Cajal mapped the fine circuitry of the brain in astonishing detail in the 1890s. In 1909 Korbinian Brodmann (Fig.1) provided a more cartographic rendering of the brain, parsing its vastness into discrete areas based on cytoarchitecture and providing labels for different regions that allowed researchers across the globe to communicate through a common map. Wilder Penfield and other neurosurgeons in the first half of the twentieth century electrically zapped the cerebral cortex, producing the first of what we think of as functional human brain maps. Now we’re comfortable with the notion that we can derive some idea, however imperfect, of underlying neural processing with functional imaging approaches like fMRI and MEG.
Are we there yet? Not quite.
What major knowledge gap will the BRAIN initiative fill (Fig.2)? We do very well at measuring brain activity at different scales: take any one of the ~86 billion neurons in our brain and we can measure its ion channels, the tiny molecular machines that generate electrical patterns of brain activity, and we can record its spiking output, cell by cell. At the level of the whole brain, we have imaging technologies like fMRI and MEG that can measure relative activity in larger patches and, to a degree, in connected networks during meaningful behaviors.
But there’s very little that can give us both kinds of information simultaneously, a sort of “Mandelbrot set” view of neural systems (Fig.3). Imagine you were an alien orbiting the planet, interested in understanding humanity, and all you could see were the lights of our cities at night – you’d learn a bit about humanity by observing those energy patterns, but it would not be a nuanced view that told you very much about people: that’s the fMRI view. Now, let’s say you land your spaceship and listen in on a water cooler conversation between two coworkers – you’d learn about their TV habits and opinions, and some specifics of their interpersonal interaction, but not much about humanity writ large: that’s the reductionist, single-cell view. In between, there’s a lot of interesting information – such as the interactions and flow of resources and information between communities, and the network ecosystem of those interactions. If we could understand this “community” level of organization, and how it relates to the levels above and below it, we would understand much more about how the brain works as it does things like remember where you left your car keys – and this knowledge of the normal brain would in turn let us figure out how these community interactions go wrong in memory disorders like Alzheimer’s disease.
There’s no doubt that we need to fill this gap with novel feats of engineering and mathematics that will allow us to collect data at an unprecedented level of resolution, and at multiple scales. Along with this, we’ll need muscular informatics tools to deal with the massive data sets that will be generated.
Can we be more precise than this? Yes, we can guess what these new technologies will look like; we can define the set of useful features that such technologies will bear and the parameter space in which we wish to operate. But can we specify what the solutions will be? That is where it becomes tricky – because we just don’t know. Human beings are notoriously bad at predicting scientific advances (think flying cars and jet packs). And that’s where we need to exercise caution.
Is there an apt metaphor for this sort of big science? Most of us are too young to remember the Manhattan Project, in which the United States designed and built the first atomic bomb. The popular notion of the Manhattan Project lodged in our collective consciousness is rather different than how the project actually played out. The real Manhattan project was highly secretive, run by what was essentially a cabal of skilled scientists and military officials, and was an incredibly risky proposition. Add to this that success meant that the world would possess a weapon of unprecedented destructive power, and the level of secrecy was profound. But we did succeed, and in a spectacular, world-changing way that leads people to periodically say, “We need a Manhattan project to X” where “X” is a monumental problem that seems too big for little fish to nibble.
But the BRAIN initiative should be different, almost the complete opposite of the Manhattan Project – sort of like the Human Genome Project, which appears to be the closest metaphor.
It should be transparent. In a time devoid of the internet and computers (imagine!), the Manhattan Project collected talent in one place to promote collaboration – now, with the communication technologies at our disposal, we don’t need to go to such lengths to achieve meaningful collaboration, and because this is all being done in service of humanity there’s little need for secrecy. The information from such a project should be freely accessible to all scientists and the public. The data and findings shouldn’t be hidden behind a paywall (though the papers that first described the project were behind journal subscription paywalls. Ahem.). Director Collins has attempted to reassure us on this point, saying, “We firmly believe, and this is strongly supported across the government in terms of the science agencies, in making data accessible immediately to all those who might be able to utilize this to make new insights.”
Director Collins explicitly cited the Human Genome Project as a model for how data dissemination would occur. “All the data produced would be immediately placed on the internet without any barrier to access or any attempt to file for intellectual property for things that seemed like they were basic science and they should be in the public domain. The BRAIN project will follow those same principles…” The neuroscience community will have the opportunity to, according to Collins, “build on what the core group produces”.
A big concern among the research community is the perception that already strained resources may be drained to support a cabal of star researchers (update 4/4/2013: see interview with William Newsome, a co-director of the advisory group). Dr. Collins answered this resource question in more detail during the Q&A than I had previously seen (see ~8 minutes into the video). Where will the first $100 million down payment come from? Here’s the breakdown according to Collins:
1) $40 million will come via the Neuroscience Blueprint, a “rolling investment fund”.
2) DARPA will chip in $40 million.
3) The National Science Foundation (NSF) will contribute $20 million.
4) “Other institute contributions” will make up the rest, including, according to Collins, “A little bit from my own discretionary fund”, as well as unspecified foundation support. Since the cited sources already add up to $100 million, the amount of that contribution is not clear, but could refer to future years.
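To make the point about the breakdown concrete, here is a quick back-of-the-envelope check: the three named sources alone already sum to the full $100 million, which is why the size of the “other institute contributions” is unclear. The sketch below also checks Collins’s “~0.3% of the NIH budget” figure, using an assumed FY2013 NIH budget of roughly $31 billion (that budget number is my assumption, not a figure from the announcement):

```python
# Back-of-the-envelope check of the BRAIN initiative funding figures.
# Amounts are in millions of USD, from Collins's breakdown in the Q&A.
blueprint = 40   # Neuroscience Blueprint "rolling investment fund"
darpa = 40       # DARPA contribution
nsf = 20         # National Science Foundation
named_total = blueprint + darpa + nsf
print(f"Named sources: ${named_total}M")  # already the full $100M

# Collins's "~0.3% of the NIH budget" claim. The NIH budget value below
# (~$31,000M, i.e. ~$31B) is an assumption for illustration only.
nih_budget = 31_000
share = 100 / nih_budget
print(f"Share of NIH budget: {share:.2%}")
```

On those assumed numbers the down payment works out to about 0.3% of the NIH budget, consistent with Collins’s characterization of it as a small slice of the overall enterprise.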
When asked why we should be making this investment when NIH paylines are so low (with sequestration, NIH success rates will further plummet, with many excellent projects going unfunded), Collins answered, “That’s a very serious question and of course we are deeply concerned. At the present time, the support for biomedical research is in a pretty difficult pickle. Over the course of the last 10 years the NIH budget has lost about 20% of its purchasing power, with flat budgets and inflation eroding our ability to conduct research.”
Collins went on to suggest that the opportunity outweighs the risk. He also stated that the amount of the first-year investment, $100 million, is only about 0.3% of the NIH budget (the current expenditure by NIH on neuroscience is ~$5 billion). “It’s not as if we are taking this money and putting it somewhere else that is completely disconnected from the overall enterprise”, he said. The argument appears to be that while these are not new dollars, the relatively small proportion directed at the BRAIN initiative will not cause unintended competition for current funding. The argument on the NIH website devoted to the project is rather zen-like: “Five years ago a project such as this would have been considered impossible. Five years from now will be too late.” With such an ill-defined endgame the rationale for this statement isn’t entirely clear.
Regarding the team of scientists who will direct the initiative (at this stage, an advisory group), Collins said, “we felt this was the perfect moment to convene a Dream Team of the most visionary neuroscientists across the board, from simple systems to complicated ones,” put them into a, “clearly charged circumstance of telling us what the milestones should be, what the goals should be”, including determining the balance between technology development and experimental focus. The renaming would suggest that technology development will be emphasized.
Cori Bargmann of Rockefeller University and William Newsome of Stanford University were named by Collins as co-directors of this effort. Apparently, this group will have enormous influence on the project (and, I would offer, on neuroscience research) for the foreseeable future. “They are going to guide us, and a lot of what NIH does going forward (on the project) will be very much dependent upon what this group comes up with.” The group will set immediate and longer-term priorities. According to Collins, “They are gonna give us initial feedback this summer about what we should start on in fiscal year (20)14.” And, “then we’re going to ask them by the summer of 2014 to lay out a much more detailed schedule of what exactly the milestones ought to be so that we have a firm foundation for this project.”
Some within the research community fear that placing tremendous resources into a small number of projects, no matter how capable the investigators may be, could be a recipe for failure – like betting your mortgage on one horse at the races. It also runs counter to the idea of serendipity that often underlies scientific advances. Incentives for collaboration or the reserving of funds for multidisciplinary work that will be necessary to crack the complex technical problems that will arise could allay these fears. A perception among the scientific community that there will be winners and losers could harm the long term goals of collaboration and team science on which such an effort will depend. A good approach might be to use the existing NIH request for information (RFI), Request for Applications (RFA) and Program Announcement (PA) structures, and allow for a range of funding mechanisms – from small grants to large programmatic proposals. Smaller projects could still function under data sharing requirements established for the larger effort to ensure that the results are disseminated and are synergistic with the broader theme.
Such diffusion could also mitigate risks and biases. In scientific endeavors like neuroscience, where there are great technical hurdles and significant nonlinearities, a little noise in the system is actually a good thing. It prevents us from settling too firmly on one path, or method, to the exclusion of others, a bit like the brain does. It guards against cabal mentality and confirmation biases. This is important because the solutions to complex problems often arise where you least expect them.
When asked about the public perception of the project, Collins noted that “Maybe it will be our generation’s moon shot”, but went on to caution that it should not be thought of as a “race”, but instead the first steps toward international cooperation on the big problem of how the brain works. He went on to say, “The results of this project, if it is successful, should touch all of the 7 billion lives on the planet — so we really ought to work on it together.” Dr. Arati Prabhakar, Director of DARPA, commented on the value of the project in stimulating the imagination of future scientists, adding, “Space was inspirational to us as kids, it was something that fired our imaginations and got our pulses racing. I think this project certainly has that kind of potential.”
I agree with those sentiments. Mapping the brain is a worthy investment with significant returns to scientific knowledge and human health, and a compelling case is being made to invest in a robust, big-science effort to understand the mind in health and disease. Some – but not all – of my initial questions have been answered, and with the talent the Director is apparently assembling, I’m hopeful that we can look forward to a more open process and an interesting debate as the initiative continues to take shape. It’s our responsibility to apply the same rigor, precision and constructive skepticism to this process that we do to the rest of our science, and to listen to the important ideas that run counter to the narrative that has developed, particularly as it relates to the “little science” that constitutes the bulk of biomedical research, and from which remarkable advances will continue to be made.
Update 4/03/2013: A new fact sheet listing additional details was published by the White House here.
[Note: In full disclosure, my own lab is funded in part through NIH. So I am definitely biased toward seeking cures for diseases like brain cancer, epilepsy, PTSD, CTE, Alzheimer’s and Parkinson’s. My lab houses projects using patch clamp methods as well as human brain imaging]