Tag Archives: George M. Church

Viewing RNA (ribonucleic acid) more closely at the nanoscale with expansion microscopy (ExM) and off-the-shelf parts

A close cousin of DNA (deoxyribonucleic acid), RNA (ribonucleic acid) acts as a communicator, according to a July 4, 2016 news item on ScienceDaily describing how a team at the Massachusetts Institute of Technology (MIT) managed to image RNA more precisely,

Cells contain thousands of messenger RNA molecules, which carry copies of DNA’s genetic instructions to the rest of the cell. MIT engineers have now developed a way to visualize these molecules in higher resolution than previously possible in intact tissues, allowing researchers to precisely map the location of RNA throughout cells.

Key to the new technique is expanding the tissue before imaging it. Making the sample physically larger means it can be imaged with very high resolution using ordinary microscopes commonly found in research labs.

“Now we can image RNA with great spatial precision, thanks to the expansion process, and we also can do it more easily in large intact tissues,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, a member of MIT’s Media Lab and McGovern Institute for Brain Research, and the senior author of a paper describing the technique in the July 4, 2016 issue of Nature Methods.

A July 4, 2016 MIT news release (also on EurekAlert), which originated the news item, explains why scientists want a better look at RNA and how the MIT team accomplished the task,

Studying the distribution of RNA inside cells could help scientists learn more about how cells control their gene expression and could also allow them to investigate diseases thought to be caused by failure of RNA to move to the correct location.

Boyden and colleagues first described the underlying technique, known as expansion microscopy (ExM), last year, when they used it to image proteins inside large samples of brain tissue. In a paper appearing in Nature Biotechnology on July 4, the MIT team has now presented a new version of the technology that employs off-the-shelf chemicals, making it easier for researchers to use.

MIT graduate students Fei Chen and Asmamaw Wassie are the lead authors of the Nature Methods paper, and Chen and graduate student Paul Tillberg are the lead authors of the Nature Biotechnology paper.

A simpler process

The original expansion microscopy technique is based on embedding tissue samples in a polymer that swells when water is added. This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes. However, that method posed some challenges because it required generating a complicated chemical tag consisting of an antibody that targets a specific protein, linked to both a fluorescent dye and a chemical anchor that attaches the whole complex to a highly absorbent polymer known as polyacrylate. Once the targets are labeled, the researchers break down the proteins that hold the tissue sample together, allowing it to expand uniformly as the polyacrylate gel swells.
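To see where that roughly 70-nanometer figure comes from, here is a minimal back-of-envelope sketch in Python. The ~300 nm diffraction limit of a conventional light microscope and the ~4.5× linear expansion factor are the approximate values reported for the original ExM work; treat them as round numbers rather than exact specifications.

```python
# Back-of-envelope: effective resolution of expansion microscopy.
# Approximate values from the original ExM report:
diffraction_limit_nm = 300.0  # resolution of an ordinary light microscope
expansion_factor = 4.5        # linear swelling of the polyacrylate gel

effective_resolution_nm = diffraction_limit_nm / expansion_factor
print(f"Effective resolution: ~{effective_resolution_nm:.0f} nm")  # ~67 nm, i.e. around 70 nm
```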

In their new studies, to eliminate the need for custom-designed labels, the researchers used a different molecule to anchor the targets to the gel before digestion. This molecule, which the researchers dubbed AcX, is commercially available and therefore makes the process much simpler.

AcX can be modified to anchor either proteins or RNA to the gel. In the Nature Biotechnology study, the researchers used it to anchor proteins, and they also showed that the technique works on tissue that has been previously labeled with either fluorescent antibodies or proteins such as green fluorescent protein (GFP).

“This lets you use completely off-the-shelf parts, which means that it can integrate very easily into existing workflows,” Tillberg says. “We think that it’s going to lower the barrier significantly for people to use the technique compared to the original ExM.”

With this approach, it takes about an hour to scan a piece of tissue measuring 500 by 500 by 200 microns with a light sheet fluorescence microscope. The researchers showed that the technique works for many types of tissues, including brain, pancreas, lung, and spleen.

Imaging RNA

In the Nature Methods paper, the researchers used the same kind of anchoring molecule but modified it to target RNA instead. All of the RNAs in the sample are anchored to the gel, so they stay in their original locations throughout the digestion and expansion process.

After the tissue is expanded, the researchers label specific RNA molecules using a process known as fluorescence in situ hybridization (FISH), which was originally developed in the early 1980s and is widely used. This allows researchers to visualize the location of specific RNA molecules at high resolution, in three dimensions, in large tissue samples.

This enhanced spatial precision could allow scientists to explore many questions about how RNA contributes to cellular function. For example, a longstanding question in neuroscience is how neurons rapidly change the strength of their connections to store new memories or skills. One hypothesis is that RNA molecules encoding proteins necessary for plasticity are stored in cell compartments close to the synapses, poised to be translated into proteins when needed.

With the new system, it should be possible to determine exactly which RNA molecules are located near the synapses, waiting to be translated.

“People have found hundreds of these locally translated RNAs, but it’s hard to know where exactly they are and what they’re doing,” Chen says. “This technique would be useful to study that.”

Boyden’s lab is also interested in using this technology to trace the connections between neurons and to classify different subtypes of neurons based on which genes they are expressing.

There’s a brief (30 secs.), silent video illustrating the work (something about a ‘Brainbow Hippocampus’) made available by MIT.


Here’s a link to and a citation for the paper,

Nanoscale imaging of RNA with expansion microscopy by Fei Chen, Asmamaw T Wassie, Allison J Cote, Anubhav Sinha, Shahar Alon, Shoh Asano, Evan R Daugharthy, Jae-Byum Chang, Adam Marblestone, George M Church, Arjun Raj, & Edward S Boyden. Nature Methods (2016) doi:10.1038/nmeth.3899 Published online 04 July 2016

This paper is behind a paywall.

Nucleic acid-based memory storage

We’re running out of memory. To be more specific, there are two problems: the supply of silicon and a limit to how much data silicon-based memory can store. An April 27, 2016 news item on Nanowerk announces a nucleic acid-based approach to solving the memory problem,

A group of Boise State [Boise State University in Idaho, US] researchers, led by associate professor of materials science and engineering and associate dean of the College of Innovation and Design Will Hughes, is working toward a better way to store digital information using nucleic acid memory (NAM).

An April 25, 2016 Boise State University news release, which originated the news item, expands on the theme of computer memory and provides more details about the approach,

It’s no secret that as a society we generate vast amounts of data each year. So much so that the roughly 30 billion watts of electricity drawn by server farms today is equivalent to the output of 30 nuclear power plants.

And the demand keeps growing. The global flash memory market is predicted to reach $30.2 billion this year, potentially growing to $80.3 billion by 2025. Experts estimate that by 2040, global demand for memory will exceed the projected supply of silicon (the raw material used to make flash memory). Furthermore, electronic memory is rapidly approaching its fundamental size limits because of the difficulty of storing electrons in small dimensions.

Hughes, with post-doctoral researcher Reza Zadegan and colleagues Victor Zhirnov (Semiconductor Research Corporation), Gurtej Sandhu (Micron Technology Inc.) and George Church (Harvard University), is looking to DNA molecules to solve the problem. Nucleic acid — the “NA” in “DNA” — far surpasses electronic memory in retention time, according to the researchers, while also providing greater information density and a lower energy of operation.

Their conclusions are outlined in an invited commentary in the prestigious journal Nature Materials published earlier this month.

“DNA is the data storage material of life in general,” said Hughes. “Because of its physical and chemical properties, it also may become the data storage material of our lives.” It may sound like science fiction, but Hughes will participate in an invitation-only workshop this month at the Intelligence Advanced Research Projects Activity (IARPA) to envision a portable DNA hard drive that would hold 500 terabytes of searchable data – that’s about the size of the Library of Congress Web Archive.

“When information bits are encoded into polymer strings, researchers and manufacturers can manage and manipulate physical, chemical and biological information with standard molecular biology techniques,” the paper [in Nature Materials?] states.

Cost-competitive technologies to read and write DNA could lead to real-world applications ranging from artificial chromosomes, digital hard drives and information-management systems, to a platform for watermarking and tracking genetic content or next-generation encryption tools that necessitate physical rather than electronic embodiment.

Here’s how it works. Current binary code uses 0’s and 1’s to represent bits of information. A computer program then accesses a specific decoder to turn the numbers back into usable data. With nucleic acid memory, 0’s and 1’s are replaced with the nucleotides A, T, C and G. Known as monomers, they are covalently bonded to form longer polymer chains, also known as information strings.
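As a toy illustration of that substitution, the sketch below packs each byte into four bases, two bits per base. The particular pairing (00→A, 01→C, 10→G, 11→T) is an arbitrary choice made here for illustration; real nucleic acid memory encodings add error correction and avoid troublesome sequences such as long runs of a single base.

```python
# Toy 2-bits-per-base codec: binary data <-> nucleotide "information string".
# The 00->A, 01->C, 10->G, 11->T mapping is arbitrary and for illustration only.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):      # four 2-bit chunks per byte, high bits first
            bases.append(BASES[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(strand: str) -> bytes:
    out = bytearray()
    for i in range(0, len(strand), 4):  # every four bases reconstruct one byte
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

encoded = bytes_to_dna(b"NAM")
print(encoded)                          # CATGCAACCATC
assert dna_to_bytes(encoded) == b"NAM"  # round trip recovers the original bytes
```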

Because of DNA’s superior ability to store data, DNA can contain all the information in the world in a small box measuring 10 × 10 × 10 centimeters. NAM could thus be used as a sustainable time capsule for massive scientific, financial, governmental, historical, genealogical, personal and genetic records.
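That volume claim holds up to a rough order-of-magnitude check. Assuming an idealized density of about one cubic nanometer per base pair and two bits per base pair (theoretical figures of my own choosing, with no allowance for redundancy, indexing, or packaging), a 10 × 10 × 10 centimeter box works out to roughly 250 zettabytes, comfortably more than common estimates of the world’s stored data circa 2016.

```python
# Order-of-magnitude check: "all the world's data" in a 1,000 cm^3 box.
# Idealized assumptions: ~1 nm^3 per base pair, 2 bits per base pair,
# and no overhead for redundancy, indexing, or packaging.
NM3_PER_CM3 = 1e21                  # (1e7 nm per cm) cubed
bits_per_nm3 = 2.0                  # ~2 bits stored in ~1 nm^3 of DNA

box_volume_cm3 = 10 * 10 * 10       # a 10 x 10 x 10 cm box
capacity_bits = box_volume_cm3 * NM3_PER_CM3 * bits_per_nm3
capacity_zettabytes = capacity_bits / 8 / 1e21

print(f"~{capacity_zettabytes:.0f} ZB")  # ~250 ZB vs. tens of ZB of global data in 2016
```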

Better yet, DNA can store digital information for a very long time – thousands to millions of years. Usable information has already been extracted from DNA in bones that are 700,000 years old, making nucleic acid memory a promising archival material. And nucleic acid memory uses 100 million times less energy than storing data electronically in flash, and the data can live on for generations.

At Boise State, Hughes and Zadegan are examining DNA’s stability under extreme conditions. DNA strands are subjected to temperatures varying from negative 20 degrees Celsius to 100 degrees Celsius, and to a variety of UV exposures to see if they can still retain their information. What they’re finding is that much less information is lost with NAM than with the current state of the industry.

Here’s a link to and a citation for the Nature Materials paper,

Nucleic acid memory by Victor Zhirnov, Reza M. Zadegan, Gurtej S. Sandhu, George M. Church, & William L. Hughes. Nature Materials 15, 366–370 (2016) doi:10.1038/nmat4594 Published online 23 March 2016

This paper is behind a paywall.

Nano and a Unified Microbiome Initiative (UMI)

A Jan. 6, 2016 news item on Nanowerk features a proposal by US scientists for a Unified Microbiome Initiative (UMI),

In October [2015], an interdisciplinary group of scientists proposed forming a Unified Microbiome Initiative (UMI) to explore the world of microorganisms that are central to life on Earth and yet largely remain a mystery.

An article in the journal ACS Nano (“Tools for the Microbiome: Nano and Beyond”) describes the tools scientists will need to understand how microbes interact with each other and with us.

A Jan. 6, 2016 American Chemical Society (ACS) news release, which originated the news item, expands on the theme,

Microbes live just about everywhere: in the oceans, in the soil, in the atmosphere, in forests and in and on our bodies. Research has demonstrated that their influence ranges widely and profoundly, from affecting human health to the climate. But scientists don’t have the necessary tools to characterize communities of microbes, called microbiomes, and how they function. Rob Knight, Jeff F. Miller, Paul S. Weiss and colleagues detail what these technological needs are.

The researchers are seeking the development of advanced tools in bioinformatics, high-resolution imaging, and the sequencing of microbial macromolecules and metabolites. They say that such technology would enable scientists to gain a deeper understanding of microbiomes. Armed with new knowledge, they could then tackle related medical and other challenges with greater agility than is possible today.

Here’s a link to and a citation for the paper,

Tools for the Microbiome: Nano and Beyond by Julie S. Biteen, Paul C. Blainey, Zoe G. Cardon, Miyoung Chun, George M. Church, Pieter C. Dorrestein, Scott E. Fraser, Jack A. Gilbert, Janet K. Jansson, Rob Knight, Jeff F. Miller, Aydogan Ozcan, Kimberly A. Prather, Stephen R. Quake, Edward G. Ruby, Pamela A. Silver, Sharif Taha, Ger van den Engh, Paul S. Weiss, Gerard C. L. Wong, Aaron T. Wright, and Thomas D. Young. ACS Nano, Article ASAP DOI: 10.1021/acsnano.5b07826 Publication Date (Web): December 22, 2015

Copyright © 2015 American Chemical Society

This is an open access paper.

I sped through very quickly and found a couple of references to ‘nano’,

Ocean Microbiomes and Nanobiomes

Life in the oceans is supported by a community of extremely small organisms that can be called a “nanobiome.” These nanoplankton particles, many of which measure less than 0.001× the volume of a white blood cell, harvest solar and chemical energy and channel essential elements into the food chain. A deep network of larger life forms (humans included) depends on these tiny microbes for its energy and chemical building blocks.

The importance of the oceanic nanobiome has only recently begun to be fully appreciated. Two dominant forms, Synechococcus and Prochlorococcus, were not discovered until the 1980s and 1990s.(32-34) Prochlorococcus has now been demonstrated to be so abundant that it may account for as much as 10% of the world’s living organic carbon. The organism divides on a diel cycle while maintaining constant numbers, suggesting that about 5% of the world’s biomass flows through this species on a daily basis.(35-37)

Metagenomic studies show that many other less abundant life forms must exist but elude direct observation because they can neither be isolated nor grown in culture.

The small sizes of these organisms (and their genomes) indicate that they are highly specialized and optimized. Metagenome data indicate a large metabolic heterogeneity within the nanobiome. Rather than combining all life functions into a single organism, the nanobiome works as a network of specialists that can only exist as a community, therein explaining their resistance to being cultured. The detailed composition of the network is the result of interactions between the organisms themselves and the local physical and chemical environment. There is thus far little insight into how these networks are formed and how they maintain steady-state conditions in the turbulent natural ocean environment.

The serendipitous discovery of Prochlorococcus happened by applying flow cytometry (developed as a medical technique for counting blood cells) to seawater.(34) With these medical instruments, the faint signals from nanoplankton can only be seen with great difficulty against noisy backgrounds. Currently, a small team is adapting flow cytometric technology to improve the capabilities for analyzing individual nanoplankton particles. The latest generation of flow cytometers enables researchers to count and to make quantitative observations of most of the small life forms (including some viruses) that comprise the nanobiome. To our knowledge, there are only two well-equipped mobile flow cytometry laboratories that are regularly taken to sea for real-time observations of the nanobiome. The laboratories include equipment for (meta)genome analysis and equipment to correlate the observations with the local physical parameters and (nutrient) chemistry in the ocean. Ultimately, integration of these measurements will be essential for understanding the complexity of the oceanic microbiome.

The ocean is tremendously undersampled. Ship time is costly and limited. Ultimately, inexpensive, automated, mobile biome observatories will require methods that integrate microbiome and nanobiome measurements with (meta)genomics analyses and with local geophysical and geochemical parameters.(38-42) To appreciate how the individual components of the ocean biome are related and work together, a more complete picture must be established.

The marine environment consists of stratified zones, each with a unique, characteristic biome.(43) The sunlit waters near the surface are mixed by wind action. Deeper waters may be mixed only occasionally by passing storms. The dark deepest layers are stabilized by temperature/salinity density gradients. Organic material from the photosynthetically active surface descends into the deep zone, where it decomposes into nutrients that are mixed with compounds that are released by volcanic and seismic action. These nutrients diffuse upward to replenish the depleted surface waters. The biome is stratified accordingly, sometimes with sudden transitions on small scales. Photo-autotrophs dominate near the surface. Chemo-heterotrophs populate the deep. The makeup of the microbial assemblages is dictated by the local nutrient and oxygen concentrations. The spatiotemporal interplay of these systems is highly relevant to such issues as the carbon budget of the planet but remains little understood.

And then, there was this,

Nanoscience and Nanotechnology Opportunities

The great advantage of nanoscience and nanotechnology in studying microbiomes is that the nanoscale is the scale of function in biology. It is this convergence of scales at which we can “see” and at which we can fabricate that heralds the contributions that can be made by developing new nanoscale analysis tools.(159-168) Microbiomes operate from the nanoscale up to much larger scales, even kilometers, so crossing these scales will pose significant challenges to the field, in terms of measurement, stimulation/response, informatics, and ultimately understanding.

Some progress has been made in creating model systems(143-145, 169-173) that can be used to develop tools and methods. In these cases, the tools can be brought to bear on more complex and real systems. Just as nanoscience began with the ability to image atoms and progressed to the ability to manipulate structures both directly and through guided interactions,(162, 163, 174-176) it has now become possible to control structure, materials, and chemical functionality from the submolecular to the centimeter scales simultaneously. Whereas substrates and surface functionalization have often been tailored to be resistant to bioadhesion, deliberate placement of chemical patterns can also be used for the growth and patterning of systems, such as biofilms, to be put into contact with nanoscale probes.(177-180) Such methods in combination with the tools of other fields (vide infra) will provide the means to probe and to understand microbiomes.

Key tools for the microbiome will need to be miniaturized and made parallel. These developments will leverage decades of work in nanotechnology in the areas of nanofabrication,(181) imaging systems,(182, 183) lab-on-a-chip systems,(184) control of biological interfaces,(185) and more. Commercialized and commoditized tools, such as smart phone cameras, can also be adapted for use (vide infra). By guiding the development and parallelization of these tools, increasingly complex microbiomes will be opened for study.(167)

Imaging and sensing, in general, have been enjoying a Renaissance over the past decades, and there are various powerful measurement techniques that are currently available, making the Microbiome Initiative timely and exciting from the broad perspective of advanced analysis techniques. Recent advances in various -omics technologies, electron microscopy, optical microscopy/nanoscopy and spectroscopy, cytometry, mass spectroscopy, atomic force microscopy, nuclear imaging, and other techniques, create unique opportunities for researchers to investigate a wide range of questions related to microbiome interactions, function, and diversity. We anticipate that some of these advanced imaging, spectroscopy, and sensing techniques, coupled with big data analytics, will be used to create multimodal and integrated smart systems that can shed light onto some of the most important needs in microbiome research, including (1) analyzing microbial interactions specifically and sensitively at the relevant spatial and temporal scales; (2) determining and analyzing the diversity covered by the microbial genome, transcriptome, proteome, and metabolome; (3) managing and manipulating microbiomes to probe their function, evaluating the impact of interventions and ultimately harnessing their activities; and (4) helping us identify and track microbial dark matter (referring to 99% of micro-organisms that cannot be cultured).

In this broad quest for creating next-generation imaging and sensing instrumentation to address the needs and challenges of microbiome-related research activities comprehensively, there are important issues that need to be considered, as discussed below.

The piece is extensive and quite interesting, if you have the time.

Using scientific methods and technology to explore living systems as artistic subjects: bioart

There is a fascinating set of stories about bioart designed to whet your appetite for more (*) in a Nov. 23, 2015 Cell Press news release on EurekAlert (Note: A link has been removed),

Joe Davis is an artist who works not only with paints or pastels, but also with genes and bacteria. In 1986, he collaborated with geneticist Dan Boyd to encode a symbol for life and femininity into an E. coli bacterium. The piece, called Microvenus, was the first artwork to use the tools and techniques of molecular biology. Since then, bioart has become one of several contemporary art forms (including reclamation art and nanoart) that apply scientific methods and technology to explore living systems as artistic subjects. A review of the field, published November 23, can be found in Trends in Biotechnology.

Bioart ranges from bacterial manipulation to glowing rabbits, cellular sculptures, and, in the case of Australian-British artist Nina Sellars, documentation of an ear prosthetic that was implanted onto fellow artist Stelarc’s arm. In the pursuit of creating art, practitioners have generated tools and techniques that have aided researchers, while sometimes crossing into controversy, for example by releasing invasive species into the environment. In blurring the lines between art and modern biology, bioart raises philosophical, societal, and environmental issues that challenge scientific thinking.

“Most people don’t know that bioart exists, but it can enable scientists to produce new ideas and give us opportunities to look differently at problems,” says author Ali K. Yetisen, who works at Harvard Medical School and the Wellman Center for Photomedicine, Massachusetts General Hospital. “At the same time there’s been a lot of ethical and safety concerns happening around bioart and artists who wanted to get involved in the past have made mistakes.”

Here’s a sample of Joe Davis’s work,

Caption: This photograph shows polyptych paintings by Joe Davis of his 28-mer Microvenus DNA molecule (2006 exhibition at the Athens School of Fine Arts, Greece). Credit: Courtesy of Joe Davis

The news release goes on to recount a brief history of bioart, which stretches back to 1928 and then further back into the 19th and 18th centuries,

In between experiments, Alexander Fleming would paint stick figures and landscapes on paper and in Petri dishes using bacteria. In 1928, after taking a brief hiatus from the lab, he noticed that portions of his “germ paintings” had been killed. The culprit was a mold, Penicillium, the source of penicillin, a discovery that would revolutionize medicine for decades to come.

In 1938, photographer Edward Steichen used a chemical to genetically alter and produce interesting variations in flowering delphiniums. This chemical, colchicine, would later be used by horticulturalists to produce desirable mutations in crops and ornamental plants.

In the late 18th and early 19th centuries, the arts and sciences moved away from traditionally shared interests and formed secular divisions that persisted well into the 20th century. “Appearance of environmental art in the 1970s brought about renewed awareness of special relationships between art and the natural world,” Yetisen says.

To demonstrate how we change landscapes, American sculptor Robert Smithson paved a hillside with asphalt, while Bulgarian artist Christo Javacheff (of Christo and Jeanne-Claude) surrounded barrier islands with bright pink fabric.

These pieces could sometimes be destructive, however, such as in Ten Turtles Set Free by German-born Hans Haacke. To draw attention to the excesses of the pet trade, he released what he thought were endangered tortoises back to their natural habitat in France, but he inadvertently released the wrong subspecies, thus compromising the genetic lineages of the endangered tortoises as the two varieties began to mate.

By the late 1900s, technological advances began to draw artists’ attention to biology, and by the 2000s, bioart began to take shape as an artistic identity. Following Joe Davis’ transgenic Microvenus came a miniaturized leather jacket made of skin cells, part of the Tissue Culture & Art Project (initiated in 1996) by the duo Oron Catts and Ionat Zurr. Other examples of bioart include: the use of mutant cacti to simulate the appearance of human hair in place of cactus spines by Laura Cinti of University College London’s C-Lab; the modification of butterfly wings for artistic purposes by Marta de Menezes of Portugal; and photographs of amphibian deformation by American Brandon Ballengée.

“Bioart encourages discussions about societal, philosophical, and environmental issues and can help enhance public understanding of advances in biotechnology and genetic engineering,” says co-author Ahmet F. Coskun, who works in the Division of Chemistry and Chemical Engineering at California Institute of Technology.

Life as a Bioartist

Today, Joe Davis is a research affiliate at MIT Biology and “Artist-Scientist” at the George Church Laboratory at Harvard–a place that fosters creativity and technological development around genetic engineering and synthetic biology. “It’s Oz, pure and simple,” Davis says. “The total amount of resources in this environment and the minds that are accessible, it’s like I come to the city of Oz every day.”

But it’s not a one-way street. “My particular lab depends on thinking outside the box and not dismissing things because they sound like science fiction,” says [George M.] Church, who is also part of the Wyss Institute for Biologically Inspired Engineering. “Joe is terrific at keeping us flexible and nimble in that regard.”

For example, Davis is working with several members of the Church lab to perform metagenomics analyses of the dust that accumulates at the bottom of money-counting machines. Another project involves genetically engineering silk worms to spin metallic gold–an homage to the fairy tale of Rumpelstiltskin.

“I collaborate with many colleagues on projects that don’t necessarily have direct scientific results, but they’re excited to pursue these avenues of inquiry that they might not or would not look into ordinarily – they might try to hide it, but a lot of scientists have poetic souls,” Davis says. “Art, like science, has to describe the whole world, and you can’t describe something you’re basically clueless about. The most exciting part of these activities is satiating overwhelming curiosity about everything around you.”

The number of bioartists is still small, Davis says, partly because of a lack of federal funding of the arts in general. Accessibility to the types of equipment bioartists want to experiment with can also be an issue. While Davis has partnered with labs over the past few decades, other artists affiliate themselves with community access laboratories that are run by do-it-yourself biologists. One way that universities can help is to create departmental-wide positions for bioartists to collaborate with scientists.

“In the past, there have been artists affiliated with departments in a very utilitarian way to produce figures or illustrations,” Church says. “Having someone like Joe stimulates our lab to come together in new ways and if we had more bioartists, I think thinking out of the box would be a more common thing.”

“In the era of genetic engineering, bioart will gain new meanings and annotations in social and scientific contexts,” says Yetisen. “Bioartists will surely take up new roles in science laboratories, but this will be subject to ethical criticism and controversy as a matter of course.”

Here’s a link to and a citation for the paper,

Bioart by Ali K. Yetisen, Joe Davis, Ahmet F. Coskun, George M. Church, & Seok Hyun Yun. Trends in Biotechnology, DOI: http://dx.doi.org/10.1016/j.tibtech.2015.09.011 Published online: November 23, 2015

This paper appears to be open access.

*Removed the word ‘featured’ on Dec. 1, 2015 at 1030 hours PDT.

Nanotechnology and the US mega science project: BAM (Brain Activity Map) and more

The Brain Activity Map (BAM) project received budgetary approval as of this morning, Apr. 2, 2013 (I first mentioned BAM in my Mar. 4, 2013 posting when approval seemed imminent). From the news item, Obama Announces Huge Brain-Mapping Project, written by Stephanie Pappas for Yahoo News (Note: Links have been removed),

 President Barack Obama announced a new research initiative this morning (April 2) to map the human brain, a project that will launch with $100 million in funding in 2014.

The Brain Activity Map (BAM) project, as it is called, has been in the planning stages for some time. In the June 2012 issue of the journal Neuron, six scientists outlined broad proposals for developing non-invasive sensors and methods to experiment on single cells in neural networks. This February, President Obama made a vague reference to the project in his State of the Union address, mentioning that it could “unlock the answers to Alzheimer’s.”

In March, the project’s visionaries outlined their final goals in the journal Science. They call for an extended effort, lasting several years, to develop tools for monitoring up to a million neurons at a time. The end goal is to understand how brain networks function.

“It could enable neuroscience to really get to the nitty-gritty of brain circuits, which is the piece that’s been missing from the puzzle,” Rafael Yuste, the co-director of the Kavli Institute for Brain Circuits at Columbia University, who is part of the group spearheading the project, told LiveScience in March. “The reason it’s been missing is because we haven’t had the techniques, the tools.” [Inside the Brain: A Journey Through Time]

Not all neuroscientists support the project, however, with some arguing that it lacks clear goals and may cannibalize funds for other brain research.

….

I believe the $100M mentioned for 2014 would be one installment in a series totaling up to $1B or more. In any event, it seems like a timely moment to comment on the communications campaign that has been waged on behalf of BAM. It reminds me a little of the campaign for graphene, which was waged in the buildup to the decision as to which two projects (in a field of six semi-finalists, later narrowed to four finalists) should each receive a 1-billion-euro FET (European Union Future and Emerging Technologies) research prize. It seemed to me, even a year or so before the decision, that graphene’s win was a foregone conclusion, but the organizers left nothing to chance and were relentless in their pursuit of attention and media coverage in the buildup to the final decision.

The most recent salvo in the BAM campaign was an attempt to link it with nanotechnology. A shrewd move, given that the US has spent well over $1B on the US National Nanotechnology Initiative (NNI) since it was first approved in 2000. Linking the two projects means the NNI can lend a little authority to the new project (subtext: we’ve supported a mega-project before and that was successful) while BAM can imbue the ageing NNI with some excitement.

Here’s more about nanotechnology and BAM from a Mar. 27, 2013 Spotlight article by Michael Berger on Nanowerk,

A comprehensive understanding of the brain remains an elusive, distant frontier. To arrive at a general theory of brain function would be an historic event, comparable to inferring quantum theory from huge sets of complex spectra and inferring evolutionary theory from vast biological field work. You might have heard about the proposed Brain Activity Map – a project that, like the Human Genome Project, will tap the hive mind of experts to make headway in the understanding of the field. Engineers and nanotechnologists will be needed to help build ever smaller devices for measuring the activity of individual neurons and, later, to control how those neurons function. Computer scientists will be called upon to develop methods for storing and analyzing the vast quantities of imaging and physiological data, and for creating virtual models for studying brain function. Neuroscientists will provide critical biological expertise to guide the research and interpret the results.

Berger goes on to highlight some of the ways nanotechnology-enabled devices could contribute to the effort. He draws heavily on a study published online Mar. 20, 2013 in ACS (American Chemical Society) Nano. Shockingly, the article is open access. Given that this is the first time I’ve come across an open access article in any of the American Chemical Society’s journals, I suspect that payment of some kind was involved in making this information freely available. (The practice of allowing researchers to pay more to guarantee open access to their research in journals that otherwise keep articles behind paywalls seems to be becoming more common.)

Here’s a citation and a link to the article about nanotechnology and BAM,

Nanotools for Neuroscience and Brain Activity Mapping by A. Paul Alivisatos, Anne M. Andrews, Edward S. Boyden, Miyoung Chun, George M. Church, Karl Deisseroth, John P. Donoghue, Scott E. Fraser, Jennifer Lippincott-Schwartz, Loren L. Looger, Sotiris Masmanidis, Paul L. McEuen, Arto V. Nurmikko, Hongkun Park, Darcy S. Peterka, Clay Reid, Michael L. Roukes, Axel Scherer, Mark Schnitzer, Terrence J. Sejnowski, Kenneth L. Shepard, Doris Tsao, Gina Turrigiano, Paul S. Weiss, Chris Xu, Rafael Yuste, and Xiaowei Zhuang. ACS Nano, 2013, 7 (3), pp 1850–1866 DOI: 10.1021/nn4012847 Publication Date (Web): March 20, 2013
Copyright © 2013 American Chemical Society

As these things go, it’s a readable article for people without a neuroscience education provided they don’t mind feeling a little confused from time to time. From Nanotools for Neuroscience and Brain Activity Mapping (Note: Footnotes and links removed),

The Brain Activity Mapping (BAM) Project (…) has three goals in terms of building tools for neuroscience capable of (…) measuring the activity of large sets of neurons in complex brain circuits, (…) computationally analyzing and modeling these brain circuits, and (…) testing these models by manipulating the activities of chosen sets of neurons in these brain circuits.

As described below, many different approaches can, and likely will, be taken to achieve these goals as neural circuits of increasing size and complexity are studied and probed.

The BAM project will focus both on dynamic voltage activity and on chemical neurotransmission. With an estimated 85 billion neurons, 100 trillion synapses, and 100 chemical neurotransmitters in the human brain,(…) this is a daunting task. Thus, the BAM project will start with model organisms, neural circuits (vide infra), and small subsets of specific neural circuits in humans.

Among the approaches that show promise for the required dynamic, parallel measurements are optical and electro-optical methods that can be used to sense neural cell activity such as Ca2+,(7) voltage,(…) and (already some) neurotransmitters;(…) electrophysiological approaches that sense voltages and some electrochemically active neurotransmitters;(…) next-generation photonics-based probes with multifunctional capabilities;(18) synthetic biology approaches for recording histories of function;(…) and nanoelectronic measurements of voltage and local brain chemistry.(…) We anticipate that tools developed will also be applied to glia and more broadly to nanoscale and microscale monitoring of metabolic processes.

Entirely new tools will ultimately be required both to study neurons and neural circuits with minimal perturbation and to study the human brain. These tools might include “smart”, active nanoscale devices embedded within the brain that report on neural circuit activity wirelessly and/or entirely new modalities of remote sensing of neural circuit dynamics from outside the body. Remarkable advances in nanoscience and nanotechnology thus have key roles to play in transduction, reporting, power, and communications.

One of the ultimate goals of the BAM project is that the knowledge acquired and tools developed will prove useful in the intervention and treatment of a wide variety of diseases of the brain, including depression, epilepsy, Parkinson’s, schizophrenia, and others. We note that tens of thousands of patients have already been treated with invasive (i.e., through the skull) treatments. [emphases mine] While we hope to reduce the need for such measures, greatly improved and more robust interfaces to the brain would impact effectiveness and longevity where such treatments remain necessary.

Perhaps not so coincidentally, there was this Mar. 29, 2013 news item on Nanowerk,

Some human cells forget to empty their trash bins, and when the garbage piles up, it can lead to Parkinson’s disease and other genetic and age-related disorders. Scientists don’t yet understand why this happens, and Rice University engineering researcher Laura Segatori is hoping to change that, thanks to a prestigious five-year CAREER Award from the National Science Foundation (NSF).

Segatori, Rice’s T.N. Law Assistant Professor of Chemical and Biomolecular Engineering and assistant professor of bioengineering and of biochemistry and cell biology, will use her CAREER grant to create a toolkit for probing the workings of the cellular processes that lead to accumulation of waste material and development of diseases, such as Parkinson’s and lysosomal storage disorders. Each tool in the kit will be a nanoparticle — a speck of matter about the size of a virus — with a specific shape, size and charge. [emphases mine] By tailoring each of these properties, Segatori’s team will create a series of specialized probes that can uncover the workings of a cellular process called autophagy.

“Eventually, once we understand how to design a nanoparticle to activate autophagy, we will use it as a tool to learn more about the autophagic process itself because there are still many question marks in biology regarding how this pathway works,” Segatori said. “It’s not completely clear how it is regulated. It seems that excessive autophagy may activate cell death, but it’s not yet clear. In short, we are looking for more than therapeutic applications. We are also hoping to use these nanoparticles as tools to study the basic science of autophagy.”

There is no direct reference to BAM but there are some intriguing correspondences.

Finally, there is no mention of nanotechnology in this radio broadcast/podcast and transcript, but it does provide more information about BAM (for many folks this was the first time they’d heard about the project) and the hopes and concerns the project raises, while linking it to the Human Genome Project. From the Mar. 31, 2013 posting of a transcript and radio podcast (KERA News, a National Public Radio station) titled, Somewhere Over the Rainbow: The Journey to Map the Human Brain,

During the State of the Union, President Obama said the nation is about to embark on an ambitious project: to examine the human brain and create a road map to the trillions of connections that make it work.

“Every dollar we invested to map the human genome returned $140 to our economy — every dollar,” the president said. “Today, our scientists are mapping the human brain to unlock the answers to Alzheimer’s.”

Details of the project have slowly been leaking out: $3 billion, 10 years of research and hundreds of scientists. The National Institutes of Health is calling it the Brain Activity Map.

Obama isn’t the first to tout the benefits of a huge government science project. But can these projects really deliver? And what is mapping the human brain really going to get us?

Whether one wants to call it a public relations campaign or a marketing campaign is irrelevant. Science does not take place in an environment where data and projects are considered dispassionately. Enormous amounts of money are spent to sway public opinion and policymakers’ decisions.

ETA Apr. 3, 2013: Here are more stories about BAM and the announcement:

BRAIN Initiative Launched to Unlock Mysteries of Human Mind

Obama’s BRAIN Only 1/13 The Size Of Europe’s

BRAIN Initiative Builds on Efforts of Leading Neuroscientists and Nanotechnologists