Tag Archives: DARPA

Funding trends for US synthetic biology efforts

Less than 1% of total US federal funding for synthetic biology is dedicated to risk research according to a Sept. 16, 2015 Woodrow Wilson International Center for Scholars news release on EurekAlert,

A new analysis by the Synthetic Biology Project at the Wilson Center finds the Defense Department and its Defense Advanced Research Projects Agency (DARPA) fund much of the U.S. government’s research in synthetic biology, with less than 1 percent of total federal funding going to risk research.

The report, U.S. Trends in Synthetic Biology Research, finds that between 2008 and 2014, the United States invested approximately $820 million dollars in synthetic biology research. In that time period, the Defense Department became a key funder of synthetic biology research. DARPA’s investments, for example, increased from near zero in 2010 to more than $100 million in 2014 – more than three times the amount spent by the National Science Foundation (NSF).

The Wilson Center news release can also be found here on the Center’s report publication page where it goes on to provide more detail and where you can download the report,

The report, U.S. Trends in Synthetic Biology Research, finds that between 2008 and 2014, the United States invested approximately $820 million dollars in synthetic biology research. In that time period, the Defense Department became a key funder of synthetic biology research. DARPA’s investments, for example, increased from near zero in 2010 to more than $100 million in 2014 – more than three times the amount spent by the National Science Foundation (NSF).

“The increase in DARPA research spending comes as NSF is winding down its initial investment in the Synthetic Biology Engineering Research Center, or SynBERC,” says Dr. Todd Kuiken, senior program associate with the project. “After the SynBERC funding ends next year, it is unclear if there will be a dedicated synthetic biology research program outside of the Pentagon. There is also little investment addressing potential risks and ethical issues, which can affect public acceptance and market growth as the field advances.”

The new study found that less than one percent of the total U.S. funding is focused on synthetic biology risk research and approximately one percent addresses ethical, legal, and social issues.

Internationally, research funding is increasing. Last year, research investments by the European Commission and research agencies in the United Kingdom exceeded non-defense spending in the United States, the report finds.

The research spending comes at a time of growing interest in synthetic biology, particularly surrounding the potential presented by new gene-editing techniques. Recent research by the industry group SynBioBeta indicated that, so far in 2015, synthetic biology companies raised half a billion dollars – more than the total investments in 2013 and 2014 combined.

In a separate Woodrow Wilson International Center for Scholars Sept. 16, 2015 announcement about the report, an upcoming event notice was included,

Save the date: On Oct. 7, 2015, the Synthetic Biology Project will be releasing a new report on synthetic biology and federal regulations. More details will be forthcoming, but the report release will include a noon event [EST] at the Wilson Center in Washington, DC.

I haven’t been able to find any more information about this proposed report launch but you may want to check the Synthetic Biology Project website for details as they become available. ETA Oct. 1, 2015: The new report titled: Leveraging Synthetic Biology’s Promise and Managing Potential Risk: Are We Getting It Right? will be launched on Oct. 15, 2015 according to an Oct. 1, 2015 notice,

As more applications based on synthetic biology come to market, are the existing federal regulations adequate to address the risks posed by this emerging technology?

Please join us for the release of our new report, Leveraging Synthetic Biology’s Promise and Managing Potential Risk: Are We Getting It Right? Panelists will discuss how synthetic biology applications would be regulated by the U.S. Coordinated Framework for Regulation of Biotechnology, how this would affect the market pathway of these applications and whether the existing framework will protect human health and the environment.

A light lunch will be served.


Lynn Bergeson, report author; Managing Partner, Bergeson & Campbell

David Rejeski, Director, Science and Technology Innovation Program

Thursday,October 15th, 2015
12:00pm – 2:00pm

6th Floor Board Room


Wilson Center
Ronald Reagan Building and
International Trade Center
One Woodrow Wilson Plaza
1300 Pennsylvania, Ave., NW
Washington, D.C. 20004

Phone: 202.691.4000


A pragmatic approach to alternatives to animal testing

Retitled and cross-posted from the June 30, 2015 posting (Testing times: the future of animal alternatives) on the International Innovation blog (a CORDIS-listed project dissemination partner for FP7 and H2020 projects).

Maryse de la Giroday explains how emerging innovations can provide much-needed alternatives to animal testing. She also shares highlights of the 9th World Congress on Alternatives to Animal Testing.

‘Guinea pigging’ is the practice of testing drugs that have passed in vitro and in vivo tests on healthy humans in a Phase I clinical trial. In fact, healthy humans can make quite a bit of money as guinea pigs. The practice is sufficiently well-entrenched that there is a magazine, Guinea Pig Zero, devoted to professionals. While most participants anticipate some unpleasant side effects, guinea pigging can sometimes be a dangerous ‘profession’.


One infamous incident highlighting the dangers of guinea pigging occurred in 2006 at Northwick Park Hospital outside London. Volunteers were offered £2,000 to participate in a Phase I clinical trial to test a prospective treatment – a monoclonal antibody designed for rheumatoid arthritis and multiple sclerosis. The drug, called TGN1412, caused catastrophic systemic organ failure in participants. All six individuals receiving the drug required hospital treatment. One participant reportedly underwent amputation of fingers and toes. Another reacted with symptoms comparable to John Merrick, the Elephant Man.

The root of the disaster lay in subtle immune system differences between humans and cynomolgus monkeys – the model animal tested prior to the clinical trial. The drug was designed for the CD28 receptor on T cells. The monkeys’ receptors closely resemble those found in humans. However, unlike these monkeys, humans have other immune cells that carry CD28. The trial participants received a starting dosage that was 0.2 per cent of what the monkeys received in their final tests, but failure to take these additional receptors into account meant a dosage that was supposed to occupy 10 per cent of the available CD28 receptors instead occupied 90 per cent. After the event, a Russian inventor purchased the commercial rights to the drug and renamed it TAB08. It has been further developed by Russian company, TheraMAB, and TAB08 is reportedly in Phase II clinical trials.


While animal testing has been a powerful and useful tool for determining safe usage for pharmaceuticals and other types of chemicals, it is also a cruel and imperfect practice. Moreover, it typically only predicts 30-60 per cent of human responses to new drugs. Nanotechnology and other emerging innovations present possibilities for reducing, and in some cases eliminating, the use of animal models.

People for the Ethical Treatment of Animals (PETA), still better known for its publicity stunts, maintains a webpage outlining a number of alternatives including in silico testing (computer modelling), and, perhaps most interestingly, human-on-a-chip and organoid (tissue engineering) projects.

Organ-on-a-chip projects use stem cells to create human tissues that replicate the functions of human organs. Discussions about human-on-a-chip activities – a phrase used to describe 10 interlinked organ chips – were a highlight of the 9th World Congress on Alternatives to Animal Testing held in Prague, Czech Republic, last year. One project highlighted at the event was a joint US National Institutes of Health (NIH), US Food and Drug Administration (FDA) and US Defense Advanced Research Projects Agency (DARPA) project led by Dan Tagle that claimed it would develop functioning human-on-a-chip by 2017. However, he and his team were surprisingly close-mouthed and provided few details making it difficult to assess how close they are to achieving their goal.

By contrast, Uwe Marx – Leader of the ‘Multi-Organ-Chip’ programme in the Institute of Biotechnology at the Technical University of Berlin and Scientific Founder of TissUse, a human-on-a-chip start-up company – claims to have sold two-organ chips. He also claims to have successfully developed a four-organ chip and that he is on his way to building a human-on-a-chip. Though these chips remain to be seen, if they are, they will integrate microfluidics, cultured cells and materials patterned at the nanoscale to mimic various organs, and will allow chemical testing in an environment that somewhat mirrors a human.

Another interesting alternative for animal testing is organoids – a feature in regenerative medicine that can function as test sites. Engineers based at Cornell University recently published a paper on their functional, synthetic immune organ. Inspired by the lymph node, the organoid is comprised of gelatin-based biomaterials, which are reinforced with silicate nanoparticles (to keep the tissue from melting when reaching body temperature) and seeded with cells allowing it to mimic the anatomical microenvironment of a lymphatic node. It behaves like its inspiration converting B cells to germinal centres which activate, mature and mutate antibody genes when the body is under attack. The engineers claim to be able to control the immune response and to outperform 2D cultures with their 3D organoid. If the results are reproducible, the organoid could be used to develop new therapeutics.

Maryse de la Giroday is a science communications consultant and writer.

Full disclosure: Maryse de la Giroday received transportation and accommodation for the 9th World Congress on Alternatives to Animal Testing from SEURAT-1, a European Union project, making scientific inquiries to facilitate the transition to animal testing alternatives, where possible.

ETA July 1, 2015: I would like to acknowledge more sources for the information in this article,


The guinea pigging term, the ‘professional aspect, the Northwick Park story, and the Guinea Pig Zero magazine can be found in Carl Elliot’s excellent 2006 story titled ‘Guinea-Pigging’ for New Yorker magazine.


Information about the drug used in the Northwick Park Hospital disaster, the sale of the rights to a Russian inventor, and the June 2015 date for the current Phase II clinical trials were found in this Wikipedia essay titled, TGN 1412.


Additional information about the renamed drug, TAB08 and its Phase II clinical trials was found on (a) a US government website for information on clinical trials, (b) in a Dec. 2014 (?) TheraMAB  advertisement in a Nature group magazine and a Jan. 2014 press release,




An April 2015 article (Experimental drug that injured UK volunteers resumes in human trials) by Owen Dyer for the British Medical Journal also mentioned the 2015 TheraMab Phase II clinical trials and provided information about the information about Macaque (cynomolgus) monkey tests.


BMJ 2015; 350 doi: http://dx.doi.org.proxy.lib.sfu.ca/10.1136/bmj.h1831 (Published 02 April 2015) Cite this as: BMJ 2015;350:h1831

A 2009 study by Christopher Horvath and Mark Milton somewhat contradicts the Dyer article’s contention that a species Macaque monkey was used as an animal model. (As the Dyer article is more recent and the Horvath/Milton analysis is more complex covering TGN 1412 in the context of other MAB drugs and their precursor tests along with specific TGN 1412 tests, I opted for the simple description.)

The TeGenero Incident [another name for the Northwick Park Accident] and the Duff Report Conclusions: A Series of Unfortunate Events or an Avoidable Event? by Christopher J. Horvath and Mark N. Milton. Published online before print February 24, 2009, doi: 10.1177/0192623309332986 Toxicol Pathol April 2009 vol. 37 no. 3 372-383


Philippa Roxbuy’s May 24, 2013 BBC news online article provided confirmation and an additional detail or two about the Northwick Park Hospital accident. It notes that other models, in addition to animal models, are being developed.


Anne Ju’s excellent June 10,2015 news release about the Cornell University organoid (synthetic immune organ) project was very helpful.


There will also be a magazine article in International Innovation, which will differ somewhat from the blog posting, due to editorial style and other requirements.

ETA July 22, 2015: I now have a link to the magazine article.

Entangling thousands of atoms

Quantum entanglement as an idea seems extraordinary to me like something from of the fevered imagination made possible only with certain kinds of hallucinogens. I suppose you could call theoretical physicists who’ve conceptualized entanglement a different breed as they don’t seem to need chemical assistance for their flights of fancy, which turn out to be reality. Researchers at MIT (Massachusetts Institute of Technology) and the University of Belgrade (Serbia) have entangled thousands of atoms with a single photon according to a March 26, 2015 news item on Nanotechnology Now,

Physicists from MIT and the University of Belgrade have developed a new technique that can successfully entangle 3,000 atoms using only a single photon. The results, published today in the journal Nature, represent the largest number of particles that have ever been mutually entangled experimentally.

The researchers say the technique provides a realistic method to generate large ensembles of entangled atoms, which are key components for realizing more-precise atomic clocks.

“You can make the argument that a single photon cannot possibly change the state of 3,000 atoms, but this one photon does — it builds up correlations that you didn’t have before,” says Vladan Vuletic, the Lester Wolfe Professor in MIT’s Department of Physics, and the paper’s senior author. “We have basically opened up a new class of entangled states we can make, but there are many more new classes to be explored.”

A March 26, 2015 MIT news release by Jennifer Chu (also on EurekAlert but dated March 25, 2015), which originated the news item, describes entanglement with particular attention to how it relates to atomic timekeeping,

Entanglement is a curious phenomenon: As the theory goes, two or more particles may be correlated in such a way that any change to one will simultaneously change the other, no matter how far apart they may be. For instance, if one atom in an entangled pair were somehow made to spin clockwise, the other atom would instantly be known to spin counterclockwise, even though the two may be physically separated by thousands of miles.

The phenomenon of entanglement, which physicist Albert Einstein once famously dismissed as “spooky action at a distance,” is described not by the laws of classical physics, but by quantum mechanics, which explains the interactions of particles at the nanoscale. At such minuscule scales, particles such as atoms are known to behave differently from matter at the macroscale.

Scientists have been searching for ways to entangle not just pairs, but large numbers of atoms; such ensembles could be the basis for powerful quantum computers and more-precise atomic clocks. The latter is a motivation for Vuletic’s group.

Today’s best atomic clocks are based on the natural oscillations within a cloud of trapped atoms. As the atoms oscillate, they act as a pendulum, keeping steady time. A laser beam within the clock, directed through the cloud of atoms, can detect the atoms’ vibrations, which ultimately determine the length of a single second.

“Today’s clocks are really amazing,” Vuletic says. “They would be less than a minute off if they ran since the Big Bang — that’s the stability of the best clocks that exist today. We’re hoping to get even further.”

The accuracy of atomic clocks improves as more and more atoms oscillate in a cloud. Conventional atomic clocks’ precision is proportional to the square root of the number of atoms: For example, a clock with nine times more atoms would only be three times as accurate. If these same atoms were entangled, a clock’s precision could be directly proportional to the number of atoms — in this case, nine times as accurate. The larger the number of entangled particles, then, the better an atomic clock’s timekeeping.

It seems weak lasers make big entanglements possible (from the news release),

Scientists have so far been able to entangle large groups of atoms, although most attempts have only generated entanglement between pairs in a group. Only one team has successfully entangled 100 atoms — the largest mutual entanglement to date, and only a small fraction of the whole atomic ensemble.

Now Vuletic and his colleagues have successfully created a mutual entanglement among 3,000 atoms, virtually all the atoms in the ensemble, using very weak laser light — down to pulses containing a single photon. The weaker the light, the better, Vuletic says, as it is less likely to disrupt the cloud. “The system remains in a relatively clean quantum state,” he says.

The researchers first cooled a cloud of atoms, then trapped them in a laser trap, and sent a weak laser pulse through the cloud. They then set up a detector to look for a particular photon within the beam. Vuletic reasoned that if a photon has passed through the atom cloud without event, its polarization, or direction of oscillation, would remain the same. If, however, a photon has interacted with the atoms, its polarization rotates just slightly — a sign that it was affected by quantum “noise” in the ensemble of spinning atoms, with the noise being the difference in the number of atoms spinning clockwise and counterclockwise.

“Every now and then, we observe an outgoing photon whose electric field oscillates in a direction perpendicular to that of the incoming photons,” Vuletic says. “When we detect such a photon, we know that must have been caused by the atomic ensemble, and surprisingly enough, that detection generates a very strongly entangled state of the atoms.”

Vuletic and his colleagues are currently using the single-photon detection technique to build a state-of-the-art atomic clock that they hope will overcome what’s known as the “standard quantum limit” — a limit to how accurate measurements can be in quantum systems. Vuletic says the group’s current setup may be a step toward developing even more complex entangled states.

“This particular state can improve atomic clocks by a factor of two,” Vuletic says. “We’re striving toward making even more complicated states that can go further.”

This research was supported in part by the National Science Foundation, the Defense Advanced Research Projects Agency, and the Air Force Office of Scientific Research.

Here’s a link to and a citation for the paper,

Entanglement with negative Wigner function of almost 3,000 atoms heralded by one photon by Robert McConnell, Hao Zhang, Jiazhong Hu, Senka Ćuk & Vladan Vuletić. Nature 519 439–442 (26 March 2015) doi:10.1038/nature14293 Published online 25 March 2015

This article is behind a paywall but there is a free preview via ReadCube Access.

This image illustrates the entanglement of a large number of atoms. The atoms, shown in purple, are shown mutually entangled with one another. Image: Christine Daniloff/MIT and Jose-Luis Olivares/MIT

This image illustrates the entanglement of a large number of atoms. The atoms, shown in purple, are shown mutually entangled with one another.
Image: Christine Daniloff/MIT and Jose-Luis Olivares/MIT

Self-organizing nanotubes and nonequilibrium systems provide insights into evolution and artificial life

If you’re interested in the second law of thermodynamics, this Feb. 10, 2015 news item on ScienceDaily provides some insight into the second law, self-organized systems, and evolution,

The second law of thermodynamics tells us that all systems evolve toward a state of maximum entropy, wherein all energy is dissipated as heat, and no available energy remains to do work. Since the mid-20th century, research has pointed to an extension of the second law for nonequilibrium systems: the Maximum Entropy Production Principle (MEPP) states that a system away from equilibrium evolves in such a way as to maximize entropy production, given present constraints.

Now, physicists Alexey Bezryadin, Alfred Hubler, and Andrey Belkin from the University of Illinois at Urbana-Champaign, have demonstrated the emergence of self-organized structures that drive the evolution of a non-equilibrium system to a state of maximum entropy production. The authors suggest MEPP underlies the evolution of the artificial system’s self-organization, in the same way that it underlies the evolution of ordered systems (biological life) on Earth. …

A Feb. 10, 2015 University of Illinois College of Engineering news release (also on EurekAlert), which originated the news item, provides more detail about the theory and the research,

MEPP may have profound implications for our understanding of the evolution of biological life on Earth and of the underlying rules that govern the behavior and evolution of all nonequilibrium systems. Life emerged on Earth from the strongly nonequilibrium energy distribution created by the Sun’s hot photons striking a cooler planet. Plants evolved to capture high energy photons and produce heat, generating entropy. Then animals evolved to eat plants increasing the dissipation of heat energy and maximizing entropy production.

In their experiment, the researchers suspended a large number of carbon nanotubes in a non-conducting non-polar fluid and drove the system out of equilibrium by applying a strong electric field. Once electrically charged, the system evolved toward maximum entropy through two distinct intermediate states, with the spontaneous emergence of self-assembled conducting nanotube chains.

In the first state, the “avalanche” regime, the conductive chains aligned themselves according to the polarity of the applied voltage, allowing the system to carry current and thus to dissipate heat and produce entropy. The chains appeared to sprout appendages as nanotubes aligned themselves so as to adjoin adjacent parallel chains, effectively increasing entropy production. But frequently, this self-organization was destroyed through avalanches triggered by the heating and charging that emanates from the emerging electric current streams. (…)

“The avalanches were apparent in the changes of the electric current over time,” said Bezryadin.

“Toward the final stages of this regime, the appendages were not destroyed during the avalanches, but rather retracted until the avalanche ended, then reformed their connection. So it was obvious that the avalanches correspond to the ‘feeding cycle’ of the ‘nanotube inset’,” comments Bezryadin.

In the second relatively stable stage of evolution, the entropy production rate reached maximum or near maximum. This state is quasi-stable in that there were no destructive avalanches.

The study points to a possible classification scheme for evolutionary stages and a criterium for the point at which evolution of the system is irreversible—wherein entropy production in the self-organizing subsystem reaches its maximum possible value. Further experimentation on a larger scale is necessary to affirm these underlying principals, but if they hold true, they will prove a great advantage in predicting behavioral and evolutionary trends in nonequilibrium systems.

The authors draw an analogy between the evolution of intelligent life forms on Earth and the emergence of the wiggling bugs in their experiment. The researchers note that further quantitative studies are needed to round out this comparison. In particular, they would need to demonstrate that their “wiggling bugs” can multiply, which would require the experiment be reproduced on a significantly larger scale.

Such a study, if successful, would have implications for the eventual development of technologies that feature self-organized artificial intelligence, an idea explored elsewhere by co-author Alfred Hubler, funded by the Defense Advanced Research Projects Agency [DARPA]. [emphasis mine]

“The general trend of the evolution of biological systems seems to be this: more advanced life forms tend to dissipate more energy by broadening their access to various forms of stored energy,” Bezryadin proposes. “Thus a common underlying principle can be suggested between our self-organized clouds of nanotubes, which generate more and more heat by reducing their electrical resistance and thus allow more current to flow, and the biological systems which look for new means to find food, either through biological adaptation or by inventing more technologies.

“Extended sources of food allow biological forms to further grow, multiply, consume more food and thus produce more heat and generate entropy. It seems reasonable to say that real life organisms are still far from the absolute maximum of the entropy production rate. In both cases, there are ‘avalanches’ or ‘extinction events’, which set back this evolution. Only if all free energy given by the Sun is consumed, by building a Dyson sphere for example, and converted into heat then a definitely stable phase of the evolution can be expected.”

“Intelligence, as far as we know, is inseparable from life,” he adds. “Thus, to achieve artificial life or artificial intelligence, our recommendation would be to study systems which are far from equilibrium, with many degrees of freedom—many building blocks—so that they can self-organize and participate in some evolution. The entropy production criterium appears to be the guiding principle of the evolution efficiency.”

I am fascinated

  • (a) because this piece took an unexpected turn onto the topic of artificial life/artificial intelligence,
  • (b) because of my longstanding interest in artificial life/artificial intelligence,
  • (c) because of the military connection, and
  • (d) because this is the first time I’ve come across something that provides a bridge from fundamental particles to nanoparticles.

Here’s a link to and a citation for the paper,

Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production by A. Belkin, A. Hubler, & A. Bezryadin. Scientific Reports 5, Article number: 8323 doi:10.1038/srep08323 Published 09 February 2015

Adding to my delight, this paper is open access.

See-through medical sensors from the University of Wisconsin-Madison

This is quite the week for see-through medical devices based on graphene. A second team has developed a transparent sensor which could allow scientists to make observations of brain activity that are now impossible, according to an Oct. 20, 2014 University of Wisconsin-Madison news release (also on EurekAlert),

Neural researchers study, monitor or stimulate the brain using imaging techniques in conjunction with implantable sensors that allow them to continuously capture and associate fleeting brain signals with the brain activity they can see.

However, it’s difficult to see brain activity when there are sensors blocking the view.

“One of the holy grails of neural implant technology is that we’d really like to have an implant device that doesn’t interfere with any of the traditional imaging diagnostics,” says Justin Williams, the Vilas Distinguished Achievement Professor of biomedical engineering and neurological surgery at UW-Madison. “A traditional implant looks like a square of dots, and you can’t see anything under it. We wanted to make a transparent electronic device.”

The researchers chose graphene, a material gaining wider use in everything from solar cells to electronics, because of its versatility and biocompatibility. And in fact, they can make their sensors incredibly flexible and transparent because the electronic circuit elements are only 4 atoms thick—an astounding thinness made possible by graphene’s excellent conductive properties. “It’s got to be very thin and robust to survive in the body,” says Zhenqiang (Jack) Ma, the Lynn H. Matthias Professor and Vilas Distinguished Achievement Professor of electrical and computer engineering at UW-Madison. “It is soft and flexible, and a good tradeoff between transparency, strength and conductivity.”

Drawing on his expertise in developing revolutionary flexible electronics, he, Williams and their students designed and fabricated the micro-electrode arrays, which—unlike existing devices—work in tandem with a range of imaging technologies. “Other implantable micro-devices might be transparent at one wavelength, but not at others, or they lose their properties,” says Ma. “Our devices are transparent across a large spectrum—all the way from ultraviolet to deep infrared.”

The transparent sensors could be a boon to neuromodulation therapies, which physicians increasingly are using to control symptoms, restore function, and relieve pain in patients with diseases or disorders such as hypertension, epilepsy, Parkinson’s disease, or others, says Kip Ludwig, a program director for the National Institutes of Health neural engineering research efforts. “Despite remarkable improvements seen in neuromodulation clinical trials for such diseases, our understanding of how these therapies work—and therefore our ability to improve existing or identify new therapies—is rudimentary.”

Currently, he says, researchers are limited in their ability to directly observe how the body generates electrical signals, as well as how it reacts to externally generated electrical signals. “Clear electrodes in combination with recent technological advances in optogenetics and optical voltage probes will enable researchers to isolate those biological mechanisms. This fundamental knowledge could be catalytic in dramatically improving existing neuromodulation therapies and identifying new therapies.”

The advance aligns with bold goals set forth in President Barack Obama’s BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative. Obama announced the initiative in April 2013 as an effort to spur innovations that can revolutionize understanding of the brain and unlock ways to prevent, treat or cure such disorders as Alzheimer’s and Parkinson’s disease, post-traumatic stress disorder, epilepsy, traumatic brain injury, and others.

The UW-Madison researchers developed the technology with funding from the Reliable Neural-Interface Technology program at the Defense Advanced Research Projects Agency.

While the researchers centered their efforts around neural research, they already have started to explore other medical device applications. For example, working with researchers at the University of Illinois-Chicago, they prototyped a contact lens instrumented with dozens of invisible sensors to detect injury to the retina; the UIC team is exploring applications such as early diagnosis of glaucoma.

Here’s an image of the see-through medical implant,

Caption: A blue light shines through a clear implantable medical sensor onto a brain model. See-through sensors, which have been developed by a team of University of Wisconsin Madison engineers, should help neural researchers better view brain activity. Credit: Justin Williams research group

Here’s a link to and a citation for the paper,

Graphene-based carbon-layered electrode array technology for neural imaging and optogenetic applications by Dong-Wook Park, Amelia A. Schendel, Solomon Mikael, Sarah K. Brodnick, Thomas J. Richner, Jared P. Ness, Mohammed R. Hayat, Farid Atry, Seth T. Frye, Ramin Pashaie, Sanitta Thongpang, Zhenqiang Ma, & Justin C. Williams. Nature Communications 5, Article number: 5258 DOI: 10.1038/ncomms6258 Published 20 October 2014

This is an open access paper.

DARPA (US Defense Advanced Research Projects Agency), which funds this work at the University of Wisconsin-Madison, has also provided an Oct. 20, 2014 news release (also published as an Oct. 27, 2014 news item on Nanowerk) describing this research from the military perspective, which may not be what you expect. First, here’s a description of the DARPA funding programme underwriting this research, from DARPA’s Reliable Neural-Interface Technology (RE-NET) webpage,

Advances in technology for military uniforms, body armor and equipment have saved countless lives of our servicemembers injured on the battlefield. Unfortunately, many of those survivors are seriously and permanently wounded, with unprecedented rates of limb loss and traumatic brain injury among our returning soldiers. This crisis has motivated great interest in the science of and technology for restoring sensorimotor functions lost to amputation and injury of the central nervous system. For a decade now, DARPA has been leading efforts aimed at ‘revolutionizing’ the state-of-the-art in prosthetic limbs, recently debuting two advanced mechatronic limbs for the upper extremity. These new devices are truly anthropomorphic and capable of performing dexterous manipulation functions that finally begin to approach the capabilities of natural limbs. However, in the absence of a high-bandwidth, intuitive interface for the user, these limbs will never achieve their full potential in improving the quality of life for the wounded soldiers who could benefit from this advanced technology.

DARPA created the Reliable Neural-Interface Technology (RE-NET) program in 2010 to directly address the need for high performance neural interfaces to control dexterous functions made possible with advanced prosthetic limbs.  Specifically, RE-NET seeks to develop the technologies needed to reliably extract information from the nervous system, and to do so at a scale and rate necessary to control many degree-of-freedom (DOF) machines, such as high-performance prosthetic limbs. Prior to the DARPA RE-NET program, all existing methods to extract neural control signals were inadequate for amputees to control high-performance prostheses, either because the level of extracted information was too low or the functional lifetime was too short. However, recent technological advances create new opportunities to solve both of these neural-interface problems. For example, it is now feasible to develop high-resolution peripheral neuromuscular interfaces that increase the amount of information obtained from the peripheral nervous system.  Furthermore, advances in cortical microelectrode technologies are extending the durability of neural signals obtained from the brain, making it possible to create brain-controlled prosthetics that remain useful over the full lifetime of the patient.

US Defense Advanced Research Projects Agency (DARPA) Atoms to Products webinar in September 2014

On Sept. 9, 2014 and Sept. 11, 2014, DARPA (US Defense Advanced Research Projects Agency) will hold identical webinars for proposers interested in the Atoms to Product (A2P) program (presumably they are expecting many, many proposers). (Thanks to James Lewis on the Foresight Institute’s Nanodot blog for his Sept. 1, 2014 posting about the webinars.)

An Aug. 22, 2014 DARPA news release offers details about the project and the webinars,

New program also seeks to develop revolutionary miniaturization and assembly methods that would work at scales 100,000 times smaller than current state-of-the-art technology

Many common materials exhibit different and potentially useful characteristics when fabricated at extremely small scales—that is, at dimensions near the size of atoms, or a few ten-billionths of a meter. These “atomic scale” or “nanoscale” properties include quantized electrical characteristics, glueless adhesion, rapid temperature changes, and tunable light absorption and scattering that, if available in human-scale products and systems, could offer potentially revolutionary defense and commercial capabilities. Two as-yet insurmountable technical challenges, however, stand in the way: Lack of knowledge of how to retain nanoscale properties in materials at larger scales, and lack of assembly capabilities for items between nanoscale and 100 microns—slightly wider than a human hair.

DARPA has created the Atoms to Product (A2P) program to help overcome these challenges. The program seeks to develop enhanced technologies for assembling atomic-scale pieces. It also seeks to integrate these components into materials and systems from nanoscale up to product scale in ways that preserve and exploit distinctive nanoscale properties.

“We want to explore new ways of putting incredibly tiny things together, with the goal of developing new miniaturization and assembly methods that would work at scales 100,000 times smaller than current state-of-the-art technology,” said John Main, DARPA program manager. “If successful, A2P could help enable creation of entirely new classes of materials that exhibit nanoscale properties at all scales. It could lead to the ability to miniaturize materials, processes and devices that can’t be miniaturized with current technology, as well as build three-dimensional products and systems at much smaller sizes.”

This degree of scaled assembly is common in nature, Main continued. “Plants and animals, for example, are effectively systems assembled from atomic- and molecular-scale components a million to a billion times smaller than the whole organism. We’re trying to lay a similar foundation for developing future materials and devices.”

To familiarize potential participants with the technical objectives of the A2P program, DARPA has scheduled identical Proposers Day webinars on Tuesday, September 9, 2014, and Thursday, September 11, 2014. Advance registration is required and closes on September 5, 2014, at 5:00 PM Eastern Time. Participants must register through the registration website: http://www.sa-meetings.com/A2PProposersDay.

The DARPA Special Notice announcing the Proposers’ Day webinars is available at http://go.usa.gov/mgKB. This announcement does not constitute a formal solicitation for proposals or abstracts and is issued solely for information and program planning purposes. The Special Notice is not a Request for Information (RFI); therefore, DARPA will accept no submissions against this announcement. DARPA expects to release a Broad Agency Announcement (BAA) with full technical details on A2P soon on the Federal Business Opportunities website (www.fbo.gov). For more information, please email DARPA-SN-14-61@darpa.mil.

Over the years I’ve come across several references to bottom-up engineering or manufacturing but it’s seemed more theoretical than real. I gather DARPA is hoping to make bottom-up manufacturing a reality.

TrueNorth, a brain-inspired chip architecture from IBM and Cornell University

For a Canadian, “true north” is invariably followed by “strong and free,” as sung in our national anthem; for many Canadians it is almost the only phrase remembered without hesitation. Consequently, some of the buzz surrounding the publication of a paper celebrating ‘TrueNorth’, a brain-inspired chip, is a bit disconcerting. Nonetheless, here is the latest IBM (in collaboration with Cornell University) news from an Aug. 8, 2014 news item on Nanowerk,

Scientists from IBM unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW—orders of magnitude less power than a modern microprocessor. A neurosynaptic supercomputer the size of a postage stamp that runs on the energy equivalent of a hearing-aid battery, this technology could transform science, technology, business, government, and society by enabling vision, audition, and multi-sensory applications.
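Those headline figures are worth a quick back-of-envelope check. The sketch below uses only the numbers quoted above (one million neurons, 256 million synapses, 46 billion synaptic operations per second per watt, and the 70 mW power draw):

```python
# Back-of-envelope arithmetic on the TrueNorth figures quoted above.
neurons = 1_000_000
synapses = 256_000_000
ops_per_sec_per_watt = 46e9
power_w = 0.070  # 70 mW, as quoted

synapses_per_neuron = synapses / neurons         # average fan-in per neuron
throughput_ops = ops_per_sec_per_watt * power_w  # synaptic ops/s at 70 mW
energy_per_op_pj = 1e12 / ops_per_sec_per_watt   # picojoules per synaptic op

print(synapses_per_neuron)          # 256.0
print(round(throughput_ops / 1e9))  # ~3 billion synaptic ops per second
print(round(energy_per_op_pj, 1))   # 21.7 pJ per synaptic operation
```

In other words, the chip delivers roughly 3.2 billion synaptic operations per second at about 22 pJ each, which is what makes the hearing-aid-battery comparison plausible.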

An Aug. 7, 2014 IBM news release, which originated the news item, provides an overview of the multi-year process this breakthrough represents (Note: Links have been removed),

There is a huge disparity between the human brain’s cognitive capability and ultra-low power consumption when compared to today’s computers. To bridge the divide, IBM scientists created something that didn’t previously exist—an entirely new neuroscience-inspired scalable and efficient computer architecture that breaks path with the prevailing von Neumann architecture used almost universally since 1946.

This second generation chip is the culmination of almost a decade of research and development, including the initial single core hardware prototype in 2011 and software ecosystem with a new programming language and chip simulator in 2013.

The new cognitive chip architecture has an on-chip two-dimensional mesh network of 4096 digital, distributed neurosynaptic cores, where each core module integrates memory, computation, and communication, and operates in an event-driven, parallel, and fault-tolerant fashion. To enable system scaling beyond single-chip boundaries, adjacent chips, when tiled, can seamlessly connect to each other—building a foundation for future neurosynaptic supercomputers. To demonstrate scalability, IBM also revealed a 16-chip system with sixteen million programmable neurons and four billion programmable synapses.
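To make “event-driven” concrete, here is a deliberately tiny toy model in which neurons integrate incoming spike events and fire once a threshold is crossed, so computation happens only when events arrive. This illustrates the general idea only; TrueNorth’s actual neuron model, core layout, and routing are far more sophisticated, and every number below is invented:

```python
# Toy event-driven spiking network: work is done only when a spike arrives.
class Neuron:
    def __init__(self, threshold=1.0):
        self.potential = 0.0
        self.threshold = threshold

    def receive(self, weight):
        """Integrate an incoming spike; return True if this neuron fires."""
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False

neurons = [Neuron() for _ in range(4)]
# Explicit synapse table: source neuron -> [(target neuron, weight)]
synapses = {
    0: [(1, 0.5), (2, 0.5)],
    1: [(3, 0.5)],
    2: [(3, 0.5)],
    3: [],
}

events = [0, 0]  # two external spikes arriving at neuron 0's outputs
fired = []
while events:  # propagate spike events until the network is quiescent
    src = events.pop()
    for dst, weight in synapses[src]:
        if neurons[dst].receive(weight):
            fired.append(dst)
            events.append(dst)
print(fired)  # [1, 2, 3]
```

Because nothing updates between events, power in such a design scales with spike activity rather than with a global clock, which is the intuition behind the chip’s low power draw.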

“IBM has broken new ground in the field of brain-inspired computers, in terms of a radically new architecture, unprecedented scale, unparalleled power/area/speed efficiency, boundless scalability, and innovative design techniques. We foresee new generations of information technology systems – that complement today’s von Neumann machines – powered by an evolving ecosystem of systems, software, and services,” said Dr. Dharmendra S. Modha, IBM Fellow and IBM Chief Scientist, Brain-Inspired Computing, IBM Research. “These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand but without the need for Wi-Fi. This achievement underscores IBM’s leadership role at pivotal transformational moments in the history of computing via long-term investment in organic innovation.”

The Defense Advanced Research Projects Agency (DARPA) has funded the project since 2008 with approximately $53M via Phase 0, Phase 1, Phase 2, and Phase 3 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program. Current collaborators include Cornell Tech and iniLabs, Ltd.

Building the Chip

The chip was fabricated using Samsung’s 28nm process technology that has a dense on-chip memory and low-leakage transistors.

“It is an astonishing achievement to leverage a process traditionally used for commercially available, low-power mobile devices to deliver a chip that emulates the human brain by processing extreme amounts of sensory information with very little power,” said Shawn Han, vice president of Foundry Marketing, Samsung Electronics. “This is a huge architectural breakthrough that is essential as the industry moves toward the next-generation cloud and big-data processing. It’s a pleasure to be part of technical progress for next-generation through Samsung’s 28nm technology.”

The event-driven circuit elements of the chip used the asynchronous design methodology developed at Cornell Tech [aka Cornell University] and refined with IBM since 2008.

“After years of collaboration with IBM, we are now a step closer to building a computer similar to our brain,” said Professor Rajit Manohar, Cornell Tech.

The combination of cutting-edge process technology, hybrid asynchronous-synchronous design methodology, and new architecture has led to a power density of 20mW/cm2 which is nearly four orders of magnitude less than today’s microprocessors.
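That comparison is easy to sanity-check, assuming a typical modern microprocessor dissipates on the order of 100 W/cm²; note the 100 W/cm² figure is our assumption, not from the announcement:

```python
# Rough check of the power-density comparison. The 100 W/cm^2 figure for a
# modern microprocessor is an assumed ballpark, not taken from the article.
chip_mw_per_cm2 = 20.0            # TrueNorth, as quoted
cpu_mw_per_cm2 = 100.0 * 1000.0   # assumed 100 W/cm^2, expressed in mW/cm^2
ratio = cpu_mw_per_cm2 / chip_mw_per_cm2
print(ratio)  # 5000.0 -- between three and four orders of magnitude
```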

Advancing the SyNAPSE Ecosystem

The new chip is a component of a complete end-to-end vertically integrated ecosystem spanning a chip simulator, neuroscience data, supercomputing, neuron specification, programming paradigm, algorithms and applications, and prototype design models. The ecosystem supports all aspects of the programming cycle from design through development, debugging, and deployment.

To bring forth this fundamentally different technological capability to society, IBM has designed a novel teaching curriculum for universities, customers, partners, and IBM employees.

Applications and Vision

This ecosystem signals a shift in moving computation closer to the data, taking in vastly varied kinds of sensory data, analyzing and integrating real-time information in a context-dependent way, and dealing with the ambiguity found in complex, real-world environments.

Looking to the future, IBM is working on integrating multi-sensory neurosynaptic processing into mobile devices constrained by power, volume and speed; integrating novel event-driven sensors with the chip; real-time multimedia cloud services accelerated by neurosynaptic systems; and neurosynaptic supercomputers by tiling multiple chips on a board, creating systems that would eventually scale to one hundred trillion synapses and beyond.

Building on previously demonstrated neurosynaptic cores with on-chip, online learning, IBM envisions building learning systems that adapt in real world settings. While today’s hardware is fabricated using a modern CMOS process, the underlying architecture is poised to exploit advances in future memory, 3D integration, logic, and sensor technologies to deliver even lower power, denser package, and faster speed.

I have two articles that may prove of interest. Peter Stratton’s Aug. 7, 2014 article for The Conversation provides an easy-to-read introduction to both brains, human and computer (as they apply to this research), and to TrueNorth (h/t to phys.org, which also hosts Stratton’s article). There’s also an Aug. 7, 2014 article by Rob Farber for techenablement.com, which includes information from a range of text and video sources about TrueNorth and cognitive computing, as it’s also known (well worth checking out).

Here’s a link to and a citation for the paper,

A million spiking-neuron integrated circuit with a scalable communication network and interface by Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, and Dharmendra S. Modha. Science 8 August 2014: Vol. 345 no. 6197 pp. 668-673 DOI: 10.1126/science.1254642

This paper is behind a paywall.

Squishy but rigid robots from MIT (Massachusetts Institute of Technology)

A July 14, 2014 news item on ScienceDaily features MIT (Massachusetts Institute of Technology) robots that mimic mice and other biological constructs or, if you prefer, movie robots,

In the movie “Terminator 2,” the shape-shifting T-1000 robot morphs into a liquid state to squeeze through tight spaces or to repair itself when harmed.

Now a phase-changing material built from wax and foam, and capable of switching between hard and soft states, could allow even low-cost robots to perform the same feat.

The material — developed by Anette Hosoi, a professor of mechanical engineering and applied mathematics at MIT, and her former graduate student Nadia Cheng, alongside researchers at the Max Planck Institute for Dynamics and Self-Organization and Stony Brook University — could be used to build deformable surgical robots. The robots could move through the body to reach a particular point without damaging any of the organs or vessels along the way.

A July 14, 2014 MIT news release (also on EurekAlert), which originated the news item, describes the research further by referencing both octopuses and jello,

Working with robotics company Boston Dynamics, based in Waltham, Mass., the researchers began developing the material as part of the Chemical Robots program of the Defense Advanced Research Projects Agency (DARPA). The agency was interested in “squishy” robots capable of squeezing through tight spaces and then expanding again to move around a given area, Hosoi says — much as octopuses do.

But if a robot is going to perform meaningful tasks, it needs to be able to exert a reasonable amount of force on its surroundings, she says. “You can’t just create a bowl of Jell-O, because if the Jell-O has to manipulate an object, it would simply deform without applying significant pressure to the thing it was trying to move.”

What’s more, controlling a very soft structure is extremely difficult: It is much harder to predict how the material will move, and what shapes it will form, than it is with a rigid robot.

So the researchers decided that the only way to build a deformable robot would be to develop a material that can switch between a soft and hard state, Hosoi says. “If you’re trying to squeeze under a door, for example, you should opt for a soft state, but if you want to pick up a hammer or open a window, you need at least part of the machine to be rigid,” she says.

Compressible and self-healing

To build a material capable of shifting between squishy and rigid states, the researchers coated a foam structure in wax. They chose foam because it can be squeezed into a small fraction of its normal size, but once released will bounce back to its original shape.

The wax coating, meanwhile, can change from a hard outer shell to a soft, pliable surface with moderate heating. This could be done by running a wire along each of the coated foam struts and then applying a current to heat up and melt the surrounding wax. Turning off the current again would allow the material to cool down and return to its rigid state.
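The heat-to-soften, cool-to-stiffen cycle can be sketched with a first-order lumped thermal model. This is purely illustrative and not the MIT team’s design; every parameter value below (melting point, thermal resistance, heat capacity, drive power) is invented:

```python
# Illustrative lumped thermal model of one wax-coated strut: Joule-heat the
# embedded wire to melt the wax (soft state), cut the current to re-freeze
# it (rigid state). All parameter values are assumed, not from the article.
WAX_MELT_C = 60.0   # assumed melting point of the coating, deg C
AMBIENT_C = 20.0    # deg C
THERMAL_R = 50.0    # K/W, strut-to-ambient thermal resistance (assumed)
THERMAL_C = 0.5     # J/K, lumped heat capacity (assumed)
DT = 0.1            # s, simulation timestep

def simulate(power_w, seconds, temp_c=AMBIENT_C):
    """Step a first-order thermal model: C * dT/dt = P - (T - T_amb) / R."""
    for _ in range(int(seconds / DT)):
        heat_out = (temp_c - AMBIENT_C) / THERMAL_R
        temp_c += DT * (power_w - heat_out) / THERMAL_C
    return temp_c

def state(temp_c):
    return "soft" if temp_c >= WAX_MELT_C else "rigid"

# Drive the wire with 2 W for 30 s, then cool with the current off for 120 s.
t_hot = simulate(2.0, 30.0)
t_cold = simulate(0.0, 120.0, temp_c=t_hot)
print(state(t_hot), state(t_cold))  # soft rigid
```

The same heating step that softens the strut is what performs the self-healing described below: any fractured wax reflows while molten and re-solidifies on cooling.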

In addition to switching the material to its soft state, heating the wax in this way would also repair any damage sustained, Hosoi says. “This material is self-healing,” she says. “So if you push it too far and fracture the coating, you can heat it and then cool it, and the structure returns to its original configuration.”

To build the material, the researchers simply placed the polyurethane foam in a bath of melted wax. They then squeezed the foam to encourage it to soak up the wax, Cheng says. “A lot of materials innovation can be very expensive, but in this case you could just buy really low-cost polyurethane foam and some wax from a craft store,” she says.

In order to study the properties of the material in more detail, they then used a 3-D printer to build a second version of the foam lattice structure, to allow them to carefully control the position of each of the struts and pores.

When they tested the two materials, they found that the printed lattice was more amenable to analysis than the polyurethane foam, although the latter would still be fine for low-cost applications, Hosoi says.

The wax coating could also be replaced by a stronger material, such as solder, she adds.

Hosoi is now investigating the use of other unconventional materials for robotics, such as magnetorheological and electrorheological fluids. These materials consist of a liquid with particles suspended inside, and can be made to switch from a soft to a rigid state with the application of a magnetic or electric field.

When it comes to artificial muscles for soft and biologically inspired robots, we tend to think of controlling shape through bending or contraction, says Carmel Majidi, an assistant professor of mechanical engineering in the Robotics Institute at Carnegie Mellon University, who was not involved in the research. “But for a lot of robotics tasks, reversibly tuning the mechanical rigidity of a joint can be just as important,” he says. “This work is a great demonstration of how thermally controlled rigidity-tuning could potentially be used in soft robotics.”

Here’s a link to and a citation for the paper,

Thermally Tunable, Self-Healing Composites for Soft Robotic Applications by Nadia G. Cheng, Arvind Gopinath, Lifeng Wang, Karl Iagnemma, and Anette E. Hosoi. Macromolecular Materials and Engineering DOI: 10.1002/mame.201400017 Article first published online: 30 JUN 2014

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

US military wants you to remember

While this July 10, 2014 news item on ScienceDaily concerns DARPA, an implantable neural device, and the Lawrence Livermore National Laboratory (LLNL), it is a new project and not the one featured here in a June 18, 2014 posting titled: ‘DARPA (US Defense Advanced Research Projects Agency) awards funds for implantable neural interface’.

The new project as per the July 10, 2014 news item on ScienceDaily concerns memory,

The Department of Defense’s Defense Advanced Research Projects Agency (DARPA) awarded Lawrence Livermore National Laboratory (LLNL) up to $2.5 million to develop an implantable neural device with the ability to record and stimulate neurons within the brain to help restore memory, DARPA officials announced this week.

The research builds on the understanding that memory is a process in which neurons in certain regions of the brain encode information, store it and retrieve it. Certain types of illnesses and injuries, including Traumatic Brain Injury (TBI), Alzheimer’s disease and epilepsy, disrupt this process and cause memory loss. TBI, in particular, has affected 270,000 military service members since 2000.

A July 2, 2014 LLNL news release, which originated the news item, provides more detail,

The goal of LLNL’s work — driven by LLNL’s Neural Technology group and undertaken in collaboration with the University of California, Los Angeles (UCLA) and Medtronic — is to develop a device that uses real-time recording and closed-loop stimulation of neural tissues to bridge gaps in the injured brain and restore individuals’ ability to form new memories and access previously formed ones.

Specifically, the Neural Technology group will seek to develop a neuromodulation system — a sophisticated electronics system to modulate neurons — that will investigate areas of the brain associated with memory to understand how new memories are formed. The device will be developed at LLNL’s Center for Bioengineering.

“Currently, there is no effective treatment for memory loss resulting from conditions like TBI,” said LLNL’s project leader Satinderpall Pannu, director of the LLNL’s Center for Bioengineering, a unique facility dedicated to fabricating biocompatible neural interfaces. …

LLNL will develop a miniature, wireless and chronically implantable neural device that will incorporate both single neuron and local field potential recordings into a closed-loop system to implant into TBI patients’ brains. The device — implanted into the entorhinal cortex and hippocampus — will allow for stimulation and recording from 64 channels located on a pair of high-density electrode arrays. The entorhinal cortex and hippocampus are regions of the brain associated with memory.

The arrays will connect to an implantable electronics package capable of wireless data and power telemetry. An external electronic system worn around the ear will store digital information associated with memory storage and retrieval and provide power telemetry to the implantable package using a custom RF-coil system.
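The closed-loop concept can be sketched in a few lines: sample all 64 channels, flag activity crossing a threshold, and respond with stimulation on the flagged channels. The 64-channel count comes from the description above; the detector, threshold, and noise model are hypothetical stand-ins:

```python
# Hypothetical sketch of a closed-loop record -> detect -> stimulate cycle.
# Only the 64-channel count comes from the article; the detector, threshold,
# and noise model here are invented stand-ins.
import random

N_CHANNELS = 64
THRESHOLD_UV = 50.0  # assumed detection threshold, in microvolts

def read_channels(rng):
    """Stand-in for the implant's analog front end: one sample per channel."""
    return [rng.gauss(0.0, 20.0) for _ in range(N_CHANNELS)]

def detect(samples):
    """Return indices of channels whose amplitude crosses the threshold."""
    return [i for i, v in enumerate(samples) if abs(v) > THRESHOLD_UV]

def stimulate(channels):
    """Stand-in for targeted stimulation: one pulse command per channel."""
    return {ch: "pulse" for ch in channels}

rng = random.Random(1)
for _ in range(10):  # a few cycles of the closed loop
    active = detect(read_channels(rng))
    if active:
        commands = stimulate(active)
```

The point of closing the loop on the implant itself is latency: detection and the stimulation response happen without a round trip to external equipment, with the ear-worn unit handling only storage and power.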

Designed to last throughout the duration of treatment, the device’s electrodes will be integrated with electronics using advanced LLNL integration and 3D packaging technologies. The microelectrodes that are the heart of this device are embedded in a biocompatible, flexible polymer.

Using the Center for Bioengineering’s capabilities, Pannu and his team of engineers have achieved 25 patents and many publications during the last decade. The team’s goal is to build the new prototype device for clinical testing by 2017.

Lawrence Livermore’s collaborators, UCLA and Medtronic, will focus on conducting clinical trials and fabricating parts and components, respectively.

“The RAM [Restoring Active Memory] program poses a formidable challenge reaching across multiple disciplines from basic brain research to medicine, computing and engineering,” said Itzhak Fried, lead investigator for the UCLA on this project and  professor of neurosurgery and psychiatry and biobehavioral sciences at the David Geffen School of Medicine at UCLA and the Semel Institute for Neuroscience and Human Behavior. “But at the end of the day, it is the suffering individual, whether an injured member of the armed forces or a patient with Alzheimer’s disease, who is at the center of our thoughts and efforts.”

LLNL’s work on the Restoring Active Memory program supports [US] President [Barack] Obama’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative.

Obama’s BRAIN is picking up speed.

DARPA (US Defense Advanced Research Projects Agency) awards funds for implantable neural interface

I’m not a huge fan of neural implantable devices (at least not the ones that facilitate phone calls directly to and from the brain as per my April 30, 2010 posting; scroll down about 40% of the way) but they are important from a therapeutic perspective. On that  note, the Lawrence Livermore National Laboratory (LLNL) has received an award of $5.6M from the US Defense Advanced Research Projects Agency (DARPA) to advance their work on neural implantable interfaces. From a June 13, 2014 news item on Azonano,

Lawrence Livermore National Laboratory recently received $5.6 million from the Department of Defense’s Defense Advanced Research Projects Agency (DARPA) to develop an implantable neural interface with the ability to record and stimulate neurons within the brain for treating neuropsychiatric disorders.

The technology will help doctors to better understand and treat post-traumatic stress disorder (PTSD), traumatic brain injury (TBI), chronic pain and other conditions.

Several years ago, researchers at Lawrence Livermore in conjunction with Second Sight Medical Products developed the world’s first neural interface (an artificial retina) that was successfully implanted into blind patients to help partially restore their vision. The new neural device is based on similar technology used to create the artificial retina.

An LLNL June 11, 2014 news release, which originated the news item, provides some fascinating insight into the interrelations between various US programs focused on the brain and neural implants,

“DARPA is an organization that advances technology by leaps and bounds,” said LLNL’s project leader Satinderpall Pannu, director of the Lab’s Center for Micro- and Nanotechnology and Center for Bioengineering, a facility dedicated to fabricating biocompatible neural interfaces. “This DARPA program will allow us to develop a revolutionary device to help patients suffering from neuropsychiatric disorders and other neural conditions.”

The project is part of DARPA’s SUBNETS (Systems-Based Neurotechnology for Emerging Therapies) program. The agency is launching new programs to support President Obama’s BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative, a new research effort aimed to revolutionize our understanding of the human mind and uncover ways to treat, prevent and cure brain disorders.

LLNL and Medtronic are collaborating with UCSF, UC Berkeley, Cornell University, New York University, PositScience Inc. and Cortera Neurotechnologies on the DARPA SUBNETS project. Some collaborators will be developing the electronic components of the device, while others will be validating and characterizing it.

As part of its collaboration with LLNL, Medtronic will consult on the development of new technologies and provide its investigational Activa PC+S deep brain stimulation (DBS) system, which is the first to enable the sensing and recording of brain signals while simultaneously providing targeted DBS. This system has recently been made available to leading researchers for early-stage research and could lead to a better understanding of how various devastating neurological conditions develop and progress. The knowledge gained as part of this collaboration could lead to the next generation of advanced systems for treating neural disease.

As for what LLNL will contribute (from the news release),

The LLNL Neural Technology group will develop an implantable neural device with hundreds of electrodes by leveraging their thin-film neural interface technology, a more than tenfold increase over current Deep Brain Stimulation (DBS) devices. The electrodes will be integrated with electronics using advanced LLNL integration and 3D packaging technologies. The goal is to seal the electronic components in miniaturized, self-contained, wireless neural hardware. The microelectrodes that are the heart of this device are embedded in a biocompatible, flexible polymer.

Surgically implanted into the brain, the neural device is designed to help researchers understand the underlying dynamics of neuropsychiatric disorders and re-train neural networks to unlearn these disorders and restore proper function. The aim is for the device eventually to be removed from the patient, rather than leaving the patient dependent on it.

This image from LLNL illustrates their next generation neural implant,

This rendering shows the next generation neural device capable of recording and stimulating the human central nervous system being developed at Lawrence Livermore National Laboratory. The implantable neural interface will record from and stimulate neurons within the brain for treating neuropsychiatric disorders.

I expect there will be many more ‘brain’ projects to come with the advent of the US BRAIN initiative (funding of $100M in 2014 and $200M in 2015) and the European Union’s Human Brain Project (€1B to be spent on research over a 10-year period).