Tag Archives: DARPA

Nanotechnology and cybersecurity risks

Gregory Carpenter has written a gripping (albeit somewhat exaggerated) piece for Signal, a publication of the Armed Forces Communications and Electronics Association (AFCEA), about cybersecurity issues and nanomedicine endeavours. From Carpenter’s Jan. 1, 2016 article titled, When Lifesaving Technology Can Kill; The Cyber Edge,

The exciting advent of nanotechnology that has inspired disruptive and lifesaving medical advances is plagued by cybersecurity issues that could result in the deaths of people that these very same breakthroughs seek to heal. Unfortunately, nanorobotic technology has suffered from the same security oversights that afflict most other research and development programs.

Nanorobots, or small machines [or nanobots], are vulnerable to exploitation just like other devices.

At the moment, the issue of cybersecurity exploitation is secondary to making nanobots, or nanorobots, dependably functional. As far as I’m aware, there is no such nanobot. Even nanoparticles meant to function as packages for drug delivery have not been perfected (see one of the controversies with nanomedicine drug delivery described in my Nov. 26, 2015 posting).

That said, Carpenter’s point about cybersecurity is well taken since security features are often overlooked in new technology. For example, automated banking machines (ABMs) had woefully poor (inadequate, almost nonexistent) security when they were first introduced.

Carpenter outlines some of the problems that could occur, assuming some of the latest research could be reliably brought to market,

The U.S. military has joined the fray of nanorobotic experimentation, embarking on revolutionary research that could lead to a range of discoveries, from unraveling the secrets of how brains function to figuring out how to permanently purge bad memories. Academia is making amazing advances as well. Harnessing progress by Harvard scientists to move nanorobots within humans, researchers at the University of Montreal, Polytechnique Montreal and Centre Hospitalier Universitaire Sainte-Justine are using mobile nanoparticles inside the human brain to open the blood-brain barrier, which protects the brain from toxins found in the circulatory system.

A different type of technology presents a risk similar to the nanoparticles scenario. A DARPA-funded program known as Restoring Active Memory (RAM) addresses post-traumatic stress disorder, attempting to overcome memory deficits by developing neuroprosthetics that bridge gaps in an injured brain. In short, scientists can wipe out a traumatic memory, and they hope to insert a new one—one the person has never actually experienced. Someone could relish the memory of a stroll along the French Riviera rather than a terrible firefight, even if he or she has never visited Europe.

As an individual receives a disruptive memory, a cyber criminal could manage to hack the controls. Breaches of the brain could become a reality, putting humans at risk of becoming zombie hosts [emphasis mine] for future virus deployments. …

At this point, the ‘zombie’ scenario Carpenter suggests seems a bit over-the-top but it does hearken back to the roots of the zombie myth, where the undead aren’t mindlessly searching for brains but are humans whose wills have been overcome. Mike Mariani, in an Oct. 28, 2015 article for The Atlantic, has presented a thought-provoking history of zombies,

… the zombie myth is far older and more rooted in history than the blinkered arc of American pop culture suggests. It first appeared in Haiti in the 17th and 18th centuries, when the country was known as Saint-Domingue and ruled by France, which hauled in African slaves to work on sugar plantations. Slavery in Saint-Domingue under the French was extremely brutal: Half of the slaves brought in from Africa were worked to death within a few years, which only led to the capture and import of more. In the hundreds of years since, the zombie myth has been widely appropriated by American pop culture in a way that whitewashes its origins—and turns the undead into a platform for escapist fantasy.

The original brains-eating fiend was a slave not to the flesh of others but to his own. The zombie archetype, as it appeared in Haiti and mirrored the inhumanity that existed there from 1625 to around 1800, was a projection of the African slaves’ relentless misery and subjugation. Haitian slaves believed that dying would release them back to lan guinée, literally Guinea, or Africa in general, a kind of afterlife where they could be free. Though suicide was common among slaves, those who took their own lives wouldn’t be allowed to return to lan guinée. Instead, they’d be condemned to skulk the Hispaniola plantations for eternity, an undead slave at once denied their own bodies and yet trapped inside them—a soulless zombie.

I recommend reading Mariani’s article although I do have one nit to pick. I can’t find a reference to brain-eating zombies until George Romero’s introduction of the concept in his movies. This Zombie Wikipedia entry seems to be in agreement with my understanding (if I’m wrong, please do let me know and, if possible, provide a link to the corrective text).

Getting back to Carpenter and cybersecurity with regard to nanomedicine, while his scenarios may seem a trifle extreme it’s precisely the kind of thinking you need when attempting to anticipate problems. I do wish he’d made clear that the technology still has a ways to go.

DARPA (US Defense Advanced Research Projects Agency) ‘Atoms to Product’ program launched

It took over a year after announcing the ‘Atoms to Product’ program in 2014 for DARPA (US Defense Advanced Research Projects Agency) to select 10 proponents for three projects. Before moving onto the latest announcement, here’s a description of the ‘Atoms to Product’ program from its Aug. 27, 2014 announcement on Nanowerk,

Many common materials exhibit different and potentially useful characteristics when fabricated at extremely small scales—that is, at dimensions near the size of atoms, or a few ten-billionths of a meter. These “atomic scale” or “nanoscale” properties include quantized electrical characteristics, glueless adhesion, rapid temperature changes, and tunable light absorption and scattering that, if available in human-scale products and systems, could offer potentially revolutionary defense and commercial capabilities. Two as-yet insurmountable technical challenges, however, stand in the way: Lack of knowledge of how to retain nanoscale properties in materials at larger scales, and lack of assembly capabilities for items between nanoscale and 100 microns—slightly wider than a human hair.

DARPA has created the Atoms to Product (A2P) program to help overcome these challenges. The program seeks to develop enhanced technologies for assembling atomic-scale pieces. It also seeks to integrate these components into materials and systems from nanoscale up to product scale in ways that preserve and exploit distinctive nanoscale properties.

DARPA’s Atoms to Product (A2P) program seeks to develop enhanced technologies for assembling nanoscale items, and integrating these components into materials and systems from nanoscale up to product scale in ways that preserve and exploit distinctive nanoscale properties.

A Dec. 29, 2015 news item on Nanowerk features the latest about the project,

DARPA recently selected 10 performers to tackle this challenge: Zyvex Labs, Richardson, Texas; SRI, Menlo Park, California; Boston University, Boston, Massachusetts; University of Notre Dame, South Bend, Indiana; HRL Laboratories, Malibu, California; PARC, Palo Alto, California; Embody, Norfolk, Virginia; Voxtel, Beaverton, Oregon; Harvard University, Cambridge, Massachusetts; and Draper Laboratory, Cambridge, Massachusetts.

A Dec. 29, 2015 DARPA news release, which originated the news item, offers more information and an image illustrating the type of advances already made by one of the successful proponents,

DARPA recently launched its Atoms to Product (A2P) program, with the goal of developing technologies and processes to assemble nanometer-scale pieces—whose dimensions are near the size of atoms—into systems, components, or materials that are at least millimeter-scale in size. At the heart of that goal was a frustrating reality: Many common materials, when fabricated at nanometer-scale, exhibit unique and attractive “atomic-scale” behaviors including quantized current-voltage behavior, dramatically lower melting points and significantly higher specific heats—but they tend to lose these potentially beneficial traits when they are manufactured at larger “product-scale” dimensions, typically on the order of a few centimeters, for integration into devices and systems.

“The ability to assemble atomic-scale pieces into practical components and products is the key to unlocking the full potential of micromachines,” said John Main, DARPA program manager. “The DARPA Atoms to Product Program aims to bring the benefits of microelectronic-style miniaturization to systems and products that combine mechanical, electrical, and chemical processes.”

The program calls for closing the assembly gap in two steps: From atoms to microns and from microns to millimeters. Performers are tasked with addressing one or both of these steps and have been assigned to one of three working groups, each with a distinct focus area.


Image caption: Microscopic tools such as this nanoscale “atom writer” can be used to fabricate minuscule light-manipulating structures on surfaces. DARPA has selected 10 performers for its Atoms to Product (A2P) program whose goal is to develop technologies and processes to assemble nanometer-scale pieces—whose dimensions are near the size of atoms—into systems, components, or materials that are at least millimeter-scale in size. (Image credit: Boston University)

Here’s more about the projects and the performers (proponents) from the A2P performers page on the DARPA website,

Nanometer to Millimeter in a Single System – Embody, Draper and Voxtel

Current methods to treat ligament injuries in warfighters [also known as, soldiers]—which account for a significant portion of reported injuries—often fail to restore pre-injury performance, due to surgical complexities and an inadequate supply of donor tissue. Embody is developing reinforced collagen nanofibers that mimic natural ligaments and replicate the biological and biomechanical properties of native tissue. Embody aims to create a new standard of care and restore pre-injury performance for warfighters and sports injury patients at a 50% reduction compared to current costs.

Radio Frequency (RF) systems (e.g., cell phones, GPS) have performance limits due to alternating current loss. In lower frequency power systems this is addressed by braiding the wires, but this is not currently possible in cell phones due to an inability to manufacture sufficiently small braided wires. Draper is developing submicron wires that can be braided using DNA self-assembly methods. If successful, portable RF systems will be more power efficient and able to send 10 times more information in a given channel.

For seamless control of structures, physics and surface chemistry—from the atomic-level to the meter-level—Voxtel Inc. and partner Oregon State University are developing an efficient, high-rate, fluid-based manufacturing process designed to imitate nature’s ability to manufacture complex multimaterial products across scales. Historically, challenges relating to the cost of atomic-level control, production speed, and printing capability have been effectively insurmountable. This team’s new process will combine synthesis and delivery of materials into a massively parallel inkjet operation that draws from nature to achieve a DNA-like mediated assembly. The goal is to assemble complex, 3-D multimaterial mixed organic and inorganic products quickly and cost-effectively—directly from atoms.

Optical Metamaterial Assembly – Boston University, University of Notre Dame, HRL and PARC.

Nanoscale devices have demonstrated nearly unlimited power and functionality, but there hasn’t been a general-purpose, high-volume, low-cost method for building them. Boston University is developing an atomic calligraphy technique that can spray paint atoms with nanometer precision to build tunable optical metamaterials for the photonic battlefield. If successful, this capability could enhance the survivability of a wide range of military platforms, providing advanced camouflage and other optical illusions in the visual range much as stealth technology has enabled in the radar range.

The University of Notre Dame is developing massively parallel nanomanufacturing strategies to overcome the requirement today that most optical metamaterials must be fabricated in “one-off” operations. The Notre Dame project aims to design and build optical metamaterials that can be reconfigured to rapidly provide on-demand, customized optical capabilities. The aim is to use holographic traps to produce optical “tiles” that can be assembled into a myriad of functional forms and further customized by single-atom electrochemistry. Integrating these materials on surfaces and within devices could provide both warfighters and platforms with transformational survivability.

HRL Laboratories is working on a fast, scalable and material-agnostic process for improving infrared (IR) reflectivity of materials. Current IR-reflective materials have limited use, because reflectivity is highly dependent on the specific angle at which light hits the material. HRL is developing a technique for allowing tailorable infrared reflectivity across a variety of materials. If successful, the process will enable manufacturable materials with up to 98% IR reflectivity at all incident angles.

PARC is working on building the first digital MicroAssembly Printer, where the “inks” are micrometer-size particles and the “image” outputs are centimeter-scale and larger assemblies. The goal is to print smart materials with the throughput and cost of laser printers, but with the precision and functionality of nanotechnology. If successful, the printer would enable the short-run production of large, engineered, customized microstructures, such as metamaterials with unique responses for secure communications, surveillance and electronic warfare.

Flexible, General Purpose Assembly – Zyvex, SRI, and Harvard.

Zyvex aims to create nano-functional micron-scale devices using customizable and scalable manufacturing that is top-down and atomically precise. These high-performance electronic, optical, and nano-mechanical components would be assembled by SRI micro-robots into fully-functional devices and sub-systems such as ultra-sensitive sensors for threat detection, quantum communication devices, and atomic clocks the size of a grain of sand.

SRI’s Levitated Microfactories will seek to combine the precision of MEMS [micro-electromechanical systems] flexures with the versatility and range of pick-and-place robots and the scalability of swarms [an idea Michael Crichton used in his 2002 novel Prey to induce horror] to assemble and electrically connect micron and millimeter components to build stronger materials, faster electronics, and better sensors.

Many high-impact, minimally invasive surgical techniques are currently performed only by elite surgeons due to the lack of tactile feedback at such small scales relative to what is experienced during conventional surgical procedures. Harvard is developing a new manufacturing paradigm for millimeter-scale surgical tools using low-cost 2D layer-by-layer processes and assembly by folding, resulting in arbitrarily complex meso-scale 3D devices. The goal is for these novel tools to restore the necessary tactile feedback and thereby nurture a new degree of dexterity to perform otherwise demanding micro- and minimally invasive surgeries, and thus expand the availability of life-saving procedures.


‘Sidebar’ is my way of indicating these comments have little to do with the matter at hand but could be interesting factoids for you.

First, Zyvex Labs was last mentioned here in a Sept. 10, 2014 posting titled: OCSiAL will not be acquiring Zyvex. Notice that this announcement was made shortly after DARPA’s A2P program was announced and that OCSiAL is one of RUSNANO’s (a Russian funding agency focused on nanotechnology) portfolio companies (see my Oct. 23, 2015 posting for more).

HRL Laboratories, mentioned here in an April 19, 2012 posting mostly concerned with memristors (nanoscale devices that mimic neural or synaptic plasticity), has its roots in Howard Hughes’s research laboratories as noted in the posting. In 2012, HRL was involved in another DARPA project, SyNAPSE.

Finally and minimally, PARC, also known as Xerox PARC, was made famous by Steve Jobs and Steve Wozniak when they set up their own company (Apple), basing their products on innovations that PARC had rejected. There are other versions of the story, including one by Malcolm Gladwell for the New Yorker’s May 16, 2011 issue, which presents a more complicated and, at times, contradictory version of that particular ‘origins’ story.

Funding trends for US synthetic biology efforts

Less than 1% of total US federal funding for synthetic biology is dedicated to risk research according to a Sept. 16, 2015 Woodrow Wilson International Center for Scholars news release on EurekAlert,

A new analysis by the Synthetic Biology Project at the Wilson Center finds the Defense Department and its Defense Advanced Research Projects Agency (DARPA) fund much of the U.S. government’s research in synthetic biology, with less than 1 percent of total federal funding going to risk research.

The report, U.S. Trends in Synthetic Biology Research, finds that between 2008 and 2014, the United States invested approximately $820 million dollars in synthetic biology research. In that time period, the Defense Department became a key funder of synthetic biology research. DARPA’s investments, for example, increased from near zero in 2010 to more than $100 million in 2014 – more than three times the amount spent by the National Science Foundation (NSF).

The Wilson Center news release can also be found here on the Center’s report publication page where it goes on to provide more detail and where you can download the report,

“The increase in DARPA research spending comes as NSF is winding down its initial investment in the Synthetic Biology Engineering Research Center, or SynBERC,” says Dr. Todd Kuiken, senior program associate with the project. “After the SynBERC funding ends next year, it is unclear if there will be a dedicated synthetic biology research program outside of the Pentagon. There is also little investment addressing potential risks and ethical issues, which can affect public acceptance and market growth as the field advances.”

The new study found that less than one percent of the total U.S. funding is focused on synthetic biology risk research and approximately one percent addresses ethical, legal, and social issues.

Internationally, research funding is increasing. Last year, research investments by the European Commission and research agencies in the United Kingdom exceeded non-defense spending in the United States, the report finds.

The research spending comes at a time of growing interest in synthetic biology, particularly surrounding the potential presented by new gene-editing techniques. Recent research by the industry group SynBioBeta indicated that, so far in 2015, synthetic biology companies raised half a billion dollars – more than the total investments in 2013 and 2014 combined.

In a separate Woodrow Wilson International Center for Scholars Sept. 16, 2015 announcement about the report, an upcoming event notice was included,

Save the date: On Oct. 7, 2015, the Synthetic Biology Project will be releasing a new report on synthetic biology and federal regulations. More details will be forthcoming, but the report release will include a noon event [EST] at the Wilson Center in Washington, DC.

I haven’t been able to find any more information about this proposed report launch but you may want to check the Synthetic Biology Project website for details as they become available. ETA Oct. 1, 2015: The new report titled: Leveraging Synthetic Biology’s Promise and Managing Potential Risk: Are We Getting It Right? will be launched on Oct. 15, 2015 according to an Oct. 1, 2015 notice,

As more applications based on synthetic biology come to market, are the existing federal regulations adequate to address the risks posed by this emerging technology?

Please join us for the release of our new report, Leveraging Synthetic Biology’s Promise and Managing Potential Risk: Are We Getting It Right? Panelists will discuss how synthetic biology applications would be regulated by the U.S. Coordinated Framework for Regulation of Biotechnology, how this would affect the market pathway of these applications and whether the existing framework will protect human health and the environment.

A light lunch will be served.


Lynn Bergeson, report author; Managing Partner, Bergeson & Campbell

David Rejeski, Director, Science and Technology Innovation Program

Thursday, October 15th, 2015
12:00pm – 2:00pm

6th Floor Board Room


Wilson Center
Ronald Reagan Building and
International Trade Center
One Woodrow Wilson Plaza
1300 Pennsylvania Ave., NW
Washington, D.C. 20004

Phone: 202.691.4000


A pragmatic approach to alternatives to animal testing

Retitled and cross-posted from the June 30, 2015 posting (Testing times: the future of animal alternatives) on the International Innovation blog (a CORDIS-listed project dissemination partner for FP7 and H2020 projects).

Maryse de la Giroday explains how emerging innovations can provide much-needed alternatives to animal testing. She also shares highlights of the 9th World Congress on Alternatives to Animal Testing.

‘Guinea pigging’ is the practice of testing drugs that have passed in vitro and in vivo tests on healthy humans in a Phase I clinical trial. In fact, healthy humans can make quite a bit of money as guinea pigs. The practice is sufficiently well-entrenched that there is a magazine, Guinea Pig Zero, devoted to professionals. While most participants anticipate some unpleasant side effects, guinea pigging can sometimes be a dangerous ‘profession’.


One infamous incident highlighting the dangers of guinea pigging occurred in 2006 at Northwick Park Hospital outside London. Volunteers were offered £2,000 to participate in a Phase I clinical trial to test a prospective treatment – a monoclonal antibody designed for rheumatoid arthritis and multiple sclerosis. The drug, called TGN1412, caused catastrophic systemic organ failure in participants. All six individuals receiving the drug required hospital treatment. One participant reportedly underwent amputation of fingers and toes. Another reacted with symptoms comparable to John Merrick, the Elephant Man.

The root of the disaster lay in subtle immune system differences between humans and cynomolgus monkeys – the model animal tested prior to the clinical trial. The drug was designed for the CD28 receptor on T cells. The monkeys’ receptors closely resemble those found in humans. However, unlike these monkeys, humans have other immune cells that carry CD28. The trial participants received a starting dosage that was 0.2 per cent of what the monkeys received in their final tests, but failure to take these additional receptors into account meant a dosage that was supposed to occupy 10 per cent of the available CD28 receptors instead occupied 90 per cent. After the event, a Russian inventor purchased the commercial rights to the drug and renamed it TAB08. It has been further developed by Russian company, TheraMAB, and TAB08 is reportedly in Phase II clinical trials.
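The jump from an intended 10 per cent occupancy to an actual 90 per cent can be made concrete with a toy calculation. Under a simple one-site (Langmuir) binding model — a deliberate simplification of the trial’s actual pharmacology, with a purely hypothetical dissociation constant — a dose producing 10 per cent occupancy sits at one ninth of the dissociation constant, while a dose producing 90 per cent occupancy sits at nine times it, so the effective dose-to-affinity ratio was off by a factor of roughly 81:

```python
def occupancy(dose, kd):
    """Fractional receptor occupancy under a simple one-site (Langmuir) binding model."""
    return dose / (dose + kd)

kd = 1.0            # hypothetical dissociation constant, arbitrary units
dose_10 = kd / 9    # dose yielding 10% occupancy: (kd/9) / (kd/9 + kd) = 0.1
dose_90 = 9 * kd    # dose yielding 90% occupancy: 9kd / (9kd + kd) = 0.9

print(round(occupancy(dose_10, kd), 3))  # 0.1
print(round(occupancy(dose_90, kd), 3))  # 0.9
print(dose_90 / dose_10)                 # 81.0 -- gap between the intended and actual regimes
```

This is only an illustration of why occupancy responds so steeply to dosing errors, not a reconstruction of the TGN1412 pharmacokinetics.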


While animal testing has been a powerful and useful tool for determining safe usage for pharmaceuticals and other types of chemicals, it is also a cruel and imperfect practice. Moreover, it typically predicts only 30-60 per cent of human responses to new drugs. Nanotechnology and other emerging innovations present possibilities for reducing, and in some cases eliminating, the use of animal models.

People for the Ethical Treatment of Animals (PETA), still better known for its publicity stunts, maintains a webpage outlining a number of alternatives including in silico testing (computer modelling), and, perhaps most interestingly, human-on-a-chip and organoid (tissue engineering) projects.

Organ-on-a-chip projects use stem cells to create human tissues that replicate the functions of human organs. Discussions about human-on-a-chip activities – a phrase used to describe 10 interlinked organ chips – were a highlight of the 9th World Congress on Alternatives to Animal Testing held in Prague, Czech Republic, last year. One project highlighted at the event was a joint US National Institutes of Health (NIH), US Food and Drug Administration (FDA) and US Defense Advanced Research Projects Agency (DARPA) project led by Dan Tagle that claimed it would develop a functioning human-on-a-chip by 2017. However, he and his team were surprisingly close-mouthed and provided few details, making it difficult to assess how close they are to achieving their goal.

By contrast, Uwe Marx – Leader of the ‘Multi-Organ-Chip’ programme in the Institute of Biotechnology at the Technical University of Berlin and Scientific Founder of TissUse, a human-on-a-chip start-up company – claims to have sold two-organ chips. He also claims to have successfully developed a four-organ chip and to be on his way to building a human-on-a-chip. These chips remain to be seen but, if they exist, they will integrate microfluidics, cultured cells and materials patterned at the nanoscale to mimic various organs, and will allow chemical testing in an environment that somewhat mirrors a human.

Another interesting alternative for animal testing is organoids – a feature in regenerative medicine that can function as test sites. Engineers based at Cornell University recently published a paper on their functional, synthetic immune organ. Inspired by the lymph node, the organoid is composed of gelatin-based biomaterials, which are reinforced with silicate nanoparticles (to keep the tissue from melting when reaching body temperature) and seeded with cells, allowing it to mimic the anatomical microenvironment of a lymphatic node. It behaves like its inspiration, converting B cells to germinal centres, which activate, mature and mutate antibody genes when the body is under attack. The engineers claim to be able to control the immune response and to outperform 2D cultures with their 3D organoid. If the results are reproducible, the organoid could be used to develop new therapeutics.

Maryse de la Giroday is a science communications consultant and writer.

Full disclosure: Maryse de la Giroday received transportation and accommodation for the 9th World Congress on Alternatives to Animal Testing from SEURAT-1, a European Union project, making scientific inquiries to facilitate the transition to animal testing alternatives, where possible.

ETA July 1, 2015: I would like to acknowledge more sources for the information in this article,


The guinea pigging term, the ‘professional’ aspect, the Northwick Park story, and the Guinea Pig Zero magazine can be found in Carl Elliott’s excellent 2006 story titled ‘Guinea-Pigging’ for New Yorker magazine.


Information about the drug used in the Northwick Park Hospital disaster, the sale of the rights to a Russian inventor, and the June 2015 date for the current Phase II clinical trials were found in this Wikipedia essay titled, TGN 1412.


Additional information about the renamed drug, TAB08, and its Phase II clinical trials was found (a) on a US government website for information on clinical trials, (b) in a Dec. 2014 (?) TheraMAB advertisement in a Nature group magazine and (c) in a Jan. 2014 press release,




An April 2015 article (Experimental drug that injured UK volunteers resumes in human trials) by Owen Dyer for the British Medical Journal also mentioned the 2015 TheraMAB Phase II clinical trials and provided information about the Macaque (cynomolgus) monkey tests.


BMJ 2015; 350 doi: http://dx.doi.org/10.1136/bmj.h1831 (Published 02 April 2015) Cite this as: BMJ 2015;350:h1831

A 2009 study by Christopher Horvath and Mark Milton somewhat contradicts the Dyer article’s contention that a Macaque monkey species was used as the animal model. (As the Dyer article is more recent and the Horvath/Milton analysis is more complex, covering TGN 1412 in the context of other MAB drugs and their precursor tests along with specific TGN 1412 tests, I opted for the simpler description.)

The TeGenero Incident [another name for the Northwick Park Accident] and the Duff Report Conclusions: A Series of Unfortunate Events or an Avoidable Event? by Christopher J. Horvath and Mark N. Milton. Published online before print February 24, 2009, doi: 10.1177/0192623309332986 Toxicol Pathol April 2009 vol. 37 no. 3 372-383


Philippa Roxbuy’s May 24, 2013 BBC news online article provided confirmation and an additional detail or two about the Northwick Park Hospital accident. It notes that other models, in addition to animal models, are being developed.


Anne Ju’s excellent June 10, 2015 news release about the Cornell University organoid (synthetic immune organ) project was very helpful.


There will also be a magazine article in International Innovation, which will differ somewhat from the blog posting, due to editorial style and other requirements.

ETA July 22, 2015: I now have a link to the magazine article.

Entangling thousands of atoms

Quantum entanglement as an idea seems extraordinary to me, like something from a fevered imagination made possible only with certain kinds of hallucinogens. I suppose you could call the theoretical physicists who’ve conceptualized entanglement a different breed, as they don’t seem to need chemical assistance for their flights of fancy, which turn out to be reality. Researchers at MIT (Massachusetts Institute of Technology) and the University of Belgrade (Serbia) have entangled thousands of atoms with a single photon, according to a March 26, 2015 news item on Nanotechnology Now,

Physicists from MIT and the University of Belgrade have developed a new technique that can successfully entangle 3,000 atoms using only a single photon. The results, published today in the journal Nature, represent the largest number of particles that have ever been mutually entangled experimentally.

The researchers say the technique provides a realistic method to generate large ensembles of entangled atoms, which are key components for realizing more-precise atomic clocks.

“You can make the argument that a single photon cannot possibly change the state of 3,000 atoms, but this one photon does — it builds up correlations that you didn’t have before,” says Vladan Vuletic, the Lester Wolfe Professor in MIT’s Department of Physics, and the paper’s senior author. “We have basically opened up a new class of entangled states we can make, but there are many more new classes to be explored.”

A March 26, 2015 MIT news release by Jennifer Chu (also on EurekAlert but dated March 25, 2015), which originated the news item, describes entanglement with particular attention to how it relates to atomic timekeeping,

Entanglement is a curious phenomenon: As the theory goes, two or more particles may be correlated in such a way that any change to one will simultaneously change the other, no matter how far apart they may be. For instance, if one atom in an entangled pair were somehow made to spin clockwise, the other atom would instantly be known to spin counterclockwise, even though the two may be physically separated by thousands of miles.

The phenomenon of entanglement, which physicist Albert Einstein once famously dismissed as “spooky action at a distance,” is described not by the laws of classical physics, but by quantum mechanics, which explains the interactions of particles at the nanoscale. At such minuscule scales, particles such as atoms are known to behave differently from matter at the macroscale.

Scientists have been searching for ways to entangle not just pairs, but large numbers of atoms; such ensembles could be the basis for powerful quantum computers and more-precise atomic clocks. The latter is a motivation for Vuletic’s group.

Today’s best atomic clocks are based on the natural oscillations within a cloud of trapped atoms. As the atoms oscillate, they act as a pendulum, keeping steady time. A laser beam within the clock, directed through the cloud of atoms, can detect the atoms’ vibrations, which ultimately determine the length of a single second.

“Today’s clocks are really amazing,” Vuletic says. “They would be less than a minute off if they ran since the Big Bang — that’s the stability of the best clocks that exist today. We’re hoping to get even further.”

The accuracy of atomic clocks improves as more and more atoms oscillate in a cloud. Conventional atomic clocks’ precision is proportional to the square root of the number of atoms: For example, a clock with nine times more atoms would only be three times as accurate. If these same atoms were entangled, a clock’s precision could be directly proportional to the number of atoms — in this case, nine times as accurate. The larger the number of entangled particles, then, the better an atomic clock’s timekeeping.
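To make that scaling concrete, here is a small Python sketch (my own illustration, not taken from the paper) comparing the standard quantum limit for independent atoms with the linear scaling that full entanglement would allow,

```python
import math

def precision_gain(n_atoms, entangled=False):
    """Relative precision of an atomic clock with n_atoms, versus one atom.

    Independent atoms: gain scales as sqrt(N) (the standard quantum limit).
    Fully entangled atoms: gain scales as N (the Heisenberg limit).
    """
    return n_atoms if entangled else math.sqrt(n_atoms)

# The example from the text: nine times more atoms.
print(precision_gain(9))                  # 3.0 -- only three times as accurate
print(precision_gain(9, entangled=True))  # 9   -- nine times as accurate

# At the scale of the MIT experiment, entanglement could in principle buy
# a factor of sqrt(3000), about 55, over the standard quantum limit.
print(precision_gain(3000, entangled=True) / precision_gain(3000))
```

The factor-of-55 figure at the end is the theoretical ceiling for 3,000 atoms; the researchers’ own near-term estimate, quoted later, is more modest.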

It seems weak lasers make big entanglements possible (from the news release),

Scientists have so far been able to entangle large groups of atoms, although most attempts have only generated entanglement between pairs in a group. Only one team has successfully entangled 100 atoms — the largest mutual entanglement to date, and only a small fraction of the whole atomic ensemble.

Now Vuletic and his colleagues have successfully created a mutual entanglement among 3,000 atoms, virtually all the atoms in the ensemble, using very weak laser light — down to pulses containing a single photon. The weaker the light, the better, Vuletic says, as it is less likely to disrupt the cloud. “The system remains in a relatively clean quantum state,” he says.

The researchers first cooled a cloud of atoms, then trapped them in a laser trap, and sent a weak laser pulse through the cloud. They then set up a detector to look for a particular photon within the beam. Vuletic reasoned that if a photon has passed through the atom cloud without event, its polarization, or direction of oscillation, would remain the same. If, however, a photon has interacted with the atoms, its polarization rotates just slightly — a sign that it was affected by quantum “noise” in the ensemble of spinning atoms, with the noise being the difference in the number of atoms spinning clockwise and counterclockwise.
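The detection scheme described above lends itself to a toy Monte Carlo sketch. Everything below is my own illustration: the coupling constant, the small-angle treatment, and the trial counts are invented for the purpose and are not taken from the paper,

```python
import math
import random

random.seed(1)

N_ATOMS = 3000
COUPLING = 2e-3  # invented: polarization rotation (radians) per unit spin imbalance

def herald_once():
    """Send one photon through a cloud of randomly spinning atoms.

    The imbalance between clockwise and counterclockwise spins rotates the
    photon's polarization by a tiny angle; detecting the photon in the
    perpendicular polarization (probability sin^2 theta) heralds entanglement.
    """
    imbalance = sum(random.choice((-1, 1)) for _ in range(N_ATOMS))
    theta = COUPLING * imbalance
    return random.random() < math.sin(theta) ** 2

trials = 2000
heralds = sum(herald_once() for _ in range(trials))
print(f"heralded {heralds} of {trials} photons")  # rare events, as in the experiment
```

The point of the sketch is qualitative: the rotation angle is set by statistical quantum “noise” in the spin imbalance, so perpendicular-polarization detections are rare, and each one singles out a state of the whole ensemble.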

“Every now and then, we observe an outgoing photon whose electric field oscillates in a direction perpendicular to that of the incoming photons,” Vuletic says. “When we detect such a photon, we know that must have been caused by the atomic ensemble, and surprisingly enough, that detection generates a very strongly entangled state of the atoms.”

Vuletic and his colleagues are currently using the single-photon detection technique to build a state-of-the-art atomic clock that they hope will overcome what’s known as the “standard quantum limit” — a limit to how accurate measurements can be in quantum systems. Vuletic says the group’s current setup may be a step toward developing even more complex entangled states.

“This particular state can improve atomic clocks by a factor of two,” Vuletic says. “We’re striving toward making even more complicated states that can go further.”

This research was supported in part by the National Science Foundation, the Defense Advanced Research Projects Agency, and the Air Force Office of Scientific Research.

Here’s a link to and a citation for the paper,

Entanglement with negative Wigner function of almost 3,000 atoms heralded by one photon by Robert McConnell, Hao Zhang, Jiazhong Hu, Senka Ćuk & Vladan Vuletić. Nature 519, 439–442 (26 March 2015). doi:10.1038/nature14293. Published online 25 March 2015.

This article is behind a paywall but there is a free preview via ReadCube Access.

This image illustrates the entanglement of a large number of atoms. The atoms, shown in purple, are shown mutually entangled with one another. Image: Christine Daniloff/MIT and Jose-Luis Olivares/MIT


Self-organizing nanotubes and nonequilibrium systems provide insights into evolution and artificial life

If you’re interested in the second law of thermodynamics, this Feb. 10, 2015 news item on ScienceDaily provides some insight into the second law, self-organized systems, and evolution,

The second law of thermodynamics tells us that all systems evolve toward a state of maximum entropy, wherein all energy is dissipated as heat, and no available energy remains to do work. Since the mid-20th century, research has pointed to an extension of the second law for nonequilibrium systems: the Maximum Entropy Production Principle (MEPP) states that a system away from equilibrium evolves in such a way as to maximize entropy production, given present constraints.

Now, physicists Alexey Bezryadin, Alfred Hubler, and Andrey Belkin from the University of Illinois at Urbana-Champaign, have demonstrated the emergence of self-organized structures that drive the evolution of a non-equilibrium system to a state of maximum entropy production. The authors suggest MEPP underlies the evolution of the artificial system’s self-organization, in the same way that it underlies the evolution of ordered systems (biological life) on Earth. …

A Feb. 10, 2015 University of Illinois College of Engineering news release (also on EurekAlert), which originated the news item, provides more detail about the theory and the research,

MEPP may have profound implications for our understanding of the evolution of biological life on Earth and of the underlying rules that govern the behavior and evolution of all nonequilibrium systems. Life emerged on Earth from the strongly nonequilibrium energy distribution created by the Sun’s hot photons striking a cooler planet. Plants evolved to capture high energy photons and produce heat, generating entropy. Then animals evolved to eat plants increasing the dissipation of heat energy and maximizing entropy production.

In their experiment, the researchers suspended a large number of carbon nanotubes in a non-conducting non-polar fluid and drove the system out of equilibrium by applying a strong electric field. Once electrically charged, the system evolved toward maximum entropy through two distinct intermediate states, with the spontaneous emergence of self-assembled conducting nanotube chains.

In the first state, the “avalanche” regime, the conductive chains aligned themselves according to the polarity of the applied voltage, allowing the system to carry current and thus to dissipate heat and produce entropy. The chains appeared to sprout appendages as nanotubes aligned themselves so as to adjoin adjacent parallel chains, effectively increasing entropy production. But frequently, this self-organization was destroyed through avalanches triggered by the heating and charging that emanates from the emerging electric current streams. (…)

“The avalanches were apparent in the changes of the electric current over time,” said Bezryadin.

“Toward the final stages of this regime, the appendages were not destroyed during the avalanches, but rather retracted until the avalanche ended, then reformed their connection. So it was obvious that the avalanches correspond to the ‘feeding cycle’ of the ‘nanotube insect’,” comments Bezryadin.

In the second relatively stable stage of evolution, the entropy production rate reached maximum or near maximum. This state is quasi-stable in that there were no destructive avalanches.
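For intuition, the two regimes can be caricatured numerically. This is purely my own toy model with invented constants: at fixed voltage, growing conducting chains lower the resistance, so the dissipated power (and with it the entropy production rate) climbs, with occasional avalanches knocking it back, until a quasi-stable maximum is reached,

```python
import random

random.seed(0)

V = 10.0            # applied voltage (arbitrary units)
R = 1000.0          # initial resistance of the disordered suspension
R_MIN = 10.0        # resistance once the conducting chains are fully formed
AVALANCHE_P = 0.15  # invented chance per step that heating triggers an avalanche

power_history = []
for step in range(200):
    R = max(R_MIN, R * 0.9)          # chains grow: resistance drops
    if step < 150 and random.random() < AVALANCHE_P:
        R = min(1000.0, R * 3.0)     # avalanche destroys some chains
    power_history.append(V * V / R)  # dissipated power ~ entropy production rate

print(f"initial power: {power_history[0]:.2f}")
print(f"final power:   {power_history[-1]:.2f}")  # settles at the maximum, V*V/R_MIN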

The study points to a possible classification scheme for evolutionary stages and a criterion for the point at which evolution of the system is irreversible, wherein entropy production in the self-organizing subsystem reaches its maximum possible value. Further experimentation on a larger scale is necessary to affirm these underlying principles, but if they hold true, they will prove a great advantage in predicting behavioral and evolutionary trends in nonequilibrium systems.

The authors draw an analogy between the evolution of intelligent life forms on Earth and the emergence of the wiggling bugs in their experiment. The researchers note that further quantitative studies are needed to round out this comparison. In particular, they would need to demonstrate that their “wiggling bugs” can multiply, which would require the experiment be reproduced on a significantly larger scale.

Such a study, if successful, would have implications for the eventual development of technologies that feature self-organized artificial intelligence, an idea explored elsewhere by co-author Alfred Hubler, funded by the Defense Advanced Research Projects Agency [DARPA]. [emphasis mine]

“The general trend of the evolution of biological systems seems to be this: more advanced life forms tend to dissipate more energy by broadening their access to various forms of stored energy,” Bezryadin proposes. “Thus a common underlying principle can be suggested between our self-organized clouds of nanotubes, which generate more and more heat by reducing their electrical resistance and thus allow more current to flow, and the biological systems which look for new means to find food, either through biological adaptation or by inventing more technologies.

“Extended sources of food allow biological forms to further grow, multiply, consume more food and thus produce more heat and generate entropy. It seems reasonable to say that real life organisms are still far from the absolute maximum of the entropy production rate. In both cases, there are ‘avalanches’ or ‘extinction events’, which set back this evolution. Only if all free energy given by the Sun is consumed, by building a Dyson sphere for example, and converted into heat then a definitely stable phase of the evolution can be expected.”

“Intelligence, as far as we know, is inseparable from life,” he adds. “Thus, to achieve artificial life or artificial intelligence, our recommendation would be to study systems which are far from equilibrium, with many degrees of freedom—many building blocks—so that they can self-organize and participate in some evolution. The entropy production criterion appears to be the guiding principle of the evolution efficiency.”

I am fascinated

  • (a) because this piece took an unexpected turn onto the topic of artificial life/artificial intelligence,
  • (b) because of my longstanding interest in artificial life/artificial intelligence,
  • (c) because of the military connection, and
  • (d) because this is the first time I’ve come across something that provides a bridge from fundamental particles to nanoparticles.

Here’s a link to and a citation for the paper,

Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production by A. Belkin, A. Hubler, & A. Bezryadin. Scientific Reports 5, Article number: 8323 (2015). doi:10.1038/srep08323. Published 09 February 2015.

Adding to my delight, this paper is open access.

See-through medical sensors from the University of Wisconsin-Madison

This is quite the week for see-through medical devices based on graphene. A second team has developed a transparent sensor which could allow scientists to make observations of brain activity that are now impossible, according to an Oct. 20, 2014 University of Wisconsin-Madison news release (also on EurekAlert),

Neural researchers study, monitor or stimulate the brain using imaging techniques in conjunction with implantable sensors that allow them to continuously capture and associate fleeting brain signals with the brain activity they can see.

However, it’s difficult to see brain activity when there are sensors blocking the view.

“One of the holy grails of neural implant technology is that we’d really like to have an implant device that doesn’t interfere with any of the traditional imaging diagnostics,” says Justin Williams, the Vilas Distinguished Achievement Professor of biomedical engineering and neurological surgery at UW-Madison. “A traditional implant looks like a square of dots, and you can’t see anything under it. We wanted to make a transparent electronic device.”

The researchers chose graphene, a material gaining wider use in everything from solar cells to electronics, because of its versatility and biocompatibility. And in fact, they can make their sensors incredibly flexible and transparent because the electronic circuit elements are only 4 atoms thick—an astounding thinness made possible by graphene’s excellent conductive properties. “It’s got to be very thin and robust to survive in the body,” says Zhenqiang (Jack) Ma, the Lynn H. Matthias Professor and Vilas Distinguished Achievement Professor of electrical and computer engineering at UW-Madison. “It is soft and flexible, and a good tradeoff between transparency, strength and conductivity.”

Drawing on his expertise in developing revolutionary flexible electronics, he, Williams and their students designed and fabricated the micro-electrode arrays, which—unlike existing devices—work in tandem with a range of imaging technologies. “Other implantable micro-devices might be transparent at one wavelength, but not at others, or they lose their properties,” says Ma. “Our devices are transparent across a large spectrum—all the way from ultraviolet to deep infrared.”

The transparent sensors could be a boon to neuromodulation therapies, which physicians increasingly are using to control symptoms, restore function, and relieve pain in patients with diseases or disorders such as hypertension, epilepsy, Parkinson’s disease, or others, says Kip Ludwig, a program director for the National Institutes of Health neural engineering research efforts. “Despite remarkable improvements seen in neuromodulation clinical trials for such diseases, our understanding of how these therapies work—and therefore our ability to improve existing or identify new therapies—is rudimentary.”

Currently, he says, researchers are limited in their ability to directly observe how the body generates electrical signals, as well as how it reacts to externally generated electrical signals. “Clear electrodes in combination with recent technological advances in optogenetics and optical voltage probes will enable researchers to isolate those biological mechanisms. This fundamental knowledge could be catalytic in dramatically improving existing neuromodulation therapies and identifying new therapies.”

The advance aligns with bold goals set forth in President Barack Obama’s BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative. Obama announced the initiative in April 2013 as an effort to spur innovations that can revolutionize understanding of the brain and unlock ways to prevent, treat or cure such disorders as Alzheimer’s and Parkinson’s disease, post-traumatic stress disorder, epilepsy, traumatic brain injury, and others.

The UW-Madison researchers developed the technology with funding from the Reliable Neural-Interface Technology program at the Defense Advanced Research Projects Agency.

While the researchers centered their efforts around neural research, they already have started to explore other medical device applications. For example, working with researchers at the University of Illinois-Chicago, they prototyped a contact lens instrumented with dozens of invisible sensors to detect injury to the retina; the UIC team is exploring applications such as early diagnosis of glaucoma.

Here’s an image of the see-through medical implant,

Caption: A blue light shines through a clear implantable medical sensor onto a brain model. See-through sensors, which have been developed by a team of University of Wisconsin Madison engineers, should help neural researchers better view brain activity. Credit: Justin Williams research group


Here’s a link to and a citation for the paper,

Graphene-based carbon-layered electrode array technology for neural imaging and optogenetic applications by Dong-Wook Park, Amelia A. Schendel, Solomon Mikael, Sarah K. Brodnick, Thomas J. Richner, Jared P. Ness, Mohammed R. Hayat, Farid Atry, Seth T. Frye, Ramin Pashaie, Sanitta Thongpang, Zhenqiang Ma, & Justin C. Williams. Nature Communications 5, Article number: 5258 (2014). doi:10.1038/ncomms6258. Published 20 October 2014.

This is an open access paper.

DARPA (US Defense Advanced Research Projects Agency), which funds this work at the University of Wisconsin-Madison, has also provided an Oct. 20, 2014 news release (also published as an Oct. 27, 2014 news item on Nanowerk) describing this research from the military perspective, which may not be what you might expect. First, here’s a description of the DARPA funding programme underwriting this research, from DARPA’s Reliable Neural-Interface Technology (RE-NET) webpage,

Advances in technology for military uniforms, body armor and equipment have saved countless lives of our servicemembers injured on the battlefield. Unfortunately, many of those survivors are seriously and permanently wounded, with unprecedented rates of limb loss and traumatic brain injury among our returning soldiers. This crisis has motivated great interest in the science of and technology for restoring sensorimotor functions lost to amputation and injury of the central nervous system. For a decade now, DARPA has been leading efforts aimed at ‘revolutionizing’ the state-of-the-art in prosthetic limbs, recently debuting 2 advanced mechatronic limbs for the upper extremity. These new devices are truly anthropomorphic and capable of performing dexterous manipulation functions that finally begin to approach the capabilities of natural limbs. However, in the absence of a high bandwidth, intuitive interface for the user, these limbs will never achieve their full potential in improving the quality of life for the wounded soldiers that could benefit from this advanced technology.

DARPA created the Reliable Neural-Interface Technology (RE-NET) program in 2010 to directly address the need for high performance neural interfaces to control dexterous functions made possible with advanced prosthetic limbs.  Specifically, RE-NET seeks to develop the technologies needed to reliably extract information from the nervous system, and to do so at a scale and rate necessary to control many degree-of-freedom (DOF) machines, such as high-performance prosthetic limbs. Prior to the DARPA RE-NET program, all existing methods to extract neural control signals were inadequate for amputees to control high-performance prostheses, either because the level of extracted information was too low or the functional lifetime was too short. However, recent technological advances create new opportunities to solve both of these neural-interface problems. For example, it is now feasible to develop high-resolution peripheral neuromuscular interfaces that increase the amount of information obtained from the peripheral nervous system.  Furthermore, advances in cortical microelectrode technologies are extending the durability of neural signals obtained from the brain, making it possible to create brain-controlled prosthetics that remain useful over the full lifetime of the patient.

US Defense Advanced Research Projects Agency (DARPA) Atoms to Products webinar in September 2014

On Sept. 9, 2014 and Sept. 11, 2014, DARPA (US Defense Advanced Research Projects Agency) will hold identical webinars for proposers interested in the Atoms to Product (A2P) program (presumably they are expecting many, many proposers). (Thanks to James Lewis on the Foresight Institute’s Nanodot blog for his Sept. 1, 2014 posting about the webinars.)

An Aug. 22, 2014 DARPA news release offers details about the project and the webinars,

New program also seeks to develop revolutionary miniaturization and assembly methods that would work at scales 100,000 times smaller than current state-of-the-art technology

Many common materials exhibit different and potentially useful characteristics when fabricated at extremely small scales—that is, at dimensions near the size of atoms, or a few ten-billionths of a meter. These “atomic scale” or “nanoscale” properties include quantized electrical characteristics, glueless adhesion, rapid temperature changes, and tunable light absorption and scattering that, if available in human-scale products and systems, could offer potentially revolutionary defense and commercial capabilities. Two as-yet insurmountable technical challenges, however, stand in the way: Lack of knowledge of how to retain nanoscale properties in materials at larger scales, and lack of assembly capabilities for items between nanoscale and 100 microns—slightly wider than a human hair.

DARPA has created the Atoms to Product (A2P) program to help overcome these challenges. The program seeks to develop enhanced technologies for assembling atomic-scale pieces. It also seeks to integrate these components into materials and systems from nanoscale up to product scale in ways that preserve and exploit distinctive nanoscale properties.

“We want to explore new ways of putting incredibly tiny things together, with the goal of developing new miniaturization and assembly methods that would work at scales 100,000 times smaller than current state-of-the-art technology,” said John Main, DARPA program manager. “If successful, A2P could help enable creation of entirely new classes of materials that exhibit nanoscale properties at all scales. It could lead to the ability to miniaturize materials, processes and devices that can’t be miniaturized with current technology, as well as build three-dimensional products and systems at much smaller sizes.”

This degree of scaled assembly is common in nature, Main continued. “Plants and animals, for example, are effectively systems assembled from atomic- and molecular-scale components a million to a billion times smaller than the whole organism. We’re trying to lay a similar foundation for developing future materials and devices.”

To familiarize potential participants with the technical objectives of the A2P program, DARPA has scheduled identical Proposers Day webinars on Tuesday, September 9, 2014, and Thursday, September 11, 2014. Advance registration is required and closes on September 5, 2014, at 5:00 PM Eastern Time. Participants must register through the registration website: http://www.sa-meetings.com/A2PProposersDay.

The DARPA Special Notice announcing the Proposers’ Day webinars is available at http://go.usa.gov/mgKB. This announcement does not constitute a formal solicitation for proposals or abstracts and is issued solely for information and program planning purposes. The Special Notice is not a Request for Information (RFI); therefore, DARPA will accept no submissions against this announcement. DARPA expects to release a Broad Agency Announcement (BAA) with full technical details on A2P soon on the Federal Business Opportunities website (www.fbo.gov). For more information, please email DARPA-SN-14-61@darpa.mil.

Over the years I’ve come across several references to bottom-up engineering or manufacturing but it’s seemed more theoretical than real. I gather DARPA is hoping to make bottom-up manufacturing a reality.

TrueNorth, a brain-inspired chip architecture from IBM and Cornell University

As a Canadian, I invariably follow “true north” with “strong and free” when singing our national anthem; for many Canadians it is almost the only phrase remembered without hesitation. Consequently, some of the buzz surrounding the publication of a paper celebrating ‘TrueNorth’, a brain-inspired chip, is a bit disconcerting. Nonetheless, here is the latest IBM (in collaboration with Cornell University) news from an Aug. 8, 2014 news item on Nanowerk,

Scientists from IBM unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW—orders of magnitude less power than a modern microprocessor. A neurosynaptic supercomputer the size of a postage stamp that runs on the energy equivalent of a hearing-aid battery, this technology could transform science, technology, business, government, and society by enabling vision, audition, and multi-sensory applications.

An Aug. 7, 2014 IBM news release, which originated the news item, provides an overview of the multi-year process this breakthrough represents (Note: Links have been removed),

There is a huge disparity between the human brain’s cognitive capability and ultra-low power consumption when compared to today’s computers. To bridge the divide, IBM scientists created something that didn’t previously exist—an entirely new neuroscience-inspired scalable and efficient computer architecture that breaks path with the prevailing von Neumann architecture used almost universally since 1946.

This second generation chip is the culmination of almost a decade of research and development, including the initial single core hardware prototype in 2011 and software ecosystem with a new programming language and chip simulator in 2013.

The new cognitive chip architecture has an on-chip two-dimensional mesh network of 4096 digital, distributed neurosynaptic cores, where each core module integrates memory, computation, and communication, and operates in an event-driven, parallel, and fault-tolerant fashion. To enable system scaling beyond single-chip boundaries, adjacent chips, when tiled, can seamlessly connect to each other—building a foundation for future neurosynaptic supercomputers. To demonstrate scalability, IBM also revealed a 16-chip system with sixteen million programmable neurons and four billion programmable synapses.
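As a quick sanity check, the headline figures above are internally consistent once you notice that the “one million” and “256 million” are binary-prefix counts (2^20 and 2^28). A back-of-the-envelope calculation,

```python
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core        # 1,048,576: the "one million" neurons
synapses = neurons * synapses_per_neuron  # 268,435,456: the "256 million" synapses

print(f"{neurons:,} neurons, {synapses:,} synapses per chip")
print(f"{16 * neurons:,} neurons, {16 * synapses:,} synapses in the 16-chip system")

# Chips needed to reach the stated long-term goal of one hundred trillion synapses:
print(f"about {10**14 // synapses:,} chips")  # about 372,529
```

The last line suggests why the release talks about “tiling multiple chips on a board”: the hundred-trillion-synapse vision implies boards of chips on a very large scale.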

“IBM has broken new ground in the field of brain-inspired computers, in terms of a radically new architecture, unprecedented scale, unparalleled power/area/speed efficiency, boundless scalability, and innovative design techniques. We foresee new generations of information technology systems – that complement today’s von Neumann machines – powered by an evolving ecosystem of systems, software, and services,” said Dr. Dharmendra S. Modha, IBM Fellow and IBM Chief Scientist, Brain-Inspired Computing, IBM Research. “These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand but without the need for Wi-Fi. This achievement underscores IBM’s leadership role at pivotal transformational moments in the history of computing via long-term investment in organic innovation.”

The Defense Advanced Research Projects Agency (DARPA) has funded the project since 2008 with approximately $53M via Phase 0, Phase 1, Phase 2, and Phase 3 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program. Current collaborators include Cornell Tech and iniLabs, Ltd.

Building the Chip

The chip was fabricated using Samsung’s 28nm process technology that has a dense on-chip memory and low-leakage transistors.

“It is an astonishing achievement to leverage a process traditionally used for commercially available, low-power mobile devices to deliver a chip that emulates the human brain by processing extreme amounts of sensory information with very little power,” said Shawn Han, vice president of Foundry Marketing, Samsung Electronics. “This is a huge architectural breakthrough that is essential as the industry moves toward the next-generation cloud and big-data processing. It’s a pleasure to be part of technical progress for next-generation through Samsung’s 28nm technology.”

The event-driven circuit elements of the chip used the asynchronous design methodology developed at Cornell Tech [aka Cornell University] and refined with IBM since 2008.

“After years of collaboration with IBM, we are now a step closer to building a computer similar to our brain,” said Professor Rajit Manohar, Cornell Tech.

The combination of cutting-edge process technology, hybrid asynchronous-synchronous design methodology, and new architecture has led to a power density of 20 mW/cm², which is nearly four orders of magnitude less than today’s microprocessors.

Advancing the SyNAPSE Ecosystem

The new chip is a component of a complete end-to-end vertically integrated ecosystem spanning a chip simulator, neuroscience data, supercomputing, neuron specification, programming paradigm, algorithms and applications, and prototype design models. The ecosystem supports all aspects of the programming cycle from design through development, debugging, and deployment.

To bring forth this fundamentally different technological capability to society, IBM has designed a novel teaching curriculum for universities, customers, partners, and IBM employees.

Applications and Vision

This ecosystem signals a shift in moving computation closer to the data, taking in vastly varied kinds of sensory data, analyzing and integrating real-time information in a context-dependent way, and dealing with the ambiguity found in complex, real-world environments.

Looking to the future, IBM is working on integrating multi-sensory neurosynaptic processing into mobile devices constrained by power, volume and speed; integrating novel event-driven sensors with the chip; real-time multimedia cloud services accelerated by neurosynaptic systems; and neurosynaptic supercomputers by tiling multiple chips on a board, creating systems that would eventually scale to one hundred trillion synapses and beyond.

Building on previously demonstrated neurosynaptic cores with on-chip, online learning, IBM envisions building learning systems that adapt in real world settings. While today’s hardware is fabricated using a modern CMOS process, the underlying architecture is poised to exploit advances in future memory, 3D integration, logic, and sensor technologies to deliver even lower power, denser package, and faster speed.

I have two articles that may prove of interest: Peter Stratton’s Aug. 7, 2014 article for The Conversation provides an easy-to-read introduction to both brains, human and computer, as they apply to this research, and to TrueNorth (h/t phys.org, which also hosts Stratton’s article). There’s also an Aug. 7, 2014 article by Rob Farber for techenablement.com, which includes information from a range of text and video sources about TrueNorth and cognitive computing, as it’s also known (well worth checking out).

Here’s a link to and a citation for the paper,

A million spiking-neuron integrated circuit with a scalable communication network and interface by Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, and Dharmendra S. Modha. Science 8 August 2014: Vol. 345 no. 6197 pp. 668-673 DOI: 10.1126/science.1254642

This paper is behind a paywall.

Squishy but rigid robots from MIT (Massachusetts Institute of Technology)

A July 14, 2014 news item on ScienceDaily features robots from MIT (Massachusetts Institute of Technology) that mimic octopuses and other biological constructs or, if you prefer, movie robots,

In the movie “Terminator 2,” the shape-shifting T-1000 robot morphs into a liquid state to squeeze through tight spaces or to repair itself when harmed.

Now a phase-changing material built from wax and foam, and capable of switching between hard and soft states, could allow even low-cost robots to perform the same feat.

The material — developed by Anette Hosoi, a professor of mechanical engineering and applied mathematics at MIT, and her former graduate student Nadia Cheng, alongside researchers at the Max Planck Institute for Dynamics and Self-Organization and Stony Brook University — could be used to build deformable surgical robots. The robots could move through the body to reach a particular point without damaging any of the organs or vessels along the way.

A July 14, 2014 MIT news release (also on EurekAlert), which originated the news item, describes the research further by referencing both octopuses and jello,

Working with robotics company Boston Dynamics, based in Waltham, Mass., the researchers began developing the material as part of the Chemical Robots program of the Defense Advanced Research Projects Agency (DARPA). The agency was interested in “squishy” robots capable of squeezing through tight spaces and then expanding again to move around a given area, Hosoi says — much as octopuses do.

But if a robot is going to perform meaningful tasks, it needs to be able to exert a reasonable amount of force on its surroundings, she says. “You can’t just create a bowl of Jell-O, because if the Jell-O has to manipulate an object, it would simply deform without applying significant pressure to the thing it was trying to move.”

What’s more, controlling a very soft structure is extremely difficult: It is much harder to predict how the material will move, and what shapes it will form, than it is with a rigid robot.

So the researchers decided that the only way to build a deformable robot would be to develop a material that can switch between a soft and hard state, Hosoi says. “If you’re trying to squeeze under a door, for example, you should opt for a soft state, but if you want to pick up a hammer or open a window, you need at least part of the machine to be rigid,” she says.

Compressible and self-healing

To build a material capable of shifting between squishy and rigid states, the researchers coated a foam structure in wax. They chose foam because it can be squeezed into a small fraction of its normal size, but once released will bounce back to its original shape.

The wax coating, meanwhile, can change from a hard outer shell to a soft, pliable surface with moderate heating. This could be done by running a wire along each of the coated foam struts and then applying a current to heat up and melt the surrounding wax. Turning off the current again would allow the material to cool down and return to its rigid state.

In addition to switching the material to its soft state, heating the wax in this way would also repair any damage sustained, Hosoi says. “This material is self-healing,” she says. “So if you push it too far and fracture the coating, you can heat it and then cool it, and the structure returns to its original configuration.”
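For a sense of what the heating step above involves, here is a back-of-the-envelope sketch of the energy a heating wire would need to deliver to soften a wax coating. All the material values (wax mass, temperature rise, heating power) are illustrative assumptions, not figures from the paper; the specific heat and heat of fusion are typical textbook values for paraffin.

```python
# Illustrative estimate of the energy needed to melt a wax coating
# via resistive (Joule) heating. Material values are assumptions,
# not figures from the MIT paper.
mass_g = 5.0         # assumed mass of wax coating the foam struts
specific_heat = 2.1  # J/(g*K), typical for paraffin wax
latent_heat = 200.0  # J/g, typical paraffin heat of fusion
delta_t = 35.0       # K, assumed rise from room temp to melting point

# Energy = sensible heat (raise to melting point) + latent heat (melt)
energy_j = mass_g * (specific_heat * delta_t + latent_heat)
print(f"Energy to soften coating: ~{energy_j:.0f} J")  # ~1368 J

# At an assumed 5 W of Joule heating from the embedded wire:
power_w = 5.0
print(f"Heating time: ~{energy_j / power_w:.0f} s")
```

The point of the sketch is only that the energies involved are modest, which is consistent with the researchers' emphasis on low-cost materials and simple embedded wires.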

To build the material, the researchers simply placed the polyurethane foam in a bath of melted wax. They then squeezed the foam to encourage it to soak up the wax, Cheng says. “A lot of materials innovation can be very expensive, but in this case you could just buy really low-cost polyurethane foam and some wax from a craft store,” she says.

In order to study the properties of the material in more detail, they then used a 3-D printer to build a second version of the foam lattice structure, to allow them to carefully control the position of each of the struts and pores.

When they tested the two materials, they found that the printed lattice was more amenable to analysis than the polyurethane foam, although the latter would still be fine for low-cost applications, Hosoi says.

The wax coating could also be replaced by a stronger material, such as solder, she adds.

Hosoi is now investigating the use of other unconventional materials for robotics, such as magnetorheological and electrorheological fluids. These materials consist of a liquid with particles suspended inside, and can be made to switch from a soft to a rigid state with the application of a magnetic or electric field.

When it comes to artificial muscles for soft and biologically inspired robots, we tend to think of controlling shape through bending or contraction, says Carmel Majidi, an assistant professor of mechanical engineering in the Robotics Institute at Carnegie Mellon University, who was not involved in the research. “But for a lot of robotics tasks, reversibly tuning the mechanical rigidity of a joint can be just as important,” he says. “This work is a great demonstration of how thermally controlled rigidity-tuning could potentially be used in soft robotics.”

Here’s a link to and a citation for the paper,

Thermally Tunable, Self-Healing Composites for Soft Robotic Applications by Nadia G. Cheng, Arvind Gopinath, Lifeng Wang, Karl Iagnemma, and Anette E. Hosoi. Macromolecular Materials and Engineering DOI: 10.1002/mame.201400017 Article first published online: 30 JUN 2014

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.