Category Archives: robots

Turning brain-controlled wireless electronic prostheses into reality plus some ethical points

Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting) according to an August 5, 2020 news item on ScienceDaily,

The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.

Caption: Photo of a current neural implant that uses wires to transmit information and receive power. New research suggests how to one day cut the wires. Credit: Sergey Stavisky

An August 3, 2020 Stanford University news release (also on EurekAlert but published August 4, 2020) by Tom Abate, which originated the news item, details the problem and the proposed solution,

Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.

The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.

Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.

Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.

The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.

To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a BrainGate clinical trial.

As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
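The power argument becomes concrete with back-of-the-envelope numbers. The sketch below (in Python, using hypothetical figures of my own choosing, not values from the paper) compares streaming raw broadband neural data against transmitting only binned spike counts, the kind of reduced, action-specific signal the Stanford team describes:

```python
# Rough, illustrative comparison of wireless data rates. All constants here
# are hypothetical (typical orders of magnitude for intracortical recording),
# NOT figures reported in the Nature Biomedical Engineering paper.

CHANNELS = 96          # electrodes in a typical intracortical array
RAW_SAMPLE_HZ = 30_000 # broadband sampling rate per channel
RAW_BITS = 12          # ADC resolution per raw sample

BIN_HZ = 50            # spike-count bins transmitted per second
COUNT_BITS = 8         # bits per transmitted spike count

# bits per second for each strategy
raw_bps = CHANNELS * RAW_SAMPLE_HZ * RAW_BITS
reduced_bps = CHANNELS * BIN_HZ * COUNT_BITS

print(f"raw broadband:       {raw_bps / 1e6:.1f} Mbit/s")
print(f"binned spike counts: {reduced_bps / 1e3:.1f} kbit/s")
print(f"reduction factor:    {raw_bps / reduced_bps:.0f}x")
```

Even with these made-up numbers, transmitting only the task-relevant subset cuts the radio's data rate by nearly three orders of magnitude, which is where the heat savings would come from.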

The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

Here’s a link to and a citation for the paper,

Power-saving design opportunities for wireless intracortical brain–computer interfaces by Nir Even-Chen, Dante G. Muratore, Sergey D. Stavisky, Leigh R. Hochberg, Jaimie M. Henderson, Boris Murmann & Krishna V. Shenoy. Nature Biomedical Engineering (2020). Published: 03 August 2020

This paper is behind a paywall.

Comments about ethical issues

As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.

My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.

I can’t find anything more recent on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on the ethical issues around technology and human enhancement, in that case gene-editing. Regardless, he makes points that are applicable to brain-computer interfaces (human enhancement). (Note: Links have been removed.)

Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.

A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?

Which abilities are seen as more important than others?

The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.

And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.

One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.

Ethics of clinical trials for testing brain implants

This October 31, 2017 article by Emily Underwood for Science was revelatory,

In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.

… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.

There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”

Brain-computer interfaces, symbiosis, and ethical issues

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically1. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.

Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.

Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.

To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.

If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses.[emphasis mine]

But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.

Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.

Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research; the papers can be found here. I did not see any research papers concerning safety issues.

Getting back to Drew’s July 24, 2019 article and Patient 6,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

I strongly recommend reading Drew’s July 24, 2019 article in its entirety.


It’s easy to forget, in all the excitement over technologies ‘making our lives better’, that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues, or old issues in new ways.

What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg‘ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.

Interestingly, no one seems to care much when insects are turned into cyborgs (can’t remember who pointed this out) but it is a popular area of research, especially for military and search-and-rescue applications.

I’ve sometimes used the terms ‘machine/flesh’ and/or ‘augmentation’ as descriptions of technologies integrated with bodies, human or otherwise. You can find lots on the topic here, however I’ve tagged or categorized it.

Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human‘ featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.

Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.

Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions with a few experts and commercial interests deciding on how the rest of us (however you define ‘us’ as per Gregor Wolbring’s essay) will live.

This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis being that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained and read the same materials or entrepreneurs who too often seem to put profit over thoughtful implementations of new technologies. (See the PDF May 2020 edition [you’ll find me under Policy Development]) or see my May 15, 2020 posting here (with all the sources listed).

As for this new research at Stanford, it’s exciting news that also raises questions, as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).

Hydrogel (a soft, wet material) can memorize, retrieve, and forget information like a human brain

This is fascinating and it’s not a memristor. (You can find out more about memristors here on the Nanowerk website.) Getting back to the research, scientists at Hokkaido University (Japan) are training squishy hydrogel to remember, according to a July 28, 2020 news item (Note: Links have been removed),

Hokkaido University researchers have found a soft and wet material that can memorize, retrieve, and forget information, much like the human brain. They report their findings in the journal Proceedings of the National Academy of Sciences (PNAS).

The human brain learns things, but tends to forget them when the information is no longer important. Recreating this dynamic memory process in manmade materials has been a challenge. Hokkaido University researchers now report a hydrogel that mimics the dynamic memory function of the brain: encoding information that fades with time depending on the memory intensity.

Hydrogels are flexible materials composed of a large percentage of water—in this case about 45%—along with other chemicals that provide a scaffold-like structure to contain the water. Professor Jian Ping Gong, Assistant Professor Kunpeng Cui and their students and colleagues in Hokkaido University’s Institute for Chemical Reaction Design and Discovery (WPI-ICReDD) are seeking to develop hydrogels that can serve biological functions.

“Hydrogels are excellent candidates to mimic biological functions because they are soft and wet like human tissues,” says Gong. “We are excited to demonstrate how hydrogels can mimic some of the memory functions of brain tissue.”

Caption: The hydrogel’s memorizing-forgetting behavior is achieved based on fast water uptake (swelling) at high temperature and slow water release (shrinking) at low temperature, which is enabled by dynamic bonds in the gel. The swelling part turns from transparent to opaque when cooled, enabling memory retrieval. (Chengtao Yu et al., PNAS, July 27, 2020) Credit: Chengtao Yu et al., PNAS, July 27, 2020

A July 27, 2020 Hokkaido University press release (also on EurekAlert but published July 28, 2020), which originated the news item, investigates just how the scientists trained the hydrogel,

In this study, the researchers placed a thin hydrogel between two plastic plates; the top plate had a shape or letters cut out, leaving only that area of the hydrogel exposed. For example, patterns included an airplane and the word “GEL.” They initially placed the gel in a cold water bath to establish equilibrium. Then they moved the gel to a hot bath. The gel absorbed water into its structure causing a swell, but only in the exposed area. This imprinted the pattern, which is like a piece of information, onto the gel. When the gel was moved back to the cold water bath, the exposed area turned opaque, making the stored information visible, due to what they call “structure frustration.” At the cold temperature, the hydrogel gradually shrank, releasing the water it had absorbed. The pattern slowly faded. The longer the gel was left in the hot water, the darker or more intense the imprint would be, and therefore the longer it took to fade or “forget” the information. The team also showed hotter temperatures intensified the memories.

“This is similar to humans,” says Cui. “The longer you spend learning something or the stronger the emotional stimuli, the longer it takes to forget it.”

The team showed that the memory established in the hydrogel is stable against temperature fluctuation and large physical stretching. More interestingly, the forgetting processes can be programmed by tuning the thermal learning time or temperature. For example, when they applied different learning times to each letter of “GEL,” the letters disappeared sequentially.

The team used a hydrogel containing materials called polyampholytes or PA gels. The memorizing-forgetting behavior is achieved based on fast water uptake and slow water release, which is enabled by dynamic bonds in the hydrogels. “This approach should work for a variety of hydrogels with physical bonds,” says Gong.

“The hydrogel’s brain-like memory system could be explored for some applications, such as disappearing messages for security,” Cui added.
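The memorize-forget dynamic lends itself to a toy model. The following sketch is my own simplification, not from the paper; the time constants and visibility threshold are invented. It captures the qualitative behaviour the researchers report: longer learning in the hot bath produces a stronger imprint, which then takes longer to fade below visibility in the cold bath:

```python
import math

# Toy model (my own illustration, NOT from the PNAS paper) of the hydrogel's
# memorize-forget behaviour: imprint intensity saturates with learning time
# in the hot bath, then decays exponentially in the cold bath.

TAU_LEARN = 10.0   # minutes; hypothetical swelling time constant (hot bath)
TAU_FORGET = 60.0  # minutes; hypothetical shrinking time constant (cold bath)
THRESHOLD = 0.05   # intensity below which the pattern is "forgotten"

def imprint_intensity(learn_minutes):
    """Saturating growth of the swollen (opaque) pattern during learning."""
    return 1.0 - math.exp(-learn_minutes / TAU_LEARN)

def time_to_forget(learn_minutes):
    """Minutes in the cold bath until the imprint fades below THRESHOLD."""
    initial = imprint_intensity(learn_minutes)
    return TAU_FORGET * math.log(initial / THRESHOLD)

for t in (5, 20, 60):
    print(f"learned {t:2d} min -> forgets after {time_to_forget(t):5.1f} min")
```

Running it shows forgetting time growing monotonically with learning time, mirroring the sequential disappearance of the “GEL” letters when each was given a different learning time.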

Here’s a link to and a citation for the paper,

Hydrogels as dynamic memory with forgetting ability by Chengtao Yu, Honglei Guo, Kunpeng Cui, Xueyu Li, Ya Nan Ye, Takayuki Kurokawa, and Jian Ping Gong. PNAS August 11, 2020 117 (32) 18962-18968. First published July 27, 2020

This paper is behind a paywall.

Neurotransistor for brainlike (neuromorphic) computing

According to researchers at Helmholtz-Zentrum Dresden-Rossendorf and the rest of the international team collaborating on the work, it’s time to look more closely at plasticity in the neuronal membrane.

From the abstract for their paper, Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions by Eunhye Baek, Nikhil Ranjan Das, Carlo Vittorio Cannistraci, Taiuk Rim, Gilbert Santiago Cañón Bermúdez, Khrystyna Nych, Hyeonsu Cho, Kihyun Kim, Chang-Ki Baek, Denys Makarov, Ronald Tetzlaff, Leon Chua, Larysa Baraban & Gianaurelio Cuniberti. Nature Electronics volume 3, pages 398–408 (2020). Published online: 25 May 2020; Issue Date: July 2020

Neuromorphic architectures merge learning and memory functions within a single unit cell and in a neuron-like fashion. Research in the field has been mainly focused on the plasticity of artificial synapses. However, the intrinsic plasticity of the neuronal membrane is also important in the implementation of neuromorphic information processing. Here we report a neurotransistor made from a silicon nanowire transistor coated by an ion-doped sol–gel silicate film that can emulate the intrinsic plasticity of the neuronal membrane.

Caption: Neurotransistors: from silicon chips to neuromorphic architecture. Credit: TU Dresden / E. Baek Courtesy: Helmholtz-Zentrum Dresden-Rossendorf

A July 14, 2020 news item on Nanowerk announced the research (Note: A link has been removed),

Especially activities in the field of artificial intelligence, like teaching robots to walk or precise automatic image recognition, demand ever more powerful, yet at the same time more economical computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers us a blueprint how information can be processed and stored quickly and efficiently: our own brain.

For the very first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have now successfully imitated the functioning of brain neurons using semiconductor materials. They have published their research results in the journal Nature Electronics (“Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions”).

A July 14, 2020 Helmholtz-Zentrum Dresden-Rossendorf press release (also on EurekAlert), which originated the news item, delves further into the research,

Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely – we need new approaches”, Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.

“Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.” The advantage of such an architecture lies in the simultaneous storage and processing of information in a single component. In conventional transistor technology, they are separated, which slows processing time and hence ultimately also limits performance.

Silicon wafer + polymer = chip capable of learning

Modeling computers on the human brain is no new idea. Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden. The Professor for Materials Science and Nanotechnology is one of the three brains behind the neurotransistor alongside Ronald Tetzlaff, Professor of Fundamentals of Electrical Engineering in Dresden, and Leon Chua [emphasis mine] from the University of California at Berkeley, who had already postulated similar components in the early 1970s.

Now, Cuniberti, Baraban and their team have been able to implement it: “We apply a viscous substance – called solgel – to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” the materials science professor explains. “Ions move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.” As Cuniberti explains, this is a decisive factor in the functioning of the transistor. “The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”

Cuniberti and his team are not focused on conventional issues, though. “Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and, thus, solve problems for which they were not originally programmed.
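Cuniberti’s description of the storage effect can be caricatured in a few lines of code. In this hypothetical model (my own illustration, not the team’s; all constants invented), each excitation pulse adds to a slowly relaxing ionic state, so a transistor with a history of excitation reaches its opening threshold with fewer new pulses:

```python
# Toy model of hysteresis-based "learning" in a transistor-like element.
# Not the Dresden team's device physics; just the qualitative idea that
# slow-relaxing ions leave a residue that makes the channel open sooner.

DECAY = 0.95      # slow ionic relaxation per time step (the hysteresis)
GAIN = 0.3        # contribution of each excitation pulse
THRESHOLD = 1.0   # the channel "opens" when the state exceeds this

def pulses_to_open(prior_pulses):
    """Pulses needed to open the channel, given earlier excitation."""
    state = 0.0
    for _ in range(prior_pulses):   # earlier "training" pulses
        state = state * DECAY + GAIN
    state *= DECAY ** 5             # idle period; residue decays only slowly
    pulses = 0
    while state < THRESHOLD:        # new excitation until the channel opens
        state = state * DECAY + GAIN
        pulses += 1
    return pulses

print("untrained:", pulses_to_open(0), "pulses")
print("trained:  ", pulses_to_open(3), "pulses")
```

The previously excited element opens after fewer pulses, which is the “the system is learning” behaviour in miniature: the connection strengthened by past use responds more readily.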

I highlighted Dr. Leon Chua’s name as he was one of the first to conceptualize the notion of a memristor (memory resistor), which is what the press release seems to be referencing with the mention of artificial synapses. Dr. Chua very kindly answered a few questions for me about his work which I published in an April 13, 2010 posting (scroll down about 40% of the way).

Brain-inspired computer with optimized neural networks

Caption: Left to right: The experiment was performed on a prototype of the BrainScales-2 chip; Schematic representation of a neural network; Results for simple and complex tasks. Credit: Heidelberg University

I don’t often stumble across research from the European Union’s flagship Human Brain Project. So, this is a delightful occurrence especially with my interest in neuromorphic computing. From a July 22, 2020 Human Brain Project press release (also on EurekAlert),

Many computational properties are maximized when the dynamics of a network are at a “critical point”, a state where systems can quickly change their overall characteristics in fundamental ways, transitioning e.g. between order and chaos or stability and instability. Therefore, the critical state is widely assumed to be optimal for any computation in recurrent neural networks, which are used in many AI [artificial intelligence] applications.

Researchers from the HBP [Human Brain Project] partner Heidelberg University and the Max-Planck-Institute for Dynamics and Self-Organization challenged this assumption by testing the performance of a spiking recurrent neural network on a set of tasks with varying complexity at – and away from critical dynamics. They instantiated the network on a prototype of the analog neuromorphic BrainScaleS-2 system. BrainScaleS is a state-of-the-art brain-inspired computing system with synaptic plasticity implemented directly on the chip. It is one of two neuromorphic systems currently under development within the European Human Brain Project.

First, the researchers showed that the distance to criticality can be easily adjusted in the chip by changing the input strength, and then demonstrated a clear relation between criticality and task-performance. The assumption that criticality is beneficial for every task was not confirmed: whereas the information-theoretic measures all showed that network capacity was maximal at criticality, only the complex, memory intensive tasks profited from it, while simple tasks actually suffered. The study thus provides a more precise understanding of how the collective network state should be tuned to different task requirements for optimal performance.

Mechanistically, the optimal working point for each task can be set very easily under homeostatic plasticity by adapting the mean input strength. The theory behind this mechanism was developed very recently at the Max Planck Institute. “Putting it to work on neuromorphic hardware shows that these plasticity rules are very capable in tuning network dynamics to varying distances from criticality”, says senior author Viola Priesemann, group leader at MPIDS. Thereby tasks of varying complexity can be solved optimally within that space.

The finding may also explain why biological neural networks operate not necessarily at criticality, but in the dynamically rich vicinity of a critical point, where they can tune their computation properties to task requirements. Furthermore, it establishes neuromorphic hardware as a fast and scalable avenue to explore the impact of biological plasticity rules on neural computation and network dynamics.
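To get a feel for how mean input strength can set the distance to criticality, here's a toy calculation based on a driven branching process (my own sketch; the homeostatic set point and input rates are hypothetical, and this is not the BrainScaleS-2 model itself). If homeostatic plasticity pins the stationary activity a* while the external input rate h varies, the effective branching parameter becomes m = 1 - h/a*, so stronger input pushes the network away from the critical point m = 1 and shortens its intrinsic timescale,

```python
import math

def branching_params(h, a_star):
    """Given external input rate h and a homeostatically fixed
    stationary activity a_star, the effective branching parameter is
    m = 1 - h / a_star (from a* = m*a* + h). The intrinsic timescale
    of activity fluctuations diverges as m -> 1 (criticality)."""
    m = 1.0 - h / a_star
    tau = -1.0 / math.log(m)  # autocorrelation time, in time steps
    return m, tau

# Homeostasis keeps the stationary rate fixed; stronger input
# pushes the network further from the critical point m = 1.
for h in (0.1, 1.0, 5.0):
    m, tau = branching_params(h, a_star=10.0)
    print(f"input h={h:>4}: m={m:.2f}, timescale tau={tau:6.1f} steps")
```

Long timescales (near-critical dynamics) help memory-intensive tasks; short timescales suit simple ones, which matches the study's finding that one working point does not fit all tasks.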

“As a next step, we now study and characterize the impact of the spiking network’s working point on classifying artificial and real-world spoken words”, says first author Benjamin Cramer of Heidelberg University.

Here’s a link to and a citation for the paper,

Control of criticality and computation in spiking neuromorphic networks with plasticity by Benjamin Cramer, David Stöckel, Markus Kreft, Michael Wibral, Johannes Schemmel, Karlheinz Meier & Viola Priesemann. Nature Communications volume 11, Article number: 2853 (2020) DOI: Published: 05 June 2020

This paper is open access.

Improving neuromorphic devices with ion conducting polymer

A July 1, 2020 news item on ScienceDaily announces work which researchers are hopeful will allow them exert more control over neuromorphic devices’ speed of response,

“Neuromorphic” refers to mimicking the behavior of brain neural cells. When one speaks of neuromorphic computers, one means making computers think and process more like human brains, operating at high speed with low energy consumption.

Despite a growing interest in polymer-based neuromorphic devices, researchers have yet to establish an effective method for controlling the response speed of devices. Researchers from Tohoku University and the University of Cambridge, however, have overcome this obstacle through mixing the polymers PSS-Na and PEDOT:PSS, discovering that adding an ion conducting polymer enhances neuromorphic device response time.

A June 24, 2020 Tohoku University press release (also on EurekAlert), which originated the news item, provides a few more technical details,

Polymers are materials composed of long molecular chains and play a fundamental role in modern life, from the rubber in tires to water bottles to polystyrene. Mixing polymers together results in the creation of new materials with their own distinct physical properties.

Most studies on polymer-based neuromorphic devices focus exclusively on the application of PEDOT:PSS, a mixed conductor that transports both electrons and ions. PSS-Na, on the other hand, transports ions only. By blending these two polymers, the researchers could enhance the ion diffusivity in the active layer of the device. Their measurements confirmed an improvement in device response speed, achieving up to a five-fold shortening of the response time. The results also proved how closely related response time is to the diffusivity of ions in the active layer.
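A back-of-envelope way to see why ion diffusivity controls response speed: for a diffusion-limited film, the characteristic response time scales as L²/D, where L is the active-layer thickness and D the ion diffusivity. The thickness and diffusivity values below are hypothetical placeholders of my own (the paper reports a roughly five-fold speed-up, not these specific numbers),

```python
def response_time(thickness_m, diffusivity_m2_s):
    """Characteristic time for ions to diffuse across the active
    layer: tau ~ L^2 / D (simple 1-D diffusion estimate)."""
    return thickness_m**2 / diffusivity_m2_s

L = 100e-9       # hypothetical 100 nm active layer
D_pedot = 1e-12  # hypothetical ion diffusivity, PEDOT:PSS alone
D_blend = 5e-12  # hypothetical 5x enhancement from PSS-Na blending

tau_before = response_time(L, D_pedot)
tau_after = response_time(L, D_blend)
print(f"speed-up: {tau_before / tau_after:.1f}x")  # → speed-up: 5.0x
```

At fixed film thickness, the speed-up tracks the diffusivity ratio directly, which is why adding an ion-conducting polymer is such a direct lever on response time.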

“Our study paves the way for a deeper understanding behind the science of conducting polymers.” explains co-author Shunsuke Yamamoto from the Department of Biomolecular Engineering at Tohoku University’s Graduate School of Engineering. “Moving forward, it may be possible to create artificial neural networks composed of multiple neuromorphic devices,” he adds.

Here’s a link to and a citation for the paper,

Controlling the Neuromorphic Behavior of Organic Electrochemical Transistors by Blending Mixed and Ion Conductors by Shunsuke Yamamoto and George G. Malliaras. ACS [American Chemical Society] Appl. Electron. Mater. 2020, XXXX, XXX, XXX-XXX DOI: Publication Date: June 15, 2020 Copyright © 2020 American Chemical Society

This paper is behind a paywall.

Filmmaking beetles wearing teeny, tiny wireless cameras

Researchers at the University of Washington have developed a tiny camera that can ride aboard an insect. Here a Pinacate beetle explores the UW campus with the camera on its back. Credit: Mark Stone/University of Washington

Scientists at the University of Washington have created a removable wireless camera backpack for beetles and for tiny robots resembling beetles. Later in this post, I've embedded a video shot by a beetle; near the end, you'll find a citation and link for the paper, along with links to my other posts on insects and technology.

As for the latest on insects and technology, there’s a July 15, 2020 news item on ScienceDaily,

In the movie “Ant-Man,” the title character can shrink in size and travel by soaring on the back of an insect. Now researchers at the University of Washington have developed a tiny wireless steerable camera that can also ride aboard an insect, giving everyone a chance to see an Ant-Man view of the world.

The camera, which streams video to a smartphone at 1 to 5 frames per second, sits on a mechanical arm that can pivot 60 degrees. This allows a viewer to capture a high-resolution, panoramic shot or track a moving object while expending a minimal amount of energy. To demonstrate the versatility of this system, which weighs about 250 milligrams — about one-tenth the weight of a playing card — the team mounted it on top of live beetles and insect-sized robots.

A July 15, 2020 University of Washington news release (also on EurekAlert), which originated the news item, provides more technical detail (although I still have a few questions) about the work,

“We have created a low-power, low-weight, wireless camera system that can capture a first-person view of what’s happening from an actual live insect or create vision for small robots,” said senior author Shyam Gollakota, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering. “Vision is so important for communication and for navigation, but it’s extremely challenging to do it at such a small scale. As a result, prior to our work, wireless vision has not been possible for small robots or insects.”

Typical small cameras, such as those used in smartphones, use a lot of power to capture wide-angle, high-resolution photos, and that doesn’t work at the insect scale. While the cameras themselves are lightweight, the batteries they need to support them make the overall system too big and heavy for insects — or insect-sized robots — to lug around. So the team took a lesson from biology.

“Similar to cameras, vision in animals requires a lot of power,” said co-author Sawyer Fuller, a UW assistant professor of mechanical engineering. “It’s less of a big deal in larger creatures like humans, but flies are using 10 to 20% of their resting energy just to power their brains, most of which is devoted to visual processing. To help cut the cost, some flies have a small, high-resolution region of their compound eyes. They turn their heads to steer where they want to see with extra clarity, such as for chasing prey or a mate. This saves power over having high resolution over their entire visual field.”

To mimic an animal’s vision, the researchers used a tiny, ultra-low-power black-and-white camera that can sweep across a field of view with the help of a mechanical arm. The arm moves when the team applies a high voltage, which makes the material bend and move the camera to the desired position. Unless the team applies more power, the arm stays at that angle for about a minute before relaxing back to its original position. This is similar to how people can keep their head turned in one direction for only a short period of time before returning to a more neutral position.

“One advantage to being able to move the camera is that you can get a wide-angle view of what’s happening without consuming a huge amount of power,” said co-lead author Vikram Iyer, a UW doctoral student in electrical and computer engineering. “We can track a moving object without having to spend the energy to move a whole robot. These images are also at a higher resolution than if we used a wide-angle lens, which would create an image with the same number of pixels divided up over a much larger area.”
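Iyer's point about resolution is simple arithmetic: a steerable narrow-field camera concentrates its pixels into fewer degrees than a fixed wide-angle lens would need to cover the same total sweep. Here's a quick sketch of my own; the sensor width and lens angle are hypothetical, and only the 60-degree pivot comes from the article,

```python
def pixels_per_degree(h_pixels, fov_deg):
    # Angular resolution of a camera: sensor pixels spread over its
    # field of view; more degrees per sensor means coarser detail.
    return h_pixels / fov_deg

SENSOR_W = 160     # hypothetical horizontal pixel count
NARROW_FOV = 45.0  # hypothetical lens field of view, degrees
SWEEP = 60.0       # mechanical arm pivot range (from the article)

narrow = pixels_per_degree(SENSOR_W, NARROW_FOV)
# A fixed wide-angle lens would have to cover FOV + sweep at once.
wide = pixels_per_degree(SENSOR_W, NARROW_FOV + SWEEP)
print(f"steerable: {narrow:.1f} px/deg, wide-angle: {wide:.1f} px/deg")
```

Under these assumed numbers, the steerable camera delivers more than twice the angular detail of an equivalent wide-angle sensor while covering the same total field over time.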

The camera and arm are controlled via Bluetooth from a smartphone at distances of up to 120 meters, just a little longer than a football field.

The researchers attached their removable system to the backs of two different types of beetles — a death-feigning beetle and a Pinacate beetle. Similar beetles have been known to be able to carry loads heavier than half a gram, the researchers said.

“We made sure the beetles could still move properly when they were carrying our system,” said co-lead author Ali Najafi, a UW doctoral student in electrical and computer engineering. “They were able to navigate freely across gravel, up a slope and even climb trees.”

The beetles also lived for at least a year after the experiment ended. [emphasis mine]

“We added a small accelerometer to our system to be able to detect when the beetle moves. Then it only captures images during that time,” Iyer said. “If the camera is just continuously streaming without this accelerometer, we could record one to two hours before the battery died. With the accelerometer, we could record for six hours or more, depending on the beetle’s activity level.”
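The battery arithmetic behind the accelerometer trick is worth making explicit: gating capture on motion stretches battery life by roughly the inverse of the beetle's activity duty cycle. A quick sanity check of my own, using the article's round numbers,

```python
def battery_hours(continuous_hours, active_fraction):
    """Gating capture on motion scales battery life by roughly the
    inverse of the beetle's activity duty cycle (ignoring the small
    standby draw of the accelerometer itself)."""
    return continuous_hours / active_fraction

# Article figures: ~1.5 h continuous streaming vs ~6 h with motion
# gating, consistent with the beetle moving about 25% of the time.
print(battery_hours(1.5, 0.25))  # → 6.0
```

The 25% activity fraction is my inference from the reported numbers, not a figure the researchers state; a lazier beetle would stretch the battery even further.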

The researchers also used their camera system to design the world’s smallest terrestrial, power-autonomous robot with wireless vision. This insect-sized robot uses vibrations to move and consumes almost the same power as low-power Bluetooth radios need to operate.

The team found, however, that the vibrations shook the camera and produced distorted images. The researchers solved this issue by having the robot stop momentarily, take a picture and then resume its journey. With this strategy, the system was still able to move about 2 to 3 centimeters per second — faster than any other tiny robot that uses vibrations to move — and had a battery life of about 90 minutes.

While the team is excited about the potential for lightweight and low-power mobile cameras, the researchers acknowledge that this technology comes with a new set of privacy risks.

“As researchers we strongly believe that it’s really important to put things in the public domain so people are aware of the risks and so people can start coming up with solutions to address them,” Gollakota said.

Applications could range from biology to exploring novel environments, the researchers said. The team hopes that future versions of the camera will require even less power and be battery free, potentially solar-powered.

“This is the first time that we’ve had a first-person view from the back of a beetle while it’s walking around. There are so many questions you could explore, such as how does the beetle respond to different stimuli that it sees in the environment?” Iyer said. “But also, insects can traverse rocky environments, which is really challenging for robots to do at this scale. So this system can also help us out by letting us see or collect samples from hard-to-navigate spaces.”


Johannes James, a UW mechanical engineering doctoral student, is also a co-author on this paper. This research was funded by a Microsoft fellowship and the National Science Foundation.

I’m surprised there’s no funding from a military agency as the military and covert operation applications seem like an obvious pairing. In any event, here’s a link to and a citation for the paper,

Wireless steerable vision for live insects and insect-scale robots by Vikram Iyer, Ali Najafi, Johannes James, Sawyer Fuller, and Shyamnath Gollakota. Science Robotics 15 Jul 2020: Vol. 5, Issue 44, eabb0839 DOI: 10.1126/scirobotics.abb0839

This paper is behind a paywall.

Video and links

As promised, here’s the video the scientists have released,

These posts feature some fairly ruthless uses of the insects.

  1. The first mention of insects and technology here is in a July 27, 2009 posting titled: Nanotechnology enables robots and human enhancement: part 4. The mention is in the second-to-last paragraph of the post. Then,
  2. A November 23, 2011 post titled: Cyborg insects and trust,
  3. A January 9, 2012 post titled: Controlling cyborg insects,
  4. A June 26, 2013 post titled: Steering cockroaches in the lab and in your backyard—cutting edge neuroscience, and, finally,
  5. An April 11, 2014 post titled: Computerized cockroaches as precursors to new healing techniques.

As for my questions (how do you put the backpacks on the beetles? is there a strap, is it glue, is it something else? how heavy is the backpack and camera? how old are the beetles you use for this experiment? where did you get the beetles from? do you have your own beetle farm where you breed them?), I’ll see if I can get some answers.

Shining a light on fluorocarbon bonds and robotic ‘soft’ matter research

Both of these news bits are concerned with light for one reason or another.

Rice University (Texas, US) and breaking fluorocarbon bonds

The secret to breaking fluorocarbon bonds is light according to a June 22, 2020 news item on Nanowerk,

Rice University engineers have created a light-powered catalyst that can break the strong chemical bonds in fluorocarbons, a group of synthetic materials that includes persistent environmental pollutants.

A June 22, 2020 Rice University news release (also on EurekAlert), which originated the news item, describes the work in greater detail,

In a study published this month in Nature Catalysis, Rice nanophotonics pioneer Naomi Halas and collaborators at the University of California, Santa Barbara (UCSB) and Princeton University showed that tiny spheres of aluminum dotted with specks of palladium could break carbon-fluorine (C-F) bonds via a catalytic process known as hydrodefluorination in which a fluorine atom is replaced by an atom of hydrogen.

The strength and stability of C-F bonds are behind some of the 20th century’s most recognizable chemical brands, including Teflon, Freon and Scotchgard. But the strength of those bonds can be problematic when fluorocarbons get into the air, soil and water. Chlorofluorocarbons, or CFCs, for example, were banned by international treaty in the 1980s after they were found to be destroying Earth’s protective ozone layer, and other fluorocarbons were on the list of “forever chemicals” targeted by a 2001 treaty.

“The hardest part about remediating any of the fluorine-containing compounds is breaking the C-F bond; it requires a lot of energy,” said Halas, an engineer and chemist whose Laboratory for Nanophotonics (LANP) specializes in creating and studying nanoparticles that interact with light.

Over the past five years, Halas and colleagues have pioneered methods for making “antenna-reactor” catalysts that spur or speed up chemical reactions. While catalysts are widely used in industry, they are typically used in energy-intensive processes that require high temperature, high pressure or both. For example, a mesh of catalytic material is inserted into a high-pressure vessel at a chemical plant, and natural gas or another fossil fuel is burned to heat the gas or liquid that’s flowed through the mesh. LANP’s antenna-reactors dramatically improve energy efficiency by capturing light energy and inserting it directly at the point of the catalytic reaction.

In the Nature Catalysis study, the energy-capturing antenna is an aluminum particle smaller than a living cell, and the reactors are islands of palladium scattered across the aluminum surface. The energy-saving feature of antenna-reactor catalysts is perhaps best illustrated by another of Halas’ previous successes: solar steam. In 2012, her team showed its energy-harvesting particles could instantly vaporize water molecules near their surface, meaning Halas and colleagues could make steam without boiling water. To drive home the point, they showed they could make steam from ice-cold water.

The antenna-reactor catalyst design allows Halas’ team to mix and match metals that are best suited for capturing light and catalyzing reactions in a particular context. The work is part of the green chemistry movement toward cleaner, more efficient chemical processes, and LANP has previously demonstrated catalysts for producing ethylene and syngas and for splitting ammonia to produce hydrogen fuel.

Study lead author Hossein Robatjazi, a Beckman Postdoctoral Fellow at UCSB who earned his Ph.D. from Rice in 2019, conducted the bulk of the research during his graduate studies in Halas’ lab. He said the project also shows the importance of interdisciplinary collaboration.

“I finished the experiments last year, but our experimental results had some interesting features, changes to the reaction kinetics under illumination, that raised an important but interesting question: What role does light play to promote the C-F breaking chemistry?” he said.

The answers came after Robatjazi arrived for his postdoctoral experience at UCSB. He was tasked with developing a microkinetics model, and a combination of insights from the model and from theoretical calculations performed by collaborators at Princeton helped explain the puzzling results.

“With this model, we used the perspective from surface science in traditional catalysis to uniquely link the experimental results to changes to the reaction pathway and reactivity under the light,” he said.

The demonstration experiments on fluoromethane could be just the beginning for the C-F breaking catalyst.

“This general reaction may be useful for remediating many other types of fluorinated molecules,” Halas said.

Caption: An artist’s illustration of the light-activated antenna-reactor catalyst Rice University engineers designed to break carbon-fluorine bonds in fluorocarbons. The aluminum portion of the particle (white and pink) captures energy from light (green), activating islands of palladium catalysts (red). In the inset, fluoromethane molecules (top) comprised of one carbon atom (black), three hydrogen atoms (grey) and one fluorine atom (light blue) react with deuterium (yellow) molecules near the palladium surface (black), cleaving the carbon-fluorine bond to produce deuterium fluoride (right) and monodeuterated methane (bottom). Credit: H. Robatjazi/Rice University

Here’s a link to and a citation for the paper,

Plasmon-driven carbon–fluorine (C(sp3)–F) bond activation with mechanistic insights into hot-carrier-mediated pathways by Hossein Robatjazi, Junwei Lucas Bao, Ming Zhang, Linan Zhou, Phillip Christopher, Emily A. Carter, Peter Nordlander & Naomi J. Halas. Nature Catalysis (2020) DOI: Published: 08 June 2020

This paper is behind a paywall.

Northwestern University (Illinois, US) brings soft robots to ‘life’

This June 22, 2020 news item on ScienceDaily reveals how scientists are getting soft robots to mimic living creatures,

Northwestern University researchers have developed a family of soft materials that imitates living creatures.

When hit with light, the film-thin materials come alive — bending, rotating and even crawling on surfaces.

A June 22, 2020 Northwestern University news release (also on EurekAlert) by Amanda Morris, which originated the news item, delves further into the details,

Called “robotic soft matter” by the Northwestern team, the materials move without complex hardware, hydraulics or electricity. The researchers believe the lifelike materials could carry out many tasks, with potential applications in energy, environmental remediation and advanced medicine.

“We live in an era in which increasingly smarter devices are constantly being developed to help us manage our everyday lives,” said Northwestern’s Samuel I. Stupp, who led the experimental studies. “The next frontier is in the development of new science that will bring inert materials to life for our benefit — by designing them to acquire capabilities of living creatures.”

The research will be published on June 22 [2020] in the journal Nature Materials.

Stupp is the Board of Trustees Professor of Materials Science and Engineering, Chemistry, Medicine and Biomedical Engineering at Northwestern and director of the Simpson Querrey Institute. He has appointments in the McCormick School of Engineering, Weinberg College of Arts and Sciences and Feinberg School of Medicine. George Schatz, the Charles E. and Emma H. Morrison Professor of Chemistry in Weinberg, led computer simulations of the materials’ lifelike behaviors. Postdoctoral fellow Chuang Li and graduate student Aysenur Iscen, from the Stupp and Schatz laboratories, respectively, are co-first authors of the paper.

Although the moving material seems miraculous, sophisticated science is at play. Its structure comprises nanoscale peptide assemblies that drain water molecules out of the material. An expert in materials chemistry, Stupp linked the peptide arrays to polymer networks designed to be chemically responsive to blue light.

When light hits the material, the network chemically shifts from hydrophilic (attracts water) to hydrophobic (resists water). As the material expels the water through its peptide “pipes,” it contracts — and comes to life. When the light is turned off, water re-enters the material, which expands as it reverts to a hydrophilic structure.

This is reminiscent of the reversible contraction of muscles, which inspired Stupp and his team to design the new materials.

“From biological systems, we learned that the magic of muscles is based on the connection between assemblies of small proteins and giant protein polymers that expand and contract,” Stupp said. “Muscles do this using a chemical fuel rather than light to generate mechanical energy.”

For Northwestern’s bio-inspired material, localized light can trigger directional motion. In other words, bending can occur in different directions, depending on where the light is located. And changing the direction of the light also can force the object to turn as it crawls on a surface.

Stupp and his team believe there are endless possible applications for this new family of materials. With the ability to be designed in different shapes, the materials could play a role in a variety of tasks, ranging from environmental clean-up to brain surgery.

“These materials could augment the function of soft robots needed to pick up fragile objects and then release them in a precise location,” he said. “In medicine, for example, soft materials with ‘living’ characteristics could bend or change shape to retrieve blood clots in the brain after a stroke. They also could swim to clean water supplies and sea water or even undertake healing tasks to repair defects in batteries, membranes and chemical reactors.”

Fascinating, eh? No batteries, no power source, just light to power movement. For the curious, here’s a link to and a citation for the paper,

Supramolecular–covalent hybrid polymers for light-activated mechanical actuation by Chuang Li, Aysenur Iscen, Hiroaki Sai, Kohei Sato, Nicholas A. Sather, Stacey M. Chin, Zaida Álvarez, Liam C. Palmer, George C. Schatz & Samuel I. Stupp. Nature Materials (2020) DOI: Published: 22 June 2020

This paper is behind a paywall.

Energy-efficient artificial synapse

This is the second neuromorphic computing chip story from MIT this summer in what has turned out to be a bumper crop of research announcements in this field. The first MIT synapse story was featured in a June 16, 2020 posting. Now, there’s a second and completely different team announcing results for their artificial brain synapse work in a June 19, 2020 news item on Nanowerk (Note: A link has been removed),

Teams around the world are building ever more sophisticated artificial intelligence systems of a type called neural networks, designed in some ways to mimic the wiring of the brain, for carrying out tasks such as computer vision and natural language processing.

Using state-of-the-art semiconductor circuits to simulate neural networks requires large amounts of memory and high power consumption. Now, an MIT [Massachusetts Institute of Technology] team has made strides toward an alternative system, which uses physical, analog devices that can much more efficiently mimic brain processes.

The findings are described in the journal Nature Communications (“Protonic solid-state electrochemical synapse for physical neural networks”), in a paper by MIT professors Bilge Yildiz, Ju Li, and Jesús del Alamo, and nine others at MIT and Brookhaven National Laboratory. The first author of the paper is Xiahui Yao, a former MIT postdoc now working on energy storage at GRU Energy Lab.

That description of the work is one pretty much every team working on developing memristive (neuromorphic) chips could use.

On other fronts, the team has produced a very attractive illustration accompanying this research (aside: Is it my imagination or has there been a serious investment in the colour pink and other pastels for science illustrations?),

A new system developed at MIT and Brookhaven National Lab could provide a faster, more reliable and much more energy efficient approach to physical neural networks, by using analog ionic-electronic devices to mimic synapses. Courtesy of the researchers

A June 19, 2020 MIT news release, which originated the news item, provides more insight into this specific piece of research (hint: it’s about energy use and repeatability),

Neural networks attempt to simulate the way learning takes place in the brain, which is based on the gradual strengthening or weakening of the connections between neurons, known as synapses. The core component of this physical neural network is the resistive switch, whose electronic conductance can be controlled electrically. This control, or modulation, emulates the strengthening and weakening of synapses in the brain.

In neural networks using conventional silicon microchip technology, the simulation of these synapses is a very energy-intensive process. To improve efficiency and enable more ambitious neural network goals, researchers in recent years have been exploring a number of physical devices that could more directly mimic the way synapses gradually strengthen and weaken during learning and forgetting.

Most candidate analog resistive devices so far for such simulated synapses have either been very inefficient, in terms of energy use, or performed inconsistently from one device to another or one cycle to the next. The new system, the researchers say, overcomes both of these challenges. “We’re addressing not only the energy challenge, but also the repeatability-related challenge that is pervasive in some of the existing concepts out there,” says Yildiz, who is a professor of nuclear science and engineering and of materials science and engineering.

“I think the bottleneck today for building [neural network] applications is energy efficiency. It just takes too much energy to train these systems, particularly for applications on the edge, like autonomous cars,” says del Alamo, who is the Donner Professor in the Department of Electrical Engineering and Computer Science. Many such demanding applications are simply not feasible with today’s technology, he adds.

The resistive switch in this work is an electrochemical device, which is made of tungsten trioxide (WO3) and works in a way similar to the charging and discharging of batteries. Ions, in this case protons, can migrate into or out of the crystalline lattice of the material, explains Yildiz, depending on the polarity and strength of an applied voltage. These changes remain in place until altered by a reverse applied voltage — just as the strengthening or weakening of synapses does.

“The mechanism is similar to the doping of semiconductors,” says Li, who is also a professor of nuclear science and engineering and of materials science and engineering. In that process, the conductivity of silicon can be changed by many orders of magnitude by introducing foreign ions into the silicon lattice. “Traditionally those ions were implanted at the factory,” he says, but with the new device, the ions are pumped in and out of the lattice in a dynamic, ongoing process. The researchers can control how much of the “dopant” ions go in or out by controlling the voltage, and “we’ve demonstrated a very good repeatability and energy efficiency,” he says.
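The voltage-pulse-programmed conductance described here is easy to caricature in code. This toy model is mine (the step size, bounds and linearity are hypothetical, not the MIT device's measured characteristics), but it captures the basic idea of nonvolatile potentiation and depression,

```python
class ProtonicSynapse:
    """Toy model of a nonvolatile electrochemical synapse: each
    voltage pulse pumps protons into or out of the channel, stepping
    the conductance up (potentiation) or down (depression). The state
    persists until a pulse of opposite polarity is applied.
    Illustrative only -- not the actual MIT device physics."""

    def __init__(self, g=0.5, step=0.05, g_min=0.0, g_max=1.0):
        self.g, self.step = g, step
        self.g_min, self.g_max = g_min, g_max

    def pulse(self, polarity):
        # polarity +1 injects protons (potentiate), -1 extracts them.
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))
        return self.g

syn = ProtonicSynapse()
for _ in range(4):
    syn.pulse(+1)       # four potentiating pulses
print(round(syn.g, 2))  # → 0.7
syn.pulse(-1)           # one depressing pulse
print(round(syn.g, 2))  # → 0.65
```

A weight stored this way costs energy only when it is updated, not while it is held or read, which is part of the efficiency argument the researchers make.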

Yildiz adds that this process is “very similar to how the synapses of the biological brain work. There, we’re not working with protons, but with other ions such as calcium, potassium, magnesium, etc., and by moving those ions you actually change the resistance of the synapses, and that is an element of learning.” The process taking place in the tungsten trioxide in their device is similar to the resistance modulation taking place in biological synapses, she says.

“What we have demonstrated here,” Yildiz says, “even though it’s not an optimized device, gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain.” Trying to accomplish the same task with conventional CMOS type semiconductors would take a million times more energy, she says.

The materials used in the demonstration of the new device were chosen for their compatibility with present semiconductor manufacturing systems, according to Li. But they include a polymer material that limits the device’s tolerance for heat, so the team is still searching for other variations of the device’s proton-conducting membrane and better ways of encapsulating its hydrogen source for long-term operations.

“There’s a lot of fundamental research to be done at the materials level for this device,” Yildiz says. Ongoing research will include “work on how to integrate these devices with existing CMOS transistors” adds del Alamo. “All that takes time,” he says, “and it presents tremendous opportunities for innovation, great opportunities for our students to launch their careers.”

Coincidentally or not a University of Massachusetts at Amherst team announced memristor voltage use comparable to human brain voltage use (see my June 15, 2020 posting), plus, there’s a team at Stanford University touting their low-energy biohybrid synapse in a XXX posting. (June 2020 has been a particularly busy month here for ‘artificial brain’ or ‘memristor’ stories.)

Getting back to this latest MIT research, here’s a link to and a citation for the paper,

Protonic solid-state electrochemical synapse for physical neural networks by Xiahui Yao, Konstantin Klyukin, Wenjie Lu, Murat Onen, Seungchan Ryu, Dongha Kim, Nicolas Emond, Iradwikanari Waluyo, Adrian Hunt, Jesús A. del Alamo, Ju Li & Bilge Yildiz. Nature Communications volume 11, Article number: 3134 (2020) DOI: Published: 19 June 2020

This paper is open access.

Chameleon skin (nanomaterial made of gold nanoparticles) for robots

A June 17, 2020 news item on Nanowerk trumpets research into how robots might be able to sport chameleon-like skin one day,

A new film made of gold nanoparticles changes color in response to any type of movement. Its unprecedented qualities could allow robots to mimic chameleons and octopi — among other futuristic applications.

Unlike other materials that try to emulate nature’s color changers, this one can respond to any type of movement, like bending or twisting. Robots coated in it could enter spaces that might be dangerous or impossible for humans, and offer information just based on the way they look.

For example, a camouflaged robot could enter tough-to-access underwater crevices. If the robot changes color, biologists could learn about the pressures facing animals that live in these environments.

Although some other color-changing materials can also respond to motion, this one can be printed and programmed to display different, complex patterns that are difficult to replicate.

This video from the University of California at Riverside researchers shows the material in action (Note: It gets more interesting after the first 20 secs.),

A June 15, 2020 University of California at Riverside (UCR) news release (also on EurekAlert but published on June 17, 2020) by Jules Bernstein, which originated the news item, delves further,

Nanomaterials are simply materials that have been reduced to an extremely small scale — tens of nanometers in width and length, or about the size of a virus. When materials like silver or gold become smaller, their colors will change depending on their size, shape, and the direction they face.

“In our case, we reduced gold to nano-sized rods. We knew that if we could make the rods point in a particular direction, we could control their color,” said chemistry professor Yadong Yin. “Facing one way, they might appear red. Move them 45 degrees, and they change to green.”
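The orientation-to-color effect Yin describes can be sketched as a toy model. Gold nanorods have two plasmon resonances, a longitudinal one (reddish) excited when light’s polarization lies along the rod and a transverse one (greenish) when it lies across it, with excitation strengths following a Malus-like cos² law. The specific wavelengths and the simple cos² weighting below are textbook approximations of my own, not numbers from the paper:

```python
import math

# Hedged toy model: which plasmon mode dominates as a gold nanorod
# is rotated away from the light's polarization axis. Resonance
# wavelengths are illustrative, not measured values from the study.
LONGITUDINAL_NM = 650  # assumed reddish longitudinal resonance
TRANSVERSE_NM = 520    # assumed greenish transverse resonance

def dominant_color(angle_deg):
    """Return the dominant apparent color for a rod tilted
    angle_deg away from the polarization axis."""
    theta = math.radians(angle_deg)
    long_weight = math.cos(theta) ** 2   # along-axis excitation
    trans_weight = math.sin(theta) ** 2  # cross-axis excitation
    return "red" if long_weight > trans_weight else "green"

for angle in (0, 30, 60, 90):
    print(angle, dominant_color(angle))  # 0° is red, 90° is green
```

A real film mixes contributions from millions of rods, which is exactly why the uniform alignment described next matters: without it, the red and green contributions average out to a muddy color.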

The problem facing the research team was how to take millions of gold nanorods floating in a liquid solution and get them all to point in the same direction to display a uniform color.

Their solution was to fuse smaller magnetic nanorods onto the larger gold ones. The two different-sized rods were encapsulated in a polymer shield, so that they would remain side by side. That way, the orientation of both rods could be controlled by magnets.

“Just like if you hold a magnet over a pile of needles, they all point in the same direction. That’s how we control the color,” Yin said.
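Yin’s needle analogy can be simulated with a minimal alignment model: each magnetic rod feels a torque proportional to sin(θ − θ_B) that rotates it toward the field direction θ_B, and under overdamped relaxation every rod converges to the same orientation. The relaxation constant and step count below are arbitrary choices for illustration:

```python
import math
import random

def align(angles_deg, field_deg, k=0.5, steps=200):
    """Overdamped relaxation of magnetic rods toward a field at
    field_deg: each step nudges every rod by -k * sin(theta - theta_B).
    k and steps are illustrative, not physical parameters."""
    theta_b = math.radians(field_deg)
    thetas = [math.radians(a) for a in angles_deg]
    for _ in range(steps):
        thetas = [t - k * math.sin(t - theta_b) for t in thetas]
    return [math.degrees(t) for t in thetas]

# Randomly oriented rods all end up pointing along the 45-degree field
# (modulo full turns), like needles under a magnet.
random.seed(0)
start = [random.uniform(-180, 180) for _ in range(5)]
end = align(start, field_deg=45)
```

Because the fused gold rods share the magnetic rods’ orientation inside their polymer shields, aligning the magnetic component this way sets the plasmonic color of the whole film at once.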

Once the nanorods are dried into a thin film, their orientation is fixed in place and they no longer respond to magnets. “But, if the film is flexible, you can bend and rotate it, and will still see different colors as the orientation changes,” Yin said.

Other materials, like butterfly wings, are shiny and colorful at certain angles, and can also change color when viewed at other angles. However, those materials rely on precisely ordered microstructures, which are difficult and expensive to make for large areas. But this new film can be made to coat the surface of any sized object just as easily as applying spray paint on a house.

Though futuristic robots are an ultimate application of this film, it can be used in many other ways. UC Riverside chemist Zhiwei Li, the first author on this paper, explained that the film can be incorporated into checks or cash as an authentication feature. Under normal lighting, the film is gray, but when you put on sunglasses and look at it through polarized lenses, elaborate patterns can be seen. In addition, the color contrast of the film may change dramatically if you twist the film.

The applications, in fact, are only limited by the imagination. “Artists could use this technology to create fascinating paintings that are wildly different depending on the angle from which they are viewed,” Li said. “It would be wonderful to see how the science in our work could be combined with the beauty of art.”

Here’s a link to and a citation for the paper,

Coupling magnetic and plasmonic anisotropy in hybrid nanorods for mechanochromic responses by Zhiwei Li, Jianbo Jin, Fan Yang, Ningning Song & Yadong Yin. Nature Communications volume 11, Article number: 2883 (2020). Published: 08 June 2020

This paper is open access.

A biohybrid artificial synapse that can communicate with living cells

As I noted in my June 16, 2020 posting, we may have more than one kind of artificial brain in our future. This latest work features a biohybrid. From a June 15, 2020 news item on ScienceDaily,

In 2017, Stanford University researchers presented a new device that mimics the brain’s efficient and low-energy neural learning process [see my March 8, 2017 posting for more]. It was an artificial version of a synapse — the gap across which neurotransmitters travel to communicate between neurons — made from organic materials. In 2019, the researchers assembled nine of their artificial synapses together in an array, showing that they could be simultaneously programmed to mimic the parallel operation of the brain [see my Sept. 17, 2019 posting].

Now, in a paper published June 15 [2020] in Nature Materials, they have tested the first biohybrid version of their artificial synapse and demonstrated that it can communicate with living cells. Future technologies stemming from this device could function by responding directly to chemical signals from the brain. The research was conducted in collaboration with researchers at Istituto Italiano di Tecnologia (Italian Institute of Technology — IIT) in Italy and at Eindhoven University of Technology (Netherlands).

“This paper really highlights the unique strength of the materials that we use in being able to interact with living matter,” said Alberto Salleo, professor of materials science and engineering at Stanford and co-senior author of the paper. “The cells are happy sitting on the soft polymer. But the compatibility goes deeper: These materials work with the same molecules neurons use naturally.”

While other brain-integrated devices require an electrical signal to detect and process the brain’s messages, the communications between this device and living cells occur through electrochemistry — as though the material were just another neuron receiving messages from its neighbor.

A June 15, 2020 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, delves further into this recent work,

How neurons learn

The biohybrid artificial synapse consists of two soft polymer electrodes, separated by a trench filled with electrolyte solution – which plays the part of the synaptic cleft that separates communicating neurons in the brain. When living cells are placed on top of one electrode, neurotransmitters that those cells release can react with that electrode to produce ions. Those ions travel across the trench to the second electrode and modulate the conductive state of this electrode. Some of that change is preserved, simulating the learning process occurring in nature.

“In a biological synapse, essentially everything is controlled by chemical interactions at the synaptic junction. Whenever the cells communicate with one another, they’re using chemistry,” said Scott Keene, a graduate student at Stanford and co-lead author of the paper. “Being able to interact with the brain’s natural chemistry gives the device added utility.”

This process mimics the same kind of learning seen in biological synapses, which is highly efficient in terms of energy because computing and memory storage happen in one action. In more traditional computer systems, the data is processed first and then later moved to storage.

To test their device, the researchers used rat neuroendocrine cells that release the neurotransmitter dopamine. Before they ran their experiment, they were unsure how the dopamine would interact with their material – but they saw a permanent change in the state of their device upon the first reaction.

“We knew the reaction is irreversible, so it makes sense that it would cause a permanent change in the device’s conductive state,” said Keene. “But, it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab. That was when we realized the potential this has for emulating the long-term learning process of a synapse.”
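The behavior described above — an irreversible neurotransmitter reaction leaving a permanent change in the device’s conductive state, with reading and storage happening in the same element — can be sketched as a toy model. The class name, sensitivity value, and linear update rule are all my own illustrative assumptions, not the paper’s device physics:

```python
# Hedged toy model of the biohybrid synapse's described behavior.
# Every number here is illustrative, not taken from the paper.
class ToyBiohybridSynapse:
    def __init__(self, conductance=1.0, sensitivity=0.1):
        self.conductance = conductance
        self.sensitivity = sensitivity  # assumed gain per unit dopamine dose

    def dopamine_pulse(self, dose):
        # Irreversible electrochemical reaction: each pulse permanently
        # shifts the conductive state; nothing later undoes it.
        self.conductance += self.sensitivity * dose

    def read(self, voltage):
        # Computing and memory in one action: the stored weight is applied
        # directly as I = G * V, with no separate memory fetch.
        return self.conductance * voltage

syn = ToyBiohybridSynapse()
syn.dopamine_pulse(2.0)   # conductance moves from 1.0 to 1.2, permanently
current = syn.read(0.5)   # output current 1.2 * 0.5 = 0.6
```

This is the efficiency point the release makes: in a conventional computer the weight would live in memory and be fetched for each multiply, whereas here the element that stores the state also performs the computation.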

A first step

This biohybrid design is in such early stages that the main focus of the current research was simply to make it work.

“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”

Now that the researchers have successfully tested their design, they are figuring out the best paths for future research, which could include work on brain-inspired computers, brain-machine interfaces, medical devices or new research tools for neuroscience. Already, they are working on how to make the device function better in more complex biological settings that contain different kinds of cells and neurotransmitters.

Here’s a link to and a citation for the paper,

A biohybrid synapse with neurotransmitter-mediated plasticity by Scott T. Keene, Claudia Lubrano, Setareh Kazemzadeh, Armantas Melianas, Yaakov Tuchman, Giuseppina Polino, Paola Scognamiglio, Lucio Cinà, Alberto Salleo, Yoeri van de Burgt & Francesca Santoro. Nature Materials (2020). Published: 15 June 2020

This paper is behind a paywall.