Tag Archives: neurons

Neuronal regenerative-interfaces made of cross-linked carbon nanotube films

If I understand this research correctly, they are creating a film made of carbon nanotubes that can stimulate the growth of nerve cells (neurons), thus creating a ‘living/nonliving’ hybrid or, as they call it in the press release, a ‘biosynthetic hybrid’.

An August 2, 2019 news item on Nanowerk introduces the research (Note 1: There seem to be some translation issues; Note 2: Links have been removed),

Carbon nanotubes are able to take on desired shapes thanks to a special chemical treatment, called crosslinking, and, at the same time, are able to function as substrata for the growth of nerve cells, finely tuning their growth and activity.

The research published in ACS Nano (“Chemically Cross-Linked Carbon Nanotube Films Engineered to Control Neuronal Signaling”), is a new and important step towards the construction of neuronal regenerative-interfaces to repair spinal injuries.

The study is the new achievement of a long-term and, in terms of results, successful collaboration between the scientists Laura Ballerini of SISSA (Scuola Internazionale Superiore di Studi Avanzati), Trieste, and Maurizio Prato of the University of Trieste. The work team has also been assisted by CIC biomaGUNE of San Sebastián, Spain.

Caption: Carbon nanotubes able to take on the desired shapes thanks to a special chemical treatment, called crosslinking and, at the same time, able to function as substrata for the growth of nerve cells, finely tuning their growth and activity. Credit: Rossana Rauti

An August 2, 2019 SISSA press release (also on EurekAlert), which originated the news item, adds detail,

The carbon nanotubes used in the research have been modified by appropriate chemical treatments: “For many years, in our laboratories we have been working on the chemical reactivity of carbon nanotubes, a fascinating but very difficult material to work with. Thanks to our experience, we have crosslinked them or, to say it more clearly, we have treated the nanotubes so they could link themselves to one another through specific chemical reactions. We have discovered that this procedure gives the material very interesting characteristics. For example, the material organises itself in a stable manner according to a precise shape we choose: a tissue where nerve cells need to be planted, for example, or around some electrodes,” explains Professor Prato. “We know from previous research that nerve cells grow well on carbon nanotubes, so they could be used as a surface to build hybrid devices to regenerate nerve tissues. It was necessary to ensure that this chemical modification did not compromise this process and to study whether the interaction with neurons was altered.”

Towards biosynthetic hybrids

Professor Ballerini continues: “We have discovered that the chemical process has important effects because through this treatment we can modulate the activity of neurons, in terms of growth, adhesion and survival. These materials can also regulate the communication between neurons. We can say that the carpet of crosslinked carbon nanotubes interacts intensely and constructively with the nerve cells.” This interaction depends on how much the different carbon nanotubes are linked to each other, or rather crosslinked: the lower the number of links among the nanotubes, the higher the activity of the neurons that grow on their surface. Through the chemical control of their properties, and of the links between them, it is possible to regulate the response of the neurons. Ballerini and Prato explain: “This is an intriguing result that emerges from the important and fruitful collaboration between our research groups involving advanced research in chemistry, nanoscience and neurobiology. This study provides a further step in the design of future biosynthetic hybrids to recover injured nerve tissue functions.”

Here’s a link to and a citation for the paper,

Chemically Cross-Linked Carbon Nanotube Films Engineered to Control Neuronal Signaling by Myriam Barrejón, Rossana Rauti, Laura Ballerini, Maurizio Prato. ACS Nano 2019 Publication Date: July 22, 2019 DOI: https://doi.org/10.1021/acsnano.9b02429 Copyright © 2019 American Chemical Society

This paper is behind a paywall.

Repairing brain circuits using nanotechnology

A July 30, 2019 news item on Nanowerk announces some neuroscience research (they used animal models) that could prove helpful with neurodegenerative diseases,

Working with mouse and human tissue, Johns Hopkins Medicine researchers report new evidence that a protein pumped out of some — but not all — populations of “helper” cells in the brain, called astrocytes, plays a specific role in directing the formation of connections among neurons needed for learning and forming new memories.

Using mice genetically engineered and bred with fewer such connections, the researchers conducted proof-of-concept experiments that show they could deliver corrective proteins via nanoparticles to replace the missing protein needed for “road repairs” on the defective neural highway.

Since such connective networks are lost or damaged by neurodegenerative diseases such as Alzheimer’s or certain types of intellectual disability, such as Norrie disease, the researchers say their findings advance efforts to regrow and repair the networks and potentially restore normal brain function.

A July 30, 2019 Johns Hopkins University School of Medicine news release (also on EurekAlert) provides more detail about the work (Note: A link has been removed),

“We are looking at the fundamental biology of how astrocytes function, but perhaps have discovered a new target for someday intervening in neurodegenerative diseases with novel therapeutics,” says Jeffrey Rothstein, M.D., Ph.D., the John W. Griffin Director of the Brain Science Institute and professor of neurology at the Johns Hopkins University School of Medicine.

“Although astrocytes appear to all look alike in the brain, we had an inkling that they might have specialized roles in the brain due to regional differences in the brain’s function and because of observed changes in certain diseases,” says Rothstein. “The hope is that learning to harness the individual differences in these distinct populations of astrocytes may allow us to direct brain development or even reverse the effects of certain brain conditions, and our current studies have advanced that hope.”

In the brain, astrocytes are the support cells that act as guides to direct new cells, promote chemical signaling, and clean up byproducts of brain cell metabolism.

Rothstein’s team focused on a particular astrocyte protein, glutamate transporter-1, which previous studies suggested was lost from astrocytes in certain parts of brains with neurodegenerative diseases. Like a biological vacuum cleaner, the protein normally sucks up the chemical “messenger” glutamate from the spaces between neurons after a message is sent to another cell, a step required to end the transmission and prevent toxic levels of glutamate from building up.

When these glutamate transporters disappear from certain parts of the brain — such as the motor cortex and spinal cord in people with amyotrophic lateral sclerosis (ALS) — glutamate hangs around much too long, sending messages that overexcite and kill the cells.
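The “vacuum cleaner” role of the transporter can be illustrated with a toy first-order clearance model (my own sketch, not from the study); the uptake rate stands in for transporter density:

```python
# Toy first-order clearance model (my sketch, not from the study).
# `uptake_rate` stands in for the density of glutamate transporter-1
# on nearby astrocytes; glutamate in the synaptic cleft decays as the
# transporters vacuum it up.

def glutamate_decay(g0, uptake_rate, dt, steps):
    """Return the cleft glutamate concentration over time."""
    g = g0
    trace = [g]
    for _ in range(steps):
        g -= uptake_rate * g * dt  # first-order uptake by transporters
        trace.append(g)
    return trace

# Healthy tissue: transporters abundant, glutamate cleared quickly.
healthy = glutamate_decay(g0=1.0, uptake_rate=5.0, dt=0.01, steps=100)
# Diseased tissue (e.g. ALS motor cortex): transporters lost, glutamate lingers.
impaired = glutamate_decay(g0=1.0, uptake_rate=0.5, dt=0.01, steps=100)
```

After the same elapsed time the impaired trace is still above half its starting concentration while the healthy one is nearly gone, which is the lingering, overexciting scenario the release describes.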

To figure out how the brain decides which cells need the glutamate transporters, Rothstein and colleagues focused on the region of DNA in front of the gene that typically controls the on-off switch needed to manufacture the protein. They genetically engineered mice to glow red in every cell where the gene is activated.

Normally, the glutamate transporter is turned on in all astrocytes. But when the researchers used between 1,000- and 7,000-bit segments of DNA code from the on-off switch for the glutamate transporter, all the cells in the brain glowed red, including the neurons. It wasn’t until they tried the largest sequence, an 8,300-bit stretch of DNA code from this location, that they began to see some selection in the red cells. These red cells were all astrocytes, but only in certain layers of the brain’s cortex in mice.

Because they could identify these “8.3 red astrocytes,” the researchers thought they might have a specific function different than other astrocytes in the brain. To find out more precisely what these 8.3 red astrocytes do in the brain, the researchers used a cell-sorting machine to separate the red astrocytes from the uncolored ones in mouse brain cortical tissue, and then identified which genes were turned on to much higher than usual levels in the red compared to the uncolored cell populations. The researchers found that the 8.3 red astrocytes turn on high levels of a gene that codes for a different protein known as Norrin.

Rothstein’s team took neurons from normal mouse brains, treated them with Norrin, and found that those neurons grew more of the “branches” — or extensions — used to transmit chemical messages among brain cells. Then, Rothstein says, the researchers looked at the brains of mice engineered to lack Norrin, and saw that these neurons had fewer branches than in healthy mice that made Norrin.

In another set of experiments, the research team took the DNA code for Norrin plus the 8,300 “location” DNA and assembled them into deliverable nanoparticles. When they injected the Norrin nanoparticles into the brains of mice engineered without Norrin, the neurons in these mice began to quickly grow many more branches, a process suggesting repair to neural networks. They repeated these experiments with human neurons too.

Rothstein notes that mutations in the Norrin protein that reduce levels of the protein in people cause Norrie disease — a rare, genetic disorder that can lead to blindness in infancy and intellectual disability. Because the researchers were able to grow new branches for communication, they believe it may one day be possible to use Norrin to treat some types of intellectual disabilities such as Norrie disease.

For their next steps, the researchers are investigating if Norrin can repair connections in the brains of animal models with neurodegenerative diseases, and in preparation for potential success, Miller [sic] and Rothstein have submitted a patent for Norrin.

Here’s a link to and a citation for the paper,

Molecularly defined cortical astroglia subpopulation modulates neurons via secretion of Norrin by Sean J. Miller, Thomas Philips, Namho Kim, Raha Dastgheyb, Zhuoxun Chen, Yi-Chun Hsieh, J. Gavin Daigle, Malika Datta, Jeannie Chew, Svetlana Vidensky, Jacqueline T. Pham, Ethan G. Hughes, Michael B. Robinson, Rita Sattler, Raju Tomer, Jung Soo Suk, Dwight E. Bergles, Norman Haughey, Mikhail Pletnikov, Justin Hanes & Jeffrey D. Rothstein. Nature Neuroscience volume 22, pages 741–752 (2019) DOI: https://doi.org/10.1038/s41593-019-0366-7 Published: 01 April 2019 Issue Date: May 2019

This paper is behind a paywall.

Memristors with better mimicry of synapses

It seems to me it’s been quite a while since I’ve stumbled across a memristor story from the University of Michigan, but it was worth waiting for. (Much of the research around memristors has to do with their potential application in neuromorphic (brainlike) computers.) From a December 17, 2018 news item on ScienceDaily,

A new electronic device developed at the University of Michigan can directly model the behaviors of a synapse, which is a connection between two neurons.

For the first time, the way that neurons share or compete for resources can be explored in hardware without the need for complicated circuits.

“Neuroscientists have argued that competition and cooperation behaviors among synapses are very important. Our new memristive devices allow us to implement a faithful model of these behaviors in a solid-state system,” said Wei Lu, U-M professor of electrical and computer engineering and senior author of the study in Nature Materials.

A December 17, 2018 University of Michigan news release (also on EurekAlert), which originated the news item, provides an explanation of memristors and their ‘similarity’ to synapses while providing more details about this latest research,

Memristors are electrical resistors with memory–advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. They could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning.

The memristor is a good model for a synapse. It mimics the way that the connections between neurons strengthen or weaken when signals pass through them. But the changes in conductance typically come from changes in the shape of the channels of conductive material within the memristor. These channels–and the memristor’s ability to conduct electricity–could not be precisely controlled in previous devices.

Now, the U-M team has made a memristor in which they have better command of the conducting pathways. They developed a new material out of the semiconductor molybdenum disulfide–a “two-dimensional” material that can be peeled into layers just a few atoms thick. Lu’s team injected lithium ions into the gaps between molybdenum disulfide layers.

They found that if there are enough lithium ions present, the molybdenum disulfide transforms its lattice structure, enabling electrons to run through the film easily as if it were a metal. But in areas with too few lithium ions, the molybdenum disulfide restores its original lattice structure and becomes a semiconductor, and electrical signals have a hard time getting through.

The lithium ions are easy to rearrange within the layer by sliding them with an electric field. This changes the size of the regions that conduct electricity little by little and thereby controls the device’s conductance.

“Because we change the ‘bulk’ properties of the film, the conductance change is much more gradual and much more controllable,” Lu said.
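As a very rough sketch of that idea (my own toy model, not the team’s actual device physics), an analog memristive synapse can be treated as a conductance that moves in small, repeatable increments as ions drift in or out:

```python
# Toy analog memristor (illustrative only -- not the actual MoS2 device model).
# `state` is the fraction of the film in the conductive metallic phase;
# each voltage pulse drifts lithium ions and nudges the state a little,
# so the conductance changes gradually instead of jumping abruptly.

class AnalogMemristor:
    def __init__(self, g_min=1e-6, g_max=1e-4, state=0.5):
        self.g_min, self.g_max = g_min, g_max
        self.state = state  # 0.0 = all semiconducting, 1.0 = all metallic

    def pulse(self, amplitude, rate=0.05):
        # positive pulses drift ions in (more metallic); negative drift them out
        self.state = min(1.0, max(0.0, self.state + rate * amplitude))

    @property
    def conductance(self):
        # conductance interpolates between the two phases' values
        return self.g_min + (self.g_max - self.g_min) * self.state

m = AnalogMemristor()
before = m.conductance
for _ in range(5):
    m.pulse(+1.0)  # five identical potentiating pulses
after = m.conductance  # five small, even steps up -- not one abrupt jump
```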

In addition to making the devices behave better, the layered structure enabled Lu’s team to link multiple memristors together through shared lithium ions–creating a kind of connection that is also found in brains. A single neuron’s dendrite, or its signal-receiving end, may have several synapses connecting it to the signaling arms of other neurons. Lu compares the availability of lithium ions to that of a protein that enables synapses to grow.

If the growth of one synapse releases these proteins, called plasticity-related proteins, other synapses nearby can also grow–this is cooperation. Neuroscientists have argued that cooperation between synapses helps to rapidly form vivid memories that last for decades and create associative memories, like a scent that reminds you of your grandmother’s house, for example. If the protein is scarce, one synapse will grow at the expense of the other–and this competition pares down our brains’ connections and keeps them from exploding with signals.

Lu’s team was able to show these phenomena directly using their memristor devices. In the competition scenario, lithium ions were drained away from one side of the device. The side with the lithium ions increased its conductance, emulating the growth, and the conductance of the device with little lithium was stunted.

In a cooperation scenario, they made a memristor network with four devices that can exchange lithium ions, and then siphoned some lithium ions from one device out to the others. In this case, not only could the lithium donor increase its conductance–the other three devices could too, although their signals weren’t as strong.
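The cooperation/competition behaviour can be caricatured as a shared-resource rule (my sketch; in the real devices the shared resource is lithium ions, not an abstract pool):

```python
# Toy shared-resource rule (my illustration, not the paper's model).
# Synapses "grow" only as far as a shared pool of resource allows --
# plasticity-related proteins in the brain, lithium ions in the devices.

def grow(weights, demands, pool):
    """Strengthen synapses given a shared resource pool."""
    if sum(demands) <= pool:
        # cooperation: the pool covers everyone, all synapses strengthen
        return [w + d for w, d in zip(weights, demands)]
    # competition: the hungriest synapse takes the pool, the rest are pared back
    winner = demands.index(max(demands))
    return [w + pool if i == winner else max(0.0, w - 0.1)
            for i, w in enumerate(weights)]

coop = grow([1.0, 1.0, 1.0], demands=[0.2, 0.2, 0.2], pool=1.0)
comp = grow([1.0, 1.0, 1.0], demands=[0.5, 0.2, 0.2], pool=0.3)
# coop: all three synapses strengthen; comp: only the first one does
```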

Lu’s team is currently building networks of memristors like these to explore their potential for neuromorphic computing, which mimics the circuitry of the brain.

Here’s a link to and a citation for the paper,

Ionic modulation and ionic coupling effects in MoS2 devices for neuromorphic computing by Xiaojian Zhu, Da Li, Xiaogan Liang, & Wei D. Lu. Nature Materials (2018) DOI: https://doi.org/10.1038/s41563-018-0248-5 Published 17 December 2018

This paper is behind a paywall.

The researchers have made images illustrating their work available,

A schematic of the molybdenum disulfide layers with lithium ions between them. On the right, the simplified inset shows how the molybdenum disulfide changes its atom arrangements in the presence and absence of the lithium atoms, between a metal (1T’ phase) and semiconductor (2H phase), respectively. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.

A diagram of a synapse receiving a signal from one of the connecting neurons. This signal activates the generation of plasticity-related proteins (PRPs), which help a synapse to grow. They can migrate to other synapses, which enables multiple synapses to grow at once. The new device is the first to mimic this process directly, without the need for software or complicated circuits. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.

An electron microscope image showing the rectangular gold (Au) electrodes representing signalling neurons and the rounded electrode representing the receiving neuron. The material of molybdenum disulfide layered with lithium connects the electrodes, enabling the simulation of cooperative growth among synapses. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.

That’s all folks.

Artificial synapse courtesy of nanowires

It looks like a popsicle to me,

Caption: Image captured by an electron microscope of a single nanowire memristor (highlighted in colour to distinguish it from other nanowires in the background image). Blue: silver electrode, orange: nanowire, yellow: platinum electrode. Blue bubbles are dispersed over the nanowire. They are made up of silver ions and form a bridge between the electrodes which increases the resistance. Credit: Forschungszentrum Jülich

Not a popsicle but a representation of a device (memristor) scientists claim mimics a biological nerve cell according to a December 5, 2018 news item on ScienceDaily,

Scientists from Jülich [Germany] together with colleagues from Aachen [Germany] and Turin [Italy] have produced a memristive element made from nanowires that functions in much the same way as a biological nerve cell. The component is able to both save and process information, as well as receive numerous signals in parallel. The resistive switching cell made from oxide crystal nanowires is thus proving to be the ideal candidate for use in building bioinspired “neuromorphic” processors, able to take over the diverse functions of biological synapses and neurons.

A Dec. 5, 2018 Forschungszentrum Jülich press release (also on EurekAlert), which originated the news item, provides more details,

Computers have learned a lot in recent years. Thanks to rapid progress in artificial intelligence they are now able to drive cars, translate texts, defeat world champions at chess, and much more besides. In doing so, one of the greatest challenges lies in the attempt to artificially reproduce the signal processing in the human brain. In neural networks, data are stored and processed to a high degree in parallel. Traditional computers on the other hand rapidly work through tasks in succession and clearly distinguish between the storing and processing of information. As a rule, neural networks can only be simulated in a very cumbersome and inefficient way using conventional hardware.

Systems with neuromorphic chips that imitate the way the human brain works offer significant advantages. Experts in the field describe this type of bioinspired computer as being able to work in a decentralised way, having at its disposal a multitude of processors, which, like neurons in the brain, are connected to each other by networks. If a processor breaks down, another can take over its function. What is more, just like in the brain, where practice leads to improved signal transfer, a bioinspired processor should have the capacity to learn.

“With today’s semiconductor technology, these functions are to some extent already achievable. These systems are however suitable for particular applications and require a lot of space and energy,” says Dr. Ilia Valov from Forschungszentrum Jülich. “Our nanowire devices made from zinc oxide crystals can inherently process and even store information, as well as being extremely small and energy efficient,” explains the researcher from Jülich’s Peter Grünberg Institute.

For years memristive cells have been ascribed the best chances of being capable of taking over the function of neurons and synapses in bioinspired computers. They alter their electrical resistance depending on the intensity and direction of the electric current flowing through them. In contrast to conventional transistors, their last resistance value remains intact even when the electric current is switched off. Memristors are thus fundamentally capable of learning.
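That history dependence is the whole trick, and it can be sketched in a few lines (a toy model of my own, not the Jülich nanowire physics):

```python
# Minimal memristor sketch (illustrative toy, not the actual nanowire device).
# Resistance depends on the history of current through the device, and the
# last value persists when the current stops -- the basis for "learning".

class Memristor:
    def __init__(self, r_on=100.0, r_off=10_000.0, state=0.0):
        self.r_on, self.r_off = r_on, r_off
        self.state = state  # 0.0 = high resistance, 1.0 = low resistance

    def apply_current(self, current, dt=1.0, mobility=0.01):
        # the internal state integrates the signed charge that has flowed
        self.state = min(1.0, max(0.0, self.state + mobility * current * dt))

    @property
    def resistance(self):
        return self.r_off + (self.r_on - self.r_off) * self.state

m = Memristor()
for _ in range(20):
    m.apply_current(+1.0)  # forward current gradually lowers the resistance
remembered = m.resistance  # "switch off" the current: the value persists
```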

In order to create these properties, scientists at Forschungszentrum Jülich and RWTH Aachen University used a single zinc oxide nanowire, produced by their colleagues from the polytechnic university in Turin. Measuring approximately one ten-thousandth of a millimeter in size, this type of nanowire is over a thousand times thinner than a human hair. The resulting memristive component not only takes up a tiny amount of space, but also is able to switch much faster than flash memory.

Nanowires offer promising novel physical properties compared to other solids and are used among other things in the development of new types of solar cells, sensors, batteries and computer chips. Their manufacture is comparatively simple. Nanowires result from the evaporation deposition of specified materials onto a suitable substrate, where they practically grow of their own accord.

In order to create a functioning cell, both ends of the nanowire must be attached to suitable metals, in this case platinum and silver. The metals function as electrodes, and in addition, release ions triggered by an appropriate electric current. The metal ions are able to spread over the surface of the wire and build a bridge to alter its conductivity.

Components made from single nanowires are, however, still too isolated to be of practical use in chips. Consequently, the next step being planned by the Jülich and Turin researchers is to produce and study a memristive element composed of a larger, relatively easy-to-generate group of several hundred nanowires offering more exciting functionalities.

The Italians have also written about the work in a December 4, 2018 news item for the Politecnico di Torino’s inhouse magazine, PoliFlash. I like the image they’ve used better as it offers a bit more detail and looks less like a popsicle. First, the image,

Courtesy: Politecnico di Torino

Now, the news item, which includes some historical information about the memristor (Note: There is some repetition and links have been removed),

Emulating and understanding the human brain is one of the most important challenges for modern technology: on the one hand, the ability to artificially reproduce the processing of brain signals is one of the cornerstones for the development of artificial intelligence, while on the other the understanding of the cognitive processes at the base of the human mind is still far away.

And the research published in the prestigious journal Nature Communications by Gianluca Milano and Carlo Ricciardi, PhD student and professor, respectively, of the Applied Science and Technology Department of the Politecnico di Torino, represents a step forward in these directions. In fact, the study entitled “Self-limited single nanowire systems combining all-in-one memristive and neuromorphic functionalities” shows how it is possible to artificially emulate the activity of synapses, i.e. the connections between neurons that regulate the learning processes in our brain, in a single “nanowire” with a diameter thousands of times smaller than that of a hair.

It is a crystalline nanowire that takes the “memristor”, the electronic device able to artificially reproduce the functions of biological synapses, to a higher-performing level. Thanks to the use of nanotechnologies, which allow the manipulation of matter at the atomic level, it was for the first time possible to combine into one single device the synaptic functions that were previously emulated through separate, specific devices. For this reason, the nanowire allows an extreme miniaturisation of the “memristor”, significantly reducing the complexity and energy consumption of the electronic circuits necessary for the implementation of learning algorithms.

Starting from the theorisation of the “memristor” in 1971 by Prof. Leon Chua – now visiting professor at the Politecnico di Torino, who was conferred an honorary degree by the University in 2015 – this new technology will not only allow smaller and higher-performing devices to be created for the implementation of increasingly “intelligent” computers, but is also a significant step forward for the emulation and understanding of the functioning of the brain.

“The nanowire memristor,” said Carlo Ricciardi, “represents a model system for the study of physical and electrochemical phenomena that govern biological synapses at the nanoscale. The work is the result of the collaboration between our research team and the RWTH University of Aachen in Germany, supported by INRiM, the National Institute of Metrological Research, and IIT, the Italian Institute of Technology.”

h/t to Nanowerk’s Dec. 10, 2018 news item for the Italian info.

Here’s a link to and a citation for the paper,

Self-limited single nanowire systems combining all-in-one memristive and neuromorphic functionalities by Gianluca Milano, Michael Luebben, Zheng Ma, Rafal Dunin-Borkowski, Luca Boarino, Candido F. Pirri, Rainer Waser, Carlo Ricciardi, & Ilia Valov. Nature Communications volume 9, Article number: 5151 (2018) DOI: https://doi.org/10.1038/s41467-018-07330-7 Published: 04 December 2018

This paper is open access.

Just use the search term “memristor” in the blog search engine if you’re curious about the multitudinous number of postings on the topic here.

Non-viral ocular gene therapy with gold nanoparticles and femtosecond lasers

I love the stylistic choice the writer made (pay special attention to the second paragraph) when producing this November 19, 2018 Polytechnique Montréal news release (also on EurekAlert),

A scientific breakthrough by Professor Michel Meunier of Polytechnique Montréal and his collaborators offers hope for people with glaucoma, retinitis or macular degeneration.

In January 2009, the life of engineer Michel Meunier, a professor at Polytechnique Montréal, changed dramatically. Like others, he had observed that the extremely short pulse of a femtosecond laser (0.000000000000001 second) could make nanometre-sized holes appear in silicon when it was covered by gold nanoparticles. But this researcher, recognized internationally for his skills in laser and nanotechnology, decided to go a step further with what was then just a laboratory curiosity. He wondered if it was possible to go from silicon to living matter, from inorganic to organic. Could the gold nanoparticles and the femtosecond laser, this “light scalpel,” reproduce the same phenomenon with living cells?

A very pretty image illustrating the work,

Caption: Gold nanoparticles, which act like “nanolenses,” concentrate the energy produced by the extremely short pulse of a femtosecond laser to create a nanoscale incision on the surface of the eye’s retina cells. This technology, which preserves cell integrity, can be used to effectively inject drugs or genes into specific areas of the eye, offering new hope to people with glaucoma, retinitis or macular degeneration. Credit and Copyright: Polytechnique Montréal

The news release goes on to describe the technology in more detail,

Professor Meunier started working on cells in vitro in his Polytechnique laboratory. The challenge was to make a nanometric incision in the cells’ extracellular membrane without damaging it. Using gold nanoparticles that acted as “nanolenses,” Professor Meunier realized that it was possible to concentrate the light energy coming from the laser at a wavelength of 800 nanometres. Since there is very little energy absorption by the cells at this wavelength, their integrity is preserved. Mission accomplished!

Based on this finding, Professor Meunier decided to work on cells in vivo, cells that are part of a complex living cell structure, such as the eye for example.

The eye and the light scalpel

In April 2012, Professor Meunier met Przemyslaw Sapieha, an internationally renowned eye specialist, particularly recognized for his work on the retina. “Mike”, as he goes by, is a professor in the Department of Ophthalmology at Université de Montréal and a researcher at Centre intégré universitaire de santé et de services sociaux (CIUSSS) de l’Est-de-l’Île-de-Montréal. He immediately saw the potential of this new technology and everything that could be done in the eye if you could block the ripple effect that occurs following a trigger that leads to glaucoma or macular degeneration, for example, by injecting drugs, proteins or even genes.

Using a femtosecond laser to treat the eye–a highly specialized and fragile organ–is very complex, however. The eye is part of the central nervous system, and therefore many of the cells or families of cells that compose it are neurons. And when a neuron dies, it does not regenerate like other cells do. Mike Sapieha’s first task was therefore to ensure that a femtosecond laser could be used on one or several neurons without affecting them. This is what is referred to as “proof of concept.”

Proof of concept

Mike and Michel called on biochemistry researcher Ariel Wilson, an expert in eye structures and vision mechanisms, as well as Professor Santiago Costantino and his team from the Department of Ophthalmology at Université de Montréal and the CIUSSS de l’Est-de-l’Île-de-Montréal for their expertise in biophotonics. The team first decided to work on healthy cells, because they are better understood than sick cells. They injected gold nanoparticles combined with antibodies to target specific neuronal cells in the eye, and then waited for the nanoparticles to settle around the various neurons or families of neurons, such as the retina. Following the bright flash generated by the femtosecond laser, the expected phenomenon occurred: small holes appeared in the cells of the eye’s retina, making it possible to effectively inject drugs or genes in specific areas of the eye. It was another victory for Michel Meunier and his collaborators, with these conclusive results now opening the path to new treatments.

The key feature of the technology developed by the researchers from Polytechnique and CIUSSS de l’Est-de-l’Île-de-Montréal is its extreme precision. With the use of functionalized gold nanoparticles, the light scalpel makes it possible to precisely locate the family of cells where the doctor will have to intervene.

Having successfully demonstrated proof of concept, Professor Meunier and his team filed a patent application in the United States. This tremendous work was also the subject of a paper reviewed by an impressive reading committee and published in the renowned journal Nano Letters in October 2018.

While there is still a lot of research to be done–at least 10 years’ worth, first on animals and then on humans–this technology could make all the difference in an aging population suffering from eye deterioration for which there are still no effective long-term treatments. It also has the advantage of avoiding the use of viruses commonly employed in gene therapy. These researchers are looking at applications of this technology in all eye diseases, but more particularly in glaucoma, retinitis and macular degeneration.

This light scalpel is unprecedented.

Here’s a link to and a citation for the paper,

In Vivo Laser-Mediated Retinal Ganglion Cell Optoporation Using KV1.1 Conjugated Gold Nanoparticles by Ariel M. Wilson, Javier Mazzaferri, Éric Bergeron, Sergiy Patskovsky, Paule Marcoux-Valiquette, Santiago Costantino, Przemyslaw Sapieha, Michel Meunier. Nano Lett. 2018, 18 (11), 6981–6988 DOI: https://doi.org/10.1021/acs.nanolett.8b02896 Publication Date: October 4, 2018 Copyright © 2018 American Chemical Society

This paper is behind a paywall.

Artificial synapse based on tantalum oxide from Korean researchers

This memristor story comes from South Korea as we progress on the way to neuromorphic computing (brainlike computing). A Sept. 7, 2018 news item on ScienceDaily makes the announcement,

A research team led by Director Myoung-Jae Lee from the Intelligent Devices and Systems Research Group at DGIST (Daegu Gyeongbuk Institute of Science and Technology) has succeeded in developing an artificial synaptic device that mimics the function of the nerve cells (neurons) and synapses that are responsible for memory in human brains.

Synapses are where axons and dendrites meet so that neurons in the human brain can send and receive nerve signals; there are known to be hundreds of trillions of synapses in the human brain.

This chemical synaptic transmission system, which relays information between neurons in the brain, can handle high-level parallel arithmetic with very little energy, so research on artificial synaptic devices that mimic the biological function of a synapse is under way worldwide.

Dr. Lee’s research team, through joint research with teams led by Professor Gyeong-Su Park from Seoul National University; Professor Sung Kyu Park from Chung-Ang University; and Professor Hyunsang Hwang from Pohang University of Science and Technology (POSTECH), developed a high-reliability artificial synaptic device with multiple conductance values by structuring tantalum oxide (a transition metal oxide) into two layers of Ta2O5-x and TaO2-x and by controlling its surface.

A September 7, 2018 DGIST press release (also on EurekAlert), which originated the news item, delves further into the work,

The artificial synaptic device developed by the research team is an electrical synaptic device that simulates the function of synapses in the brain as the resistance of the tantalum oxide layer gradually increases or decreases depending on the strength of the electric signals. It has succeeded in overcoming durability limitations of current devices by allowing current control only on one layer of Ta2O5-x.

In addition, the research team experimentally realized synaptic plasticity, the process by which memories are created, stored, and deleted, including long-term potentiation (strengthening) and long-term depression (weakening) of memory, by adjusting the strength of the synaptic connection between neurons.

The non-volatile multiple-value data storage method applied by the research team has the technological advantages of a small device footprint, reduced circuit-connection complexity, and power consumption more than a thousand times lower than that of data storage methods based on the digital signals 0 and 1, such as volatile CMOS (Complementary Metal Oxide Semiconductor).
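To make the idea of multi-value, plasticity-capable storage concrete, here is a toy Python sketch of a synapse with a small number of discrete conductance levels that can be nudged up (potentiation) or down (depression). The number of levels and the update rule are my own illustrative assumptions, not the Ta2O5-x/TaO2-x device physics from the paper.

```python
# Toy model of a multi-level memristive synapse (illustrative only;
# real device conductance changes are analog, noisy, and non-linear).
class MemristiveSynapse:
    def __init__(self, levels=16):
        self.levels = levels          # discrete conductance states
        self.state = levels // 2      # start mid-range

    def potentiate(self):
        """Long-term potentiation: nudge conductance up one level."""
        self.state = min(self.levels - 1, self.state + 1)

    def depress(self):
        """Long-term depression: nudge conductance down one level."""
        self.state = max(0, self.state - 1)

    @property
    def conductance(self):
        # map the discrete state to a normalized conductance in [0, 1]
        return self.state / (self.levels - 1)

syn = MemristiveSynapse()
for _ in range(5):
    syn.potentiate()              # repeated stimulation strengthens
print(round(syn.conductance, 2))  # → 0.87
```

Storing one of 16 levels in a single device, rather than one bit, is what shrinks the area and circuit complexity relative to binary CMOS storage.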

The high-reliability artificial synaptic device developed by the research team can be used in ultra-low-power devices or circuits for processing massive amounts of big data due to its capability of low-power parallel arithmetic. It is expected to be applied to next-generation intelligent semiconductor device technologies such as development of artificial intelligence (AI) including machine learning and deep learning and brain-mimicking semiconductors.

Dr. Lee said, “This research secured the reliability of existing artificial synaptic devices and improved the areas pointed out as disadvantages. We expect to contribute to the development of AI based on the neuromorphic system that mimics the human brain by creating a circuit that imitates the function of neurons.”

Here’s a link to and a citation for the paper,

Reliable Multivalued Conductance States in TaOx Memristors through Oxygen Plasma-Assisted Electrode Deposition with in Situ-Biased Conductance State Transmission Electron Microscopy Analysis by Myoung-Jae Lee, Gyeong-Su Park, David H. Seo, Sung Min Kwon, Hyeon-Jun Lee, June-Seo Kim, MinKyung Jung, Chun-Yeol You, Hyangsook Lee, Hee-Goo Kim, Su-Been Pang, Sunae Seo, Hyunsang Hwang, and Sung Kyu Park. ACS Appl. Mater. Interfaces, 2018, 10 (35), pp 29757–29765 DOI: 10.1021/acsami.8b09046 Publication Date (Web): July 23, 2018

Copyright © 2018 American Chemical Society

This paper is open access.

You can find other memristor and neuromorphic computing stories here by using those search terms. My latest (more or less) is an April 19, 2018 posting titled, New path to viable memristor/neuristor?

Finally, here’s an image from the Korean researchers that accompanied their work,

Caption: Representation of neurons and synapses in the human brain. The magnified synapse represents the portion mimicked using solid-state devices. Credit: Daegu Gyeongbuk Institute of Science and Technology (DGIST)

If only AI had a brain (a Wizard of Oz reference?)

The title, which I’ve borrowed from the news release, is the only Wizard of Oz reference that I can find but it works so well, you don’t really need anything more.

Moving onto the news, a July 23, 2018 news item on phys.org announces new work on developing an artificial synapse (Note: A link has been removed),

Digital computation has rendered nearly all forms of analog computation obsolete since as far back as the 1950s. However, there is one major exception that rivals the computational power of the most advanced digital devices: the human brain.

The human brain is a dense network of neurons. Each neuron is connected to tens of thousands of others, and they use synapses to fire information back and forth constantly. With each exchange, the brain modulates these connections to create efficient pathways in direct response to the surrounding environment. Digital computers live in a world of ones and zeros. They perform tasks sequentially, following each step of their algorithms in a fixed order.

A team of researchers from Pitt’s [University of Pittsburgh] Swanson School of Engineering has developed an “artificial synapse” that does not process information like a digital computer but rather mimics the analog way the human brain completes tasks. Led by Feng Xiong, assistant professor of electrical and computer engineering, the researchers published their results in the recent issue of the journal Advanced Materials (DOI: 10.1002/adma.201802353). His Pitt co-authors include Mohammad Sharbati (first author), Yanhao Du, Jorge Torres, Nolan Ardolino, and Minhee Yun.

A July 23, 2018 University of Pittsburgh Swanson School of Engineering news release (also on EurekAlert), which originated the news item, provides further information,

“The analog nature and massive parallelism of the brain are partly why humans can outperform even the most powerful computers when it comes to higher order cognitive functions such as voice recognition or pattern recognition in complex and varied data sets,” explains Dr. Xiong.

An emerging field called “neuromorphic computing” focuses on the design of computational hardware inspired by the human brain. Dr. Xiong and his team built graphene-based artificial synapses in a two-dimensional honeycomb configuration of carbon atoms. Graphene’s conductive properties allowed the researchers to finely tune its electrical conductance, which is the strength of the synaptic connection or the synaptic weight. The graphene synapse demonstrated excellent energy efficiency, just like biological synapses.

In the recent resurgence of artificial intelligence, computers can already replicate the brain in certain ways, but it takes about a dozen digital devices to mimic one analog synapse. The human brain has hundreds of trillions of synapses for transmitting information, so building a brain with digital devices is seemingly impossible, or at the very least, not scalable. Xiong Lab’s approach provides a possible route for the hardware implementation of large-scale artificial neural networks.
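One way to see why a single analog device can stand in for a dozen digital ones: the synaptic weight simply is the device's conductance, and a neuron's weighted sum of inputs falls out of Ohm's law and Kirchhoff's current law on a shared output wire. The following sketch uses hypothetical voltages and conductances of my own choosing, not values from the Pitt work.

```python
# Sketch of an analog weighted sum: each synapse is one tunable
# conductance, and the output wire's current is the dot product
# of input voltages and conductances (hypothetical values).
inputs = [0.2, 0.9, 0.0, 0.5]          # input voltages
conductances = [0.1, 0.8, 0.3, 0.6]    # one analog synapse per input

# current on the output wire = sum of V * G (Kirchhoff's current law)
output_current = sum(v * g for v, g in zip(inputs, conductances))
print(round(output_current, 2))        # → 1.04
```

A digital implementation would need multi-bit multipliers and adders to compute the same sum; here the physics of the device array does it in place.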

According to Dr. Xiong, artificial neural networks based on the current CMOS (complementary metal-oxide semiconductor) technology will always have limited functionality in terms of energy efficiency, scalability, and packing density. “It is really important we develop new device concepts for synaptic electronics that are analog in nature, energy-efficient, scalable, and suitable for large-scale integrations,” he says. “Our graphene synapse seems to check all the boxes on these requirements so far.”

With graphene’s inherent flexibility and excellent mechanical properties, these graphene-based neural networks can be employed in flexible and wearable electronics to enable computation at the “edge of the internet”–places where computing devices such as sensors make contact with the physical world.

“By empowering even a rudimentary level of intelligence in wearable electronics and sensors, we can track our health with smart sensors, provide preventive care and timely diagnostics, monitor plant growth and identify possible pest issues, and regulate and optimize the manufacturing process–significantly improving the overall productivity and quality of life in our society,” Dr. Xiong says.

The development of an artificial brain that functions like the analog human brain still requires a number of breakthroughs. Researchers need to find the right configurations to optimize these new artificial synapses. They will need to make them compatible with an array of other devices to form neural networks, and they will need to ensure that all of the artificial synapses in a large-scale neural network behave in the same exact manner. Despite the challenges, Dr. Xiong says he’s optimistic about the direction they’re headed.

“We are pretty excited about this progress since it can potentially lead to the energy-efficient, hardware implementation of neuromorphic computing, which is currently carried out in power-intensive GPU clusters. The low-power trait of our artificial synapse and its flexible nature make it a suitable candidate for any kind of A.I. device, which would revolutionize our lives, perhaps even more than the digital revolution we’ve seen over the past few decades,” Dr. Xiong says.

There is a visual representation of this artificial synapse,

Caption: Pitt engineers built a graphene-based artificial synapse in a two-dimensional, honeycomb configuration of carbon atoms that demonstrated excellent energy efficiency comparable to biological synapses Credit: Swanson School of Engineering

Here’s a link to and a citation for the paper,

Low‐Power, Electrochemically Tunable Graphene Synapses for Neuromorphic Computing by Mohammad Taghi Sharbati, Yanhao Du, Jorge Torres, Nolan D. Ardolino, Minhee Yun, Feng Xiong. Advanced Materials DOI: https://doi.org/10.1002/adma.201802353 First published [online]: 23 July 2018

This paper is behind a paywall.

I did look at the paper and if I understand it rightly, this approach is different from the memristor-based approaches that I have so often featured here. More than that I cannot say.

Finally, the Wizard of Oz song ‘If I Only Had a Brain’,

A solar, self-charging supercapacitor for wearable technology

Ravinder Dahiya, Carlos García Núñez, and their colleagues at the University of Glasgow (Scotland) strike again (see my May 10, 2017 posting for their first ‘solar-powered graphene skin’ research announcement). Last time it was all about robots and prosthetics, this time they’ve focused on wearable technology according to a July 18, 2018 news item on phys.org,

A new form of solar-powered supercapacitor could help make future wearable technologies lighter and more energy-efficient, scientists say.

In a paper published in the journal Nano Energy, researchers from the University of Glasgow’s Bendable Electronics and Sensing Technologies (BEST) group describe how they have developed a promising new type of graphene supercapacitor, which could be used in the next generation of wearable health sensors.

A July 18, 2018 University of Glasgow press release, which originated the news item, explains further,

Currently, wearable systems generally rely on relatively heavy, inflexible batteries, which can be uncomfortable for long-term users. The BEST team, led by Professor Ravinder Dahiya, has built on its previous success in developing flexible sensors by developing a supercapacitor which could power health sensors capable of conforming to wearers’ bodies, offering more comfort and more consistent contact with the skin to better collect health data.

Their new supercapacitor uses layers of flexible, three-dimensional porous foam formed from graphene and silver to produce a device capable of storing and releasing around three times more power than any similar flexible supercapacitor. The team demonstrated the durability of the supercapacitor, showing that it provided power consistently across 25,000 charging and discharging cycles.

They have also found a way to charge the system by integrating it with the flexible solar-powered skin already developed by the BEST group, effectively creating an entirely self-charging system, as well as a pH sensor which uses the wearer’s sweat to monitor their health.

Professor Dahiya said: “We’re very pleased by the progress this new form of solar-powered supercapacitor represents. A flexible, wearable health monitoring system which only requires exposure to sunlight to charge has a lot of obvious commercial appeal, but the underlying technology has a great deal of additional potential.

“This research could take the wearable systems for health monitoring to remote parts of the world where solar power is often the most reliable source of energy, and it could also increase the efficiency of hybrid electric vehicles. We’re already looking at further integrating the technology into flexible synthetic skin which we’re developing for use in advanced prosthetics.” [emphasis mine]

In addition to the team’s work on robots, prosthetics, and graphene ‘skin’ mentioned in the May 10, 2017 posting the team is working on a synthetic ‘brainy’ skin for which they have just received £1.5m funding from the Engineering and Physical Science Research Council (EPSRC).

Brainy skin

A July 3, 2018 University of Glasgow press release discusses the proposed work in more detail,

A robotic hand covered in ‘brainy skin’ that mimics the human sense of touch is being developed by scientists.

University of Glasgow’s Professor Ravinder Dahiya has plans to develop ultra-flexible, synthetic Brainy Skin that ‘thinks for itself’.

The super-flexible, hypersensitive skin may one day be used to make more responsive prosthetics for amputees, or to build robots with a sense of touch.

Brainy Skin reacts like human skin, which has its own neurons that respond immediately to touch rather than having to relay the whole message to the brain.

This electronic ‘thinking skin’ is made from silicon-based printed neural transistors and graphene – an ultra-thin form of carbon that is only an atom thick, but stronger than steel.

The new version is more powerful, less cumbersome and would work better than earlier prototypes, also developed by Professor Dahiya and his Bendable Electronics and Sensing Technologies (BEST) team at the University’s School of Engineering.

His futuristic research, called neuPRINTSKIN (Neuromorphic Printed Tactile Skin), has just received another £1.5m funding from the Engineering and Physical Science Research Council (EPSRC).

Professor Dahiya said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors that carry signals from the skin to the brain.

“Inspired by real skin, this project will harness the technological advances in electronic engineering to mimic some features of human skin, such as softness, bendability and now, also sense of touch. This skin will not just mimic the morphology of the skin but also its functionality.

“Brainy Skin is critical for the autonomy of robots and for a safe human-robot interaction to meet emerging societal needs such as helping the elderly.”

Synthetic ‘Brainy Skin’ with sense of touch gets £1.5m funding. Photo of Professor Ravinder Dahiya

This latest advance means tactile data is gathered over large areas by the synthetic skin’s computing system rather than sent to the brain for interpretation.

With additional EPSRC funding, which extends Professor Dahiya’s fellowship by another three years, he plans to introduce tactile skin with neuron-like processing. This breakthrough in the tactile sensing research will lead to the first neuromorphic tactile skin, or ‘brainy skin.’

To achieve this, Professor Dahiya will add a new neural layer to the e-skin that he has already developed using printing silicon nanowires.

Professor Dahiya added: “By adding a neural layer underneath the current tactile skin, neuPRINTSKIN will add significant new perspective to the e-skin research, and trigger transformations in several areas such as robotics, prosthetics, artificial intelligence, wearable systems, next-generation computing, and flexible and printed electronics.”

The Engineering and Physical Sciences Research Council (EPSRC) is part of UK Research and Innovation, a non-departmental public body funded by a grant-in-aid from the UK government.

EPSRC is the main funding body for engineering and physical sciences research in the UK. By investing in research and postgraduate training, the EPSRC is building the knowledge and skills base needed to address the scientific and technological challenges facing the nation.

Its portfolio covers a vast range of fields from healthcare technologies to structural engineering, manufacturing to mathematics, advanced materials to chemistry. The research funded by EPSRC has impact across all sectors. It provides a platform for future UK prosperity by contributing to a healthy, connected, resilient, productive nation.

It’s fascinating to note how these pieces of research fit together for wearable technology and health monitoring and creating more responsive robot ‘skin’ and, possibly, prosthetic devices that would allow someone to feel again.

The latest research paper

Getting back to the solar-charging supercapacitors mentioned in the opening, here’s a link to and a citation for the team’s latest research paper,

Flexible self-charging supercapacitor based on graphene-Ag-3D graphene foam electrodes by Libu Manjakkal, Carlos García Núñez, Wenting Dang, Ravinder Dahiya. Nano Energy Volume 51, September 2018, Pages 604-612 DOI: https://doi.org/10.1016/j.nanoen.2018.06.072

This paper is open access.

Brainy and brainy: a novel synaptic architecture and a neuromorphic computing platform called SpiNNaker

I have two items about brainlike computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns, just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

A July 10, 2018 NJIT news release (also on EurekAlert) by Tracey Regan, which originated the news item, adds more details,

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only chose one of them to be updated at each step based on the neuronal activity.”
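Nandakumar's description lends itself to a short sketch: several imperfect devices jointly represent one synaptic weight, and each weight update is applied to exactly one of them. The round-robin counter, device count, and noise model below are my own illustrative assumptions, not the paper's arbitration scheme.

```python
import random

# Sketch of a multi-memristive synapse: N noisy devices share one
# weight; only one device (picked by a counter) is updated per step,
# which averages out per-device nondeterminism (assumed parameters).
class MultiMemristiveSynapse:
    def __init__(self, n_devices=4):
        self.g = [0.5] * n_devices   # per-device conductances
        self.counter = 0             # round-robin arbitration

    @property
    def weight(self):
        return sum(self.g) / len(self.g)   # devices contribute jointly

    def update(self, delta):
        i = self.counter % len(self.g)     # pick exactly one device
        noise = random.gauss(0, 0.01)      # model device variability
        self.g[i] = min(1.0, max(0.0, self.g[i] + delta + noise))
        self.counter += 1

random.seed(0)
syn = MultiMemristiveSynapse()
for _ in range(8):
    syn.update(+0.1)   # repeated potentiation spread across devices
print(0.5 < syn.weight <= 1.0)   # weight has strengthened → True
```

Because each noisy update lands on a different device, the effective weight (the sum of conductances) changes more gradually and more reliably than any single device could manage on its own.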

Here’s a link to and a citation for the paper,

Neuromorphic computing with multi-memristive synapses by Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian, & Evangelos Eleftheriou. Nature Communications volume 9, Article number: 2514 (2018) DOI: https://doi.org/10.1038/s41467-018-04933-y Published 28 June 2018

This is an open access paper.

Also they’ve got a couple of very nice introductory paragraphs which I’m including here, (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas, such as computer vision, speech recognition, and complex strategic games [1]. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms [2,3,4,5]. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history [6,7,8,9]. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …

It gets more complicated from there.

Now onto the next bit.

SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer, named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

A July 11, 2018 Frontiers Publishing news release on EurekAlert, which originated the news item, expands on the latest work,

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other and on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time, are currently out of reach,” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with those of NEST, a specialist supercomputer software package currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”
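Simulators like NEST and the software running on SpiNNaker both advance model neurons in discrete time steps. As a rough illustration of the kind of unit being simulated (by the hundreds of thousands), here is a minimal leaky integrate-and-fire neuron in Python; the parameters are my own toy values, not those of the cortical microcircuit model in the study.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, stepped in discrete
# time the way spiking-network simulators do (illustrative parameters).
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0,
                 leak=0.1, dt=1.0):
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-leak * (v - v_rest) + i_in)  # leaky integration
        if v >= v_thresh:                        # threshold crossing
            spikes.append(t)                     # emit a spike...
            v = v_rest                           # ...and reset
    return spikes

spike_times = simulate_lif([0.3] * 10)  # constant drive for 10 steps
print(spike_times)                      # → [3, 7]
```

Comparing platforms then amounts to checking that the spike trains produced by networks of such neurons agree statistically, which is essentially what the SpiNNaker-versus-NEST study did at full cortical-microcircuit scale.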

Before getting to the link and citation for the paper, here’s a description of SpiNNaker’s hardware from the ‘Spiking neural network’ Wikipedia entry, Note: Links have been removed,

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]

Now for the link and citation,

Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model by Sacha J. van Albada, Andrew G. Rowley, Johanna Senk, Michael Hopkins, Maximilian Schmidt, Alan B. Stokes, David R. Lester, Markus Diesmann, and Steve B. Furber. Front. Neurosci. 12:291. DOI: 10.3389/fnins.2018.00291 Published: 23 May 2018

As noted earlier, this is an open access paper.

Neurons and graphene carpets

I don’t entirely grasp the carpet analogy. Actually, I have no idea why they used a carpet analogy, but here’s the June 12, 2018 ScienceDaily news item about the research,

A work led by SISSA [Scuola Internazionale Superiore di Studi Avanzati] and published in Nature Nanotechnology reports for the first time experimentally the phenomenon of ion ‘trapping’ by graphene carpets and its effect on the communication between neurons. The researchers have observed an increase in the activity of nerve cells grown on a single layer of graphene. Combining theoretical and experimental approaches they have shown that the phenomenon is due to the ability of the material to ‘trap’ several ions present in the surrounding environment on its surface, modulating its composition. Graphene is the thinnest bi-dimensional material available today, characterised by incredible properties of conductivity, flexibility and transparency. Although there are great expectations for its applications in the biomedical field, only very few works have analysed its interactions with neuronal tissue.

A June 12, 2018 SISSA press release (also on EurekAlert), which originated the news item, provides more detail,

A study conducted by SISSA – Scuola Internazionale Superiore di Studi Avanzati, in association with the University of Antwerp (Belgium), the University of Trieste and the Institute of Science and Technology of Barcelona (Spain), has analysed the behaviour of neurons grown on a single layer of graphene, observing a strengthening in their activity. Through theoretical and experimental approaches the researchers have shown that such behaviour is due to reduced ion mobility, in particular of potassium, at the neuron-graphene interface. This phenomenon is commonly called ‘ion trapping’, already known at theoretical level, but observed experimentally for the first time only now. “It is as if graphene behaves as an ultra-thin magnet on whose surface some of the potassium ions present in the extracellular solution between the cells and the graphene remain trapped. It is this small variation that determines the increase in neuronal excitability” comments Denis Scaini, researcher at SISSA who has led the research alongside Laura Ballerini.

The study has also shown that this strengthening occurs when the graphene itself is supported by an insulator, like glass, or suspended in solution, while it disappears when lying on a conductor. “Graphene is a highly conductive material which could potentially be used to coat any surface. Understanding how its behaviour varies according to the substratum on which it is laid is essential for its future applications, above all in the neurological field” continues Scaini, “considering the unique properties of graphene it is natural to think for example about the development of innovative electrodes of cerebral stimulation or visual devices”.

It is a study with a double outcome. Laura Ballerini comments as follows: “This ‘ion trap’ effect was described only in theory. Studying the impact of the ‘technology of materials’ on biological systems, we have documented a mechanism to regulate membrane excitability, but at the same time we have also experimentally described a property of the material through the biology of neurons.”
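As an aside, the reason a small shift in potassium at the cell/graphene interface matters can be sketched with the standard Nernst equation, which relates a membrane’s potassium equilibrium potential to the ratio of extracellular to intracellular K+ concentration. This is only an illustrative back-of-the-envelope calculation with textbook concentration values; the numbers and the direction of the shift are not taken from the paper itself:

```python
import math

def nernst_potential_mV(c_out_mM, c_in_mM, temp_K=310.0, z=1):
    """Nernst equilibrium potential, in millivolts.

    E = (R*T / z*F) * ln([ion]_out / [ion]_in)
    """
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * (R * temp_K) / (z * F) * math.log(c_out_mM / c_in_mM)

# Illustrative textbook concentrations (mM), not values from the study:
k_in = 140.0                 # typical intracellular K+
for k_out in (3.0, 1.5):     # baseline vs. hypothetically depleted cleft
    e_k = nernst_potential_mV(k_out, k_in)
    print(f"[K+]out = {k_out:>4} mM -> E_K = {e_k:.1f} mV")
```

Halving the local extracellular potassium in this toy example shifts the potassium equilibrium potential by roughly 20 mV, which gives a feel for why even a thin layer of trapped ions at the interface can modulate membrane excitability.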

Dexter Johnson in a June 13, 2018 posting, on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), provides more context for the work (Note: Links have been removed),

While graphene has been tapped to deliver on everything from electronics to optoelectronics, it’s a bit harder to picture how it may offer a key tool for addressing neurological damage and disorders. But that’s exactly what researchers have been looking at lately because of the wonder material’s conductivity and transparency.

In the most recent development, a team from Europe has offered a deeper understanding of how graphene can be combined with neurological tissue and, in so doing, may have not only given us an additional tool for neurological medicine, but also provided a tool for gaining insights into other biological processes.

“The results demonstrate that, depending on how the interface with [single-layer graphene] is engineered, the material may tune neuronal activities by altering the ion mobility, in particular potassium, at the cell/substrate interface,” said Laura Ballerini, a researcher in neurons and nanomaterials at SISSA.

Ballerini provided some context for this most recent development by explaining that graphene-based nanomaterials have come to represent potential tools in neurology and neurosurgery.

“These materials are increasingly engineered as components of a variety of applications such as biosensors, interfaces, or drug-delivery platforms,” said Ballerini. “In particular, in neural electrode or interfaces, a precise requirement is the stable device/neuronal electrical coupling, which requires governing the interactions between the electrode surface and the cell membrane.”

This neuro-electrode hybrid is at the core of numerous studies, she explained, and graphene, thanks to its electrical properties, transparency, and flexibility represents an ideal material candidate.

In all of this work, the real challenge has been to investigate the ability of a single atomic layer to tune neuronal excitability and to demonstrate unequivocally that graphene selectively modifies membrane-associated neuronal functions.

I encourage you to read Dexter’s posting as it clarifies the work described in the SISSA press release for those of us (me) who may fail to grasp the implications.

Here’s a link to and a citation for the paper,

Single-layer graphene modulates neuronal communication and augments membrane ion currents by Niccolò Paolo Pampaloni, Martin Lottner, Michele Giugliano, Alessia Matruglio, Francesco D’Amico, Maurizio Prato, José Antonio Garrido, Laura Ballerini, & Denis Scaini. Nature Nanotechnology (2018) DOI: https://doi.org/10.1038/s41565-018-0163-6 Published online June 13, 2018

This paper is behind a paywall.

All this brings to mind a prediction made about the Graphene Flagship and the Human Brain Project shortly after the European Commission announced in January 2013 that each project had won funding of 1B Euros to be paid out over a period of 10 years. The prediction was that scientists would work on graphene/human brain research.