Neurons and graphene carpets

I don’t entirely grasp the carpet analogy. Actually, I have no idea why they used a carpet analogy, but here’s the June 12, 2018 ScienceDaily news item about the research,

A work led by SISSA [Scuola Internazionale Superiore di Studi Avanzati] and published in Nature Nanotechnology reports, for the first time experimentally, the phenomenon of ion ‘trapping’ by graphene carpets and its effect on the communication between neurons. The researchers have observed an increase in the activity of nerve cells grown on a single layer of graphene. Combining theoretical and experimental approaches they have shown that the phenomenon is due to the ability of the material to ‘trap’ several ions present in the surrounding environment on its surface, modulating its composition. Graphene is the thinnest two-dimensional material available today, characterised by incredible properties of conductivity, flexibility and transparency. Although there are great expectations for its applications in the biomedical field, only very few works have analysed its interactions with neuronal tissue.

A June 12, 2018 SISSA press release (also on EurekAlert), which originated the news item, provides more detail,

A study conducted by SISSA – Scuola Internazionale Superiore di Studi Avanzati, in association with the University of Antwerp (Belgium), the University of Trieste and the Institute of Science and Technology of Barcelona (Spain), has analysed the behaviour of neurons grown on a single layer of graphene, observing a strengthening in their activity. Through theoretical and experimental approaches the researchers have shown that such behaviour is due to reduced ion mobility, in particular of potassium, at the neuron-graphene interface. This phenomenon is commonly called ‘ion trapping’, already known at theoretical level, but observed experimentally for the first time only now. “It is as if graphene behaves as an ultra-thin magnet on whose surface some of the potassium ions present in the extracellular solution between the cells and the graphene remain trapped. It is this small variation that determines the increase in neuronal excitability” comments Denis Scaini, researcher at SISSA who has led the research alongside Laura Ballerini.

The study has also shown that this strengthening occurs when the graphene itself is supported by an insulator, like glass, or suspended in solution, while it disappears when lying on a conductor. “Graphene is a highly conductive material which could potentially be used to coat any surface. Understanding how its behaviour varies according to the substratum on which it is laid is essential for its future applications, above all in the neurological field” continues Scaini, “considering the unique properties of graphene it is natural to think for example about the development of innovative electrodes of cerebral stimulation or visual devices”.

It is a study with a double outcome. Laura Ballerini comments as follows: “This ‘ion trap’ effect was described only in theory. Studying the impact of the ‘technology of materials’ on biological systems, we have documented a mechanism to regulate membrane excitability, but at the same time we have also experimentally described a property of the material through the biology of neurons.”

Dexter Johnson in a June 13, 2018 posting, on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), provides more context for the work (Note: Links have been removed),

While graphene has been tapped to deliver on everything from electronics to optoelectronics, it’s a bit harder to picture how it may offer a key tool for addressing neurological damage and disorders. But that’s exactly what researchers have been looking at lately because of the wonder material’s conductivity and transparency.

In the most recent development, a team from Europe has offered a deeper understanding of how graphene can be combined with neurological tissue and, in so doing, may have not only given us an additional tool for neurological medicine, but also provided a tool for gaining insights into other biological processes.

“The results demonstrate that, depending on how the interface with [single-layer graphene] is engineered, the material may tune neuronal activities by altering the ion mobility, in particular potassium, at the cell/substrate interface,” said Laura Ballerini, a researcher in neurons and nanomaterials at SISSA.

Ballerini provided some context for this most recent development by explaining that graphene-based nanomaterials have come to represent potential tools in neurology and neurosurgery.

“These materials are increasingly engineered as components of a variety of applications such as biosensors, interfaces, or drug-delivery platforms,” said Ballerini. “In particular, in neural electrode or interfaces, a precise requirement is the stable device/neuronal electrical coupling, which requires governing the interactions between the electrode surface and the cell membrane.”

This neuro-electrode hybrid is at the core of numerous studies, she explained, and graphene, thanks to its electrical properties, transparency, and flexibility represents an ideal material candidate.

In all of this work, the real challenge has been to investigate the ability of a single atomic layer to tune neuronal excitability and to demonstrate unequivocally that graphene selectively modifies membrane-associated neuronal functions.

I encourage you to read Dexter’s posting as it clarifies the work described in the SISSA press release for those of us (me) who may fail to grasp the implications.

Here’s a link to and a citation for the paper,

Single-layer graphene modulates neuronal communication and augments membrane ion currents by Niccolò Paolo Pampaloni, Martin Lottner, Michele Giugliano, Alessia Matruglio, Francesco D’Amico, Maurizio Prato, José Antonio Garrido, Laura Ballerini, & Denis Scaini. Nature Nanotechnology (2018) DOI: https://doi.org/10.1038/s41565-018-0163-6 Published online June 13, 2018

This paper is behind a paywall.

All this brings to mind a prediction made about the Graphene Flagship and the Human Brain Project shortly after the European Commission announced in January 2013 that each project had won funding of 1B Euros to be paid out over a period of 10 years. The prediction was that scientists would work on graphene/human brain research.

Transparent graphene electrode technology and complex brain imaging

Michael Berger has written a May 24, 2018 Nanowerk Spotlight article about some of the latest research on transparent graphene electrode technology and the brain (Note: A link has been removed),

In new work, scientists from the labs of Kuzum [Duygu Kuzum, an Assistant Professor of Electrical and Computer Engineering at the University of California, San Diego {UCSD}] and Anna Devor report a transparent graphene microelectrode neural implant that eliminates light-induced artifacts to enable crosstalk-free integration of 2-photon microscopy, optogenetic stimulation, and cortical recordings in the same in vivo experiment. The new class of transparent brain implant is based on monolayer graphene. It offers a practical pathway to investigate neuronal activity over multiple spatial scales extending from single neurons to large neuronal populations.

Conventional metal-based microelectrodes cannot be used for simultaneous measurements of multiple optical and electrical parameters, which are essential for comprehensive investigation of brain function across spatio-temporal scales. Since they are opaque, they block the field of view of the microscopes and generate optical shadows impeding imaging.

More importantly, they cause light-induced artifacts in electrical recordings, which can significantly interfere with neural signals. The transparent graphene electrode technology presented in this paper addresses these problems and allows seamless and crosstalk-free integration of optical and electrical sensing and manipulation technologies.

In their work, the scientists demonstrate that by careful design of key steps in the fabrication process for transparent graphene electrodes, the light-induced artifact problem can be mitigated and virtually artifact-free local field potential (LFP) recordings can be achieved within operating light intensities.

“Optical transparency of graphene enables seamless integration of imaging, optogenetic stimulation and electrical recording of brain activity in the same experiment with animal models,” Kuzum explains. “Different from conventional implants based on metal electrodes, graphene-based electrodes do not generate any electrical artifacts upon interacting with light used for imaging or optogenetics. That enables crosstalk free integration of three modalities: imaging, stimulation and recording to investigate brain activity over multiple spatial scales extending from single neurons to large populations of neurons in the same experiment.”

The team’s new fabrication process avoids any crack formation in the transfer process, resulting in a 95-100% yield for the electrode arrays. This fabrication quality is important for expanding this technology to high-density large area transparent arrays to monitor brain-scale cortical activity in large animal models or humans.

“Our technology is also well-suited for neurovascular and neurometabolic studies, providing a ‘gold standard’ neuronal correlate for optical measurements of vascular, hemodynamic, and metabolic activity,” Kuzum points out. “It will find application in multiple areas, advancing our understanding of how microscopic neural activity at the cellular scale translates into macroscopic activity of large neuron populations.”

“Combining optical techniques with electrical recordings using graphene electrodes will allow to connect the large body of neuroscience knowledge obtained from animal models to human studies mainly relying on electrophysiological recordings of brain-scale activity,” she adds.

Next steps for the team involve employing this technology to investigate coupling and information transfer between different brain regions.

This work is part of the US BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative and there’s more than one team working with transparent graphene electrodes. John Hewitt in an Oct. 21, 2014 posting on ExtremeTech describes two other teams’ work (Note: Links have been removed),

The solution [to the problems with metal electrodes], now emerging from multiple labs throughout the universe is to build flexible, transparent electrode arrays from graphene. Two studies in the latest issue of Nature Communications, one from the University of Wisconsin-Madison and the other from Penn [University of Pennsylvania], describe how to build these devices.

The University of Wisconsin researchers are either a little bit smarter or just a little bit richer, because they published their work open access. It’s a no-brainer then that we will focus on their methods first, and also in more detail. To make the arrays, these guys first deposited the parylene (polymer) substrate on a silicon wafer, metalized it with gold, and then patterned it with an electron beam to create small contact pads. The magic was to then apply four stacked single-atom-thick graphene layers using a wet transfer technique. These layers were then protected with a silicon dioxide layer, another parylene layer, and finally molded into brain signal recording goodness with reactive ion etching.

The researchers went with four graphene layers because that provided optimal mechanical integrity and conductivity while maintaining sufficient transparency. They tested the device in opto-enhanced mice whose neurons expressed proteins that react to blue light. When they hit the neurons with a laser fired in through the implant, the protein channels opened and fired the cell beneath. The masterstroke that remained was then to successfully record the electrical signals from this firing, sit back, and wait for the Nobel prize office to call.

The Penn State group [Note: Every researcher mentioned in the paper Hewitt linked to is from the University of Pennsylvania] used a similar 16-spot electrode array (pictured above right), and proceeded — we presume — in much the same fashion. Their angle was to perform high-resolution optical imaging, in particular calcium imaging, right out through the transparent electrode arrays, which simultaneously recorded high-temporal-resolution signals. They did this in slices of the hippocampus where they could bring to bear the complex and multifarious hardware needed to perform confocal and two-photon microscopy. These latter techniques provide a boost in spatial resolution by zeroing in over narrow planes inside the specimen, and limiting the background by the requirement of two photons to generate an optical signal. We should mention that there are voltage sensitive dyes available, in addition to standard calcium dyes, which can almost record the fastest single spikes, but electrical recording still reigns supreme for speed.

What a mouse looks like with an optogenetics system plugged in

One concern of both groups in making these kinds of simultaneous electro-optic measurements was the generation of light-induced artifacts in the electrical recordings. This potential complication, called the Becquerel photovoltaic effect, has been known to exist since it was first demonstrated back in 1839. When light hits a conventional metal electrode, a photoelectrochemical (or more simply, a photovoltaic) effect occurs. If present in these recordings, the resulting artifact could make the optical and electrical signals difficult to disentangle. The Penn researchers reported that they saw no significant artifact, while the Wisconsin researchers saw some small effects with their device. In particular, when compared with platinum electrodes put into the opposite-side cortical hemisphere, the Wisconsin researchers found that the artifact from graphene was similar to that from platinum.

Here’s a link to and a citation for the latest research from UCSD,

Deep 2-photon imaging and artifact-free optogenetics through transparent graphene microelectrode arrays by Martin Thunemann, Yichen Lu, Xin Liu, Kıvılcım Kılıç, Michèle Desjardins, Matthieu Vandenberghe, Sanaz Sadegh, Payam A. Saisan, Qun Cheng, Kimberly L. Weldy, Hongming Lyu, Srdjan Djurovic, Ole A. Andreassen, Anders M. Dale, Anna Devor, & Duygu Kuzum. Nature Communications, volume 9, Article number: 2035 (2018) doi:10.1038/s41467-018-04457-5 Published: 23 May 2018

This paper is open access.

You can find out more about the US BRAIN initiative here and if you’re curious, you can find out more about the project at UCSD here. Duygu Kuzum (now at UCSD) was at  the University of Pennsylvania in 2014 and participated in the work mentioned in Hewitt’s 2014 posting.

Injectable bandages for internal bleeding and hydrogel for the brain

This injectable bandage could be a gamechanger (as they say) if it can be taken beyond the ‘in vitro’ (i.e., petri dish) testing stage. A May 22, 2018 news item on Nanowerk makes the announcement (Note: A link has been removed),

While several products are available to quickly seal surface wounds, rapidly stopping fatal internal bleeding has proven more difficult. Now researchers from the Department of Biomedical Engineering at Texas A&M University are developing an injectable hydrogel bandage that could save lives in emergencies such as penetrating shrapnel wounds on the battlefield (Acta Biomaterialia, “Nanoengineered injectable hydrogels for wound healing application”).

A May 22, 2018 US National Institute of Biomedical Imaging and Bioengineering (NIBIB) news release, which originated the news item, provides more detail (Note: Links have been removed),

The researchers combined a hydrogel base (a water-swollen polymer) and nanoparticles that interact with the body’s natural blood-clotting mechanism. “The hydrogel expands to rapidly fill puncture wounds and stop blood loss,” explained Akhilesh Gaharwar, Ph.D., assistant professor and senior investigator on the work. “The surface of the nanoparticles attracts blood platelets that become activated and start the natural clotting cascade of the body.”

Enhanced clotting when the nanoparticles were added to the hydrogel was confirmed by standard laboratory blood clotting tests. Clotting time was reduced from eight minutes to six minutes when the hydrogel was introduced into the mixture. When nanoparticles were added, clotting time was significantly reduced, to less than three minutes.
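To put those clotting numbers in relative terms, here’s a quick back-of-the-envelope Python sketch of my own (treating “less than three minutes” as a three-minute upper bound),

```python
# Clotting times quoted in the news release, in minutes.
baseline = 8.0                # standard laboratory clotting test, no additives
hydrogel_only = 6.0           # hydrogel base alone
hydrogel_nanoparticles = 3.0  # upper bound; reported as "less than three minutes"

def percent_reduction(new, old):
    """Percentage reduction in clotting time relative to the baseline."""
    return 100.0 * (old - new) / old

print(f"Hydrogel alone:            {percent_reduction(hydrogel_only, baseline):.0f}% faster clotting")
print(f"Hydrogel + nanoparticles: >{percent_reduction(hydrogel_nanoparticles, baseline):.0f}% faster clotting")
```

In other words, the hydrogel alone cuts clotting time by about a quarter, and adding the nanoparticles more than halves it again.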

In addition to the rapid clotting mechanism of the hydrogel composite, the engineers took advantage of special properties of the nanoparticle component. They found they could use the electric charge of the nanoparticles to add growth factors that efficiently adhered to the particles. “Stopping fatal bleeding rapidly was the goal of our work,” said Gaharwar. “However, we found that we could attach growth factors to the nanoparticles. This was an added bonus because the growth factors act to begin the body’s natural wound healing process—the next step needed after bleeding has stopped.”

The researchers were able to attach vascular endothelial growth factor (VEGF) to the nanoparticles. They tested the hydrogel/nanoparticle/VEGF combination in a cell culture test that mimics the wound healing process. The test uses a petri dish with a layer of endothelial cells on the surface that create a solid skin-like sheet. The sheet is then scratched down the center creating a rip or hole in the sheet that resembles a wound.

When the hydrogel containing VEGF bound to the nanoparticles was added to the damaged endothelial cell wound, the cells were induced to grow back and fill-in the scratched region—essentially mimicking the healing of a wound.

“Our laboratory experiments have verified the effectiveness of the hydrogel for initiating both blood clotting and wound healing,” said Gaharwar. “We are anxious to begin tests in animals with the hope of testing and eventual use in humans where we believe our formulation has great potential to have a significant impact on saving lives in critical situations.”

The work was funded by grant EB023454 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), and the National Science Foundation. The results were reported in the February issue of the journal Acta Biomaterialia.

The paper was published back in April 2018 and there was an April 2, 2018 Texas A&M University news release on EurekAlert making the announcement (and providing a few unique details),

A penetrating injury from shrapnel is a serious obstacle in overcoming battlefield wounds that can ultimately lead to death. Given the high mortality rates due to hemorrhaging, there is an unmet need to quickly self-administer materials that prevent fatality due to excessive blood loss.

With a gelling agent commonly used in preparing pastries, researchers from the Inspired Nanomaterials and Tissue Engineering Laboratory have successfully fabricated an injectable bandage to stop bleeding and promote wound healing.

In a recent article “Nanoengineered Injectable Hydrogels for Wound Healing Application” published in Acta Biomaterialia, Dr. Akhilesh K. Gaharwar, assistant professor in the Department of Biomedical Engineering at Texas A&M University, uses kappa-carrageenan and nanosilicates to form injectable hydrogels to promote hemostasis (the process to stop bleeding) and facilitate wound healing via a controlled release of therapeutics.

“Injectable hydrogels are promising materials for achieving hemostasis in case of internal injuries and bleeding, as these biomaterials can be introduced into a wound site using minimally invasive approaches,” said Gaharwar. “An ideal injectable bandage should solidify after injection in the wound area and promote a natural clotting cascade. In addition, the injectable bandage should initiate wound healing response after achieving hemostasis.”

The study uses a commonly used thickening agent known as kappa-carrageenan, obtained from seaweed, to design injectable hydrogels. Hydrogels are a 3-D water swollen polymer network, similar to Jell-O, simulating the structure of human tissues.

When kappa-carrageenan is mixed with clay-based nanoparticles, injectable gelatin is obtained. The charged characteristics of clay-based nanoparticles provide hemostatic ability to the hydrogels. Specifically, plasma proteins and platelets from the blood adsorb onto the gel surface and trigger a blood clotting cascade.

“Interestingly, we also found that these injectable bandages can show a prolonged release of therapeutics that can be used to heal the wound” said Giriraj Lokhande, a graduate student in Gaharwar’s lab and first author of the paper. “The negative surface charge of nanoparticles enabled electrostatic interactions with therapeutics thus resulting in the slow release of therapeutics.”

Nanoparticles that promote blood clotting and wound healing (red discs), attached to the wound-filling hydrogel component (black), form a nanocomposite hydrogel. The gel is designed to be self-administered to stop bleeding and begin wound-healing in emergency situations. Credit: Lokhande, et al.

Here’s a link to and a citation for the paper,

Nanoengineered injectable hydrogels for wound healing application by Giriraj Lokhande, James K. Carrow, Teena Thakur, Janet R. Xavier, Madasamy Parani, Kayla J. Bayless, Akhilesh K. Gaharwar. Acta Biomaterialia, Volume 70, 1 April 2018, Pages 35-47 DOI: https://doi.org/10.1016/j.actbio.2018.01.045

This paper is behind a paywall.

Hydrogel and the brain

It’s been an interesting week for hydrogels. On May 21, 2018 there was a news item on ScienceDaily about a bioengineered hydrogel which stimulated brain tissue growth after a stroke (mouse model),

In a first-of-its-kind finding, a new stroke-healing gel helped regrow neurons and blood vessels in mice with stroke-damaged brains, UCLA researchers report in the May 21 issue of Nature Materials.

“We tested this in laboratory mice to determine if it would repair the brain in a model of stroke, and lead to recovery,” said Dr. S. Thomas Carmichael, Professor and Chair of neurology at UCLA. “This study indicated that new brain tissue can be regenerated in what was previously just an inactive brain scar after stroke.”

The brain has a limited capacity for recovery after stroke and other diseases. Unlike some other organs in the body, such as the liver or skin, the brain does not regenerate new connections, blood vessels or new tissue structures. Tissue that dies in the brain from stroke is absorbed, leaving a cavity, devoid of blood vessels, neurons or axons, the thin nerve fibers that project from neurons.

After 16 weeks, stroke cavities in mice contained regenerated brain tissue, including new neural networks — a result that had not been seen before. The mice with new neurons showed improved motor behavior, though the exact mechanism wasn’t clear.

Remarkable stuff.

The roles mathematics and light play in cellular communication

These are two entirely different types of research but taken together they help build a picture about how the cells in our bodies function.

Cells and light

An April 30, 2018 news item on phys.org describes work on controlling biology with light,

Over the past five years, University of Chicago chemist Bozhi Tian has been figuring out how to control biology with light.

A longterm science goal is devices to serve as the interface between researcher and body—both as a way to understand how cells talk among each other and within themselves, and eventually, as a treatment for brain or nervous system disorders [emphasis mine] by stimulating nerves to fire or limbs to move. Silicon—a versatile, biocompatible material used in both solar panels and surgical implants—is a natural choice.

In a paper published April 30 in Nature Biomedical Engineering, Tian’s team laid out a system of design principles for working with silicon to control biology at three levels—from individual organelles inside cells to tissues to entire limbs. The group has demonstrated each in cells or mice models, including the first time anyone has used light to control behavior without genetic modification.

“We want this to serve as a map, where you can decide which problem you would like to study and immediately find the right material and method to address it,” said Tian, an assistant professor in the Department of Chemistry.

Researchers built this thin layer of silicon lace to modulate neural signals when activated by light. Courtesy of Yuanwen Jiang and Bozhi Tian

An April 30, 2018 University of Chicago news release by Louise Lerner, which originated the news item, describes the work in greater detail,

The scientists’ map lays out best methods to craft silicon devices depending on both the intended task and the scale—ranging from inside a cell to a whole animal.

For example, to affect individual brain cells, silicon can be crafted to respond to light by emitting a tiny ionic current, which encourages neurons to fire. But in order to stimulate limbs, scientists need a system whose signals can travel farther and are stronger—such as a gold-coated silicon material in which light triggers a chemical reaction.

The mechanical properties of the implant are important, too. Say researchers would like to work with a larger piece of the brain, like the cortex, to control motor movement. The brain is a soft, squishy substance, so they’ll need a material that’s similarly soft and flexible, but can bind tightly against the surface. They’d want thin and lacy silicon, say the design principles.
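As an aside, the ‘map’ the researchers describe can be pictured as a simple lookup from biological scale to silicon form. Here’s a purely illustrative Python sketch of my own; the scale labels and material descriptions paraphrase the examples quoted above and are my shorthand, not categories taken from the paper,

```python
# Illustrative lookup paraphrasing the "design principles" examples above.
# Keys and values are my own shorthand, not the paper's actual taxonomy.
SILICON_DESIGN_MAP = {
    "single cell / organelle": "plain silicon that emits a tiny ionic current under light",
    "tissue (e.g., cortex)": "thin, lacy, flexible silicon that conforms to soft brain tissue",
    "limb / whole animal": "gold-coated silicon in which light triggers a stronger chemical reaction",
}

def suggest_silicon_form(scale: str) -> str:
    """Return the suggested silicon form for a given biological scale."""
    return SILICON_DESIGN_MAP.get(scale, "no example given at this scale")

if __name__ == "__main__":
    for scale in SILICON_DESIGN_MAP:
        print(f"{scale}: {suggest_silicon_form(scale)}")
```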

The team favors this method because it doesn’t require genetic modification or a power supply wired in, since the silicon can be fashioned into what are essentially tiny solar panels. (Many other forms of monitoring or interacting with the brain need to have a power supply, and keeping a wire running into a patient is an infection risk.)

They tested the concept in mice and found they could stimulate limb movements by shining light on brain implants. Previous research tested the concept in neurons.

“We don’t have answers to a number of intrinsic questions about biology, such as whether individual mitochondria communicate remotely through bioelectric signals,” said Yuanwen Jiang, the first author on the paper, then a graduate student at UChicago and now a postdoctoral researcher at Stanford. “This set of tools could address such questions as well as pointing the way to potential solutions for nervous system disorders.”

Other UChicago authors were Assoc. Profs. Chin-Tu Chen and Chien-Min Kao, Asst. Prof. Xiaoyang Wu, postdoctoral researchers Jaeseok Yi, Yin Fang, Xiang Gao, Jiping Yue, Hsiu-Ming Tsai and Bing Liu, graduate students Kelliann Koehler, Vishnu Nair, and Edward Sudzilovsky, and undergraduate student George Freyermuth.

Other researchers on the paper hailed from Northwestern University, the University of Illinois at Chicago and Hong Kong Polytechnic University.

The researchers have also made this video illustrating their work,

Tiny silicon nanowires (in blue), activated by light, trigger activity in neurons. (Courtesy Yuanwen Jiang and Bozhi Tian)

Here’s a link to and a citation for the paper,

Rational design of silicon structures for optically controlled multiscale biointerfaces by Yuanwen Jiang, Xiaojian Li, Bing Liu, Jaeseok Yi, Yin Fang, Fengyuan Shi, Xiang Gao, Edward Sudzilovsky, Ramya Parameswaran, Kelliann Koehler, Vishnu Nair, Jiping Yue, KuangHua Guo, Yin Fang, Hsiu-Ming Tsai, George Freyermuth, Raymond C. S. Wong, Chien-Min Kao, Chin-Tu Chen, Alan W. Nicholls, Xiaoyang Wu, Gordon M. G. Shepherd, & Bozhi Tian. Nature Biomedical Engineering (2018) doi:10.1038/s41551-018-0230-1 Published: 30 April 2018

This paper is behind a paywall.

Mathematics and how living cells ‘think’

This May 2, 2018 Queensland University of Technology (QUT; Australia) press release is also on EurekAlert,

How does the ‘brain’ of a living cell work, allowing an organism to function and thrive in changing and unfavourable environments?

Queensland University of Technology (QUT) researcher Dr Robyn Araujo has developed new mathematics to solve a longstanding mystery of how the incredibly complex biological networks within cells can adapt and reset themselves after exposure to a new stimulus.

Her findings, published in Nature Communications, provide a new level of understanding of cellular communication and cellular ‘cognition’, and have potential application in a variety of areas, including new targeted cancer therapies and drug resistance.

Dr Araujo, a lecturer in applied and computational mathematics in QUT’s Science and Engineering Faculty, said that while we know a great deal about gene sequences, we have had extremely limited insight into how the proteins encoded by these genes work together as an integrated network – until now.

“Proteins form unfathomably complex networks of chemical reactions that allow cells to communicate and to ‘think’ – essentially giving the cell a ‘cognitive’ ability, or a ‘brain’,” she said. “It has been a longstanding mystery in science how this cellular ‘brain’ works.

“We could never hope to measure the full complexity of cellular networks – the networks are simply too large and interconnected and their component proteins are too variable.

“But mathematics provides a tool that allows us to explore how these networks might be constructed in order to perform as they do.

“My research is giving us a new way to look at unravelling network complexity in nature.”

Dr Araujo’s work has focused on the widely observed function called perfect adaptation – the ability of a network to reset itself after it has been exposed to a new stimulus.

“An example of perfect adaptation is our sense of smell,” she said. “When exposed to an odour we will smell it initially but after a while it seems to us that the odour has disappeared, even though the chemical, the stimulus, is still present.

“Our sense of smell has exhibited perfect adaptation. This process allows it to remain sensitive to further changes in our environment so that we can detect both very faint and very strong odours.

“This kind of adaptation is essentially what takes place inside living cells all the time. Cells are exposed to signals – hormones, growth factors, and other chemicals – and their proteins will tend to react and respond initially, but then settle down to pre-stimulus levels of activity even though the stimulus is still there.

“I studied all the possible ways a network can be constructed and found that to be capable of this perfect adaptation in a robust way, a network has to satisfy an extremely rigid set of mathematical principles. There are a surprisingly limited number of ways a network could be constructed to perform perfect adaptation.

“Essentially we are now discovering the needles in the haystack in terms of the network constructions that can actually exist in nature.

“It is early days, but this opens the door to being able to modify cell networks with drugs and do it in a more robust and rigorous way. Cancer therapy is a potential area of application, and insights into how proteins work at a cellular level is key.”
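For anyone who’d like a more concrete feel for ‘perfect adaptation’, here’s a minimal toy model of my own (a classic two-component ‘sniffer’ motif, not something taken from Dr Araujo’s paper): the response R is driven by the stimulus S but is also degraded by an internal species X that tracks S, so R spikes when S steps up and then settles back to exactly its pre-stimulus level,

```python
# A classic "sniffer" adaptation motif:
#   dR/dt = k1*S - k2*X*R   (response R is produced by the stimulus S
#                            and removed at a rate proportional to X)
#   dX/dt = k3*S - k4*X     (internal species X slowly tracks S)
# At steady state R = k1*k4 / (k2*k3), which does not depend on S at all,
# so R always returns to the same level after a step in the stimulus.
k1, k2, k3, k4 = 2.0, 2.0, 1.0, 1.0

def simulate(t_end=40.0, dt=0.01):
    """Integrate the model with forward Euler; the stimulus steps from 1 to 3 at t = 10."""
    R, X = 1.0, 1.0            # pre-stimulus steady state for S = 1
    trace = []
    for i in range(int(t_end / dt)):
        t = i * dt
        S = 1.0 if t < 10.0 else 3.0
        R += (k1 * S - k2 * X * R) * dt
        X += (k3 * S - k4 * X) * dt
        trace.append((t, S, R))
    return trace

trace = simulate()
for t_query in (5.0, 11.0, 15.0, 39.9):
    t, S, R = min(trace, key=lambda row: abs(row[0] - t_query))
    print(f"t = {t:5.1f}   stimulus = {S:.0f}   response = {R:.3f}")
```

Changing the step size or the rate constants changes the size of the transient, but the final level of R always comes back to k1*k4/(k2*k3), which is the kind of rigid structural constraint the quoted work characterises in full generality.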

Dr Araujo said the published study was the result of more than “five years of relentless effort to solve this incredibly deep mathematical problem”. She began research in this field while at George Mason University in Virginia in the US.

Her mentor at the university’s College of Science and co-author of the Nature Communications paper, Professor Lance Liotta, said the “amazing and surprising” outcome of Dr Araujo’s study is applicable to any living organism or biochemical network of any size.

“The study is a wonderful example of how mathematics can have a profound impact on society and Dr Araujo’s results will provide a set of completely fresh approaches for scientists in a variety of fields,” he said.

“For example, in strategies to overcome cancer drug resistance – why do tumours frequently adapt and grow back after treatment?

“It could also help understanding of how our hormone system, our immune defences, perfectly adapt to frequent challenges and keep us well, and it has future implications for creating new hypotheses about drug addiction and brain neuron signalling adaptation.”

Here’s a link to and a citation for the paper,

The topological requirements for robust perfect adaptation in networks of any size by Robyn P. Araujo & Lance A. Liotta. Nature Communications, volume 9, Article number: 1757 (2018) doi:10.1038/s41467-018-04151-6 Published: 01 May 2018

This paper is open access.

New path to viable memristor/neuristor?

I first stumbled onto memristors and the possibility of brain-like computing sometime in 2008 (around the time that R. Stanley Williams and his team at HP Labs first published the results of their research linking Dr. Leon Chua’s memristor theory to their attempts to shrink computer chips). In the almost 10 years since, scientists have worked hard to utilize memristors in the field of neuromorphic (brain-like) engineering/computing.

A January 22, 2018 news item on phys.org describes the latest work,

When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses—the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT [Massachusetts Institute of Technology] have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

A January 22, 2018 MIT news release by Jennifer Chu (also on EurekAlert), which originated the news item, provides more detail about the research,

The design, published today [January 22, 2018] in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwriting-recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
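To get a feel for why that uniformity matters, here’s a small, self-contained sketch of my own (not the MIT team’s simulation, and it uses synthetic data rather than a handwriting dataset): a simple classifier is trained in software and then its ‘synaptic’ weights are perturbed with different amounts of random device-to-device variation before accuracy is measured. Variation on the order of the reported 4 percent barely moves the result, while much larger variation visibly degrades it,

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for a simple pattern-recognition task.
n, d = 2000, 20
X = np.vstack([rng.normal(-0.3, 1.0, size=(n // 2, d)),
               rng.normal(+0.3, 1.0, size=(n // 2, d))])
y = np.hstack([np.zeros(n // 2), np.ones(n // 2)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a single-layer "network" (logistic regression) with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * np.mean(p - y)

def accuracy_with_variation(noise_frac, trials=20):
    """Mean accuracy when every weight is perturbed by Gaussian
    'device-to-device' variation of the given fractional size."""
    accs = []
    for _ in range(trials):
        noisy_w = w * (1.0 + noise_frac * rng.standard_normal(d))
        pred = sigmoid(X @ noisy_w + b) > 0.5
        accs.append(np.mean(pred == y))
    return float(np.mean(accs))

for frac in (0.00, 0.04, 0.25, 1.00):
    print(f"weight variation {frac:4.0%}: accuracy {accuracy_with_variation(frac):.3f}")
```

In the real device the picture is more involved (variation interacts with training, retention and cycle-to-cycle noise), but the sketch captures why a 4 percent spread between synapses is good news for building larger arrays.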

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.

Here’s a link to and a citation for the paper,

SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations by Shinhyun Choi, Scott H. Tan, Zefan Li, Yunjo Kim, Chanyeol Choi, Pai-Yu Chen, Hanwool Yeon, Shimeng Yu, & Jeehwan Kim. Nature Materials (2018) doi:10.1038/s41563-017-0001-5 Published online: 22 January 2018

This paper is behind a paywall.

For the curious I have included a number of links to recent ‘memristor’ postings here,

January 22, 2018: Memristors at Masdar

January 3, 2018: Mott memristor

August 24, 2017: Neuristors and brainlike computing

June 28, 2017: Dr. Wei Lu and bio-inspired ‘memristor’ chips

May 2, 2017: Predicting how a memristor functions

December 30, 2016: Changing synaptic connectivity with a memristor

December 5, 2016: The memristor as computing device

November 1, 2016: The memristor as the ‘missing link’ in bioelectronic medicine?

You can find more by using ‘memristor’ as the search term in the blog search function or on the search engine of your choice.

A bioengineered robot hand with its own nervous system: machine/flesh and a job opening

A November 14, 2017 news item on phys.org announces a grant for a research project which will see engineered robot hands combined with regenerative medicine to imbue neuroprosthetic hands with the sense of touch,

The sense of touch is often taken for granted. For someone without a limb or hand, losing that sense of touch can be devastating. While highly sophisticated prostheses with complex moving fingers and joints are available to mimic almost every hand motion, they remain frustratingly difficult and unnatural for the user. This is largely because they lack the tactile experience that guides every movement. This void in sensation results in limited use or abandonment of these very expensive artificial devices. So why not make a prosthesis that can actually “feel” its environment?

That is exactly what an interdisciplinary team of scientists from Florida Atlantic University and the University of Utah School of Medicine aims to do. They are developing a first-of-its-kind bioengineered robotic hand that will grow and adapt to its environment. This “living” robot will have its own peripheral nervous system directly linking robotic sensors and actuators. FAU’s College of Engineering and Computer Science is leading the multidisciplinary team that has received a four-year, $1.3 million grant from the National Institute of Biomedical Imaging and Bioengineering of the [US] National Institutes of Health for a project titled “Virtual Neuroprosthesis: Restoring Autonomy to People Suffering from Neurotrauma.”

A November 14, 2017 Florida Atlantic University (FAU) news release by Gisele Galoustian, which originated the news item, goes into more detail,

With expertise in robotics, bioengineering, behavioral science, nerve regeneration, electrophysiology, microfluidic devices, and orthopedic surgery, the research team is creating a living pathway from the robot’s touch sensation to the user’s brain to help amputees control the robotic hand. A neuroprosthesis platform will enable them to explore how neurons and behavior can work together to regenerate the sensation of touch in an artificial limb.

At the core of this project is a cutting-edge robotic hand and arm developed in the BioRobotics Laboratory in FAU’s College of Engineering and Computer Science. Just like human fingertips, the robotic hand is equipped with numerous sensory receptors that respond to changes in the environment. Controlled by a human, it can sense pressure changes, interpret the information it is receiving and interact with various objects. It adjusts its grip based on an object’s weight or fragility. But the real challenge is figuring out how to send that information back to the brain using living residual neural pathways to replace those that have been damaged or destroyed by trauma.

“When the peripheral nerve is cut or damaged, it uses the rich electrical activity that tactile receptors create to restore itself. We want to examine how the fingertip sensors can help damaged or severed nerves regenerate,” said Erik Engeberg, Ph.D., principal investigator, an associate professor in FAU’s Department of Ocean and Mechanical Engineering, and director of FAU’s BioRobotics Laboratory. “To accomplish this, we are going to directly connect these living nerves in vitro and then electrically stimulate them on a daily basis with sensors from the robotic hand to see how the nerves grow and regenerate while the hand is operated by limb-absent people.”

For the study, the neurons will not be kept in conventional petri dishes. Instead, they will be placed in  biocompatible microfluidic chambers that provide a nurturing environment mimicking the basic function of living cells. Sarah E. Du, Ph.D., co-principal investigator, an assistant professor in FAU’s Department of Ocean and Mechanical Engineering, and an expert in the emerging field of microfluidics, has developed these tiny customized artificial chambers with embedded micro-electrodes. The research team will be able to stimulate the neurons with electrical impulses from the robot’s hand to help regrowth after injury. They will morphologically and electrically measure in real-time how much neural tissue has been restored.

Jianning Wei, Ph.D., co-principal investigator, an associate professor of biomedical science in FAU’s Charles E. Schmidt College of Medicine, and an expert in neural damage and regeneration, will prepare the neurons in vitro, observe them grow and see how they fare and regenerate in the aftermath of injury. This “virtual” method will give the research team multiple opportunities to test and retest the nerves without any harm to subjects.

Using an electroencephalogram (EEG) to detect electrical activity in the brain, Emmanuelle Tognoli, Ph.D., co-principal investigator, associate research professor in FAU’s Center for Complex Systems and Brain Sciences in the Charles E. Schmidt College of Science, and an expert in electrophysiology and neural, behavioral, and cognitive sciences, will examine how the tactile information from the robotic sensors is passed onto the brain to distinguish scenarios with successful or unsuccessful functional restoration of the sense of touch. Her objective: to understand how behavior helps nerve regeneration and how this nerve regeneration helps the behavior.

Once the nerve impulses from the robot’s tactile sensors have gone through the microfluidic chamber, they are sent back to the human user manipulating the robotic hand. This is done with a special device that converts the signals coming from the microfluidic chambers into a controllable pressure at a cuff placed on the remaining portion of the amputated person’s arm. Users will know if they are squeezing the object too hard or if they are losing their grip.

Engeberg also is working with Douglas T. Hutchinson, M.D., co-principal investigator and a professor in the Department of Orthopedics at the University of Utah School of Medicine, who specializes in hand and orthopedic surgery. They are developing a set of tasks and behavioral neural indicators of performance that will ultimately reveal how to promote a healthy sensation of touch in amputees and limb-absent people using robotic devices. The research team also is seeking a post-doctoral researcher with multi-disciplinary experience to work on this breakthrough project.

Here’s more about the job opportunity from the FAU BioRobotics Laboratory job posting (I checked on January 30, 2018 and it seems applications are still being accepted),

Post-doctoral Opportunity

Date Posted: Oct. 13, 2017

The BioRobotics Lab at Florida Atlantic University (FAU) invites applications for a NIH NIBIB-funded Postdoctoral position to develop a Virtual Neuroprosthesis aimed at providing a sense of touch in amputees and limb-absent people.

Candidates should have a Ph.D. in one of the following degrees: mechanical engineering, electrical engineering, biomedical engineering, bioengineering or related, with interest and/or experience in transdisciplinary work at the intersection of robotic hands, biology, and biomedical systems. Prior experience in the neural field will be considered an advantage, though not a necessity. Underrepresented minorities and women are warmly encouraged to apply.

The postdoctoral researcher will be co-advised across the department of Mechanical Engineering and the Center for Complex Systems & Brain Sciences through an interdisciplinary team whose expertise spans Robotics, Microfluidics, Behavioral and Clinical Neuroscience and Orthopedic Surgery.

The position will be for one year with a possibility of extension based on performance. Salary will be commensurate with experience and qualifications. Review of applications will begin immediately and continue until the position is filled.

The application should include:

  1. a cover letter with research interests and experiences,
  2. a CV, and
  3. names and contact information for three professional references.

Qualified candidates can contact Erik Engeberg, Ph.D., Associate Professor, in the FAU Department of Ocean and Mechanical Engineering at eengeberg@fau.edu. Please reference AcademicKeys.com in your cover letter when applying for or inquiring about this job announcement.

You can find the apply button on this page. Good luck!

Art in the details: A look at the role of art in science—a Sept. 19, 2017 Café Scientifique event in Vancouver, Canada

The Sept. 19, 2017 Café Scientifique event, “Art in the Details: A look at the role of art in science,” in Vancouver seems to be part of a larger neuroscience and the arts program at the University of British Columbia. First, the details about the Sept. 19, 2017 event from the eventful Vancouver webpage,

Café Scientifique – Art in the Details: A look at the role of art in science

With so much beauty in the natural world, why does the misconception that art and science are vastly different persist? Join us for discussion and dessert as we hear from artists, researchers and academic professionals about the role art has played in scientific research – from the formative work of Santiago Ramón y Cajal to modern imaging, and beyond – and how it might help shape scientific understanding in the future.

September 19th, 2017, 7:00 – 9:00 pm (doors open at 6:45 pm)

TELUS World of Science [also known as Science World], 1455 Quebec St., Vancouver, BC V6A 3Z7

Free Admission [emphasis mine]

Experts

Dr Carol-Ann Courneya, Associate Professor in the Department of Cellular and Physiological Science and Assistant Dean of Student Affairs, Faculty of Medicine, University of British Columbia

Dr Jason Snyder, Assistant Professor, Department of Psychology, University of British Columbia, http://snyderlab.com/

Dr Steven Barnes, Instructor and Assistant Head—Undergraduate Affairs, Department of Psychology, University of British Columbia, http://stevenjbarnes.com/

Moderated by Bruce Claggett, Senior Managing Editor, NEWS 1130

This evening event is presented in collaboration with the Djavad Mowafaghian Centre for Brain Health. Please note: this is a private, adult-oriented event and TELUS World of Science will be closed during this discussion.

The Art in the Details event page on the Science World website provides a bit more information about the speakers (mostly in the form of links to their webpages),

Experts

Dr Carol-Ann Courneya
Associate Professor in the Department of Cellular and Physiological Science and Assistant Dean of Student Affairs, Faculty of Medicine, University of British Columbia

Dr Jason Snyder 

Assistant Professor, Department of Psychology, University of British Columbia

Dr Steven Barnes

Instructor, Department of Psychology, University of British Columbia

Moderated By  

Bruce Claggett

Senior Managing Editor, NEWS 1130

Should you click through to obtain tickets from either the eventful Vancouver or Science World websites, you’ll find the event is sold out, but perhaps the organizers will include a waitlist.

Even if you can’t get a ticket, there’s an exhibition of Santiago Ramón y Cajal’s work (from the Djavad Mowafaghian Centre for Brain Health’s Beautiful Brain webpage),

Drawings of Santiago Ramón y Cajal to be shown at UBC

Pictured: Santiago Ramón y Cajal, injured Purkinje neurons, 1914, ink and pencil on paper. Courtesy of Instituto Cajal (CSIC).

The Beautiful Brain is the first North American museum exhibition to present the extraordinary drawings of Santiago Ramón y Cajal (1852–1934), a Spanish pathologist, histologist and neuroscientist renowned for his discovery of neuron cells and their structure, for which he was awarded the Nobel Prize in Physiology or Medicine in 1906. Known as the father of modern neuroscience, Cajal was also an exceptional artist. He combined scientific and artistic skills to produce arresting drawings with extraordinary scientific and aesthetic qualities.

A century after their completion, Cajal’s drawings are still used in contemporary medical publications to illustrate important neuroscience principles, and continue to fascinate artists and visual art audiences. Eighty of Cajal’s drawings will be accompanied by a selection of contemporary neuroscience visualizations by international scientists. The Morris and Helen Belkin Art Gallery exhibition will also include early 20th century works that imaged consciousness, including drawings from Annie Besant’s Thought Forms (1901) and Charles Leadbeater’s The Chakras (1927), as well as abstract works by Lawren Harris that explored his interest in spirituality and mysticism.

After countless hours at the microscope, Cajal was able to perceive that the brain was made up of individual nerve cells or neurons rather than a tangled single web, which was only decisively proven by electron microscopy in the 1950s and is the basis of neuroscience today. His speculative drawings stemmed from an understanding of aesthetics in their compressed detail and lucid composition, as he laboured to clearly represent matter and processes that could not be seen.

In a special collaboration with the Morris and Helen Belkin Art Gallery and the VGH & UBC Hospital Foundation this project will encourage meaningful dialogue amongst artists, curators, scientists and scholars on concepts of neuroplasticity and perception. Public and Academic programs will address the emerging field of art and neuroscience and engage interdisciplinary research of scholars from the sciences and humanities alike.

“This is an incredible opportunity for the neuroscience and visual arts communities at the University and Vancouver,” says Dr. Brian MacVicar, who has been working diligently with Director Scott Watson at the Morris and Helen Belkin Art Gallery and with his colleagues at the University of Minnesota for the past few years to bring this exhibition to campus. “Without Cajal’s impressive body of work, our understanding of the anatomy of the brain would not be so well-formed; Cajal’s legacy has been of critical importance to neuroscience teaching and research over the past century.”

A book published by Abrams accompanies the exhibition, containing full colour reproductions of all 80 of the exhibition drawings, commentary on each of the works and essays on Cajal’s life and scientific contributions, artistic roots and achievements and contemporary neuroscience imaging techniques.

Cajal’s work will be on display at the Morris and Helen Belkin Art Gallery from September 5 to December 3, 2017.

Join the UBC arts and neuroscience communities for a free symposium and dance performance celebrating The Beautiful Brain at UBC on September 7. [link removed]

The Beautiful Brain: The Drawings of Santiago Ramón y Cajal was developed by the Frederick R. Weisman Art Museum, University of Minnesota with the Instituto Cajal. The exhibition at the Morris and Helen Belkin Art Gallery, University of British Columbia, is presented in partnership with the Djavad Mowafaghian Centre for Brain Health with support from the VGH & UBC Hospital Foundation. We gratefully acknowledge the generous support of the Canada Council for the Arts, the British Columbia Arts Council and Belkin Curator’s Forum members.

The Morris and Helen Belkin Art Gallery’s Beautiful Brain webpage has a listing of upcoming events associated with the exhibition as well as instructions on how to get there (if you click on About),

SEMINAR & READING GROUP: Plasticity at SFU Vancouver and 221A: Wednesdays, October 4, 18, November 1, 15 and 21 at 7 pm

CONVERSATION with Anthony Phillips and Timothy Taylor: Wednesday, October 11, 2017 at 7 pm

LECTURE with Catherine Malabou at the Liu Institute: Thursday, November 23 at 6 pm

CONCERT with UBC Contemporary Players: Friday, December 1 at 2 pm

Cajal was also an exceptional artist and studied as a teenager at the Academy of Arts in Huesca, Spain.

Organizationally, this seems a little higgledy-piggledy, with the Café Scientifique event found on some sites, the Belkin Gallery events found on one site, and no single listing of everything for The Beautiful Brain on any one site. Please let me know if you find something I’ve missed.

Carbon nanotubes to repair nerve fibres (cyborg brains?)

Can cyborg brains be far behind now that researchers are looking at ways to repair nerve fibres with carbon nanotubes (CNTs)? A June 26, 2017 news item on ScienceDaily describes the scheme,

Carbon nanotubes exhibit interesting characteristics rendering them particularly suited to the construction of special hybrid devices — consisting of biological tissue and synthetic material — planned to re-establish connections between nerve cells, for instance at spinal level, lost on account of lesions or trauma. This is the result of a piece of research published in the scientific journal Nanomedicine: Nanotechnology, Biology, and Medicine conducted by a multi-disciplinary team comprising SISSA (International School for Advanced Studies), the University of Trieste, ELETTRA Sincrotrone and two Spanish institutions, Basque Foundation for Science and CIC BiomaGUNE. More specifically, researchers have investigated the possible effects on neurons of the interaction with carbon nanotubes. Scientists have proven that these nanomaterials may regulate the formation of synapses, specialized structures through which the nerve cells communicate, and modulate biological mechanisms, such as the growth of neurons, as part of a self-regulating process. This result, which shows the extent to which the integration between nerve cells and these synthetic structures is stable and efficient, highlights the great potentialities of carbon nanotubes as innovative materials capable of facilitating neuronal regeneration or of creating a kind of artificial bridge between groups of neurons whose connection has been interrupted. In vivo testing has actually already begun.

The researchers have included a gorgeous image to illustrate their work,

Caption: Scientists have proven that these nanomaterials may regulate the formation of synapses, specialized structures through which the nerve cells communicate, and modulate biological mechanisms, such as the growth of neurons, as part of a self-regulating process. Credit: Pixabay

A June 26, 2017 SISSA press release (also on EurekAlert), which originated the news item, describes the work in more detail while explaining future research needs,

“Interface systems, or, more generally, neuronal prostheses, that enable an effective re-establishment of these connections are under active investigation” explain Laura Ballerini (SISSA) and Maurizio Prato (UniTS-CIC BiomaGUNE), coordinating the research project. “The perfect material to build these neural interfaces does not exist, yet the carbon nanotubes we are working on have already proved to have great potentialities. After all, nanomaterials currently represent our best hope for developing innovative strategies in the treatment of spinal cord injuries”. These nanomaterials are used both as scaffolds, a supportive framework for nerve cells, and as interfaces that release the signals enabling nerve cells to communicate with each other.

Many aspects, however, still need to be addressed. Among them is the impact on neuronal physiology of integrating these nanometric structures with the cell membrane. “Studying the interaction between these two elements is crucial, as it might also lead to some undesired effects, which we ought to exclude”. Laura Ballerini explains: “If, for example, the mere contact provoked a vertiginous rise in the number of synapses, these materials would be essentially unusable”. “This”, Maurizio Prato adds, “is precisely what we have investigated in this study where we used pure carbon nanotubes”.

The results of the research are extremely encouraging: “First of all we have proved that nanotubes do not interfere with the composition of lipids, of cholesterol in particular, which make up the cellular membrane in neurons. Membrane lipids play a very important role in the transmission of signals through the synapses. Nanotubes do not seem to influence this process, which is very important”.

There is more, however. The research has also highlighted the fact that the nerve cells growing on the substratum of nanotubes, thanks to this interaction, develop and reach maturity very quickly, eventually reaching a condition of biological homeostasis. “Nanotubes facilitate the full growth of neurons and the formation of new synapses. This growth, however, is not indiscriminate and unlimited since, as we proved, after a few weeks a physiological balance is attained. Having established the fact that this interaction is stable and efficient is an aspect of fundamental importance”. Maurizio Prato and Laura Ballerini conclude as follows: “We are proving that carbon nanotubes perform excellently in terms of duration, adaptability and mechanical compatibility with the tissue. Now we know that their interaction with the biological material, too, is efficient. Based on this evidence, we are already studying the in vivo application, and preliminary results appear to be quite promising also in terms of recovery of the lost neurological functions”.

Here’s a link to and a citation for the paper,

Sculpting neurotransmission during synaptic development by 2D nanostructured interfaces by Niccolò Paolo Pampaloni, Denis Scaini, Fabio Perissinotto, Susanna Bosi, Maurizio Prato, Laura Ballerini. Nanomedicine: Nanotechnology, Biology and Medicine, DOI: http://dx.doi.org/10.1016/j.nano.2017.01.020 Published online: May 25, 2017

This paper is open access.

Brain stuff: quantum entanglement and a multi-dimensional universe

I have two brain news bits, one about neural networks and quantum entanglement and another about how the brain may operate in more than three dimensions.

Quantum entanglement and neural networks

A June 13, 2017 news item on phys.org describes how machine learning can be used to solve problems in physics (Note: Links have been removed),

Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI [Joint Quantum Institute] and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.

An artist’s rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions (Credit: E. Edwards/JQI)

A June 12, 2017 JQI news release by Chris Cesare, which originated the news item, describes how neural networks can represent quantum entanglement,

Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”

On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.

The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.

What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world’s best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo’s triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool’s reach might be. “We immediately recognized that this should be a very important paper,” Deng says, “so we put all our energy and time into studying the problem more.”

The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.
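
To make the “visible neurons plus hidden neurons” picture a little more concrete, here is a minimal NumPy sketch of the kind of restricted-Boltzmann-machine ansatz this work analyzes. It is not the authors’ code: the network sizes, the random real-valued weights and the function name rbm_amplitude are all invented for illustration (real applications generally use complex-valued, variationally optimized parameters). The point is only that summing out the hidden neurons leaves a compact formula for each amplitude, rather than the 2^n numbers a brute-force list of a quantum state would need.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

n_visible, n_hidden = 4, 2  # toy sizes: 4 spins, 2 hidden neurons
a = rng.normal(scale=0.1, size=n_visible)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)               # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))  # visible-hidden connections

def rbm_amplitude(spins):
    """Unnormalized amplitude psi(s) for one configuration of the visible spins.

    The hidden neurons are 'mathematically forgotten' (summed over), leaving a
    product of cosh terms: a description with only
    n_visible + n_hidden + n_visible*n_hidden parameters, instead of the
    2**n_visible amplitudes of a brute-force state vector.
    """
    s = np.asarray(spins, dtype=float)
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + W @ s))

# For this tiny example we can still build the full state vector and normalize it.
configs = list(product([-1.0, 1.0], repeat=n_visible))
psi = np.array([rbm_amplitude(c) for c in configs])
psi /= np.linalg.norm(psi)
print(f"{len(configs)} basis states, largest probability {max(psi**2):.3f}")
```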

Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.

These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.

Here’s a link to and a citation for the paper,

Quantum Entanglement in Neural Network States by Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Phys. Rev. X 7, 021021 – Published 11 May 2017

This paper is open access.

Blue Brain and the multidimensional universe

Blue Brain is a Swiss government brain research initiative which officially came to life in 2006, although the initial agreement between the École Polytechnique Fédérale de Lausanne (EPFL) and IBM was signed in 2005 (according to the project’s Timeline page). Moving on, the project’s latest research reveals something astounding (from a June 12, 2017 Frontiers Publishing press release on EurekAlert),

For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
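
The clique-to-dimension bookkeeping is easy to see in a few lines of code. The sketch below is a toy only: it uses networkx to count undirected cliques in a made-up random graph and treats each clique of k+1 neurons as a k-dimensional simplex, whereas the Blue Brain analysis counts directed cliques in a reconstructed cortical microcircuit and compares the results against carefully matched random controls.

```python
import networkx as nx
from collections import Counter

# A made-up random graph stands in for a real connectome (toy data only).
G = nx.gnp_random_graph(30, 0.25, seed=1)

# Treat every clique of k+1 mutually connected nodes as a k-dimensional simplex:
# an edge (2 neurons) is 1D, a triangle (3 neurons) is 2D, a tetrahedron is 3D, etc.
simplex_counts = Counter(
    len(clique) - 1
    for clique in nx.enumerate_all_cliques(G)
    if len(clique) >= 2
)

for dim in sorted(simplex_counts):
    print(f"dimension {dim}: {simplex_counts[dim]} simplices")

# The published analysis asks whether a real microcircuit contains far more
# high-dimensional simplices than comparable random networks do; rerunning this
# count on edge-shuffled versions of a graph is the crude version of that test.
```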

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain’s wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.

About Blue Brain

The aim of the Blue Brain Project, a Swiss brain initiative founded and directed by Professor Henry Markram, is to build accurate, biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately, the human brain. The supercomputer-based reconstructions and simulations built by Blue Brain offer a radically new approach for understanding the multilevel structure and function of the brain. http://bluebrain.epfl.ch

Here’s a link to and a citation for the paper,

Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function by Michael W. Reimann, Max Nolte, Martina Scolamiero, Katharine Turner, Rodrigo Perin, Giuseppe Chindemi, Paweł Dłotko, Ran Levi, Kathryn Hess, and Henry Markram. Front. Comput. Neurosci., 12 June 2017 | https://doi.org/10.3389/fncom.2017.00048

This paper is open access.

Hacking the human brain with a junction-based artificial synaptic device

Earlier today I published a piece featuring Dr. Wei Lu’s work on memristors and the movement to create an artificial brain (my June 28, 2017 posting: Dr. Wei Lu and bio-inspired ‘memristor’ chips). For this posting I’m featuring a non-memristor (if I’ve properly understood the technology) type of artificial synapse. From a June 28, 2017 news item on Nanowerk,

One of the greatest challenges facing artificial intelligence development is understanding the human brain and figuring out how to mimic it.

Now, one group reports in ACS Nano (“Emulating Bilingual Synaptic Response Using a Junction-Based Artificial Synaptic Device”) that they have developed an artificial synapse capable of simulating a fundamental function of our nervous system — the release of inhibitory and stimulatory signals from the same “pre-synaptic” terminal.

Unfortunately, the American Chemical Society news release on EurekAlert, which originated the news item, doesn’t provide too much more detail,

The human nervous system is made up of over 100 trillion synapses, structures that allow neurons to pass electrical and chemical signals to one another. In mammals, these synapses can initiate and inhibit biological messages. Many synapses just relay one type of signal, whereas others can convey both types simultaneously or can switch between the two. To develop artificial intelligence systems that better mimic human learning, cognition and image recognition, researchers are imitating synapses in the lab with electronic components. Most current artificial synapses, however, are only capable of delivering one type of signal. So, Han Wang, Jing Guo and colleagues sought to create an artificial synapse that can reconfigurably send stimulatory and inhibitory signals.

The researchers developed a synaptic device that can reconfigure itself based on voltages applied at the input terminal of the device. A junction made of black phosphorus and tin selenide enables switching between the excitatory and inhibitory signals. This new device is flexible and versatile, which is highly desirable in artificial neural networks. In addition, the artificial synapses may simplify the design and functions of nervous system simulations.
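
To picture what a “bilingual” synapse means for a network model, here is a schematic Python stand-in. It does not model the black phosphorus/tin selenide junction physics reported in the paper; the class name, threshold and weight values are invented, and the only feature borrowed from the description above is that the polarity of a voltage at the input terminal selects whether the same synapse acts as excitatory or inhibitory.

```python
class BilingualSynapse:
    """Schematic stand-in for a reconfigurable excitatory/inhibitory synapse.

    This is a toy, not device physics: it only captures the idea that one
    'pre-synaptic' terminal can deliver either signal type, with the applied
    control voltage selecting the mode.
    """

    def __init__(self, weight=0.5, control_threshold=0.0):
        self.weight = weight                        # magnitude of the response
        self.control_threshold = control_threshold  # invented switching point

    def respond(self, presynaptic_spike, control_voltage):
        """Return the post-synaptic contribution for one input event."""
        if not presynaptic_spike:
            return 0.0
        # Voltage polarity picks excitatory (+) or inhibitory (-) behaviour,
        # echoing how the reported device switches with the input-terminal bias.
        mode = 1.0 if control_voltage > self.control_threshold else -1.0
        return mode * self.weight


synapse = BilingualSynapse(weight=0.8)
print(synapse.respond(True, control_voltage=+1.0))  # excitatory: prints 0.8
print(synapse.respond(True, control_voltage=-1.0))  # inhibitory: prints -0.8
```

In a conventional artificial neural network this amounts to flipping the sign of a weight on the fly, which is the flexibility the press release highlights for neuromorphic designs.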

Here’s how I concluded that this is not a memristor-type device (from the paper [first paragraph, final sentence]; a link and citation will follow; Note: Links have been removed),

The conventional memristor-type [emphasis mine](14-20) and transistor-type(21-25) artificial synapses can realize synaptic functions in a single semiconductor device but lacks the ability [emphasis mine] to dynamically reconfigure between excitatory and inhibitory responses without the addition of a modulating terminal.

Here’s a link to and a citation for the paper,

Emulating Bilingual Synaptic Response Using a Junction-Based Artificial Synaptic Device by He Tian, Xi Cao, Yujun Xie, Xiaodong Yan, Andrew Kostelec, Don DiMarzio, Cheng Chang, Li-Dong Zhao, Wei Wu, Jesse Tice, Judy J. Cha, Jing Guo, and Han Wang. ACS Nano, Article ASAP DOI: 10.1021/acsnano.7b03033 Publication Date (Web): June 28, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.