Tag Archives: neurons

Carbon nanotubes to repair nerve fibres (cyborg brains?)

Can cyborg brains be far behind now that researchers are looking at ways to repair nerve fibers with carbon nanotubes (CNTs)? A June 26, 2017 news item on ScienceDaily describes the scheme,

Carbon nanotubes exhibit interesting characteristics rendering them particularly suited to the construction of special hybrid devices (consisting of biological tissue and synthetic material) planned to re-establish connections between nerve cells, for instance at the spinal level, lost on account of lesions or trauma. This is the result of a piece of research published in the scientific journal Nanomedicine: Nanotechnology, Biology, and Medicine, conducted by a multi-disciplinary team comprising SISSA (International School for Advanced Studies), the University of Trieste, ELETTRA Sincrotrone and two Spanish institutions, the Basque Foundation for Science and CIC BiomaGUNE.

More specifically, the researchers investigated the possible effects on neurons of interaction with carbon nanotubes. They have shown that these nanomaterials may regulate the formation of synapses, the specialized structures through which nerve cells communicate, and modulate biological mechanisms, such as the growth of neurons, as part of a self-regulating process.

This result, which shows how stable and efficient the integration between nerve cells and these synthetic structures can be, highlights the great potential of carbon nanotubes as innovative materials capable of facilitating neuronal regeneration or of creating a kind of artificial bridge between groups of neurons whose connection has been interrupted. In vivo testing has, in fact, already begun.

The researchers have included a gorgeous image to illustrate their work,

Caption: Scientists have proven that these nanomaterials may regulate the formation of synapses, specialized structures through which the nerve cells communicate, and modulate biological mechanisms, such as the growth of neurons, as part of a self-regulating process. Credit: Pixabay

A June 26, 2017 SISSA press release (also on EurekAlert), which originated the news item, describes the work in more detail while explaining future research needs,

“Interface systems, or, more generally, neuronal prostheses, that enable an effective re-establishment of these connections are under active investigation,” explain Laura Ballerini (SISSA) and Maurizio Prato (UniTS-CIC BiomaGUNE), who coordinate the research project. “The perfect material to build these neural interfaces does not exist, yet the carbon nanotubes we are working on have already proved to have great potential. After all, nanomaterials currently represent our best hope for developing innovative strategies in the treatment of spinal cord injuries.” These nanomaterials are used both as scaffolds, a supportive framework for nerve cells, and as interfaces that release the signals which empower nerve cells to communicate with each other.

Many aspects, however, still need to be addressed. Among them, the impact on neuronal physiology of the integration of these nanometric structures with the cell membrane. “Studying the interaction between these two elements is crucial, as it might also lead to some undesired effects, which we ought to exclude”. Laura Ballerini explains: “If, for example, the mere contact provoked a vertiginous rise in the number of synapses, these materials would be essentially unusable”. “This”, Maurizio Prato adds, “is precisely what we have investigated in this study where we used pure carbon nanotubes”.

The results of the research are extremely encouraging: “First of all we have proved that nanotubes do not interfere with the composition of lipids, of cholesterol in particular, which make up the cellular membrane in neurons. Membrane lipids play a very important role in the transmission of signals through the synapses. Nanotubes do not seem to influence this process, which is very important”.

There is more, however. The research has also highlighted the fact that the nerve cells growing on the substratum of nanotubes, thanks to this interaction, develop and reach maturity very quickly, eventually reaching a condition of biological homeostasis. “Nanotubes facilitate the full growth of neurons and the formation of new synapses. This growth, however, is not indiscriminate and unlimited since, as we proved, after a few weeks a physiological balance is attained. Having established the fact that this interaction is stable and efficient is an aspect of fundamental importance”. Maurizio Prato and Laura Ballerini conclude as follows: “We are proving that carbon nanotubes perform excellently in terms of duration, adaptability and mechanical compatibility with the tissue. Now we know that their interaction with the biological material, too, is efficient. Based on this evidence, we are already studying the in vivo application, and preliminary results appear to be quite promising also in terms of recovery of the lost neurological functions”.

Here’s a link to and a citation for the paper,

Sculpting neurotransmission during synaptic development by 2D nanostructured interfaces by Niccolò Paolo Pampaloni, Denis Scaini, Fabio Perissinotto, Susanna Bosi, Maurizio Prato, Laura Ballerini. Nanomedicine: Nanotechnology, Biology and Medicine, DOI: http://dx.doi.org/10.1016/j.nano.2017.01.020 Published online: May 25, 2017

This paper is open access.

Brain stuff: quantum entanglement and a multi-dimensional universe

I have two brain news bits, one about neural networks and quantum entanglement and another about how the brain operates in more than three dimensions.

Quantum entanglement and neural networks

A June 13, 2017 news item on phys.org describes how machine learning can be used to solve problems in physics (Note: Links have been removed),

Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI [Joint Quantum Institute] and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.

An artist’s rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions (Credit: E. Edwards/JQI)

A June 12, 2017 JQI news release by Chris Cesare, which originated the news item, describes how neural networks can represent quantum entanglement,

Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”

On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.

The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.

What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world’s best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo’s triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool’s reach might be. “We immediately recognized that this should be a very important paper,” Deng says, “so we put all our energy and time into studying the problem more.”

The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.
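For readers who like to see ideas in code: the visible/hidden scheme described above matches the restricted-Boltzmann-machine (RBM) form of neural-network state used in this line of work, where the hidden neurons can be summed out analytically. Here's a toy sketch of my own (not the researchers' code) showing how few numbers such a representation needs; all the weights (`a`, `b`, `W`) are made-up placeholders.

```python
import math
import itertools

def rbm_amplitude(spins, a, b, W):
    """Unnormalized amplitude psi(s) of an RBM-style neural-network state:
    psi(s) = exp(sum_i a_i * s_i) * prod_j 2*cosh(b_j + sum_i W_ij * s_i),
    i.e. the hidden neurons have been summed out analytically."""
    visible_term = math.exp(sum(ai * si for ai, si in zip(a, spins)))
    hidden_term = 1.0
    for j in range(len(b)):
        theta = b[j] + sum(W[i][j] * spins[i] for i in range(len(spins)))
        hidden_term *= 2.0 * math.cosh(theta)
    return visible_term * hidden_term

# Toy example: 3 visible spins, 2 hidden neurons, all weights zero
a = [0.0, 0.0, 0.0]
b = [0.0, 0.0]
W = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
# With zero weights every spin configuration gets the same amplitude (2**2 = 4)
for s in itertools.product([-1, 1], repeat=3):
    assert abs(rbm_amplitude(s, a, b, W) - 4.0) < 1e-12
```

The point is the compactness: a state of many particles is specified by a handful of biases and connection weights rather than an exponentially long list of amplitudes.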

Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.

These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.

Here’s a link to and a citation for the paper,

Quantum Entanglement in Neural Network States by Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Phys. Rev. X 7, 021021 – Published 11 May 2017

This paper is open access.

Blue Brain and the multidimensional universe

Blue Brain is a Swiss government brain research initiative which officially came to life in 2006 although the initial agreement between the École Polytechnique Fédérale de Lausanne (EPFL) and IBM was signed in 2005 (according to the project’s Timeline page). Moving on, the project’s latest research reveals something astounding (from a June 12, 2017 Frontiers Publishing press release on EurekAlert),

For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
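For the mathematically inclined: a clique of n all-to-all connected neurons corresponds to an (n-1)-dimensional simplex, which is where the eleven dimensions come from (twelve mutually connected neurons). Here's a tiny illustrative sketch of my own (the paper actually works with directed cliques, a refinement I'm ignoring here),

```python
from itertools import combinations

def cliques(adj, k):
    """All k-cliques (size-k fully connected subsets) of an undirected graph,
    given as a dict mapping each node to its set of neighbours."""
    nodes = sorted(adj)
    return [c for c in combinations(nodes, k)
            if all(v in adj[u] for u, v in combinations(c, 2))]

# Toy network: 4 neurons, all pairwise connected (a tetrahedron)
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
# A k-clique corresponds to a (k-1)-dimensional geometric object
assert len(cliques(adj, 4)) == 1   # one 3D "solid"
assert len(cliques(adj, 3)) == 4   # its four triangular (2D) faces
```

Counting cliques of every size across a network of tens of thousands of neurons is exactly the kind of bookkeeping algebraic topology makes tractable.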

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain’s wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes, that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.

###

About Blue Brain

The aim of the Blue Brain Project, a Swiss brain initiative founded and directed by Professor Henry Markram, is to build accurate, biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately, the human brain. The supercomputer-based reconstructions and simulations built by Blue Brain offer a radically new approach for understanding the multilevel structure and function of the brain. http://bluebrain.epfl.ch

About Frontiers

Frontiers is a leading community-driven open-access publisher. By taking publishing entirely online, we drive innovation with new technologies to make peer review more efficient and transparent. We provide impact metrics for articles and researchers, and merge open access publishing with a research network platform – Loop – to catalyse research dissemination, and popularize research to the public, including children. Our goal is to increase the reach and impact of research articles and their authors. Frontiers has received the ALPSP Gold Award for Innovation in Publishing in 2014. http://www.frontiersin.org.

Here’s a link to and a citation for the paper,

Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function by Michael W. Reimann, Max Nolte, Martina Scolamiero, Katharine Turner, Rodrigo Perin, Giuseppe Chindemi, Paweł Dłotko, Ran Levi, Kathryn Hess, and Henry Markram. Front. Comput. Neurosci., 12 June 2017 | https://doi.org/10.3389/fncom.2017.00048

This paper is open access.

Hacking the human brain with a junction-based artificial synaptic device

Earlier today I published a piece featuring Dr. Wei Lu’s work on memristors and the movement to create an artificial brain (my June 28, 2017 posting: Dr. Wei Lu and bio-inspired ‘memristor’ chips). For this posting I’m featuring a non-memristor (if I’ve properly understood the technology) type of artificial synapse. From a June 28, 2017 news item on Nanowerk,

One of the greatest challenges facing artificial intelligence development is understanding the human brain and figuring out how to mimic it.

Now, one group reports in ACS Nano (“Emulating Bilingual Synaptic Response Using a Junction-Based Artificial Synaptic Device”) that they have developed an artificial synapse capable of simulating a fundamental function of our nervous system — the release of inhibitory and stimulatory signals from the same “pre-synaptic” terminal.

Unfortunately, the American Chemical Society news release on EurekAlert, which originated the news item, doesn’t provide too much more detail,

The human nervous system is made up of over 100 trillion synapses, structures that allow neurons to pass electrical and chemical signals to one another. In mammals, these synapses can initiate and inhibit biological messages. Many synapses just relay one type of signal, whereas others can convey both types simultaneously or can switch between the two. To develop artificial intelligence systems that better mimic human learning, cognition and image recognition, researchers are imitating synapses in the lab with electronic components. Most current artificial synapses, however, are only capable of delivering one type of signal. So, Han Wang, Jing Guo and colleagues sought to create an artificial synapse that can reconfigurably send stimulatory and inhibitory signals.

The researchers developed a synaptic device that can reconfigure itself based on voltages applied at the input terminal of the device. A junction made of black phosphorus and tin selenide enables switching between the excitatory and inhibitory signals. This new device is flexible and versatile, which is highly desirable in artificial neural networks. In addition, the artificial synapses may simplify the design and functions of nervous system simulations.
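To make the "reconfigurable" idea concrete, here's a deliberately simplified toy model of my own (not from the paper, whose device physics involves a black phosphorus/tin selenide junction): a single synapse whose sign, excitatory or inhibitory, is set by a control voltage.

```python
def synaptic_response(weight, mode_voltage):
    """Toy 'bilingual' synapse: the same junction delivers an excitatory
    (positive) or inhibitory (negative) signal depending on the voltage
    applied at its modulating input."""
    sign = 1 if mode_voltage >= 0 else -1
    return sign * abs(weight)

assert synaptic_response(0.4, +0.5) == 0.4    # excitatory mode
assert synaptic_response(0.4, -0.5) == -0.4   # inhibitory mode
```

The novelty the authors claim is doing this in one device, rather than wiring up separate excitatory and inhibitory components.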

Here’s how I concluded that this is not a memristor-type device (from the paper [first paragraph, final sentence]; a link and citation will follow; Note: Links have been removed),

The conventional memristor-type [emphasis mine](14-20) and transistor-type(21-25) artificial synapses can realize synaptic functions in a single semiconductor device but lacks the ability [emphasis mine] to dynamically reconfigure between excitatory and inhibitory responses without the addition of a modulating terminal.

Here’s a link to and a citation for the paper,

Emulating Bilingual Synaptic Response Using a Junction-Based Artificial Synaptic Device by He Tian, Xi Cao, Yujun Xie, Xiaodong Yan, Andrew Kostelec, Don DiMarzio, Cheng Chang, Li-Dong Zhao, Wei Wu, Jesse Tice, Judy J. Cha, Jing Guo, and Han Wang. ACS Nano, Article ASAP DOI: 10.1021/acsnano.7b03033 Publication Date (Web): June 28, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Dr. Wei Lu and bio-inspired ‘memristor’ chips

It’s been a while since I’ve featured Dr. Wei Lu’s work here (this April 15, 2010 posting features Lu’s most relevant previous work). Here’s his latest ‘memristor’ work, from a May 22, 2017 news item on Nanowerk (Note: A link has been removed),

Inspired by how mammals see, a new “memristor” computer circuit prototype at the University of Michigan has the potential to process complex data, such as images and video, orders of magnitude faster and with much less power than today’s most advanced systems.

Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, U-M professor of electrical engineering and computer science. Lu is lead author of a paper on the work published in the current issue of Nature Nanotechnology (“Sparse coding with memristor networks”).

Lu’s next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and his colleagues demonstrate an algorithm that relies on a technique called “sparse coding” to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos.

A May 22, 2017 University of Michigan news release (also on EurekAlert), which originated the news item, provides more information about memristors and about the research,

Memristors are electrical resistors with memory—advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. In a conventional computer, logic and memory functions are located at different parts of the circuit.

“The tasks we ask of today’s computers have grown in complexity,” Lu said. “In this ‘big data’ era, computers require costly, constant and slow communications between their processor and memory to retrieve large amounts of data. This makes them large, expensive and power-hungry.”

But like neural networks in a biological brain, networks of memristors can perform many operations at the same time, without having to move data around. As a result, they could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning. Memristors are good candidates for deep neural networks, a branch of machine learning, which trains computers to execute processes without being explicitly programmed to do so.

“We need our next-generation electronics to be able to quickly process complex data in a dynamic environment. You can’t just write a program to do that. Sometimes you don’t even have a pre-defined task,” Lu said. “To make our systems smarter, we need to find ways for them to process a lot of data more efficiently. Our approach to accomplish that is inspired by neuroscience.”

A mammal’s brain is able to generate sweeping, split-second impressions of what the eyes take in. One reason is that they can quickly recognize different arrangements of shapes. Humans do this using only a limited number of neurons that become active, Lu says. Both neuroscientists and computer scientists call the process “sparse coding.”

“When we take a look at a chair we will recognize it because its characteristics correspond to our stored mental picture of a chair,” Lu said. “Although not all chairs are the same and some may differ from a mental prototype that serves as a standard, each chair retains some of the key characteristics necessary for easy recognition. Basically, the object is correctly recognized the moment it is properly classified—when ‘stored’ in the appropriate category in our heads.”

Caption: Image of a memristor chip

Similarly, Lu’s electronic system is designed to detect the patterns very efficiently—and to use as few features as possible to describe the original input.

In our brains, different neurons recognize different patterns, Lu says.

“When we see an image, the neurons that recognize it will become more active,” he said. “The neurons will also compete with each other to naturally create an efficient representation. We’re implementing this approach in our electronic system.”
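For anyone wanting a feel for what sparse coding looks like computationally, here's a toy greedy version (matching pursuit). This is my own illustrative code, not the competition-based algorithm the team implemented on their 32-by-32 memristor array, and the two-element "dictionary atoms" are made up for the example.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, dictionary, n_iters=3):
    """Greedy sparse coding: repeatedly pick the dictionary atom most
    correlated with the residual and subtract its contribution, so the
    signal ends up described by as few active atoms as possible."""
    residual = list(signal)
    coeffs = [0.0] * len(dictionary)
    for _ in range(n_iters):
        scores = [dot(residual, atom) for atom in dictionary]
        best = max(range(len(dictionary)), key=lambda j: abs(scores[j]))
        coeffs[best] += scores[best]
        residual = [r - scores[best] * a
                    for r, a in zip(residual, dictionary[best])]
    return coeffs

# Toy dictionary of unit-length "features"; the signal is exactly the first atom
dictionary = [[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071]]
coeffs = matching_pursuit([2.0, 0.0], dictionary)
assert coeffs[0] == 2.0 and coeffs[1] == 0.0  # one active "neuron" suffices
```

The memristor version does the "competition" between atoms in analog hardware, in parallel, which is where the speed and power savings come from.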

The researchers trained their system to learn a “dictionary” of images. Trained on a set of grayscale image patterns, their memristor network was able to reconstruct images of famous paintings and photos and other test patterns.

If their system can be scaled up, they expect to be able to process and analyze video in real time in a compact system that can be directly integrated with sensors or cameras.

The project is titled “Sparse Adaptive Local Learning for Sensing and Analytics.” Other collaborators are Zhengya Zhang and Michael Flynn of the U-M Department of Electrical Engineering and Computer Science, Garrett Kenyon of the Los Alamos National Lab and Christof Teuscher of Portland State University.

The work is part of a $6.9 million Unconventional Processing of Signals for Intelligent Data Exploitation project that aims to build a computer chip based on self-organizing, adaptive neural networks. It is funded by the [US] Defense Advanced Research Projects Agency [DARPA].

Here’s a link to and a citation for the paper,

Sparse coding with memristor networks by Patrick M. Sheridan, Fuxi Cai, Chao Du, Wen Ma, Zhengya Zhang, & Wei D. Lu. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.83 Published online 22 May 2017

This paper is behind a paywall.

For the interested, there are a number of postings featuring memristors here (just use ‘memristor’ as your search term in the blog search engine). You might also want to check out ‘neuromorphic engineering’ and ‘neuromorphic computing’ and ‘artificial brain’.

Predicting how a memristor functions

An April 3, 2017 news item on Nanowerk announces a new memristor development (Note: A link has been removed),

Researchers from the CNRS [Centre national de la recherche scientifique; France], Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have created an artificial synapse capable of learning autonomously. They were also able to model the device, which is essential for developing more complex circuits. The research was published in Nature Communications (“Learning through ferroelectric domain dynamics in solid-state synapses”).

An April 3, 2017 CNRS press release, which originated the news item, provides a nice introduction to the memristor concept before providing a few more details about this latest work (Note: A link has been removed),

One of the goals of biomimetics is to take inspiration from the functioning of the brain [also known as neuromorphic engineering or neuromorphic computing] in order to design increasingly intelligent machines. This principle is already at work in information technology, in the form of the algorithms used for completing certain tasks, such as image recognition; this, for instance, is what Facebook uses to identify photos. However, the procedure consumes a lot of energy. Vincent Garcia (Unité mixte de physique CNRS/Thales) and his colleagues have just taken a step forward in this area by creating directly on a chip an artificial synapse that is capable of learning. They have also developed a physical model that explains this learning capacity. This discovery opens the way to creating a network of synapses and hence intelligent systems requiring less time and energy.

Our brain’s learning process is linked to our synapses, which serve as connections between our neurons. The more the synapse is stimulated, the more the connection is reinforced and learning improved. Researchers took inspiration from this mechanism to design an artificial synapse, called a memristor. This electronic nanocomponent consists of a thin ferroelectric layer sandwiched between two electrodes, and whose resistance can be tuned using voltage pulses similar to those in neurons. If the resistance is low the synaptic connection will be strong, and if the resistance is high the connection will be weak. This capacity to adapt its resistance enables the synapse to learn.
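The pulse-tunable resistance described above can be caricatured in a few lines of code. This is a toy model of my own (real ferroelectric memristors have much richer, history-dependent dynamics, which is exactly what the researchers' physical model captures); the step size and conductance range are arbitrary.

```python
class MemristiveSynapse:
    """Toy memristor model: conductance (the synaptic weight) drifts up or
    down with each voltage pulse, clipped to a physical range."""
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, step=0.1):
        self.g, self.g_min, self.g_max, self.step = g, g_min, g_max, step

    def pulse(self, voltage):
        # Positive pulses potentiate (strengthen the connection, i.e. raise
        # conductance / lower resistance); negative pulses depress it
        self.g += self.step if voltage > 0 else -self.step
        self.g = max(self.g_min, min(self.g_max, self.g))
        return self.g

syn = MemristiveSynapse()
for _ in range(3):
    syn.pulse(+1.0)            # repeated stimulation reinforces the synapse
assert abs(syn.g - 0.8) < 1e-12
syn.pulse(-1.0)                # an opposite pulse weakens it
assert abs(syn.g - 0.7) < 1e-12
```

Low resistance (high conductance) means a strong synaptic connection, high resistance a weak one, just as the press release describes.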

Although research focusing on these artificial synapses is central to the concerns of many laboratories, the functioning of these devices remained largely unknown. The researchers have succeeded, for the first time, in developing a physical model able to predict how they function. This understanding of the process will make it possible to create more complex systems, such as a series of artificial neurons interconnected by these memristors.

As part of the ULPEC H2020 European project, this discovery will be used for real-time shape recognition using an innovative camera: the pixels remain inactive, except when they see a change in the angle of vision. The data processing procedure will require less energy, and will take less time to detect the selected objects. The research involved teams from the CNRS/Thales physics joint research unit, the Laboratoire de l’intégration du matériau au système (CNRS/Université de Bordeaux/Bordeaux INP), the University of Arkansas (US), the Centre de nanosciences et nanotechnologies (CNRS/Université Paris-Sud), the Université d’Evry, and Thales.

 

Caption: Artist’s impression of the electronic synapse: the particles represent electrons circulating through oxide, by analogy with neurotransmitters in biological synapses. The flow of electrons depends on the oxide’s ferroelectric domain structure, which is controlled by electric voltage pulses. Credit: © Sören Boyn / CNRS/Thales physics joint research unit.


Here’s a link to and a citation for the paper,

Learning through ferroelectric domain dynamics in solid-state synapses by Sören Boyn, Julie Grollier, Gwendal Lecerf, Bin Xu, Nicolas Locatelli, Stéphane Fusil, Stéphanie Girod, Cécile Carrétéro, Karin Garcia, Stéphane Xavier, Jean Tomas, Laurent Bellaiche, Manuel Bibes, Agnès Barthélémy, Sylvain Saïghi, & Vincent Garcia. Nature Communications 8, Article number: 14736 (2017) doi:10.1038/ncomms14736 Published online: 03 April 2017

This paper is open access.

Thales or Thales Group is a French company, from its Wikipedia entry (Note: Links have been removed),

Thales Group (French: [talɛs]) is a French multinational company that designs and builds electrical systems and provides services for the aerospace, defence, transportation and security markets. Its headquarters are in La Défense[2] (the business district of Paris), and its stock is listed on the Euronext Paris.

The company changed its name to Thales (from the Greek philosopher Thales,[3] pronounced [talɛs] reflecting its pronunciation in French) from Thomson-CSF in December 2000 shortly after the £1.3 billion acquisition of Racal Electronics plc, a UK defence electronics group. It is partially state-owned by the French government,[4] and has operations in more than 56 countries. It has 64,000 employees and generated €14.9 billion in revenues in 2016. The Group is ranked as the 475th largest company in the world by Fortune 500 Global.[5] It is also the 10th largest defence contractor in the world[6] and 55% of its total sales are military sales.[4]

The ULPEC (Ultra-Low Power Event-Based Camera) H2020 [Horizon 2020 funded] European project can be found here,

The long term goal of ULPEC is to develop advanced vision applications with ultra-low power requirements and ultra-low latency. The output of the ULPEC project is a demonstrator connecting a neuromorphic event-based camera to a high speed ultra-low power consumption asynchronous visual data processing system (Spiking Neural Network with memristive synapses). Although the ULPEC device aims to reach TRL 4, it is a highly application-oriented project: prospective use cases will b…

Finally, for anyone curious about Thales, the philosopher (from his Wikipedia entry), Note: Links have been removed,

Thales of Miletus (/ˈθeɪliːz/; Greek: Θαλῆς (ὁ Μῑλήσιος), Thalēs; c. 624 – c. 546 BC) was a pre-Socratic Greek/Phoenician philosopher, mathematician and astronomer from Miletus in Asia Minor (present-day Milet in Turkey). He was one of the Seven Sages of Greece. Many, most notably Aristotle, regard him as the first philosopher in the Greek tradition,[1][2] and he is otherwise historically recognized as the first individual in Western civilization known to have entertained and engaged in scientific philosophy.[3][4]

What is a multiregional brain-on-a-chip?

In response to having created a multiregional brain-on-a-chip, there’s an explanation from the team at Harvard University (which answers my question) in a Jan. 13, 2017 Harvard John A. Paulson School of Engineering and Applied Sciences news release (also on EurekAlert) by Leah Burrows,

Harvard University researchers have developed a multiregional brain-on-a-chip that models the connectivity between three distinct regions of the brain. The in vitro model was used to extensively characterize the differences between neurons from different regions of the brain and to mimic the system’s connectivity.

“The brain is so much more than individual neurons,” said Ben Maoz, co-first author of the paper and postdoctoral fellow in the Disease Biophysics Group in the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). “It’s about the different types of cells and the connectivity between different regions of the brain. When modeling the brain, you need to be able to recapitulate that connectivity because there are many different diseases that attack those connections.”

“Roughly twenty-six percent of the US healthcare budget is spent on neurological and psychiatric disorders,” said Kit Parker, the Tarr Family Professor of Bioengineering and Applied Physics at SEAS and Core Faculty Member of the Wyss Institute for Biologically Inspired Engineering at Harvard University. “Tools to support the development of therapeutics to alleviate the suffering of these patients are not only the human thing to do, they are the best means of reducing this cost.”

Researchers from the Disease Biophysics Group at SEAS and the Wyss Institute modeled three regions of the brain most affected by schizophrenia — the amygdala, hippocampus and prefrontal cortex.

They began by characterizing the cell composition, protein expression, metabolism, and electrical activity of neurons from each region in vitro.

“It’s no surprise that neurons in distinct regions of the brain are different but it is surprising just how different they are,” said Stephanie Dauth, co-first author of the paper and former postdoctoral fellow in the Disease Biophysics Group. “We found that the cell-type ratio, the metabolism, the protein expression and the electrical activity all differ between regions in vitro. This shows that it does make a difference which brain region’s neurons you’re working with.”

Next, the team looked at how these neurons change when they’re communicating with one another. To do that, they cultured cells from each region independently and then let the cells establish connections via guided pathways embedded in the chip.

The researchers then measured cell composition and electrical activity again and found that the cells dramatically changed when they were in contact with neurons from different regions.

“When the cells are communicating with other regions, the cellular composition of the culture changes, the electrophysiology changes, all these inherent properties of the neurons change,” said Maoz. “This shows how important it is to implement different brain regions into in vitro models, especially when studying how neurological diseases impact connected regions of the brain.”

To demonstrate the chip’s efficacy in modeling disease, the team doped different regions of the brain model with the drug Phencyclidine hydrochloride — commonly known as PCP — which simulates schizophrenia. The brain-on-a-chip allowed the researchers for the first time to look at both the drug’s impact on the individual regions as well as its downstream effect on the interconnected regions in vitro.

The brain-on-a-chip could be useful for studying any number of neurological and psychiatric diseases, including drug addiction, post traumatic stress disorder, and traumatic brain injury.

“To date, the Connectome project has not recognized all of the networks in the brain,” said Parker. “In our studies, we are showing that the extracellular matrix network is an important part of distinguishing different brain regions and that, subsequently, physiological and pathophysiological processes in these brain regions are unique. This advance will not only enable the development of therapeutics, but fundamental insights as to how we think, feel, and survive.”

Here’s an image from the researchers,

Caption: Image of the in vitro model showing three distinct regions of the brain connected by axons. Credit: Disease Biophysics Group/Harvard University

Here’s a link to and a citation for the paper,

Neurons derived from different brain regions are inherently different in vitro: A novel multiregional brain-on-a-chip by Stephanie Dauth, Ben M Maoz, Sean P Sheehy, Matthew A Hemphill, Tara Murty, Mary Kate Macedonia, Angie M Greer, Bogdan Budnik, Kevin Kit Parker. Journal of Neurophysiology Published 28 December 2016 Vol. no. [?], DOI: 10.1152/jn.00575.2016

This paper is behind a paywall and they haven’t included the vol. no. in the citation I’ve found.

Using melanin in bioelectronic devices

Brazilian researchers are working with melanin to make biosensors and other bioelectronic devices according to a Dec. 20, 2016 news item on phys.org,

Bioelectronics, sometimes called the next medical frontier, is a research field that combines electronics and biology to develop miniaturized implantable devices capable of altering and controlling electrical signals in the human body. Large corporations are increasingly interested: a joint venture in the field has recently been announced by Alphabet, Google’s parent company, and pharmaceutical giant GlaxoSmithKline (GSK).

One of the challenges that scientists face in developing bioelectronic devices is identifying and finding ways to use materials that conduct not only electrons but also ions, as most communication and other processes in the human organism use ionic biosignals (e.g., neurotransmitters). In addition, the materials must be biocompatible.

Resolving this challenge is one of the motivations for researchers at São Paulo State University’s School of Sciences (FC-UNESP) at Bauru in Brazil. They have succeeded in developing a novel route to synthesize melanin more rapidly and to enable the use of this polymeric compound, which pigments the skin, eyes and hair of mammals and is considered one of the most promising materials for miniaturized implantable devices such as biosensors.

A Dec. 14, 2016 FAPESP (São Paulo Research Foundation) press release, which originated the news item, further describes both the research and a recent meeting where the research was shared (Note: A link has been removed),

Some of the group’s research findings were presented at FAPESP Week Montevideo during a round-table session on materials science and engineering.

The symposium was organized by the Montevideo Group Association of Universities (AUGM), Uruguay’s University of the Republic (UdelaR) and FAPESP and took place on November 17-18 at UdelaR’s campus in Montevideo. Its purpose was to strengthen existing collaborations and establish new partnerships among South American scientists in a range of knowledge areas. Researchers and leaders of institutions in Uruguay, Brazil, Argentina, Chile and Paraguay attended the meeting.

“All the materials that have been tested to date for applications in bioelectronics are entirely synthetic,” said Carlos Frederico de Oliveira Graeff, a professor at UNESP Bauru and principal investigator for the project, in an interview given to Agência FAPESP.

“One of the great advantages of melanin is that it’s a totally natural compound and biocompatible with the human body: hence its potential use in electronic devices that interface with brain neurons, for example.”

Application challenges

According to Graeff, the challenges of using melanin as a material for the development of bioelectronic devices include the fact that, like other carbon-based materials such as graphene, melanin is not easily dispersible in an aqueous medium, a characteristic that hinders its application in thin-film production.

Furthermore, the conventional process for synthesizing melanin is complex: several steps are hard to control, it can take up to 56 days, and it can result in disorderly structures.

In a series of studies performed in recent years at the Center for Research and Development of Functional Materials (CDMF), where Graeff is a leading researcher and which is one of the Research, Innovation and Dissemination Centers (RIDCs) funded by FAPESP, he and his collaborators managed to obtain biosynthetic melanin with good dispersion in water and a strong resemblance to natural melanin using a novel synthesis route.

The process developed by the group at CDMF takes only a few hours and is based on changes in parameters such as temperature and the application of oxygen pressure to promote oxidation of the material.

By applying oxygen pressure, the researchers were able to increase the density of carboxyl groups, which are organic functional groups consisting of a carbon atom double bonded to an oxygen atom and single bonded to a hydroxyl group (oxygen + hydrogen). This enhances solubility and facilitates the suspension of biosynthetic melanin in water.

“The production of thin films of melanin with high homogeneity and quality is made far easier by these characteristics,” Graeff said.

By increasing the density of carboxyl groups, the researchers were also able to make biosynthetic melanin more similar to the biological compound.

In living organisms, an enzyme that participates in the synthesis of melanin facilitates the production of carboxylic acids. The new melanin synthesis route enabled the researchers to mimic the role of this enzyme chemically while increasing carboxyl group density.

“We’ve succeeded in obtaining a material that’s very close to biological melanin by chemical synthesis and in producing high-quality film for use in bioelectronic devices,” Graeff said.

Through collaboration with colleagues at research institutions in Canada [emphasis mine], the Brazilian researchers have begun using the material in a series of applications, including electrical contacts, pH sensors and photovoltaic cells.

More recently, they have embarked on an attempt to develop a transistor, a semiconductor device used to amplify or switch electronic signals and electrical power.

“Above all, we aim to produce transistors precisely in order to enhance this coupling of electronics with biological systems,” Graeff said.

I’m glad to have gotten some information about the work in South America. It’s one of FrogHeart’s shortcomings that I have so little about the research in that area of the world. I believe this is largely due to my lack of Spanish language skills. Perhaps one day there’ll be a universal translator that works well. In the meantime, it was a surprise to see Canada mentioned in this piece. I wonder which Canadian research institutions are involved with this research in South America.

Changing synaptic connectivity with a memristor

The French have announced some research into memristive devices that mimic both short-term and long-term neural plasticity according to a Dec. 6, 2016 news item on Nanowerk,

Leti researchers have demonstrated that memristive devices are excellent candidates to emulate synaptic plasticity, the capability of synapses to enhance or diminish their connectivity between neurons, which is widely believed to be the cellular basis for learning and memory.

The breakthrough was presented today [Dec. 6, 2016] at IEDM [International Electron Devices Meeting] 2016 in San Francisco in the paper, “Experimental Demonstration of Short and Long Term Synaptic Plasticity Using OxRAM Multi k-bit Arrays for Reliable Detection in Highly Noisy Input Data”.

Neural systems such as the human brain exhibit various types and time periods of plasticity, e.g. synaptic modifications can last anywhere from seconds to days or months. However, prior research on implementing synaptic plasticity with memristive devices relied primarily on simplified rules for plasticity and learning.

The project team, which includes researchers from Leti’s sister institute at CEA Tech, List, along with INSERM and Clinatec, proposed an architecture that implements both short- and long-term plasticity (STP and LTP) using RRAM devices.

A Dec. 6, 2016 Laboratoire d’électronique des technologies de l’information (LETI) press release, which originated the news item, elaborates,

“While implementing a learning rule for permanent modifications – LTP, based on spike-timing-dependent plasticity – we also incorporated the possibility of short-term modifications with STP, based on the Tsodyks/Markram model,” said Elisa Vianello, Leti non-volatile memories and cognitive computing specialist/research engineer. “We showed the benefits of utilizing both kinds of plasticity with visual pattern extraction and decoding of neural signals. LTP allows our artificial neural networks to learn patterns, and STP makes the learning process very robust against environmental noise.”

Resistive random-access memory (RRAM) devices coupled with a spike-coding scheme are key to implementing unsupervised learning with minimal hardware footprint and low power consumption. Embedding neuromorphic learning into low-power devices could enable design of autonomous systems, such as a brain-machine interface that makes decisions based on real-time, on-line processing of in-vivo recorded biological signals. Biological data are intrinsically highly noisy, and the proposed combined LTP and STP learning rule is a powerful technique to improve the detection/recognition rate. This approach may enable the design of autonomous implantable devices for rehabilitation purposes.

Leti, which has worked on RRAM to develop hardware neuromorphic architectures since 2010, is the coordinator of the H2020 [Horizon 2020] European project NeuRAM3. That project is working on fabricating a chip with architecture that supports state-of-the-art machine-learning algorithms and spike-based learning mechanisms.
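For readers who want a feel for how the two mechanisms in Vianello's quote differ, here is a minimal sketch of pair-based STDP (the long-term rule) and the Tsodyks/Markram short-term depression model she names. The time constants and amplitudes below are arbitrary illustrative values of my own, not Leti's parameters, and the pair-based STDP form is a common textbook simplification rather than their exact implementation.

```python
import math

# Sketch of the two plasticity mechanisms named in the press release.
# All constants are arbitrary illustrative values.

TAU_STDP = 20.0                  # ms, STDP time constant (assumed)
A_PLUS, A_MINUS = 0.05, 0.06     # LTP/LTD amplitudes (assumed)

def stdp_dw(t_pre, t_post):
    """Long-term weight change from one pre/post spike pair:
    potentiate if the presynaptic spike precedes the postsynaptic one,
    depress otherwise, with exponentially decaying magnitude."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_STDP)
    return -A_MINUS * math.exp(dt / TAU_STDP)

def tsodyks_markram(spike_times, U=0.5, tau_rec=800.0):
    """Short-term depression: each spike uses a fraction U of the
    available synaptic resources x, which recover toward 1 with time
    constant tau_rec (ms). Returns the efficacy of each spike."""
    x, last_t, efficacies = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)
        efficacies.append(U * x)
        x -= U * x
        last_t = t
    return efficacies

print(stdp_dw(10.0, 15.0))               # pre before post: positive (LTP)
print(tsodyks_markram([0, 10, 20, 30]))  # efficacy fades under rapid firing
```

The short-term efficacies recover between bursts, which is why, as the quote notes, STP can filter transient noise while STDP stores the lasting pattern.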

That’s it folks.

Getting your brain cells to glow in the dark

The extraordinary effort to colonize our brains continues apace with a new sensor from Vanderbilt University. From an Oct. 27, 2016 news item on ScienceDaily,

A new kind of bioluminescent sensor causes individual brain cells to imitate fireflies and glow in the dark.

The probe, which was developed by a team of Vanderbilt scientists, is a genetically modified form of luciferase, the enzyme that a number of other species including fireflies use to produce light. …

The scientists created the technique as a new and improved method for tracking the interactions within large neural networks in the brain.

“For a long time neuroscientists relied on electrical techniques for recording the activity of neurons. These are very good at monitoring individual neurons but are limited to small numbers of neurons. The new wave is to use optical techniques to record the activity of hundreds of neurons at the same time,” said Carl Johnson, Stevenson Professor of Biological Sciences, who headed the effort.

Individual neuron glowing with bioluminescent light produced by a new genetically engineered sensor. (Johnson Lab / Vanderbilt University)


An Oct. 27, 2016 Vanderbilt University news release (also on EurekAlert) by David Salisbury, which originated the news item, explains the work in more detail,

“Most of the efforts in optical recording use fluorescence, but this requires a strong external light source which can cause the tissue to heat up and can interfere with some biological processes, particularly those that are light sensitive,” he [Carl Johnson] said.

Based on their research on bioluminescence in “a scummy little organism, the green alga Chlamydomonas, that nobody cares much about” Johnson and his colleagues realized that if they could combine luminescence with optogenetics – a new biological technique that uses light to control cells, particularly neurons, in living tissue – they could create a powerful new tool for studying brain activity.

“There is an inherent conflict between fluorescent techniques and optogenetics. The light required to produce the fluorescence interferes with the light required to control the cells,” said Johnson. “Luminescence, on the other hand, works in the dark!”

Johnson and his collaborators – Associate Professor Donna Webb, Research Assistant Professor Shuqun Shi, post-doctoral student Jie Yang and doctoral student Derrick Cumberbatch in biological sciences and Professor Danny Winder and postdoctoral student Samuel Centanni in molecular physiology and biophysics – genetically modified a type of luciferase obtained from a luminescent species of shrimp so that it would light up when exposed to calcium ions. Then they hijacked a virus that infects neurons and attached it to their sensor molecule so that the sensors are inserted into the cell interior.

The researchers picked calcium ions because they are involved in neuron activation. Although calcium levels are high in the surrounding area, normally they are very low inside the neurons. However, the internal calcium level spikes briefly when a neuron receives an impulse from one of its neighbors.

They tested their new calcium sensor with one of the optogenetic probes (channelrhodopsin) that causes the calcium ion channels in the neuron’s outer membrane to open, flooding the cell with calcium. Using neurons grown in culture they found that the luminescent enzyme reacted visibly to the influx of calcium produced when the probe was stimulated by brief flashes of visible light.

To determine how well their sensor works with larger numbers of neurons, they inserted it into brain slices from the mouse hippocampus that contain thousands of neurons. In this case they flooded the slices with an increased concentration of potassium ions, which causes the cell’s ion channels to open. Again, they found that the sensor responded to the variations in calcium concentrations by brightening and dimming.

“We’ve shown that the approach works,” Johnson said. “Now we have to determine how sensitive it is. We have some indications that it is sensitive enough to detect the firing of individual neurons, but we have to run more tests to determine if it actually has this capability.”
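The brightening-with-calcium behaviour described above is commonly modeled with a Hill equation. This sketch uses made-up parameters (the half-saturation constant and cooperativity are not Johnson's measured values) just to show why a reporter's signal would jump sharply when intracellular calcium spikes from its low resting level.

```python
def luminescence(ca_uM, L_max=1.0, kd=0.3, n=2.0):
    """Hill-type model of a calcium-sensitive luminescent reporter.
    kd (half-saturation, micromolar) and n (cooperativity) are
    hypothetical values chosen for illustration."""
    return L_max * ca_uM**n / (kd**n + ca_uM**n)

resting = luminescence(0.05)  # roughly 50 nM resting intracellular calcium
spike = luminescence(1.0)     # roughly 1 uM during an activity transient
print(f"resting signal: {resting:.3f}, spike signal: {spike:.3f}")
```

With these toy numbers the signal rises more than tenfold during a transient, which is the kind of contrast that lets a camera pick out firing neurons against the dark background.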

Here’s a link to and a citation for the paper,

Coupling optogenetic stimulation with NanoLuc-based luminescence (BRET) Ca++ sensing by Jie Yang, Derrick Cumberbatch, Samuel Centanni, Shu-qun Shi, Danny Winder, Donna Webb, & Carl Hirschie Johnson. Nature Communications 7, Article number: 13268 (2016) doi:10.1038/ncomms13268 Published online: 27 October 2016

This paper is open access.

Stretchy optical materials for implants that could pulse light

An Oct. 17, 2016 Massachusetts Institute of Technology (MIT) news release (also on EurekAlert) by Emily Chu describes research that could lead to long-lasting implants offering preventive health strategies,

Researchers from MIT and Harvard Medical School have developed a biocompatible and highly stretchable optical fiber made from hydrogel — an elastic, rubbery material composed mostly of water. The fiber, which is as bendable as a rope of licorice, may one day be implanted in the body to deliver therapeutic pulses of light or light up at the first sign of disease. [emphasis mine]

The researchers say the fiber may serve as a long-lasting implant that would bend and twist with the body without breaking down. The team has published its results online in the journal Advanced Materials.

Using light to activate cells, and particularly neurons in the brain, is a highly active field known as optogenetics, in which researchers deliver short pulses of light to targeted tissues using needle-like fibers, through which they shine light from an LED source.

“But the brain is like a bowl of Jell-O, whereas these fibers are like glass — very rigid, which can possibly damage brain tissues,” says Xuanhe Zhao, the Robert N. Noyce Career Development Associate Professor in MIT’s Department of Mechanical Engineering. “If these fibers could match the flexibility and softness of the brain, they could provide long-term more effective stimulation and therapy.”

Getting to the core of it

Zhao’s group at MIT, including graduate students Xinyue Liu and Hyunwoo Yuk, specializes in tuning the mechanical properties of hydrogels. The researchers have devised multiple recipes for making tough yet pliable hydrogels out of various biopolymers. The team has also come up with ways to bond hydrogels with various surfaces such as metallic sensors and LEDs, to create stretchable electronics.

The researchers only thought to explore hydrogel’s use in optical fibers after conversations with the bio-optics group at Harvard Medical School, led by Associate Professor Seok-Hyun (Andy) Yun. Yun’s group had previously fabricated an optical fiber from hydrogel material that successfully transmitted light through the fiber. However, the material broke apart when bent or slightly stretched. Zhao’s hydrogels, in contrast, could stretch and bend like taffy. The two groups joined efforts and looked for ways to incorporate Zhao’s hydrogel into Yun’s optical fiber design.

Yun’s design consists of a core material encased in an outer cladding. To transmit the maximum amount of light through the core of the fiber, the core and the cladding should be made of materials with very different refractive indices, or degrees to which they can bend light.

“If these two things are too similar, whatever light source flows through the fiber will just fade away,” Yuk explains. “In optical fibers, people want to have a much higher refractive index in the core, versus cladding, so that when light goes through the core, it bounces off the interface of the cladding and stays within the core.”
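Yuk's explanation can be put in numbers with the standard formulas for a step-index fiber. The refractive indices below are hypothetical stand-ins for a hydrogel core and polymer cladding, not the values measured in the paper.

```python
import math

def numerical_aperture(n_core, n_clad):
    """Numerical aperture of a step-index fiber; light entering within
    the acceptance cone is confined by total internal reflection at the
    core/cladding interface (requires n_core > n_clad)."""
    return math.sqrt(n_core**2 - n_clad**2)

def critical_angle_deg(n_core, n_clad):
    """Angle of incidence (measured from the interface normal) above
    which light is totally internally reflected back into the core."""
    return math.degrees(math.asin(n_clad / n_core))

# Hypothetical indices, not the measured values for these materials.
n_core, n_clad = 1.45, 1.34
print(numerical_aperture(n_core, n_clad))   # larger gap, wider acceptance cone
print(critical_angle_deg(n_core, n_clad))   # rays steeper than this escape
```

If the two indices were nearly equal, the numerical aperture would approach zero and almost no launched light would stay confined, which is exactly the fade-away Yuk describes.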

Happily, they found that Zhao’s hydrogel material was highly transparent and possessed a refractive index that was ideal as a core material. But when they tried to coat the hydrogel with a cladding polymer solution, the two materials tended to peel apart when the fiber was stretched or bent.

To bond the two materials together, the researchers added conjugation chemicals to the cladding solution, which, when coated over the hydrogel core, generated chemical links between the outer surfaces of both materials.

“It clicks together the carboxyl groups in the cladding, and the amine groups in the core material, like molecular-level glue,” Yuk says.

Sensing strain

The researchers tested the optical fibers’ ability to propagate light by shining a laser through fibers of various lengths. Each fiber transmitted light without significant attenuation, or fading. They also found that fibers could be stretched over seven times their original length without breaking.

Now that they had developed a highly flexible and robust optical fiber, made from a hydrogel material that was also biocompatible, the researchers began to play with the fiber’s optical properties, to see if they could design a fiber that could sense when and where it was being stretched.

They first loaded a fiber with red, green, and blue organic dyes, placed at specific spots along the fiber’s length. Next, they shone a laser through the fiber and stretched, for instance, the red region. They measured the spectrum of light that made it all the way through the fiber, and noted the intensity of the red light. They reasoned that this intensity relates directly to the amount of light absorbed by the red dye, as a result of that region being stretched.

In other words, by measuring the amount of light at the far end of the fiber, the researchers can quantitatively determine where and by how much a fiber was stretched.
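The readout scheme (inferring stretch from how much light the dyed region absorbs) can be sketched with a simple Beer-Lambert-style model. The absorption coefficient and segment length here are invented for illustration, and the model ignores effects such as the fiber thinning as it stretches, so it is a caricature of the measurement rather than the authors' calibration.

```python
import math

def transmitted_fraction(stretch, alpha=0.8, L0=1.0):
    """Beer-Lambert-style model: fraction of light transmitted through a
    dyed fiber segment of unstretched length L0 (cm) with absorption
    coefficient alpha (1/cm). Stretching lengthens the absorbing path.
    `stretch` is the length ratio L / L0 (1.0 means unstretched)."""
    return math.exp(-alpha * L0 * stretch)

def infer_stretch(measured_fraction, alpha=0.8, L0=1.0):
    """Invert the model: recover the stretch ratio from the measured
    intensity ratio at that dye's color."""
    return -math.log(measured_fraction) / (alpha * L0)

# A 50% stretch dims the dye's color band; inverting recovers the stretch.
f = transmitted_fraction(1.5)
print(f, infer_stretch(f))
```

With one dye per region, the same inversion applied per color channel localizes which segment was stretched and by how much, which is the multistrain-sensor idea Yuk describes below.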

“When you stretch a certain portion of the fiber, the dimensions of that part of the fiber changes, along with the amount of light that region absorbs and scatters, so in this way, the fiber can serve as a sensor of strain,” Liu explains.

“This is like a multistrain sensor through a single fiber,” Yuk adds. “So it can be an implantable or wearable strain gauge.”

The researchers imagine that such stretchable, strain-sensing optical fibers could be implanted or fitted along the length of a patient’s arm or leg, to monitor for signs of improving mobility.

Zhao envisions the fibers may also serve as sensors, lighting up in response to signs of disease.

“We may be able to use optical fibers for long-term diagnostics, to optically monitor tumors or inflammation,” he says. “The applications can be impactful.”

Here’s a link to and a citation for the paper,

Highly Stretchable, Strain Sensing Hydrogel Optical Fibers by Jingjing Guo, Xinyue Liu, Nan Jiang, Ali K. Yetisen, Hyunwoo Yuk, Changxi Yang, Ali Khademhosseini, Xuanhe Zhao, and Seok-Hyun Yun. Advanced Materials DOI: 10.1002/adma.201603160 Version of Record online: 7 OCT 2016

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.