Tag Archives: neurons

Graphene and neurons in a UK-Italy-Spain collaboration

There’s been a lot of talk about using graphene-based implants in the brain due to the material’s flexibility along with its other properties. A step forward has been taken, according to a Jan. 29, 2016 news item on phys.org,

Researchers have successfully demonstrated how it is possible to interface graphene – a two-dimensional form of carbon – with neurons, or nerve cells, while maintaining the integrity of these vital cells. The work may be used to build graphene-based electrodes that can safely be implanted in the brain, offering promise for the restoration of sensory functions for amputee or paralysed patients, or for individuals with motor disorders such as epilepsy or Parkinson’s disease.

A Jan. 29, 2016 Cambridge University press release (also on EurekAlert), which originated the news item, provides more detail,

Previously, other groups had shown that it is possible to use treated graphene to interact with neurons. However the signal to noise ratio from this interface was very low. By developing methods of working with untreated graphene, the researchers retained the material’s electrical conductivity, making it a significantly better electrode.

“For the first time we interfaced graphene to neurons directly,” said Professor Laura Ballerini of the University of Trieste in Italy. “We then tested the ability of neurons to generate electrical signals known to represent brain activities, and found that the neurons retained their neuronal signalling properties unaltered. This is the first functional study of neuronal synaptic activity using uncoated graphene based materials.”

Our understanding of the brain has increased to such a degree that by interfacing directly between the brain and the outside world we can now harness and control some of its functions. For instance, by measuring the brain’s electrical impulses, sensory functions can be recovered. This can be used to control robotic arms for amputee patients or any number of basic processes for paralysed patients – from speech to movement of objects in the world around them. Alternatively, by interfering with these electrical impulses, motor disorders (such as epilepsy or Parkinson’s) can start to be controlled.

Scientists have made this possible by developing electrodes that can be placed deep within the brain. These electrodes connect directly to neurons and transmit their electrical signals away from the body, allowing their meaning to be decoded.

However, the interface between neurons and electrodes has often been problematic: not only do the electrodes need to be highly sensitive to electrical impulses, but they need to be stable in the body without altering the tissue they measure.

Too often the modern electrodes used for this interface (based on tungsten or silicon) suffer from partial or complete loss of signal over time. This is often caused by the formation of scar tissue from the electrode insertion, which prevents the electrode from moving with the natural movements of the brain due to its rigid nature.

Graphene has been shown to be a promising material to solve these problems, because of its excellent conductivity, flexibility, biocompatibility and stability within the body.

Based on experiments conducted in rat brain cell cultures, the researchers found that untreated graphene electrodes interfaced well with neurons. By studying the neurons with electron microscopy and immunofluorescence the researchers found that they remained healthy, transmitting normal electric impulses and, importantly, none of the adverse reactions which lead to the damaging scar tissue were seen.

According to the researchers, this is the first step towards using pristine graphene-based materials as an electrode for a neuro-interface. In future, the researchers will investigate how different forms of graphene, from multiple layers to monolayers, are able to affect neurons, and whether tuning the material properties of graphene might alter the synapses and neuronal excitability in new and unique ways. “Hopefully this will pave the way for better deep brain implants to both harness and control the brain, with higher sensitivity and fewer unwanted side effects,” said Ballerini.

“We are currently involved in frontline research in graphene technology towards biomedical applications,” said Professor Maurizio Prato from the University of Trieste. “In this scenario, the development and translation in neurology of graphene-based high-performance biodevices requires the exploration of the interactions between graphene nano- and micro-sheets with the sophisticated signalling machinery of nerve cells. Our work is only a first step in that direction.”

“These initial results show how we are just at the tip of the iceberg when it comes to the potential of graphene and related materials in bio-applications and medicine,” said Professor Andrea Ferrari, Director of the Cambridge Graphene Centre. “The expertise developed at the Cambridge Graphene Centre allows us to produce large quantities of pristine material in solution, and this study proves the compatibility of our process with neuro-interfaces.”

The research was funded by the Graphene Flagship [emphasis mine],  a European initiative which promotes a collaborative approach to research with an aim of helping to translate graphene out of the academic laboratory, through local industry and into society.

Here’s a link to and a citation for the paper,

Graphene-Based Interfaces Do Not Alter Target Nerve Cells by Alessandra Fabbro, Denis Scaini, Verónica León, Ester Vázquez, Giada Cellot, Giulia Privitera, Lucia Lombardi, Felice Torrisi, Flavia Tomarchio, Francesco Bonaccorso, Susanna Bosi, Andrea C. Ferrari, Laura Ballerini, and Maurizio Prato. ACS Nano, 2016, 10 (1), pp 615–623 DOI: 10.1021/acsnano.5b05647 Publication Date (Web): December 23, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

There are a couple things I found a bit odd about this project. First, all of the funding is from the Graphene Flagship initiative. I was expecting to see at least some funding from the European Union’s other mega-sized science initiative, the Human Brain Project. Second, there was no mention of Spain nor were there any quotes from the Spanish researchers. For the record, the Spanish institutions represented were: University of Castilla-La Mancha, Carbon Nanobiotechnology Laboratory, and the Basque Foundation for Science.

Plastic memristors for neural networks

There is a very nice explanation of memristors and computing systems from the Moscow Institute of Physics and Technology (MIPT). First, their announcement from a Jan. 27, 2016 news item on ScienceDaily,

A group of scientists has created a neural network based on polymeric memristors — devices that can potentially be used to build fundamentally new computers. These developments will primarily help in creating technologies for machine vision, hearing, and other machine sensory systems, and also for intelligent control systems in various fields of applications, including autonomous robots.

The authors of the new study focused on a promising area in the field of memristive neural networks – polymer-based memristors – and discovered that creating even the simplest perceptron is not that easy. In fact, it is so difficult that up until the publication of their paper in the journal Organic Electronics, there were no reports of any successful experiments (using organic materials). The experiments conducted at the Nano-, Bio-, Information and Cognitive Sciences and Technologies (NBIC) centre at the Kurchatov Institute by a joint team of Russian and Italian scientists demonstrated that it is possible to create very simple polyaniline-based neural networks. Furthermore, these networks are able to learn and perform specified logical operations.

A Jan. 27, 2016 MIPT press release on EurekAlert, which originated the news item, offers an explanation of memristors and a description of the research,

A memristor is an electric element similar to a conventional resistor. The difference between a memristor and a traditional element is that the electric resistance in a memristor is dependent on the charge passing through it, therefore it constantly changes its properties under the influence of an external signal: a memristor has a memory and at the same time is also able to change data encoded by its resistance state! In this sense, a memristor is similar to a synapse – a connection between two neurons in the brain that is able, with a high level of plasticity, to modify the efficiency of signal transmission between neurons under the influence of the transmission itself. A memristor enables scientists to build a “true” neural network, and the physical properties of memristors mean that at the very minimum they can be made as small as conventional chips.

Some estimates indicate that the size of a memristor can be reduced to as little as ten nanometers, and the technologies used in the manufacture of the experimental prototypes could, in theory, be scaled up to the level of mass production. However, as this is “in theory”, it does not mean that chips of a fundamentally new structure with neural networks will be available on the market any time soon, even in the next five years.

The plastic polyaniline was not chosen by chance. Previous studies demonstrated that it can be used to create individual memristors, so the scientists did not have to go through many different materials. Using a polyaniline solution, a glass substrate, and chromium electrodes, they created a prototype with dimensions that, at present, are much larger than those typically used in conventional microelectronics: the strip of the structure was approximately one millimeter wide (they decided to avoid miniaturization for the moment). All of the memristors were tested for their electrical characteristics: it was found that the current-voltage characteristic of the devices is in fact non-linear, which is in line with expectations. The memristors were then connected to a single neuromorphic network.

A current-voltage characteristic (or IV curve) is a graph where the horizontal axis represents voltage and the vertical axis the current. In a conventional resistor, the IV curve is a straight line; in strict accordance with Ohm’s Law, current is proportional to voltage. For a memristor, however, it is not just the voltage that is important, but the change in voltage: if you begin to gradually increase the voltage supplied to the memristor, it will increase the current passing through it not in a linear fashion, but with a sharp bend in the graph and at a certain point its resistance will fall sharply.

Then if you begin to reduce the voltage, the memristor will remain in its conducting state for some time, after which it will change its properties rather sharply again to decrease its conductivity. Experimental samples with a voltage increase of 0.5V hardly allowed any current to pass through (around a few tenths of a microamp), but when the voltage was reduced by the same amount, the ammeter registered a figure of 5 microamps. Microamps are of course very small units, but in this case it is the contrast that is most significant: 0.1 μA to 5 μA is a difference of fifty times! This is more than enough to make a clear distinction between the two signals.
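For readers who like to see that behaviour in concrete terms, here is a minimal sketch (in Python, my own illustration rather than anything from the paper) of a toy memristor with threshold switching. The resistance values and switching thresholds are invented, chosen only so the OFF and ON currents roughly echo the 0.1 µA versus 5 µA contrast described above.

# A toy memristor model with threshold switching (illustrative values only;
# not the polyaniline device from the paper). The OFF/ON resistances are
# chosen so the currents echo the ~0.1 uA vs ~5 uA contrast described above.

R_ON, R_OFF = 8e4, 4e6        # resistance in the conducting / resistive state (ohms)
V_SET, V_RESET = 0.45, -0.3   # switching thresholds (volts)

def sweep(voltages, state=0.0):
    """Apply a voltage sequence; return (voltage, current) pairs."""
    trace = []
    for v in voltages:
        if v > V_SET:
            state = 1.0                      # sharp switch to the low-resistance state
        elif v < V_RESET:
            state = 0.0                      # switch back to the high-resistance state
        r = R_OFF + (R_ON - R_OFF) * state   # resistance depends on the stored state
        trace.append((v, v / r))             # Ohm's law at this instant
    return trace

# Triangular sweep: 0 V -> +0.5 V -> 0 V. The same +0.4 V point is visited
# once on the way up (device still OFF) and once on the way down (device ON).
up = [0.5 * k / 100 for k in range(101)]
trace = sweep(up + list(reversed(up)))
print(f"current at +0.4 V, sweeping up:   {trace[80][1] * 1e6:.2f} uA")   # ~0.10 uA
print(f"current at +0.4 V, sweeping down: {trace[121][1] * 1e6:.2f} uA")  # ~5.00 uA

The asymmetry between those two printed currents is the pinched hysteresis loop the press release is describing: the device “remembers” that it was driven past its switching point.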

After checking the basic properties of individual memristors, the physicists conducted experiments to train the neural network. The training (it is a generally accepted term and is therefore written without inverted commas) involves applying electric pulses at random to the inputs of a perceptron. If a certain combination of electric pulses is applied to the inputs of a perceptron (e.g. a logic one and a logic zero at two inputs) and the perceptron gives the wrong answer, a special correcting pulse is applied to it, and after a certain number of repetitions all the internal parameters of the device (namely memristive resistance) reconfigure themselves, i.e. they are “trained” to give the correct answer.

The scientists demonstrated that after about a dozen attempts their new memristive network is capable of performing NAND logical operations, and then it is also able to learn to perform NOR operations. Since it is an operator or a conventional computer that is used to check for the correct answer, this method is called the supervised learning method.
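To make the training procedure concrete, here is a minimal software sketch (Python, my own illustration; the actual device adjusts memristive resistances in hardware rather than floating-point weights) of a two-input perceptron learning the NAND truth table with the same correct-after-a-wrong-answer rule.

# A perceptron trained with the classic error-correction rule, as a software
# stand-in for the memristive network: the weights play the role of the
# memristors' resistance states, and the "correcting pulse" becomes a small
# weight update applied only after a wrong answer.

NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def train(truth_table, rate=0.1, max_epochs=100):
    w, b = [0.0, 0.0], 0.0                      # two input weights and a bias
    for _ in range(max_epochs):
        mistakes = 0
        for (x1, x2), target in truth_table.items():
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0   # fire or stay silent
            err = target - out
            if err:                             # wrong answer: apply a correction
                w[0] += rate * err * x1
                w[1] += rate * err * x2
                b += rate * err
                mistakes += 1
        if mistakes == 0:                       # whole truth table answered correctly
            break
    return w, b

w, b = train(NAND)
for (x1, x2), target in NAND.items():
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print(f"NAND({x1}, {x2}) = {out} (expected {target})")

The same loop, fed a different truth table, learns NOR; in the hardware version the correction is delivered as a voltage pulse that nudges the memristors’ resistance rather than a floating-point update.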

Needless to say, an elementary perceptron of macroscopic dimensions with a characteristic reaction time of tenths or hundredths of a second is not an element that is ready for commercial production. However, as the researchers themselves note, their creation was made using inexpensive materials, and the reaction time will decrease as the size decreases: the first prototype was intentionally enlarged to make the work easier; it is physically possible to manufacture more compact chips. In addition, polyaniline can be used in attempts to make a three-dimensional structure by placing the memristors on top of one another in a multi-tiered structure (e.g. in the form of random intersections of thin polymer fibers), whereas modern silicon microelectronic systems, due to a number of technological limitations, are two-dimensional. The transition to the third dimension would potentially offer many new opportunities.

The press release goes on to explain what the researchers mean when they mention a fundamentally different computer,

The common classification of computers is based either on their casing (desktop/laptop/tablet), or on the type of operating system used (Windows/MacOS/Linux). However, this is only a very simple classification from a user perspective, whereas specialists normally use an entirely different approach – an approach that is based on the principle of organizing computer operations. The computers that we are used to, whether they be tablets, desktop computers, or even on-board computers on spacecraft, are all devices with von Neumann architecture; without going into too much detail, they are devices based on independent processors, random access memory (RAM), and read only memory (ROM).

The memory stores the code of a program that is to be executed. A program is a set of instructions that command certain operations to be performed with data. Data are also stored in the memory* and are retrieved from it (and also written to it) in accordance with the program; the program’s instructions are performed by the processor. There may be several processors, they can work in parallel, data can be stored in a variety of ways – but there is always a fundamental division between the processor and the memory. Even if the computer is integrated into one single chip, it will still have separate elements for processing information and separate units for storing data. At present, all modern microelectronic systems are based on this particular principle and this is partly the reason why most people are not even aware that there may be other types of computer systems – without processors and memory.

*) if physically different elements are used to store data and store a program, the computer is said to be built using Harvard architecture. This method is used in certain microcontrollers, and in small specialized computing devices. The chip that controls the function of a refrigerator, lift, or car engine (in all these cases a “conventional” computer would be redundant) is a microcontroller. However, neither Harvard, nor von Neumann architectures allow the processing and storage of information to be combined into a single element of a computer system.
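Before the press release turns to systems that abandon this separation, a toy sketch may help picture it (Python, my own illustration; the instruction set is invented): one memory holds both the program and the data, and a separate processor loop fetches and executes the instructions one at a time.

# A toy von Neumann machine: a single memory stores both instructions and data;
# the processor repeatedly fetches an instruction and executes it. Illustrative only.

memory = [
    ("LOAD", 5),     # program: copy the value at address 5 into the accumulator
    ("ADD", 6),      #          add the value at address 6
    ("STORE", 7),    #          write the result to address 7
    ("HALT", None),
    None,            # address 4: unused
    2, 3, 0,         # addresses 5-7: the data, held in the SAME memory as the program
]

acc, pc = 0, 0                    # accumulator and program counter live in the processor
while True:
    op, addr = memory[pc]         # fetch the next instruction from memory
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[7])                  # -> 5; processing and storage never share an element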

However, such systems do exist. Furthermore, if you look at the brain itself as a computer system (this is purely hypothetical at the moment: it is not yet known whether the function of the brain is reducible to computations), then you will see that it is not at all built like a computer with von Neumann architecture. Neural networks do not have a specialized computer or separate memory cells. Information is stored and processed in each and every neuron, one element of the computer system, and the human brain has approximately 100 billion of these elements. In addition, almost all of them are able to work in parallel (simultaneously), which is why the brain is able to process information with great efficiency and at such high speed. Artificial neural networks that are currently implemented on von Neumann computers only emulate these processes: emulation, i.e. step by step imitation of functions inevitably leads to a decrease in speed and an increase in energy consumption. In many cases this is not so critical, but in certain cases it can be.

Devices that do not simply imitate the function of neural networks, but are fundamentally the same could be used for a variety of tasks. Most importantly, neural networks are capable of pattern recognition; they are used as a basis for recognising handwritten text for example, or signature verification. When a certain pattern needs to be recognised and classified, such as a sound, an image, or characteristic changes on a graph, neural networks are actively used and it is in these fields where gaining an advantage in terms of speed and energy consumption is critical. In a control system for an autonomous flying robot every milliwatt-hour and every millisecond counts, just in the same way that a real-time system to process data from a collider detector cannot take too long to “think” about highlighting particle tracks that may be of interest to scientists from among a large number of other recorded events.

Bravo to the writer!

Here’s a link to and a citation for the paper,

Hardware elementary perceptron based on polyaniline memristive devices by V.A. Demin, V.V. Erokhin, A.V. Emelyanov, S. Battistoni, G. Baldi, S. Iannotta, P.K. Kashkarov, M.V. Kovalchuk. Organic Electronics Volume 25, October 2015, Pages 16–20 doi:10.1016/j.orgel.2015.06.015

This paper is behind a paywall.

Titanium dioxide nanoparticles and the brain

This research into titanium dioxide nanoparticles and possible effects on your brain should they pass the blood-brain barrier comes from the University of Nebraska-Lincoln (US) according to a Dec. 15, 2015 news item on Nanowerk (Note: A link has been removed),

Even moderate concentrations of a nanoparticle used to whiten certain foods, milk and toothpaste could potentially compromise the brain’s most numerous cells, according to a new study from the University of Nebraska-Lincoln (Nanoscale, “Mitochondrial dysfunction and loss of glutamate uptake in primary astrocytes exposed to titanium dioxide nanoparticles”).

A Dec. 14, 2015 University of Nebraska-Lincoln news release, which originated the news item, provides more detail (Note: Links have been removed),

The researchers examined how three types of titanium dioxide nanoparticles [rutile, anatase, and commercially available P25 TiO2 nanoparticles], the world’s second-most abundant nanomaterial, affected the functioning of astrocyte cells. Astrocytes help regulate the exchange of signal-carrying neurotransmitters in the brain while also supplying energy to the neurons that process those signals, among many other functions.

The team exposed rat-derived astrocyte cells to nanoparticle concentrations well below the extreme levels that have been shown to kill brain cells but are rarely encountered by humans. At the study’s highest concentration of 100 parts per million, or PPM, two of the nanoparticle types still killed nearly two-thirds of the astrocytes within a day. That mortality rate fell to between half and one-third of cells at 50 PPM, settling to about one-quarter at 25 PPM.

Yet the researchers found evidence that even surviving cells are severely impaired by exposure to titanium dioxide nanoparticles. Astrocytes normally take in and process a neurotransmitter called glutamate that plays wide-ranging roles in cognition, memory and learning, along with the formation, migration and maintenance of other cells.

When allowed to accumulate outside cells, however, glutamate becomes a potent toxin that kills neurons and may increase the risk of neurodegenerative diseases such as Alzheimer’s and Parkinson’s. The study reported that one of the nanoparticle types reduced the astrocytes’ uptake of glutamate by 31 percent at concentrations of just 25 PPM. Another type decreased that uptake by 45 percent at 50 PPM.

The team further discovered that the nanoparticles upset the intricate balance of protein dynamics occurring within astrocytes’ mitochondria, the cellular organelles that help regulate energy production and contribute to signaling among cells. Titanium dioxide exposure also led to other signs of mitochondrial distress, breaking apart a significant proportion of the mitochondrial network at 100 PPM.

“These events are oftentimes predecessors of cell death,” said Oleh Khalimonchuk, a UNL assistant professor of biochemistry who co-authored the study. “Usually, people are looking at those ultimate consequences, but what happens before matters just as much. Those little damages add up over time. Ultimately, they’re going to cause a major problem.”

Khalimonchuk and fellow author Srivatsan Kidambi, assistant professor of chemical and biomolecular engineering, cautioned that more research is needed to determine whether titanium dioxide nanoparticles can avoid digestion and cross the blood-brain barrier that blocks the passage of many substances. [emphasis mine]

However, the researchers cited previous studies that have discovered these nanoparticles in the brain tissue of animals with similar blood-brain barriers. [emphasis mine] The concentrations of nanoparticles found in those specimens served as a reference point for the levels examined in the new study.

“There’s evidence building up now that some of these particles can actually cross the (blood-brain) barrier,” Khalimonchuk said. “Few molecules seem to be able to do so, but it turns out that there are certain sites in the brain where you can get this exposure.”

Kidambi said the team hopes the study will help facilitate further research on the presence of nanoparticles in consumer and industrial products.

“We’re hoping that this study will get some discussion going, because these nanoparticles have not been regulated,” said Kidambi, who also holds a courtesy appointment with the University of Nebraska Medical Center. “If you think about anything white – milk, chewing gum, toothpaste, powdered sugar – all these have nanoparticles in them.

“We’ve found that some nanoparticles are safe and some are not, so we are not saying that all of them are bad. Our reasoning is that … we need to have a classification of ‘safe’ versus ‘not safe,’ along with concentration thresholds (for each type). It’s about figuring out how the different forms affect the biology of cells.”

I notice the researchers are being careful about alarming anyone unduly while emphasizing the importance of this research. For anyone curious enough to read the paper, here’s a link to and a citation for it,

Mitochondrial dysfunction and loss of glutamate uptake in primary astrocytes exposed to titanium dioxide nanoparticles by Christina L. Wilson, Vaishaali Natarajan, Stephen L. Hayward, Oleh Khalimonchuk and Srivatsan Kidambi. Nanoscale, 2015, 7, 18477–18488 DOI: 10.1039/C5NR03646A First published online 31 Jul 2015

This paper is open access although you may need to register on the site.

Final comment, I note this was published online way back in July 2015. Either the paper version of the journal was just published and that’s what’s being promoted or the media people thought they’d try to get some attention for this work by reissuing the publicity. Good on them! It’s hard work getting people to notice things when there is so much information floating around.

Better neuroprostheses for brain diseases and mental illnesses

I don’t often get news releases from Sweden but I do on occasion and, sometimes, they even come in their original Swedish versions. In this case, Lund University sent me an English language version about their latest work making brain implants (neural prostheses) safer and effective. From a Sept. 29, 2015 Lund University news release (also on EurekAlert),

Neurons thrive and grow in a new type of nanowire material developed by researchers in Nanophysics and Ophthalmology at Lund University in Sweden. In time, the results might improve both neural and retinal implants, and reduce the risk of them losing their effectiveness over time, which is currently a problem.

By implanting electrodes in the brain tissue one can stimulate or capture signals from different areas of the brain. These types of brain implants, or neuro-prostheses as they are sometimes called, are used to treat Parkinson’s disease and other neurological diseases.

They are currently being tested in other areas, such as depression, severe cases of autism, obsessive-compulsive disorders and paralysis. Another research track is to determine whether retinal implants are able to replace light-sensitive cells that die in cases of Retinitis Pigmentosa and other eye diseases.

However, there are severe drawbacks associated with today’s implants. One problem is that the body interprets the implants as foreign objects, resulting in an encapsulation of the electrode, which in turn leads to loss of signal.

One of the researchers explains the approach adopted by the research team (from the news release),

“Our nanowire structure prevents the cells that usually encapsulate the electrodes – glial cells – from doing so”, says Christelle Prinz, researcher in Nanophysics at Lund University in Sweden, who developed this technique together with Maria Thereza Perez, a researcher in Ophthalmology.

“I was very pleasantly surprised by these results. In previous in-vitro experiments, the glial cells usually attach strongly to the electrodes”, she says.

To avoid this, the researchers have developed a small substrate where regions of super thin nanowires are combined with flat regions. While neurons grow and extend processes on the nanowires, the glial cells primarily occupy the flat regions in between.

“The different types of cells continue to interact. This is necessary for the neurons to survive because the glial cells provide them with important molecules.”

So far, tests have only been done with cultured cells (in vitro) but hopefully they will soon be able to continue with experiments in vivo.

The substrate is made from the semiconductor material gallium phosphide where each outgrowing nanowire has a diameter of only 80 nanometres (billionths of a metre).

Here’s a link to and a citation for the paper,

Support of Neuronal Growth Over Glial Growth and Guidance of Optic Nerve Axons by Vertical Nanowire Arrays by Gaëlle Piret, Maria-Thereza Perez, and Christelle N. Prinz. ACS Appl. Mater. Interfaces, 2015, 7 (34), pp 18944–18948 DOI: 10.1021/acsami.5b03798 Publication Date (Web): August 11, 2015

Copyright © 2015 American Chemical Society

This paper appears to be open access as I was able to link to the PDF version.

Nanoscale imaging of a mouse brain

Researchers have developed a new brain imaging tool they would like to use as a founding element for a national brain observatory. From a July 30, 2015 news item on Azonano,

A new imaging tool developed by Boston scientists could do for the brain what the telescope did for space exploration.

In the first demonstration of how the technology works, published July 30 in the journal Cell, the researchers look inside the brain of an adult mouse at a scale previously unachievable, generating images at a nanoscale resolution. The inventors’ long-term goal is to make the resource available to the scientific community in the form of a national brain observatory.

A July 30, 2015 Cell Press news release on EurekAlert, which originated the news item, expands on the theme,

“I’m a strong believer in bottom-up science, which is a way of saying that I would prefer to generate a hypothesis from the data and test it,” says senior study author Jeff Lichtman, of Harvard University. “For people who are imagers, being able to see all of these details is wonderful and we’re getting an opportunity to peer into something that has remained somewhat intractable for so long. It’s about time we did this, and it is what people should be doing about things we don’t understand.”

The researchers have begun the process of mining their imaging data by looking first at an area of the brain that receives sensory information from mouse whiskers, which help the animals orient themselves and are even more sensitive than human fingertips. The scientists used a program called VAST, developed by co-author Daniel Berger of Harvard and the Massachusetts Institute of Technology, to assign different colors and piece apart each individual “object” (e.g., neuron, glial cell, blood vessel cell, etc.).

“The complexity of the brain is much more than what we had ever imagined,” says study first author Narayanan “Bobby” Kasthuri, of the Boston University School of Medicine. “We had this clean idea of how there’s a really nice order to how neurons connect with each other, but if you actually look at the material it’s not like that. The connections are so messy that it’s hard to imagine a plan to it, but we checked and there’s clearly a pattern that cannot be explained by randomness.”

The researchers see great potential in the tool’s ability to answer questions about what a neurological disorder actually looks like in the brain, as well as what makes the human brain different from other animals and different between individuals. Who we become is very much a product of the connections our neurons make in response to various life experiences. To be able to compare the physical neuron-to-neuron connections in an infant, a mathematical genius, and someone with schizophrenia would be a leap in our understanding of how our brains shape who we are (or vice versa).

The cost and data storage demands for this type of research are still high, but the researchers expect expenses to drop over time (as has been the case with genome sequencing). To facilitate data sharing, the scientists are now partnering with Argonne National Laboratory with the hopes of creating a national brain laboratory that neuroscientists around the world can access within the next few years.

“It’s bittersweet that there are many scientists who think this is a total waste of time as well as a big investment in money and effort that could be better spent answering questions that are more proximal,” Lichtman says. “As long as data is showing you things that are unexpected, then you’re definitely doing the right thing. And we are certainly far from being out of the surprise element. There’s never a time when we look at this data that we don’t see something that we’ve never seen before.”

Here’s a link to and a citation for the paper,

Saturated Reconstruction of a Volume of Neocortex by Narayanan Kasthuri, Kenneth Jeffrey Hayworth, Daniel Raimund Berger, Richard Lee Schalek, José Angel Conchello, Seymour Knowles-Barley, Dongil Lee, Amelio Vázquez-Reina, Verena Kaynig, Thouis Raymond Jones, Mike Roberts, Josh Lyskowski Morgan, Juan Carlos Tapia, H. Sebastian Seung, William Gray Roncal, Joshua Tzvi Vogelstein, Randal Burns, Daniel Lewis Sussman, Carey Eldin Priebe, Hanspeter Pfister, Jeff William Lichtman. Cell Volume 162, Issue 3, p648–661, 30 July 2015 DOI: http://dx.doi.org/10.1016/j.cell.2015.06.054

This appears to be an open access paper.

Metallic nanoflowers produce neuron-like fractals

I was a bit surprised to find that this University of Oregon story was about a patent. Here’s more from a July 28, 2015 news item on Azonano,

Richard Taylor’s vision of using artificial fractal-based implants to restore sight to the blind — part of a far-reaching concept that won an innovation award this year from the White House — is now covered under a broad U.S. patent.

The patent goes far beyond efforts to use the emerging technology to restore eyesight. It covers all fractal-designed electronic implants that link signaling activity with nerves for any purpose in animal and human biology.

Fractals are objects with irregular curves or shapes. “They are a trademark building block of nature,” said Taylor, a professor of physics and director of the Materials Science Institute at the University of Oregon [UO]. “In math, that property is self-similarity. Trees, clouds, rivers, galaxies, lungs and neurons are fractals. What we hope to do is adapt the technology to nature’s geometry.”

Named in U.S. patent 9079017 are Taylor, the UO, Taylor’s research collaborator Simon Brown, and Brown’s home institution, the University of Canterbury in New Zealand.

A July 28, 2015 University of Oregon news release (also on EurekAlert) by Jim Barlow, which originated the news item, continues the patent celebration,

“We’re very delighted,” Taylor said. “The U.S. Patent and Trademark Office has recognized the novelty and utility of our general concept, but there is a lot to do. We want to get all of the fundamental science sorted out. We’re looking at least another couple of years of basic science before moving forward.”

The patent solidifies the relationship between the two universities, said Charles Williams, associate vice president for innovation at the UO. “This is still in the very early days. This project has attracted national attention, awards and grants.

“We hope to engage the right set of partners to develop the technology over time as the concept moves into potentially vast forms of medical applications,” Williams added. “Dr. Taylor’s interdisciplinary science is a hallmark of the creativity at the University of Oregon and a great example of the international research collaborations that our faculty engage in every day.”

Here’s an image illustrating the ‘fractal neurons’,

Caption: Retinal neurons, outlined in yellow, attach to and follow branches of a fractal interconnect. Such connections, says University of Oregon physicist Richard Taylor, could some day help to treat eye diseases such as macular degeneration. Credit: Courtesy of Richard Taylor

The news release goes on to describe the ‘fractal approach’ to eye implants which is markedly different from the implants entering the marketplace,

Taylor raised the idea of a fractal-based approach to treat eye diseases in a 2011 article in Physics World, writing that it could overcome problems associated with efforts to insert photodiodes behind the eyes. Current chip technology doesn’t allow sufficient connections with neurons.

“The wiring — the neurons — in the retina is fractal, but the chips are not fractal,” Taylor said. His vision, based on research with Brown, is to grow nanoflowers seeded from nanoparticles of metals that self assemble in a natural process, producing fractals that mimic and communicate with neurons.

It is conceivable, Taylor said, that fractal interconnects — as the implants are called in the patent — could be shaped so they network with like-shaped neurons to address narrow needs, such as a feedback loop for the sensation of touch from a prosthetic arm or leg to the brain.

Such implants would overcome the biological rejection of implants with smooth surfaces or those randomly patterned that have been developed in a trial-and-error approach to link to neurons.

Once perfected, he said, the implants would generate an electrical field that would fool a sea of glial cells that insulate and protect neurons from foreign invaders. Fractal interconnects would allow electrical signals to operate in “a safety zone biologically” that avoids toxicity issues.

“The patent covers any generic interface for connecting any electronics to any nerve,” Taylor said, adding that fractal interconnects are not electrodes. “Our interface is multifunctional. The primary thing is to get the electrical field into the system so that it reaches the neurons and induces the signal.”

Taylor’s proposal for using fractal-based technology earned the top prize in a contest held by the innovation company InnoCentive. Taylor was honored in April [2015] at a meeting of the White House Office of Science and Technology Policy.

The competition was sponsored by a collaboration of science philanthropies including the Research Corporation for Science Advancement, the Gordon and Betty Moore Foundation, the W.M. Keck Foundation, the Kavli Foundation, the Templeton Foundation and the Burroughs Wellcome Fund.

You can find out more about InnoCentive here. As for other types of artificial eye implants, the latest here is a June 30, 2015 post titled, Clinical trial for bionic eye (artificial retinal implant) shows encouraging results (safety and efficacy).

On the verge of controlling neurons by wireless?

Scientists have controlled a mouse’s neurons with a wireless device (and unleashed some paranoid fantasies? well, mine if no one else’s) according to a July 16, 2015 news item on Nanowerk (Note: A link has been removed),

A study showed that scientists can wirelessly determine the path a mouse walks with a press of a button. Researchers at the Washington University School of Medicine, St. Louis, and University of Illinois, Urbana-Champaign, created a remote controlled, next-generation tissue implant that allows neuroscientists to inject drugs and shine lights on neurons deep inside the brains of mice. The revolutionary device is described online in the journal Cell (“Wireless Optofluidic Systems for Programmable In Vivo Pharmacology and Optogenetics”). Its development was partially funded by the [US] National Institutes of Health [NIH].

The researchers have made an image/illustration of the probe available,

Caption: Mind bending probe. Scientists used soft materials to create a brain implant a tenth the width of a human hair that can wirelessly control neurons with lights and drugs. Credit: Courtesy of Jeong lab, University of Colorado Boulder.

A July 16, 2015 US NIH National Institute of Neurological Disorders and Stroke news release, which originated the news item, describes the study and notes that instructions for building the implant are included in the published study,

“It unplugs a world of possibilities for scientists to learn how brain circuits work in a more natural setting.” said Michael R. Bruchas, Ph.D., associate professor of anesthesiology and neurobiology at Washington University School of Medicine and a senior author of the study.

The Bruchas lab studies circuits that control a variety of disorders including stress, depression, addiction, and pain. Typically, scientists who study these circuits have to choose between injecting drugs through bulky metal tubes and delivering lights through fiber optic cables. Both options require surgery that can damage parts of the brain and introduce experimental conditions that hinder animals’ natural movements.

To address these issues, Jae-Woong Jeong, Ph.D., a bioengineer formerly at the University of Illinois at Urbana-Champaign, worked with Jordan G. McCall, Ph.D., a graduate student in the Bruchas lab, to construct a remote controlled, optofluidic implant. The device is made out of soft materials that are a tenth the diameter of a human hair and can simultaneously deliver drugs and lights.

“We used powerful nano-manufacturing strategies to fabricate an implant that lets us penetrate deep inside the brain with minimal damage,” said John A. Rogers, Ph.D., professor of materials science and engineering, University of Illinois at Urbana-Champaign and a senior author. “Ultra-miniaturized devices like this have tremendous potential for science and medicine.”

With a thickness of 80 micrometers and a width of 500 micrometers, the optofluidic implant is thinner than the metal tubes, or cannulas, scientists typically use to inject drugs. When the scientists compared the implant with a typical cannula they found that the implant damaged and displaced much less brain tissue.

The scientists tested the device’s drug delivery potential by surgically placing it into the brains of mice. In some experiments, they showed that they could precisely map circuits by using the implant to inject viruses that label cells with genetic dyes. In other experiments, they made mice walk in circles by injecting a drug that mimics morphine into the ventral tegmental area (VTA), a region that controls motivation and addiction.

The researchers also tested the device’s combined light and drug delivery potential when they made mice that have light-sensitive VTA neurons stay on one side of a cage by commanding the implant to shine laser pulses on the cells. The mice lost the preference when the scientists directed the device to simultaneously inject a drug that blocks neuronal communication. In all of the experiments, the mice were about three feet away from the command antenna.

“This is the kind of revolutionary tool development that neuroscientists need to map out brain circuit activity,” said James Gnadt, Ph.D., program director at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS).  “It’s in line with the goals of the NIH’s BRAIN Initiative.”

The researchers fabricated the implant using semi-conductor computer chip manufacturing techniques. It has room for up to four drugs and has four microscale inorganic light-emitting diodes. They installed an expandable material at the bottom of the drug reservoirs to control delivery. When the temperature on an electric heater beneath the reservoir rose then the bottom rapidly expanded and pushed the drug out into the brain.

“We tried at least 30 different prototypes before one finally worked,” said Dr. McCall.

“This was truly an interdisciplinary effort,” said Dr. Jeong, who is now an assistant professor of electrical, computer, and energy engineering at University of Colorado Boulder. “We tried to engineer the implant to meet some of neuroscience’s greatest unmet needs.”

In the study, the scientists provide detailed instructions for manufacturing the implant.

“A tool is only good if it’s used,” said Dr. Bruchas. “We believe an open, crowdsourcing approach to neuroscience is a great way to understand normal and healthy brain circuitry.”

Here’s a link to and a citation for the paper,

Wireless Optofluidic Systems for Programmable In Vivo Pharmacology and Optogenetics by Jae-Woong Jeong, Jordan G. McCall, Gunchul Shin, Yihui Zhang, Ream Al-Hasani, Minku Kim, Shuo Li, Joo Yong Sim, Kyung-In Jang, Yan Shi, Daniel Y. Hong, Yuhao Liu, Gavin P. Schmitz, Li Xia, Zhubin He, Paul Gamble, Wilson Z. Ray, Yonggang Huang, Michael R. Bruchas, and John A. Rogers.  Cell, July 16, 2015. DOI: 10.1016/j.cell.2015.06.058

This paper is behind a paywall.

I last wrote about wireless activation of neurons in a May 28, 2014 posting which featured research at the University of Massachusetts Medical School.

Is it time to invest in a ‘brain chip’ company?

This story takes a few twists and turns. First, ‘brain chips’ as they’re sometimes called would allow, theoretically, computers to learn and function like human brains. (Note: There’s another type of ‘brain chip’ which could be implanted in human brains to help deal with diseases such as Parkinson’s and Alzheimer’s. *Today’s [June 26, 2015] earlier posting about an artificial neuron points at some of the work being done in this area.*)

Returning to the ‘brain chip’ at hand. Second, there’s a company called BrainChip, which has one patent granted and another pending for, yes, a ‘brain chip’.

The company, BrainChip, founded in Australia and now headquartered in California’s Silicon Valley, recently sparked some investor interest in Australia. From an April 7, 2015 article by Timna Jacks for the Australian Financial Review,

Former mining stock Aziana Limited has whetted Australian investors’ appetite for science fiction, with its share price jumping 125 per cent since it announced it was acquiring a US-based tech company called BrainChip, which promises artificial intelligence through a microchip that replicates the neural system of the human brain.

Shares in the company closed at 9¢ before the Easter long weekend, having been priced at just 4¢ when the backdoor listing of BrainChip was announced to the market on March 18.

Creator of the patented digital chip, Peter Van Der Made told The Australian Financial Review the technology has the capacity to learn autonomously, due to its composition of 10,000 biomimic neurons, which, through a process known as synaptic time-dependent plasticity, can form memories and associations in the same way as a biological brain. He said it works 5000 times faster and uses a thousandth of the power of the fastest computers available today.

Mr Van Der Made is inviting technology partners to license the technology for their own chips and products, and is donating the technology to university laboratories in the US for research.

The Netherlands-born Australian, now based in southern California, was inspired to create the brain-like chip in 2004, after working at the IBM Internet Security Systems for two years, where he was chief scientist for behaviour analysis security systems. …

A June 23, 2015 article by Tony Malkovic on phys.org provides a few more details about BrainChip and about the deal,

Mr Van der Made and the company, also called BrainChip, are now based in Silicon Valley in California and he returned to Perth last month as part of the company’s recent merger and listing on the Australian Stock Exchange.

He says BrainChip has the ability to learn autonomously, evolve and associate information and respond to stimuli like a brain.

Mr Van der Made says the company’s chip technology is more than 5,000 times faster than other technologies, yet uses only 1/1,000th of the power.

“It’s a hardware only solution, there is no software to slow things down,” he says.

“It doesn’t execute instructions, it learns and supplies what it has learnt to new information.

“BrainChip is on the road to position itself at the forefront of artificial intelligence,” he says.

“We have a clear advantage, at least 10 years, over anybody else in the market, that includes IBM.”

BrainChip is aiming at the global semiconductor market involving almost anything that involves a microprocessor.

You can find out more about the company, BrainChip here. The site does have a little more information about the technology,

Spiking Neuron Adaptive Processor (SNAP)

BrainChip’s inventor, Peter van der Made, has created an exciting new Spiking Neural Networking technology that has the ability to learn autonomously, evolve and associate information just like the human brain. The technology is developed as a digital design containing a configurable “sea of biomimic neurons”.

The technology is fast, completely digital, and consumes very low power, making it feasible to integrate large networks into portable battery-operated products, something that has never been possible before.

BrainChip neurons autonomously learn through a process known as STDP (Synaptic Time Dependent Plasticity). BrainChip’s fully digital neurons process input spikes directly in hardware. Sensory neurons convert physical stimuli into spikes. Learning occurs when the input is intense or repeating through feedback, and this is directly correlated to the way the brain learns.

Computing Artificial Neural Networks (ANNs)

The brain consists of specialized nerve cells that communicate with one another. Each such nerve cell is called a neuron. Its inputs are memory nodes called synapses. When the neuron associates information, it produces a ‘spike’ or a ‘spike train’. Each spike is a pulse that triggers a value in the next synapse. Synapses store values, similar to the way a computer stores numbers. In combination, these values determine the function of the neural network. Synapses acquire values through learning.

In Artificial Neural Networks (ANNs) this complex function is generally simplified to a static summation and compare function, which severely limits computational power. BrainChip has redefined how neural networks work, replicating the behaviour of the brain. BrainChip’s artificial neurons are completely digital and biologically realistic, resulting in increased computational power, high speed and extremely low power consumption.
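For a sense of what a spiking neuron with STDP looks like in practice, here is a minimal software sketch (Python). It is emphatically not BrainChip’s design, which is an undisclosed, hardware-only implementation; it only illustrates the two ideas in the company’s description: a neuron that integrates incoming spikes until it fires, and a synapse whose stored value strengthens when its input spike precedes the output spike and weakens otherwise. All parameter values are made up.

import math

# Not BrainChip's hardware: a generic leaky integrate-and-fire neuron with a
# pair-based STDP rule, written only to illustrate the concepts in the text.

class Synapse:
    def __init__(self, weight=0.5):
        self.weight = weight
        self.last_pre = None                   # time of the most recent input spike

    def stdp(self, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
        """Nudge the stored weight from the timing of pre- vs. post-synaptic spikes."""
        if self.last_pre is None:
            return
        dt = t_post - self.last_pre
        if dt >= 0:                            # input arrived before the output spike
            self.weight += a_plus * math.exp(-dt / tau)
        else:                                  # input arrived after the output spike
            self.weight -= a_minus * math.exp(dt / tau)
        self.weight = min(1.0, max(0.0, self.weight))

class SpikingNeuron:
    def __init__(self, threshold=1.0, leak=0.95):
        self.v = 0.0                           # membrane potential
        self.threshold = threshold
        self.leak = leak

    def step(self, t, inputs):
        """inputs: list of (synapse, spiked_now); returns True if the neuron fires."""
        self.v *= self.leak                    # the potential leaks away each time step
        for syn, spiked in inputs:
            if spiked:
                syn.last_pre = t
                self.v += syn.weight           # each input spike adds its synaptic weight
        if self.v >= self.threshold:           # threshold crossed: emit an output spike
            self.v = 0.0
            for syn, _ in inputs:
                syn.stdp(t)                    # timing-dependent weight update
            return True
        return False

# A regular, repeating input spike train strengthens the synapse over time,
# echoing "learning occurs when the input is intense or repeating".
syn, neuron = Synapse(), SpikingNeuron()
for t in range(100):
    neuron.step(t, [(syn, t % 3 == 0)])
print(f"synaptic weight after repeated stimulation: {syn.weight:.2f}")

Note the contrast with the static summation-and-compare function mentioned above: here the neuron’s state evolves in time and the synapse updates itself from spike timing, with no separate training program.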

The Problem with Artificial Neural Networks

Standard ANNs, running on computer hardware, are processed sequentially; the processor runs a program that defines the neural network. This consumes considerable time and, because these neurons are processed sequentially, all this delayed time adds up, resulting in a significant linear decline in network performance with size.

BrainChip neurons are all mapped in parallel, so the performance of the network does not depend on the size of the network, providing a clear speed advantage. Learning also takes place in parallel within each synapse, making STDP learning very fast.

A hardware solution

BrainChip’s digital neural technology is the only custom hardware solution that is capable of STDP learning. The hardware requires no coding and has no software as it evolves learning through experience and user direction.

The BrainChip neuron is unique in that it is completely digital, behaves asynchronously like an analog neuron, and has a higher level of biological realism. It is more sophisticated than software neural models and is many orders of magnitude faster. The BrainChip neuron consists entirely of binary logic gates with no traditional CPU core. Hence, there are no ‘programming’ steps. Learning and training take the place of programming and coding, like a child learning a task for the first time.

Software ‘neurons’, to compensate for limited processing power, are simplified to a point where they do not resemble any of the features of a biological neuron. This is due to the sequential nature of computers, whereby all data has to pass through a central processor in chunks of 16, 32 or 64 bits. In contrast, the brain’s network is parallel and processes the equivalent of millions of data bits simultaneously.

A significantly faster technology

Performing emulation in digital hardware has distinct advantages over software. As software is processed sequentially, one instruction at a time, Software Neural Networks perform slower with increasing size. Parallel hardware does not have this problem and maintains the same speed no matter how large the network is. Another advantage of hardware is that it is more power efficient by several orders of magnitude.

The speed of the BrainChip device is unparalleled in the industry.

For large neural networks a GPU (Graphics Processing Unit) is ~70 times faster than the Intel i7 executing a similar size neural network. The BrainChip neural network is faster still and takes far fewer CPU (Central Processing Unit) cycles, with just a little communication overhead, which means that the CPU is available for other tasks. The BrainChip network also responds much faster than a software network accelerating the performance of the entire system.

The BrainChip network is completely parallel, with no sequential dependencies. This means that the network does not slow down with increasing size.

Endorsed by the neuroscience community

A number of the world’s pre-eminent neuroscientists have endorsed the technology and are agreeing to jointly develop projects.

BrainChip has the potential to become the de facto standard for all autonomous learning technology and computer products.

Patented

BrainChip’s autonomous learning technology patent was granted on the 21st September 2008 (Patent number US 8,250,011 “Autonomous learning dynamic artificial neural computing device and brain inspired system”). BrainChip is the only company in the world to have achieved autonomous learning in a network of Digital Neurons without any software.

A prototype Spiking Neuron Adaptive Processor was designed as a ‘proof of concept’ chip.

The first tests were completed at the end of 2007 and this design was used as the foundation for the US patent application which was filed in 2008. BrainChip has also applied for a continuation-in-part patent filed in 2012, the “Method and System for creating Dynamic Neural Function Libraries”, US Patent Application 13/461,800 which is pending.

Van der Made doesn’t seem to have published any papers on this work and the description of the technology provided on the website is frustratingly vague. There are many acronyms for processes but no mention of what this hardware might be. For example, is it based on a memristor or some kind of atomic ionic switch or something else altogether?

It would be interesting to find out more but, presumably, van der Made wishes to withhold details. There are many companies following the same strategy while pursuing what they view as a business advantage.

* Artificial neuron link added June 26, 2015 at 1017 hours PST.

Gold and your neurons

Should you need any electrode implants for your neurons at some point in the future, it’s possible they could be coated with gold. Researchers at the Lawrence Livermore National Laboratory (LLNL) and at the University of California at Davis (UC Davis) have discovered that electrodes covered in nanoporous gold could prevent scarring (from a May 5, 2015 news item on Azonano),

A team of researchers from Lawrence Livermore and UC Davis have found that covering an implantable neural electrode with nanoporous gold could eliminate the risk of scar tissue forming over the electrode’s surface.

The team demonstrated that the nanostructure of nanoporous gold achieves close physical coupling of neurons by maintaining a high neuron-to-astrocyte surface coverage ratio. Close physical coupling between neurons and the electrode plays a crucial role in recording fidelity of neural electrical activity.

An April 30, 2015 LLNL news release, which originated the news item, details the scarring issue and offers more information about the proposed solution,

Neural interfaces (e.g., implantable electrodes or multiple-electrode arrays) have emerged as transformative tools to monitor and modify neural electrophysiology, both for fundamental studies of the nervous system, and to diagnose and treat neurological disorders. These interfaces require low electrical impedance to reduce background noise and close electrode-neuron coupling for enhanced recording fidelity.

Designing neural interfaces that maintain close physical coupling of neurons to an electrode surface remains a major challenge for both implantable and in vitro neural recording electrode arrays. An important obstacle in maintaining robust neuron-electrode coupling is the encapsulation of the electrode by scar tissue.

Typically, low-impedance nanostructured electrode coatings rely on chemical cues from pharmaceuticals or surface-immobilized peptides to suppress glial scar tissue formation over the electrode surface, which is an obstacle to reliable neuron−electrode coupling.

However, the team found that nanoporous gold, produced by an alloy corrosion process, is a promising candidate to reduce scar tissue formation on the electrode surface solely through topography by taking advantage of its tunable length scale.

“Our results show that nanoporous gold topography, not surface chemistry, reduces astrocyte surface coverage,” said Monika Biener, one of the LLNL authors of the paper.

Nanoporous gold has attracted significant interest for its use in electrochemical sensors, catalytic platforms, fundamental structure−property studies at the nanoscale and tunable drug release. It also features high effective surface area, tunable pore size, well-defined conjugate chemistry, high electrical conductivity and compatibility with traditional fabrication techniques.

“We found that nanoporous gold reduces scar coverage but also maintains high neuronal coverage in an in vitro neuron-glia co-culture model,” said Juergen Biener, the other LLNL author of the paper. “More broadly, the study demonstrates a novel surface for supporting neuronal cultures without the use of culture medium supplements to reduce scar overgrowth.”

Here’s a link to and a citation for the paper,

Nanoporous Gold as a Neural Interface Coating: Effects of Topography, Surface Chemistry, and Feature Size by Christopher A. R. Chapman, Hao Chen, Marianna Stamou, Juergen Biener, Monika M. Biener, Pamela J. Lein, and Erkin Seker. ACS Appl. Mater. Interfaces, 2015, 7 (13), pp 7093–7100 DOI: 10.1021/acsami.5b00410 Publication Date (Web): February 23, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

The researchers have provided this image to illustrate their work,

The image depicts a neuronal network growing on a novel nanotextured gold electrode coating. The topographical cues presented by the coating preferentially favor spreading of neurons as opposed to scar tissue. This feature has the potential to enhance the performance of neural interfaces. Image by Ryan Chen/LLNL.

Centralized depot (Wikipedia style) for data on neurons

The decades worth of data that has been collected about the billions of neurons in the brain is astounding. To help scientists make sense of this “brain big data,” researchers at Carnegie Mellon University have used data mining to create http://www.neuroelectro.org, a publicly available website that acts like Wikipedia, indexing physiological information about neurons.

opens a March 30, 2015 news item on ScienceDaily (Note: A link has been removed),

The site will help to accelerate the advance of neuroscience research by providing a centralized resource for collecting and comparing data on neuronal function. A description of the data available and some of the analyses that can be performed using the site are published online by the Journal of Neurophysiology.

A March 30, 2015 Carnegie Mellon University news release on EurekAlert, which originated the news item, describes, in more detail,  the endeavour and what the scientists hope to achieve,

The neurons in the brain can be divided into approximately 300 different types based on their physical and functional properties. Researchers have been studying the function and properties of many different types of neurons for decades. The resulting data is scattered across tens of thousands of papers in the scientific literature. Researchers at Carnegie Mellon turned to data mining to collect and organize these data in a way that will make possible, for the first time, new methods of analysis.

“If we want to think about building a brain or re-engineering the brain, we need to know what parts we’re working with,” said Nathan Urban, interim provost and director of Carnegie Mellon’s BrainHubSM neuroscience initiative. “We know a lot about neurons in some areas of the brain, but very little about neurons in others. To accelerate our understanding of neurons and their functions, we need to be able to easily determine whether what we already know about some neurons can be applied to others we know less about.”

Shreejoy J. Tripathy, who worked in Urban’s lab when he was a graduate student in the joint Carnegie Mellon/University of Pittsburgh Center for the Neural Basis of Cognition (CNBC) Program in Neural Computation, selected more than 10,000 published papers that contained physiological data describing how neurons responded to various inputs. He used text mining algorithms to “read” each of the papers. The text mining software found the portions of each paper that identified the type of neuron studied and then isolated the electrophysiological data related to the properties of that neuronal type. It also retrieved information about how each of the experiments in the literature was completed, and corrected the data to account for any differences that might be caused by the format of the experiment. Overall, Tripathy, who is now a postdoc at the University of British Columbia, was able to collect and standardize data for approximately 100 different types of neurons, which he published on the website http://www.neuroelectro.org.
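To give a flavour of what that extraction step involves, here is a deliberately crude sketch (Python; the example sentence, neuron names, patterns and units are all invented for illustration and are far simpler than the actual NeuroElectro pipeline) of pulling a neuron type and two electrophysiological values out of a passage of text.

import re

# Toy illustration of text-mined electrophysiology extraction. The example
# sentence, neuron list and regular expressions below are invented; the real
# pipeline uses much more robust parsing, unit handling and human curation.

SENTENCE = ("CA1 pyramidal neurons had a resting membrane potential of "
            "-65.2 mV and an input resistance of 120 MOhm.")

NEURON_TYPES = ["CA1 pyramidal neuron", "midbrain dopamine neuron", "mitral cell"]

PATTERNS = {
    "resting membrane potential (mV)":
        re.compile(r"resting membrane potential of (-?\d+(?:\.\d+)?)\s*mV"),
    "input resistance (MOhm)":
        re.compile(r"input resistance of (-?\d+(?:\.\d+)?)\s*MOhm"),
}

def extract(text):
    """Return the neuron type and any electrophysiological values found in the text."""
    neuron = next((n for n in NEURON_TYPES if n in text), None)   # crude type lookup
    values = {name: float(m.group(1))
              for name, pattern in PATTERNS.items()
              if (m := pattern.search(text))}
    return neuron, values

print(extract(SENTENCE))
# ('CA1 pyramidal neuron', {'resting membrane potential (mV)': -65.2,
#                           'input resistance (MOhm)': 120.0})

Standardizing the extracted numbers across tens of thousands of papers, and correcting for differences in how each experiment was run, is where most of the real work described above lies.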

Since the data on the website was collected using text mining, the researchers realized that it was likely to contain errors related to extraction and standardization. Urban and his group validated much of the data, but they also created a mechanism that allows site users to flag data for further evaluation. Users also can contribute new data with minimal intervention from site administrators, similar to Wikipedia.

“It’s a dynamic environment in which people can collect, refine and add data,” said Urban, who is the Dr. Frederick A. Schwertz Distinguished Professor of Life Sciences and a member of the CNBC. “It will be a useful resource to people doing neuroscience research all over the world.”

Ultimately, the website will help researchers find groups of neurons that share the same physiological properties, which could provide a better understanding of how a neuron functions. For example, if a researcher finds that a type of neuron in the brain’s neocortex fires spontaneously, they can look up other neurons that fire spontaneously and access research papers that address this type of neuron. Using that information, they can quickly form hypotheses about whether or not the same mechanisms are at play in both the newly discovered and previously studied neurons.

To demonstrate how neuroelectro.org could be used, the researchers compared the electrophysiological data from more than 30 neuron types that had been most heavily studied in the literature. These included pyramidal neurons in the hippocampus, which are responsible for memory, and dopamine neurons in the midbrain, thought to be responsible for reward-seeking behaviors and addiction, among others. The site was able to find many expected similarities between the different types of neurons, and some similarities that were a surprise to researchers. Those surprises represent promising areas for future research.

In ongoing work, the Carnegie Mellon researchers are comparing the data on neuroelectro.org with other kinds of data, including data on neurons’ patterns of gene expression. For example, Urban’s group is using another publicly available resource, the Allen Brain Atlas, to find whether groups of neurons with similar electrical function have similar gene expression.

“It would take a lot of time, effort and money to determine both the physiological properties of a neuron and its gene expression,” Urban said. “Our website will help guide this research, making it much more efficient.”

The researchers have produced a brief video describing neurons and their project.

Here’s a link to and a citation for the researchers’ paper,

Brain-wide analysis of electrophysiological diversity yields novel categorization of mammalian neuron types by Shreejoy J Tripathy, Shawn D. Burton, Matthew Geramita, Richard C. Gerkin, and Nathaniel N. Urban. Journal of Neurophysiology Published 25 March 2015 DOI: 10.1152/jn.00237.2015

This paper is behind a paywall.