
Announcing the ‘memtransistor’

Yet another advance toward ‘brainlike’ computing (how many times have I written this or a variation thereof in the last 10 years? See: Dexter Johnson’s take on the situation at the end of this post): Northwestern University announced their latest memristor research in a February 21, 2018 news item on Nanowerk,

Computer algorithms might be performing brain-like functions, such as facial recognition and language translation, but the computers themselves have yet to operate like brains.

“Computers have separate processing and memory storage units, whereas the brain uses neurons to perform both functions,” said Northwestern University’s Mark C. Hersam. “Neural networks can achieve complicated computation with significantly lower energy consumption compared to a digital computer.”

A February 21, 2018 Northwestern University news release (also on EurekAlert), which originated the news item, provides more information about the latest work from this team,

In recent years, researchers have searched for ways to make computers more neuromorphic, or brain-like, in order to perform increasingly complicated tasks with high efficiency. Now Hersam, a Walter P. Murphy Professor of Materials Science and Engineering in Northwestern’s McCormick School of Engineering, and his team are bringing the world closer to realizing this goal.

The research team has developed a novel device called a “memtransistor,” which operates much like a neuron by performing both memory and information processing. With combined characteristics of a memristor and transistor, the memtransistor also encompasses multiple terminals that operate more similarly to a neural network.

Supported by the National Institute of Standards and Technology and the National Science Foundation, the research was published online today, February 22 [2018], in Nature. Vinod K. Sangwan and Hong-Sub Lee, postdoctoral fellows advised by Hersam, served as the paper’s co-first authors.

The memtransistor builds upon work published in 2015, in which Hersam, Sangwan, and their collaborators used single-layer molybdenum disulfide (MoS2) to create a three-terminal, gate-tunable memristor for fast, reliable digital memory storage. Memristors, short for “memory resistors,” are circuit elements that “remember” the voltage previously applied to them. Typical memristors are two-terminal electronic devices, which can only control one voltage channel. By transforming the memristor into a three-terminal device, Hersam paved the way for memristors to be used in more complex electronic circuits and systems, such as neuromorphic computing.

To develop the memtransistor, Hersam’s team again used atomically thin MoS2 with well-defined grain boundaries, which influence the flow of current. Similar to the way fibers are arranged in wood, atoms are arranged into ordered domains – called “grains” – within a material. When a large voltage is applied, the grain boundaries facilitate atomic motion, causing a change in resistance.

“Because molybdenum disulfide is atomically thin, it is easily influenced by applied electric fields,” Hersam explained. “This property allows us to make a transistor. The memristor characteristics come from the fact that the defects in the material are relatively mobile, especially in the presence of grain boundaries.”

But unlike his previous memristor, which used individual, small flakes of MoS2, Hersam’s memtransistor makes use of a continuous film of polycrystalline MoS2 that comprises a large number of smaller flakes. This enabled the research team to scale up the device from one flake to many devices across an entire wafer.

“When the length of the device is larger than the individual grain size, you are guaranteed to have grain boundaries in every device across the wafer,” Hersam said. “Thus, we see reproducible, gate-tunable memristive responses across large arrays of devices.”

After fabricating memtransistors uniformly across an entire wafer, Hersam’s team added additional electrical contacts. Typical transistors and Hersam’s previously developed memristor each have three terminals. In their new paper, however, the team realized a seven-terminal device, in which one terminal controls the current among the other six terminals.

“This is even more similar to neurons in the brain,” Hersam said, “because in the brain, we don’t usually have one neuron connected to only one other neuron. Instead, one neuron is connected to multiple other neurons to form a network. Our device structure allows multiple contacts, which is similar to the multiple synapses in neurons.”

Next, Hersam and his team are working to make the memtransistor faster and smaller. Hersam also plans to continue scaling up the device for manufacturing purposes.

“We believe that the memtransistor can be a foundational circuit element for new forms of neuromorphic computing,” he said. “However, making dozens of devices, as we have done in our paper, is different than making a billion, which is done with conventional transistor technology today. Thus far, we do not see any fundamental barriers that will prevent further scale up of our approach.”

The researchers have made this illustration available,

Caption: This is the memtransistor symbol overlaid on an artistic rendering of a hypothetical circuit layout in the shape of a brain. Credit: Hersam Research Group

Here’s a link to and a citation for the paper,

Multi-terminal memtransistors from polycrystalline monolayer molybdenum disulfide by Vinod K. Sangwan, Hong-Sub Lee, Hadallia Bergeron, Itamar Balla, Megan E. Beck, Kan-Sheng Chen, & Mark C. Hersam. Nature volume 554, pages 500–504 (22 February 2018) doi:10.1038/nature25747 Published online: 21 February 2018

This paper is behind a paywall.

The team’s earlier work referenced in the news release was featured here in an April 10, 2015 posting.

Dexter Johnson

From a Feb. 23, 2018 posting by Dexter Johnson on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

While this all seems promising, one of the big shortcomings in neuromorphic computing has been that it doesn’t mimic the brain in a very important way. In the brain, for every neuron there are a thousand synapses—the connections through which electrical signals pass between neurons. This poses a problem because a transistor has only a single output terminal, hardly an accommodating architecture for multiplying signals.

Now researchers at Northwestern University, led by Mark Hersam, have developed a new device that combines memristors—two-terminal non-volatile memory devices based on resistance switching—with transistors to create what Hersam and his colleagues have dubbed a “memtransistor” that performs both memory storage and information processing.

This most recent research builds on work that Hersam and his team conducted back in 2015 in which the researchers developed a three-terminal, gate-tunable memristor that operated like a kind of synapse.

While this work was recognized as mimicking the low-power computing of the human brain, critics didn’t really believe that it was acting like a neuron since it could only transmit a signal from one artificial neuron to another. This was far short of a human brain that is capable of making tens of thousands of such connections.

“Traditional memristors are two-terminal devices, whereas our memtransistors combine the non-volatility of a two-terminal memristor with the gate-tunability of a three-terminal transistor,” said Hersam to IEEE Spectrum. “Our device design accommodates additional terminals, which mimic the multiple synapses in neurons.”

Hersam believes that these unique attributes of these multi-terminal memtransistors are likely to present a range of new opportunities for non-volatile memory and neuromorphic computing.

If you have the time and the interest, Dexter’s post provides more context,

Nano-neurons from a French-Japanese-US research team

This news about nano-neurons comes from a Nov. 8, 2017 news item on defenceweb.co.za,

Researchers from the Joint Physics Unit CNRS/Thales, the Nanosciences and Nanotechnologies Centre (CNRS/Université Paris Sud), in collaboration with American and Japanese researchers, have developed the world’s first artificial nano-neuron with the ability to recognise numbers spoken by different individuals. Just like the recent development of electronic synapses described in a Nature article, this electronic nano-neuron is a breakthrough in artificial intelligence and its potential applications.

A Sept. 19, 2017 Thales press release, which originated the news item, expands on the theme,

The latest artificial intelligence algorithms are able to recognise visual and vocal cues with high levels of performance. But running these programs on conventional computers uses 10,000 times more energy than the human brain. To reduce electricity consumption, a new type of computer is needed. It is inspired by the human brain and comprises vast numbers of miniaturised neurons and synapses. Until now, however, it had not been possible to produce a stable enough artificial nano-neuron which would process the information reliably.

Today [Sept. 19, 2017 or July 27, 2017 when the paper was published in Nature?], for the first time, researchers have developed a nano-neuron with the ability to recognise numbers spoken by different individuals with 99.6% accuracy. This breakthrough relied on the use of an exceptionally stable magnetic oscillator. Each gyration of this nano-compass generates an electrical output, which effectively imitates the electrical impulses produced by biological neurons. In the next few years, these magnetic nano-neurons could be interconnected via artificial synapses, such as those recently developed, for real-time big data analytics and classification.
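The scheme behind the digit-recognition result in the Nature paper cited below is reservoir computing: the oscillator’s rich transient dynamics project the input into a high-dimensional space, and only a simple linear readout is trained. Here is a software-only echo-state sketch of that idea, with sine bursts standing in for spoken digits; everything in it is illustrative, not the spintronic hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny echo-state reservoir: a fixed random recurrent network whose
# transient dynamics turn an input stream into a rich feature vector.
# Only the linear readout is trained (here with a least-squares fit).
N = 50                                   # reservoir size
W_in = rng.uniform(-0.5, 0.5, (N, 1))    # input weights (fixed, random)
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale for stability

def reservoir_states(signal):
    x = np.zeros(N)
    states = []
    for u in signal:
        x = np.tanh(W_in[:, 0] * u + W @ x)      # nonlinear state update
        states.append(x.copy())
    return np.array(states)

# Two "spoken digit" stand-ins: sine bursts of different frequencies.
t = np.linspace(0, 1, 100)
class_a = np.sin(2 * np.pi * 3 * t)
class_b = np.sin(2 * np.pi * 7 * t)

# Train a linear readout on the final reservoir states.
X = np.array([reservoir_states(class_a)[-1], reservoir_states(class_b)[-1]])
y = np.array([0.0, 1.0])
w_out, *_ = np.linalg.lstsq(np.c_[X, np.ones(2)], y, rcond=None)

def classify(signal):
    s = reservoir_states(signal)[-1]
    return int(round(float(np.r_[s, 1.0] @ w_out)))

print(classify(class_a), classify(class_b))  # → 0 1
```

The hardware version replaces the random recurrent network with the physical dynamics of one time-multiplexed oscillator, but the division of labour (fixed nonlinear dynamics, trained linear readout) is the same.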

The project is a collaborative initiative between fundamental research laboratories and applied research partners. The long-term goal is to produce extremely energy-efficient miniaturised chips with the intelligence needed to learn from and adapt to the ever-changing and ambiguous situations of the real world. These electronic chips will have many practical applications, such as providing smart guidance to robots or autonomous vehicles, helping doctors in their diagnoses and improving medical prostheses. This project included researchers from the Joint Physics Unit CNRS/Thales, the AIST, the CNS-NIST, and the Nanosciences and Nanotechnologies Centre (CNRS/Université Paris-Sud).

About the CNRS
The French National Centre for Scientific Research is Europe’s largest public research institution. It produces knowledge for the benefit of society. With nearly 32,000 employees, a budget exceeding 3.2 billion euros in 2016, and offices throughout France, the CNRS is present in all scientific fields through its 1100 laboratories. With 21 Nobel laureates and 12 Fields Medal winners, the organization has a long tradition of excellence. It carries out research in mathematics, physics, information sciences and technologies, nuclear and particle physics, Earth sciences and astronomy, chemistry, biological sciences, the humanities and social sciences, engineering and the environment.

About the Université Paris-Saclay (France)
To meet global demand for higher education, research and innovation, 19 of France’s most renowned establishments have joined together to form the Université Paris-Saclay. The new university provides world-class teaching and research opportunities, from undergraduate courses to graduate schools and doctoral programmes, across most disciplines including life and natural sciences as well as social sciences. With 9,000 masters students, 5,500 doctoral candidates, an equivalent number of engineering students and an extensive undergraduate population, some 65,000 people now study at member establishments.

About the Center for Nanoscale Science & Technology (Maryland, USA)
The CNST is a national user facility purposely designed to accelerate innovation in nanotechnology-based commerce. Its mission is to operate a national, shared resource for nanoscale fabrication and measurement and develop innovative nanoscale measurement and fabrication capabilities to support researchers from industry, academia, NIST and other government agencies in advancing nanoscale technology from discovery to production. The Center, located in the Advanced Measurement Laboratory Complex on NIST’s Gaithersburg, MD campus, disseminates new nanoscale measurement methods by incorporating them into facility operations, collaborating and partnering with others and providing international leadership in nanotechnology.

About the National Institute of Advanced Industrial Science and Technology (Japan)
The National Institute of Advanced Industrial Science and Technology (AIST), one of the largest public research institutes in Japan, focuses on the creation and practical realization of technologies useful to Japanese industry and society, and on bridging the gap between innovative technological seeds and commercialization. For this, AIST is organized into 7 domains (Energy and Environment, Life Science and Biotechnology, Information Technology and Human Factors, Materials and Chemistry, Electronics and Manufacturing, Geological Survey of Japan, and Metrology and Measurement Standards).

About the Centre for Nanoscience and Nanotechnology (France)
Established on 1 June 2016, the Centre for Nanosciences and Nanotechnologies (C2N) was launched in the wake of the joint CNRS and Université Paris-Sud decision to merge and gather on the same campus site the Laboratory for Photonics and Nanostructures (LPN) and the Institut d’Electronique Fondamentale (IEF). Its location in the École Polytechnique district of the Paris-Saclay campus will be completed in 2017 while the new C2N buildings are under construction. The centre conducts research in material science, nanophotonics, nanoelectronics, nanobiotechnologies and microsystems, as well as in nanotechnologies.

There is a video featuring researcher Julie Grollier discussing their work but you will need your French language skills.

(If you’re interested, there is an English language video published on YouTube on Feb. 19, 2017 with Julie Grollier speaking at the World Economic Forum more generally about neuromorphic computing: https://www.youtube.com/watch?v=Sm2BGkTYFeQ)

Here’s a link to and a citation for the team’s July 2017 paper,

Neuromorphic computing with nanoscale spintronic oscillators by Jacob Torrejon, Mathieu Riou, Flavio Abreu Araujo, Sumito Tsunegi, Guru Khalsa, Damien Querlioz, Paolo Bortolotti, Vincent Cros, Kay Yakushiji, Akio Fukushima, Hitoshi Kubota, Shinji Yuasa, Mark D. Stiles, & Julie Grollier. Nature 547, 428–431 (27 July 2017) doi:10.1038/nature23011 Published online 26 July 2017

This paper is behind a paywall.

Memristors at Masdar

The Masdar Institute of Science and Technology (Abu Dhabi, United Arab Emirates; Masdar Institute Wikipedia entry) featured its work with memristors in an Oct. 1, 2017 Masdar Institute press release by Erica Solomon (for anyone who’s interested, I have a simple description of memristors and links to more posts about them after the press release),

Researchers Develop New Memristor Prototype Capable of Performing Complex Operations at High-Speed and Low Power, Could Lead to Advancements in Internet of Things, Portable Healthcare Sensing and other Embedded Technologies

Computer circuits in development at the Khalifa University of Science and Technology could make future computers much more compact, efficient and powerful thanks to advancements being made in memory technologies that combine processing and memory storage functions into one densely packed “memristor.”

Enabling faster, smaller and ultra-low-power computers with memristors could have a big impact on embedded technologies, which enable Internet of Things (IoT), artificial intelligence, and portable healthcare sensing systems, says Dr. Baker Mohammad, Associate Professor of Electrical and Computer Engineering. Dr. Mohammad co-authored a book on memristor technologies with Class of 2017 PhD graduate Heba Abunahla, which has just been released by Springer, a leading global scientific publisher of books and journals. The book, titled Memristor Technology: Synthesis and Modeling for Sensing and Security Applications, provides readers with a single-source guide to fabricating, characterizing and modeling memristor devices for sensing applications.

The pair also contributed to a paper on memristor research that was published in IEEE Transactions on Circuits and Systems I: Regular Papers earlier this month with Class of 2017 MSc graduate Muath Abu Lebdeh and Dr. Mahmoud Al-Qutayri, Professor of Electrical and Computer Engineering. PhD student Yasmin Halawani is also an active member of Dr. Mohammad’s research team.

Conventional computers rely on energy and time-consuming processes to move information back and forth between the computer central processing unit (CPU) and the memory, which are separately located. A memristor, which is an electrical resistor that remembers how much current flows through it, can bridge the gap between computation and storage. Instead of fetching data from the memory and sending that data to the CPU where it is then processed, memristors have the potential to store and process data simultaneously.
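The “store and process at the same time” idea is most often pictured as a memristor crossbar: stored conductances are the data, and Ohm’s and Kirchhoff’s laws perform a matrix-vector multiply in a single physical step. A sketch with illustrative (not measured) values:

```python
import numpy as np

# A memristor crossbar computes a matrix-vector product "in place":
# each cell's conductance G[i][j] stores a weight, input voltages are
# applied to the rows, and the current collected on each column is
# I_j = sum_i V_i * G[i][j]  (Ohm's law plus Kirchhoff's current law).
G = np.array([[1.0e-6, 2.0e-6],   # conductances in siemens (illustrative)
              [3.0e-6, 4.0e-6]])
V = np.array([0.2, 0.1])          # input voltages in volts

I = V @ G                         # column currents, in amperes
print(I)                          # storage and multiply happen in one step
```

No data shuttles between a memory and a processor here; the array that stores the weights is the same array that does the arithmetic, which is the latency and energy win Dr. Mohammad describes.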

“Memristors allow computers to perform many operations at the same time without having to move data around, thereby reducing latency, energy requirements, costs and chip size,” Dr. Mohammad explained. “We are focused on extending the logic gate design of the current memristor architecture with one that leads to even greater reduction of latency, energy dissipation and size.”

Logic gates perform a logical operation on one or more binary inputs and typically produce a single binary output. That is why they are at the heart of what makes a computer work, allowing a CPU to carry out a given set of instructions, which are received as electrical signals, using one or a combination of the seven basic logical operations: AND, OR, NOT, XOR, XNOR, NAND and NOR.

The team’s latest work is aimed at advancing a memristor’s ability to perform a complex logic operation, known as the XNOR (Exclusive NOR) logic gate function, which is the most complex operation among the seven basic logic gate types.
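In ordinary software the seven operations are trivial; the hard part in the Khalifa work is realizing them with resistances rather than voltages. For orientation, here is the plain Boolean version, including the XNOR the team targets:

```python
# The seven basic logic operations on binary inputs. XNOR outputs 1
# exactly when its inputs agree, and can be composed from simpler
# gates: XNOR(a, b) == NOT(XOR(a, b)).
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NOT(a):     return 1 - a
def XOR(a, b):  return a ^ b
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))
def XNOR(a, b): return NOT(XOR(a, b))

# XNOR truth table over inputs (0,0), (0,1), (1,0), (1,1):
print([XNOR(a, b) for a in (0, 1) for b in (0, 1)])  # [1, 0, 0, 1]
```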

Designing memristive logic gates is difficult, as they require that each electrical input and output be in the form of electrical resistance rather than electrical voltage.

“However, we were able to successfully design an XNOR logic gate prototype with a novel structure, by layering bipolar and unipolar memristor types in a novel heterogeneous structure, which led to a reduction in latency and energy consumption for a memristive XNOR logic circuit gate by 50% compared to state-of-the-art stateful logic proposed by leading research institutes,” Dr. Mohammad revealed.

The team’s current work builds on five years of research in the field of memristors, which is expected to reach a market value of US$384 million by 2025, according to a recent report from Research and Markets. Up to now, the team has fabricated and characterized several memristor prototypes, assessing how different design structures influence efficiency and inform potential applications. Some innovative memristor technology applications the team discovered include machine vision, radiation sensing and diabetes detection. Two patents have already been issued by the US Patents and Trademark Office (USPTO) for novel memristor designs invented by the team, with two additional patents pending.

Their robust research efforts have also led to the publication of several papers on the technology in high impact journals, including The Journal of Physical Chemistry, Materials Chemistry and Physics, and IEEE TCAS. This strong technology base paved the way for undergraduate senior students Reem Aldahmani, Amani Alshkeili, and Reem Jassem Jaffar to build novel and efficient memristive sensing prototypes.

The memristor research is also set to get an additional boost thanks to the new University merger, which Dr. Mohammad believes could help expedite the team’s research and development efforts through convenient and continuous access to the wider range of specialized facilities and tools the new university has on offer.

The team’s prototype memristors are now in the laboratory prototype stage, and Dr. Mohammad plans to initiate discussions for internal partnership opportunities with the Khalifa University Robotics Institute, followed by external collaboration with leading semiconductor companies such as Abu Dhabi-owned GlobalFoundries, to accelerate the transfer of his team’s technology to the market.

With initial positive findings and the promise of further development through the University’s enhanced portfolio of research facilities, this project is a perfect demonstration of how the Khalifa University of Science and Technology is pushing the envelope of electronics and semiconductor technologies to help transform Abu Dhabi into a high-tech hub for research and entrepreneurship.

h/t Oct. 4, 2017 Nanowerk news item

Slightly restating it from the press release, a memristor is a nanoscale electrical component which mimics neural plasticity. ‘Memristor’ combines the words ‘memory’ and ‘resistor’.

For those who’d like a little more, an electrical circuit is traditionally made up of three passive components: resistors, capacitors, and inductors. The resistor is the element that resists the flow of electric current. As for how this relates to the memristor (from the Memristor Wikipedia entry; Note: Links have been removed),

The memristor’s electrical resistance is not constant but depends on the history of current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has flowed in what direction through it in the past; the device remembers its history — the so-called non-volatility property.[2] When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again.

The memristor could lead to more energy-saving devices but much of the current (pun noted) interest lies in its similarity to neural plasticity and its potential application in neuromorphic engineering (brainlike computing).
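To see that history-dependence in action, here is a small simulation of the simplified linear dopant-drift memristor model, in the spirit of the HP Labs device description; all parameter values are illustrative, not taken from any paper:

```python
import numpy as np

# Simplified linear dopant-drift memristor model: resistance depends on
# the state variable w (0..1), which integrates the charge that has
# flowed through the device. When the drive stops, w, and hence the
# resistance, stays where it is: non-volatility.
R_on, R_off = 100.0, 16e3     # fully doped / undoped resistance (ohms)
mu, D = 1e-14, 1e-8           # dopant mobility (m^2/(V.s)), film thickness (m)

def simulate(voltage, dt=1e-4, w0=0.5):
    w, history = w0, []
    for v in voltage:
        R = R_on * w + R_off * (1 - w)     # mixed resistance
        i = v / R
        w += mu * R_on / D**2 * i * dt     # charge moves the doped front
        w = min(max(w, 0.0), 1.0)
        history.append(R)
    return history

t = np.linspace(0, 1, 10_000)
R = simulate(np.sin(2 * np.pi * t))        # one cycle of sinusoidal drive
print(f"R swings between {min(R):.0f} and {max(R):.0f} ohms")
```

Driving the model with a sinusoid and plotting current against voltage would produce the pinched hysteresis loop that is the memristor’s signature.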

Here’s a sampling of some of the more recent memristor postings on this blog:

August 24, 2017: Neuristors and brainlike computing

June 28, 2017: Dr. Wei Lu and bio-inspired ‘memristor’ chips

May 2, 2017: Predicting how a memristor functions

December 30, 2016: Changing synaptic connectivity with a memristor

December 5, 2016: The memristor as computing device

November 1, 2016: The memristor as the ‘missing link’ in bioelectronic medicine?

You can find more by using ‘memristor’ as the search term in the blog search function or on the search engine of your choice.

Mott memristor

Mott memristors (mentioned in my Aug. 24, 2017 posting about neuristors and brainlike computing) get fuller treatment in an Oct. 9, 2017 posting by Samuel K. Moore on the Nanoclast blog (found on the IEEE [Institute of Electrical and Electronics Engineers] website). Note 1: Links have been removed; Note 2: I quite like Moore’s writing style but he’s not for the impatient reader,

When you’re really harried, you probably feel like your head is brimful of chaos. You’re pretty close. Neuroscientists say your brain operates in a regime termed the “edge of chaos,” and it’s actually a good thing. It’s a state that allows for fast, efficient analog computation of the kind that can solve problems that grow vastly more difficult as they become bigger in size.

The trouble is, if you’re trying to replicate that kind of chaotic computation with electronics, you need an element that both acts chaotically—how and when you want it to—and could scale up to form a big system.

“No one had been able to show chaotic dynamics in a single scalable electronic device,” says Suhas Kumar, a researcher at Hewlett Packard Labs, in Palo Alto, Calif. Until now, that is.

He, John Paul Strachan, and R. Stanley Williams recently reported in the journal Nature that a particular configuration of a certain type of memristor contains that seed of controlled chaos. What’s more, when they simulated wiring these up into a type of circuit called a Hopfield neural network, the circuit was capable of solving a ridiculously difficult problem—1,000 instances of the traveling salesman problem—at a rate of 10 trillion operations per second per watt.
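The Hopfield network the HP team simulated is an analog optimizer; the textbook discrete version below shows the family’s defining trick, descending an energy function to settle into a stored state. This is a generic sketch, not the authors’ circuit:

```python
import numpy as np

# Classic binary Hopfield network: store patterns in a symmetric weight
# matrix (Hebbian rule), then let asynchronous updates descend the
# network's energy function until it settles on the nearest stored
# pattern. The paper's version is a continuous analog optimizer; this
# is the discrete textbook cousin.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                    # no self-connections

def recall(state, sweeps=20):
    s = state.copy()
    for _ in range(sweeps):
        for i in range(n):                # asynchronous neuron updates
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = patterns[0].copy()
noisy[0] *= -1                            # corrupt one bit
print(recall(noisy))                      # settles back to the first pattern
```

For optimization problems like the traveling salesman, the weights are chosen so that low-energy states encode good tours instead of memorized patterns; the settling dynamics are the same.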

(It’s not an apples-to-apples comparison, but the world’s most powerful supercomputer as of June 2017 managed 93,015 trillion floating point operations per second but consumed 15 megawatts doing it. So about 6 billion operations per second per watt.)
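Moore’s parenthetical arithmetic is easy to verify:

```python
# Checking Moore's figure: the June 2017 TOP500 leader's throughput
# divided by its power draw.
flops = 93_015e12     # 93,015 trillion floating point operations per second
watts = 15e6          # 15 megawatts
print(f"{flops / watts:.1e} ops/s per watt")  # ~6.2e9, about 6 billion
```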

The device in question is called a Mott memristor. Memristors generally are devices that hold a memory, in the form of resistance, of the current that has flowed through them. The most familiar type is called resistive RAM (or ReRAM or RRAM, depending on who’s asking). Mott memristors have an added ability in that they can also reflect a temperature-driven change in resistance.

The HP Labs team made their memristor from an 8-nanometer-thick layer of niobium dioxide (NbO2) sandwiched between two layers of titanium nitride. The bottom titanium nitride layer was in the form of a 70-nanometer wide pillar. “We showed that this type of memristor can generate chaotic and nonchaotic signals,” says Williams, who invented the memristor based on theory by Leon Chua.
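What “chaotic and nonchaotic signals” from a single tunable element looks like can be illustrated with the simplest chaotic system there is, the logistic map. This is an analogy for parameter-controlled chaos, not a model of the NbO2 physics:

```python
# The logistic map x -> r*x*(1-x) moves between orderly and chaotic
# behaviour as the single parameter r is tuned, a standard illustration
# of "controlled chaos" (an analogy only, not the device equations).
def orbit(r, x0=0.2, warmup=1000, keep=8):
    x = x0
    for _ in range(warmup):          # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

print(orbit(3.2))   # settles into a repeating period-2 cycle
print(orbit(3.9))   # wanders chaotically, never repeating
```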

(The traveling salesman problem is one of these. In it, the salesman must find the shortest route that lets him visit all of his customers’ cities, without going through any of them twice. It’s a difficult problem because it becomes exponentially more difficult to solve with each city you add.)
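A brute-force solver makes that scaling concrete; every added city multiplies the number of tours to check:

```python
from itertools import permutations
from math import dist, factorial

# Brute-force traveling salesman: try every ordering of the cities.
# With a fixed starting city there are (n-1)! orderings, which is why
# the problem gets out of hand so quickly.
def shortest_tour(cities):
    best, best_len = None, float("inf")
    start, rest = cities[0], cities[1:]
    for perm in permutations(rest):
        tour = (start, *perm, start)
        length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best, best_len = tour, length
    return best, best_len

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour, length = shortest_tour(cities)
print(length)            # 4.0: walk around the unit square
print(factorial(19))     # orderings to try for a mere 20 cities
```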

Here’s what the niobium dioxide-based Mott memristor looks like,

Photo: Suhas Kumar/Hewlett Packard Labs
A micrograph shows the construction of a Mott memristor composed of an 8-nanometer-thick layer of niobium dioxide between two layers of titanium nitride.

Here’s a link to and a citation for the paper,

Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing by Suhas Kumar, John Paul Strachan & R. Stanley Williams. Nature 548, 318–321 (17 August 2017) doi:10.1038/nature23307 Published online: 09 August 2017

This paper is behind a paywall.

Neuristors and brainlike computing

As you might suspect, a neuristor is based on a memristor. (For a description of a memristor there’s this Wikipedia entry, and you can search this blog with the tags ‘memristor’ and ‘neuromorphic engineering’ for more.)

Being new to neuristors, I needed a little more information before reading the latest and found this Dec. 24, 2012 article by John Timmer for Ars Technica (Note: Links have been removed),

Computing hardware is composed of a series of binary switches; they’re either on or off. The other piece of computational hardware we’re familiar with, the brain, doesn’t work anything like that. Rather than being on or off, individual neurons exhibit brief spikes of activity, and encode information in the pattern and timing of these spikes. The differences between the two have made it difficult to model neurons using computer hardware. In fact, the recent, successful generation of a flexible neural system required that each neuron be modeled separately in software in order to get the sort of spiking behavior real neurons display.

But researchers may have figured out a way to create a chip that spikes. The people at HP labs who have been working on memristors have figured out a combination of memristors and capacitors that can create a spiking output pattern. Although these spikes appear to be more regular than the ones produced by actual neurons, it might be possible to create versions that are a bit more variable than this one. And, more significantly, it should be possible to fabricate them in large numbers, possibly right on a silicon chip.

The key to making the devices is something called a Mott insulator. These are materials that would normally be able to conduct electricity, but are unable to because of interactions among their electrons. Critically, these interactions weaken with elevated temperatures. So, by heating a Mott insulator, it’s possible to turn it into a conductor. In the case of the material used here, NbO2, the heat is supplied by resistance itself. By applying a voltage to the NbO2 in the device, it becomes a resistor, heats up, and, when it reaches a critical temperature, turns into a conductor, allowing current to flow through. But, given the chance to cool off, the device will return to its resistive state. Formally, this behavior is described as a memristor.
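Timmer’s heat-up/cool-down loop can be caricatured in a few lines: a device whose resistance collapses above a threshold temperature, fed through a series resistor so that conduction starves it of heating power. With made-up parameter values (nothing here is a measured NbO2 number), the state flips repeatedly, i.e. a relaxation oscillation:

```python
# Toy relaxation oscillator based on the thermal switching described
# above: Joule heating warms the device, its resistance collapses above
# a threshold temperature, and the series resistor then starves it of
# power so it cools and switches back. All values are illustrative.
V, R_series = 1.0, 1e4            # drive voltage and series resistor
R_ins, R_cond = 1e5, 1e2          # insulating / conducting resistance (ohms)
T_amb = 300.0
T_on, T_off = 400.0, 380.0        # switching thresholds (with hysteresis)
C_th, G_th = 1e-9, 3e-8           # thermal capacitance and conductance
dt = 1e-6                         # time step (s)

T, conducting, switches = T_amb, False, 0
for _ in range(200_000):
    R = R_cond if conducting else R_ins
    i = V / (R_series + R)
    power = i * i * R                             # Joule heating in the device
    T += dt * (power - G_th * (T - T_amb)) / C_th  # Newton cooling
    if conducting and T < T_off:
        conducting, switches = False, switches + 1
    elif not conducting and T > T_on:
        conducting, switches = True, switches + 1
print(f"switched {switches} times")               # sustained oscillation
```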

To get the sort of spiking behavior seen in a neuron, the authors turned to a simplified model of neurons based on the proteins that allow them to transmit electrical signals. When a neuron fires, sodium channels open, allowing ions to rush into a nerve cell, and changing the relative charges inside and outside its membrane. In response to these changes, potassium channels then open, allowing different ions out, and restoring the charge balance. That shuts the whole thing down, and allows various pumps to start restoring the initial ion balance.
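That sodium/potassium channel dance is commonly reduced to the two-variable FitzHugh–Nagumo model: a fast voltage variable and a slow recovery variable. It is a standard caricature of a spiking neuron, not the equations of the Pickett device:

```python
# FitzHugh-Nagumo model: v is the fast membrane voltage (the sodium-like
# variable), w the slow recovery variable (the potassium-like one).
# Under a constant input current the model fires repeatedly.
a, b, eps, I_ext = 0.7, 0.8, 0.08, 0.5
dt, steps = 0.05, 20_000

v, w, spikes, above = -1.0, 1.0, 0, False
for _ in range(steps):
    v += dt * (v - v**3 / 3 - w + I_ext)   # fast excitation
    w += dt * eps * (v + a - b * w)        # slow recovery
    if v > 1.0 and not above:              # count upward threshold crossings
        spikes += 1
    above = v > 1.0
print(f"{spikes} spikes")                  # periodic, tonic-like firing
```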

Here’s a link to and a citation for the research paper described in Timmer’s article,

A scalable neuristor built with Mott memristors by Matthew D. Pickett, Gilberto Medeiros-Ribeiro, & R. Stanley Williams. Nature Materials 12, 114–117 (2013) doi:10.1038/nmat3510 Published online 16 December 2012

This paper is behind a paywall.

A July 28, 2017 news item on Nanowerk provides an update on neuristors,

A future android brain like that of Star Trek’s Commander Data might contain neuristors, multi-circuit components that emulate the firings of human neurons.

Neuristors already exist today in labs, in small quantities, and to fuel the quest to boost neuristors’ power and numbers for practical use in brain-like computing, the U.S. Department of Defense has awarded a $7.1 million grant to a research team led by the Georgia Institute of Technology. The researchers will mainly work on new metal oxide materials that buzz electronically at the nanoscale to emulate the way human neural networks buzz with electric potential on a cellular level.

A July 28, 2017 Georgia Tech news release, which originated the news item, delves further into neuristors and the proposed work leading to an artificial retina that can learn (!). This was not where I was expecting things to go,

But let’s walk expectations back from the distant sci-fi future into the scientific present: The research team is developing its neuristor materials to build an intelligent light sensor, and not some artificial version of the human brain, which would require hundreds of trillions of circuits.

“We’re not going to reach circuit complexities of that magnitude, not even a tenth,” said Alan Doolittle, a professor at Georgia Tech’s School of Electrical and Computer Engineering. “Also, currently science doesn’t really know yet very well how the human brain works, so we can’t duplicate it.”

Intelligent retina

But an artificial retina that can learn autonomously appears well within reach of the research team from Georgia Tech and Binghamton University. Despite the term “retina,” the development is not intended as a medical implant, but it could be used in advanced image recognition cameras for national defense and police work.

At the same time, it would significantly advance brain-mimicking, or neuromorphic, computing: the research field that takes its cues from what science already knows about how the brain computes in order to develop exponentially more powerful computers.

The retina would be composed of an array of ultra-compact circuits called neuristors (a word combining “neuron” and “transistor”) that sense light, compute an image out of it and store the image. All three of the functions would occur simultaneously and nearly instantaneously.

“The same device senses, computes and stores the image,” Doolittle said. “The device is the sensor, and it’s the processor, and it’s the memory all at the same time.” A neuristor itself is composed in part of devices called memristors, which are inspired by the way human neurons work.

Brain vs. PC

That cuts out loads of processing and memory lag time that are inherent in traditional computing.

Take the device you’re reading this article on: Its microprocessor has to tap a separate memory component to get data, then do some processing, tap memory again for more data, process some more, etc. “That back-and-forth from memory to microprocessor has created a bottleneck,” Doolittle said.

A neuristor array breaks the bottleneck by emulating the extreme flexibility of biological nervous systems: When a brain computes, it uses a broad set of neural pathways that flash with enormous data. Then, later, to compute the same thing again, it will use quite different neural paths.

Traditional computer pathways, by contrast, are hardwired. For example, look at a present-day processor and you’ll see lines etched into it. Those are pathways that computational signals are limited to.

The new memristor materials at the heart of the neuristor are not etched, and signals flow through the surface very freely, more like they do through the brain, exponentially increasing the number of possible pathways computation can take. That helps the new intelligent retina compute powerfully and swiftly.

Terrorists, missing children

The retina’s memory could also store thousands of photos, allowing it to immediately match up what it sees with the saved images. The retina could pinpoint known terror suspects in a crowd, find missing children, or identify enemy aircraft virtually instantaneously, without having to trawl databases to correctly identify what is in the images.
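In conventional software terms, the matching step described above is a nearest-neighbour comparison between an incoming image and a set of stored templates. Here is a minimal sketch; the images, names, and distance measure are all made up for illustration, and the neuristor array would perform this comparison in-memory rather than by scanning a database:

```python
# Nearest-neighbour template matching: report which stored image an input
# most resembles. All data here is illustrative.
def closest_match(image, templates):
    """Return the key of the stored template nearest to `image` (squared L2 distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: dist(image, templates[name]))

stored = {
    "suspect_A": [0.9, 0.1, 0.8],   # toy 3-pixel "images"
    "suspect_B": [0.2, 0.7, 0.3],
}
print(closest_match([0.85, 0.15, 0.75], stored))  # → suspect_A
```

Scanning every template one by one, as this loop does, is exactly the serial work the neuristor design is meant to avoid: the array would compare the input against all stored patterns at once.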

Even if you take away the optics, the new neuristor arrays still advance artificial intelligence. Instead of light, a surface of neuristors could absorb massive data streams at once, compute them, store them, and compare them to patterns of other data, immediately. It could even autonomously learn to extrapolate further information, like calculating the third dimension out of data from two dimensions.

“It will work with anything that has a repetitive pattern like radar signatures, for example,” Doolittle said. “Right now, that’s too challenging to compute, because radar information is flying out at such a high data rate that no computer can even think about keeping up.”

Smart materials

The research project’s title acronym CEREBRAL may hint at distant dreams of an artificial brain, but what it stands for spells out the present goal in neuromorphic computing: Cross-disciplinary Electronic-ionic Research Enabling Biologically Realistic Autonomous Learning.

The intelligent retina’s neuristors are based on novel metal oxide nanotechnology materials, unique to Georgia Tech. They allow computing signals to flow flexibly across pathways that are electronic, which is customary in computing, and at the same time make use of ion motion, which is more commonly known from the way batteries and biological systems work.

The new materials have already been created, and they work, but the researchers don’t yet fully understand why.

Much of the project is dedicated to examining quantum states in the materials and how those states help create useful electronic-ionic properties. Researchers will view them by bombarding the metal oxides with extremely bright x-ray photons at the recently constructed National Synchrotron Light Source II.

Grant sub-awardee Binghamton University is located close by, and Binghamton physicists will run experiments and hone them via theoretical modeling.

‘Sea of lithium’

The neuristors are created mainly by the way the metal oxide materials are grown in the lab, which has advantages over building neuristors in a more wired way.

This materials-growing approach is conducive to mass production. Also, though neuristors in general free signals to take multiple pathways, Georgia Tech’s neuristors do it much more flexibly thanks to chemical properties.

“We also have a sea of lithium, and it’s like an infinite reservoir of computational ionic fluid,” Doolittle said. The lithium niobite imitates the way ionic fluid bathes biological neurons and allows them to flash with electric potential while signaling. In a neuristor array, the lithium niobite helps computational signaling move in myriad directions.

“It’s not like the typical semiconductor material, where you etch a line, and only that line has the computational material,” Doolittle said.

Commander Data’s brain?

“Unlike any other previous neuristors, our neuristors will adapt themselves in their computational-electronic pulsing on the fly, which makes them more like a neurological system,” Doolittle said. “They mimic biology in that we have ion drift across the material to create the memristors (the memory part of neuristors).”

Brains are far superior to computers at many things, but not all: brains recognize objects and perform motor tasks much better, while computers are much better at arithmetic and data processing.

Neuristor arrays can meld both types of computing, making them biological and algorithmic at once, a bit like Commander Data’s brain.

The research is being funded through the U.S. Department of Defense’s Multidisciplinary University Research Initiatives (MURI) Program under grant number FOA: N00014-16-R-FO05. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of those agencies.

Fascinating, non?

Dr. Wei Lu and bio-inspired ‘memristor’ chips

It’s been a while since I’ve featured Dr. Wei Lu’s work here (this April 15, 2010 posting features Lu’s most relevant previous work). Here’s his latest ‘memristor’ work, from a May 22, 2017 news item on Nanowerk (Note: A link has been removed),

Inspired by how mammals see, a new “memristor” computer circuit prototype at the University of Michigan has the potential to process complex data, such as images and video, orders of magnitude faster and with much less power than today’s most advanced systems.

Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, U-M professor of electrical engineering and computer science. Lu is lead author of a paper on the work published in the current issue of Nature Nanotechnology (“Sparse coding with memristor networks”).

Lu’s next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and his colleagues demonstrate an algorithm that relies on a technique called “sparse coding” to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos.

A May 22, 2017 University of Michigan news release (also on EurekAlert), which originated the news item, provides more information about memristors and about the research,

Memristors are electrical resistors with memory—advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. In a conventional computer, logic and memory functions are located at different parts of the circuit.

“The tasks we ask of today’s computers have grown in complexity,” Lu said. “In this ‘big data’ era, computers require costly, constant and slow communications between their processor and memory to retrieve large amounts of data. This makes them large, expensive and power-hungry.”

But like neural networks in a biological brain, networks of memristors can perform many operations at the same time, without having to move data around. As a result, they could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning. Memristors are good candidates for deep neural networks, a branch of machine learning, which trains computers to execute processes without being explicitly programmed to do so.
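The “many operations at the same time, without having to move data around” claim has a concrete circuit interpretation: a crossbar of memristor conductances multiplies a vector of input voltages by a stored weight matrix in a single analogue step, via Ohm’s and Kirchhoff’s laws. A numpy sketch of the equivalent computation (the conductance and voltage values are illustrative):

```python
import numpy as np

# Each crosspoint conductance G[i][j] acts as a stored weight; applying
# voltages to the columns yields summed row currents I = G @ v in one step.
G = np.array([[0.2, 0.5, 0.1],
              [0.4, 0.3, 0.6]])   # conductances (siemens), illustrative values
v = np.array([1.0, 0.5, 0.2])    # input voltages (volts)
I = G @ v                        # row currents (Kirchhoff's current law)
print(I)                         # → [0.47 0.67]
```

Because the weights live exactly where the multiplication happens, this step involves no processor-to-memory traffic at all.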

“We need our next-generation electronics to be able to quickly process complex data in a dynamic environment. You can’t just write a program to do that. Sometimes you don’t even have a pre-defined task,” Lu said. “To make our systems smarter, we need to find ways for them to process a lot of data more efficiently. Our approach to accomplish that is inspired by neuroscience.”

A mammal’s brain is able to generate sweeping, split-second impressions of what the eyes take in. One reason is that it can quickly recognize different arrangements of shapes. Humans do this using only a limited number of neurons that become active, Lu says. Both neuroscientists and computer scientists call the process “sparse coding.”

“When we take a look at a chair we will recognize it because its characteristics correspond to our stored mental picture of a chair,” Lu said. “Although not all chairs are the same and some may differ from a mental prototype that serves as a standard, each chair retains some of the key characteristics necessary for easy recognition. Basically, the object is correctly recognized the moment it is properly classified—when ‘stored’ in the appropriate category in our heads.”

Similarly, Lu’s electronic system is designed to detect the patterns very efficiently—and to use as few features as possible to describe the original input.

In our brains, different neurons recognize different patterns, Lu says.

“When we see an image, the neurons that recognize it will become more active,” he said. “The neurons will also compete with each other to naturally create an efficient representation. We’re implementing this approach in our electronic system.”

The researchers trained their system to learn a “dictionary” of images. Trained on a set of grayscale image patterns, their memristor network was able to reconstruct images of famous paintings, photos, and other test patterns.
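The “dictionary” idea above can be sketched in a few lines of code: represent a signal as a combination of as few dictionary atoms (“active neurons”) as possible. Below is a generic iterative soft-thresholding (ISTA) sparse-coding sketch with a random dictionary; this is not the team’s hardware algorithm, and all sizes and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 64))          # overcomplete dictionary: 64 atoms, 16-dim signals
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms

def sparse_code(x, D, lam=0.1, steps=500):
    """ISTA: find a sparse coefficient vector a with x ≈ D @ a."""
    L = np.linalg.norm(D, 2) ** 2      # step-size bound (Lipschitz constant)
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a -= (D.T @ (D @ a - x)) / L                           # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

true_a = np.zeros(64)
true_a[[3, 17, 42]] = 1.0              # a signal built from just 3 atoms
x = D @ true_a
a = sparse_code(x, D)
print(np.count_nonzero(np.abs(a) > 1e-3))   # only a handful of "neurons" stay active
print(np.linalg.norm(D @ a - x))            # small reconstruction error
```

In the hardware version, the memristor array stores the dictionary, and the competition between “neurons” happens in the device physics rather than in a software loop.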

If their system can be scaled up, they expect to be able to process and analyze video in real time in a compact system that can be directly integrated with sensors or cameras.

The project is titled “Sparse Adaptive Local Learning for Sensing and Analytics.” Other collaborators are Zhengya Zhang and Michael Flynn of the U-M Department of Electrical Engineering and Computer Science, Garrett Kenyon of the Los Alamos National Lab and Christof Teuscher of Portland State University.

The work is part of a $6.9 million Unconventional Processing of Signals for Intelligent Data Exploitation project that aims to build a computer chip based on self-organizing, adaptive neural networks. It is funded by the [US] Defense Advanced Research Projects Agency [DARPA].

Here’s a link to and a citation for the paper,

Sparse coding with memristor networks by Patrick M. Sheridan, Fuxi Cai, Chao Du, Wen Ma, Zhengya Zhang, & Wei D. Lu. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.83 Published online 22 May 2017

This paper is behind a paywall.

For the interested, there are a number of postings featuring memristors here (just use ‘memristor’ as your search term in the blog search engine). You might also want to check out ‘neuromorphic engineering’, ‘neuromorphic computing’, and ‘artificial brain’.

Predicting how a memristor functions

An April 3, 2017 news item on Nanowerk announces a new memristor development (Note: A link has been removed),

Researchers from the CNRS [Centre national de la recherche scientifique; France], Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have created an artificial synapse capable of learning autonomously. They were also able to model the device, which is essential for developing more complex circuits. The research was published in Nature Communications (“Learning through ferroelectric domain dynamics in solid-state synapses”).

An April 3, 2017 CNRS press release, which originated the news item, provides a nice introduction to the memristor concept before providing a few more details about this latest work (Note: A link has been removed),

One of the goals of biomimetics is to take inspiration from the functioning of the brain [also known as neuromorphic engineering or neuromorphic computing] in order to design increasingly intelligent machines. This principle is already at work in information technology, in the form of the algorithms used for completing certain tasks, such as image recognition; this, for instance, is what Facebook uses to identify photos. However, the procedure consumes a lot of energy. Vincent Garcia (Unité mixte de physique CNRS/Thales) and his colleagues have just taken a step forward in this area by creating directly on a chip an artificial synapse that is capable of learning. They have also developed a physical model that explains this learning capacity. This discovery opens the way to creating a network of synapses and hence intelligent systems requiring less time and energy.

Our brain’s learning process is linked to our synapses, which serve as connections between our neurons. The more the synapse is stimulated, the more the connection is reinforced and learning improved. Researchers took inspiration from this mechanism to design an artificial synapse, called a memristor. This electronic nanocomponent consists of a thin ferroelectric layer sandwiched between two electrodes, and whose resistance can be tuned using voltage pulses similar to those in neurons. If the resistance is low the synaptic connection will be strong, and if the resistance is high the connection will be weak. This capacity to adapt its resistance enables the synapse to learn.
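The stimulation-strengthens-the-connection rule maps naturally onto a tiny software model: each voltage pulse nudges a conductance that serves as the synaptic weight. A minimal sketch follows; the parameter values are illustrative assumptions, not measurements from the CNRS/Thales device:

```python
# Minimal pulse-programmed synapse model: conductance plays the role of
# synaptic strength (low resistance = strong connection). Values illustrative.
class MemristiveSynapse:
    def __init__(self, g_min=0.01, g_max=1.0, step=0.05):
        self.g_min, self.g_max, self.step = g_min, g_max, step
        self.g = g_min                 # start weak (high resistance)

    def pulse(self, polarity):
        """Apply one voltage pulse: +1 strengthens, -1 weakens the synapse."""
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))
        return self.g

syn = MemristiveSynapse()
for _ in range(10):                    # repeated stimulation reinforces the connection
    syn.pulse(+1)
print(round(syn.g, 2))                 # → 0.51
```

The clamping at `g_max` mirrors the physical fact that a real device’s resistance can only be tuned within a finite range.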

Although research focusing on these artificial synapses is central to the concerns of many laboratories, the functioning of these devices remained largely unknown. The researchers have succeeded, for the first time, in developing a physical model able to predict how they function. This understanding of the process will make it possible to create more complex systems, such as a series of artificial neurons interconnected by these memristors.

As part of the ULPEC H2020 European project, this discovery will be used for real-time shape recognition using an innovative camera: the pixels remain inactive, except when they see a change in the angle of vision. The data processing procedure will require less energy, and will take less time to detect the selected objects. The research involved teams from the CNRS/Thales physics joint research unit, the Laboratoire de l’intégration du matériau au système (CNRS/Université de Bordeaux/Bordeaux INP), the University of Arkansas (US), the Centre de nanosciences et nanotechnologies (CNRS/Université Paris-Sud), the Université d’Evry, and Thales.

 



© Sören Boyn / CNRS/Thales physics joint research unit.

Artist’s impression of the electronic synapse: the particles represent electrons circulating through oxide, by analogy with neurotransmitters in biological synapses. The flow of electrons depends on the oxide’s ferroelectric domain structure, which is controlled by electric voltage pulses.


Here’s a link to and a citation for the paper,

Learning through ferroelectric domain dynamics in solid-state synapses by Sören Boyn, Julie Grollier, Gwendal Lecerf, Bin Xu, Nicolas Locatelli, Stéphane Fusil, Stéphanie Girod, Cécile Carrétéro, Karin Garcia, Stéphane Xavier, Jean Tomas, Laurent Bellaiche, Manuel Bibes, Agnès Barthélémy, Sylvain Saïghi, & Vincent Garcia. Nature Communications 8, Article number: 14736 (2017) doi:10.1038/ncomms14736 Published online: 03 April 2017

This paper is open access.

Thales or Thales Group is a French company, from its Wikipedia entry (Note: Links have been removed),

Thales Group (French: [talɛs]) is a French multinational company that designs and builds electrical systems and provides services for the aerospace, defence, transportation and security markets. Its headquarters are in La Défense[2] (the business district of Paris), and its stock is listed on the Euronext Paris.

The company changed its name to Thales (from the Greek philosopher Thales,[3] pronounced [talɛs] reflecting its pronunciation in French) from Thomson-CSF in December 2000 shortly after the £1.3 billion acquisition of Racal Electronics plc, a UK defence electronics group. It is partially state-owned by the French government,[4] and has operations in more than 56 countries. It has 64,000 employees and generated €14.9 billion in revenues in 2016. The Group is ranked as the 475th largest company in the world by Fortune 500 Global.[5] It is also the 10th largest defence contractor in the world[6] and 55% of its total sales are military sales.[4]

The ULPEC (Ultra-Low Power Event-Based Camera) H2020 (Horizon 2020-funded) European project can be found here,

The long term goal of ULPEC is to develop advanced vision applications with ultra-low power requirements and ultra-low latency. The output of the ULPEC project is a demonstrator connecting a neuromorphic event-based camera to a high speed ultra-low power consumption asynchronous visual data processing system (Spiking Neural Network with memristive synapses). Although ULPEC device aims to reach TRL 4, it is a highly application-oriented project: prospective use cases will b…

Finally, for anyone curious about Thales, the philosopher (from his Wikipedia entry), Note: Links have been removed,

Thales of Miletus (/ˈθeɪliːz/; Greek: Θαλῆς (ὁ Μῑλήσιος), Thalēs; c. 624 – c. 546 BC) was a pre-Socratic Greek/Phoenician philosopher, mathematician and astronomer from Miletus in Asia Minor (present-day Milet in Turkey). He was one of the Seven Sages of Greece. Many, most notably Aristotle, regard him as the first philosopher in the Greek tradition,[1][2] and he is otherwise historically recognized as the first individual in Western civilization known to have entertained and engaged in scientific philosophy.[3][4]

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse but that doesn’t become clear until reading the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert), Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.

Testing a network of artificial synapses

Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwriting of digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
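To see what “500 distinct states” buys in practice, here is a hypothetical sketch of programming a continuous weight onto the nearest of 500 evenly spaced conductance levels. The normalized range and uniform spacing are assumptions for illustration; the real device’s states need not be uniform:

```python
N_STATES = 500                 # distinct programmable conductance states
G_MIN, G_MAX = 0.0, 1.0        # assumed normalized conductance range

def to_state(w):
    """Snap a target weight onto the nearest programmable level."""
    step = (G_MAX - G_MIN) / (N_STATES - 1)
    level = round((w - G_MIN) / step)
    return G_MIN + level * step

w = 0.5004
print(abs(to_state(w) - w))    # quantization error is at most half a step (~0.001)
```

With 500 levels instead of a transistor’s two, each device can hold a weight precise to roughly a fifth of a percent of its range, which is why so many states are useful for neuron-type computation models.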

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event [1,2]. Inspired by the efficiency of the brain, CMOS-based neural architectures [3] and memristors [4,5] are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates, enabling the integration of neuromorphic functionality in stretchable electronic systems [6,7]. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Functional hybrid system that can connect human tissue with electronic devices

I’ve tagged this particular field of interest ‘machine/flesh’ because I find it more descriptive than ‘bio-hybrid system’ which was the term used in a Nov. 15, 2016 news item on phys.org,

One of the biggest challenges in cognitive or rehabilitation neurosciences is the ability to design a functional hybrid system that can connect and exchange information between biological systems, like neurons in the brain, and human-made electronic devices. A large multidisciplinary effort of researchers in Italy brought together physicists, chemists, biochemists, engineers, molecular biologists and physiologists to analyze the biocompatibility of the substrate used to connect these biological and human-made components, and investigate the functionality of the adhering cells, creating a living biohybrid system.

A Nov. 15, 2016 American Institute of Physics news release on EurekAlert, which originated the news item, details the investigation,

In an article appearing this week in AIP Advances, from AIP Publishing, the research team used the interaction between light and matter to investigate the material properties at the molecular level using Raman spectroscopy, a technique that, until now, has been principally applied to material science. Thanks to the coupling of the Raman spectrometer with a microscope, spectroscopy becomes a useful tool for investigating micro-objects such as cells and tissues. Raman spectroscopy presents clear advantages for this type of investigation: The molecular composition and the modification of subcellular compartments can be obtained in label-free conditions with non-invasive methods and under physiological conditions, allowing the investigation of a large variety of biological processes both in vitro and in vivo.

Once the biocompatibility of the substrate was analyzed and the functionality of the adhering cells investigated, the next part of this puzzle is connecting with the electronic component. In this case a memristor was used.

“Its name reveals its peculiarity (MEMory ResISTOR): it has a sort of “memory”: depending on the amount of voltage that has been applied to it in the past, it is able to vary its resistance, because of a change of its microscopic physical properties,” said Silvia Caponi, a physicist at the Italian National Research Council in Rome. By combining memristors, it is possible to create pathways within the electrical circuits that work similarly to natural synapses, which develop variable weights in their connections to reproduce the adaptive/learning mechanism. Layers of organic polymers, like polyaniline (PANI), a semiconductor polymer, also have memristive properties, allowing them to work directly with biological materials in a hybrid bio-electronic system.
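The “resistance depends on the voltage applied in the past” behaviour is often illustrated with the linear ion-drift model published by HP Labs in 2008. The sketch below uses that generic textbook model with illustrative parameter values; it is not a model of the PANI devices discussed here:

```python
# Linear ion-drift memristor model: charge driven through the device moves a
# doped/undoped boundary (state w), which sets the resistance. Values illustrative.
R_ON, R_OFF = 100.0, 16e3      # resistance when fully doped / undoped (ohms)
MU, THICK = 1e-14, 10e-9       # ion mobility (m^2 V^-1 s^-1), film thickness (m)

def simulate(voltages, dt=1e-3, w0=0.1):
    """Return the resistance history under a sequence of applied voltages."""
    w = w0 * THICK
    history = []
    for v in voltages:
        r = R_ON * (w / THICK) + R_OFF * (1 - w / THICK)
        i = v / r
        w = min(max(w + MU * (R_ON / THICK) * i * dt, 0.0), THICK)
        history.append(r)
    return history

rs = simulate([1.0] * 1000)    # a steady positive bias drives resistance down:
print(rs[0] > rs[-1])          # → True: the device "remembers" past voltage
```

Reversing the bias would drive the boundary back and raise the resistance again, which is the tunable-weight behaviour Caponi describes.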

“We applied the analysis on a hybrid bio-inspired device but in a prospective view, this work provides the proof of concept of an integrated study able to analyse the status of living cells in a large variety of applications that merges nanosciences, neurosciences and bioelectronics,” said Caponi. A natural long-term objective of this work would be interfacing machines and nervous systems as seamlessly as possible.

The multidisciplinary team is ready to build on this proof of principle to realize the potential of memristor networks.

“Once assured the biocompatibility of the materials on which neurons grow,” said Caponi, “we want to define the materials and their functionalization procedures to find the best configuration for the neuron-memristor interface to deliver a full working hybrid bio-memristive system.”

Caption: Immunofluorescence analysis of SH-SY5Y cells treated for 5 days with 10 µM retinoic acid and with 50 ng/ml BDNF for the next 3 days. The DAPI fluorescence stain is blue and beta-tubulin is green. Credit: Caponi, et al.

Here’s a link to and a citation for the paper,

A multidisciplinary approach to study the functional properties of neuron-like cell models constituting a living bio-hybrid system: SH-SY5Y cells adhering to PANI substrate by S. Caponi, S. Mattana, M. Ricci, K. Sagini, L. J. Juarez-Hernandez, A. M. Jimenez-Garduño, N. Cornella, L. Pasquardini, L. Urbanelli, P. Sassi, A. Morresi, C. Emiliani, D. Fioretto, M. Dalla Serra, C. Pederzolli, S. Iannotta, P. Macchi, and C. Musio. AIP Advances 6, 111303 (2016); http://dx.doi.org/10.1063/1.4966587

This paper appears to be open access.

The memristor as the ‘missing link’ in bioelectronic medicine?

The last time I featured memristors and a neuronal network it was in an April 22, 2016 posting about Russian research in that field. This latest work comes from the UK’s University of Southampton. From a Sept. 27, 2016 news item on phys.org,

New research, led by the University of Southampton, has demonstrated that a nanoscale device, called a memristor, could be the ‘missing link’ in the development of implants that use electrical signals from the brain to help treat medical conditions.

Monitoring neuronal cell activity is fundamental to neuroscience and the development of neuroprosthetics – biomedically engineered devices that are driven by neural activity. However, a persistent problem is the device being able to process the neural data in real-time, which imposes restrictive requirements on bandwidth, energy and computation capacity.

In a new study, published in Nature Communications, the researchers showed that memristors could provide real-time processing of neuronal signals (spiking events) leading to efficient data compression and the potential to develop more precise and affordable neuroprosthetics and bioelectronic medicines.

A Sept. 27, 2016 University of Southampton press release, which originated the news item, expands on the theme,

Memristors are electrical components that limit or regulate the flow of electrical current in a circuit, remember the amount of charge that has previously flowed through them, and retain that data even when the power is turned off.

Lead author Isha Gupta, Postgraduate Research Student at the University of Southampton, said: “Our work can significantly contribute towards further enhancing the understanding of neuroscience, developing neuroprosthetics and bio-electronic medicines by building tools essential for interpreting the big data in a more effective way.”

The research team developed a nanoscale Memristive Integrating Sensor (MIS) into which they fed a series of voltage-time samples, which replicated neuronal electrical activity.

Acting like synapses in the brain, the metal-oxide MIS was able to encode and compress (up to 200 times) neuronal spiking activity recorded by multi-electrode arrays. Besides addressing the bandwidth constraints, this approach was also very power efficient – the power needed per recording channel was up to 100 times less when compared to current best practice.
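The idea behind this kind of compression can be illustrated with a toy sketch: rather than storing every raw voltage sample from a recording channel, an event-driven encoder keeps only the times at which the signal crosses a spiking threshold. The numbers below (sampling rate, spike rate, threshold) are invented for illustration and do not describe the paper’s metal-oxide MIS device.

```python
# Toy illustration of event-driven compression of a neural recording:
# store spike-event timestamps instead of every raw sample.
# All rates and thresholds are illustrative, not from the paper.

import random

random.seed(0)
FS = 30_000            # sample rate (Hz), typical for extracellular recording
DURATION_S = 10        # recording length (s)
SPIKES = 50            # number of spike events to synthesize
N = DURATION_S * FS    # total raw samples

# Synthesize a noisy baseline trace with occasional large spikes.
trace = [random.gauss(0.0, 0.05) for _ in range(N)]
for t in random.sample(range(N), SPIKES):
    trace[t] += 1.0

# Event encoding: keep only the indices where the signal crosses threshold.
THRESHOLD = 0.5
events = [t for t, v in enumerate(trace) if v > THRESHOLD]

# Raw storage needs N samples; event storage needs one timestamp per spike.
ratio = N / max(len(events), 1)
print(f"{N} samples -> {len(events)} events (~{ratio:.0f}x compression)")
```

Because spikes are sparse relative to the sampling rate, the event stream is orders of magnitude smaller than the raw trace, which is why large compression ratios are achievable when only spiking activity needs to be preserved.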

Co-author Dr Themis Prodromakis, Reader in Nanoelectronics and EPSRC Fellow in Electronics and Computer Science at the University of Southampton said: “We are thrilled that we succeeded in demonstrating that these emerging nanoscale devices, despite being rather simple in architecture, possess ultra-rich dynamics that can be harnessed beyond the obvious memory applications to address the fundamental constraints in bandwidth and power that currently prohibit scaling neural interfaces beyond 1,000 recording channels.”

The Prodromakis Group at the University of Southampton is acknowledged as world-leading in this field, collaborating among others with Leon Chua (a Diamond Jubilee Visiting Academic at the University of Southampton), who theoretically predicted the existence of memristors in 1971.

Here’s a link to and a citation for the paper,

Real-time encoding and compression of neuronal spikes by metal-oxide memristors by Isha Gupta, Alexantrou Serb, Ali Khiat, Ralf Zeitler, Stefano Vassanelli, & Themistoklis Prodromakis. Nature Communications 7, Article number: 12805 (2016) doi:10.1038/ncomms12805 Published 26 September 2016

This is an open access paper.

For anyone who’s interested in better understanding memristors, there’s an interview with Forrest H Bennett III in my April 7, 2010 posting and you can always check Wikipedia.