Tag Archives: human brain

A bioengineered robot hand with its own nervous system: machine/flesh and a job opening

A November 14, 2017 news item on phys.org announces a grant for a research project that will combine engineered robot hands with regenerative medicine to imbue neuroprosthetic hands with a sense of touch,

The sense of touch is often taken for granted. For someone without a limb or hand, losing that sense of touch can be devastating. While highly sophisticated prostheses with complex moving fingers and joints are available to mimic almost every hand motion, they remain frustratingly difficult and unnatural for the user. This is largely because they lack the tactile experience that guides every movement. This void in sensation results in limited use or abandonment of these very expensive artificial devices. So why not make a prosthesis that can actually “feel” its environment?

That is exactly what an interdisciplinary team of scientists from Florida Atlantic University and the University of Utah School of Medicine aims to do. They are developing a first-of-its-kind bioengineered robotic hand that will grow and adapt to its environment. This “living” robot will have its own peripheral nervous system directly linking robotic sensors and actuators. FAU’s College of Engineering and Computer Science is leading the multidisciplinary team that has received a four-year, $1.3 million grant from the National Institute of Biomedical Imaging and Bioengineering of the [US] National Institutes of Health for a project titled “Virtual Neuroprosthesis: Restoring Autonomy to People Suffering from Neurotrauma.”

A November 14, 2017 Florida Atlantic University (FAU) news release by Gisele Galoustian, which originated the news item, goes into more detail,

With expertise in robotics, bioengineering, behavioral science, nerve regeneration, electrophysiology, microfluidic devices, and orthopedic surgery, the research team is creating a living pathway from the robot’s touch sensation to the user’s brain to help amputees control the robotic hand. A neuroprosthesis platform will enable them to explore how neurons and behavior can work together to regenerate the sensation of touch in an artificial limb.

At the core of this project is a cutting-edge robotic hand and arm developed in the BioRobotics Laboratory in FAU’s College of Engineering and Computer Science. Just like human fingertips, the robotic hand is equipped with numerous sensory receptors that respond to changes in the environment. Controlled by a human, it can sense pressure changes, interpret the information it is receiving and interact with various objects. It adjusts its grip based on an object’s weight or fragility. But the real challenge is figuring out how to send that information back to the brain using living residual neural pathways to replace those that have been damaged or destroyed by trauma.

“When the peripheral nerve is cut or damaged, it uses the rich electrical activity that tactile receptors create to restore itself. We want to examine how the fingertip sensors can help damaged or severed nerves regenerate,” said Erik Engeberg, Ph.D., principal investigator, an associate professor in FAU’s Department of Ocean and Mechanical Engineering, and director of FAU’s BioRobotics Laboratory. “To accomplish this, we are going to directly connect these living nerves in vitro and then electrically stimulate them on a daily basis with sensors from the robotic hand to see how the nerves grow and regenerate while the hand is operated by limb-absent people.”

For the study, the neurons will not be kept in conventional petri dishes. Instead, they will be placed in biocompatible microfluidic chambers that provide a nurturing environment mimicking the basic function of living cells. Sarah E. Du, Ph.D., co-principal investigator, an assistant professor in FAU’s Department of Ocean and Mechanical Engineering, and an expert in the emerging field of microfluidics, has developed these tiny customized artificial chambers with embedded micro-electrodes. The research team will be able to stimulate the neurons with electrical impulses from the robot’s hand to help regrowth after injury. They will morphologically and electrically measure in real-time how much neural tissue has been restored.

Jianning Wei, Ph.D., co-principal investigator, an associate professor of biomedical science in FAU’s Charles E. Schmidt College of Medicine, and an expert in neural damage and regeneration, will prepare the neurons in vitro, observe them grow and see how they fare and regenerate in the aftermath of injury. This “virtual” method will give the research team multiple opportunities to test and retest the nerves without any harm to subjects.

Using an electroencephalogram (EEG) to detect electrical activity in the brain, Emmanuelle Tognoli, Ph.D., co-principal investigator, associate research professor in FAU’s Center for Complex Systems and Brain Sciences in the Charles E. Schmidt College of Science, and an expert in electrophysiology and neural, behavioral, and cognitive sciences, will examine how the tactile information from the robotic sensors is passed onto the brain to distinguish scenarios with successful or unsuccessful functional restoration of the sense of touch. Her objective: to understand how behavior helps nerve regeneration and how this nerve regeneration helps the behavior.

Once the nerve impulses from the robot’s tactile sensors have gone through the microfluidic chamber, they are sent back to the human user manipulating the robotic hand. This is done with a special device that converts the signals coming from the microfluidic chambers into a controllable pressure at a cuff placed on the remaining portion of the amputated person’s arm. Users will know if they are squeezing the object too hard or if they are losing their grip.

Engeberg also is working with Douglas T. Hutchinson, M.D., co-principal investigator and a professor in the Department of Orthopedics at the University of Utah School of Medicine, who specializes in hand and orthopedic surgery. They are developing a set of tasks and behavioral neural indicators of performance that will ultimately reveal how to promote a healthy sensation of touch in amputees and limb-absent people using robotic devices. The research team also is seeking a post-doctoral researcher with multi-disciplinary experience to work on this breakthrough project.

Here’s more about the job opportunity from the FAU BioRobotics Laboratory job posting. (I checked on January 30, 2018 and it seems applications are still being accepted.)

Post-doctoral Opportunity

Date Posted: Oct. 13, 2017

The BioRobotics Lab at Florida Atlantic University (FAU) invites applications for an NIH NIBIB-funded postdoctoral position to develop a Virtual Neuroprosthesis aimed at providing a sense of touch in amputees and limb-absent people.

Candidates should have a Ph.D. in one of the following fields: mechanical engineering, electrical engineering, biomedical engineering, bioengineering or related, with interest and/or experience in transdisciplinary work at the intersection of robotic hands, biology, and biomedical systems. Prior experience in the neural field will be considered an advantage, though not a necessity. Underrepresented minorities and women are warmly encouraged to apply.

The postdoctoral researcher will be co-advised across the department of Mechanical Engineering and the Center for Complex Systems & Brain Sciences through an interdisciplinary team whose expertise spans Robotics, Microfluidics, Behavioral and Clinical Neuroscience and Orthopedic Surgery.

The position will be for one year with a possibility of extension based on performance. Salary will be commensurate with experience and qualifications. Review of applications will begin immediately and continue until the position is filled.

The application should include:

  1. a cover letter with research interests and experiences,
  2. a CV, and
  3. names and contact information for three professional references.

Qualified candidates can contact Erik Engeberg, Ph.D., Associate Professor, in the FAU Department of Ocean and Mechanical Engineering at eengeberg@fau.edu. Please reference AcademicKeys.com in your cover letter when applying for or inquiring about this job announcement.

You can find the apply button on this page. Good luck!

Leftover 2017 memristor news bits

I have two bits of news, one from October 2017 about using light to control a memristor’s learning properties and one from December 2017 about memristors and neural networks.

Shining a light on the memristor

Michael Berger wrote an October 30, 2017 Nanowerk Spotlight article about some of the latest work concerning memristors and light,

Memristors – or resistive memory – are nanoelectronic devices that are very promising components for next generation memory and computing devices. They are two-terminal electric elements similar to a conventional resistor; however, the electric resistance in a memristor depends on the charge that has passed through it, which means that its conductance can be precisely modulated by charge or flux. Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function).
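For anyone who’d like to see that charge-dependent resistance in action, here’s a minimal simulation (an aside from me, not from Berger’s article) based on the linear dopant-drift model in HP’s original 2008 memristor paper; the parameter values are illustrative guesses,

```python
import numpy as np

# Toy linear-drift memristor (after Strukov et al., 2008): the resistance
# depends on a state variable w that drifts with the current flowing through.
R_ON, R_OFF = 100.0, 16e3   # resistance bounds in ohms (illustrative)
MU_D = 1e-14                # dopant mobility, m^2/(s*V) (illustrative)
D = 10e-9                   # device thickness in metres

def simulate(voltages, dt=1e-6, w0=0.1):
    """Return the current trace for an applied voltage waveform."""
    w = w0                  # normalized doped-region width in [0, 1]
    currents = []
    for v in voltages:
        r = R_ON * w + R_OFF * (1.0 - w)   # resistance set by the state
        i = v / r
        w += MU_D * R_ON / D**2 * i * dt   # state drifts with charge flow
        w = min(max(w, 0.0), 1.0)          # saturate at the boundaries
        currents.append(i)
    return np.array(currents)

# Driving it with a sine wave produces the memristor's signature
# pinched current-voltage hysteresis loop.
t = np.linspace(0, 2e-3, 2000)
current = simulate(np.sin(2 * np.pi * 1e3 * t), dt=t[1] - t[0])
```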

In this sense, a memristor is similar to a synapse in the human brain because it exhibits the same switching characteristics, i.e. it is able, with a high level of plasticity, to modify the efficiency of signal transfer between neurons under the influence of the transfer itself. That’s why researchers hope to use memristors to fabricate electronic synapses for neuromorphic (i.e. brain-like) computing that mimics some aspects of learning and computation in human brains.

Human brains may be slow at pure number crunching but they are excellent at handling fast dynamic sensory information such as image and voice recognition. Walking is something that we take for granted but this is quite challenging for robots, especially over uneven terrain.

“Memristors present an opportunity to make new types of computers that are different from existing von Neumann architectures, which traditional computers are based upon,” Dr Neil T. Kemp, a Lecturer in Physics at the University of Hull [UK], tells Nanowerk. “Our team at the University of Hull is focussed on making memristor devices dynamically reconfigurable and adaptive – we believe this is the route to making a new generation of artificial intelligence systems that are smarter and can exhibit complex behavior. Such systems would also have the advantage of memristors, high density integration and lower power usage, so these systems would be more lightweight, portable and not need re-charging so often – which is something really needed for robots etc.”

In their new paper in Nanoscale (“Reversible Optical Switching Memristors with Tunable STDP Synaptic Plasticity: A Route to Hierarchical Control in Artificial Intelligent Systems”), Kemp and his team demonstrate the ability to reversibly control the learning properties of memristors via optical means.

The reversibility is achieved by changing the polarization of light. The researchers have used this effect to demonstrate tuneable learning in a memristor. One way this is achieved is through something called Spike Timing Dependent Plasticity (STDP), which is an effect known to occur in human brains and is linked with sensory perception, spatial reasoning, language and conscious thought in the neocortex.

STDP learning is based upon differences in the arrival time of signals from two adjacent neurons. The University of Hull team has shown that they can modulate the synaptic plasticity via optical means which enables the devices to have tuneable learning.
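For the curious, the pair-based STDP rule can be written in a few lines. This sketch is mine, not the Hull team’s: the synapse strengthens when the presynaptic spike arrives just before the postsynaptic one, weakens when the order is reversed, and the effect falls off exponentially with the timing difference,

```python
import numpy as np

def stdp_update(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for one pre/post spike pair.

    dt_ms = t_post - t_pre: positive means the presynaptic spike came
    first (causal pairing, potentiation); negative means depression.
    """
    if dt_ms >= 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)

# Pre-before-post strengthens the synapse; post-before-pre weakens it,
# and the effect shrinks as the spikes move further apart in time.
for dt in (5.0, -5.0, 40.0):
    print(f"dt = {dt:+.0f} ms -> dw = {stdp_update(dt):+.5f}")
```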

“Our research findings are important because they demonstrate that light can be used to control the learning properties of a memristor,” Kemp points out. “We have shown that light can be used in a reversible manner to change the connection strength (or conductivity) of artificial memristor synapses and also to control their ability to forget, i.e. we can dynamically change the device to have short-term or long-term memory.”

According to the team, there are many potential applications, such as adaptive electronic circuits controllable via light or, in more complex systems such as neuromorphic computing, the development of optically reconfigurable neural networks.

Having optically controllable memristors can also facilitate the implementation of hierarchical control in larger artificial-brain like systems, whereby some of the key processes that are carried out by biological molecules in human brains can be emulated in solid-state devices through patterning with light.

Some of these processes include synaptic pruning, conversion of short term memory to long term memory, erasing of certain memories that are no longer needed or changing the sensitivity of synapses to be more adept at learning new information.

“The ability to control this dynamically, both spatially and temporally, is particularly interesting since it would allow neural networks to be reconfigurable on the fly through either spatial patterning or by adjusting the intensity of the light source,” notes Kemp.

Currently, the devices are more suited to neuromorphic computing applications, which do not need to be as fast. Optical control of memristors opens the route to dynamically tuneable and reprogrammable synaptic circuits, as well as the ability (via optical patterning) to implement hierarchical control in larger and more complex artificial intelligence systems.

“Artificial Intelligence is really starting to come on strong in many areas, especially in the areas of voice/image recognition and autonomous systems – we could even say that this is the next revolution, similarly to what the industrial revolution was to farming and production processes,” concludes Kemp. “There are many challenges to overcome though. …

That excerpt should give you the gist of Berger’s article; for those who need more information, there’s the full article and, below, a link to and a citation for the paper,

Reversible optical switching memristors with tunable STDP synaptic plasticity: a route to hierarchical control in artificial intelligent systems by Ayoub H. Jaafar, Robert J. Gray, Emanuele Verrelli, Mary O’Neill, Stephen M. Kelly, and Neil T. Kemp. Nanoscale, 2017, 9, 17091–17098. DOI: 10.1039/C7NR06138B. First published on 24 Oct 2017

This paper is behind a paywall.

The memristor and the neural network

It would seem machine learning could experience a significant upgrade if the work in Wei Lu’s University of Michigan laboratory can be scaled for general use. From a December 22, 2017 news item on ScienceDaily,

A new type of neural network made with memristors can dramatically improve the efficiency of teaching machines to think like humans.

The network, called a reservoir computing system, could predict words before they are said during conversation, and help predict future outcomes based on the present.

The research team that created the reservoir computing system, led by Wei Lu, professor of electrical engineering and computer science at the University of Michigan, recently published their work in Nature Communications.

A December 19, 2017 University of Michigan news release (also on EurekAlert) by Dan Newman, which originated the news item, expands on the theme,

Reservoir computing systems, which improve on a typical neural network’s capacity and reduce the required training time, have been created in the past with larger optical components. However, the U-M group created their system using memristors, which require less space and can be integrated more easily into existing silicon-based electronics.

Memristors are a special type of resistive device that can both perform logic and store data. This contrasts with typical computer systems, where processors perform logic separate from memory modules. In this study, Lu’s team used a special memristor that memorizes events only in the near history.

Inspired by brains, neural networks are composed of neurons, or nodes, and synapses, the connections between nodes.

To train a neural network for a task, it takes in a large set of questions and the answers to those questions. In this process, called supervised learning, the connections between nodes are weighted more heavily or lightly to minimize the amount of error in achieving the correct answer.

Once trained, a neural network can then be tested without knowing the answer. For example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.
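That weighting-and-reweighting loop is supervised learning in miniature. Here’s a toy version (mine, purely for illustration) that fits a single linear ‘neuron’ by repeatedly nudging its weights to shrink the error,

```python
import numpy as np

rng = np.random.default_rng(1)

# "Questions" (inputs) and "answers" (noisy targets from known weights).
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

# Gradient descent: weight the connections more heavily or lightly
# to minimize the squared error in reaching the correct answer.
w = np.zeros(3)
for _ in range(500):
    error = X @ w - y
    w -= 0.01 * X.T @ error / len(y)
print("learned weights:", w.round(2))   # should land near [2.0, -1.0, 0.5]
```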

“A lot of times, it takes days or months to train a network,” says Lu. “It is very expensive.”

Image recognition is also a relatively simple problem, as it doesn’t require any information apart from a static image. More complex tasks, such as speech recognition, can depend highly on context and require neural networks to have knowledge of what has just occurred, or what has just been said.

“When transcribing speech to text or translating languages, a word’s meaning and even pronunciation will differ depending on the previous syllables,” says Lu.

This requires a recurrent neural network, which incorporates loops within the network that give the network a memory effect. However, training these recurrent neural networks is especially expensive, Lu says.

Reservoir computing systems built with memristors, however, can skip most of the expensive training process and still provide the network the capability to remember. This is because the most critical component of the system – the reservoir – does not require training.

When a set of data is inputted into the reservoir, the reservoir identifies important time-related features of the data, and hands it off in a simpler format to a second network. This second network then only needs training like simpler neural networks, changing weights of the features and outputs that the first network passed on until it achieves an acceptable level of error.
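In software, that division of labour looks roughly like the following echo-state-network sketch (my stand-in for the memristor reservoir; the sizes and scalings are assumptions). The reservoir’s input and recurrent weights are random and never trained; only the linear readout is fitted,

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 100   # number of reservoir nodes (toy size)

# Fixed, untrained reservoir: random input and recurrent weights,
# scaled so the recurrent dynamics fade rather than blow up.
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Collect the reservoir's state at each time step."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in[:, 0] * u + W @ x)   # time-dependent features
        states.append(x.copy())
    return np.array(states)

# Only the readout is trained (here with ridge regression) to map
# reservoir states to the next value of the input signal.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```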

IMAGE: Schematic of a reservoir computing system, showing the reservoir with internal dynamics and the simpler output. Only the simpler output needs to be trained, allowing for quicker and lower-cost training. Courtesy Wei Lu.

“The beauty of reservoir computing is that while we design it, we don’t have to train it,” says Lu.

The team proved the reservoir computing concept using a test of handwriting recognition, a common benchmark among neural networks. Numerals were broken up into rows of pixels, and fed into the computer with voltages like Morse code, with zero volts for a dark pixel and a little over one volt for a white pixel.

Using only 88 memristors as nodes to identify handwritten versions of numerals, compared to a conventional network that would require thousands of nodes for the task, the reservoir achieved 91% accuracy.

Reservoir computing systems are especially adept at handling data that varies with time, like a stream of data or words, or a function depending on past results.

To demonstrate this, the team tested a complex function that depended on multiple past results, which is common in engineering fields. The reservoir computing system was able to model the complex function with minimal error.

Lu plans on exploring two future paths with this research: speech recognition and predictive analysis.

“We can make predictions on natural spoken language, so you don’t even have to say the full word,” explains Lu.

“We could actually predict what you plan to say next.”

In predictive analysis, Lu hopes to use the system to take in signals with noise, like static from far-off radio stations, and produce a cleaner stream of data. “It could also predict and generate an output signal even if the input stopped,” he says.

IMAGE: Wei Lu, Professor of Electrical Engineering & Computer Science at the University of Michigan, holds a memristor he created. Photo: Marcin Szczepanski.

The work was published in Nature Communications in the article, “Reservoir computing using dynamic memristors for temporal information processing”, with authors Chao Du, Fuxi Cai, Mohammed Zidan, Wen Ma, Seung Hwan Lee, and Prof. Wei Lu.

The research is part of a $6.9 million DARPA [US Defense Advanced Research Projects Agency] project, called “Sparse Adaptive Local Learning for Sensing and Analytics [also known as SALLSA],” that aims to build a computer chip based on self-organizing, adaptive neural networks. The memristor networks are fabricated at Michigan’s Lurie Nanofabrication Facility.

Lu and his team previously used memristors in implementing “sparse coding,” which used a 32-by-32 array of memristors to efficiently analyze and recreate images.

Here’s a link to and a citation for the paper,

Reservoir computing using dynamic memristors for temporal information processing by Chao Du, Fuxi Cai, Mohammed A. Zidan, Wen Ma, Seung Hwan Lee & Wei D. Lu. Nature Communications 8, Article number: 2204 (2017) doi:10.1038/s41467-017-02337-y Published online: 19 December 2017

This is an open access paper.

Neuristors and brainlike computing

As you might suspect, a neuristor is based on a memristor. (For a description of a memristor there’s this Wikipedia entry, and you can search this blog with the tags ‘memristor’ and ‘neuromorphic engineering’ for more.)

Being new to neuristors, I needed a little more information before reading the latest news and found this Dec. 24, 2012 article by John Timmer for Ars Technica (Note: Links have been removed),

Computing hardware is composed of a series of binary switches; they’re either on or off. The other piece of computational hardware we’re familiar with, the brain, doesn’t work anything like that. Rather than being on or off, individual neurons exhibit brief spikes of activity, and encode information in the pattern and timing of these spikes. The differences between the two have made it difficult to model neurons using computer hardware. In fact, the recent, successful generation of a flexible neural system required that each neuron be modeled separately in software in order to get the sort of spiking behavior real neurons display.

But researchers may have figured out a way to create a chip that spikes. The people at HP labs who have been working on memristors have figured out a combination of memristors and capacitors that can create a spiking output pattern. Although these spikes appear to be more regular than the ones produced by actual neurons, it might be possible to create versions that are a bit more variable than this one. And, more significantly, it should be possible to fabricate them in large numbers, possibly right on a silicon chip.

The key to making the devices is something called a Mott insulator. These are materials that would normally be able to conduct electricity, but are unable to because of interactions among their electrons. Critically, these interactions weaken with elevated temperatures. So, by heating a Mott insulator, it’s possible to turn it into a conductor. In the case of the material used here, NbO2, the heat is supplied by resistance itself. By applying a voltage to the NbO2 in the device, it becomes a resistor, heats up, and, when it reaches a critical temperature, turns into a conductor, allowing current to flow through. But, given the chance to cool off, the device will return to its resistive state. Formally, this behavior is described as a memristor.

To get the sort of spiking behavior seen in a neuron, the authors turned to a simplified model of neurons based on the proteins that allow them to transmit electrical signals. When a neuron fires, sodium channels open, allowing ions to rush into a nerve cell, and changing the relative charges inside and outside its membrane. In response to these changes, potassium channels then open, allowing different ions out, and restoring the charge balance. That shuts the whole thing down, and allows various pumps to start restoring the initial ion balance.
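Timmer’s channel description maps neatly onto a two-variable relaxation model. As a sketch of the behaviour (mine; the actual neuristor pairs two Mott memristors with capacitors), here are the FitzHugh-Nagumo equations, a classic simplification of neuron dynamics in which a fast variable plays the sodium-channel role and a slow recovery variable the potassium-channel role,

```python
import numpy as np

def fitzhugh_nagumo(i_ext, t_max=200.0, dt=0.01):
    """Euler integration of the FitzHugh-Nagumo spiking model.

    v: fast membrane-like variable (the sodium-channel role)
    w: slow recovery variable (the potassium-channel role)
    """
    a, b, tau = 0.7, 0.8, 12.5
    v, w = -1.2, -0.6
    trace = []
    for _ in range(int(t_max / dt)):
        dv = v - v**3 / 3 - w + i_ext
        dw = (v + a - b * w) / tau
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return np.array(trace)

# Above a threshold drive the model emits regular spikes, qualitatively
# like the (rather regular) output of the Mott-memristor neuristor.
v = fitzhugh_nagumo(i_ext=0.5)
print("spikes:", int(((v[1:] > 1.0) & (v[:-1] <= 1.0)).sum()))
```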

Here’s a link to and a citation for the research paper described in Timmer’s article,

A scalable neuristor built with Mott memristors by Matthew D. Pickett, Gilberto Medeiros-Ribeiro, & R. Stanley Williams. Nature Materials 12, 114–117 (2013) doi:10.1038/nmat3510 Published online 16 December 2012

This paper is behind a paywall.

A July 28, 2017 news item on Nanowerk provides an update on neuristors,

A future android brain like that of Star Trek’s Commander Data might contain neuristors, multi-circuit components that emulate the firings of human neurons.

Neuristors already exist today in labs, in small quantities, and to fuel the quest to boost neuristors’ power and numbers for practical use in brain-like computing, the U.S. Department of Defense has awarded a $7.1 million grant to a research team led by the Georgia Institute of Technology. The researchers will mainly work on new metal oxide materials that buzz electronically at the nanoscale to emulate the way human neural networks buzz with electric potential on a cellular level.

A July 28, 2017 Georgia Tech news release, which originated the news item, delves further into neuristors and the proposed work leading to an artificial retina that can learn (!). This was not where I was expecting things to go,

But let’s walk expectations back from the distant sci-fi future into the scientific present: The research team is developing its neuristor materials to build an intelligent light sensor, and not some artificial version of the human brain, which would require hundreds of trillions of circuits.

“We’re not going to reach circuit complexities of that magnitude, not even a tenth,” said Alan Doolittle, a professor at Georgia Tech’s School of Electrical and Computer Engineering. “Also, currently science doesn’t really know yet very well how the human brain works, so we can’t duplicate it.”

Intelligent retina

But an artificial retina that can learn autonomously appears well within reach of the research team from Georgia Tech and Binghamton University. Despite the term “retina,” the development is not intended as a medical implant, but it could be used in advanced image recognition cameras for national defense and police work.

At the same time, it would significantly advance brain-mimicking, or neuromorphic, computing: the research field that takes its cues from what science already knows about how the brain computes in order to develop exponentially more powerful computing.

The retina would be composed of an array of ultra-compact circuits called neuristors (a word combining “neuron” and “transistor”) that sense light, compute an image out of it and store the image. All three of the functions would occur simultaneously and nearly instantaneously.

“The same device senses, computes and stores the image,” Doolittle said. “The device is the sensor, and it’s the processor, and it’s the memory all at the same time.” A neuristor itself is made in part of devices called memristors, which are inspired by the way human neurons work.

Brain vs. PC

That cuts out loads of processing and memory lag time that are inherent in traditional computing.

Take the device you’re reading this article on: Its microprocessor has to tap a separate memory component to get data, then do some processing, tap memory again for more data, process some more, etc. “That back-and-forth from memory to microprocessor has created a bottleneck,” Doolittle said.

A neuristor array breaks the bottleneck by emulating the extreme flexibility of biological nervous systems: When a brain computes, it uses a broad set of neural pathways that flash with enormous data. Then, later, to compute the same thing again, it will use quite different neural paths.

Traditional computer pathways, by contrast, are hardwired. For example, look at a present-day processor and you’ll see lines etched into it. Those are pathways that computational signals are limited to.

The new memristor materials at the heart of the neuristor are not etched, and signals flow through the surface very freely, more like they do through the brain, exponentially increasing the number of possible pathways computation can take. That helps the new intelligent retina compute powerfully and swiftly.

Terrorists, missing children

The retina’s memory could also store thousands of photos, allowing it to immediately match up what it sees with the saved images. The retina could pinpoint known terror suspects in a crowd, find missing children, or identify enemy aircraft virtually instantaneously, without having to trawl databases to correctly identify what is in the images.

Even if you take away the optics, the new neuristor arrays still advance artificial intelligence. Instead of light, a surface of neuristors could absorb massive data streams at once, compute them, store them, and compare them to patterns of other data, immediately. It could even autonomously learn to extrapolate further information, like calculating the third dimension out of data from two dimensions.

“It will work with anything that has a repetitive pattern like radar signatures, for example,” Doolittle said. “Right now, that’s too challenging to compute, because radar information is flying out at such a high data rate that no computer can even think about keeping up.”

Smart materials

The research project’s title acronym CEREBRAL may hint at distant dreams of an artificial brain, but what it stands for spells out the present goal in neuromorphic computing: Cross-disciplinary Electronic-ionic Research Enabling Biologically Realistic Autonomous Learning.

The intelligent retina’s neuristors are based on novel metal oxide nanotechnology materials, unique to Georgia Tech. They allow computing signals to flow flexibly across pathways that are electronic, which is customary in computing, and at the same time make use of ion motion, which is more commonly known from the way batteries and biological systems work.

The new materials have already been created, and they work, but the researchers don’t yet fully understand why.

Much of the project is dedicated to examining quantum states in the materials and how those states help create useful electronic-ionic properties. Researchers will view them by bombarding the metal oxides with extremely bright x-ray photons at the recently constructed National Synchrotron Light Source II.

Grant sub-awardee Binghamton University is located close by, and Binghamton physicists will run experiments and hone them via theoretical modeling.

‘Sea of lithium’

The neuristors are created mainly by the way the metal oxide materials are grown in the lab, which has advantages over building neuristors in a more wired way.

This materials-growing approach is conducive to mass production. Also, though neuristors in general free signals to take multiple pathways, Georgia Tech’s neuristors do it much more flexibly thanks to chemical properties.

“We also have a sea of lithium, and it’s like an infinite reservoir of computational ionic fluid,” Doolittle said. The lithium niobite imitates the way ionic fluid bathes biological neurons and allows them to flash with electric potential while signaling. In a neuristor array, the lithium niobite helps computational signaling move in myriad directions.

“It’s not like the typical semiconductor material, where you etch a line, and only that line has the computational material,” Doolittle said.

Commander Data’s brain?

“Unlike any other previous neuristors, our neuristors will adapt themselves in their computational-electronic pulsing on the fly, which makes them more like a neurological system,” Doolittle said. “They mimic biology in that we have ion drift across the material to create the memristors (the memory part of neuristors).”

Brains are far superior to computers at most things, but not all. Brains recognize objects and do motor tasks much better. But computers are much better at arithmetic and data processing.

Neuristor arrays can meld both types of computing, making them biological and algorithmic at once, a bit like Commander Data’s brain.

The research is being funded through the U.S. Department of Defense’s Multidisciplinary University Research Initiatives (MURI) Program under grant number FOA: N00014-16-R-FO05. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of those agencies.

Fascinating, non?

Brain stuff: quantum entanglement and a multi-dimensional universe

I have two brain news bits, one about neural networks and quantum entanglement and another about how the brain operates on more than three dimensions.

Quantum entanglement and neural networks

A June 13, 2017 news item on phys.org describes how machine learning can be used to solve problems in physics (Note: Links have been removed),

Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI [Joint Quantum Institute] and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.

An artist’s rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions (Credit: E. Edwards/JQI)

A June 12, 2017 JQI news release by Chris Cesare, which originated the news item, describes how neural networks can represent quantum entanglement,

Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”

On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.

The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.

What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world’s best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo’s triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool’s reach might be. “We immediately recognized that this should be a very important paper,” Deng says, “so we put all our energy and time into studying the problem more.”

The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.
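That ‘specify a number for each connection and mathematically forget the hidden neurons’ step has a tidy closed form. Here’s a sketch (mine, following the general restricted-Boltzmann-machine construction these networks use, not the paper’s specific states): summing out the hidden neurons analytically turns the amplitude into a product of cosh factors,

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3   # 6 spins, 3 hidden neurons (toy sizes)

# One complex number per neuron and per connection defines the state.
a = rng.normal(size=n_visible) + 1j * rng.normal(size=n_visible)
b = rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden)
W = 0.1 * (rng.normal(size=(n_hidden, n_visible))
           + 1j * rng.normal(size=(n_hidden, n_visible)))

def amplitude(sigma):
    """Unnormalized amplitude for a spin configuration in {-1, +1}^n.

    The hidden neurons are summed out analytically, leaving a product
    of cosh factors: no explicit sum over hidden configurations needed.
    """
    return np.exp(a @ sigma) * np.prod(2 * np.cosh(b + W @ sigma))

print("amplitude:", amplitude(np.array([1, -1, 1, 1, -1, 1])))
```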

Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.

These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.

Here’s a link to and a citation for the paper,

Quantum Entanglement in Neural Network States by Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Phys. Rev. X 7, 021021 – Published 11 May 2017

This paper is open access.

Blue Brain and the multidimensional universe

Blue Brain is a Swiss government brain research initiative which officially came to life in 2006, although the initial agreement between the École Polytechnique Fédérale de Lausanne (EPFL) and IBM was signed in 2005 (according to the project’s Timeline page). Moving on, the project’s latest research reveals something astounding (from a June 12, 2017 Frontiers Publishing press release on EurekAlert),

For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
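To make the clique-to-dimension correspondence concrete, here’s a small sketch (mine, run on a random undirected toy graph with the networkx library; the Blue Brain work actually uses directed cliques on reconstructed connectomes). A clique of n all-to-all-connected nodes is counted as an (n - 1)-dimensional simplex,

```python
import networkx as nx
from collections import Counter

# Toy undirected random graph standing in for a neural wiring diagram.
G = nx.gnp_random_graph(30, 0.4, seed=42)

# Each clique of n mutually connected nodes corresponds to an
# (n - 1)-dimensional simplex in the topological analysis.
dims = Counter(len(clique) - 1 for clique in nx.enumerate_all_cliques(G))
for dim in sorted(dims):
    print(f"dimension {dim}: {dims[dim]} simplices")
```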

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain’s wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes, that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.


About Blue Brain

The aim of the Blue Brain Project, a Swiss brain initiative founded and directed by Professor Henry Markram, is to build accurate, biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately, the human brain. The supercomputer-based reconstructions and simulations built by Blue Brain offer a radically new approach for understanding the multilevel structure and function of the brain. http://bluebrain.epfl.ch

About Frontiers

Frontiers is a leading community-driven open-access publisher. By taking publishing entirely online, we drive innovation with new technologies to make peer review more efficient and transparent. We provide impact metrics for articles and researchers, and merge open access publishing with a research network platform – Loop – to catalyse research dissemination, and popularize research to the public, including children. Our goal is to increase the reach and impact of research articles and their authors. Frontiers has received the ALPSP Gold Award for Innovation in Publishing in 2014. http://www.frontiersin.org.

Here’s a link to and a citation for the paper,

Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function by Michael W. Reimann, Max Nolte, Martina Scolamiero, Katharine Turner, Rodrigo Perin, Giuseppe Chindemi, Paweł Dłotko, Ran Levi, Kathryn Hess, and Henry Markram. Front. Comput. Neurosci., 12 June 2017 | https://doi.org/10.3389/fncom.2017.00048

This paper is open access.

Hacking the human brain with a junction-based artificial synaptic device

Earlier today I published a piece featuring Dr. Wei Lu’s work on memristors and the movement to create an artificial brain (my June 28, 2017 posting: Dr. Wei Lu and bio-inspired ‘memristor’ chips). For this posting I’m featuring a non-memristor (if I’ve properly understood the technology) type of artificial synapse. From a June 28, 2017 news item on Nanowerk,

One of the greatest challenges facing artificial intelligence development is understanding the human brain and figuring out how to mimic it.

Now, one group reports in ACS Nano (“Emulating Bilingual Synaptic Response Using a Junction-Based Artificial Synaptic Device”) that they have developed an artificial synapse capable of simulating a fundamental function of our nervous system — the release of inhibitory and stimulatory signals from the same “pre-synaptic” terminal.

Unfortunately, the American Chemical Society news release on EurekAlert, which originated the news item, doesn’t provide too much more detail,

The human nervous system is made up of over 100 trillion synapses, structures that allow neurons to pass electrical and chemical signals to one another. In mammals, these synapses can initiate and inhibit biological messages. Many synapses just relay one type of signal, whereas others can convey both types simultaneously or can switch between the two. To develop artificial intelligence systems that better mimic human learning, cognition and image recognition, researchers are imitating synapses in the lab with electronic components. Most current artificial synapses, however, are only capable of delivering one type of signal. So, Han Wang, Jing Guo and colleagues sought to create an artificial synapse that can reconfigurably send stimulatory and inhibitory signals.

The researchers developed a synaptic device that can reconfigure itself based on voltages applied at the input terminal of the device. A junction made of black phosphorus and tin selenide enables switching between the excitatory and inhibitory signals. This new device is flexible and versatile, which is highly desirable in artificial neural networks. In addition, the artificial synapses may simplify the design and functions of nervous system simulations.
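The news release is thin on mechanism, but the headline behaviour (one input terminal reconfiguring the synapse between excitatory and inhibitory responses) can at least be cartooned in code. This is my toy model of the behaviour, not the black phosphorus/tin selenide device physics,

```python
# Toy model of a "bilingual" synapse: the same junction responds with an
# excitatory or inhibitory postsynaptic signal depending on the voltage
# applied at its input terminal. Illustrative only; not the real physics.
class BilingualSynapse:
    def __init__(self, conductance=1.0, threshold=0.0):
        self.g = conductance
        self.threshold = threshold

    def respond(self, spike, control_voltage):
        """Return a positive (excitatory) or negative (inhibitory) current."""
        sign = 1.0 if control_voltage > self.threshold else -1.0
        return sign * self.g * spike

syn = BilingualSynapse()
print(syn.respond(spike=1.0, control_voltage=+0.5))   # excitatory: +1.0
print(syn.respond(spike=1.0, control_voltage=-0.5))   # inhibitory: -1.0
```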

Here’s how I concluded that this is not a memristor-type device (from the paper [first paragraph, final sentence]; a link and citation will follow; Note: Links have been removed),

The conventional memristor-type [emphasis mine](14-20) and transistor-type(21-25) artificial synapses can realize synaptic functions in a single semiconductor device but lacks the ability [emphasis mine] to dynamically reconfigure between excitatory and inhibitory responses without the addition of a modulating terminal.

Here’s a link to and a citation for the paper,

Emulating Bilingual Synaptic Response Using a Junction-Based Artificial Synaptic Device by He Tian, Xi Cao, Yujun Xie, Xiaodong Yan, Andrew Kostelec, Don DiMarzio, Cheng Chang, Li-Dong Zhao, Wei Wu, Jesse Tice, Judy J. Cha, Jing Guo, and Han Wang. ACS Nano, Article ASAP. DOI: 10.1021/acsnano.7b03033. Publication Date (Web): June 28, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Dr. Wei Lu and bio-inspired ‘memristor’ chips

It’s been a while since I’ve featured Dr. Wei Lu’s work here. (This April 15, 2010 posting features Lu’s most relevant previous work.) Here’s his latest ‘memristor’ work, from a May 22, 2017 news item on Nanowerk (Note: A link has been removed),

Inspired by how mammals see, a new “memristor” computer circuit prototype at the University of Michigan has the potential to process complex data, such as images and video, orders of magnitude faster and with much less power than today’s most advanced systems.

Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, U-M professor of electrical engineering and computer science. Lu is lead author of a paper on the work published in the current issue of Nature Nanotechnology (“Sparse coding with memristor networks”).

Lu’s next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and his colleagues demonstrate an algorithm that relies on a technique called “sparse coding” to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos.

A May 22, 2017 University of Michigan news release (also on EurekAlert), which originated the news item, provides more information about memristors and about the research,

Memristors are electrical resistors with memory—advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. In a conventional computer, logic and memory functions are located at different parts of the circuit.

“The tasks we ask of today’s computers have grown in complexity,” Lu said. “In this ‘big data’ era, computers require costly, constant and slow communications between their processor and memory to retrieve large amounts of data. This makes them large, expensive and power-hungry.”

But like neural networks in a biological brain, networks of memristors can perform many operations at the same time, without having to move data around. As a result, they could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning. Memristors are good candidates for deep neural networks, a branch of machine learning, which trains computers to execute processes without being explicitly programmed to do so.

“We need our next-generation electronics to be able to quickly process complex data in a dynamic environment. You can’t just write a program to do that. Sometimes you don’t even have a pre-defined task,” Lu said. “To make our systems smarter, we need to find ways for them to process a lot of data more efficiently. Our approach to accomplish that is inspired by neuroscience.”

A mammal’s brain is able to generate sweeping, split-second impressions of what the eyes take in. One reason is that mammals can quickly recognize different arrangements of shapes. Humans do this using only a limited number of neurons that become active, Lu says. Both neuroscientists and computer scientists call the process “sparse coding.”

“When we take a look at a chair we will recognize it because its characteristics correspond to our stored mental picture of a chair,” Lu said. “Although not all chairs are the same and some may differ from a mental prototype that serves as a standard, each chair retains some of the key characteristics necessary for easy recognition. Basically, the object is correctly recognized the moment it is properly classified—when ‘stored’ in the appropriate category in our heads.”

Similarly, Lu’s electronic system is designed to detect the patterns very efficiently—and to use as few features as possible to describe the original input.

In our brains, different neurons recognize different patterns, Lu says.

“When we see an image, the neurons that recognize it will become more active,” he said. “The neurons will also compete with each other to naturally create an efficient representation. We’re implementing this approach in our electronic system.”

The researchers trained their system to learn a “dictionary” of images. Trained on a set of grayscale image patterns, their memristor network was able to reconstruct images of famous paintings and photos and other test patterns.
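For anyone wondering what ‘sparse coding’ means operationally, here’s a software sketch (mine; Lu’s array does this in analog hardware, and the dictionary there is learned from images rather than random). It reconstructs a signal from a dictionary while keeping only a few coefficients active, using the standard iterative soft-thresholding algorithm,

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_atoms = 64, 128

# Random normalized dictionary; each column is one "feature" atom.
D = rng.normal(size=(n_pixels, n_atoms))
D /= np.linalg.norm(D, axis=0)

# A signal that truly is a mix of just three dictionary atoms.
x_true = np.zeros(n_atoms)
x_true[[5, 40, 99]] = [1.0, -0.7, 0.5]
y = D @ x_true

# ISTA: a gradient step on the reconstruction error, then a soft
# threshold so only a few coefficients ("neurons") stay active.
step = 1.0 / np.linalg.norm(D, 2) ** 2
lam = 0.05
x = np.zeros(n_atoms)
for _ in range(300):
    x = x + step * D.T @ (y - D @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print("active atoms:", np.flatnonzero(np.abs(x) > 1e-3))   # expect ~[5, 40, 99]
```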

If their system can be scaled up, they expect to be able to process and analyze video in real time in a compact system that can be directly integrated with sensors or cameras.

The project is titled “Sparse Adaptive Local Learning for Sensing and Analytics.” Other collaborators are Zhengya Zhang and Michael Flynn of the U-M Department of Electrical Engineering and Computer Science, Garrett Kenyon of the Los Alamos National Lab and Christof Teuscher of Portland State University.

The work is part of a $6.9 million Unconventional Processing of Signals for Intelligent Data Exploitation project that aims to build a computer chip based on self-organizing, adaptive neural networks. It is funded by the [US] Defense Advanced Research Projects Agency [DARPA].

Here’s a link to and a citation for the paper,

Sparse coding with memristor networks by Patrick M. Sheridan, Fuxi Cai, Chao Du, Wen Ma, Zhengya Zhang, & Wei D. Lu. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.83 Published online 22 May 2017

This paper is behind a paywall.

For the interested, there are a number of postings featuring memristors here (just use ‘memristor’ as your search term in the blog search engine). You might also want to check out ‘neuromorphic engineering’, ‘neuromorphic computing’ and ‘artificial brain’.

Self-learning neuromorphic chip

There aren’t many details about this chip and, so far as I can tell, this technology is not based on a memristor. From a May 16, 2017 news item on phys.org,

Today [May 16, 2017], at the imec technology forum (ITF2017), imec demonstrated the world’s first self-learning neuromorphic chip. The brain-inspired chip, based on OxRAM technology, has the capability of self-learning and has been demonstrated to have the ability to compose music.


A May 16, 2017 imec press release, which originated the news item, expands on the theme,

The human brain is a dream for computer scientists: it has huge computing power while consuming only a few tens of watts. Imec researchers are combining state-of-the-art hardware and software to design chips that feature these desirable characteristics of a self-learning system. Imec’s ultimate goal is to design the process technology and building blocks to make artificial intelligence energy efficient so that it can be integrated into sensors. Such intelligent sensors will drive the internet of things forward. This would not only allow machine learning to be present in all sensors but also allow on-field learning capability to further improve the learning.

By co-optimizing the hardware and the software, the chip features machine learning and intelligence characteristics on a small area, while consuming only very little power. The chip is self-learning, meaning that it makes associations between what it has experienced and what it experiences. The more it experiences, the stronger the connections will be. The chip presented today has learned to compose new music, and the rules for the composition are learnt on the fly.

It is imec’s ultimate goal to further advance both hardware and software to achieve very low-power, high-performance, low-cost and highly miniaturized neuromorphic chips that can be applied in many domains ranging from personal health to energy and traffic management. For example, neuromorphic chips integrated into sensors for health monitoring would make it possible to identify a particular heart rate change that could signal heart abnormalities, and would learn to recognize slightly different ECG patterns that vary between individuals. Such neuromorphic chips would thus enable more customized and patient-centric monitoring.

“Because we have hardware, system design and software expertise under one roof, imec is ideally positioned to drive neuromorphic computing forward,” says Praveen Raghavan, distinguished member of the technical staff at imec. “Our chip has evolved from co-optimizing logic, memory, algorithms and system in a holistic way. This way, we succeeded in developing the building blocks for such a self-learning system.”

About ITF

The Imec Technology Forum (ITF) is imec’s series of internationally acclaimed events with a clear focus on the technologies that will drive groundbreaking innovation in healthcare, smart cities and mobility, ICT, logistics and manufacturing, and energy.

At ITF, some of the world’s greatest minds in technology take the stage. Their talks cover a wide range of domains – such as advanced chip scaling, smart imaging, sensor and communication systems, the IoT, supercomputing, sustainable energy and battery technology, and much more. As leading innovators in their fields, they also present early insights in market trends, evolutions, and breakthroughs in nanoelectronics and digital technology: What will be successful and what not, in five or even ten years from now? How will technology evolve, and how fast? And who can help you implement your technology roadmaps?

About imec

Imec is the world-leading research and innovation hub in nano-electronics and digital technologies. The combination of our widely-acclaimed leadership in microchip technology and profound software and ICT expertise is what makes us unique. By leveraging our world-class infrastructure and local and global ecosystem of partners across a multitude of industries, we create groundbreaking innovation in application domains such as healthcare, smart cities and mobility, logistics and manufacturing, and energy.

As a trusted partner for companies, start-ups and universities, we bring together close to 3,500 brilliant minds from over 75 nationalities. Imec is headquartered in Leuven, Belgium, with distributed R&D groups at a number of Flemish universities, in the Netherlands, Taiwan, the USA, and China, and offices in India and Japan. In 2016, imec’s revenue (P&L) totaled 496 million euro. Further information on imec can be found at www.imec.be.

Imec is a registered trademark for the activities of IMEC International (a legal entity set up under Belgian law as a “stichting van openbaar nut”), imec Belgium (IMEC vzw supported by the Flemish Government), imec the Netherlands (Stichting IMEC Nederland, part of Holst Centre which is supported by the Dutch Government), imec Taiwan (IMEC Taiwan Co.), imec China (IMEC Microelectronics (Shanghai) Co. Ltd.), imec India (Imec India Private Limited), and imec Florida (IMEC USA nanoelectronics design center).

I don’t usually include the ‘abouts’ but I was quite intrigued by imec. For anyone curious about the ITF (imec forums), here’s a website with a listing of all the previously held and upcoming 2017 forums.
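imec hasn’t published implementation details, so here, as a purely illustrative aside, is a toy Python sketch of the general idea in the press release: associations that strengthen with each experience, then new ‘music’ composed by following the strongest associations. The note names, reinforcement value, and sampling scheme are my own inventions; the real chip does this in OxRAM hardware, not software,

    import random

    # Toy association table: strength[a][b] grows each time note b follows note a.
    NOTES = ["C", "D", "E", "F", "G", "A", "B"]
    strength = {a: {b: 1.0 for b in NOTES} for a in NOTES}

    def listen(melody, reinforcement=1.0):
        # The more often a transition is experienced, the stronger it becomes.
        for a, b in zip(melody, melody[1:]):
            strength[a][b] += reinforcement

    def compose(start, length):
        # Pick each next note in proportion to the learned association strengths.
        melody = [start]
        for _ in range(length - 1):
            weights = [strength[melody[-1]][b] for b in NOTES]
            melody.append(random.choices(NOTES, weights=weights)[0])
        return melody

    listen(["C", "E", "G", "E", "C"])
    listen(["C", "E", "G", "A", "G"])
    print(compose("C", 8))  # new melodies now favour the transitions heard above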

An explanation of neural networks from the Massachusetts Institute of Technology (MIT)

I always enjoy the MIT ‘explainers’ and have been a little sad that I haven’t stumbled across one in a while. Until now, that is. Here’s an April 14, 2017 neural network ‘explainer’ (in its entirety) by Larry Hardesty (?),

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1944 by Warren McCullough and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

Weighty matters

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
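To make the arithmetic in those two paragraphs concrete, here is a toy Python version of a single node (my own sketch, not MIT’s; the numbers are arbitrary): multiply each input by its weight, add everything up, and pass the sum along only if it clears the threshold,

    def node_output(inputs, weights, threshold):
        # Weighted sum of everything arriving on the incoming connections.
        total = sum(x * w for x, w in zip(inputs, weights))
        # "Fire" only if the sum exceeds the threshold; otherwise send nothing.
        return total if total > threshold else 0.0

    # Three incoming connections; in a real net the weights and threshold are learned.
    print(node_output([0.5, 0.9, 0.2], [1.2, 0.7, -0.4], threshold=1.0))  # 1.15, fires
    print(node_output([0.1, 0.2, 0.9], [1.2, 0.7, -0.4], threshold=1.0))  # 0.0, silent

Training, as described above, amounts to nudging those weights and thresholds until the outputs line up with the labels.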

Minds and machines

The neural nets described by McCullough and Pitts in 1944 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCullough and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.

Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron’s design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.
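Rosenblatt’s learning rule is simple enough to fit in a few lines. Here is the textbook perceptron update as a sketch (the learning rate, epoch count and the AND task are my choices, not anything from the MIT piece),

    def train_perceptron(examples, n_inputs, lr=0.1, epochs=20):
        weights, bias = [0.0] * n_inputs, 0.0
        for _ in range(epochs):
            for inputs, target in examples:  # target is 0 or 1
                total = sum(x * w for x, w in zip(inputs, weights)) + bias
                error = target - (1 if total > 0 else 0)  # -1, 0 or +1
                # Nudge each weight in the direction that reduces the error.
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # Logical AND is linearly separable, so one layer of weights suffices.
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    print(train_perceptron(data, n_inputs=2))

Swap in the XOR truth table and the loop never settles, a taste of the single-layer limitation discussed next.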

Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled “Perceptrons,” which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.

“Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated — like, two layers,” Poggio says. But at the time, the book had a chilling effect on neural-net research.

“You have to put these things in historical context,” Poggio says. “They were arguing for programming — for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it’s not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing.”

Periodicity

By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.

But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.

In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.

The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.

Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.
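Picking up my earlier toy node, ‘depth’ simply means stacking layers of such nodes so that each layer’s outputs become the next layer’s inputs. A minimal sketch (the weights are arbitrary),

    def node_output(inputs, weights, threshold=0.0):
        total = sum(x * w for x, w in zip(inputs, weights))
        return total if total > threshold else 0.0

    def layer_output(inputs, layer_weights):
        # One layer = several nodes, each with its own vector of incoming weights.
        return [node_output(inputs, w) for w in layer_weights]

    x = [0.5, 0.9, 0.2]                          # input layer: three values
    for layer_weights in [
        [[1.2, 0.7, -0.4], [0.3, -0.2, 0.8]],    # hidden layer: 3 inputs -> 2 nodes
        [[0.5, 0.5], [-0.3, 0.9], [1.0, 0.1]],   # output layer: 2 inputs -> 3 nodes
    ]:
        x = layer_output(x, layer_weights)       # feed-forward, one layer at a time
    print(x)

A 50-layer network is the same loop with 50 entries in that list (and, in practice, millions of learned weights).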

Under the hood

The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.

The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.

There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.

This image from MIT illustrates a ‘modern’ neural network,

Most applications of deep learning use “convolutional” neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes (orange and green) of the next layer. Image: Jose-Luis Olivares/MIT

h/t phys.org April 17, 2017

One final note: I wish the folks at MIT had an ‘explainer’ archive. I’m not sure how to find any more ‘explainers’ on MIT’s website.

Predicting how a memristor functions

An April 3, 2017 news item on Nanowerk announces a new memristor development (Note: A link has been removed),

Researchers from the CNRS [Centre national de la recherche scientifique; France], Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have created an artificial synapse capable of learning autonomously. They were also able to model the device, which is essential for developing more complex circuits. The research was published in Nature Communications (“Learning through ferroelectric domain dynamics in solid-state synapses”).

An April 3, 2017 CNRS press release, which originated the news item, provides a nice introduction to the memristor concept before providing a few more details about this latest work (Note: A link has been removed),

One of the goals of biomimetics is to take inspiration from the functioning of the brain [also known as neuromorphic engineering or neuromorphic computing] in order to design increasingly intelligent machines. This principle is already at work in information technology, in the form of the algorithms used for completing certain tasks, such as image recognition; this, for instance, is what Facebook uses to identify photos. However, the procedure consumes a lot of energy. Vincent Garcia (Unité mixte de physique CNRS/Thales) and his colleagues have just taken a step forward in this area by creating directly on a chip an artificial synapse that is capable of learning. They have also developed a physical model that explains this learning capacity. This discovery opens the way to creating a network of synapses and hence intelligent systems requiring less time and energy.

Our brain’s learning process is linked to our synapses, which serve as connections between our neurons. The more the synapse is stimulated, the more the connection is reinforced and learning improved. Researchers took inspiration from this mechanism to design an artificial synapse, called a memristor. This electronic nanocomponent consists of a thin ferroelectric layer sandwiched between two electrodes, and whose resistance can be tuned using voltage pulses similar to those in neurons. If the resistance is low the synaptic connection will be strong, and if the resistance is high the connection will be weak. This capacity to adapt its resistance enables the synapse to learn.
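The team’s actual physical model is in the Nature Communications paper linked below; as a cartoon of the mechanism described in this paragraph, here is a toy Python synapse whose conductance (the inverse of its resistance) plays the role of the connection strength. The pulse effect and resistance bounds are invented for illustration,

    class ToySynapse:
        def __init__(self, resistance=10_000.0, r_min=1_000.0, r_max=100_000.0):
            self.resistance = resistance           # ohms
            self.r_min, self.r_max = r_min, r_max  # physical limits of the device

        def pulse(self, voltage):
            # Positive pulses lower the resistance (stronger connection);
            # negative pulses raise it (weaker connection).
            factor = 0.8 if voltage > 0 else 1.25
            self.resistance = min(max(self.resistance * factor, self.r_min), self.r_max)

        @property
        def weight(self):
            return 1.0 / self.resistance  # low resistance = strong synaptic connection

    syn = ToySynapse()
    for _ in range(5):   # repeated stimulation reinforces the connection
        syn.pulse(+1.0)
    print(round(syn.resistance), round(syn.weight, 6))  # ~3277 ohms, weight ~0.000305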

Although research focusing on these artificial synapses is central to the concerns of many laboratories, the functioning of these devices remained largely unknown. The researchers have succeeded, for the first time, in developing a physical model able to predict how they function. This understanding of the process will make it possible to create more complex systems, such as a series of artificial neurons interconnected by these memristors.

As part of the ULPEC H2020 European project, this discovery will be used for real-time shape recognition using an innovative camera: the pixels remain inactive, except when they see a change in the angle of vision. The data processing procedure will require less energy, and will take less time to detect the selected objects. The research involved teams from the CNRS/Thales physics joint research unit, the Laboratoire de l’intégration du matériau au système (CNRS/Université de Bordeaux/Bordeaux INP), the University of Arkansas (US), the Centre de nanosciences et nanotechnologies (CNRS/Université Paris-Sud), the Université d’Evry, and Thales.

 

Artist’s impression of the electronic synapse: the particles represent electrons circulating through oxide, by analogy with neurotransmitters in biological synapses. The flow of electrons depends on the oxide’s ferroelectric domain structure, which is controlled by electric voltage pulses. Image: © Sören Boyn / CNRS/Thales physics joint research unit.
Here’s a link to and a citation for the paper,

Learning through ferroelectric domain dynamics in solid-state synapses by Sören Boyn, Julie Grollier, Gwendal Lecerf, Bin Xu, Nicolas Locatelli, Stéphane Fusil, Stéphanie Girod, Cécile Carrétéro, Karin Garcia, Stéphane Xavier, Jean Tomas, Laurent Bellaiche, Manuel Bibes, Agnès Barthélémy, Sylvain Saïghi, & Vincent Garcia. Nature Communications 8, Article number: 14736 (2017) doi:10.1038/ncomms14736 Published online: 03 April 2017

This paper is open access.

Thales or Thales Group is a French company, from its Wikipedia entry (Note: Links have been removed),

Thales Group (French: [talɛs]) is a French multinational company that designs and builds electrical systems and provides services for the aerospace, defence, transportation and security markets. Its headquarters are in La Défense[2] (the business district of Paris), and its stock is listed on the Euronext Paris.

The company changed its name to Thales (from the Greek philosopher Thales,[3] pronounced [talɛs] reflecting its pronunciation in French) from Thomson-CSF in December 2000 shortly after the £1.3 billion acquisition of Racal Electronics plc, a UK defence electronics group. It is partially state-owned by the French government,[4] and has operations in more than 56 countries. It has 64,000 employees and generated €14.9 billion in revenues in 2016. The Group is ranked as the 475th largest company in the world by Fortune 500 Global.[5] It is also the 10th largest defence contractor in the world[6] and 55% of its total sales are military sales.[4]

The ULPEC (Ultra-Low Power Event-Based Camera) H2020 [Horizon 2020 funded] European project can be found here,

The long term goal of ULPEC is to develop advanced vision applications with ultra-low power requirements and ultra-low latency. The output of the ULPEC project is a demonstrator connecting a neuromorphic event-based camera to a high speed ultra-low power consumption asynchronous visual data processing system (Spiking Neural Network with memristive synapses). Although ULPEC device aims to reach TRL 4, it is a highly application-oriented project: prospective use cases will b…

Finally, for anyone curious about Thales, the philosopher (from his Wikipedia entry), Note: Links have been removed,

Thales of Miletus (/ˈθeɪliːz/; Greek: Θαλῆς (ὁ Μῑλήσιος), Thalēs; c. 624 – c. 546 BC) was a pre-Socratic Greek/Phoenician philosopher, mathematician and astronomer from Miletus in Asia Minor (present-day Milet in Turkey). He was one of the Seven Sages of Greece. Many, most notably Aristotle, regard him as the first philosopher in the Greek tradition,[1][2] and he is otherwise historically recognized as the first individual in Western civilization known to have entertained and engaged in scientific philosophy.[3][4]

Nanoelectronic thread (NET) brain probes for long-term neural recording

A rendering of the ultra-flexible probe in neural tissue gives viewers a sense of the device’s tiny size and footprint in the brain. Image credit: Science Advances.

As long time readers have likely noted, I’m not a big fan of this rush to ‘colonize’ the brain, but it continues apace as a Feb. 15, 2017 news item on Nanowerk announces a new type of brain probe,

Engineering researchers at The University of Texas at Austin have designed ultra-flexible, nanoelectronic thread (NET) brain probes that can achieve more reliable long-term neural recording than existing probes and don’t elicit scar formation when implanted.

A Feb. 15, 2017 University of Texas at Austin news release, which originated the news item, provides more information about the new probes (Note: A link has been removed),

A team led by Chong Xie, an assistant professor in the Department of Biomedical Engineering in the Cockrell School of Engineering, and Lan Luan, a research scientist in the Cockrell School and the College of Natural Sciences, has developed new probes whose mechanical compliance approaches that of brain tissue and which are more than 1,000 times more flexible than other neural probes. This ultra-flexibility improves the ability to record and track the electrical activity of individual neurons reliably over long periods. There is growing interest in long-term tracking of individual neurons for neural interface applications, such as extracting neural-control signals so that amputees can operate high-performance prostheses. It also opens up new possibilities for following the progression of neurovascular and neurodegenerative diseases such as stroke, Parkinson’s and Alzheimer’s diseases.

One of the problems with conventional probes is their size and mechanical stiffness; their larger dimensions and stiffer structures often damage the surrounding tissue. Additionally, while conventional electrodes can record brain activity for months, they often provide unreliable, degrading recordings, and it is challenging for them to electrophysiologically track individual neurons for more than a few days.

In contrast, the UT Austin team’s electrodes are flexible enough that they comply with the microscale movements of tissue and still stay in place. The probe’s size also drastically reduces the tissue displacement, so the brain interface is more stable, and the readings are more reliable for longer periods of time. To the researchers’ knowledge, the UT Austin probe — which is as small as 10 microns at a thickness below 1 micron, and has a cross-section that is only a fraction of that of a neuron or blood capillary — is the smallest among all neural probes.

“What we did in our research is prove that we can suppress tissue reaction while maintaining a stable recording,” Xie said. “In our case, because the electrodes are very, very flexible, we don’t see any sign of brain damage — neurons stayed alive even in contact with the NET probes, glial cells remained inactive and the vasculature didn’t become leaky.”

In experiments in mouse models, the researchers found that the probe’s flexibility and size prevented the agitation of glial cells, which is the normal biological reaction to a foreign body and leads to scarring and neuronal loss.

“The most surprising part of our work is that the living brain tissue, the biological system, really doesn’t mind having an artificial device around for months,” Luan said.

The researchers also used advanced imaging techniques in collaboration with biomedical engineering professor Andrew Dunn and neuroscientists Raymond Chitwood and Jenni Siegel from the Institute for Neuroscience at UT Austin to confirm that the NET enabled neural interface did not degrade in the mouse model for over four months of experiments. The researchers plan to continue testing their probes in animal models and hope to eventually engage in clinical testing. The research received funding from the UT BRAIN seed grant program, the Department of Defense and National Institutes of Health.

Here’s a link to and citation for the paper,

Ultraflexible nanoelectronic probes form reliable, glial scar–free neural integration by Lan Luan, Xiaoling Wei, Zhengtuo Zhao, Jennifer J. Siegel, Ojas Potnis, Catherine A. Tuppen, Shengqing Lin, Shams Kazmi, Robert A. Fowler, Stewart Holloway, Andrew K. Dunn, Raymond A. Chitwood, and Chong Xie. Science Advances 15 Feb 2017: Vol. 3, no. 2, e1601966 DOI: 10.1126/sciadv.1601966

This paper is open access.

You can get more detail about the research in a Feb. 17, 2017 posting by Dexter Johnson on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).