Tag Archives: University of Michigan

Entropic bonding for nanoparticle crystals

A January 19, 2022 University of Michigan news release (also on EurekAlert) is written in a Q&A (question and answer) style not usually seen in news releases (Note: Links have been removed),

Turns out entropy binds nanoparticles a lot like electrons bind chemical crystals

ANN ARBOR—Entropy, a physical property often explained as “disorder,” is revealed as a creator of order with a new bonding theory developed at the University of Michigan and published in the Proceedings of the National Academy of Sciences [PNAS]. 

Engineers dream of using nanoparticles to build designer materials, and the new theory can help guide efforts to make nanoparticles assemble into useful structures. The theory explains earlier results exploring the formation of crystal structures by space-restricted nanoparticles, enabling entropy to be quantified and harnessed in future efforts. 

And curiously, the set of equations that govern nanoparticle interactions due to entropy mirror those that describe chemical bonding. Sharon Glotzer, the Anthony C. Lembke Department Chair of Chemical Engineering, and Thi Vo, a postdoctoral researcher in chemical engineering, answered some questions about their new theory.

What is entropic bonding?

Glotzer: Entropic bonding is a way of explaining how nanoparticles interact to form crystal structures. It’s analogous to the chemical bonds formed by atoms. But unlike atoms, there aren’t electron interactions holding these nanoparticles together. Instead, the attraction arises because of entropy. 

Oftentimes, entropy is associated with disorder, but it’s really about options. When nanoparticles are crowded together and options are limited, it turns out that the most likely arrangement of nanoparticles can be a particular crystal structure. That structure gives the system the most options, and thus the highest entropy. Large entropic forces arise when the particles become close to one another. 

By doing the most extensive studies of particle shapes and the crystals they form, my group found that as you change the shape, you change the directionality of those entropic forces that guide the formation of these crystal structures. That directionality simulates a bond, and since it’s driven by entropy, we call it entropic bonding.

Why is this important?

Glotzer: Entropy’s contribution to creating order is often overlooked when designing nanoparticles for self-assembly, but that’s a mistake. If entropy is helping your system organize itself, you may not need to engineer explicit attraction between particles—for example, using DNA or other sticky molecules—with as strong an interaction as you thought. With our new theory, we can calculate the strength of those entropic bonds.

While we’ve known that entropic interactions can be directional like bonds, our breakthrough is that we can describe those bonds with a theory that line-for-line matches the theory that you would write down for electron interactions in actual chemical bonds. That’s profound. I’m amazed that it’s even possible to do that. Mathematically speaking, it puts chemical bonds and entropic bonds on the same footing. This is both fundamentally important for our understanding of matter and practically important for making new materials.

Electrons are the key to those chemical equations though. How did you do this when no particles mediate the interactions between your nanoparticles?

Glotzer: Entropy is related to the free space in the system, but for years I didn’t know how to count that space. Thi’s big insight was that we could count that space using fictitious point particles. And that gave us the mathematical analogue of the electrons.

Vo: The pseudoparticles move around the system and fill in the spaces that are hard for another nanoparticle to fill—we call this the excluded volume around each nanoparticle. As the nanoparticles become more ordered, the excluded volume around them becomes smaller, and the concentration of pseudoparticles in those regions increases. The entropic bonds are where that concentration is highest. 

In crowded conditions, the entropy lost by increasing the order is outweighed by the entropy gained by shrinking the excluded volume. As a result, the configuration with the highest entropy will be the one where pseudoparticles occupy the least space.
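Vo's trick of counting space with fictitious point particles can be pictured with a quick Monte Carlo sketch. This is my own toy illustration, not the paper's actual theory: scatter random test points around a single disk-shaped particle and count how many land in the region where a second particle's center cannot go.

```python
import math
import random

def excluded_area_disk(r, samples=200_000, seed=0):
    """Monte Carlo estimate of the excluded area around a disk of radius r:
    the region where the center of a second, identical disk cannot sit
    without the two overlapping. The random test points play the role of
    the fictitious point particles counting up inaccessible space."""
    rng = random.Random(seed)
    half = 4 * r                             # sampling box comfortably covers the zone
    hits = 0
    for _ in range(samples):
        x = rng.uniform(-half, half)
        y = rng.uniform(-half, half)
        if x * x + y * y < (2 * r) ** 2:     # centers closer than 2r means overlap
            hits += 1
    return hits / samples * (2 * half) ** 2  # hit fraction times box area

est = excluded_area_disk(0.5)
print(est)  # analytic answer is pi*(2*0.5)**2 = pi, roughly 3.14
```

Shrinking this excluded region is exactly what the ordered crystal does, which is why the highest-entropy arrangement can be the ordered one.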

The research is funded by the Simons Foundation, Office of Naval Research, and the Office of the Undersecretary of Defense for Research and Engineering. It relied on the computing resources of the National Science Foundation’s Extreme Science and Engineering Discovery Environment. Glotzer is also the John Werner Cahn Distinguished University Professor of Engineering, the Stuart W. Churchill Collegiate Professor of Chemical Engineering, and a professor of materials science and engineering, macromolecular science and engineering, and physics at U-M.

Here’s a link to and a citation for the paper,

A theory of entropic bonding by Thi Vo and Sharon C. Glotzer. PNAS January 25, 2022 119 (4) e2116414119 DOI: https://doi.org/10.1073/pnas.2116414119

This paper is behind a paywall.

The nanoscale precision of pearls

An October 21, 2021 news item on phys.org features a quote about nothingness and symmetry (Note: A link has been removed),

In research that could inform future high-performance nanomaterials, a University of Michigan-led team has uncovered for the first time how mollusks build ultradurable structures with a level of symmetry that outstrips everything else in the natural world, with the exception of individual atoms.

“We humans, with all our access to technology, can’t make something with a nanoscale architecture as intricate as a pearl,” said Robert Hovden, U-M assistant professor of materials science and engineering and an author on the paper. “So we can learn a lot by studying how pearls go from disordered nothingness to this remarkably symmetrical structure.” [emphasis mine]

The analysis was done in collaboration with researchers at the Australian National University, Lawrence Berkeley National Laboratory, Western Norway University [of Applied Sciences] and Cornell University.

a. A Keshi pearl that has been sliced into pieces for study. b. A magnified cross-section of the pearl shows its transition from its disorderly center to thousands of layers of finely matched nacre. c. A magnification of the nacre layers shows their self-correction—when one layer is thicker, the next is thinner to compensate, and vice-versa. d, e: Atomic scale images of the nacre layers. f, g, h, i: Microscopy images detail the transitions between the pearl’s layers. Credit: University of Michigan

An October 21, 2021 University of Michigan news release (also on EurekAlert), which originated the news item, reveals a surprise,

Published in the Proceedings of the National Academy of Sciences [PNAS], the study found that a pearl’s symmetry becomes more and more precise as it builds, answering centuries-old questions about how the disorder at its center becomes a sort of perfection. 

Layers of nacre, the iridescent and extremely durable organic-inorganic composite that also makes up the shells of oysters and other mollusks, build on a shard of aragonite that surrounds an organic center. The layers, which make up more than 90% of a pearl’s volume, become progressively thinner and more closely matched as they build outward from the center.

Perhaps the most surprising finding is that mollusks maintain the symmetry of their pearls by adjusting the thickness of each layer of nacre. If one layer is thicker, the next tends to be thinner, and vice versa. The pearl pictured in the study contains 2,615 finely matched layers of nacre, deposited over 548 days.

“These thin, smooth layers of nacre look a little like bed sheets, with organic matter in between,” Hovden said. “There’s interaction between each layer, and we hypothesize that that interaction is what enables the system to correct as it goes along.”

The team also uncovered details about how the interaction between layers works. A mathematical analysis of the pearl’s layers shows that they follow a phenomenon known as “1/f noise,” where a series of events that seem to be random are connected, with each new event influenced by the one before it. 1/f noise has been shown to govern a wide variety of natural and human-made processes including seismic activity, economic markets, electricity, physics and even classical music.

“When you roll dice, for example, every roll is completely independent and disconnected from every other roll. But 1/f noise is different in that each event is linked,” Hovden said. “We can’t predict it, but we can see a structure in the chaos. And within that structure are complex mechanisms that enable a pearl’s thousands of layers of nacre to coalesce toward order and precision.”
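The dice-versus-1/f distinction Hovden draws can be made concrete in a few lines of Python. This is my own illustration (using the Voss-McCartney approximation of pink noise, not the pearl data): compare how much each sample "remembers" the previous one in independent rolls versus a 1/f-like series.

```python
import random

def pink_noise(n, octaves=8, seed=0):
    """Approximate 1/f ('pink') noise with the Voss-McCartney trick:
    sum several random sources, each refreshed half as often as the last,
    so every new sample stays linked to the ones before it."""
    rng = random.Random(seed)
    sources = [rng.uniform(-1, 1) for _ in range(octaves)]
    out = []
    for i in range(n):
        for j in range(octaves):
            if i % (1 << j) == 0:       # source j refreshes every 2**j steps
                sources[j] = rng.uniform(-1, 1)
        out.append(sum(sources))
    return out

def lag1_autocorr(x):
    """Correlation between each sample and the next one."""
    m = sum(x) / len(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

rng = random.Random(1)
white = [rng.uniform(-1, 1) for _ in range(4096)]   # independent "dice rolls"
pink = pink_noise(4096)

print(lag1_autocorr(white))   # near zero: each roll forgets the last
print(lag1_autocorr(pink))    # strongly positive: each event is linked
```

The white series comes out essentially uncorrelated, while the 1/f-like series shows the "structure in the chaos" Hovden describes.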

The team found that pearls lack true long-range order—the kind of carefully planned symmetry that keeps the hundreds of layers in brick buildings consistent. Instead, pearls exhibit medium-range order, maintaining symmetry for around 20 layers at a time. This is enough to maintain consistency and durability over the thousands of layers that make up a pearl.

The team gathered their observations by studying Akoya “keshi” pearls, produced by the Pinctada imbricata fucata oyster near the Eastern shoreline of Australia. They selected these particular pearls, which measure around 50 millimeters in diameter, because they form naturally, as opposed to bead-cultured pearls, which have an artificial center. Each pearl was cut with a diamond wire saw into sections measuring three to five millimeters in diameter, then polished and examined under an electron microscope.

Hovden says the study’s findings could help inform next-generation materials with precisely layered nanoscale architecture.

“When we build something like a brick building, we can build in periodicity through careful planning and measuring and templating,” he said. “Mollusks can achieve similar results on the nanoscale by using a different strategy. So we have a lot to learn from them, and that knowledge could help us make stronger, lighter materials in the future.”

Here’s a link to and a citation for the paper,

The mesoscale order of nacreous pearls by Jiseok Gim, Alden Koch, Laura M. Otter, Benjamin H. Savitzky, Sveinung Erland, Lara A. Estroff, Dorrit E. Jacob, and Robert Hovden. PNAS vol. 118 no. 42 e2107477118 DOI: https://doi.org/10.1073/pnas.2107477118 Published in issue October 19, 2021 Published online October 18, 2021

This paper appears to be open access.

Memristors with better mimicry of synapses

It seems to me it’s been quite a while since I’ve stumbled across a memristor story from the University of Michigan but it was worth waiting for. (Much of the research around memristors has to do with their potential application in neuromorphic (brainlike) computers.) From a December 17, 2018 news item on ScienceDaily,

A new electronic device developed at the University of Michigan can directly model the behaviors of a synapse, which is a connection between two neurons.

For the first time, the way that neurons share or compete for resources can be explored in hardware without the need for complicated circuits.

“Neuroscientists have argued that competition and cooperation behaviors among synapses are very important. Our new memristive devices allow us to implement a faithful model of these behaviors in a solid-state system,” said Wei Lu, U-M professor of electrical and computer engineering and senior author of the study in Nature Materials.

A December 17, 2018 University of Michigan news release (also on EurekAlert), which originated the news item, provides an explanation of memristors and their ‘similarity’ to synapses while providing more details about this latest research,

Memristors are electrical resistors with memory–advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. They could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning.

The memristor is a good model for a synapse. It mimics the way that the connections between neurons strengthen or weaken when signals pass through them. But the changes in conductance typically come from changes in the shape of the channels of conductive material within the memristor. These channels–and the memristor’s ability to conduct electricity–could not be precisely controlled in previous devices.

Now, the U-M team has made a memristor in which they have better command of the conducting pathways. They developed a new material out of the semiconductor molybdenum disulfide–a “two-dimensional” material that can be peeled into layers just a few atoms thick. Lu’s team injected lithium ions into the gaps between molybdenum disulfide layers.

They found that if there are enough lithium ions present, the molybdenum disulfide transforms its lattice structure, enabling electrons to run through the film easily as if it were a metal. But in areas with too few lithium ions, the molybdenum disulfide restores its original lattice structure and becomes a semiconductor, and electrical signals have a hard time getting through.

The lithium ions are easy to rearrange within the layer by sliding them with an electric field. This changes the size of the regions that conduct electricity little by little and thereby controls the device’s conductance.

“Because we change the ‘bulk’ properties of the film, the conductance change is much more gradual and much more controllable,” Lu said.
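The gradual, controllable change Lu describes can be caricatured in code. This toy model is my own (made-up numbers, a linear phase-to-conductance rule, not the team's actual device physics): each voltage pulse slides some lithium ions and moves only a small fraction of the film between phases.

```python
class BulkMemristor:
    """Toy 'bulk' memristor: conductance scales with the fraction of the
    film in the metallic phase, and each voltage pulse nudges that
    fraction by a small, fixed step (an illustrative assumption)."""

    def __init__(self, g_min=1e-6, g_max=1e-4):
        self.g_min, self.g_max = g_min, g_max
        self.metallic_fraction = 0.5       # portion of film in the metallic phase

    @property
    def conductance(self):
        # conductance follows the bulk phase fraction linearly
        return self.g_min + self.metallic_fraction * (self.g_max - self.g_min)

    def pulse(self, volts, rate=0.02):
        """A pulse slides ions; the sign of the voltage sets the direction."""
        self.metallic_fraction = min(1.0, max(0.0,
            self.metallic_fraction + rate * volts))

m = BulkMemristor()
g0 = m.conductance
for _ in range(10):           # ten small potentiating pulses
    m.pulse(+1.0)
print(g0, m.conductance)      # conductance rises in small, even steps
```

Contrast this with filament-style devices, where conductance can jump abruptly as a conducting channel forms or breaks.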

In addition to making the devices behave better, the layered structure enabled Lu’s team to link multiple memristors together through shared lithium ions–creating a kind of connection that is also found in brains. A single neuron’s dendrite, or its signal-receiving end, may have several synapses connecting it to the signaling arms of other neurons. Lu compares the availability of lithium ions to that of a protein that enables synapses to grow.

If the growth of one synapse releases these proteins, called plasticity-related proteins, other synapses nearby can also grow–this is cooperation. Neuroscientists have argued that cooperation between synapses helps to rapidly form vivid memories that last for decades and create associative memories, like a scent that reminds you of your grandmother’s house, for example. If the protein is scarce, one synapse will grow at the expense of the other–and this competition pares down our brains’ connections and keeps them from exploding with signals.

Lu’s team was able to show these phenomena directly using their memristor devices. In the competition scenario, lithium ions were drained away from one side of the device. The side with the lithium ions increased its conductance, emulating the growth, and the conductance of the device with little lithium was stunted.

In a cooperation scenario, they made a memristor network with four devices that can exchange lithium ions, and then siphoned some lithium ions from one device out to the others. In this case, not only could the lithium donor increase its conductance–the other three devices could too, although their signals weren’t as strong.

Lu’s team is currently building networks of memristors like these to explore their potential for neuromorphic computing, which mimics the circuitry of the brain.

Here’s a link to and a citation for the paper,

Ionic modulation and ionic coupling effects in MoS2 devices for neuromorphic computing by Xiaojian Zhu, Da Li, Xiaogan Liang, & Wei D. Lu. Nature Materials (2018) DOI: https://doi.org/10.1038/s41563-018-0248-5 Published 17 December 2018

This paper is behind a paywall.

The researchers have made images illustrating their work available,

A schematic of the molybdenum disulfide layers with lithium ions between them. On the right, the simplified inset shows how the molybdenum disulfide changes its atom arrangements in the presence and absence of the lithium atoms, between a metal (1T’ phase) and semiconductor (2H phase), respectively. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.

A diagram of a synapse receiving a signal from one of the connecting neurons. This signal activates the generation of plasticity-related proteins (PRPs), which help a synapse to grow. They can migrate to other synapses, which enables multiple synapses to grow at once. The new device is the first to mimic this process directly, without the need for software or complicated circuits. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.

An electron microscope image showing the rectangular gold (Au) electrodes representing signalling neurons and the rounded electrode representing the receiving neuron. The material of molybdenum disulfide layered with lithium connects the electrodes, enabling the simulation of cooperative growth among synapses. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.

That’s all folks.

Bringing memristors to the masses and cutting down on energy use

One of my earliest posts featuring memristors (May 9, 2008) focused on their potential for energy savings but since then most of my postings feature research into their application in the field of neuromorphic (brainlike) computing. (For a description and abbreviated history of the memristor go to this page on my Nanotech Mysteries Wiki.)

In a sense this July 30, 2018 news item on Nanowerk is a return to the beginning,

A new way of arranging advanced computer components called memristors on a chip could enable them to be used for general computing, which could cut energy consumption by a factor of 100.

This would improve performance in low power environments such as smartphones or make for more efficient supercomputers, says a University of Michigan researcher.

“Historically, the semiconductor industry has improved performance by making devices faster. But although the processors and memories are very fast, they can’t be efficient because they have to wait for data to come in and out,” said Wei Lu, U-M professor of electrical and computer engineering and co-founder of memristor startup Crossbar Inc.

Memristors might be the answer. Named as a portmanteau of memory and resistor, they can be programmed to have different resistance states–meaning they store information as resistance levels. These circuit elements enable memory and processing in the same device, cutting out the data transfer bottleneck experienced by conventional computers in which the memory is separate from the processor.

A July 30, 2018 University of Michigan news release (also on EurekAlert), which originated the news item, expands on the theme,

… unlike ordinary bits, which are 1 or 0, memristors can have resistances that are on a continuum. Some applications, such as computing that mimics the brain (neuromorphic), take advantage of the analog nature of memristors. But for ordinary computing, trying to differentiate among small variations in the current passing through a memristor device is not precise enough for numerical calculations.

Lu and his colleagues got around this problem by digitizing the current outputs—defining current ranges as specific bit values (i.e., 0 or 1). The team was also able to map large mathematical problems into smaller blocks within the array, improving the efficiency and flexibility of the system.
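The digitizing step can be pictured with a tiny sketch (mine, with an assumed unit current, not the team's actual read circuit): snap each measured analog current to the nearest discrete level, so small analog variations all read as the same bit value.

```python
def digitize_current(i_measured, i_unit):
    """Map a measured analog column current to a discrete bit value by
    rounding to the nearest multiple of an assumed unit current i_unit.
    Any current inside a given range reads as the same value."""
    return round(i_measured / i_unit)

# currents of 0.93 and 1.06 microamps both read as the bit value 1
print(digitize_current(0.93e-6, 1e-6), digitize_current(1.06e-6, 1e-6))
```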

Computers with these new blocks, which the researchers call “memory-processing units,” could be particularly useful for implementing machine learning and artificial intelligence algorithms. They are also well suited to tasks that are based on matrix operations, such as simulations used for weather prediction. The simplest mathematical matrices, akin to tables with rows and columns of numbers, can map directly onto the grid of memristors.

The memristor array situated on a circuit board. Credit: Mohammed Zidan, Nanoelectronics group, University of Michigan.

Once the memristors are set to represent the numbers, operations that multiply and sum the rows and columns can be taken care of simultaneously, with a set of voltage pulses along the rows. The current measured at the end of each column contains the answers. A typical processor, in contrast, would have to read the value from each cell of the matrix, perform multiplication, and then sum up each column in series.

“We get the multiplication and addition in one step. It’s taken care of through physical laws. We don’t need to manually multiply and sum in a processor,” Lu said.
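In software terms, the physics Lu describes amounts to a matrix-vector multiply carried out by Ohm's law and Kirchhoff's current law. Here is a minimal emulation (my own sketch, with arbitrary conductance and voltage values):

```python
def crossbar_multiply(G, V):
    """Emulate one analog step of a memristor crossbar: voltages V drive
    the rows, each device passes current I = G * V (Ohm's law), and each
    column wire sums its devices' currents (Kirchhoff's current law).
    The column currents are the product G-transpose times V, obtained in
    hardware without any explicit multiply-and-sum loop."""
    rows, cols = len(G), len(G[0])
    return [sum(G[i][j] * V[i] for i in range(rows)) for j in range(cols)]

# conductances (siemens) encode the matrix; row voltages encode the vector
G = [[1.0, 0.5],
     [0.25, 2.0],
     [0.5, 1.0]]
V = [0.2, 0.1, 0.3]          # volts applied to the three rows
out = crossbar_multiply(G, V)
print(out)                   # column currents: the whole product in one "step"
```

A conventional processor would loop over every cell; here the loop only exists because software has to emulate what the crossbar's wiring does all at once.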

His team chose to solve partial differential equations as a test for a 32×32 memristor array—which Lu imagines as just one block of a future system. These equations, including those behind weather forecasting, underpin many problems in science and engineering but are very challenging to solve. The difficulty comes from the complicated forms and multiple variables needed to model physical phenomena.

When solving partial differential equations exactly is impossible, solving them approximately can require supercomputers. These problems often involve very large matrices of data, so the memory-processor communication bottleneck is neatly solved with a memristor array. The equations Lu’s team used in their demonstration simulated a plasma reactor, such as those used for integrated circuit fabrication.

This work is described in a study, “A general memristor-based partial differential equation solver,” published in the journal Nature Electronics.

It was supported by the Defense Advanced Research Projects Agency (DARPA) (grant no. HR0011-17-2-0018) and by the National Science Foundation (NSF) (grant no. CCF-1617315).

Here’s a link and a citation for the paper,

A general memristor-based partial differential equation solver by Mohammed A. Zidan, YeonJoo Jeong, Jihang Lee, Bing Chen, Shuo Huang, Mark J. Kushner & Wei D. Lu. Nature Electronics volume 1, pages 411–420 (2018) DOI: https://doi.org/10.1038/s41928-018-0100-6 Published: 13 July 2018

This paper is behind a paywall.

For the curious, Dr. Lu’s startup company, Crossbar, can be found here.

Leftover 2017 memristor news bits

I have two bits of news: one from October 2017 about using light to control a memristor’s learning properties and one from December 2017 about memristors and neural networks.

Shining a light on the memristor

Michael Berger wrote an October 30, 2017 Nanowerk Spotlight article about some of the latest work concerning memristors and light,

Memristors – or resistive memory – are nanoelectronic devices that are very promising components for next generation memory and computing devices. They are two-terminal electric elements similar to a conventional resistor – however, the electric resistance in a memristor is dependent on the charge passing through it, which means that its conductance can be precisely modulated by charge or flux through it. Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function).

In this sense, a memristor is similar to a synapse in the human brain because it exhibits the same switching characteristics, i.e. it is able, with a high level of plasticity, to modify the efficiency of signal transfer between neurons under the influence of the transfer itself. That’s why researchers are hopeful to use memristors for the fabrication of electronic synapses for neuromorphic (i.e. brain-like) computing that mimics some of the aspects of learning and computation in human brains.

Human brains may be slow at pure number crunching but they are excellent at handling fast dynamic sensory information such as image and voice recognition. Walking is something that we take for granted but this is quite challenging for robots, especially over uneven terrain.

“Memristors present an opportunity to make new types of computers that are different from existing von Neumann architectures, which traditional computers are based upon,” Dr Neil T. Kemp, a Lecturer in Physics at the University of Hull [UK], tells Nanowerk. “Our team at the University of Hull is focussed on making memristor devices dynamically reconfigurable and adaptive – we believe this is the route to making a new generation of artificial intelligence systems that are smarter and can exhibit complex behavior. Such systems would also have the advantage of memristors, high density integration and lower power usage, so these systems would be more lightweight, portable and not need re-charging so often – which is something really needed for robots etc.”

In their new paper in Nanoscale (“Reversible Optical Switching Memristors with Tunable STDP Synaptic Plasticity: A Route to Hierarchical Control in Artificial Intelligent Systems”), Kemp and his team demonstrate the ability to reversibly control the learning properties of memristors via optical means.

The reversibility is achieved by changing the polarization of light. The researchers have used this effect to demonstrate tuneable learning in a memristor. One way this is achieved is through something called Spike Timing Dependent Plasticity (STDP), which is an effect known to occur in human brains and is linked with sensory perception, spatial reasoning, language and conscious thought in the neocortex.

STDP learning is based upon differences in the arrival time of signals from two adjacent neurons. The University of Hull team has shown that they can modulate the synaptic plasticity via optical means which enables the devices to have tuneable learning.
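Here is what an STDP rule looks like in code, using the textbook exponential window (a generic form for illustration, not the Hull group's measured device response): the weight change depends only on the timing difference between the two spikes.

```python
import math

def stdp_update(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Classic exponential STDP window. dt_ms > 0 means the pre-synaptic
    spike arrived before the post-synaptic one, so the synapse strengthens;
    dt_ms < 0 means it arrived after, so the synapse weakens. The effect
    decays the further apart the two spikes are in time."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

w = 0.5                       # synaptic weight (a memristor conductance stand-in)
for dt in (5.0, 10.0, -5.0):  # spike-timing differences in milliseconds
    w += stdp_update(dt)
print(w)
```

In the Hull team's devices, the shape of this window is what the polarization of light tunes, giving the "tuneable learning" described above.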

“Our research findings are important because it demonstrates that light can be used to control the learning properties of a memristor,” Kemp points out. “We have shown that light can be used in a reversible manner to change the connection strength (or conductivity) of artificial memristor synapses and as well control their ability to forget i.e. we can dynamically change device to have short-term or long-term memory.”

According to the team, there are many potential applications, such as adaptive electronic circuits controllable via light, or in more complex systems, such as neuromorphic computing, the development of optically reconfigurable neural networks.

Having optically controllable memristors can also facilitate the implementation of hierarchical control in larger artificial brain-like systems, whereby some of the key processes that are carried out by biological molecules in human brains can be emulated in solid-state devices through patterning with light.

Some of these processes include synaptic pruning, conversion of short term memory to long term memory, erasing of certain memories that are no longer needed or changing the sensitivity of synapses to be more adept at learning new information.

“The ability to control this dynamically, both spatially and temporally, is particularly interesting since it would allow neural networks to be reconfigurable on the fly through either spatial patterning or by adjusting the intensity of the light source,” notes Kemp.

Currently, the devices are more suited to neuromorphic computing applications, which do not need to be as fast. Optical control of memristors opens the route to dynamically tuneable and reprogrammable synaptic circuits, as well as the ability (via optical patterning) to have hierarchical control in larger and more complex artificial intelligent systems.

“Artificial Intelligence is really starting to come on strong in many areas, especially in the areas of voice/image recognition and autonomous systems – we could even say that this is the next revolution, similarly to what the industrial revolution was to farming and production processes,” concludes Kemp. “There are many challenges to overcome though. …

That excerpt should give you the gist of it; for those who need more information, there’s Berger’s full article. Here’s a link to and a citation for the paper,

Reversible optical switching memristors with tunable STDP synaptic plasticity: a route to hierarchical control in artificial intelligent systems by Ayoub H. Jaafar, Robert J. Gray, Emanuele Verrelli, Mary O’Neill, Stephen. M. Kelly, and Neil T. Kemp. Nanoscale, 2017, 9, 17091–17098 DOI: 10.1039/C7NR06138B First published on 24 Oct 2017

This paper is behind a paywall.

The memristor and the neural network

It would seem machine learning could experience a significant upgrade if the work in Wei Lu’s University of Michigan laboratory can be scaled for general use. From a December 22, 2017 news item on ScienceDaily,

A new type of neural network made with memristors can dramatically improve the efficiency of teaching machines to think like humans.

The network, called a reservoir computing system, could predict words before they are said during conversation, and help predict future outcomes based on the present.

The research team that created the reservoir computing system, led by Wei Lu, professor of electrical engineering and computer science at the University of Michigan, recently published their work in Nature Communications.

A December 19, 2017 University of Michigan news release (also on EurekAlert) by Dan Newman, which originated the news item, expands on the theme,

Reservoir computing systems, which improve on a typical neural network’s capacity and reduce the required training time, have been created in the past with larger optical components. However, the U-M group created their system using memristors, which require less space and can be integrated more easily into existing silicon-based electronics.

Memristors are a special type of resistive device that can both perform logic and store data. This contrasts with typical computer systems, where processors perform logic separate from memory modules. In this study, Lu’s team used a special memristor that memorizes events only in the near history.

Inspired by brains, neural networks are composed of neurons, or nodes, and synapses, the connections between nodes.

To train a neural network for a task, the network takes in a large set of questions and the answers to those questions. In this process, called supervised learning, the connections between nodes are weighted more heavily or lightly to minimize the amount of error in achieving the correct answer.

Once trained, a neural network can then be tested without knowing the answer. For example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.

“A lot of times, it takes days or months to train a network,” says Lu. “It is very expensive.”

Image recognition is also a relatively simple problem, as it doesn’t require any information apart from a static image. More complex tasks, such as speech recognition, can depend highly on context and require neural networks to have knowledge of what has just occurred, or what has just been said.

“When transcribing speech to text or translating languages, a word’s meaning and even pronunciation will differ depending on the previous syllables,” says Lu.

This requires a recurrent neural network, which incorporates loops within the network that give the network a memory effect. However, training these recurrent neural networks is especially expensive, Lu says.

Reservoir computing systems built with memristors, however, can skip most of the expensive training process and still provide the network the capability to remember. This is because the most critical component of the system – the reservoir – does not require training.

When a set of data is inputted into the reservoir, the reservoir identifies important time-related features of the data, and hands it off in a simpler format to a second network. This second network then only needs training like simpler neural networks, changing weights of the features and outputs that the first network passed on until it achieves an acceptable level of error.
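The division of labor described here (a fixed dynamical reservoir plus a small trained readout) can be sketched in plain Python. This is an echo-state-style software analogue with invented sizes, weights and learning rate, not the memristor hardware itself:

```python
import math
import random

rng = random.Random(0)
N = 30                                    # reservoir size (arbitrary choice)
# fixed random weights: small enough that the reservoir's memory fades stably
w_in = [rng.uniform(-0.5, 0.5) for _ in range(N)]
w_rec = [[rng.uniform(-0.5, 0.5) if rng.random() < 0.2 else 0.0
          for _ in range(N)] for _ in range(N)]

def run_reservoir(inputs):
    """Drive the fixed random reservoir and record its state at every step.
    Nothing in here is ever trained -- that is the point of the approach."""
    x = [0.0] * N
    states = []
    for u in inputs:
        x = [math.tanh(sum(w_rec[i][j] * x[j] for j in range(N)) + w_in[i] * u)
             for i in range(N)]
        states.append(x)
    return states

# toy temporal task: predict a sine wave one step ahead
u = [math.sin(0.3 * t) for t in range(300)]
target = u[1:] + [0.0]
states = run_reservoir(u)

# train ONLY the linear readout, with the simple LMS (delta) rule
w_out = [0.0] * N
lr = 0.01
epoch_errors = []
for _ in range(50):
    sq_err = 0.0
    for x, y in zip(states, target):
        pred = sum(w * xi for w, xi in zip(w_out, x))
        e = y - pred
        sq_err += e * e
        for i in range(N):
            w_out[i] += lr * e * x[i]
    epoch_errors.append(sq_err)

print(epoch_errors[0], epoch_errors[-1])  # error falls as the readout learns
```

Only the readout weights change during training, which is why reservoir systems can skip most of the expensive training a full recurrent network would need.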

IMAGE: Schematic of a reservoir computing system, showing the reservoir with internal dynamics and the simpler output. Only the simpler output needs to be trained, allowing for quicker and lower-cost training. Courtesy Wei Lu.

“The beauty of reservoir computing is that while we design it, we don’t have to train it,” says Lu.

The team proved the reservoir computing concept using a test of handwriting recognition, a common benchmark among neural networks. Numerals were broken up into rows of pixels and fed into the computer as voltages, like Morse code: zero volts for a dark pixel and a little over one volt for a white pixel.

Using only 88 memristors as nodes to identify handwritten versions of numerals, compared to a conventional network that would require thousands of nodes for the task, the reservoir achieved 91% accuracy.
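
The voltage encoding just described is simple to illustrate. The function below is a hypothetical sketch of the scheme (pixel values mapped to voltage levels); the exact white-pixel voltage is an assumption, not the team's actual drive electronics:

```python
# Map a row of binary pixels to the voltage pulses fed to the memristor array:
# 0 V for a dark pixel, a little over 1 V for a white pixel (per the article).
def row_to_voltages(pixel_row, v_white=1.1, v_dark=0.0):
    return [v_white if p else v_dark for p in pixel_row]

# One row of a crude numeral: dark edges, a white stroke in the middle.
voltages = row_to_voltages([0, 0, 1, 1, 0, 0])
# voltages -> [0.0, 0.0, 1.1, 1.1, 0.0, 0.0]
```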

Reservoir computing systems are especially adept at handling data that varies with time, like a stream of data or words, or a function depending on past results.

To demonstrate this, the team tested a complex function that depended on multiple past results, which is common in engineering fields. The reservoir computing system was able to model the complex function with minimal error.

Lu plans on exploring two future paths with this research: speech recognition and predictive analysis.

“We can make predictions on natural spoken language, so you don’t even have to say the full word,” explains Lu.

“We could actually predict what you plan to say next.”

In predictive analysis, Lu hopes to use the system to take in signals with noise, like static from far-off radio stations, and produce a cleaner stream of data. “It could also predict and generate an output signal even if the input stopped,” he says.

IMAGE: Wei Lu, Professor of Electrical Engineering & Computer Science at the University of Michigan, holds a memristor he created. Photo: Marcin Szczepanski.

The work was published in Nature Communications in the article, “Reservoir computing using dynamic memristors for temporal information processing”, with authors Chao Du, Fuxi Cai, Mohammed Zidan, Wen Ma, Seung Hwan Lee, and Prof. Wei Lu.

The research is part of a $6.9 million DARPA [US Defense Advanced Research Projects Agency] project, called “Sparse Adaptive Local Learning for Sensing and Analytics [also known as SALLSA],” that aims to build a computer chip based on self-organizing, adaptive neural networks. The memristor networks are fabricated at Michigan’s Lurie Nanofabrication Facility.

Lu and his team previously used memristors in implementing “sparse coding,” which used a 32-by-32 array of memristors to efficiently analyze and recreate images.

Here’s a link to and a citation for the paper,

Reservoir computing using dynamic memristors for temporal information processing by Chao Du, Fuxi Cai, Mohammed A. Zidan, Wen Ma, Seung Hwan Lee & Wei D. Lu. Nature Communications 8, Article number: 2204 (2017) doi:10.1038/s41467-017-02337-y Published online: 19 December 2017

This is an open access paper.

In scientific race US sees China coming up from rear

Sometimes it seems as if scientific research is like a race with everyone competing for first place. As in most sports, there are multiple competitions for various sub-groups but only one important race. The US has held the lead position for decades, although always with some anxiety. These days the anxiety is focused on China. A June 15, 2017 news item on ScienceDaily suggests that US dominance is threatened in at least one area of research—the biomedical sector,

American scientific teams still publish significantly more biomedical research discoveries than teams from any other country, a new study shows, and the U.S. still leads the world in research and development expenditures.

But American dominance is slowly shrinking, the analysis finds, as China’s skyrocketing investment in science over the last two decades begins to pay off. Chinese biomedical research teams now rank fourth in the world for total number of new discoveries published in six top-tier journals, and the country spent three-quarters of what the U.S. spent on research and development during 2015.

Meanwhile, the analysis shows, scientists from the U.S. and other countries increasingly make discoveries and advancements as part of teams that involve researchers from around the world.

A June 15, 2017 Michigan Medicine University of Michigan news release (also on EurekAlert), which originated the news item, details the research team’s insights,

The last 15 years have ushered in an era of “team science” as research funding in the U.S., Great Britain and other European countries, as well as Canada and Australia, stagnated. The number of authors per paper has also grown over time. For example, in 2000 only two percent of the research papers the new study looked at included 21 or more authors — a number that increased to 12.5 percent in 2015.

The new findings, published in JCI Insight by a team of University of Michigan researchers, come at a critical time for the debate over the future of U.S. federal research funding. The study is based on a careful analysis of original research papers published in six top-tier and four mid-tier journals from 2000 to 2015, in addition to data on R&D investment from those same years.

The study builds on other work that has also warned of America’s slipping status in the world of science and medical research, and the resulting impact on the next generation of aspiring scientists.

“It’s time for U.S. policy-makers to reflect and decide whether the year-to-year uncertainty in National Institutes of Health budget and the proposed cuts are in our societal and national best interest,” says Bishr Omary, M.D., Ph.D., senior author of the new data-supported opinion piece and chief scientific officer of Michigan Medicine, U-M’s academic medical center. “If we continue on the path we’re on, it will be harder to maintain our lead and, even more importantly, we could be disenchanting the next generation of bright and passionate biomedical scientists who see a limited future in pursuing a scientist or physician-investigator career.”

The analysis charts South Korea’s entry into the top 10 countries for publications, as well as China’s leap from outside the top 10 in 2000 to fourth place in 2015. They also track the major increases in support for research in South Korea and Singapore since the start of the 21st Century.

Meticulous tracking

First author of the study, U-M informationist Marisa Conte, and Omary co-led a team that looked carefully at the currency of modern science: peer-reviewed basic science and clinical research papers describing new findings, published in journals with long histories of accepting among the world’s most significant discoveries.

They reviewed every issue of six top-tier international journals (JAMA, Lancet, the New England Journal of Medicine, Cell, Nature and Science), and four mid-ranking journals (British Medical Journal, JAMA Internal Medicine, Journal of Cell Science, FASEB Journal), chosen to represent the clinical and basic science aspects of research.

The analysis included only papers that reported new results from basic research experiments, translational studies, clinical trials, meta-analyses, and studies of disease outcomes. Author affiliations for corresponding authors and all other authors were recorded by country.

The rise in global cooperation is striking. In 2000, 25 percent of papers in the six top-tier journals were by teams that included researchers from at least two countries. In 2015, that figure was closer to 50 percent. The increasing need for multidisciplinary approaches to make major advances, coupled with the advances of Internet-based collaboration tools, likely have something to do with this, Omary says.

The authors, who also include Santiago Schnell, Ph.D. and Jing Liu, Ph.D., note that part of their group’s interest in doing the study sprang from their hypothesis that a flat NIH budget is likely to have negative consequences but they wanted to gather data to test their hypothesis.

They also observed what appears to be an increasing number of Chinese-born scientists who trained in the U.S. returning to China afterward, where once most of them would have sought to stay in the U.S. In addition, Singapore has been able to recruit several top-notch U.S. and other international scientists thanks to its marked increase in R&D investment.

The same trends appear to be happening in Great Britain, Australia, Canada, France, Germany and other countries the authors studied – where research investment has stayed consistent when measured as a percentage of the U.S. total over the last 15 years.

The authors note that their study is based on data up to 2015, and that in the current 2017 federal fiscal year, funding for NIH has increased thanks to bipartisan Congressional appropriations. The NIH contributes most of the federal support for medical and basic biomedical research in the U.S. But discussion of cuts to research funding that would hinder many federal agencies is in the air during the current debates over the 2018 budget. Meanwhile, Chinese R&D spending is projected to surpass the U.S. total by 2022.

“Our analysis, albeit limited to a small number of representative journals, supports the importance of financial investment in research,” Omary says. “I would still strongly encourage any child interested in science to pursue their dream and passion, but I hope that our current and future investment in NIH and other federal research support agencies will rise above any branch of government to help our next generation reach their potential and dreams.”

Here’s a link to and a citation for the paper,

Globalization and changing trends of biomedical research output by Marisa L. Conte, Jing Liu, Santiago Schnell, and M. Bishr Omary. JCI Insight. 2017;2(12):e95206 doi:10.1172/jci.insight.95206 Volume 2, Issue 12 (June 15, 2017)

Copyright © 2017, American Society for Clinical Investigation

This paper is open access.

The notion of a race and looking back to see who, if anyone, is gaining on you reminded me of a local piece of sports lore, the Roger Bannister-John Landy ‘Miracle Mile’. In the run-up to the 1954 Commonwealth Games held in Vancouver, Canada, two runners were known to have broken the 4-minute mile (previously thought to be impossible), and their meeting was considered historic. Here’s more from the miraclemile1954.com website,

On August 7, 1954 during the British Empire and Commonwealth Games in Vancouver, B.C., England’s Roger Bannister and Australian John Landy met for the first time in the one mile run at the newly constructed Empire Stadium.

Both men had broken the four minute barrier previously that year. Bannister was the first to break the mark with a time of 3:59.4 on May 6th in Oxford, England. Subsequently, on June 21st in Turku, Finland, John Landy became the new record holder with an official time of 3:58.

The world watched eagerly as both men approached the starting blocks. As 35,000 enthusiastic fans looked on, no one knew what would take place on that historic day.

Promoted as “The Mile of the Century”, it would later be known as the “Miracle Mile”.

With only 90 yards to go in one of the world’s most memorable races, John Landy glanced over his left shoulder to check his opponent’s position. At that instant Bannister streaked by him to victory in a Commonwealth record time of 3:58.8. Landy’s second place finish in 3:59.6 marked the first time the four minute mile had been broken by two men in the same race.

The website hosts an image of the moment memorialized in bronze when Landy looks to his left as Bannister passes him on his right,

Statue: Jack Harman. Photo: Paul Joseph from Vancouver, BC, Canada – roger bannister running the four minute mile. Uploaded by Skeezix1000, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=9801121

Getting back to science, I wonder if some day we’ll stop thinking of it as a race where, inevitably, there’s one winner and everyone else loses and find a new metaphor.

Dr. Wei Lu and bio-inspired ‘memristor’ chips

It’s been a while since I’ve featured Dr. Wei Lu’s work here; this April 15, 2010 posting features Lu’s most relevant previous work. Here’s his latest ‘memristor’ work, from a May 22, 2017 news item on Nanowerk (Note: A link has been removed),

Inspired by how mammals see, a new “memristor” computer circuit prototype at the University of Michigan has the potential to process complex data, such as images and video, orders of magnitude faster and with much less power than today’s most advanced systems.

Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, U-M professor of electrical engineering and computer science. Lu is lead author of a paper on the work published in the current issue of Nature Nanotechnology (“Sparse coding with memristor networks”).

Lu’s next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and his colleagues demonstrate an algorithm that relies on a technique called “sparse coding” to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos.

A May 22, 2017 University of Michigan news release (also on EurekAlert), which originated the news item, provides more information about memristors and about the research,

Memristors are electrical resistors with memory—advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. In a conventional computer, logic and memory functions are located at different parts of the circuit.
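
A toy software model helps make the "resistor with memory" idea concrete. The linear drift rate and conductance bounds below are illustrative assumptions, not the physical model of the devices in the paper:

```python
# Toy memristor model: conductance g drifts with the history of applied voltage,
# so the current it passes depends on what the device has seen before.
def step(g, v, dt=1e-3, k=5.0, g_min=0.01, g_max=1.0):
    g = min(g_max, max(g_min, g + k * v * dt))  # state follows voltage history
    return g, g * v                             # new state, instantaneous current

g = 0.5
for v in [1.0] * 100:        # a train of identical positive pulses...
    g, i = step(g, v)
# ...ratchets the conductance up: the device has "remembered" its input.
```

Because the state variable `g` both stores history and sets the current, a single device does the storing and the processing at once, which is the efficiency argument Lu makes here.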

“The tasks we ask of today’s computers have grown in complexity,” Lu said. “In this ‘big data’ era, computers require costly, constant and slow communications between their processor and memory to retrieve large amounts of data. This makes them large, expensive and power-hungry.”

But like neural networks in a biological brain, networks of memristors can perform many operations at the same time, without having to move data around. As a result, they could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning. Memristors are good candidates for deep neural networks, a branch of machine learning, which trains computers to execute processes without being explicitly programmed to do so.

“We need our next-generation electronics to be able to quickly process complex data in a dynamic environment. You can’t just write a program to do that. Sometimes you don’t even have a pre-defined task,” Lu said. “To make our systems smarter, we need to find ways for them to process a lot of data more efficiently. Our approach to accomplish that is inspired by neuroscience.”

A mammal’s brain is able to generate sweeping, split-second impressions of what the eyes take in. One reason is that it can quickly recognize different arrangements of shapes. Humans do this using only a limited number of neurons that become active, Lu says. Both neuroscientists and computer scientists call the process “sparse coding.”

“When we take a look at a chair we will recognize it because its characteristics correspond to our stored mental picture of a chair,” Lu said. “Although not all chairs are the same and some may differ from a mental prototype that serves as a standard, each chair retains some of the key characteristics necessary for easy recognition. Basically, the object is correctly recognized the moment it is properly classified—when ‘stored’ in the appropriate category in our heads.”

Similarly, Lu’s electronic system is designed to detect the patterns very efficiently—and to use as few features as possible to describe the original input.

In our brains, different neurons recognize different patterns, Lu says.

“When we see an image, the neurons that recognize it will become more active,” he said. “The neurons will also compete with each other to naturally create an efficient representation. We’re implementing this approach in our electronic system.”

The researchers trained their system to learn a “dictionary” of images. Trained on a set of grayscale image patterns, their memristor network was able to reconstruct images of famous paintings and photos and other test patterns.
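
Sparse coding itself is easy to demonstrate in software. The sketch below uses iterative shrinkage-thresholding (ISTA) with a random dictionary, a standard algorithm chosen for illustration with hypothetical sizes and penalty; the paper's contribution is performing this kind of computation in memristor hardware:

```python
import numpy as np

# Represent an input using only a few "active" dictionary elements.
rng = np.random.default_rng(2)
n_features, n_atoms = 16, 32
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms

true_code = np.zeros(n_atoms)              # a genuinely sparse signal:
true_code[[3, 11, 27]] = [1.0, -0.8, 0.5]  # a mix of just three atoms
x = D @ true_code

a = np.zeros(n_atoms)
L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
lam = 0.01                                 # sparsity penalty
for _ in range(500):
    a = a + (D.T @ (x - D @ a)) / L        # reduce reconstruction error...
    a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # ...then prune

recon_err = np.linalg.norm(x - D @ a) / np.linalg.norm(x)
active = int(np.sum(np.abs(a) > 1e-3))     # how many elements stayed active
```

The competition Lu describes shows up in the soft-thresholding step: small, redundant activations are pruned away, leaving a compact representation that still reconstructs the input accurately.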

If their system can be scaled up, they expect to be able to process and analyze video in real time in a compact system that can be directly integrated with sensors or cameras.

The project is titled “Sparse Adaptive Local Learning for Sensing and Analytics.” Other collaborators are Zhengya Zhang and Michael Flynn of the U-M Department of Electrical Engineering and Computer Science, Garrett Kenyon of the Los Alamos National Lab and Christof Teuscher of Portland State University.

The work is part of a $6.9 million Unconventional Processing of Signals for Intelligent Data Exploitation project that aims to build a computer chip based on self-organizing, adaptive neural networks. It is funded by the [US] Defense Advanced Research Projects Agency [DARPA].

Here’s a link to and a citation for the paper,

Sparse coding with memristor networks by Patrick M. Sheridan, Fuxi Cai, Chao Du, Wen Ma, Zhengya Zhang, & Wei D. Lu. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.83 Published online 22 May 2017

This paper is behind a paywall.

For the interested, there are a number of postings featuring memristors here (just use ‘memristor’ as your search term in the blog search engine). You might also want to check out ‘neuromorphic engineering’ and ‘neuromorphic computing’ and ‘artificial brain’.

Patent Politics: a June 23, 2017 book launch at the Wilson Center (Washington, DC)

I received a June 12, 2017 notice (via email) from the Wilson Center (also known as the Woodrow Wilson International Center for Scholars) about a book examining patents and policies in the United States and in Europe and its upcoming launch,

Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe

Over the past thirty years, the world’s patent systems have experienced pressure from civil society like never before. From farmers to patient advocates, new voices are arguing that patents impact public health, economic inequality, morality—and democracy. These challenges, to domains that we usually consider technical and legal, may seem surprising. But in Patent Politics, Shobita Parthasarathy argues that patent systems have always been deeply political and social.

To demonstrate this, Parthasarathy takes readers through a particularly fierce and prolonged set of controversies over patents on life forms linked to important advances in biology and agriculture and potentially life-saving medicines. Comparing battles over patents on animals, human embryonic stem cells, human genes, and plants in the United States and Europe, she shows how political culture, ideology, and history shape patent system politics. Clashes over whose voices and which values matter in the patent system, as well as what counts as knowledge and whose expertise is important, look quite different in these two places. And through these debates, the United States and Europe are developing very different approaches to patent and innovation governance. Not just the first comprehensive look at the controversies swirling around biotechnology patents, Patent Politics is also the first in-depth analysis of the political underpinnings and implications of modern patent systems, and provides a timely analysis of how we can reform these systems around the world to maximize the public interest.

Join us on June 23 [2017] from 4-6 pm [elsewhere the time is listed as 4-7 pm] for a discussion on the role of the patent system in governing emerging technologies, on the launch of Shobita Parthasarathy’s Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe (University of Chicago Press, 2017).

You can find more information such as this on the Patent Politics event page,

Speakers

Keynote


  • Shobita Parthasarathy

    Fellow
    Associate Professor of Public Policy and Women’s Studies, and Director of the Science, Technology, and Public Policy Program, at University of Michigan

Moderator


  • Eleonore Pauwels

    Senior Program Associate and Director of Biology Collectives, Science and Technology Innovation Program
    Formerly European Commission, Directorate-General for Research and Technological Development, Directorate on Science, Economy and Society

Panelists


  • Daniel Sarewitz

    Co-Director, Consortium for Science, Policy & Outcomes
    Professor of Science and Society, School for the Future of Innovation in Society

  • Richard Harris

    Award-Winning Journalist, National Public Radio
    Author of “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions”

For those who cannot attend in person, there will be a live webcast. If you can be there in person, you can RSVP here (Note: The time frame for the event is listed in some places as 4-7 pm.) I cannot find any reason for the time frame disparity. My best guess is that the discussion is scheduled for two hours with a one hour reception afterwards for those who can attend in person.

Transparent silver

This March 21, 2017 news item on Nanowerk is the first I’ve heard of transparent silver; it’s usually transparent aluminum (Note: A link has been removed),

The thinnest, smoothest layer of silver that can survive air exposure has been laid down at the University of Michigan, and it could change the way touchscreens and flat or flexible displays are made (Advanced Materials, “High-performance Doped Silver Films: Overcoming Fundamental Material Limits for Nanophotonic Applications”).

It could also help improve computing power, affecting both the transfer of information within a silicon chip and the patterning of the chip itself through metamaterial superlenses.

A March 21, 2017 University of Michigan news release, which originated the news item, provides details about the research and features a mention of aluminum,

By combining the silver with a little bit of aluminum, the U-M researchers found that it was possible to produce exceptionally thin, smooth layers of silver that are resistant to tarnishing. They applied an anti-reflective coating to make one thin metal layer up to 92.4 percent transparent.

The team showed that the silver coating could guide light about 10 times as far as other metal waveguides—a property that could make it useful for faster computing. And they layered the silver films into a metamaterial hyperlens that could be used to create dense patterns with feature sizes a fraction of what is possible with ordinary ultraviolet methods, on silicon chips, for instance.

Screens of all stripes need transparent electrodes to control which pixels are lit up, but touchscreens are particularly dependent on them. A modern touch screen is made of a transparent conductive layer covered with a nonconductive layer. It senses electrical changes where a conductive object—such as a finger—is pressed against the screen.

“The transparent conductor market has been dominated to this day by one single material,” said L. Jay Guo, professor of electrical engineering and computer science.

This material, indium tin oxide, is projected to become expensive as demand for touch screens continues to grow; there are relatively few known sources of indium, Guo said.

“Before, it was very cheap. Now, the price is rising sharply,” he said.

The ultrathin film could make silver a worthy successor.

Usually, it’s impossible to make a continuous layer of silver less than 15 nanometers thick, or roughly 100 silver atoms. Silver has a tendency to cluster together in small islands rather than extend into an even coating, Guo said.

By adding about 6 percent aluminum, the researchers coaxed the metal into a film of less than half that thickness—seven nanometers. What’s more, when they exposed it to air, it didn’t immediately tarnish as pure silver films do. After several months, the film maintained its conductive properties and transparency. And it was firmly stuck on, whereas pure silver comes off glass with Scotch tape.

In addition to their potential to serve as transparent conductors for touch screens, the thin silver films offer two more tricks, both having to do with silver’s unparalleled ability to transport visible and infrared light waves along its surface. The light waves shrink and travel as so-called surface plasmon polaritons, showing up as oscillations in the concentration of electrons on the silver’s surface.

Those oscillations encode the frequency of the light, preserving it so that it can emerge on the other side. While optical fibers can’t scale down to the size of copper wires on today’s computer chips, plasmonic waveguides could allow information to travel in optical rather than electronic form for faster data transfer. As a waveguide, the smooth silver film could transport the surface plasmons over a centimeter—enough to travel across a computer chip.

Here’s a link to and a citation for the paper,

High-Performance Doped Silver Films: Overcoming Fundamental Material Limits for Nanophotonic Applications by Cheng Zhang, Nathaniel Kinsey, Long Chen, Chengang Ji, Mingjie Xu, Marcello Ferrera, Xiaoqing Pan, Vladimir M. Shalaev, Alexandra Boltasseva, and Jay Guo. Advanced Materials DOI: 10.1002/adma.201605177 Version of Record online: 20 MAR 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.