Category Archives: light

Leftover 2017 memristor news bits

I have two bits of news: one from October 2017 about using light to control a memristor’s learning properties and one from December 2017 about memristors and neural networks.

Shining a light on the memristor

Michael Berger wrote an October 30, 2017 Nanowerk Spotlight article about some of the latest work concerning memristors and light,

Memristors – or resistive memory – are nanoelectronic devices that are very promising components for next generation memory and computing devices. They are two-terminal electric elements similar to a conventional resistor – however, the electric resistance in a memristor is dependent on the charge passing through it; which means that its conductance can be precisely modulated by charge or flux through it. Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function).

In this sense, a memristor is similar to a synapse in the human brain because it exhibits the same switching characteristics, i.e. it is able, with a high level of plasticity, to modify the efficiency of signal transfer between neurons under the influence of the transfer itself. That’s why researchers are hopeful to use memristors for the fabrication of electronic synapses for neuromorphic (i.e. brain-like) computing that mimics some of the aspects of learning and computation in human brains.

Human brains may be slow at pure number crunching but they are excellent at handling fast dynamic sensory information such as image and voice recognition. Walking is something that we take for granted but this is quite challenging for robots, especially over uneven terrain.

“Memristors present an opportunity to make new types of computers that are different from existing von Neumann architectures, which traditional computers are based upon,” Dr Neil T. Kemp, a Lecturer in Physics at the University of Hull [UK], tells Nanowerk. “Our team at the University of Hull is focussed on making memristor devices dynamically reconfigurable and adaptive – we believe this is the route to making a new generation of artificial intelligence systems that are smarter and can exhibit complex behavior. Such systems would also have the advantage of memristors, high density integration and lower power usage, so these systems would be more lightweight, portable and not need re-charging so often – which is something really needed for robots etc.”

In their new paper in Nanoscale (“Reversible Optical Switching Memristors with Tunable STDP Synaptic Plasticity: A Route to Hierarchical Control in Artificial Intelligent Systems”), Kemp and his team demonstrate the ability to reversibly control the learning properties of memristors via optical means.

The reversibility is achieved by changing the polarization of light. The researchers have used this effect to demonstrate tuneable learning in a memristor. One way this is achieved is through something called Spike Timing Dependent Plasticity (STDP), which is an effect known to occur in human brains and is linked with sensory perception, spatial reasoning, language and conscious thought in the neocortex.

STDP learning is based upon differences in the arrival time of signals from two adjacent neurons. The University of Hull team has shown that they can modulate the synaptic plasticity via optical means which enables the devices to have tuneable learning.
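
For readers who want a feel for what an STDP rule looks like in the abstract, here is a minimal Python sketch of the standard pair-based rule with exponential timing windows; the amplitudes and time constants are invented for illustration and are not the Hull team’s device parameters. In the memristor devices described above, it is the shape of this learning window that the polarization of the light would tune.

```python
import math

# Pair-based STDP: the weight change depends on the time difference
# between a presynaptic and a postsynaptic spike.
# All constants below are illustrative, not taken from the paper.
A_PLUS, A_MINUS = 0.05, 0.06      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in milliseconds

def stdp_weight_change(delta_t_ms):
    """Return the synaptic weight change for a spike-time difference
    delta_t = t_post - t_pre (in milliseconds)."""
    if delta_t_ms > 0:    # pre fires before post -> strengthen (potentiation)
        return A_PLUS * math.exp(-delta_t_ms / TAU_PLUS)
    elif delta_t_ms < 0:  # post fires before pre -> weaken (depression)
        return -A_MINUS * math.exp(delta_t_ms / TAU_MINUS)
    return 0.0

# In an optically tuned memristor synapse, light would modulate the shape
# of this learning window (its amplitudes and time constants).
for dt in (-40, -10, -1, 1, 10, 40):
    print(f"delta_t = {dt:+4d} ms -> weight change {stdp_weight_change(dt):+.4f}")
```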

“Our research findings are important because it demonstrates that light can be used to control the learning properties of a memristor,” Kemp points out. “We have shown that light can be used in a reversible manner to change the connection strength (or conductivity) of artificial memristor synapses and as well control their ability to forget i.e. we can dynamically change device to have short-term or long-term memory.”

According to the team, there are many potential applications, such as adaptive electronic circuits controllable via light, or in more complex systems, such as neuromorphic computing, the development of optically reconfigurable neural networks.

Having optically controllable memristors can also facilitate the implementation of hierarchical control in larger artificial-brain like systems, whereby some of the key processes that are carried out by biological molecules in human brains can be emulated in solid-state devices through patterning with light.

Some of these processes include synaptic pruning, conversion of short term memory to long term memory, erasing of certain memories that are no longer needed or changing the sensitivity of synapses to be more adept at learning new information.

“The ability to control this dynamically, both spatially and temporally, is particularly interesting since it would allow neural networks to be reconfigurable on the fly through either spatial patterning or by adjusting the intensity of the light source,” notes Kemp.

Currently, the devices are more suited to neuromorphic computing applications, which do not need to be as fast. Optical control of memristors opens the route to dynamically tuneable and reprogrammable synaptic circuits as well as the ability (via optical patterning) to have hierarchical control in larger and more complex artificial intelligent systems.

“Artificial Intelligence is really starting to come on strong in many areas, especially in the areas of voice/image recognition and autonomous systems – we could even say that this is the next revolution, similarly to what the industrial revolution was to farming and production processes,” concludes Kemp. “There are many challenges to overcome though. …

That excerpt should give you the gist of Berger’s article; for those who want more detail, there’s the full article and, below, a link to and a citation for the paper,

Reversible optical switching memristors with tunable STDP synaptic plasticity: a route to hierarchical control in artificial intelligent systems by Ayoub H. Jaafar, Robert J. Gray, Emanuele Verrelli, Mary O’Neill, Stephen M. Kelly, and Neil T. Kemp. Nanoscale, 2017, 9, 17091–17098 DOI: 10.1039/C7NR06138B First published on 24 Oct 2017

This paper is behind a paywall.

The memristor and the neural network

It would seem machine learning could experience a significant upgrade if the work in Wei Lu’s University of Michigan laboratory can be scaled for general use. From a December 22, 2017 news item on ScienceDaily,

A new type of neural network made with memristors can dramatically improve the efficiency of teaching machines to think like humans.

The network, called a reservoir computing system, could predict words before they are said during conversation, and help predict future outcomes based on the present.

The research team that created the reservoir computing system, led by Wei Lu, professor of electrical engineering and computer science at the University of Michigan, recently published their work in Nature Communications.

A December 19, 2017 University of Michigan news release (also on EurekAlert) by Dan Newman, which originated the news item, expands on the theme,

Reservoir computing systems, which improve on a typical neural network’s capacity and reduce the required training time, have been created in the past with larger optical components. However, the U-M group created their system using memristors, which require less space and can be integrated more easily into existing silicon-based electronics.

Memristors are a special type of resistive device that can both perform logic and store data. This contrasts with typical computer systems, where processors perform logic separate from memory modules. In this study, Lu’s team used a special memristor that memorizes events only in the near history.

Inspired by brains, neural networks are composed of neurons, or nodes, and synapses, the connections between nodes.

To train a neural network for a task, a neural network takes in a large set of questions and the answers to those questions. In this process of what’s called supervised learning, the connections between nodes are weighted more heavily or lightly to minimize the amount of error in achieving the correct answer.
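
As a toy illustration of that weighting process (and nothing like the Michigan group’s actual code), here is a minimal Python sketch of supervised learning on a single linear node: the weights are nudged up or down on each example to shrink the error between the predicted and the correct answer. The data and learning rate are invented.

```python
# Toy supervised learning: adjust weights to reduce prediction error.
# The "questions" (inputs) and "answers" (targets) below are invented.
inputs  = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]]
targets = [1.0, 1.0, 1.0, 0.0]
weights = [0.0, 0.0]
bias, lr = 0.0, 0.1   # lr = learning rate

for epoch in range(200):
    for x, t in zip(inputs, targets):
        y = sum(w * xi for w, xi in zip(weights, x)) + bias   # prediction
        err = t - y
        # weight the connections more heavily or lightly to shrink the error
        weights = [w + lr * err * xi for w, xi in zip(weights, x)]
        bias += lr * err

print("learned weights:", [round(w, 2) for w in weights], "bias:", round(bias, 2))
```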

Once trained, a neural network can then be tested without knowing the answer. For example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.

“A lot of times, it takes days or months to train a network,” says Lu. “It is very expensive.”

Image recognition is also a relatively simple problem, as it doesn’t require any information apart from a static image. More complex tasks, such as speech recognition, can depend highly on context and require neural networks to have knowledge of what has just occurred, or what has just been said.

“When transcribing speech to text or translating languages, a word’s meaning and even pronunciation will differ depending on the previous syllables,” says Lu.

This requires a recurrent neural network, which incorporates loops within the network that give the network a memory effect. However, training these recurrent neural networks is especially expensive, Lu says.

Reservoir computing systems built with memristors, however, can skip most of the expensive training process and still provide the network the capability to remember. This is because the most critical component of the system – the reservoir – does not require training.

When a set of data is inputted into the reservoir, the reservoir identifies important time-related features of the data, and hands it off in a simpler format to a second network. This second network then only needs training like simpler neural networks, changing weights of the features and outputs that the first network passed on until it achieves an acceptable level of error.
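
A bare-bones sketch of that division of labour follows: a fixed, untrained random reservoir supplies the time-dependent features, and only a simple linear readout is trained. This is an ordinary echo-state-style reservoir written in Python rather than a memristor array, and all of the sizes, constants and the sample task are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, untrained "reservoir": a small recurrent network with random weights.
N_IN, N_RES, LEAK = 1, 50, 0.3
W_in  = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W_res = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # keep the dynamics stable

def run_reservoir(u):
    """Feed a 1-D input sequence through the reservoir, return its states."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        pre = W_in @ np.array([u_t]) + W_res @ x
        x = (1 - LEAK) * x + LEAK * np.tanh(pre)
        states.append(x.copy())
    return np.array(states)

# Stand-in temporal task: predict the input one step ahead.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
X = run_reservoir(u[:-1])
y = u[1:]

# Only the readout is trained, here with one-shot ridge regression (cheap).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)

pred = X @ W_out
print("readout training error:", round(float(np.mean((pred - y) ** 2)), 6))
```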


IMAGE:  Schematic of a reservoir computing system, showing the reservoir with internal dynamics and the simpler output. Only the simpler output needs to be trained, allowing for quicker and lower-cost training. Courtesy Wei Lu.

 

“The beauty of reservoir computing is that while we design it, we don’t have to train it,” says Lu.

The team proved the reservoir computing concept using a test of handwriting recognition, a common benchmark among neural networks. Numerals were broken up into rows of pixels, and fed into the computer with voltages like Morse code, with zero volts for a dark pixel and a little over one volt for a white pixel.

Using only 88 memristors as nodes to identify handwritten versions of numerals, compared to a conventional network that would require thousands of nodes for the task, the reservoir achieved 91% accuracy.
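
To make the “Morse code” description concrete, here is a toy Python sketch of turning rows of a tiny binary digit image into voltage levels: roughly zero volts for a dark pixel and a little over one volt for a white one, as in the article. The 5×5 digit, the exact voltage value and the bookkeeping are all invented placeholders, not the Michigan setup.

```python
# Encode rows of a tiny binary "digit" image as voltage levels,
# in the spirit of the article: ~0 V for a dark pixel,
# a little over 1 V for a white pixel. The 5x5 digit is invented.
DIGIT_1 = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
]

V_DARK, V_WHITE = 0.0, 1.1   # volts (illustrative values)

def row_to_pulses(row):
    """Map one row of pixels to a sequence of voltage levels."""
    return [V_WHITE if p else V_DARK for p in row]

# Each row would be streamed in time to a memristor node of the reservoir.
for i, row in enumerate(DIGIT_1):
    print(f"row {i}: {row_to_pulses(row)} V")
```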

Reservoir computing systems are especially adept at handling data that varies with time, like a stream of data or words, or a function depending on past results.

To demonstrate this, the team tested a complex function that depended on multiple past results, which is common in engineering fields. The reservoir computing system was able to model the complex function with minimal error.

Lu plans on exploring two future paths with this research: speech recognition and predictive analysis.

“We can make predictions on natural spoken language, so you don’t even have to say the full word,” explains Lu.

“We could actually predict what you plan to say next.”

In predictive analysis, Lu hopes to use the system to take in signals with noise, like static from far-off radio stations, and produce a cleaner stream of data. “It could also predict and generate an output signal even if the input stopped,” he says.


IMAGE:  Wei Lu, Professor of Electrical Engineering & Computer Science at the University of Michigan holds a memristor he created. Photo: Marcin Szczepanski.

 

The work was published in Nature Communications in the article, “Reservoir computing using dynamic memristors for temporal information processing”, with authors Chao Du, Fuxi Cai, Mohammed Zidan, Wen Ma, Seung Hwan Lee, and Prof. Wei Lu.

The research is part of a $6.9 million DARPA [US Defense Advanced Research Projects Agency] project, called “Sparse Adaptive Local Learning for Sensing and Analytics [also known as SALLSA],” that aims to build a computer chip based on self-organizing, adaptive neural networks. The memristor networks are fabricated at Michigan’s Lurie Nanofabrication Facility.

Lu and his team previously used memristors in implementing “sparse coding,” which used a 32-by-32 array of memristors to efficiently analyze and recreate images.

Here’s a link to and a citation for the paper,

Reservoir computing using dynamic memristors for temporal information processing by Chao Du, Fuxi Cai, Mohammed A. Zidan, Wen Ma, Seung Hwan Lee & Wei D. Lu. Nature Communications 8, Article number: 2204 (2017) doi:10.1038/s41467-017-02337-y Published online: 19 December 2017

This is an open access paper.

Cotton that glows ‘naturally’

Interesting, non? This is causing a bit of excitement but, first, here’s more from the Sept. 14, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

Cotton that’s grown with molecules that endow appealing properties – like fluorescence or magnetism – may one day eliminate the need for applying chemical treatments to fabrics to achieve such qualities, a new study suggests. Applying synthetic polymers to fabrics can result in a range of appealing properties, but anything added to a fabric can get washed or worn away. Furthermore, while many fibers used in fabrics are synthetic (e.g., polyester), some consumers prefer natural fibers to avoid issues related to sensation, skin irritation, smoothness, and weight. Here, Filipe Natalio and colleagues created cotton fibers that incorporate composites with fluorescent and magnetic properties. They synthesized glucose derivatives that deliver the desirable molecules into the growing ovules of the cotton plant, Gossypium hirsutum. Thus, the molecules are embedded into the cotton fibers themselves, rather than added in the form of a chemical treatment. The resulting fibers exhibited fluorescent or magnetic properties, respectively, although they were weaker than raw fibers lacking the embedded composites, the authors report. They propose that similar techniques could be expanded to other biological systems such as bacteria, bamboo, silk, and flax – essentially opening a new era of “material farming.”

Robert Service’s Sept. 14, 2017 article for Science explores the potential of growing cotton with new properties (Note: A link has been removed),

You may have heard about smartphones and smart homes. But scientists are also designing smart clothes, textiles that can harvest energy, light up, detect pollution, and even communicate with the internet. The problem? Even when they work, these often chemically treated fabrics wear out rapidly over time. Now, researchers have figured out a way to “grow” some of these functions directly into cotton fibers. If the work holds, it could lead to stronger, lighter, and brighter textiles that don’t wear out.

Yet, as the new paper went to press today in Science, editors at the journal were made aware of mistakes in a figure in the supplemental material that prompted them to issue an Editorial Expression of Concern, at least until they receive clarification from the authors. Filipe Natalio, lead author and chemist at the Weizmann Institute of Science in Rehovot, Israel, says the mistakes were errors in the names of pigments used in control experiments, which he is working with the editors to fix.

That hasn’t dampened enthusiasm for the work. “I like this paper a lot,” says Michael Strano, a chemical engineer at the Massachusetts Institute of Technology in Cambridge. The study, he says, lays out a new way to add new functions into plants without changing their genes through genetic engineering. Those approaches face steep regulatory hurdles for widespread use. “Assuming the methods claimed are correct, that’s a big advantage,” Strano says.

Sam Lemonick’s Sept. 14, 2017 article for forbes.com describes how the researchers introduced new properties (in this case, glowing colours) into the cotton plants,

His [Filipe Natalio] team of researchers in Israel, Germany, and Austria used sugar molecules to sneak new properties into cotton. Like a Trojan horse, Natalio says. They tested the method by tagging glucose with a fluorescent dye molecule that glows green when hit with the right kind of light.

They bathed cotton ovules—the part of the plant that makes the fibers—in the glucose. And just like flowers suck up dyed water in grade school experiments, the ovules absorbed the sugar solution and piped the tagged glucose molecules to their cells. As the fibers grew, they took on a yellowish tinge—and glowed bright green under ultraviolet light.

Glowing cotton wasn’t enough for Natalio. It took his group about six months to be sure they were actually delivering the fluorescent protein into the cotton cells and not just coating the fibers in it. Once they were certain, they decided to push the envelope with something very unnatural: magnets.

This time, Natalio’s team modified glucose with the rare earth metal dysprosium, making a molecule that acts like a magnet. And just like they did with the dye, the researchers fed it to cotton ovules and ended up with fibers with magnetic properties.

Both Service and Lemonick note that the editor of the journal Science (where the research paper was published), Jeremy Berg, has written an expression of editorial concern as of Sept. 14, 2017,

In the 15 September [2017] issue, Science published the Report “Biological fabrication of cellulose fibers with tailored properties” by F. Natalio et al. (1). After the issue went to press, we became aware of errors in the labeling and/or identification of the pigments used for the control experiments detailed in figs. S1 and S2 of the supplementary materials. Science is publishing this Editorial Expression of Concern to alert our readers to this information as we await full explanation and clarification from the authors.

The problem seems to be one of terminology (from the Lemonick article),

… Filipe Natalio, lead author and chemist at the Weizmann Institute of Science in Rehovot, Israel, says the mistakes were errors in the names of pigments used in control experiments, which he is working with the editors to fix.

These things happen. Terminology and spelling aren’t always the same from one country to the next and it can result in confusion. I’m glad to see the discussion is being held openly.

Here’s a link to and a citation for the paper,

Biological fabrication of cellulose fibers with tailored properties by Filipe Natalio, Regina Fuchs, Sidney R. Cohen, Gregory Leitus, Gerhard Fritz-Popovski, Oskar Paris, Michael Kappl, Hans-Jürgen Butt. Science 15 Sep 2017: Vol. 357, Issue 6356, pp. 1118-1122 DOI: 10.1126/science.aan5830

This paper is behind a paywall.

For first time: high-dimensional quantum encryption performed in real world city conditions

Having congratulated China on the world’s first quantum communication network a few weeks ago (August 22, 2017 posting), I find this quantum encryption story timely. From an August 24, 2017 news item on phys.org,

For the first time, researchers have sent a quantum-secured message containing more than one bit of information per photon through the air above a city. The demonstration showed that it could one day be practical to use high-capacity, free-space quantum communication to create a highly secure link between ground-based networks and satellites, a requirement for creating a global quantum encryption network.

Quantum encryption uses photons to encode information in the form of quantum bits. In its simplest form, known as 2D encryption, each photon encodes one bit: either a one or a zero. Scientists have shown that a single photon can encode even more information—a concept known as high-dimensional quantum encryption—but until now this has never been demonstrated with free-space optical communication in real-world conditions. With eight bits necessary to encode just one letter, for example, packing more information into each photon would significantly speed up data transmission.

This looks like donuts on a stick to me,

For the first time, researchers have demonstrated sending messages in a secure manner using high dimensional quantum cryptography in realistic city conditions. Image Credit: SQO team, University of Ottawa.

An Aug. 24, 2017 Optical Society news release (also on EurekAlert), which originated the news item, describes the work done by a team in Ottawa, Canada, (Note: The ‘Congratulate China’ piece (August 22, 2017 posting) includes excerpts from an article that gave a brief survey of various national teams [including Canada] working on quantum communication networks; Links have been removed),

“Our work is the first to send messages in a secure manner using high-dimensional quantum encryption in realistic city conditions, including turbulence,” said research team lead, Ebrahim Karimi, University of Ottawa, Canada. “The secure, free-space communication scheme we demonstrated could potentially link Earth with satellites, securely connect places where it is too expensive to install fiber, or be used for encrypted communication with a moving object, such as an airplane.”


As detailed in Optica, The Optical Society’s journal for high impact research, the researchers demonstrated 4D quantum encryption over a free-space optical network spanning two buildings 0.3 kilometers apart at the University of Ottawa. This high-dimensional encryption scheme is referred to as 4D because each photon encodes two bits of information, which provides the four possibilities of 01, 10, 00 or 11.

In addition to sending more information per photon, high-dimensional quantum encryption can also tolerate more signal-obscuring noise before the transmission becomes unsecure. Noise can arise from turbulent air, failed electronics, detectors that don’t work properly and from attempts to intercept the data. “This higher noise threshold means that when 2D quantum encryption fails, you can try to implement 4D because it, in principle, is more secure and more noise resistant,” said Karimi.
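
The “4D” bookkeeping is easy to sketch: two classical bits map onto one of four photon symbols, so a message needs half as many photons as with 2D encoding. In the toy Python sketch below, the symbol labels are placeholders and bear no relation to the actual structured-light states used by the Ottawa team.

```python
# 4D (two bits per photon) vs 2D (one bit per photon) bookkeeping.
# The four "symbols" stand in for four orthogonal photon states;
# the labels are placeholders, not the actual structured-light modes.
SYMBOLS_4D = {"00": "s0", "01": "s1", "10": "s2", "11": "s3"}

def encode_4d(bits):
    """Group a bit string into pairs and map each pair to one photon symbol."""
    assert len(bits) % 2 == 0
    return [SYMBOLS_4D[bits[i:i + 2]] for i in range(0, len(bits), 2)]

message = "0100100001101001"        # "Hi" in ASCII, 16 bits
photons_4d = encode_4d(message)
print("bits to send:", len(message))
print("photons needed, 2D encoding:", len(message))      # one bit per photon
print("photons needed, 4D encoding:", len(photons_4d))   # two bits per photon
```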

Using light for encryption

Today, mathematical algorithms are used to encrypt text messages, banking transactions and health information. Intercepting these encrypted messages requires figuring out the exact algorithm used to encrypt a given piece of data, a feat that is difficult now but that is expected to become easier in the next decade or so as computers become more powerful.

Given the expectation that current algorithms may not work as well in the future, more attention is being given to stronger encryption techniques such as quantum key distribution, which uses properties of light particles known as quantum states to encode and send the key needed to decrypt encoded data.

Although wired and free-space quantum encryption has been deployed on some small, local networks, implementing it globally will require sending encrypted messages between ground-based stations and the satellite-based quantum communication networks that would link cities and countries. Horizontal tests through the air can be used to simulate sending signals to satellites, with about three horizontal kilometers being roughly equal to sending the signal through the Earth’s atmosphere to a satellite.

Before trying a three-kilometer test, the researchers wanted to see if it was even possible to perform 4D quantum encryption outside. This was thought to be so challenging that some other scientists in the field said that the experiment would not work. One of the primary problems faced during any free-space experiment is dealing with air turbulence, which distorts the optical signal.

Real-world testing

For the tests, the researchers brought their laboratory optical setups to two different rooftops and covered them with wooden boxes to provide some protection from the elements. After much trial and error, they successfully sent messages secured with 4D quantum encryption over their intracity link. The messages exhibited an error rate of 11 percent, below the 19 percent threshold needed to maintain a secure connection. They also compared 4D encryption with 2D, finding that, after error correction, they could transmit 1.6 times more information per photon with 4D quantum encryption, even with turbulence.

“After bringing equipment that would normally be used in a clean, isolated lab environment to a rooftop that is exposed to the elements and has no vibration isolation, it was very rewarding to see results showing that we could transmit secure data,” said Alicia Sit, an undergraduate student in Karimi’s lab.

As a next step, the researchers are planning to implement their scheme into a network that includes three links that are about 5.6 kilometers apart and that uses a technology known as adaptive optics to compensate for the turbulence. Eventually, they want to link this network to one that exists now in the city. “Our long-term goal is to implement a quantum communication network with multiple links but using more than four dimensions while trying to get around the turbulence,” said Sit.

Here’s a link to and a citation for the paper,

High-dimensional intracity quantum cryptography with structured photons by Alicia Sit, Frédéric Bouchard, Robert Fickler, Jérémie Gagnon-Bischoff, Hugo Larocque, Khabat Heshami, Dominique Elser, Christian Peuntinger, Kevin Günthner, Bettina Heim, Christoph Marquardt, Gerd Leuchs, Robert W. Boyd, and Ebrahim Karimi. Optica Vol. 4, Issue 9, pp. 1006-1010 (2017) https://doi.org/10.1364/OPTICA.4.001006

This is an open access paper.

Using only sunlight to desalinate water

The researchers seem to believe that this new desalination technique could be a game changer. From a June 20, 2017 news item on Azonano,

An off-grid technology that uses only the energy from sunlight to transform salt water into fresh drinking water has been developed as an outcome of federally funded research.

The desalination system uses a combination of light-harvesting nanophotonics and membrane distillation technology and is considered to be the first major innovation from the Center for Nanotechnology Enabled Water Treatment (NEWT), which is a multi-institutional engineering research center located at Rice University.

NEWT’s “nanophotonics-enabled solar membrane distillation” technology (NESMD) integrates tried-and-true water treatment methods with cutting-edge nanotechnology capable of transforming sunlight to heat. …

A June 19, 2017 Rice University news release, which originated the news item, expands on the theme,

More than 18,000 desalination plants operate in 150 countries, but NEWT’s desalination technology is unlike any other used today.

“Direct solar desalination could be a game changer for some of the estimated 1 billion people who lack access to clean drinking water,” said Rice scientist and water treatment expert Qilin Li, a corresponding author on the study. “This off-grid technology is capable of providing sufficient clean water for family use in a compact footprint, and it can be scaled up to provide water for larger communities.”

The oldest method for making freshwater from salt water is distillation. Salt water is boiled, and the steam is captured and run through a condensing coil. Distillation has been used for centuries, but it requires complex infrastructure and is energy inefficient due to the amount of heat required to boil water and produce steam. More than half the cost of operating a water distillation plant is for energy.

An emerging technology for desalination is membrane distillation, where hot salt water is flowed across one side of a porous membrane and cold freshwater is flowed across the other. Water vapor is naturally drawn through the membrane from the hot to the cold side, and because the seawater need not be boiled, the energy requirements are less than they would be for traditional distillation. However, the energy costs are still significant because heat is continuously lost from the hot side of the membrane to the cold.

“Unlike traditional membrane distillation, NESMD benefits from increasing efficiency with scale,” said Rice’s Naomi Halas, a corresponding author on the paper and the leader of NEWT’s nanophotonics research efforts. “It requires minimal pumping energy for optimal distillate conversion, and there are a number of ways we can further optimize the technology to make it more productive and efficient.”

NEWT’s new technology builds upon research in Halas’ lab to create engineered nanoparticles that harvest as much as 80 percent of sunlight to generate steam. By adding low-cost, commercially available nanoparticles to a porous membrane, NEWT has essentially turned the membrane itself into a one-sided heating element that alone heats the water to drive membrane distillation.

“The integration of photothermal heating capabilities within a water purification membrane for direct, solar-driven desalination opens new opportunities in water purification,” said Yale University ‘s Menachem “Meny” Elimelech, a co-author of the new study and NEWT’s lead researcher for membrane processes.

In the PNAS study, researchers offered proof-of-concept results based on tests with an NESMD chamber about the size of three postage stamps and just a few millimeters thick. The distillation membrane in the chamber contained a specially designed top layer of carbon black nanoparticles infused into a porous polymer. The light-capturing nanoparticles heated the entire surface of the membrane when exposed to sunlight. A thin half-millimeter-thick layer of salt water flowed atop the carbon-black layer, and a cool freshwater stream flowed below.

Li, the leader of NEWT’s advanced treatment test beds at Rice, said the water production rate increased greatly by concentrating the sunlight. “The intensity got up to 17.5 kilowatts per meter squared when a lens was used to concentrate sunlight by 25 times, and the water production increased to about 6 liters per meter squared per hour.”

Li said NEWT’s research team has already made a much larger system that contains a panel that is about 70 centimeters by 25 centimeters. Ultimately, she said, NEWT hopes to produce a modular system where users could order as many panels as they needed based on their daily water demands.

“You could assemble these together, just as you would the panels in a solar farm,” she said. “Depending on the water production rate you need, you could calculate how much membrane area you would need. For example, if you need 20 liters per hour, and the panels produce 6 liters per hour per square meter, you would order a little over 3 square meters of panels.”
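
Li’s back-of-the-envelope sizing rule fits in a few lines; here is a minimal Python sketch using the 6 litres per hour per square metre production figure quoted above (the demand values are just examples).

```python
# Sizing a NESMD panel array from an hourly fresh-water demand,
# using the production rate quoted in the news release.
PRODUCTION_L_PER_H_PER_M2 = 6.0   # litres per hour per square metre (quoted figure)

def panel_area_needed(litres_per_hour):
    """Membrane area (m^2) needed for a target fresh-water flow."""
    return litres_per_hour / PRODUCTION_L_PER_H_PER_M2

for demand in (6, 20, 100):   # litres per hour (example demands)
    print(f"{demand:>3} L/h -> about {panel_area_needed(demand):.1f} m^2 of panels")
```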

Established by the National Science Foundation in 2015, NEWT aims to develop compact, mobile, off-grid water-treatment systems that can provide clean water to millions of people who lack it and make U.S. energy production more sustainable and cost-effective. NEWT, which is expected to leverage more than $40 million in federal and industrial support over the next decade, is the first NSF Engineering Research Center (ERC) in Houston and only the third in Texas since NSF began the ERC program in 1985. NEWT focuses on applications for humanitarian emergency response, rural water systems and wastewater treatment and reuse at remote sites, including both onshore and offshore drilling platforms for oil and gas exploration.

There is a video, but it is focused on the NEWT center rather than any specific water technologies.

For anyone interested in the technology, here’s a link to and a citation for the researchers’ paper,

Nanophotonics-enabled solar membrane distillation for off-grid water purification by Pratiksha D. Dongare, Alessandro Alabastri, Seth Pedersen, Katherine R. Zodrow, Nathaniel J. Hogan, Oara Neumann, Jinjian Wu, Tianxiao Wang, Akshay Deshmukh, Menachem Elimelech, Qilin Li, Peter Nordlander, and Naomi J. Halas. PNAS [Proceedings of the National Academy of Sciences] doi: 10.1073/pnas.1701835114 June 19, 2017

This paper appears to be open access.

Light-based computation made better with silver

It’s pretty amazing to imagine a future where computers run on light but according to a May 16, 2017 news item on ScienceDaily the idea is not beyond the realms of possibility,

Tomorrow’s computers will run on light, and gold nanoparticle chains show much promise as light conductors. Now Ludwig-Maximilians-Universitaet (LMU) in Munich scientists have demonstrated how tiny spots of silver could markedly reduce energy consumption in light-based computation.

Today’s computers are faster and smaller than ever before. The latest generation of transistors will have structural features with dimensions of only 10 nanometers. If computers are to become even faster and at the same time more energy efficient at these minuscule scales, they will probably need to process information using light particles instead of electrons. This is referred to as “optical computing.”

The silver serves as a kind of intermediary between the gold particles while not dissipating energy. Image: Liedl/Hohmann (NIM)

A March 15, 2017 LMU press release (also on EurekAlert), which originated the news item, describes a current use of light in telecommunications technology and this latest research breakthrough (the discrepancy in dates is likely due to when the paper was made available online versus in print),

Fiber-optic networks already use light to transport data over long distances at high speed and with minimum loss. The diameters of the thinnest cables, however, are in the micrometer range, as the light waves — with a wavelength of around one micrometer — must be able to oscillate unhindered. In order to process data on a micro- or even nanochip, an entirely new system is therefore required.

One possibility would be to conduct light signals via so-called plasmon oscillations. This involves a light particle (photon) exciting the electron cloud of a gold nanoparticle so that it starts oscillating. These waves then travel along a chain of nanoparticles at approximately 10% of the speed of light. This approach achieves two goals: nanometer-scale dimensions and enormous speed. What remains, however, is the energy consumption. In a chain composed purely of gold, this would be almost as high as in conventional transistors, due to the considerable heat development in the gold particles.

A tiny spot of silver

Tim Liedl, Professor of Physics at LMU and PI at the cluster of excellence Nanosystems Initiative Munich (NIM), together with colleagues from Ohio University, has now published an article in the journal Nature Physics, which describes how silver nanoparticles can significantly reduce the energy consumption. The physicists built a sort of miniature test track with a length of around 100 nanometers, composed of three nanoparticles: one gold nanoparticle at each end, with a silver nanoparticle right in the middle.

The silver serves as a kind of intermediary between the gold particles while not dissipating energy. To make the silver particle’s plasmon oscillate, more excitation energy is required than for gold. Therefore, the energy just flows “around” the silver particle. “Transport is mediated via the coupling of the electromagnetic fields around the so-called hot spots which are created between each of the two gold particles and the silver particle,” explains Tim Liedl. “This allows the energy to be transported with almost no loss, and on a femtosecond time scale.”

Textbook quantum model

The decisive precondition for the experiments was the fact that Tim Liedl and his colleagues are experts in the exquisitely exact placement of nanostructures. This is done by the DNA origami method, which allows different crystalline nanoparticles to be placed at precisely defined nanodistances from each other. Similar experiments had previously been conducted using conventional lithography techniques. However, these do not provide the required spatial precision, in particular where different types of metals are involved.

In parallel, the physicists simulated the experimental set-up on the computer – and had their results confirmed. In addition to classical electrodynamic simulations, Alexander Govorov, Professor of Physics at Ohio University, Athens, USA, was able to establish a simple quantum-mechanical model: “In this model, the classical and the quantum-mechanical pictures match very well, which makes it a potential example for the textbooks.”

Here’s a link to and a citation for the paper,

Hotspot-mediated non-dissipative and ultrafast plasmon passage by Eva-Maria Roller, Lucas V. Besteiro, Claudia Pupp, Larousse Khosravi Khorashad, Alexander O. Govorov, & Tim Liedl. Nature Physics (2017) doi:10.1038/nphys4120 Published online 15 May 2017

This paper is behind a paywall.

Generating power from polluted air

I have no idea how viable this concept might be but it is certainly appealing. From a May 8, 2017 news item on Nanowerk (Note: A link has been removed),

Researchers from the University of Antwerp and KU Leuven (University of Leuven), Belgium, have succeeded in developing a process that purifies air and, at the same time, generates power. The device must only be exposed to light in order to function (ChemSusChem, “Harvesting Hydrogen Gas from Air Pollutants with an Unbiased Gas Phase Photoelectrochemical Cell”).

Caption: The new device must only be exposed to light in order to purify air and generate power. Credit: UAntwerpen and KU Leuven

A May 8, 2017 University of Leuven press release (also on EurekAlert), which originated the news item, describes this nifty research in slightly more detail,

“We use a small device with two rooms separated by a membrane,” explains Professor Sammy Verbruggen (UAntwerp/KU Leuven). “Air is purified on one side, while on the other side hydrogen gas is produced from a part of the degradation products. This hydrogen gas can be stored and used later as fuel, as is already being done in some hydrogen buses, for example.”

In this way, the researchers respond to two major social needs: clean air and alternative energy production. The heart of the solution lies at the membrane level, where the researchers use specific nanomaterials. “These catalysts are capable of producing hydrogen gas and breaking down air pollution,” explains Professor Verbruggen. “In the past, these cells were mostly used to extract hydrogen from water. We have now discovered that this is also possible, and even more efficient, with polluted air.”

It seems to be a complex process, but it is not: the device must only be exposed to light. The researchers’ goal is to be able to use sunlight, as the processes underlying the technology are similar to those found in solar panels. The difference here is that electricity is not generated directly, but rather that air is purified while the generated power is stored as hydrogen gas.

“We are currently working on a scale of only a few square centimetres. At a later stage, we would like to scale up our technology to make the process industrially applicable. We are also working on improving our materials so we can use sunlight more efficiently to trigger the reactions.”

Here’s a link to and a citation for the paper,

Harvesting Hydrogen Gas from Air Pollutants with an Unbiased Gas Phase Photoelectrochemical Cell by Prof. Dr. Sammy W. Verbruggen, Myrthe Van Hal, Tom Bosserez, Dr. Jan Rongé, Dr. Birger Hauchecorne, Prof. Dr. Johan A. Martens, and Prof. Dr. Silvia Lenaerts. ChemSusChem Volume 10, Issue 7, pages 1413–1418, April 10, 2017 DOI: 10.1002/cssc.201601806 Version of Record online: 6 MAR 2017

© 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Magic nano ink

Colour changes © Nature Communications 2017 / MPI [Max Planck Institute] for Intelligent Systems

A March 1, 2017 news item on Nanowerk helps to explain the image seen above (Note: A link has been removed),

Plasmonic printing produces resolutions several times greater than conventional printing methods. In plasmonic printing, colours are formed on the surfaces of tiny metallic particles when light excites their electrons to oscillate. Researchers at the Max Planck Institute for Intelligent Systems in Stuttgart have now shown how the colours of such metallic particles can be altered with hydrogen (Nature Communications, “Dynamic plasmonic colour display”).

The technique could open the way for animating ultra-high-resolution images and for developing extremely sharp displays. At the same time, it provides new approaches for encrypting information and detecting counterfeits.

A March 1, 2017 Max Planck Institute press release, which originated the news item, provides more  history and more detail about the research,

Glass artisans in medieval times exploited the effect long before it was even known. They coloured the magnificent windows of gothic cathedrals with nanoparticles of gold, which glowed red in the light. It was not until the middle of the 20th century that the underlying physical phenomenon was given a name: plasmons. These collective oscillations of free electrons are stimulated by the absorption of incident electromagnetic radiation. The smaller the metallic particles, the shorter the wavelength of the absorbed radiation. In some cases, the resonance frequency, i.e., the absorption maximum, falls within the visible light spectrum. The unabsorbed part of the spectrum is then scattered or reflected, creating an impression of colour. The metallic particles, which usually appear silvery, copper-coloured or golden, then take on entirely new colours.

A resolution of 100,000 dots per inch

Researchers are also taking advantage of the effect to develop plasmonic printing, in which tailor-made square metal particles are arranged in specific patterns on a substrate. The edge length of the particles is in the order of less than 100 nanometres (100 billionths of a metre). This allows a resolution of 100,000 dots per inch – several times greater than what today’s printers and displays can achieve.

For metallic particles measuring several 100 nanometres across, the resonance frequency of the plasmons lies within the visible light spectrum. When white light falls on such particles, they appear in a specific colour, for example red or blue. The colour of the metal in question is determined by the size of the particles and their distance from each other. These adjustment parameters therefore serve the same purpose in plasmonic printing as the palette of colours in painting.

The trick with the chemical reaction

The Smart Nanoplasmonics Research Group at the Max Planck Institute for Intelligent Systems in Stuttgart also makes use of this colour variability. They are currently working on making dynamic plasmonic printing. They have now presented an approach that allows them to alter the colours of the pixels predictably – even after an image has been printed. “The trick is to use magnesium. It can undergo a reversible chemical reaction in which the metallic character of the element is lost,” explains Laura Na Liu, who leads the Stuttgart research group. “Magnesium can absorb up to 7.6% of hydrogen by weight to form magnesium hydride, or MgH2”, Liu continues. The researchers coat the magnesium with palladium, which acts as a catalyst in the reaction.

During the continuous transition of metallic magnesium into non-metallic MgH2, the colour of some of the pixels changes several times. The colour change and the speed of the rate at which it proceeds follow a clear pattern. This is determined both by the size of and the distance between the individual magnesium particles as well as by the amount of hydrogen present.

In the case of total hydrogen saturation, the colour disappears completely, and the pixels reflect all the white light that falls on them. This is because the magnesium is no longer present in metallic form but only as MgH2. Hence, there are also no free metal electrons that can be made to oscillate.

Minerva’s vanishing act

The scientists demonstrated the effect of such dynamic colour behaviour on a plasmonic print of Minerva, the Roman goddess of wisdom, which also bore the logo of the Max Planck Society. They chose the size of their magnesium particles so that Minerva’s hair first appeared reddish, the head covering yellow, the feather crest red and the laurel wreath and outline of her face blue. They then washed the micro-print with hydrogen. A time-lapse film shows how the individual colours change. Yellow turns red, red turns blue, and blue turns white. After a few minutes all the colours disappear, revealing a white surface instead of Minerva.

The scientists also showed that this process is reversible by replacing the hydrogen stream with a stream of oxygen. The oxygen reacts with the hydrogen in the magnesium hydride to form water, so that the magnesium particles become metallic again. The pixels then change back in reverse order, and in the end Minerva appears in her original colours.

In a similar manner the researchers first made the micro image of a famous Van Gogh painting disappear and then reappear. They also produced complex animations that give the impression of fireworks.

The principle of a new encryption technique

Laura Na Liu can imagine using this principle in a new encryption technology. To demonstrate this, the group formed various letters with magnesium pixels. The addition of hydrogen then caused some letters to disappear over time, like the image of Minerva. “As for the rest of the letters, a thin oxide layer formed on the magnesium particles after exposing the sample in air for a short time before palladium deposition,” Liu explains. This layer is impermeable to hydrogen. The magnesium lying under the oxide layer therefore remains metallic − and visible − because light is able to excite the plasmons in the magnesium.

In this way it is possible to conceal a message, for example by mixing real and nonsensical information. Only the intended recipient is able to make the nonsensical information disappear and filter out the real message. For example, after decoding the message “Hartford” with hydrogen, only the words “art or” would remain visible. To make it more difficult to crack such encrypted messages, the group is currently working on a process that would require a precisely adjusted hydrogen concentration for deciphering.
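
The “Hartford” example can be mimicked in a few lines of Python: every character is printed, but only the pixels left without the protective oxide layer lose their colour under hydrogen, leaving the real message behind. The protection mask below is an invented stand-in for the patterning step, not the group’s actual process.

```python
# Toy model of the plasmonic hide-and-reveal scheme: every character is
# "printed", but only those flagged as unprotected (no oxide cap) vanish
# when hydrogen is applied. The mask reproduces the article's example.
printed = "Hartford"
protected = [False, True, True, True, False, True, True, False]
#            H      a     r     t     f      o     r     d
# True  = oxide-capped pixel (stays visible under hydrogen)
# False = bare magnesium pixel (loses its colour under hydrogen)

def apply_hydrogen(text, mask):
    """Return what remains visible after the unprotected pixels lose colour."""
    return "".join(ch if keep else " " for ch, keep in zip(text, mask))

print("as printed:    ", printed)
print("after hydrogen:", apply_hydrogen(printed, protected))   # -> ' art or '
```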

Liu believes that the technology could also be used some day in the fight against counterfeiting. “For example, plasmonic security features could be printed on banknotes or pharmaceutical packs, which could later be checked or read only under specific conditions unknown to counterfeiters.”

It doesn’t necessarily have to be hydrogen

Laura Na Liu knows that the use of hydrogen makes some applications difficult and impractical for everyday use such as in mobile displays. “We see our work as a starting shot for a new principle: the use of chemical reactions for dynamic printing,” the Stuttgart physicist says. It is certainly conceivable that the research will soon lead to the discovery of chemical reactions for colour changes other than the phase transition between magnesium and magnesium dihydride, for example, reactions that require no gaseous reactants.

Here’s a link to and a citation for the paper,

Dynamic plasmonic colour display by Xiaoyang Duan, Simon Kamin, & Na Liu. Nature Communications 8, Article number: 14606 (2017) doi:10.1038/ncomms14606 Published online: 24 February 2017

This paper is open access.

Making lead look like gold (so to speak)

Apparently you can make lead ‘look’ like gold if you can get it to reflect light in the same way. From a Feb. 28, 2017 news item on Nanowerk (Note: A link has been removed),

Since the Middle Ages, alchemists have sought to transmute elements, the most famous example being the long quest to turn lead into gold. Transmutation has been realized in modern times, but on a minute scale using a massive particle accelerator.

Now, theorists at Princeton University have proposed a different approach to this ancient ambition — just make one material behave like another. A computational theory published Feb. 24 [2017] in the journal Physical Review Letters (“How to Make Distinct Dynamical Systems Appear Spectrally Identical”) demonstrates that any two systems can be made to look alike, even if just for the smallest fraction of a second.

In this context, for two objects to “look” like each other, they need to reflect light in the same way. The Princeton researchers’ method involves using light to make non-permanent changes to a substance’s molecules so that they mimic the reflective properties of another substance’s molecules. This ability could have implications for optical computing, a type of computing in which electrons are replaced by photons that could greatly enhance processing power but has proven extremely difficult to engineer. It also could be applied to molecular detection and experiments in which expensive samples could be replaced by cheaper alternatives.

A Feb. 28, 2017 Princeton University news release (also on EurekAlert) by Tien Nguyen, which originated the news item, expands on the theme (Note: Links have been removed),

“It was a big shock for us that such a general statement as ‘any two objects can be made to look alike’ could be made,” said co-author Denys Bondar, an associate research scholar in the laboratory of co-author Herschel Rabitz, Princeton’s Charles Phelps Smyth ’16 *17 Professor of Chemistry.

The Princeton researchers posited that they could control the light that bounces off a molecule or any substance by controlling the light shone on it, which would allow them to alter how it looks. This type of manipulation requires a powerful light source such as an ultrafast laser and would last for only a femtosecond, or one quadrillionth of a second. Unlike normal light sources, this ultrafast laser pulse is strong enough to interact with molecules and distort their electron cloud while not actually changing their identity.

“The light emitted by a molecule depends on the shape of its electron cloud, which can be sculptured by modern lasers,” Bondar said. Using advanced computational theory, the research team developed a method called “spectral dynamic mimicry” that allowed them to calculate the laser pulse shape, which includes timing and wavelength, to produce any desired spectral output. In other words, making any two systems look alike.

Conversely, this spectral control could also be used to make two systems look as different from one another as possible. This differentiation, the researchers suggested, could prove valuable for applications of molecular detections such as identifying toxic versus safe chemicals.

Shaul Mukamel, a chemistry professor at the University of California-Irvine, said that the Princeton research is a step forward in an important and active research field called coherent control, in which light can be manipulated to control behavior at the molecular level. Mukamel, who has collaborated with the Rabitz lab but was not involved in the current work, said that the Rabitz group has had a prominent role in this field for decades, advancing technology such as quantum computing and using light to drive artificial chemical reactivity.

“It’s a very general and nice application of coherent control,” Mukamel said. “It demonstrates that you can, by shaping the optical paths, bring the molecules to do things that you want beforehand — it could potentially be very significant.”

Since the Middle Ages, alchemists have sought to transmute elements, the most famous example being the long quest to turn lead into gold. Now, theorists at Princeton University have proposed a different approach to this ancient ambition — just make one material behave like another, even if just for the smallest fraction of a second. The researchers are, left to right, Renan Cabrera, an associate research scholar in chemistry; Herschel Rabitz, Princeton’s Charles Phelps Smyth ’16 *17 Professor of Chemistry; associate research scholar in chemistry Denys Bondar; and graduate student Andre Campos. (Photo by C. Todd Reichart, Department of Chemistry)

Here’s a link to and a citation for the paper,

How to Make Distinct Dynamical Systems Appear Spectrally Identical by Andre G. Campos, Denys I. Bondar, Renan Cabrera, and Herschel A. Rabitz. Phys. Rev. Lett. 118, 083201 (Vol. 118, Iss. 8) DOI: https://doi.org/10.1103/PhysRevLett.118.083201 Published 24 February 2017

© 2017 American Physical Society

This paper is behind a paywall.

Bidirectional prosthetic-brain communication with light?

The possibility of not only making a prosthetic that allows a tetraplegic to grab a coffee cup but also feeling that cup with their ‘hand’ is one step closer to reality, according to a Feb. 22, 2017 news item on ScienceDaily,

Since the early seventies, scientists have been developing brain-machine interfaces; the main application being the use of neural prosthesis in paralyzed patients or amputees. A prosthetic limb directly controlled by brain activity can partially recover the lost motor function. This is achieved by decoding neuronal activity recorded with electrodes and translating it into robotic movements. Such systems however have limited precision due to the absence of sensory feedback from the artificial limb. Neuroscientists at the University of Geneva (UNIGE), Switzerland, asked whether it was possible to transmit this missing sensation back to the brain by stimulating neural activity in the cortex. They discovered that not only was it possible to create an artificial sensation of neuroprosthetic movements, but that the underlying learning process occurs very rapidly. These findings, published in the scientific journal Neuron, were obtained by resorting to modern imaging and optical stimulation tools, offering an innovative alternative to the classical electrode approach.

A Feb. 22, 2017 Université de Genève press release on EurekAlert, which originated the news item, provides more detail,

Motor function is at the heart of all behavior and allows us to interact with the world. Therefore, replacing a lost limb with a robotic prosthesis is the subject of much research, yet successful outcomes are rare. Why is that? Until this moment, brain-machine interfaces are operated by relying largely on visual perception: the robotic arm is controlled by looking at it. The direct flow of information between the brain and the machine remains thus unidirectional. However, movement perception is not only based on vision but mostly on proprioception, the sensation of where the limb is located in space. “We have therefore asked whether it was possible to establish a bidirectional communication in a brain-machine interface: to simultaneously read out neural activity, translate it into prosthetic movement and reinject sensory feedback of this movement back in the brain”, explains Daniel Huber, professor in the Department of Basic Neurosciences of the Faculty of Medicine at UNIGE.

Providing artificial sensations of prosthetic movements

In contrast to invasive approaches using electrodes, Daniel Huber’s team specializes in optical techniques for imaging and stimulating brain activity. Using a method called two-photon microscopy, they routinely measure the activity of hundreds of neurons with single cell resolution. “We wanted to test whether mice could learn to control a neural prosthesis by relying uniquely on an artificial sensory feedback signal”, explains Mario Prsa, researcher at UNIGE and the first author of the study. “We imaged neural activity in the motor cortex. When the mouse activated a specific neuron, the one chosen for neuroprosthetic control, we simultaneously applied stimulation proportional to this activity to the sensory cortex using blue light”. Indeed, neurons of the sensory cortex were rendered photosensitive to this light, allowing them to be activated by a series of optical flashes and thus integrate the artificial sensory feedback signal. The mouse was rewarded upon every above-threshold activation, and 20 minutes later, once the association learned, the rodent was able to more frequently generate the correct neuronal activity.
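
A schematic of that closed loop, in a few lines of Python: read out the activity of the chosen neuron, feed back optical stimulation proportional to it, and reward the animal when the activity crosses a threshold. The dynamics and all the numbers are invented stand-ins for illustration, not the UNIGE model.

```python
import random

# Schematic closed-loop brain-machine interface:
# read one neuron -> optical feedback proportional to its activity
# -> reward when the activity crosses a threshold.
# All dynamics and constants below are invented for illustration.
THRESHOLD = 5.0
FEEDBACK_GAIN = 0.8      # stimulation intensity per unit of activity
LEARNING_RATE = 0.05

random.seed(1)
drive = 0.5              # the animal's tendency to activate the chosen neuron

for trial in range(1, 21):
    activity = max(0.0, random.gauss(drive * 6.0, 1.0))   # read-out (imaging)
    stimulation = FEEDBACK_GAIN * activity                # "blue light" feedback
    rewarded = activity > THRESHOLD
    if rewarded:
        drive += LEARNING_RATE * (1.0 - drive)            # reinforce the behaviour
    print(f"trial {trial:2d}: activity {activity:4.1f}, "
          f"feedback {stimulation:4.1f}, reward: {rewarded}")
```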

This means that the artificial sensation was not only perceived, but that it was successfully integrated as a feedback of the prosthetic movement. In this manner, the brain-machine interface functions bidirectionally. The Geneva researchers think that the reason why this fabricated sensation is so rapidly assimilated is because it most likely taps into very basic brain functions. Feeling the position of our limbs occurs automatically, without much thought and probably reflects fundamental neural circuit mechanisms. This type of bidirectional interface might allow in the future more precisely displacing robotic arms, feeling touched objects or perceiving the necessary force to grasp them.

At present, the neuroscientists at UNIGE are examining how to produce a more efficient sensory feedback. They are currently capable of doing it for a single movement, but is it also possible to provide multiple feedback channels in parallel? This research sets the groundwork for developing a new generation of more precise, bidirectional neural prostheses.

Towards better understanding the neural mechanisms of neuroprosthetic control

By resorting to modern imaging tools, hundreds of neurons in the surrounding area could also be observed as the mouse learned the neuroprosthetic task. “We know that millions of neural connections exist. However, we discovered that the animal activated only the one neuron chosen for controlling the prosthetic action, and did not recruit any of the neighbouring neurons”, adds Daniel Huber. “This is a very interesting finding since it reveals that the brain can home in on and specifically control the activity of just one single neuron”. Researchers can potentially exploit this knowledge to not only develop more stable and precise decoding techniques, but also gain a better understanding of most basic neural circuit functions. It remains to be discovered what mechanisms are involved in routing signals to the uniquely activated neuron.

Caption: A novel optical brain-machine interface allows bidirectional communication with the brain. While a robotic arm is controlled by neuronal activity recorded with optical imaging (red laser), the position of the arm is fed back to the brain via optical microstimulation (blue laser). Credit: © Daniel Huber, UNIGE

Here’s a link to and a citation for the paper,

Rapid Integration of Artificial Sensory Feedback during Operant Conditioning of Motor Cortex Neurons by Mario Prsa, Gregorio L. Galiñanes, Daniel Huber. Neuron Volume 93, Issue 4, p929–939.e6, 22 February 2017 DOI: http://dx.doi.org/10.1016/j.neuron.2017.01.023 Open access funded by European Research Council

This paper is open access.