Category Archives: electronics

A DNA switch for new electronic applications

Little did I dream when reading “The Double Helix: A Personal Account of the Discovery of the Structure of DNA” by James Watson that DNA (deoxyribonucleic acid) would one day become just another material for scientists to manipulate. A Feb. 20, 2017 news item on ScienceDaily describes the use of DNA as a material in electronics applications,

DNA, the stuff of life, may very well also pack quite the jolt for engineers trying to advance the development of tiny, low-cost electronic devices.

Much like flipping your light switch at home — only on a scale 1,000 times smaller than a human hair — an ASU [Arizona State University]-led team has now developed the first controllable DNA switch to regulate the flow of electricity within a single, atomic-sized molecule. The new study, led by ASU Biodesign Institute researcher Nongjian Tao, was published online in the journal Nature Communications.


A Feb. 20, 2017 ASU news release (also on EurekAlert), which originated the news item, provides more detail,

“It has been established that charge transport is possible in DNA, but for a useful device, one wants to be able to turn the charge transport on and off. We achieved this goal by chemically modifying DNA,” said Tao, who directs the Biodesign Center for Bioelectronics and Biosensors and is a professor in the Fulton Schools of Engineering. “Not only that, but we can also adapt the modified DNA as a probe to measure reactions at the single-molecule level. This provides a unique way for studying important reactions implicated in disease, or photosynthesis reactions for novel renewable energy applications.”

Engineers often think of electricity like water, and the research team’s new DNA switch acts to control the flow of electrons on and off, just like water coming out of a faucet.

Previously, Tao’s research group had made several discoveries to understand and manipulate DNA to more finely tune the flow of electricity through it. They found they could make DNA behave in different ways — cajoling electrons to flow like waves according to quantum mechanics, or to “hop” like rabbits, the way electricity in a copper wire works — creating an exciting new avenue for DNA-based, nano-electronic applications.

Tao assembled a multidisciplinary team for the project, including ASU postdoctoral researcher Limin Xiang and Yueqi Li performing bench experiments, Julio Palma working on the theoretical framework, with further help and oversight from collaborators Vladimiro Mujica (ASU) and Mark Ratner (Northwestern University).

To accomplish their engineering feat, Tao’s group modified just one of DNA’s iconic double-helix chemical letters, abbreviated as A, C, T or G, with another chemical group called anthraquinone (Aq). Anthraquinone is a three-ringed carbon structure that can be inserted between DNA base pairs and contains what chemists call a redox group (short for reduction-oxidation: the gaining or losing of electrons).

These chemical groups are also the foundation for how our bodies convert chemical energy: they act as switches that send the electrical pulses in our brains and hearts, and that communicate signals within every cell, processes implicated in many of the most prevalent diseases.

The Aq modification lets the DNA helix perform as a switch: the anthraquinone slips comfortably in between the rungs that make up the ladder of the DNA helix, bestowing it with a newfound ability to reversibly gain or lose electrons.

In their studies, the researchers sandwiched the DNA between a pair of electrodes, carefully controlled the electrical field, and measured the ability of the modified DNA to conduct electricity. This was performed using a staple of nano-electronics, a scanning tunneling microscope, whose tip acts as one electrode to complete the connection, being repeatedly pulled in and out of contact with the DNA molecules in the solution like a finger touching a water droplet.

“We found the electron transport mechanism in the present anthraquinone-DNA system favors electron ‘hopping’ via anthraquinone and stacked DNA bases,” said Tao. In addition, they found they could reversibly control the conductance states to switch the DNA on (high conductance) or off (low conductance). When anthraquinone has gained the most electrons (its most-reduced state), it is far more conductive, and the team finely mapped out a 3-D picture to account for how anthraquinone controlled the electrical state of the DNA.
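To make the redox-gated switching concrete, here is a toy sketch of how an electrochemical gate could set the fraction of molecules in the reduced (conductive) state via a Nernst-style expression, with the average conductance interpolating between ON and OFF values. All numbers (midpoint potential, electron count, conductance levels) are hypothetical illustrations of the idea, not values from the paper.

```python
import math

# Toy model of a redox-gated molecular switch; illustrative values only.
F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)
T = 298.0     # room temperature, K

def reduced_fraction(gate_v, e0_v=-0.4, n_electrons=2):
    """Nernst-style fraction of molecules in the reduced (ON) state."""
    x = n_electrons * F * (e0_v - gate_v) / (R * T)
    return 1.0 / (1.0 + math.exp(-x))

def mean_conductance(gate_v, g_on=1e-8, g_off=1e-10):
    """Ensemble-average conductance: ON when reduced, OFF when oxidized."""
    f = reduced_fraction(gate_v)
    return f * g_on + (1.0 - f) * g_off

for v in (-0.6, -0.4, -0.2):
    print(f"gate {v:+.1f} V -> G = {mean_conductance(v):.2e} S")
```

Sweeping the hypothetical gate voltage through the midpoint potential takes the ensemble from the low-conductance (oxidized) state to the high-conductance (reduced) state, which is the switching behavior the quote describes.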

For their next project, they hope to extend their studies to get one step closer toward making DNA nano-devices a reality.

“We are particularly excited that the engineered DNA provides a nice tool to examine redox reaction kinetics and thermodynamics at the single-molecule level,” said Tao.

I last featured Tao’s work with DNA in an April 20, 2015 posting.

Here’s a link to and a citation for the paper,

Gate-controlled conductance switching in DNA by Limin Xiang, Julio L. Palma, Yueqi Li, Vladimiro Mujica, Mark A. Ratner, & Nongjian Tao. Nature Communications 8, Article number: 14471 (2017) doi:10.1038/ncomms14471 Published online: 20 February 2017

This paper is open access.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse, but that doesn’t become clear until you read the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert), Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
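The discharge/recharge programming described above can be sketched as a minimal model (my own illustration, not the Stanford device physics): each pulse nudges a non-volatile conductance by one of 500 evenly spaced steps, and the state persists between pulses with no refresh. The class name and step rule are assumptions for the sketch.

```python
# Minimal sketch of a non-volatile analog synapse (illustrative model).

class ArtificialSynapse:
    def __init__(self, g_min=0.0, g_max=1.0, n_states=500):
        self.g_min, self.g_max = g_min, g_max
        self.step = (g_max - g_min) / (n_states - 1)
        self.g = g_min                    # current conductance state

    def potentiate(self):
        """Recharge pulse: strengthen the connection by one step."""
        self.g = min(self.g_max, self.g + self.step)

    def depress(self):
        """Discharge pulse: weaken the connection by one step."""
        self.g = max(self.g_min, self.g - self.step)

syn = ArtificialSynapse()
for _ in range(100):
    syn.potentiate()
print(f"state after 100 pulses: {syn.g:.4f}")   # 100 steps of 1/499
```

The key property mimicked here is that reading `syn.g` later requires no power or refresh cycle: the programmed state simply persists, which is the sense in which "the processing creates the memory."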

Testing a network of artificial synapses

Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy of between 93 and 97 percent.

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.
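A quick back-of-envelope calculation shows why 500 states per device matters for information density: a binary transistor distinguishes two states (one bit), while a 500-state device carries log2(500) bits.

```python
import math

# A digital transistor distinguishes 2 states (1 bit); a synapse with
# 500 distinguishable conductance states carries log2(500) bits.
binary_bits = math.log2(2)
synapse_bits = math.log2(500)
print(f"binary transistor: {binary_bits:.0f} bit")
print(f"500-state synapse: {synapse_bits:.2f} bits")   # just under 9 bits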

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Ferroelectric roadmap to neuromorphic computing

Having written about memristors and neuromorphic engineering a number of times here, I’m quite intrigued to see some research into another nanoscale device for mimicking the functions of a human brain.

The announcement about the latest research from the team at the US Department of Energy’s Argonne National Laboratory is in a Feb. 14, 2017 news item on Nanowerk (Note: A link has been removed),

Research published in Nature Scientific Reports (“Ferroelectric symmetry-protected multibit memory cell”) lays out a theoretical map to use ferroelectric material to process information using multivalued logic – a leap beyond the simple ones and zeroes that make up our current computing systems that could let us process information much more efficiently.

A Feb. 10, 2017 Argonne National Laboratory news release by Louise Lerner, which originated the news item, expands on the theme,

The language of computers is written in just two symbols – ones and zeroes, meaning yes or no. But a world of richer possibilities awaits us if we could expand to three or more values, so that the same physical switch could encode much more information.

“Most importantly, this novel logic unit will enable information processing using not only “yes” and “no”, but also “either yes or no” or “maybe” operations,” said Valerii Vinokur, a materials scientist and Distinguished Fellow at the U.S. Department of Energy’s Argonne National Laboratory and the corresponding author on the paper, along with Laurent Baudry with the Lille University of Science and Technology and Igor Lukyanchuk with the University of Picardie Jules Verne.

This is the way our brains operate, and they’re something on the order of a million times more efficient than the best computers we’ve ever managed to build – while consuming orders of magnitude less energy.

“Our brains process so much more information, but if our synapses were built like our current computers are, the brain would not just boil but evaporate from the energy they use,” Vinokur said.

While the advantages of this type of computing, called multivalued logic, have long been known, the problem is that we haven’t discovered a material system that could implement it. Right now, transistors can only operate as “on” or “off,” so this new system would have to find a new way to consistently maintain more states – as well as be easy to read and write and, ideally, to work at room temperature.

Hence Vinokur and the team’s interest in ferroelectrics, a class of materials whose polarization can be controlled with electric fields. As ferroelectrics physically change shape when the polarization changes, they’re very useful in sensors and other devices, such as medical ultrasound machines. Scientists are very interested in tapping these properties for computer memory and other applications; but the theory behind their behavior is very much still emerging.

The new paper lays out a recipe by which we could tap the properties of very thin films of a particular class of ferroelectric material called perovskites.

According to the calculations, perovskite films could hold two, three, or even four polarization positions that are energetically stable – “so they could ‘click’ into place, and thus provide a stable platform for encoding information,” Vinokur said.
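The information-density argument for multivalued logic can be made concrete with a little arithmetic: a switch with k stable states needs only log base k as many digits as a binary one to index the same number of values. The helper function below is my own illustration; the paper's perovskite cells would supply the k = 2, 3, or 4 stable polarization positions.

```python
import math

# How many physical switches are needed to distinguish one million
# values, if each switch holds k stable states? (Illustrative only.)

def digits_needed(n_values, k):
    """Number of base-k digits required to index n_values values."""
    return math.ceil(math.log(n_values, k))

for k in (2, 3, 4):
    print(f"{k}-state switches: {digits_needed(1_000_000, k)} needed")
```

Going from two to four stable states halves the number of switches required, which is the efficiency gain the press release is gesturing at.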

The team calculated these stable configurations and how to manipulate the polarization to move it between stable positions using electric fields, Vinokur said.

“When we realize this in a device, it will enormously increase the efficiency of memory units and processors,” Vinokur said. “This offers a significant step towards realization of so-called neuromorphic computing, which strives to model the human brain.”

Vinokur said the team is working with experimentalists to apply the principles to create a working system.

Here’s a link to and a citation for the paper,

Ferroelectric symmetry-protected multibit memory cell by Laurent Baudry, Igor Lukyanchuk, & Valerii M. Vinokur. Scientific Reports 7, Article number: 42196 (2017) doi:10.1038/srep42196 Published online: 08 February 2017

This paper is open access.

Medieval chain mail inspires physicists

A Feb. 9, 2017 news item on Nanowerk describes new research at the Karlsruhe Institute of Technology (KIT), which takes its inspiration from medieval chain mail,

The Middle Ages certainly were far from being science-friendly: Whoever looked for new findings off the beaten track faced the threat of being burned at the stake. Hence, the contribution of this era to technical progress is deemed to be rather small. Scientists of Karlsruhe Institute of Technology (KIT), however, were inspired by medieval mail armor when producing a new metamaterial with novel properties. They succeeded in reversing the Hall coefficient of a material.

The Hall effect is the occurrence of a transverse electric voltage across a current-carrying electric conductor located in a magnetic field. This effect is a basic phenomenon of physics and makes it possible to measure the strength of magnetic fields. It is the basis of magnetic speed sensors in cars and compasses in smartphones. Apart from measuring magnetic fields, the Hall effect can also be used to characterize metals and semiconductors, in particular to determine the charge carrier density of the material. The sign of the measured Hall voltage indicates whether the charge carriers in the semiconductor element carry positive or negative charge.
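The textbook Hall relation behind this, V_H = I * B / (n * q * t), can be sketched in a few lines: the sign of the carrier charge q flips the sign of the measured voltage, which is exactly the quantity the KIT metamaterial makes appear inverted. The numbers below are illustrative, not taken from the press release.

```python
# Classic Hall relation: V_H = I * B / (n * q * t), where n is the
# carrier density, q the carrier charge, t the conductor thickness.
# Illustrative values only.

E_CHARGE = 1.602e-19  # elementary charge, C

def hall_voltage(current_a, b_field_t, n_per_m3, thickness_m, q_c=-E_CHARGE):
    """Transverse Hall voltage across a current-carrying strip."""
    return current_a * b_field_t / (n_per_m3 * q_c * thickness_m)

v_h = hall_voltage(1e-3, 0.5, 1e25, 1e-6)        # electrons by default
print(f"V_H = {v_h * 1e3:.3f} mV")               # negative for electrons
```

Passing a positive `q_c` flips the sign of `v_h`: that sign flip, achieved structurally rather than by changing the carriers, is what the KIT experiment demonstrates.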

The ring structure of the metamaterial was inspired by mail armor of medieval knights. (Photo: KIT)

A Feb. ?, 2017 KIT press release (also on EurekAlert), which originated the news item, expands on the theme,

Mathematicians had already predicted theoretically that it is possible to reverse the Hall coefficient of a material (such as gold or silicon), i.e. to reverse its sign. This was expected to be achieved by a three-dimensional ring structure resembling medieval mail armor. However, this was considered difficult, as the ring mesh of millionths of a meter in size would have to be composed of three different components.


The ring mesh of millionths of a meter in size. (Photo: KIT)

Christian Kern, Muamer Kadic, and Martin Wegener of KIT’s Institute of Applied Physics now found that a single basic material is sufficient, provided that the ring structure chosen follows a certain geometric arrangement. First, they produced polymer scaffolds with a highest-resolution 3D printer. Then, they coated these scaffolds with semiconducting zinc oxide.

The result of the experiment: the scientists can produce metamaterials with a positive Hall coefficient, even though their components have negative coefficients. This sounds a bit like the philosopher’s stone, the formula by which medieval alchemists tried to convert one substance into another. But here, no conversion takes place. “The charge carriers in the metamaterial remain negatively charged electrons,” Christian Kern explains. “Hall measurements only make them appear positively charged, as the structure forces them to take detours.”

Kern admits that this discovery so far is of no practical use. There are sufficient solids with both negative and positive Hall coefficients. But Kern wants to continue the research. The next step will be the production of anisotropic structures with a Hall voltage along the direction of the magnetic field. Normally, the Hall voltage is perpendicular to both the current and the magnetic field. Such unconventional materials might be applied in novel sensors for the direct measurement of magnetic field eddies.

The researchers do not seem to have published a paper about this work.

A new type of diode from South Korea’s Ulsan National Institute of Science and Technology

A Feb. 8, 2017 news item features a ‘dream’ diode from Ulsan National Institute of Science and Technology,

A team of researchers affiliated with UNIST [Ulsan National Institute of Science and Technology] has created a new technique that greatly enhances the performance of Schottky diodes (metal-semiconductor junctions) used in electronic devices. Their research findings have attracted considerable attention within the scientific community by solving the contact resistance problem of the metal-semiconductor junction, which had remained unsolved for almost 50 years.

As described in the January [2017] issue of Nano Letters, the researchers have created a new type of diode with a graphene insertion layer sandwiched between metal and semiconductor. This new technique blows all previous attempts out of the water and is expected to contribute significantly to the semiconductor industry’s growth.

A Jan. 27, 2017 UNIST press release (also on EurekAlert), which originated the news item, describes the research in greater detail,

The Schottky diode is one of the oldest and most representative semiconductor devices, formed by the junction of a semiconductor with a metal. However, due to the atomic intermixing along the interface between the two materials, it has been impossible to produce an ideal diode. (An ideal diode acts like a perfect conductor when forward biased and like a perfect insulator when reverse biased.)
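The parenthetical definition of an ideal diode can be illustrated with the textbook Shockley diode law (a standard semiconductor-physics formula, not one taken from the UNIST paper): forward bias produces exponentially growing current, while reverse bias saturates at the tiny leakage current I_s. The parameter values below are illustrative.

```python
import math

# Shockley diode law: I = I_s * (exp(V / (n * V_T)) - 1).
# Forward bias grows exponentially; reverse bias saturates at -I_s.
# I_s (saturation current) and n (ideality factor) are illustrative.

def diode_current(v_volts, i_s=1e-12, n=1.0, v_t=0.02585):
    """Current through an ideal diode at voltage v_volts (V_T ~ 300 K)."""
    return i_s * (math.exp(v_volts / (n * v_t)) - 1.0)

print(f"forward +0.6 V: {diode_current(0.6):.3e} A")    # milliamp scale
print(f"reverse -0.6 V: {diode_current(-0.6):.3e} A")   # ~ -I_s leakage
```

The roughly ten-orders-of-magnitude asymmetry between forward and reverse current is what "perfect conductor / perfect insulator" approximates, and interface intermixing is one of the non-idealities that degrade it in real Schottky contacts.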


The schematic view of internal photoemission (IPE) measurements on metal/n-Si(001) junctions with Ni, Pt, and Ti electrodes, with and without a graphene insertion layer.

Professor Kibog Park of Natural Science solved this problem by inserting a graphene layer at the metal-semiconductor interface. In the study, the research team demonstrated that this graphene layer, consisting of a single layer of carbon atoms, not only suppresses the material intermixing substantially, but also matches well with the theoretical prediction.

“The sheets of graphene in graphite are separated by a space with such a high quantum-mechanical electron density that no atoms can pass through,” says Professor Park. “Therefore, with this single-layer graphene sandwiched between metal and semiconductor, it is possible to overcome the inevitable atomic diffusion problem.”

The study also carries the physical significance of confirming the theoretical prediction that “in the case of silicon semiconductors, the electrical properties of the junction surfaces hardly change regardless of the type of metal used,” according to Hoon Hahn Yoon (combined M.S./Ph.D. student of Natural Science), the first author of the study.

The internal photoemission method was used to measure the electronic energy barrier of the newly fabricated metal/graphene/n-Si(001) junction diodes. The Internal Photoemission (IPE) Measurement System in the image shown above contributed greatly to these experiments. The system was developed by four UNIST graduate students (Hoon Hahn Yoon, Sungchul Jung, Gahyun Choi, and Junhyung Kim) as part of an undergraduate research project in 2012, supported by the Korea Foundation for the Advancement of Science and Creativity (KOFAC).


Shown above is the Internal Photoemission (IPE) Measurement System, developed by Hoon Hahn Yoon, combined M.S./Ph.D. student of Natural Science at UNIST.

“Students have teamed up and carried out all the necessary steps for the research since they were undergraduates,” Professor Park says. “Therefore, this research is a perfect example of time, persistence, and patience paying off.”

This study has been jointly conducted by Professor Hu Young Jeong of the UNIST Central Research Facilities (UCRF), Professor Kwanpyo Kim of Natural Science, Professor Soon-Yong Kwon of Materials Science and Engineering, and Professor Yong Soo Kim of Ulsan University. It has also been supported by the National Research Foundation of Korea, the Nuclear Research Basis Expansion Project, as well as the Global Ph.D Fellowship (GPF).

Here’s a link to and a citation for the paper,

Strong Fermi-Level Pinning at Metal/n-Si(001) Interface Ensured by Forming an Intact Schottky Contact with a Graphene Insertion Layer by Hoon Hahn Yoon, Sungchul Jung, Gahyun Choi, Junhyung Kim, Youngeun Jeon, Yong Soo Kim, Hu Young Jeong, Kwanpyo Kim, Soon-Yong Kwon, and Kibog Park. Nano Lett., 2017, 17 (1), pp 44–49 DOI: 10.1021/acs.nanolett.6b03137 Publication Date (Web): December 14, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Aliens wreak havoc on our personal electronics

The aliens in question are subatomic particles and the havoc they wreak is low-grade according to the scientist who was presenting on the topic at the AAAS (American Association for the Advancement of Science) 2017 Annual Meeting (Feb. 16 – 20, 2017) in Boston, Massachusetts. From a Feb. 17, 2017 news item on ScienceDaily,

You may not realize it but alien subatomic particles raining down from outer space are wreaking low-grade havoc on your smartphones, computers and other personal electronic devices.

When your computer crashes and you get the dreaded blue screen or your smartphone freezes and you have to go through the time-consuming process of a reset, most likely you blame the manufacturer: Microsoft or Apple or Samsung. In many instances, however, these operational failures may be caused by the impact of electrically charged particles generated by cosmic rays that originate outside the solar system.

“This is a really big problem, but it is mostly invisible to the public,” said Bharat Bhuva, professor of electrical engineering at Vanderbilt University, in a presentation on Friday, Feb. 17 at a session titled “Cloudy with a Chance of Solar Flares: Quantifying the Risk of Space Weather” at the annual meeting of the American Association for the Advancement of Science in Boston.

A Feb. 17, 2017 Vanderbilt University news release (also on EurekAlert), which originated the news item, expands on  the theme,

When cosmic rays traveling at fractions of the speed of light strike the Earth’s atmosphere they create cascades of secondary particles including energetic neutrons, muons, pions and alpha particles. Millions of these particles strike your body each second. Despite their numbers, this subatomic torrent is imperceptible and has no known harmful effects on living organisms. However, a fraction of these particles carry enough energy to interfere with the operation of microelectronic circuitry. When they interact with integrated circuits, they may alter individual bits of data stored in memory. This is called a single-event upset or SEU.

Since it is difficult to know when and where these particles will strike and they do not do any physical damage, the malfunctions they cause are very difficult to characterize. As a result, determining the prevalence of SEUs is not easy or straightforward. “When you have a single bit flip, it could have any number of causes. It could be a software bug or a hardware flaw, for example. The only way you can determine that it is a single-event upset is by eliminating all the other possible causes,” Bhuva explained.

There have been a number of incidents that illustrate how serious the problem can be, Bhuva reported. For example, in 2003 in the town of Schaerbeek, Belgium a bit flip in an electronic voting machine added 4,096 extra votes to one candidate. The error was only detected because it gave the candidate more votes than were possible, and it was traced to a single bit flip in the machine’s register. In 2008, the avionics system of a Qantas passenger jet flying from Singapore to Perth appeared to suffer a single-event upset that caused the autopilot to disengage. As a result, the aircraft dove 690 feet in only 23 seconds, injuring about a third of the passengers seriously enough that the aircraft diverted to the nearest airstrip. In addition, there have been a number of unexplained glitches in airline computers – some of which experts feel must have been caused by SEUs – that have resulted in the cancellation of hundreds of flights and significant economic losses.

An analysis of SEU failure rates for consumer electronic devices performed by Ritesh Mastipuram and Edwin Wee at Cypress Semiconductor on a previous generation of technology shows how prevalent the problem may be. Their results were published in 2004 in Electronic Design News and provided the following estimates:

  • A simple cell phone with 500 kilobytes of memory should only have one potential error every 28 years.
  • A router farm like those used by Internet providers with only 25 gigabytes of memory may experience one potential networking error that interrupts their operation every 17 hours.
  • A person flying in an airplane at 35,000 feet (where radiation levels are considerably higher than they are at sea level) who is working on a laptop with 500 kilobytes of memory may experience one potential error every five hours.
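Estimates like these can be converted into the industry’s standard unit, failures per billion hours of operation, with a back-of-envelope calculation. The sketch below uses only the cell-phone figure and assumes the error interval scales linearly with memory size; real rates also depend on altitude and process technology, so treat it as illustrative only:

```python
HOURS_PER_YEAR = 24 * 365.25

def implied_rate_per_mbit(memory_bytes: float, hours_per_error: float) -> float:
    """Failure rate per megabit, in failures per 10**9 hours of operation."""
    whole_memory_rate = 1e9 / hours_per_error   # failures per 1e9 h, whole memory
    megabits = memory_bytes * 8 / 1e6
    return whole_memory_rate / megabits

# Cypress cell-phone estimate: 500 kilobytes, one error every 28 years.
rate = implied_rate_per_mbit(500e3, 28 * HOURS_PER_YEAR)
# works out to roughly 1,000 per megabit, in the range often quoted
# for SRAM of that era
```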

Bhuva is a member of Vanderbilt’s Radiation Effects Research Group, which was established in 1987 and is the largest academic program in the United States that studies the effects of radiation on electronic systems. The group’s primary focus was on military and space applications. Since 2001, the group has also been analyzing radiation effects on consumer electronics in the terrestrial environment. They have studied this phenomenon in the last eight generations of computer chip technology, including the current generation that uses 3D transistors (known as FinFET) that are only 16 nanometers in size. The 16-nanometer study was funded by a group of top microelectronics companies, including Altera, ARM, AMD, Broadcom, Cisco Systems, Marvell, MediaTek, Renesas, Qualcomm, Synopsys, and TSMC.

“The semiconductor manufacturers are very concerned about this problem because it is getting more serious as the size of the transistors in computer chips shrink and the power and capacity of our digital systems increase,” Bhuva said. “In addition, microelectronic circuits are everywhere and our society is becoming increasingly dependent on them.”

To determine the rate of SEUs in 16-nanometer chips, the Vanderbilt researchers took samples of the integrated circuits to the Irradiation of Chips and Electronics (ICE) House at Los Alamos National Laboratory. There they exposed them to a neutron beam and analyzed how many SEUs the chips experienced. Experts measure the failure rate of microelectronic circuits in a unit called a FIT, which stands for failure in time. One FIT is one failure per transistor in one billion hours of operation. That may seem infinitesimal, but it adds up extremely quickly with billions of transistors in many of our devices and billions of electronic systems in use today (the number of smartphones alone is in the billions). Most electronic components have failure rates measured in the hundreds and thousands of FITs.
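The FIT arithmetic shows how a tiny per-transistor rate becomes a steady stream of upsets across a fleet of devices. A sketch with made-up but plausible numbers:

```python
def fleet_upsets_per_hour(fit_per_transistor: float,
                          transistors_per_chip: float,
                          chips_in_use: float) -> float:
    """One FIT = one failure per 10**9 hours; independent rates simply add."""
    chip_fit = fit_per_transistor * transistors_per_chip  # failures / 1e9 h / chip
    return chip_fit * chips_in_use / 1e9

# Hypothetical figures: 1e-6 FIT per transistor, 2 billion transistors per chip,
# a billion chips in service.
rate = fleet_upsets_per_hour(1e-6, 2e9, 1e9)  # thousands of upsets every hour
```

Even though any one chip would run for decades between upsets at this rate, the fleet as a whole sees them constantly.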


Trends in single event upset failure rates at the individual transistor, integrated circuit and system or device level for the three most recent manufacturing technologies. (Bharat Bhuva, Radiation Effects Research Group, Vanderbilt University)

“Our study confirms that this is a serious and growing problem,” said Bhuva. “This did not come as a surprise. Through our research on radiation effects on electronic circuits developed for military and space applications, we have been anticipating such effects on electronic systems operating in the terrestrial environment.”

Although the details of the Vanderbilt studies are proprietary, Bhuva described the general trend that they have found in the last three generations of integrated circuit technology: 28-nanometer, 20-nanometer and 16-nanometer.

As transistor sizes have shrunk, they have required less and less electrical charge to represent a logical bit. So the likelihood that one bit will “flip” from 0 to 1 (or 1 to 0) when struck by an energetic particle has been increasing. This has been partially offset by the fact that as the transistors have gotten smaller they have become smaller targets so the rate at which they are struck has decreased.

More significantly, the current generation of 16-nanometer circuits uses a 3D architecture that replaced the previous 2D architecture and has proven to be significantly less susceptible to SEUs. Although this improvement has been partially offset by the increase in the number of transistors in each chip, the failure rate at the chip level has still dropped slightly. However, the increase in the total number of transistors being used in new electronic systems has meant that the SEU failure rate at the device level has continued to rise.

Unfortunately, it is not practical to simply shield microelectronics from these energetic particles. For example, it would take more than 10 feet of concrete to keep a circuit from being zapped by energetic neutrons. However, there are ways to design computer chips to dramatically reduce their vulnerability.
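One standard hardening technique, not detailed in the article but ubiquitous in error-correcting (ECC) memory, is a Hamming code that can locate and repair any single flipped bit in a stored word. A minimal Hamming(7,4) sketch:

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit Hamming(7,4) code word.

    Positions run 1..7; parity bits sit at positions 1, 2 and 4."""
    c = [0] * 8  # index 0 unused so positions match textbook numbering
    c[3], c[5], c[6], c[7] = data
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def hamming74_correct(code):
    """Correct at most one flipped bit and return the 4 data bits."""
    c = [0] + list(code)
    # Each parity check contributes one bit of the error's position.
    syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                + 2 * (c[2] ^ c[3] ^ c[6] ^ c[7])
                + 4 * (c[4] ^ c[5] ^ c[6] ^ c[7]))
    if syndrome:
        c[syndrome] ^= 1  # syndrome equals the position of the flipped bit
    return [c[3], c[5], c[6], c[7]]

# A stored nibble survives any single upset in its code word:
word = hamming74_encode([1, 0, 1, 1])
```

The price is three extra bits per four data bits; production ECC uses longer codes with much lower overhead.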

For cases where reliability is absolutely critical, you can simply design the processors in triplicate and have them vote. Bhuva pointed out: “The probability that SEUs will occur in two of the circuits at the same time is vanishingly small. So if two circuits produce the same result it should be correct.” This is the approach that NASA used to maximize the reliability of spacecraft computer systems.
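The voting scheme Bhuva describes is known as triple modular redundancy (TMR); at the bit level it reduces to a 2-of-3 majority function, as this sketch shows:

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority: each output bit agrees with at least two inputs."""
    return (a & b) | (a & c) | (b & c)

# Three redundant copies of a result; one copy has suffered a bit flip.
good = 0b1011
corrupted = good ^ 0b0100             # one bit inverted by an upset
result = majority_vote(good, corrupted, good)  # the flip is outvoted
```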

The good news, Bhuva said, is that the aviation, medical equipment, IT, transportation, communications, financial and power industries are all aware of the problem and are taking steps to address it. “It is only the consumer electronics sector that has been lagging behind in addressing this problem.”

The engineer’s bottom line: “This is a major problem for industry and engineers, but it isn’t something that members of the general public need to worry much about.”

That’s fascinating and I hope the consumer electronics industry catches up with this ‘alien invasion’ issue. Finally, the ‘bit flips’ made me think of the 1956 movie ‘Invasion of the Body Snatchers‘.

Drive to operationalize transistors that outperform silicon gets a boost

Dexter Johnson has written a Jan. 19, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers]) about work which could lead to supplanting silicon-based transistors with carbon nanotube-based transistors in the future (Note: Links have been removed),

The end appears nigh for scaling down silicon-based complementary metal-oxide semiconductor (CMOS) transistors, with some experts seeing the cutoff date as early as 2020.

While carbon nanotubes (CNTs) have long been among the nanomaterials investigated to serve as replacement for silicon in CMOS field-effect transistors (FETs) in a post-silicon future, they have always been bogged down by some frustrating technical problems. But, with some of the main technical showstoppers having been largely addressed—like sorting between metallic and semiconducting carbon nanotubes—the stage has been set for CNTs to start making their presence felt a bit more urgently in the chip industry.

Peking University scientists in China have now developed carbon nanotube field-effect transistors (CNT FETs) having a critical dimension—the gate length—of just five nanometers that would outperform silicon-based CMOS FETs at the same scale. The researchers claim in the journal Science that this marks the first time that sub-10 nanometer CNT CMOS FETs have been reported.

More important than simply being first, the Peking group showed that their CNT-based FETs can operate faster and at a lower supply voltage than their silicon-based counterparts.

A Jan. 20, 2017 article by Bob Yirka provides more insight into the work at Peking University,

One of the most promising candidates is carbon nanotubes—due to their unique properties, transistors based on them could be smaller, faster and more efficient. Unfortunately, the difficulty in growing carbon nanotubes and their sometimes persnickety nature means that a way to make them and mass produce them has not been found. In this new effort, the researchers report on a method of creating carbon nanotube transistors that are suitable for testing, but not mass production.

To create the transistors, the researchers took a novel approach – instead of growing carbon nanotubes that had certain desired properties, they grew some, placed them randomly on a silicon surface, and then added electronics that would work with the properties they had – clearly not a strategy that would work for mass production, but one that allowed for building a carbon nanotube transistor that could be tested to verify theories about its performance. Realizing there would still be scaling problems using traditional electrodes, the researchers built a new kind by etching very tiny sheets of graphene. The result was a very tiny transistor, the team reports, capable of moving more current than a standard CMOS transistor using just half of the normal amount of voltage. It was also faster, courtesy of a much shorter switching delay of just 70 femtoseconds.
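The advantage of running at half the supply voltage follows from the standard switching-energy relation E = CV²/2: halving V cuts the energy per switching event to a quarter. A quick illustration (the capacitance value is arbitrary, not taken from the paper):

```python
def switching_energy(capacitance_farads: float, voltage_volts: float) -> float:
    """Energy to charge a gate capacitance once: E = C * V**2 / 2."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

# Halving the supply voltage quarters the per-switch energy,
# independent of the (here arbitrary) capacitance value.
ratio = switching_energy(1e-18, 0.4) / switching_energy(1e-18, 0.8)
```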

Peking University has published an edited and more comprehensive version of the article first reported by Lisa Zyga and edited by Arthars,

Now in a new paper published in Nano Letters, researchers Tian Pei, et al., at Peking University in Beijing, China, have developed a modular method for constructing complicated integrated circuits (ICs) made from many FETs on individual CNTs. To demonstrate, they constructed an 8-bit BUS system–a circuit that is widely used for transferring data in computers–that contains 46 FETs on six CNTs. This is the most complicated CNT IC fabricated to date, and the fabrication process is expected to lead to even more complex circuits.

SEM image of an eight-transistor (8-T) unit that was fabricated on two CNTs (marked with two white dotted lines). The scale bar is 100 μm. (Copyright: 2014 American Chemical Society)

Ever since the first CNT FET was fabricated in 1998, researchers have been working to improve CNT-based electronics. As the scientists explain in their paper, semiconducting CNTs are promising candidates for replacing silicon wires because they are thinner, which offers better scaling-down potential, and also because they have a higher carrier mobility, resulting in higher operating speeds.

Yet CNT-based electronics still face challenges. One of the most significant challenges is obtaining arrays of semiconducting CNTs while removing the less-suitable metallic CNTs. Although scientists have devised a variety of ways to separate semiconducting and metallic CNTs, these methods almost always result in damaged semiconducting CNTs with degraded performance.

To get around this problem, researchers usually build ICs on single CNTs, which can be individually selected based on their condition. It’s difficult to use more than one CNT because no two are alike: they each have slightly different diameters and properties that affect performance. However, using just one CNT limits the complexity of these devices to simple logic and arithmetical gates.

The 8-T unit can be used as the basic building block of a variety of ICs other than BUS systems, making this modular method a universal and efficient way to construct large-scale CNT ICs. Building on their previous research, the scientists hope to explore these possibilities in the future.

“In our earlier work, we showed that a carbon nanotube based field-effect transistor is about five (n-type FET) to ten (p-type FET) times faster than its silicon counterparts, but uses much less energy, about a few percent of that of similar sized silicon transistors,” Peng said.

“In the future, we plan to construct large-scale integrated circuits that outperform silicon-based systems. These circuits are faster, smaller, and consume much less power. They can also work at extremely low temperatures (e.g., in space) and moderately high temperatures (potentially no cooling system required), on flexible and transparent substrates, and potentially be bio-compatible.”

Here’s a link to and a citation for the paper,

Scaling carbon nanotube complementary transistors to 5-nm gate lengths by Chenguang Qiu, Zhiyong Zhang, Mengmeng Xiao, Yingjun Yang, Donglai Zhong, Lian-Mao Peng. Science 20 Jan 2017: Vol. 355, Issue 6322, pp. 271-276 DOI: 10.1126/science.aaj1628

This paper is behind a paywall.

New electrical contact technology to exploit nanoscale catalytic effects

A Jan. 20, 2017 news item on Nanotechnology Now announces research into nanoscale electrical contact technology,

Research by scientists at Swansea University [UK] is helping to meet the challenge of incorporating nanoscale structures into future semiconductor devices that will create new technologies and impact on all aspects of everyday life.

Dr Alex Lord and Professor Steve Wilks from the Centre for Nanohealth led the collaborative research published in Nano Letters. The research team looked at ways to engineer electrical contact technology on minute scales with simple and effective modifications to nanowires that can be used to develop enhanced devices based on the nanomaterials. Well-defined electrical contacts are essential for any electrical circuit and electronic device because they control the flow of electricity that is fundamental to a device’s operation.

Everyday materials are being scaled down to nanometres (one million times smaller than a millimetre on a standard ruler) by scientists around the world and are seen as the future of electronic devices. The scientific and engineering advances are leading to new technologies such as energy-producing clothing to power our personal gadgets and sensors to monitor our health and the surrounding environment.

Over the coming years this will make a massive contribution to the explosion that is the Internet of Things, connecting everything from our homes to our cars into a web of communication. All of these new technologies require similar advances in electrical circuits, and especially electrical contacts, that allow the devices to work correctly with electricity.

A Jan. 19, 2017 Swansea University press release (also on EurekAlert), which originated the news item, explains in greater detail,

Professor Steve Wilks said: “Nanotechnology has delivered new materials and new technologies and the applications of nanotechnology will continue to expand over the coming decades with much of its usefulness stemming from effects that occur at the atomic- or nano-scale. With the advent of nanotechnology, new technologies have emerged such as chemical and biological sensors, quantum computing, energy harvesting, lasers, and environmental and photon-detectors, but there is a pressing need to develop new electrical contact preparation techniques to ensure these devices become an everyday reality.”

“Traditional methods of engineering electrical contacts have been applied to nanomaterials but often neglect the nanoscale effects that nanoscientists have worked so hard to uncover.  Currently, there isn’t a design toolbox to make electrical contacts of chosen properties to nanomaterials and in some respects the research is lagging behind our potential application of the enhanced materials.”

The Swansea research team1 used specialist experimental equipment and collaborated with Professor Quentin Ramasse of the SuperSTEM Laboratory, Science and Facilities Technology Council.  The scientists were able to physically interact with the nanostructures and measure how the nanoscale modifications affected the electrical performance.

Their experiments found, for the first time, that simple changes to the catalyst edge can turn on or turn off the dominant electrical conduction and, most importantly, reveal a powerful technique that will allow nanoengineers to select the properties of manufacturable nanowire devices.

Dr Lord said: “The experiments had a simple premise but were challenging to optimise and allow atomic-scale imaging of the interfaces. However, it was essential to this study and will allow many more materials to be investigated in a similar way.”

“This research now gives us an understanding of these new effects and will allow engineers in the future to reliably produce electrical contacts to these nanomaterials which is essential for the materials to be used in the technologies of tomorrow.

“In the near future this work can help enhance current nanotechnology devices such as biosensors and also lead to new technologies such as Transient Electronics that are devices that diminish and vanish without a trace which is an essential property when they are applied as diagnostic tools inside the human body.”

1. Lord, A. M., Ramasse, Q. M., Kepaptsoglou, D. M., Evans, J. E., Davies, P. R., Ward, M. B. & Wilks, S. P. 2016 Modifying the Interface Edge to Control the Electrical Transport Properties of Nanocontacts to Nanowires. Nano Lett. (doi:10.1021/acs.nanolett.6b03699).
2. Lord, A. M. et al. 2015 Controlling the electrical transport properties of nanocontacts to nanowires. Nano Lett. 15, 4248–4254. (doi:10.1021/nl503743t)

Both papers are open access.

Going underground to observe atoms in a bid for better batteries

A Jan. 16, 2017 news item on ScienceDaily describes what lengths researchers at Stanford University (US) will go to in pursuit of their goals,

In a lab 18 feet below the Engineering Quad of Stanford University, researchers in the Dionne lab camped out with one of the most advanced microscopes in the world to capture an unimaginably small reaction.

The lab members conducted arduous experiments — sometimes requiring a continuous 30 hours of work — to capture real-time, dynamic visualizations of atoms that could someday help our phone batteries last longer and our electric vehicles go farther on a single charge.

Toiling underground in the tunneled labs, they recorded atoms moving in and out of nanoparticles less than 100 nanometers in size, with a resolution approaching 1 nanometer.

A Jan. 16, 2017 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, provides more detail,

“The ability to directly visualize reactions in real time with such high resolution will allow us to explore many unanswered questions in the chemical and physical sciences,” said Jen Dionne, associate professor of materials science and engineering at Stanford and senior author of the paper detailing this work, published Jan. 16 [2017] in Nature Communications. “While the experiments are not easy, they would not be possible without the remarkable advances in electron microscopy from the past decade.”

Their experiments focused on hydrogen moving into palladium, a class of reactions known as an intercalation-driven phase transition. This reaction is physically analogous to how ions flow through a battery or fuel cell during charging and discharging. Observing this process in real time provides insight into why nanoparticles make better electrodes than bulk materials and fits into Dionne’s larger interest in energy storage devices that can charge faster, hold more energy and stave off permanent failure.

Technical complexity and ghosts

For these experiments, the Dionne lab created palladium nanocubes, a form of nanoparticle, that ranged in size from about 15 to 80 nanometers, and then placed them in a hydrogen gas environment within an electron microscope. The researchers knew that hydrogen would change both the dimensions of the lattice and the electronic properties of the nanoparticle. They thought that, with the appropriate microscope lens and aperture configuration, techniques called scanning transmission electron microscopy and electron energy loss spectroscopy might show hydrogen uptake in real time.

After months of trial and error, the results were extremely detailed, real-time videos of the changes in the particle as hydrogen was introduced. The entire process was so complicated and novel that the first time it worked, the lab didn’t even have the video software running, leading them to capture their first movie success on a smartphone.

Following these videos, they examined the nanocubes during intermediate stages of hydrogenation using a second technique in the microscope, called dark-field imaging, which relies on scattered electrons. In order to pause the hydrogenation process, the researchers plunged the nanocubes into a bath of liquid nitrogen mid-reaction, dropping their temperature to 100 kelvin (-280 °F). These dark-field images served as a way to check that the application of the electron beam hadn’t influenced the previous observations and allowed the researchers to see detailed structural changes during the reaction.

“With the average experiment spanning about 24 hours at this low temperature, we faced many instrument problems and called Ai Leen Koh [co-author and research scientist at Stanford’s Nano Shared Facilities] at the weirdest hours of the night,” recalled Fariah Hayee, co-lead author of the study and graduate student in the Dionne lab. “We even encountered a ‘ghost-of-the-joystick problem,’ where the joystick seemed to move the sample uncontrollably for some time.”

While most electron microscopes operate with the specimen held in a vacuum, the microscope used for this research has the advanced ability to allow the researchers to introduce liquids or gases to their specimen.

“We benefit tremendously from having access to one of the best microscope facilities in the world,” said Tarun Narayan, co-lead author of this study and recent doctoral graduate from the Dionne lab. “Without these specific tools, we wouldn’t be able to introduce hydrogen gas or cool down our samples enough to see these processes take place.”

Pushing out imperfections

Aside from being a widely applicable proof of concept for this suite of visualization techniques, watching the atoms move provides greater validation for the high hopes many scientists have for nanoparticle energy storage technologies.

The researchers saw the atoms move in through the corners of the nanocube and observed the formation of various imperfections within the particle as hydrogen moved within it. This sounds like an argument against the promise of nanoparticles, but it’s not the whole story.

“The nanoparticle has the ability to self-heal,” said Dionne. “When you first introduce hydrogen, the particle deforms and loses its perfect crystallinity. But once the particle has absorbed as much hydrogen as it can, it transforms itself back to a perfect crystal again.”

The researchers describe this as imperfections being “pushed out” of the nanoparticle. This ability of the nanocube to self-heal makes it more durable, a key property needed for energy storage materials that can sustain many charge and discharge cycles.

Looking toward the future

As the efficiency of renewable energy generation increases, the need for higher quality energy storage is more pressing than ever. It’s likely that the future of storage will rely on new chemistries, and the findings of this research, including the microscopy techniques the researchers refined along the way, will apply to nearly any solution in those categories.

For its part, the Dionne lab has many directions it can go from here. The team could look at a variety of material compositions, or compare how the sizes and shapes of nanoparticles affect the way they work, and, soon, take advantage of new upgrades to their microscope to study light-driven reactions. At present, Hayee has moved on to experimenting with nanorods, which have more surface area for the ions to move through, promising potentially even faster kinetics.

Here’s a link to and a citation for the paper,

Direct visualization of hydrogen absorption dynamics in individual palladium nanoparticles by Tarun C. Narayan, Fariah Hayee, Andrea Baldi, Ai Leen Koh, Robert Sinclair, & Jennifer A. Dionne. Nature Communications 8, Article number: 14020 (2017) doi:10.1038/ncomms14020 Published online: 16 January 2017

This paper is open access.

Nanotechnology cracks Wall Street (Daily)

David Dittman’s Jan. 11, 2017 article for Wall Street Daily portrays a great deal of excitement about nanotechnology and the possibilities (I’m highlighting the article because it showcases Dexter Johnson’s Nanoclast blog),

When we talk about next-generation aircraft, next-generation wearable biomedical devices, and next-generation fiber-optic communication, the consistent theme is nano: nanotechnology, nanomaterials, nanophotonics.

For decades, manufacturers have used carbon fiber to make lighter sports equipment, stronger aircraft, and better textiles.

Now, as Dexter Johnson of IEEE [Institute of Electrical and Electronics Engineers] Spectrum reports [on his Nanoclast blog], carbon nanotubes will help make aerospace composites more efficient:

Now researchers at the University of Surrey’s Advanced Technology Institute (ATI), the University of Bristol’s Advanced Composite Centre for Innovation and Science (ACCIS), and aerospace company Bombardier [headquartered in Montréal, Canada] have collaborated on the development of a carbon nanotube-enabled material set to replace the polymer sizing. The reinforced polymers produced with this new material have enhanced electrical and thermal conductivity, opening up new functional possibilities. It will be possible, say the British researchers, to embed gadgets such as sensors and energy harvesters directly into the material.

When it comes to flight, lighter is better, so building sensors and energy harvesters into the body of aircraft marks a significant leap forward.

Johnson also reports for IEEE Spectrum on a “novel hybrid nanomaterial” based on oscillations of electrons — a major advance in nanophotonics:

Researchers at the University of Texas at Austin have developed a hybrid nanomaterial that enables the writing, erasing and rewriting of optical components. The researchers believe that this nanomaterial and the techniques used in exploiting it could create a new generation of optical chips and circuits.

Of course, the concept of rewritable optics is not altogether new; it forms the basis of optical storage media like CDs and DVDs. However, CDs and DVDs require bulky light sources, optical media and light detectors. The advantage of the rewritable integrated photonic circuits developed here is that it all happens on a 2-D material.

“To develop rewritable integrated nanophotonic circuits, one has to be able to confine light within a 2-D plane, where the light can travel in the plane over a long distance and be arbitrarily controlled in terms of its propagation direction, amplitude, frequency and phase,” explained Yuebing Zheng, a professor at the University of Texas who led the research… “Our material, which is a hybrid, makes it possible to develop rewritable integrated nanophotonic circuits.”

Who knew that mixing graphene with homemade Silly Putty would create a potentially groundbreaking new material that could make “wearables” actually useful?

Next-generation biomedical devices will undoubtedly include some of this stuff:

A dash of graphene can transform the stretchy goo known as Silly Putty into a pressure sensor able to monitor a human pulse or even track the dainty steps of a small spider.

The material, dubbed G-putty, could be developed into a device that continuously monitors blood pressure, its inventors hope.

The guys who made G-putty often rely on “household stuff” in their research.

It’s nice to see a blogger’s work be highlighted. Congratulations Dexter.

G-putty was mentioned here in a Dec. 30, 2016 posting which also includes a link to Dexter’s piece on the topic.