Category Archives: electronics

Carbon nanotubes that can outperform silicon

According to a Sept. 2, 2016 news item on phys.org, researchers at the University of Wisconsin-Madison have produced carbon nanotube transistors that outperform state-of-the-art silicon transistors,

For decades, scientists have tried to harness the unique properties of carbon nanotubes to create high-performance electronics that are faster or consume less power—resulting in longer battery life, faster wireless communication and faster processing speeds for devices like smartphones and laptops.

But a number of challenges have impeded the development of high-performance transistors made of carbon nanotubes, tiny cylinders made of carbon just one atom thick. Consequently, their performance has lagged far behind semiconductors such as silicon and gallium arsenide used in computer chips and personal electronics.

Now, for the first time, University of Wisconsin-Madison materials engineers have created carbon nanotube transistors that outperform state-of-the-art silicon transistors.

Led by Michael Arnold and Padma Gopalan, UW-Madison professors of materials science and engineering, the team’s carbon nanotube transistors achieved current that’s 1.9 times higher than silicon transistors. …

A Sept. 2, 2016 University of Wisconsin-Madison news release (also on EurekAlert) by Adam Malecek, which originated the news item, describes the research in more detail and notes that the technology has been patented,

“This achievement has been a dream of nanotechnology for the last 20 years,” says Arnold. “Making carbon nanotube transistors that are better than silicon transistors is a big milestone. This breakthrough in carbon nanotube transistor performance is a critical advance toward exploiting carbon nanotubes in logic, high-speed communications, and other semiconductor electronics technologies.”

This advance could pave the way for carbon nanotube transistors to replace silicon transistors and continue delivering the performance gains the computer industry relies on and that consumers demand. The new transistors are particularly promising for wireless communications technologies that require a lot of current flowing across a relatively small area.

As some of the best electrical conductors ever discovered, carbon nanotubes have long been recognized as a promising material for next-generation transistors.

Carbon nanotube transistors should be able to perform five times faster or use five times less energy than silicon transistors, according to extrapolations from single nanotube measurements. The nanotube’s ultra-small dimension makes it possible to rapidly change a current signal traveling across it, which could lead to substantial gains in the bandwidth of wireless communications devices.

But researchers have struggled to isolate purely semiconducting carbon nanotubes, which are crucial because metallic nanotube impurities act like copper wires and disrupt their semiconducting properties — like a short in an electronic device.

The UW–Madison team used polymers to selectively sort out the semiconducting nanotubes, achieving a solution of ultra-high-purity semiconducting carbon nanotubes.

“We’ve identified specific conditions in which you can get rid of nearly all metallic nanotubes, where we have less than 0.01 percent metallic nanotubes,” says Arnold.

Placement and alignment of the nanotubes is also difficult to control.

To make a good transistor, the nanotubes need to be aligned in just the right order, with just the right spacing, when assembled on a wafer. In 2014, the UW–Madison researchers overcame that challenge when they announced a technique, called “floating evaporative self-assembly,” that gives them this control.

The nanotubes must make good electrical contacts with the metal electrodes of the transistor. Because the polymer the UW–Madison researchers use to isolate the semiconducting nanotubes also acts like an insulating layer between the nanotubes and the electrodes, the team “baked” the nanotube arrays in a vacuum oven to remove the insulating layer. The result: excellent electrical contacts to the nanotubes.

The researchers also developed a treatment that removes residues from the nanotubes after they’re processed in solution.

“In our research, we’ve shown that we can simultaneously overcome all of these challenges of working with nanotubes, and that has allowed us to create these groundbreaking carbon nanotube transistors that surpass silicon and gallium arsenide transistors,” says Arnold.

The researchers benchmarked their carbon nanotube transistor against a silicon transistor of the same size, geometry and leakage current in order to make an apples-to-apples comparison.

They are continuing to work on adapting their device to match the geometry used in silicon transistors, which get smaller with each new generation. Work is also underway to develop high-performance radio frequency amplifiers that may be able to boost a cellphone signal. While the researchers have already scaled their alignment and deposition process to 1 inch by 1 inch wafers, they’re working on scaling the process up for commercial production.

Arnold says it’s exciting to finally reach the point where researchers can exploit the nanotubes to attain performance gains in actual technologies.

“There has been a lot of hype about carbon nanotubes that hasn’t been realized, and that has kind of soured many people’s outlook,” says Arnold. “But we think the hype is deserved. It has just taken decades of work for the materials science to catch up and allow us to effectively harness these materials.”

The researchers have patented their technology through the Wisconsin Alumni Research Foundation.

Interestingly, at least some of the research was publicly funded according to the news release,

Funding from the National Science Foundation, the Army Research Office and the Air Force supported their work.

Will the public ever benefit financially from this research?

Treating graphene with lasers for paper-based electronics

Engineers at Iowa State University have found a way they hope will make it easier to commercialize graphene. A Sept. 1, 2016 news item on phys.org describes the research,

The researchers in Jonathan Claussen’s lab at Iowa State University (who like to call themselves nanoengineers) have been looking for ways to use graphene and its amazing properties in their sensors and other technologies.

Graphene is a wonder material: The carbon honeycomb is just an atom thick. It’s great at conducting electricity and heat; it’s strong and stable. But researchers have struggled to move beyond tiny lab samples for studying its material properties to larger pieces for real-world applications.

Recent projects that used inkjet printers to print multi-layer graphene circuits and electrodes had the engineers thinking about using it for flexible, wearable and low-cost electronics. For example, “Could we make graphene at scales large enough for glucose sensors?” asked Suprem Das, an Iowa State postdoctoral research associate in mechanical engineering and an associate of the U.S. Department of Energy’s Ames Laboratory.

But there were problems with the existing technology. Once printed, the graphene had to be treated to improve electrical conductivity and device performance. That usually meant high temperatures or chemicals – both could degrade flexible or disposable printing surfaces such as plastic films or even paper.

Das and Claussen came up with the idea of using lasers to treat the graphene. Claussen, an Iowa State assistant professor of mechanical engineering and an Ames Laboratory associate, worked with Gary Cheng, an associate professor at Purdue University’s School of Industrial Engineering, to develop and test the idea.

A Sept. 1, 2016 Iowa State University news release (also on EurekAlert), which originated the news item, provides more detail about the intellectual property as well as the technology,

… They found treating inkjet-printed, multi-layer graphene electric circuits and electrodes with a pulsed-laser process improves electrical conductivity without damaging paper, polymers or other fragile printing surfaces.

“This creates a way to commercialize and scale-up the manufacturing of graphene,” Claussen said.

Two major grants are supporting the project and related research: a three-year grant from the National Institute of Food and Agriculture, U.S. Department of Agriculture, under award number 11901762 and a three-year grant from the Roy J. Carver Charitable Trust. Iowa State’s College of Engineering and department of mechanical engineering are also supporting the research.

The Iowa State Research Foundation Inc. has filed for a patent on the technology.

“The breakthrough of this project is transforming the inkjet-printed graphene into a conductive material capable of being used in new applications,” Claussen said.

Those applications could include sensors with biological applications, energy storage systems, electrical conducting components and even paper-based electronics.

To make all that possible, the engineers developed computer-controlled laser technology that selectively irradiates inkjet-printed graphene oxide. The treatment removes ink binders and reduces graphene oxide to graphene – physically stitching together millions of tiny graphene flakes. The process makes electrical conductivity more than a thousand times better.

“The laser works with a rapid pulse of high-energy photons that do not destroy the graphene or the substrate,” Das said. “They heat locally. They bombard locally. They process locally.”

That localized, laser processing also changes the shape and structure of the printed graphene from a flat surface to one with raised, 3-D nanostructures. The engineers say the 3-D structures are like tiny petals rising from the surface. The rough and ridged structure increases the electrochemical reactivity of the graphene, making it useful for chemical and biological sensors.

All of that, according to Claussen’s team of nanoengineers, could move graphene to commercial applications.

“This work paves the way for not only paper-based electronics with graphene circuits,” the researchers wrote in their paper, “it enables the creation of low-cost and disposable graphene-based electrochemical electrodes for myriad applications including sensors, biosensors, fuel cells and (medical) devices.”

Here’s a link to and a citation for the paper,

3D nanostructured inkjet printed graphene via UV-pulsed laser irradiation enables paper-based electronics and electrochemical devices by Suprem R. Das, Qiong Nian, Allison A. Cargill, John A. Hondred, Shaowei Ding, Mojib Saei, Gary J. Cheng, and Jonathan C. Claussen. Nanoscale, 2016, 8, 15870-15879 DOI: 10.1039/C6NR04310K First published online 12 Jul 2016

This paper is open access but you do need to have registered for your free account to access the material.

Nanoavalanches in glass

An Aug. 24, 2016 news item on Nanowerk takes a rather roundabout way to describe some new findings about glass (Note: A link has been removed),

The main purpose of McLaren’s exchange study in Marburg was to learn more about a complex process involving transformations in glass that occur under intense electrical and thermal conditions. New understanding of these mechanisms could lead the way to more energy-efficient glass manufacturing, and even glass supercapacitors that leapfrog the performance of batteries now used for electric cars and solar energy.

“This technology is relevant to companies seeking the next wave of portable, reliable energy,” said Himanshu Jain, McLaren’s advisor and the T. L. Diamond Distinguished Chair in Materials Science and Engineering at Lehigh and director of its International Materials Institute for New Functionality in Glass. “A breakthrough in the use of glass for power storage could unleash a torrent of innovation in the transportation and energy sectors, and even support efforts to curb global warming.”

As part of his doctoral research, McLaren discovered that applying a direct current field across glass reduced its melting temperature. In their experiments, he and Jain placed a block of glass between a cathode and anode, and then exerted steady pressure on the glass while gradually heating it. McLaren and Jain, together with colleagues at the University of Colorado, published their discovery in Applied Physics Letters (“Electric field-induced softening of alkali silicate glasses”).

The implications of the finding were intriguing. In addition to making glass formulation viable at lower temperatures and reducing energy needs, designers using electrical current in glass manufacturing would have a tool to make precise manipulations not possible with heat alone.

“You could make a mask for the glass, for example, and apply an electrical field on a micron scale,” said Jain. “This would allow you to deform the glass with high precision, and soften it in a far more selective way than you could with heat, which gets distributed throughout the glass.”

Though McLaren and Jain had isolated the phenomenon and determined how to dial up the variables for optimal results, they did not yet fully understand the mechanisms behind it. McLaren and Jain had been following the work of Dr. Bernard Roling at the University of Marburg, who had discovered some remarkable characteristics of glass using electro-thermal poling, a technique that employs both temperature manipulation and electrical current to create a charge in normally inert glass. The process imparts useful optical and even bioactive qualities to glass.

Roling invited McLaren to spend a semester at Marburg to analyze the behavior of glass under electro-thermal poling, to see if it would reveal more about the fundamental science underlying what McLaren and Jain had observed in their Lehigh lab.

An Aug. 22, 2016 Lehigh University news release by Chris Quirk, which originated the news item, describes the latest work,

McLaren’s work in Marburg revealed a two-step process in which a thin sliver of the glass nearest the anode, called a depletion layer, becomes much more resistant to electrical current than the rest of the glass as alkali ions in the glass migrate away. This is followed by a catastrophic change in the layer, known as dielectric breakdown, which dramatically increases its conductivity. McLaren likens the process of dielectric breakdown to a high-speed avalanche, and uses spectroscopic analysis with electro-thermal poling as a way to see what is happening in slow motion.

“The results in Germany gave us a very good model for what is going on in the electric field-induced softening that we did here. It told us about the start conditions for where dielectric breakdown can begin,” said McLaren.

“Charlie’s work in Marburg has helped us see the kinetics of the process,” Jain said. “We could see it happening abruptly in our experiments here at Lehigh, but we now have a way to separate out what occurs specifically with the depletion layer.”

“The Marburg trip was incredibly useful professionally and enlightening personally,” said McLaren. “Scientifically, it’s always good to see your work from another vantage point, and see how other research groups interpret data or perform experiments. The group in Marburg was extremely hard-working, which I loved, and they were very supportive of each other. If someone submitted a paper, the whole group would have a barbecue to celebrate, and they always gave each other feedback on their work. Sometimes it was brutally honest––they didn’t hold back––but they were things you needed to hear.”

“Working in Marburg also showed me how to interact with a completely different group of people. You see differences in your own culture best when you have the chance to see other cultures close up. It’s always a fresh perspective.”

Here are links and citations for both the papers mentioned. The first link is for the most recent paper and second link is for the earlier work,

Depletion Layer Formation in Alkali Silicate Glasses by Electro-Thermal Poling by C. McLaren, M. Balabajew, M. Gellert, B. Roling, and H. Jain. Journal of The Electrochemical Society, 163 (9) H809-H817 (2016) DOI: 10.1149/2.0881609jes Published July 19, 2016

Electric field-induced softening of alkali silicate glasses by C. McLaren, W. Heffner, R. Tessarollo, R. Raj, and H. Jain. Appl. Phys. Lett. 107, 184101 (2015); http://dx.doi.org/10.1063/1.4934945 Published online 03 November 2015

The most recent paper (first link) appears to be open access; the earlier paper (second link) is behind a paywall.

New form of light could lead to circuits that run on photons instead of electrons

If circuits are running on photons instead of electrons, does that mean there will be no more electricity and electronics?  Apparently, the answer is not exactly. First, an Aug. 5, 2016 news item on ScienceDaily makes the announcement about photons and circuits,

New research suggests that it is possible to create a new form of light by binding light to a single electron, combining the properties of both.

According to the scientists behind the study, from Imperial College London, the coupled light and electron would have properties that could lead to circuits that work with packages of light — photons — instead of electrons.

It would also allow researchers to study quantum physical phenomena, which govern particles smaller than atoms, on a visible scale.

An Aug. 5, 2016 Imperial College London (ICL) press release, which originated the news item, describes the research further (Note: A link has been removed),

In normal materials, light interacts with a whole host of electrons present on the surface and within the material. But by using theoretical physics to model the behaviour of light and a recently-discovered class of materials known as topological insulators, Imperial researchers have found that it could interact with just one electron on the surface.

This would create a coupling that merges some of the properties of the light and the electron. Normally, light travels in a straight line, but when bound to the electron it would instead follow its path, tracing the surface of the material.

Improved electronics

In the study, published today in Nature Communications, Dr Vincenzo Giannini and colleagues modelled this interaction around a nanoparticle – a small sphere below 0.00000001 metres in diameter – made of a topological insulator.

Their models showed that as well as the light taking the property of the electron and circulating the particle, the electron would also take on some of the properties of the light. [emphasis mine]

Normally, as electrons are travelling along materials, such as electrical circuits, they will stop when faced with a defect. However, Dr Giannini’s team discovered that even if there were imperfections in the surface of the nanoparticle, the electron would still be able to travel onwards with the aid of the light.

If this could be adapted into photonic circuits, they would be more robust and less vulnerable to disruption and physical imperfections.

Quantum experiments

Dr Giannini said: “The results of this research will have a huge impact on the way we conceive light. Topological insulators were only discovered in the last decade, but are already providing us with new phenomena to study and new ways to explore important concepts in physics.”

Dr Giannini added that it should be possible to observe the phenomena he has modelled in experiments using current technology, and the team is working with experimental physicists to make this a reality.

He believes that the process that leads to the creation of this new form of light could be scaled up so that the phenomena could be observed much more easily.

Currently, quantum phenomena can only be seen when looking at very small objects or objects that have been super-cooled, but this could allow scientists to study these kinds of behaviour at room temperature.

An electron that takes on the properties of light? I find that fascinating.

Artistic image of light trapped on the surface of a nanoparticle topological insulator. Credit: Vincenzo Giannini

For those who’d like more information, here’s a link to and a citation for the paper,

Single-electron induced surface plasmons on a topological nanoparticle by G. Siroki, D.K.K. Lee, P. D. Haynes, V. Giannini. Nature Communications 7, Article number: 12375 doi:10.1038/ncomms12375 Published 05 August 2016

This paper is open access.

‘Neural dust’ could lead to introduction of electroceuticals

In case anyone is wondering, the woman who’s manipulating a prosthetic arm so she can eat or drink coffee probably has a bulky implant/docking station in her head. Right now that bulky implant is the latest and greatest innovation for tetraplegics (aka quadriplegics) as it frees, to some extent, people who’ve had no independent movement of any kind. By virtue of the juxtaposition of the footage of the woman with the ‘neural dust’ footage, they seem to be suggesting that neural dust might some day accomplish the same type of connection. At this point, hopes for the ‘neural dust’ are more modest.

An Aug. 3, 2016 news item on ScienceDaily announces the ‘neural dust’,

University of California, Berkeley engineers have built the first dust-sized, wireless sensors that can be implanted in the body, bringing closer the day when a Fitbit-like device could monitor internal nerves, muscles or organs in real time.

Because these batteryless sensors could also be used to stimulate nerves and muscles, the technology also opens the door to “electroceuticals” to treat disorders such as epilepsy or to stimulate the immune system or tamp down inflammation.

An Aug. 3, 2016 University of California at Berkeley news release (also on EurekAlert) by Robert Sanders, which originated the news item, explains further and describes the researchers’ hope that one day the neural dust could be used to control implants and prosthetics,

The so-called neural dust, which the team implanted in the muscles and peripheral nerves of rats, is unique in that ultrasound is used both to power and read out the measurements. Ultrasound technology is already well-developed for hospital use, and ultrasound vibrations can penetrate nearly anywhere in the body, unlike radio waves, the researchers say.

“I think the long-term prospects for neural dust are not only within nerves and the brain, but much broader,” said Michel Maharbiz, an associate professor of electrical engineering and computer sciences and one of the study’s two main authors. “Having access to in-body telemetry has never been possible because there has been no way to put something supertiny superdeep. But now I can take a speck of nothing and park it next to a nerve or organ, your GI tract or a muscle, and read out the data.”

Maharbiz, neuroscientist Jose Carmena, a professor of electrical engineering and computer sciences and a member of the Helen Wills Neuroscience Institute, and their colleagues will report their findings in the August 3 [2016] issue of the journal Neuron.

The sensors, which the researchers have already shrunk to a 1 millimeter cube – about the size of a large grain of sand – contain a piezoelectric crystal that converts ultrasound vibrations from outside the body into electricity to power a tiny, on-board transistor that is in contact with a nerve or muscle fiber. A voltage spike in the fiber alters the circuit and the vibration of the crystal, which changes the echo detected by the ultrasound receiver, typically the same device that generates the vibrations. The slight change, called backscatter, allows them to determine the voltage.

Motes sprinkled throughout the body

In their experiment, the UC Berkeley team powered up the passive sensors every 100 microseconds with six 540-nanosecond ultrasound pulses, which gave them a continual, real-time readout. They coated the first-generation motes – 3 millimeters long, 1 millimeter high and 4/5 millimeter thick – with surgical-grade epoxy, but they are currently building motes from biocompatible thin films which would potentially last in the body without degradation for a decade or more.
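
To make the readout scheme above concrete, here is a minimal toy model in Python. It is purely illustrative and is not the Berkeley team’s code: the linear voltage-to-echo coupling, the noise level and the spike waveform are all invented. The sketch interrogates a single mote once every 100 microseconds, lets the local nerve voltage modulate the returned echo, and recovers that voltage from the measured backscatter.

# Illustrative toy model only -- not the Berkeley group's code or parameters.
# One neural dust mote: the echo returned to the ultrasound transceiver is
# assumed to vary linearly with the voltage on the nerve fibre, and the mote
# is interrogated once every 100 microseconds (a 10 kHz readout rate).
import numpy as np

READOUT_INTERVAL = 100e-6   # s, one ultrasound interrogation every 100 microseconds
DURATION = 0.05             # s of simulated recording
BASELINE_ECHO = 1.0         # arbitrary units, echo with no nerve activity
COUPLING = 0.02             # assumed fractional echo change per mV (invented)
NOISE = 0.001               # assumed echo measurement noise (invented)

t = np.arange(0.0, DURATION, READOUT_INTERVAL)
nerve_mV = np.zeros_like(t)                    # hypothetical nerve signal:
nerve_mV[(t > 0.010) & (t < 0.011)] = 2.0      # two brief 2 mV "spikes"
nerve_mV[(t > 0.030) & (t < 0.031)] = 2.0

echo = BASELINE_ECHO * (1.0 + COUPLING * nerve_mV)            # backscatter
measured = echo + np.random.normal(0.0, NOISE, size=t.shape)  # receiver noise

recovered_mV = (measured / BASELINE_ECHO - 1.0) / COUPLING    # invert the model
print("peak recovered voltage ~ %.1f mV" % recovered_mV.max())

The real system is, of course, an analogue ultrasound measurement rather than a line of arithmetic; the point of the sketch is only that a small, known modulation of the echo is enough to reconstruct the underlying voltage.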

While the experiments so far have involved the peripheral nervous system and muscles, the neural dust motes could work equally well in the central nervous system and brain to control prosthetics, the researchers say. Today’s implantable electrodes degrade within 1 to 2 years, and all connect to wires that pass through holes in the skull. Wireless sensors – dozens to a hundred – could be sealed in, avoiding infection and unwanted movement of the electrodes.

“The original goal of the neural dust project was to imagine the next generation of brain-machine interfaces, and to make it a viable clinical technology,” said neuroscience graduate student Ryan Neely. “If a paraplegic wants to control a computer or a robotic arm, you would just implant this electrode in the brain and it would last essentially a lifetime.”

In a paper published online in 2013, the researchers estimated that they could shrink the sensors down to a cube 50 microns on a side – about 2 thousandths of an inch, or half the width of a human hair. At that size, the motes could nestle up to just a few nerve axons and continually record their electrical activity.

“The beauty is that now, the sensors are small enough to have a good application in the peripheral nervous system, for bladder control or appetite suppression, for example,” Carmena said. “The technology is not really there yet to get to the 50-micron target size, which we would need for the brain and central nervous system. Once it’s clinically proven, however, neural dust will just replace wire electrodes. This time, once you close up the brain, you’re done.”

The team is working now to miniaturize the device further, find more biocompatible materials and improve the surface transceiver that sends and receives the ultrasound, ideally using beam-steering technology to focus the sound waves on individual motes. They are now building little backpacks for rats to hold the ultrasound transceiver that will record data from implanted motes.

They’re also working to expand the motes’ ability to detect non-electrical signals, such as oxygen or hormone levels.

“The vision is to implant these neural dust motes anywhere in the body, and have a patch over the implanted site send ultrasonic waves to wake up and receive necessary information from the motes for the desired therapy you want,” said Dongjin Seo, a graduate student in electrical engineering and computer sciences. “Eventually you would use multiple implants and one patch that would ping each implant individually, or all simultaneously.”

Ultrasound vs radio

Maharbiz and Carmena conceived of the idea of neural dust about five years ago, but attempts to power an implantable device and read out the data using radio waves were disappointing. Radio attenuates very quickly with distance in tissue, so communicating with devices deep in the body would be difficult without using potentially damaging high-intensity radiation.

Maharbiz hit on the idea of ultrasound, and in 2013 published a paper with Carmena, Seo and their colleagues describing how such a system might work. “Our first study demonstrated that the fundamental physics of ultrasound allowed for very, very small implants that could record and communicate neural data,” said Maharbiz. He and his students have now created that system.

“Ultrasound is much more efficient when you are targeting devices that are on the millimeter scale or smaller and that are embedded deep in the body,” Seo said. “You can get a lot of power into it and a lot more efficient transfer of energy and communication when using ultrasound as opposed to electromagnetic waves, which has been the go-to method for wirelessly transmitting power to miniature implants.”

“Now that you have a reliable, minimally invasive neural pickup in your body, the technology could become the driver for a whole gamut of applications, things that today don’t even exist,” Carmena said.

Here’s a link to and a citation for the team’s latest paper,

Wireless Recording in the Peripheral Nervous System with Ultrasonic Neural Dust by Dongjin Seo, Ryan M. Neely, Konlin Shen, Utkarsh Singhal, Elad Alon, Jan M. Rabaey, Jose M. Carmena, and Michel M. Maharbiz. Neuron Volume 91, Issue 3, p529–539, 3 August 2016 DOI: http://dx.doi.org/10.1016/j.neuron.2016.06.034

This paper appears to be open access.

Vitamin-driven lithium-ion battery from the University of Toronto

It seems vitamins aren’t just good for health; they’re also good for batteries. My Aug. 2, 2016 post on vitamins and batteries focused on work from Harvard; this time the work is from the University of Toronto (Canada). From an Aug. 3, 2016 news item on ScienceDaily,

A team of University of Toronto chemists has created a battery that stores energy in a biologically derived unit, paving the way for cheaper consumer electronics that are easier on the environment.

The battery is similar to many commercially-available high-energy lithium-ion batteries with one important difference. It uses flavin from vitamin B2 as the cathode: the part that stores the electricity that is released when connected to a device.

“We’ve been looking to nature for a while to find complex molecules for use in a number of consumer electronics applications,” says Dwight Seferos, an associate professor in U of T’s Department of Chemistry and Canada Research Chair in Polymer Nanotechnology.

“When you take something made by nature that is already complex, you end up spending less time making new material,” says Seferos.

An Aug. 2, 2016 University of Toronto news release (also on EurekAlert) by Peter McMahon, which originated the news item, explains further,

To understand the discovery, it’s important to know that modern batteries contain three basic parts:

  • a positive terminal – the metal part that touches devices to power them – connected to a cathode inside the battery casing
  • a negative terminal connected to an anode inside the battery casing
  • an electrolyte solution, in which ions can travel between the cathode and anode electrodes

When a battery is connected to a phone, iPod, camera or other device that requires power, electrons flow from the anode – the negatively charged electrode of the device supplying current – out to the device, then into the cathode and ions migrate through the electrolyte solution to balance the charge. When connected to a charger, this process happens in reverse.

The reaction in the anode creates electrons and the reaction in the cathode absorbs them when discharging. The net product is electricity. The battery will continue to produce electricity until one or both of the electrodes run out of the substance necessary for the reactions to occur.

Organic chemistry is kind of like Lego

While bio-derived battery parts have been created previously, this is the first one that uses bio-derived polymers – long-chain molecules – for one of the electrodes, essentially allowing battery energy to be stored in a vitamin-created plastic, instead of costlier, harder to process, and more environmentally-harmful metals such as cobalt.

“Getting the right material evolved over time and definitely took some test reactions,” says paper co-author and doctoral student Tyler Schon. “In a lot of ways, it looked like this could have failed. It definitely took a lot of perseverance.”

Schon, Seferos and colleagues happened upon the material while testing a variety of long-chain polymers – specifically pendant group polymers: the molecules attached to a ‘backbone’ chain of a long molecule.

“Organic chemistry is kind of like Lego,” he says. “You put things together in a certain order, but some things that look like they’ll fit together on paper don’t in reality. We tried a few approaches and the fifth one worked,” says Seferos.

Building a better power pack

The team created the material from vitamin B2 that originates in genetically-modified fungi using a semi-synthetic process to prepare the polymer by linking two flavin units to a long-chain molecule backbone.

This allows for a green battery with high capacity and high voltage – something increasingly important as the ‘Internet of Things’ continues to link us together more and more through our battery-powered portable devices.

“It’s a pretty safe, natural compound,” Seferos adds. “If you wanted to, you could actually eat the source material it comes from.”

B2’s ability to be reduced and oxidized makes it well-suited for a lithium ion battery.

“B2 can accept up to two electrons at a time,” says Seferos. “This makes it easy to take multiple charges and have a high capacity compared to a lot of other available molecules.”
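
For a rough sense of what “high capacity” means here, the standard theoretical capacity of a redox-active molecule is nF divided by 3.6 times its molar mass, in mAh/g. The calculation below is my own back-of-the-envelope figure, not a number from the paper, and it assumes the molar mass of free riboflavin; the polymer repeat unit used in the actual cathode is heavier, so the real value would be lower.

# Back-of-the-envelope theoretical capacity of a two-electron redox unit.
# Not a figure from the paper: it assumes free riboflavin's molar mass,
# whereas the polymer repeat unit in the actual cathode is heavier.
F = 96485.0   # Faraday constant, C/mol
n = 2         # electrons accepted per flavin unit, per the quote above
M = 376.4     # g/mol, molar mass of riboflavin (vitamin B2)

capacity_mAh_per_g = n * F / (3.6 * M)   # 1 mAh = 3.6 C, hence the 3.6
print("theoretical capacity ~ %.0f mAh/g" % capacity_mAh_per_g)   # roughly 140 mAh/g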

A step to greener electronics

“It’s been a lot of trial-and-error,” says Schon. “Now we’re looking to design new variants that can be recharged again and again.”

While the current prototype is on the scale of a hearing aid battery, the team hopes their breakthrough could lay the groundwork for powerful, thin, flexible, and even transparent metal-free batteries that could support the next wave of consumer electronics.

Here’s a link to and a citation for the paper,

Bio-Derived Polymers for Sustainable Lithium-Ion Batteries by Tyler B. Schon, Andrew J. Tilley, Colin R. Bridges, Mark B. Miltenburg, and Dwight S. Seferos. Advanced Functional Materials DOI: 10.1002/adfm.201602114 Version of Record online: 14 JUL 2016

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

US white paper on neuromorphic computing (or the nanotechnology-inspired Grand Challenge for future computing)

The US has embarked on a number of what is called “Grand Challenges.” I first came across the concept when reading about the Bill and Melinda Gates (of Microsoft fame) Foundation. I gather these challenges are intended to provide funding for research that advances bold visions.

There is the US National Strategic Computing Initiative, established on July 29, 2015; its first anniversary results were announced one year to the day later. Within that initiative, a nanotechnology-inspired Grand Challenge for Future Computing was announced and, according to a July 29, 2016 news item on Nanowerk, a white paper on the topic has now been issued (Note: A link has been removed),

Today [July 29, 2016], Federal agencies participating in the National Nanotechnology Initiative (NNI) released a white paper (pdf) describing the collective Federal vision for the emerging and innovative solutions needed to realize the Nanotechnology-Inspired Grand Challenge for Future Computing.

The grand challenge, announced on October 20, 2015, is to “create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.” The white paper describes the technical priorities shared by the agencies, highlights the challenges and opportunities associated with these priorities, and presents a guiding vision for the research and development (R&D) needed to achieve key technical goals. By coordinating and collaborating across multiple levels of government, industry, academia, and nonprofit organizations, the nanotechnology and computer science communities can look beyond the decades-old approach to computing based on the von Neumann architecture and chart a new path that will continue the rapid pace of innovation beyond the next decade.

A July 29, 2016 US National Nanotechnology Coordination Office news release, which originated the news item, further and succinctly describes the contents of the paper,

“Materials and devices for computing have been and will continue to be a key application domain in the field of nanotechnology. As evident by the R&D topics highlighted in the white paper, this challenge will require the convergence of nanotechnology, neuroscience, and computer science to create a whole new paradigm for low-power computing with revolutionary, brain-like capabilities,” said Dr. Michael Meador, Director of the National Nanotechnology Coordination Office. …

The white paper was produced as a collaboration by technical staff at the Department of Energy, the National Science Foundation, the Department of Defense, the National Institute of Standards and Technology, and the Intelligence Community. …

The white paper titled “A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge” is 15 pp. and it offers tidbits such as this (Note: Footnotes not included),

A new materials base may be needed for future electronic hardware. While most of today’s electronics use silicon, this approach is unsustainable if billions of disposable and short-lived sensor nodes are needed for the coming Internet-of-Things (IoT). To what extent can the materials base for the implementation of future information technology (IT) components and systems support sustainability through recycling and bio-degradability? More sustainable materials, such as compostable or biodegradable systems (polymers, paper, etc.) that can be recycled or reused,  may play an important role. The potential role for such alternative materials in the fabrication of integrated systems needs to be explored as well. [p. 5]

The basic architecture of computers today is essentially the same as those built in the 1940s—the von Neumann architecture—with separate compute, high-speed memory, and high-density storage components that are electronically interconnected. However, it is well known that continued performance increases using this architecture are not feasible in the long term, with power density constraints being one of the fundamental roadblocks.7 Further advances in the current approach using multiple cores, chip multiprocessors, and associated architectures are plagued by challenges in software and programming models. Thus,  research and development is required in radically new and different computing architectures involving processors, memory, input-output devices, and how they behave and are interconnected. [p. 7]

Neuroscience research suggests that the brain is a complex, high-performance computing system with low energy consumption and incredible parallelism. A highly plastic and flexible organ, the human brain is able to grow new neurons, synapses, and connections to cope with an ever-changing environment. Energy efficiency, growth, and flexibility occur at all scales, from molecular to cellular, and allow the brain, from early to late stage, to never stop learning and to act with proactive intelligence in both familiar and novel situations. Understanding how these mechanisms work and cooperate within and across scales has the potential to offer tremendous technical insights and novel engineering frameworks for materials, devices, and systems seeking to perform efficient and autonomous computing. This research focus area is the most synergistic with the national BRAIN Initiative. However, unlike the BRAIN Initiative, where the goal is to map the network connectivity of the brain, the objective here is to understand the nature, methods, and mechanisms for computation, and how the brain performs some of its tasks. Even within this broad paradigm, one can loosely distinguish between neuromorphic computing and artificial neural network (ANN) approaches. The goal of neuromorphic computing is oriented towards a hardware approach to reverse engineering the computational architecture of the brain. On the other hand, ANNs include algorithmic approaches arising from machine learning, which in turn could leverage advancements and understanding in neuroscience as well as novel cognitive, mathematical, and statistical techniques. Indeed, the ultimate intelligent systems may as well be the result of merging existing ANN (e.g., deep learning) and bio-inspired techniques. [p. 8]

As government documents go, this is quite readable.

For anyone interested in learning more about the future federal plans for computing in the US, there is a July 29, 2016 posting on the White House blog celebrating the first year of the US National Strategic Computing Initiative Strategic Plan (29 pp. PDF; awkward but that is the title).

Deriving graphene-like films from salt

This research comes from Russia (mostly). A July 29, 2016 news item on ScienceDaily describes a graphene-like structure derived from salt,

Researchers from Moscow Institute of Physics and Technology (MIPT), Skolkovo Institute of Science and Technology (Skoltech), the Technological Institute for Superhard and Novel Carbon Materials (TISNCM), the National University of Science and Technology MISiS (Russia), and Rice University (USA) used computer simulations to find how thin a slab of salt has to be in order for it to break up into graphene-like layers. Based on the computer simulation, they derived the equation for the number of layers in a crystal that will produce ultrathin films with applications in nanoelectronics. …

Caption: Transition from a cubic arrangement into several hexagonal layers. Credit: authors of the study

A July 29, 2016 Moscow Institute of Physics and Technology press release on EurekAlert, which originated the news item,  provides more technical detail,

From 3D to 2D

The unique monoatomic thickness of graphene makes it an attractive and useful material. Its crystal lattice resembles a honeycomb, as the bonds between the constituent atoms form regular hexagons. Graphene is a single layer of a three-dimensional graphite crystal and its properties (as well as the properties of any 2D crystal) are radically different from those of its 3D counterpart. Since the discovery of graphene, a large amount of research has been directed at new two-dimensional materials with intriguing properties. Ultrathin films have unusual properties that might be useful for applications such as nano- and microelectronics.

Previous theoretical studies suggested that films with a cubic structure and ionic bonding could spontaneously convert to a layered hexagonal graphitic structure in what is known as graphitisation. For some substances, this conversion has been experimentally observed. It was predicted that rock salt NaCl can be one of the compounds with graphitisation tendencies. Graphitisation of cubic compounds could produce new and promising structures for applications in nanoelectronics. However, no theory has been developed that would account for this process in the case of an arbitrary cubic compound and make predictions about its conversion into graphene-like salt layers.

For graphitisation to occur, the crystal layers need to be reduced along the main diagonal of the cubic structure. This will result in one crystal surface being made of sodium ions Na+ and the other of chloride ions Cl-. It is important to note that positive and negative ions (i.e. Na+ and Cl-) – and not neutral atoms – occupy the lattice points of the structure. This generates charges of opposite signs on the two surfaces. As long as the surfaces are remote from each other, all charges cancel out, and the salt slab shows a preference for a cubic structure. However, if the film is made sufficiently thin, this gives rise to a large dipole moment due to the opposite charges of the two crystal surfaces. The structure seeks to get rid of the dipole moment, which increases the energy of the system. To make the surfaces charge-neutral, the crystal undergoes a rearrangement of atoms.

Experiment vs model

To study how graphitisation tendencies vary depending on the compound, the researchers examined 16 binary compounds with the general formula AB, where A stands for one of the four alkali metals lithium Li, sodium Na, potassium K, and rubidium Rb. These are highly reactive elements found in Group 1 of the periodic table. The B in the formula stands for any of the four halogens fluorine F, chlorine Cl, bromine Br, and iodine I. These elements are in Group 17 of the periodic table and readily react with alkali metals.

All compounds in this study come in a number of different structures, also known as crystal lattices or phases. If atmospheric pressure is increased to 300,000 times its normal value, another phase (B2) of NaCl (represented by the yellow portion of the diagram) becomes more stable, effecting a change in the crystal lattice. To test their choice of methods and parameters, the researchers simulated two crystal lattices and calculated the pressure that corresponds to the phase transition between them. Their predictions agree with experimental data.

Just how thin should it be?

The compounds within the scope of this study can all have a hexagonal, “graphitic”, G phase (the red in the diagram) that is unstable in 3D bulk but becomes the most stable structure for ultrathin (2D or quasi-2D) films. The researchers identified the relationship between the surface energy of a film and the number of layers in it for both cubic and hexagonal structures. They graphed this relationship by plotting two lines with different slopes for each of the compounds studied. Each pair of lines associated with one compound has a common point that corresponds to the critical slab thickness that makes conversion from a cubic to a hexagonal structure energetically favourable. For example, the critical number of layers was found to be close to 11 for all sodium salts and between 19 and 27 for lithium salts.

Based on this data, the researchers established a relationship between the critical number of layers and two parameters that determine the strength of the ionic bonds in various compounds. The first parameter indicates the size of an ion of a given metal–its ionic radius. The second parameter is called electronegativity and is a measure of the B atom’s ability to attract the electrons of element A. Higher electronegativity means more powerful attraction of electrons by the atom, a more pronounced ionic nature of the bond, a larger surface dipole, and a lower critical slab thickness.
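
The line-crossing construction described in the two preceding paragraphs is easy to reproduce. The short Python sketch below uses invented slope and intercept values (the release does not give the actual surface-energy coefficients), chosen only so that the crossing lands near the roughly 11 layers quoted for sodium salts; the point is simply that the critical thickness is the intersection of the two straight lines.

# Toy version of the critical-thickness construction described above.
# The surface energy of each phase is modelled as a straight line in the
# number of layers n; the coefficients below are invented for illustration.
a_cubic, b_cubic = 0.85, 1.60   # assumed slope/intercept, cubic (rock-salt) film
a_hex, b_hex = 1.00, 0.00       # assumed slope/intercept, hexagonal (graphitic) film

def critical_layers(a_c, b_c, a_h, b_h):
    """Number of layers at which the two surface-energy lines cross."""
    return (b_c - b_h) / (a_h - a_c)

n_crit = critical_layers(a_cubic, b_cubic, a_hex, b_hex)
print("hexagonal phase favoured below ~%.1f layers" % n_crit)

for n in (5, 10, 15, 20):
    e_cubic = a_cubic * n + b_cubic
    e_hex = a_hex * n + b_hex
    winner = "hexagonal" if e_hex < e_cubic else "cubic"
    print("n = %2d: cubic %.2f, hexagonal %.2f -> %s favoured" % (n, e_cubic, e_hex, winner))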

And there’s more

Pavel Sorokin, Dr. habil., [sic] is head of the Laboratory of New Materials Simulation at TISNCM. He explains the importance of the study, ‘This work has already attracted our colleagues from Israel and Japan. If they confirm our findings experimentally, this phenomenon [of graphitisation] will provide a viable route to the synthesis of ultrathin films with potential applications in nanoelectronics.’

The scientists intend to broaden the scope of their studies by examining other compounds. They believe that ultrathin films of different composition might also undergo spontaneous graphitisation, yielding new layered structures with properties that are even more intriguing.

Here’s a link to and a citation for the paper,

Ionic Graphitization of Ultrathin Films of Ionic Compounds by A. G. Kvashnin, E. Y. Pashkin, B. I. Yakobson, and P. B. Sorokin. J. Phys. Chem. Lett., 2016, 7 (14), pp 2659–2663 DOI: 10.1021/acs.jpclett.6b01214 Publication Date (Web): June 23, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Untangling carbon nanotubes at McMaster University (Canada)

Carbon nanotubes can be wiggly, entangled things (more about McMaster in a bit) as Dr. Andrew Maynard notes in this video (part of his Risk Bites video series) describing carbon nanotubes, their ‘infinite’ variety, and risks,

Researchers at Canada’s McMaster University have found a way to untangle carbon nanotubes according to an Aug. 16, 2016 news item on Nanowerk (Note: A link has been removed),

Imagine an electronic newspaper that you could roll up and spill your coffee on, even as it updated itself before your eyes.

It’s an example of the technological revolution that has been waiting to happen, except for one major problem that, until now, scientists have not been able to resolve.

Researchers at McMaster University have cleared that obstacle by developing a new way to purify carbon nanotubes – the smaller, nimbler semiconductors that are expected to replace silicon within computer chips and a wide array of electronics (Chemistry – A European Journal, “Influence of Polymer Electronics on Selective Dispersion of Single-Walled Carbon Nanotubes”).

“Once we have a reliable source of pure nanotubes that are not very expensive, a lot can happen very quickly,” says Alex Adronov, a professor of Chemistry at McMaster whose research team has developed a new and potentially cost-efficient way to purify carbon nanotubes.

The researchers have provided a gorgeous image,

Artistic rendition of a metallic carbon nanotube being pulled into solution, in analogy to the work described by the Adronov group. Image: Alex Adronov, McMaster University

An Aug. 15, 2016 McMaster University news release, which originated the news item, provides a beginner’s introduction to carbon nanotubes and describes the purification process that will make production of carbon nanotubes easier,

Carbon nanotubes – hair-like structures that are one billionth of a metre in diameter but thousands of times longer – are tiny, flexible conductive nano-scale materials, expected to revolutionize computers and electronics by replacing much larger silicon-based chips.

A major problem standing in the way of the new technology, however, has been untangling metallic and semiconducting carbon nanotubes, since both are created simultaneously in the process of producing the microscopic structures, which typically involves heating carbon-based gases to a point where mixed clusters of nanotubes form spontaneously as black soot.

Only pure semiconducting or metallic carbon nanotubes are effective in device applications, but efficiently isolating them has proven to be a challenging problem to overcome. Even when the nanotube soot is ground down, semiconducting and metallic nanotubes are knotted together within each grain of powder. Both components are valuable, but only when separated.

Researchers around the world have spent years trying to find effective and efficient ways to isolate carbon nanotubes and unleash their value.

While previous researchers had created polymers that could allow semiconducting carbon nanotubes to be dissolved and washed away, leaving metallic nanotubes behind, there was no such process for doing the opposite: dispersing the metallic nanotubes and leaving behind the semiconducting structures.

Now, Adronov’s research group has managed to reverse the electronic characteristics of a polymer known to disperse semiconducting nanotubes – while leaving the rest of the polymer’s structure intact. By so doing, they have reversed the process, leaving the semiconducting nanotubes behind while making it possible to disperse the metallic nanotubes.

The researchers worked closely with experts and equipment from McMaster’s Faculty of Engineering and the Canada Centre for Electron Microscopy, located on the university’s campus.

“There aren’t many places in the world where you can do this type of interdisciplinary work,” Adronov says.

The next step, he explains, is for his team or other researchers to exploit the discovery by finding a way to develop even more efficient polymers and scale up the process for commercial production.

Here’s a link to and a citation for the paper,

Influence of Polymer Electronics on Selective Dispersion of Single-Walled Carbon Nanotubes by Daryl Fon, William J. Bodnaryk, Dr. Nicole A. Rice, Sokunthearath Saem, Prof. Jose M. Moran-Mirabal, Prof. Alex Adronov. Chemistry – A European Journal DOI: 10.1002/chem.201603553 First published: 16 August 2016

This paper appears to be open access.

Book announcement: Atomistic Simulation of Quantum Transport in Nanoelectronic Devices

For anyone who’s curious about where we go after creating chips at the 7nm size, this may be the book for you. Here’s more from a July 27, 2016 news item on Nanowerk,

In the year 2015, Intel, Samsung and TSMC began to mass-market the 14nm technology called FinFETs. In the same year, IBM, working with Global Foundries, Samsung, SUNY, and various equipment suppliers, announced their success in fabricating 7nm devices. A 7nm silicon channel is about 50 atomic layers and these devices are truly atomic! It is clear that we have entered an era of atomic scale transistors. How do we model the carrier transport in such atomic scale devices?

One way is to improve existing device models by including more and more parameters. This is called the top-down approach. However, as device sizes shrink, the number of parameters grows rapidly, making the top-down approach more and more sophisticated and challenging. Most importantly, to continue Moore’s law, electronic engineers are exploring new electronic materials and new operating mechanisms. These efforts are beyond the scope of well-established device models — hence significant changes are necessary to the top-down approach.

An alternative way is called the bottom-up approach. The idea is to build up nanoelectronic devices atom by atom on a computer, and predict the transport behavior from first principles. By doing so, one is allowed to go inside atomic structures and see what happens from there. The elegance of the approach comes from its unification and generality. Everything comes out naturally from the very basic principles of quantum mechanics and nonequilibrium statistics. The bottom-up approach is complementary to the top-down approach, and is extremely useful for testing innovative ideas of future technologies.

A July 27, 2016 World Scientific news release on EurekAlert, which originated the news item, delves into the topics covered by the book,

In recent decades, several device simulation tools using the bottom-up approach have been developed in universities and software companies. Some examples are McDcal, Transiesta, Atomistic Tool Kit, Smeagol, NanoDcal, NanoDsim, OpenMX, GPAW and NEMO-5. These software tools are capable of predicting electric current flowing through a nanostructure. Essentially the input is the atomic coordinates and the output is the electric current. These software tools have been applied extensively to study emerging electronic materials and devices.
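
For readers curious what “atomic coordinates in, electric current out” looks like in practice, here is a deliberately stripped-down sketch of the Landauer/Green’s-function machinery that codes of this kind build on: transmission through a one-dimensional tight-binding wire connected to two semi-infinite leads. It is a generic textbook illustration in Python, not an excerpt from NanoDsim, NanoDcal or any other package named above, and every numerical parameter is arbitrary.

# Minimal Landauer/NEGF sketch: transmission through a 1D tight-binding chain
# coupled to two semi-infinite leads. A generic textbook illustration, not
# code from NanoDsim, NanoDcal or any of the packages named above.
import numpy as np

N = 10      # number of device sites
eps = 0.0   # on-site energy (arbitrary units)
t = -1.0    # nearest-neighbour hopping
eta = 1e-6  # small imaginary part for the retarded Green's function

# Device Hamiltonian: a simple chain
H = np.diag([eps] * N) + np.diag([t] * (N - 1), 1) + np.diag([t] * (N - 1), -1)

def lead_surface_g(E):
    """Retarded surface Green's function of a semi-infinite 1D chain (analytic)."""
    z = E + 1j * eta - eps
    root = np.sqrt(z * z - 4.0 * t * t + 0j)
    g = (z - root) / (2.0 * t * t)
    if g.imag > 0:                     # pick the retarded (decaying) branch
        g = (z + root) / (2.0 * t * t)
    return g

def transmission(E):
    g = lead_surface_g(E)
    sigma_L = np.zeros((N, N), dtype=complex)
    sigma_R = np.zeros((N, N), dtype=complex)
    sigma_L[0, 0] = t * t * g          # lead self-energies attach at the end sites
    sigma_R[-1, -1] = t * t * g
    G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return np.trace(gamma_L @ G @ gamma_R @ G.conj().T).real

# For a perfect chain the transmission should be ~1 inside the band |E| < 2|t|
for E in (-1.5, -0.5, 0.0, 0.5, 1.5, 2.5):
    print("E = %+.1f   T = %.3f" % (E, transmission(E)))

A production bottom-up code replaces the toy Hamiltonian with one built from the atomic coordinates via density functional theory and integrates the transmission over energy to obtain the current, but the Green’s-function bookkeeping is the same in spirit.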

However, developing such a software tool is extremely difficult. It takes years-long experiences and requires knowledge of and techniques in condensed matter physics, computer science, electronic engineering, and applied mathematics. In a library, one can find books on density functional theory, books on quantum transport, books on computer programming, books on numerical algorithms, and books on device simulation. But one can hardly find a book integrating all these fields for the purpose of nanoelectronic device simulation.

“Atomistic Simulation of Quantum Transport in Nanoelectronic Devices” (With CD-ROM) fills the chasm. Authors Yu Zhu and Lei Liu have experience in both academic research and software development. Yu Zhu is the project manager of NanoDsim, and Lei Liu is the project manager of NanoDcal. The content of the book is based on Zhu and Liu’s combined R&D experience of more than forty years.

In this book, the authors conduct an experiment and adopt a “paradigm” approach. Instead of organizing materials by fields, they focus on the development of one particular software tool called NanoDsim, and provide relevant knowledge and techniques whenever needed. The black box of NanoDsim is opened, and the complete procedure from theoretical derivation, to numerical implementation, all the way to device simulation is illustrated. The affiliated source code of NanoDsim also provides an open platform for new researchers.

I’m not recommending the book as I haven’t read it but it does seem intriguing. For anyone who wishes to purchase it, you can do that here.

I wrote about IBM and its 7nm chip in a July 15, 2015 post.