Tag Archives: Stanford University

Split some water molecules and save solar and wind (energy) for a future day

Professor Ted Sargent’s research team at the University of Toronto has developed a new technique for storing the energy harvested by solar and wind farms, according to a March 28, 2016 news item on Nanotechnology Now,

We can’t control when the wind blows and when the sun shines, so finding efficient ways to store energy from alternative sources remains an urgent research problem. Now, a group of researchers led by Professor Ted Sargent at the University of Toronto’s Faculty of Applied Science & Engineering may have a solution inspired by nature.

The team has designed the most efficient catalyst for storing energy in chemical form, by splitting water into hydrogen and oxygen, just like plants do during photosynthesis. Oxygen is released harmlessly into the atmosphere, and hydrogen, as H2, can be converted back into energy using hydrogen fuel cells.

Discovering a better way of storing energy from solar and wind farms is “one of the grand challenges in this field,” Ted Sargent says (photo above by Megan Rosenbloom via flickr) Courtesy: University of Toronto

A March 24, 2016 University of Toronto news release by Marit Mitchell, which originated the news item, expands on the theme,

“Today on a solar farm or a wind farm, storage is typically provided with batteries. But batteries are expensive, and can typically only store a fixed amount of energy,” says Sargent. “That’s why discovering a more efficient and highly scalable means of storing energy generated by renewables is one of the grand challenges in this field.”

You may have seen the popular high-school science demonstration where the teacher splits water into its component elements, hydrogen and oxygen, by running electricity through it. Today this requires so much electrical input that it’s impractical to store energy this way — too great a proportion of the energy generated is lost in the process of storing it.

This new catalyst facilitates the oxygen-evolution portion of the chemical reaction, making the conversion from H2O into O2 and H2 more energy-efficient than ever before. Intrinsically, the new catalyst material is more than three times as efficient as the best state-of-the-art catalyst.
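For readers who want the chemistry spelled out, water splitting is conventionally written as two half-reactions (shown here in acidic form; the balancing species differ in alkaline conditions), and it is the first, oxygen-evolving one that this catalyst targets,

```latex
\begin{align*}
\text{OER (anode):}\quad & 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{HER (cathode):}\quad & 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2} \\
\text{Overall:}\quad & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}
```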

Details are offered in the news release,

The new catalyst is made of abundant and low-cost metals tungsten, iron and cobalt, which are much less expensive than state-of-the-art catalysts based on precious metals. It showed no signs of degradation over more than 500 hours of continuous activity, unlike other efficient but short-lived catalysts. …

“With the aid of theoretical predictions, we became convinced that including tungsten could lead to a better oxygen-evolving catalyst. Unfortunately, prior work did not show how to mix tungsten homogeneously with the active metals such as iron and cobalt,” says one of the study’s lead authors, Dr. Bo Zhang … .

“We invented a new way to distribute the catalyst homogeneously in a gel, and as a result built a device that works incredibly efficiently and robustly.”

This research united engineers, chemists, materials scientists, mathematicians, physicists, and computer scientists across three countries. A chief partner in these joint theoretical-experimental studies was a leading team of theorists at Stanford University and the SLAC National Accelerator Laboratory under the leadership of Dr. Aleksandra Vojvodic. The international collaboration included researchers at East China University of Science & Technology, Tianjin University, Brookhaven National Laboratory, the Canadian Light Source and the Beijing Synchrotron Radiation Facility.

“The team developed a new materials synthesis strategy to mix multiple metals homogeneously — thereby overcoming the propensity of multi-metal mixtures to separate into distinct phases,” said Jeffrey C. Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems at Massachusetts Institute of Technology. “This work impressively highlights the power of tightly coupled computational materials science with advanced experimental techniques, and sets a high bar for such a combined approach. It opens new avenues to speed progress in efficient materials for energy conversion and storage.”

“This work demonstrates the utility of using theory to guide the development of improved water-oxidation catalysts for further advances in the field of solar fuels,” said Gary Brudvig, a professor in the Department of Chemistry at Yale University and director of the Yale Energy Sciences Institute.

“The intensive research by the Sargent group at the University of Toronto led to the discovery of oxy-hydroxide materials that exhibit electrochemically induced oxygen evolution at the lowest overpotential and show no degradation,” said University Professor Gabor A. Somorjai of the University of California, Berkeley, a leader in this field. “The authors should be complimented on the combined experimental and theoretical studies that led to this very important finding.”

Here’s a link to and a citation for the paper,

Homogeneously dispersed, multimetal oxygen-evolving catalysts by Bo Zhang, Xueli Zheng, Oleksandr Voznyy, Riccardo Comin, Michal Bajdich, Max García-Melchor, Lili Han, Jixian Xu, Min Liu, Lirong Zheng, F. Pelayo García de Arquer, Cao Thang Dinh, Fengjia Fan, Mingjian Yuan, Emre Yassitepe, Ning Chen, Tom Regier, Pengfei Liu, Yuhang Li, Phil De Luna, Alyf Janmohamed, Huolin L. Xin, Huagui Yang, Aleksandra Vojvodic, Edward H. Sargent. Science 24 Mar 2016. DOI: 10.1126/science.aaf1525

This paper is behind a paywall.

3D microtopographic scaffolds for transplantation and generation of reprogrammed human neurons

Should this technology prove successful once human testing begins, the stated goal is to use it to treat neurodegenerative disorders such as Parkinson’s disease. But I can’t help wondering if they might also consider constructing an artificial brain.

Getting back to the 3D scaffolds for neurons, a March 17, 2016 US National Institutes of Health (NIH) news release (also on EurekAlert) makes the announcement,

National Institutes of Health-funded scientists have developed a 3D micro-scaffold technology that promotes reprogramming of stem cells into neurons, and supports growth of neuronal connections capable of transmitting electrical signals. The injection of these networks of functioning human neural cells — compared to injecting individual cells — dramatically improved their survival following transplantation into mouse brains. This is a promising new platform that could make transplantation of neurons a viable treatment for a broad range of human neurodegenerative disorders.

Previously, transplantation of neurons to treat neurodegenerative disorders, such as Parkinson’s disease, had very limited success due to poor survival of neurons that were injected as a solution of individual cells. The new research is supported by the National Institute of Biomedical Imaging and Bioengineering (NIBIB), part of NIH.

“Working together, the stem cell biologists and the biomaterials experts developed a system capable of shuttling neural cells through the demanding journey of transplantation and engraftment into host brain tissue,” said Rosemarie Hunziker, Ph.D., director of the NIBIB Program in Tissue Engineering and Regenerative Medicine. “This exciting work was made possible by the close collaboration of experts in a wide range of disciplines.”

The research was performed by researchers from Rutgers University, Piscataway, New Jersey, departments of Biomedical Engineering, Neuroscience and Cell Biology, Chemical and Biochemical Engineering, and the Child Health Institute; Stanford University School of Medicine’s Institute of Stem Cell Biology and Regenerative Medicine, Stanford, California; the Human Genetics Institute of New Jersey, Piscataway; and the New Jersey Center for Biomaterials, Piscataway. The results are reported in the March 17, 2016 issue of Nature Communications.

The researchers experimented with creating scaffolds made of different types of polymer fibers of varying thickness and density. They ultimately created a web of relatively thick fibers using a polymer that stem cells successfully adhered to. The stem cells used were human induced pluripotent stem cells (iPSCs), which can be readily generated from adult cell types such as skin cells. The iPSCs were induced to differentiate into neural cells by introducing the protein NeuroD1 into the cells.

The space between the polymer fibers turned out to be critical. “If the scaffolds were too dense, the stem cell-derived neurons were unable to integrate into the scaffold, whereas if they are too sparse then the network organization tends to be poor,” explained Prabhas Moghe, Ph.D., distinguished professor of biomedical engineering & chemical engineering at Rutgers University and co-senior author of the paper. “The optimal pore size was one that was large enough for the cells to populate the scaffold but small enough that the differentiating neurons sensed the presence of their neighbors and produced outgrowths resulting in cell-to-cell contact. This contact enhances cell survival and development into functional neurons able to transmit an electrical signal across the developing neural network.”

To test the viability of neuron-seeded scaffolds when transplanted, the researchers created micro-scaffolds that were small enough for injection into mouse brain tissue using a standard hypodermic needle. They injected scaffolds carrying the human neurons into brain slices from mice and compared them to human neurons injected as individual, dissociated cells.

The neurons on the scaffolds had dramatically increased cell survival compared with the individual cell suspensions. The scaffolds also promoted improved neuronal outgrowth and electrical activity. Neurons injected individually in suspension resulted in very few cells surviving the transplant procedure.

Human neurons on scaffolds were then compared to neurons in solution when injected into the brains of live mice. Similar to the results in the brain slices, the survival rate of neurons on the scaffold network was increased nearly 40-fold compared to injected isolated cells. A critical finding was that the neurons on the micro-scaffolds expressed proteins that are involved in the growth and maturation of neural synapses – a good indication that the transplanted neurons were capable of functionally integrating into the host brain tissue.

The success of the study gives this interdisciplinary group reason to believe that their combined areas of expertise have resulted in a system with much promise for eventual treatment of human neurodegenerative disorders. In fact, they are now refining their system for specific use as an eventual transplant therapy for Parkinson’s disease. The plan is to develop methods to differentiate the stem cells into neurons that produce dopamine, the specific neuron type that degenerates in individuals with Parkinson’s disease. The work also will include fine-tuning the scaffold materials, mechanics and dimensions to optimize the survival and function of dopamine-producing neurons, and finding the best mouse models of the disease to test this Parkinson’s-specific therapy.

Here’s a link to and a citation for the paper,

Generation and transplantation of reprogrammed human neurons in the brain using 3D microtopographic scaffolds by Aaron L. Carlson, Neal K. Bennett, Nicola L. Francis, Apoorva Halikere, Stephen Clarke, Jennifer C. Moore, Ronald P. Hart, Kenneth Paradiso, Marius Wernig, Joachim Kohn, Zhiping P. Pang, & Prabhas V. Moghe. Nature Communications 7, Article number: 10862. doi:10.1038/ncomms10862. Published 17 March 2016

This paper is open access.

Cambridge University researchers tell us why Spiderman can’t exist while Stanford University proves otherwise

A team of zoology researchers at Cambridge University (UK) find themselves in the unenviable position of having their peer-reviewed study used as a source of unintentional humour. I gather zoologists (Cambridge) and engineers (Stanford) don’t have much opportunity to share information.

A Jan. 18, 2016 news item on ScienceDaily announces the Cambridge research findings,

Latest research reveals why geckos are the largest animals able to scale smooth vertical walls — even larger climbers would require unmanageably large sticky footpads. Scientists estimate that a human would need adhesive pads covering 40% of their body surface in order to walk up a wall like Spiderman, and believe their insights have implications for the feasibility of large-scale, gecko-like adhesives.

A Jan. 18, 2016 Cambridge University press release (also on EurekAlert), which originated the news item, describes the research and the thinking that led to the researchers’ conclusions,

Dr David Labonte and his colleagues in the University of Cambridge’s Department of Zoology found that tiny mites use approximately 200 times less of their total body area for adhesive pads than geckos, nature’s largest adhesion-based climbers. And humans? We’d need about 40% of our total body surface, or roughly 80% of our front, to be covered in sticky footpads if we wanted to do a convincing Spiderman impression.

Once an animal is big enough to need a substantial fraction of its body surface to be covered in sticky footpads, the necessary morphological changes would make the evolution of this trait impractical, suggests Labonte.

“If a human, for example, wanted to walk up a wall the way a gecko does, we’d need impractically large sticky feet – our shoes would need to be a European size 145 or a US size 114,” says Walter Federle, senior author also from Cambridge’s Department of Zoology.

The researchers say that these insights into the size limits of sticky footpads could have profound implications for developing large-scale bio-inspired adhesives, which are currently only effective on very small areas.

“As animals increase in size, the amount of body surface area per volume decreases – an ant has a lot of surface area and very little volume, and a blue whale is mostly volume with not much surface area” explains Labonte.

“This poses a problem for larger climbing species because, when they are bigger and heavier, they need more sticking power to be able to adhere to vertical or inverted surfaces, but they have comparatively less body surface available to cover with sticky footpads. This implies that there is a size limit to sticky footpads as an evolutionary solution to climbing – and that turns out to be about the size of a gecko.”
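For the numerically inclined, here’s a back-of-envelope sketch of that scaling argument (my own toy numbers, not the paper’s data): with isometric scaling, weight grows as length cubed while surface area grows as length squared, so the fraction of body surface needed for sticky pads grows roughly as mass to the one-third power.

```python
# Toy allometry check: pad fraction of body surface ~ mass^(1/3),
# scaled from a roughly gecko-sized reference animal. The reference
# values below are illustrative placeholders, not measured data.

def pad_fraction(mass_kg, ref_mass_kg=0.05, ref_fraction=0.04):
    return ref_fraction * (mass_kg / ref_mass_kg) ** (1 / 3)

for label, mass_kg in [("mite", 1e-8), ("gecko", 0.05), ("human", 70.0)]:
    print(f"{label:>6}: ~{pad_fraction(mass_kg):.2%} of body surface")
```

With those placeholder inputs, a mite comes out at roughly 200 times less relative pad area than a gecko, and a human lands near the 40 percent mark quoted above, which is the point of the exercise.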

Larger animals have evolved alternative strategies to help them climb, such as claws and toes to grip with.

The researchers compared the weight and footpad size of 225 climbing animal species including insects, frogs, spiders, lizards and even a mammal.

“We compared animals covering more than seven orders of magnitude in weight, which is roughly the same as comparing the weight of a cockroach to that of Big Ben, for example,” says Labonte.

These investigations also gave the researchers greater insights into how the size of adhesive footpads is influenced and constrained by the animals’ evolutionary history.

“We were looking at vastly different animals – a spider and a gecko are about as different as a human is to an ant – but if you look at their feet, they have remarkably similar footpads,” says Labonte.

“Adhesive pads of climbing animals are a prime example of convergent evolution – where multiple species have independently, through very different evolutionary histories, arrived at the same solution to a problem. When this happens, it’s a clear sign that it must be a very good solution.”

The researchers believe we can learn from these evolutionary solutions in the development of large-scale manmade adhesives.

“Our study emphasises the importance of scaling for animal adhesion, and scaling is also essential for improving the performance of adhesives over much larger areas. There is a lot of interesting work still to do looking into the strategies that animals have developed in order to maintain the ability to scale smooth walls, which would likely also have very useful applications in the development of large-scale, powerful yet controllable adhesives,” says Labonte.

There is one other possible solution to the problem of how to stick when you’re a large animal, and that’s to make your sticky footpads even stickier.

“We noticed that within closely related species pad size was not increasing fast enough to match body size, probably a result of evolutionary constraints. Yet these animals can still stick to walls,” says Christofer Clemente, a co-author from the University of the Sunshine Coast [Australia].

“Within frogs, we found that they have switched to this second option of making pads stickier rather than bigger. It’s remarkable that we see two different evolutionary solutions to the problem of getting big and sticking to walls,” says Clemente.

“Across all species the problem is solved by evolving relatively bigger pads, but this does not seem possible within closely related species, probably since there is not enough morphological diversity to allow it. Instead, within these closely related groups, pads get stickier. This is a great example of evolutionary constraint and innovation.”

A researcher at Stanford University (US) took strong exception to the Cambridge team’s conclusions, from a Jan. 28, 2016 article by Michael Grothaus for Fast Company (Note: A link has been removed),

It seems the dreams of the web-slinger’s fans were crushed forever—that is, until a rival university swooped in and saved the day. A team of engineers working with mechanical engineering graduate student Elliot Hawkes at Stanford University announced [in 2014] that they’ve invented a device called “gecko gloves” that proves the Cambridge researchers wrong.

Hawkes has created a video outlining the nature of his dispute with Cambridge University and with US TV talk show host Stephen Colbert, who featured the Cambridge University research in one of his monologues,

To be fair to Hawkes, he does prove his point. A Nov. 21, 2014 Stanford University report by Bjorn Carey describes Hawkes’ ingenious ‘sticky pads’,

Each handheld gecko pad is covered with 24 adhesive tiles, and each of these is covered with sawtooth-shape polymer structures each 100 micrometers long (about the width of a human hair).

The pads are connected to special degressive springs, which become less stiff the further they are stretched. This characteristic means that when the springs are pulled upon, they apply an identical force to each adhesive tile and cause the sawtooth-like structures to flatten.

“When the pad first touches the surface, only the tips touch, so it’s not sticky,” said co-author Eric Eason, a graduate student in applied physics. “But when the load is applied, and the wedges turn over and come into contact with the surface, that creates the adhesion force.”

As with actual geckos, the adhesives can be “turned” on and off. Simply release the load tension, and the pad loses its stickiness. “It can attach and detach with very little wasted energy,” Eason said.

The ability of the device to scale up controllable adhesion to support large loads makes it attractive for several applications beyond human climbing, said Mark Cutkosky, the Fletcher Jones Chair in the School of Engineering and senior author on the paper.

“Some of the applications we’re thinking of involve manufacturing robots that lift large glass panels or liquid-crystal displays,” Cutkosky said. “We’re also working on a project with NASA’s Jet Propulsion Laboratory to apply these to the robotic arms of spacecraft that could gently latch on to orbital space debris, such as fuel tanks and solar panels, and move it to an orbital graveyard or pitch it toward Earth to burn up.”

Previous work on synthetic and gecko adhesives showed that adhesive strength decreased as the size increased. In contrast, the engineers have shown that the special springs in their device make it possible to maintain the same adhesive strength at all sizes from a square millimeter to the size of a human hand.

The current version of the device can support about 200 pounds, Hawkes said, but, theoretically, increasing its size by 10 times would allow it to carry almost 2,000 pounds.
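The arithmetic behind those figures appears to be simple proportionality; here’s a quick sketch (mine, not the authors’ calculation), assuming, as the previous paragraph claims, that adhesive strength per unit area stays constant as the pads grow.

```python
# If strength per unit area is constant, load capacity scales
# linearly with pad area.
capacity_lb = 200   # current handheld device
area_scale = 10     # "increasing its size by 10 times"
print(capacity_lb * area_scale)  # -> 2000, i.e. almost 2,000 pounds
```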

Here’s a link to and a citation for the Stanford paper,

Human climbing with efficiently scaled gecko-inspired dry adhesives by Elliot W. Hawkes, Eric V. Eason, David L. Christensen, Mark R. Cutkosky. Journal of the Royal Society Interface. DOI: 10.1098/rsif.2014.0675. Published 19 November 2014

This paper is open access.

To be fair to the Cambridge researchers, it’s stretching it a bit to say that Hawkes’ gecko gloves allow someone to be like Spiderman. That’s a very careful, slow climb achieved in a relatively short period of time. Can the human body remain suspended that way for more than a few minutes? How big do your sticky pads have to be if you’re going to have the same wall-climbing ease of movement and staying power of either a gecko or Spiderman?

Here’s a link to and a citation for the Cambridge paper,

Extreme positive allometry of animal adhesive pads and the size limits of adhesion-based climbing by David Labonte, Christofer J. Clemente, Alex Dittrich, Chi-Yun Kuo, Alfred J. Crosby, Duncan J. Irschick, and Walter Federle. PNAS doi: 10.1073/pnas.1519459113

This paper is behind a paywall but there is an open access preprint version, which may differ from the PNAS version, available,

Extreme positive allometry of animal adhesive pads and the size limits of adhesion-based climbing by David Labonte, Christofer J Clemente, Alex Dittrich, Chi-Yun Kuo, Alfred J Crosby, Duncan J Irschick, Walter Federle. bioRxiv
doi: http://dx.doi.org/10.1101/033845

I hope that if the Cambridge researchers respond, they will be witty rather than huffy. Finally, there’s this gecko image (which I love) from the Cambridge researchers,

Caption: This image shows a gecko and ant. Credit: Image courtesy of A Hackmann and D Labonte

Simon Fraser University (Vancouver, Canada) and its president’s (Andrew Petter) dream colloquium: big data

Simon Fraser University (SFU) in Vancouver, Canada has a ‘big data’ start to 2016 planned for President Andrew Petter’s Dream Colloquium, according to a Jan. 5, 2016 news release,

Big data explained: SFU launches spring 2016 President’s Dream Colloquium

Speaker series tackles history, use and implications of collecting data

Canadians experience and interact with big data on a daily basis. Some interactions are as simple as buying coffee or as complex as filling out the Canadian government’s mandatory long-form census. But while big data may be one of the most important technological and social shifts in the past five years, many experts are still grappling with what to do with the massive amounts of information being gathered every day.

To help understand the implications of collecting, analyzing and using big data, Simon Fraser University is launching the President’s Dream Colloquium on Engaging Big Data on Tuesday, January 5.

“Big data affects all sectors of society from governments to businesses to institutions to everyday people,” says Peter Chow-White, SFU Associate Professor of Communication. “This colloquium brings together people from industry and scholars in computing and social sciences in a dialogue around one of the most important innovations of our time next to the Internet.”

This spring marks the first President’s Dream Colloquium where all faculty and guest lectures will be available to the public. The speaker series will give a historical overview of big data, specific case studies in how big data is used today and discuss what the implications are for this information’s usage in business, health and government in the future.

The series includes notable guest speakers such as managing director of Microsoft Research, Surajit Chaudhuri, and Tableau co-founder Pat Hanrahan.  

“Pat Hanrahan is a leader in a number of sectors and Tableau is a leader in accessing big data through visual analytics,” says Chow-White. “Rather than big data being available to only a small amount of professionals, Tableau makes it easier for everyday people to access and understand it in a visual way.”

The speaker series is free to attend with registration. Lectures will be webcast live and available on the President’s Dream Colloquium website.

FAST FACTS:

  • By 2020, over 1/3 of all data will live in or pass through the cloud.
  • Data production will be 44 times greater in 2020 than it was in 2009.
  • More than 70 percent of the digital universe is generated by individuals. But enterprises have responsibility for the storage, protection and management of 80 percent of that.

(Statistics provided by CSC)

WHO’S SPEAKING AT THE COLLOQUIUM:

The course features lectures from notable guest speakers including:

  • Sasha Issenberg, Author and Journalist
    Tuesday, January 12, 2016
  • Surajit Chaudhuri, Scientist and Managing Director of XCG (Microsoft Research)
    Tuesday, January 19, 2016
  • Pat Hanrahan, Professor at the Stanford Computer Graphics Laboratory, Cofounder and Chief Scientist of Tableau, Founding member of Pixar
    Wednesday, February 3, 2016
  • Sheelagh Carpendale, Professor of Computing Science University of Calgary, Canada Research Chair in Information Visualization
    Tuesday, February 23, 2016, 3:30pm
  • Colin Hill, CEO of GNS Healthcare
    Tuesday, March 8, 2016
  • Chad Skelton, Award-winning Data Journalist and Consultant
    Tuesday, March 22, 2016

Not to worry: even though the first talk with Sasha Issenberg and Mark Pickup (strangely, Pickup [an SFU professor of political science] is not mentioned in the news release or on the event page) has already taken place, a webcast is being posted to the event page here.

I watched the first event live (via a livestream webcast, which I accessed by clicking on the link found on the event’s speaker’s page) and found it quite interesting, although I’m not sure about asking Issenberg to speak extemporaneously. He rambled and offered more detail about things that don’t matter much to a Canadian audience. I couldn’t tell if part of the problem might lie with the fact that his ‘big data’ book (The Victory Lab: The Secret Science of Winning Campaigns) was published a while back; he has since published one on medical tourism and is about to publish one on same-sex marriage and the LGBTQ communities in the US. As someone else who moves from topic to topic, I know it’s an effort to ‘go back in time’, to remember the details, and to recapture the enthusiasm that made the piece interesting. Also, he has yet to get the latest scoop on big data and politics in the US, as the 2016 campaign trail won’t get underway until later in January.

So, thanks to Issenberg for managing to dredge up as much as he did. Happily, he did recognize that there are differences between Canada and the US in the type of election data that is gathered and the other data that can be accessed. He provided a capsule version of the data situation in the US, where they can identify individuals and predict how they might vote, while Pickup focused on the Canadian scene. As one expects from Canadian political parties and Canadian agencies in general, no one really wants to share how much information they can actually access (yes, that’s true of the Liberals and the NDP [New Democrats] too). By contrast, political parties and strategists in the US quite openly shared information with Issenberg about where and how they get data.

Pickup made some interesting points about data and how more data does not lead to better predictions. There was one study done on psychologists, which Pickup replicated with undergraduate political science students. The psychologists and the political science students in the two separate studies were given data and asked to predict behaviour. They were then given more data about the same individuals and asked again to predict behaviour. In all, there were four sessions where the subjects were given successively more data and asked to predict behaviour based on that data. You may have already guessed, but prediction accuracy decreased each time more information was added. Conversely, the people making the predictions became more confident as their predictive accuracy declined. A little disconcerting, non?

Pickup made another point noting that it may be easier to use big data to predict voting behaviour in a two-party system such as they have in the US but a multi-party system such as we have in Canada offers more challenges.

So, it was a good beginning and I look forward to more in the coming weeks (President’s Dream Colloquium on Engaging Big Data). Remember if you can’t listen to the live session, just click through to the event’s speaker’s page where they have hopefully posted the webcast.

The next dream colloquium takes place Tuesday, Jan. 19, 2016,

Big Data since 1854

Dr. Surajit Chaudhuri, Scientist and Managing Director of XCG (Microsoft Research)
Stanford University, PhD
Tuesday, January 19, 2016, 3:30–5 pm
IRMACS Theatre, ASB 10900, Burnaby campus [or by webcast]

Enjoy!

No more kevlar-wrapped lithium-ion batteries?

Current lithium-ion batteries present a fire hazard, which is why, last year, a team of researchers at the University of Michigan came up with a plan to prevent fires by wrapping the batteries in Kevlar. My Jan. 30, 2015 post describes the research and provides some information about airplane fires caused by the use of lithium-ion batteries.

This year, a team of researchers at Stanford University (US) has invented a lithium-ion (li-ion) battery that shuts itself down when it overheats, according to a Jan. 12, 2016 news item on Nanotechnology Now,

Stanford researchers have developed the first lithium-ion battery that shuts down before overheating, then restarts immediately when the temperature cools.

The new technology could prevent the kind of fires that have prompted recalls and bans on a wide range of battery-powered devices, from recliners and computers to navigation systems and hoverboards [and on airplanes].

“People have tried different strategies to solve the problem of accidental fires in lithium-ion batteries,” said Zhenan Bao, a professor of chemical engineering at Stanford. “We’ve designed the first battery that can be shut down and revived over repeated heating and cooling cycles without compromising performance.”

Stanford has produced a video of Dr. Bao discussing her latest work,

A Jan. 11, 2016 Stanford University news release by Mark Schwartz, which originated the news item, provides more detail about li-ion batteries and the new fire prevention technology,

A typical lithium-ion battery consists of two electrodes and a liquid or gel electrolyte that carries charged particles between them. Puncturing, shorting or overcharging the battery generates heat. If the temperature reaches about 300 degrees Fahrenheit (150 degrees Celsius), the electrolyte could catch fire and trigger an explosion.

Several techniques have been used to prevent battery fires, such as adding flame retardants to the electrolyte. In 2014, Stanford engineer Yi Cui created a “smart” battery that provides ample warning before it gets too hot.

“Unfortunately, these techniques are irreversible, so the battery is no longer functional after it overheats,” said study co-author Cui, an associate professor of materials science and engineering and of photon science. “Clearly, in spite of the many efforts made thus far, battery safety remains an important concern and requires a new approach.”

Nanospikes

To address the problem, Cui, Bao and postdoctoral scholar Zheng Chen turned to nanotechnology. Bao recently invented a wearable sensor to monitor human body temperature. The sensor is made of a plastic material embedded with tiny particles of nickel with nanoscale spikes protruding from their surface.

For the battery experiment, the researchers coated the spiky nickel particles with graphene, an atom-thick layer of carbon, and embedded the particles in a thin film of elastic polyethylene.

“We attached the polyethylene film to one of the battery electrodes so that an electric current could flow through it,” said Chen, lead author of the study. “To conduct electricity, the spiky particles have to physically touch one another. But during thermal expansion, polyethylene stretches. That causes the particles to spread apart, making the film nonconductive so that electricity can no longer flow through the battery.”

When the researchers heated the battery above 160 F (70 C), the polyethylene film quickly expanded like a balloon, causing the spiky particles to separate and the battery to shut down. But when the temperature dropped back below 160 F (70 C), the polyethylene shrank, the particles came back into contact, and the battery started generating electricity again.

“We can even tune the temperature higher or lower depending on how many particles we put in or what type of polymer materials we choose,” said Bao, who is also a professor, by courtesy, of chemistry and of materials science and engineering. “For example, we might want the battery to shut down at 50 C or 100 C.”
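Here’s a toy model of that behaviour (my own illustration, not the authors’ model): conduction collapses once thermal expansion separates the graphene-coated nickel spikes, and recovers when the film cools and shrinks back.

```python
def battery_conducts(temp_c, shutoff_c=70.0):
    """True while the polyethylene composite film still conducts.
    shutoff_c stands in for the tunable threshold Bao describes."""
    return temp_c < shutoff_c

for t in [25, 60, 75, 90, 65, 25]:  # heat the cell up, then cool it
    print(f"{t:>3} C -> {'ON' if battery_conducts(t) else 'OFF (shut down)'}")
```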

Reversible strategy

To test the stability of the new material, the researchers repeatedly applied heat to the battery with a hot-air gun. Each time, the battery shut down when it got too hot and quickly resumed operating when the temperature cooled.

“Compared with previous approaches, our design provides a reliable, fast, reversible strategy that can achieve both high battery performance and improved safety,” Cui said. “This strategy holds great promise for practical battery applications.”

Here’s a link to and a citation for the paper,

Fast and reversible thermoresponsive polymer switching materials for safer batteries by Zheng Chen, Po-Chun Hsu, Jeffrey Lopez, Yuzhang Li, John W. F. To, Nan Liu, Chao Wang, Sean C. Andrews, Jia Liu, Yi Cui, & Zhenan Bao. Nature Energy 1, Article number: 15009 (2016) doi:10.1038/nenergy.2015.9 Published online: 11 January 2016

This paper appears to be open access.

Nanopores and a new technique for desalination

There’s been more than one piece here about water desalination and purification and/or remediation efforts, and at least one of them claims to have successfully overcome issues, such as reverse osmosis energy needs, that are hampering adoption of various technologies. Now, researchers at the University of Illinois at Urbana-Champaign have developed another new technique for desalinating water while avoiding the problems of reverse osmosis, according to a Nov. 11, 2015 news item on Nanowerk (Note: A link has been removed),

University of Illinois engineers have found an energy-efficient material for removing salt from seawater that could provide a rebuttal to poet Samuel Taylor Coleridge’s lament, “Water, water, every where, nor any drop to drink.”

The material, a nanometer-thick sheet of molybdenum disulfide (MoS2) riddled with tiny holes called nanopores, is specially designed to let high volumes of water through but keep salt and other contaminants out, a process called desalination. In a study published in the journal Nature Communications (“Water desalination with a single-layer MoS2 nanopore”), the Illinois team modeled various thin-film membranes and found that MoS2 showed the greatest efficiency, filtering through up to 70 percent more water than graphene membranes. [emphasis mine]

I’ll get to the professor’s comments about graphene membranes in a minute. Meanwhile, a Nov. 11, 2015 University of Illinois news release (also on EurekAlert), which originated the news item, provides more information about the research,

“Even though we have a lot of water on this planet, there is very little that is drinkable,” said study leader Narayana Aluru, a U. of I. professor of mechanical science and engineering. “If we could find a low-cost, efficient way to purify sea water, we would be making good strides in solving the water crisis.

“Finding materials for efficient desalination has been a big issue, and I think this work lays the foundation for next-generation materials. These materials are efficient in terms of energy usage and fouling, which are issues that have plagued desalination technology for a long time,” said Aluru, who also is affiliated with the Beckman Institute for Advanced Science and Technology at the U. of I.

Most available desalination technologies rely on a process called reverse osmosis to push seawater through a thin plastic membrane to make fresh water. The membrane has holes in it small enough to not let salt or dirt through, but large enough to let water through. They are very good at filtering out salt, but yield only a trickle of fresh water. Although thin to the eye, these membranes are still relatively thick for filtering on the molecular level, so a lot of pressure has to be applied to push the water through.

“Reverse osmosis is a very expensive process,” Aluru said. “It’s very energy intensive. A lot of power is required to do this process, and it’s not very efficient. In addition, the membranes fail because of clogging. So we’d like to make it cheaper and make the membranes more efficient so they don’t fail as often. We also don’t want to have to use a lot of pressure to get a high flow rate of water.”

One way to dramatically increase the water flow is to make the membrane thinner, since the required force is proportional to the membrane thickness. Researchers have been looking at nanometer-thin membranes such as graphene. However, graphene presents its own challenges in the way it interacts with water.
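To see why thinness matters so much, here’s a rough hydraulic sketch (placeholder numbers of my own, not the paper’s model): for a given permeability and applied pressure, flux through a membrane scales inversely with its thickness.

```python
def relative_flux(pressure_bar, thickness_nm):
    # Arbitrary units; only the ratio below is meaningful.
    return pressure_bar / thickness_nm

ro_film = relative_flux(50, 100.0)  # assume a ~100 nm RO active layer
mos2 = relative_flux(50, 0.65)      # a single MoS2 layer is ~0.65 nm thick
print(f"single-layer MoS2 vs. RO film: ~{mos2 / ro_film:.0f}x the flux")
```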

Aluru’s group has previously studied MoS2 nanopores as a platform for DNA sequencing and decided to explore its properties for water desalination. Using the Blue Waters supercomputer at the National Center for Supercomputing Applications at the U. of I., they found that a single-layer sheet of MoS2 outperformed its competitors thanks to a combination of thinness, pore geometry and chemical properties.

A MoS2 molecule has one molybdenum atom sandwiched between two sulfur atoms. A sheet of MoS2, then, has sulfur coating either side with the molybdenum in the center. The researchers found that creating a pore in the sheet that left an exposed ring of molybdenum around the center of the pore created a nozzle-like shape that drew water through the pore.

“MoS2 has inherent advantages in that the molybdenum in the center attracts water, then the sulfur on the other side pushes it away, so we have much higher rate of water going through the pore,” said graduate student Mohammad Heiranian, the first author of the study. “It’s inherent in the chemistry of MoS2 and the geometry of the pore, so we don’t have to functionalize the pore, which is a very complex process with graphene.”

In addition to the chemical properties, the single-layer sheets of MoS2 have the advantages of thinness, requiring much less energy, which in turn dramatically reduces operating costs. MoS2 also is a robust material, so even such a thin sheet is able to withstand the necessary pressures and water volumes.

The Illinois researchers are establishing collaborations to experimentally test MoS2 for water desalination and to test its rate of fouling, or clogging of the pores, a major problem for plastic membranes. MoS2 is a relatively new material, but the researchers believe that manufacturing techniques will improve as its high performance becomes more sought-after for various applications.

“Nanotechnology could play a great role in reducing the cost of desalination plants and making them energy efficient,” said Amir Barati Farimani, who worked on the study as a graduate student at Illinois and is now a postdoctoral fellow at Stanford University. “I’m in California now, and there’s a lot of talk about the drought and how to tackle it. I’m very hopeful that this work can help the designers of desalination plants. This type of thin membrane can increase return on investment because they are much more energy efficient.”

Here’s a link to and a citation for the paper,

Water desalination with a single-layer MoS2 nanopore by Mohammad Heiranian, Amir Barati Farimani, & Narayana R. Aluru. Nature Communications 6, Article number: 8616 doi:10.1038/ncomms9616 Published 14 October 2015

Graphene membranes

In a July 13, 2015 essay on Nanotechnology Now, Tim Harper provides an overview of the research into using graphene for water desalination and purification/remediation about which he is quite hopeful. There is no mention of an issue with interactions between water and graphene. It should be noted that Tim Harper is the Chief Executive Officer of G20, a company which produces a graphene-based solution (graphene oxide sheets), which can desalinate water and can purify/remediate it. Tim is a scientist and while you might have some hesitation given his fiscal interests, his essay is worthwhile reading as he supplies context and explanations of the science.

The sense of touch via artificial skin

Scientists have been working for years to allow artificial skin to transmit what the brain would recognize as the sense of touch. For anyone who has lost a limb and gotten a prosthetic replacement, the loss of touch is reputedly one of the more difficult losses to accept. The sense of touch is also vital in robotics if the field is to expand to include activities reliant on it, e.g., how much pressure do you use to grasp a cup; how much strength do you apply when moving an object from one place to another?

For anyone interested in the ‘electronic skin and pursuit of touch’ story, I have a Nov. 15, 2013 posting which highlights the evolution of the research into e-skin and what was then some of the latest work.

This posting is a 2015 update of sorts featuring the latest e-skin research from Stanford University and Xerox PARC. (Dexter Johnson in an Oct. 15, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineering] site) provides a good research summary.) For anyone with an appetite for more, there’s this from an Oct. 15, 2015 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

Using flexible organic circuits and specialized pressure sensors, researchers have created an artificial “skin” that can sense the force of static objects. Furthermore, they were able to transfer these sensory signals to the brain cells of mice in vitro using optogenetics. For the many people around the world living with prosthetics, such a system could one day allow them to feel sensation in their artificial limbs.

To create the artificial skin, Benjamin Tee et al. developed a specialized circuit out of flexible, organic materials. It translates static pressure into digital signals that depend on how much mechanical force is applied. A particular challenge was creating sensors that can “feel” the same range of pressure that humans can. Thus, on the sensors, the team used carbon nanotubes molded into pyramidal microstructures, which are particularly effective at tunneling the signals from the electric field of nearby objects to the receiving electrode in a way that maximizes sensitivity.

Transferring the digital signal from the artificial skin system to the cortical neurons of mice proved to be another challenge, since conventional light-sensitive proteins used in optogenetics do not stimulate neural spikes for sufficient durations for these digital signals to be sensed. Tee et al. therefore engineered new optogenetic proteins able to accommodate longer intervals of stimulation. Applying these newly engineered optogenetic proteins to fast-spiking interneurons of the somatosensory cortex of mice in vitro sufficiently prolonged the stimulation interval, allowing the neurons to fire in accordance with the digital stimulation pulse. These results indicate that the system may be compatible with other fast-spiking neurons, including peripheral nerves.

And, there’s an Oct. 15, 2015 Stanford University news release on EurekAlert describing this work from another perspective,

The heart of the technique is a two-ply plastic construct: the top layer creates a sensing mechanism and the bottom layer acts as the circuit to transport electrical signals and translate them into biochemical stimuli compatible with nerve cells. The top layer in the new work featured a sensor that can detect pressure over the same range as human skin, from a light finger tap to a firm handshake.

Five years ago, Bao’s [Zhenan Bao, a professor of chemical engineering at Stanford,] team members first described how to use plastics and rubbers as pressure sensors by measuring the natural springiness of their molecular structures. They then increased this natural pressure sensitivity by indenting a waffle pattern into the thin plastic, which further compresses the plastic’s molecular springs.

To exploit this pressure-sensing capability electronically, the team scattered billions of carbon nanotubes through the waffled plastic. Putting pressure on the plastic squeezes the nanotubes closer together and enables them to conduct electricity.

This allowed the plastic sensor to mimic human skin, which transmits pressure information as short pulses of electricity, similar to Morse code, to the brain. Increasing pressure on the waffled nanotubes squeezes them even closer together, allowing more electricity to flow through the sensor, and those varied impulses are sent as short pulses to the sensing mechanism. Remove pressure, and the flow of pulses relaxes, indicating light touch. Remove all pressure and the pulses cease entirely.
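As a rough illustration of that encoding (my own sketch, not the paper’s circuit): more pressure squeezes the nanotubes together, raising conductance, which the circuit turns into a higher pulse rate, and zero pressure produces no pulses at all.

```python
def pulse_rate_hz(pressure_kpa, max_rate_hz=200.0, half_sat_kpa=25.0):
    """Monotonic, saturating pressure-to-frequency map. All constants
    here are made-up placeholders, not values from the paper."""
    if pressure_kpa <= 0:
        return 0.0
    return max_rate_hz * pressure_kpa / (pressure_kpa + half_sat_kpa)

for p in [0, 1, 10, 50, 100]:  # from no contact to a firm handshake
    print(f"{p:>3} kPa -> {pulse_rate_hz(p):6.1f} Hz")
```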

The team then hooked this pressure-sensing mechanism to the second ply of their artificial skin, a flexible electronic circuit that could carry pulses of electricity to nerve cells.

Importing the signal

Bao’s team has been developing flexible electronics that can bend without breaking. For this project, team members worked with researchers from PARC, a Xerox company, which has a technology that uses an inkjet printer to deposit flexible circuits onto plastic. Covering a large surface is important to making artificial skin practical, and the PARC collaboration offered that prospect.

Finally the team had to prove that the electronic signal could be recognized by a biological neuron. It did this by adapting a technique developed by Karl Deisseroth, a fellow professor of bioengineering at Stanford who pioneered a field that combines genetics and optics, called optogenetics. Researchers bioengineer cells to make them sensitive to specific frequencies of light, then use light pulses to switch cells, or the processes being carried on inside them, on and off.

For this experiment the team members engineered a line of neurons to simulate a portion of the human nervous system. They translated the electronic pressure signals from the artificial skin into light pulses, which activated the neurons, proving that the artificial skin could generate a sensory output compatible with nerve cells.

Optogenetics was only used as an experimental proof of concept, Bao said, and other methods of stimulating nerves are likely to be used in real prosthetic devices. Bao’s team has already worked with Bianxiao Cui, an associate professor of chemistry at Stanford, to show that direct stimulation of neurons with electrical pulses is possible.

Bao’s team envisions developing different sensors to replicate, for instance, the ability to distinguish corduroy versus silk, or a cold glass of water from a hot cup of coffee. This will take time. There are six types of biological sensing mechanisms in the human hand, and the experiment described in Science reports success in just one of them.

But the current two-ply approach means the team can add sensations as it develops new mechanisms. And the inkjet printing fabrication process suggests how a network of sensors could be deposited over a flexible layer and folded over a prosthetic hand.

“We have a lot of work to take this from experimental to practical applications,” Bao said. “But after spending many years in this work, I now see a clear path where we can take our artificial skin.”

Here’s a link to and a citation for the paper,

A skin-inspired organic digital mechanoreceptor by Benjamin C.-K. Tee, Alex Chortos, Andre Berndt, Amanda Kim Nguyen, Ariane Tom, Allister McGuire, Ziliang Carter Lin, Kevin Tien, Won-Gyu Bae, Huiliang Wang, Ping Mei, Ho-Hsiu Chou, Bianxiao Cui, Karl Deisseroth, Tse Nga Ng, & Zhenan Bao. Science 16 October 2015 Vol. 350 no. 6258 pp. 313-316 DOI: 10.1126/science.aaa9306

This paper is behind a paywall.

Inside-out plants show researchers how cellulose forms

Strictly speaking this story of tricking cellulose into growing on the surface rather than the interior of a cell is not a nanotechnology topic but I imagine that the folks who research nanocellulose materials will find this work of great interest. An Oct. 8, 2015 news item on ScienceDaily describes the research,

Researchers have been able to watch the interior cells of a plant synthesize cellulose for the first time by tricking the cells into growing on the plant’s surface.

“The bulk of the world’s cellulose is produced within the thickened secondary cell walls of tissues hidden inside the plant body,” says University of British Columbia Botany PhD candidate Yoichiro Watanabe, lead author of the paper published this week in Science.

“So we’ve never been able to image the cells in high resolution as they produce this all-important biological material inside living plants.”

An Oct. 8, 2015 University of British Columbia (UBC) news release on EurekAlert, which originated the news item, explains the interest in cellulose,

Cellulose, the structural component of cell walls that enables plants to stay upright, is the most abundant biopolymer on earth. It’s a critical resource for pulp and paper, textiles, building materials, and renewable biofuels.

“In order to be structurally sound, plants have to lay down their secondary cell walls very quickly once the plant has stopped growing, like a layer of concrete with rebar,” says UBC botanist Lacey Samuels, one of the senior authors on the paper.

“Based on our study, it appears plant cells need both a high density of the enzymes that create cellulose, and their rapid movement across the cell surface, to make this happen so quickly.”

This work, the culmination of years of research by four UBC graduate students supervised by UBC Forestry researcher Shawn Mansfield and Samuels, was facilitated by collaborations with the Nara Institute of Science and Technology in Japan, which created the special plant lines, and with researchers at the Carnegie Institution for Science at Stanford University, who conducted the live cell imaging.

“This is a major step forward in our understanding of how plants synthesize their walls, specifically cellulose,” says Mansfield. “It could have significant implications for the way plants are bred or selected for improved or altered cellulose ultrastructural traits – which could impact industries ranging from cellulose nanocrystals to toiletries to structural building products.”

The researchers used a modified line of Arabidopsis thaliana, a small flowering plant related to cabbage and mustard, to conduct the experiment. The resulting plants look exactly like their non-modified parents, until they are triggered to make secondary cell walls on their exterior.

One of the other partners in this research, the Carnegie Institution for Science at Stanford University, published an Oct. 8, 2015 news release on EurekAlert focusing on other aspects of the research (Note: Some of this is repetitive),

Now scientists, including Carnegie’s David Ehrhardt and Heather Cartwright, have exploited a new way to watch, in real time, the trafficking of the proteins that make cellulose during the formation of cell walls. They found that the organization of this trafficking by structural proteins called microtubules, combined with the high density and rapid rate of these cellulose-producing enzymes, explains how thick, high-strength secondary walls are built. This basic knowledge helps us understand how plants can stand upright, which was essential for the move of plants from the sea to the land, and may be useful for engineering plants with improved mechanical properties to increase yields or to produce novel bio-materials. The research is published in Science.

The live-cell imaging was conducted at Carnegie with colleagues from the University of British Columbia (UBC) using customized high-end instrumentation. For the first time, it directly tracked cellulose production to observe how xylem cells, cells that transport water and some nutrients, make cellulose for their secondary cell walls. Strong walls are based on a high density of enzymes that catalyze the synthesis of cellulose (called cellulose synthase enzymes) and their rapid movement across the xylem cell surface.

Watching xylem cells lay down cellulose in real time has not been possible before, because the vascular tissues of plants are hidden inside the plant body. Lead author Yoichiro Watanabe of UBC applied a system developed by colleagues at the Nara Institute of Science and Technology to trick plants into making xylem cells on their surface. The researchers fluorescently tagged a cellulose synthase enzyme of the experimental plant Arabidopsis to track the activity using high-end microscopes.

“For me, one of the most exciting aspects of this study was being able to observe how the microtubule cytoskeleton was actively directing the synthesis of the new cell walls at the level of individual enzymes. We can guess how a complex cellular process works from static snapshots, which is what we usually have had to work from in biology, but you can’t really understand the process until you can see it in action,” remarked Carnegie’s David Ehrhardt.

Here’s a link to and a citation for the paper,

Visualization of cellulose synthases in Arabidopsis secondary cell walls by Y. Watanabe, M. J. Meents, L. M. McDonnell, S. Barkwill, A. Sampathkumar, H. N. Cartwright, T. Demura, D. W. Ehrhardt, A.L. Samuels, & S. D. Mansfield. Science 9 October 2015: Vol. 350 no. 6257 pp. 198-203 DOI: 10.1126/science.aac7446

This paper is behind a paywall.

With all of this talk of visualization, it’s only right that the researchers have made an image from their work available,

Caption: An image of artificially-produced cellulose in cells on the surface of a modified Arabidopsis thaliana plant. Credit: University of British Columbia.

$81M for US National Nanotechnology Coordinated Infrastructure (NNCI)

Academics, small business, and industry researchers are the big winners in a US National Science Foundation bonanza according to a Sept. 16, 2015 news item on Nanowerk,

To advance research in nanoscale science, engineering and technology, the National Science Foundation (NSF) will provide a total of $81 million over five years to support 16 sites and a coordinating office as part of a new National Nanotechnology Coordinated Infrastructure (NNCI).

The NNCI sites will provide researchers from academia, government, and companies large and small with access to university user facilities with leading-edge fabrication and characterization tools, instrumentation, and expertise within all disciplines of nanoscale science, engineering and technology.

A Sept. 16, 2015 NSF news release provides a brief history of US nanotechnology infrastructures and describes this latest effort in slightly more detail (Note: Links have been removed),

The NNCI framework builds on the National Nanotechnology Infrastructure Network (NNIN), which enabled major discoveries, innovations, and contributions to education and commerce for more than 10 years.

“NSF’s long-standing investments in nanotechnology infrastructure have helped the research community to make great progress by making research facilities available,” said Pramod Khargonekar, assistant director for engineering. “NNCI will serve as a nationwide backbone for nanoscale research, which will lead to continuing innovations and economic and societal benefits.”

The awards run for up to five years, with each site receiving between $500,000 and $1.6 million per year. Nine of the sites have at least one regional partner institution. These 16 sites are located in 15 states and involve 27 universities across the nation.

Through a fiscal year 2016 competition, one of the newly awarded sites will be chosen to coordinate the facilities. This coordinating office will enhance the sites’ impact as a national nanotechnology infrastructure and establish a web portal linking the individual facilities’ websites, providing the user community with a unified entry point to overall capabilities, tools and instrumentation. The office will also help to coordinate and disseminate best practices for national-level education and outreach programs across sites.

New NNCI awards:

Mid-Atlantic Nanotechnology Hub for Research, Education and Innovation, University of Pennsylvania with partner Community College of Philadelphia, principal investigator (PI): Mark Allen

Texas Nanofabrication Facility, University of Texas at Austin, PI: Sanjay Banerjee

Northwest Nanotechnology Infrastructure, University of Washington with partner Oregon State University, PI: Karl Bohringer

Southeastern Nanotechnology Infrastructure Corridor, Georgia Institute of Technology with partners North Carolina A&T State University and University of North Carolina-Greensboro, PI: Oliver Brand

Midwest Nano Infrastructure Corridor, University of Minnesota Twin Cities with partner North Dakota State University, PI: Stephen Campbell

Montana Nanotechnology Facility, Montana State University with partner Carleton College, PI: David Dickensheets

Soft and Hybrid Nanotechnology Experimental Resource, Northwestern University with partner University of Chicago, PI: Vinayak Dravid

The Virginia Tech National Center for Earth and Environmental Nanotechnology Infrastructure, Virginia Polytechnic Institute and State University, PI: Michael Hochella

North Carolina Research Triangle Nanotechnology Network, North Carolina State University with partners Duke University and University of North Carolina-Chapel Hill, PI: Jacob Jones

San Diego Nanotechnology Infrastructure, University of California, San Diego, PI: Yu-Hwa Lo

Stanford Site, Stanford University, PI: Kathryn Moler

Cornell Nanoscale Science and Technology Facility, Cornell University, PI: Daniel Ralph

Nebraska Nanoscale Facility, University of Nebraska-Lincoln, PI: David Sellmyer

Nanotechnology Collaborative Infrastructure Southwest, Arizona State University with partners Maricopa County Community College District and Science Foundation Arizona, PI: Trevor Thornton

The Kentucky Multi-scale Manufacturing and Nano Integration Node, University of Louisville with partner University of Kentucky, PI: Kevin Walsh

The Center for Nanoscale Systems at Harvard University, Harvard University, PI: Robert Westervelt

The universities are trumpeting this latest nanotechnology funding,

NSF-funded network set to help businesses, educators pursue nanotechnology innovation (North Carolina State University, Duke University, and University of North Carolina at Chapel Hill)

Nanotech expertise earns Virginia Tech a spot in National Science Foundation network

ASU [Arizona State University] chosen to lead national nanotechnology site

UChicago, Northwestern awarded $5 million nanotechnology infrastructure grant

That is a lot of excitement.

Boosting chip speeds with graphene

There’s a certain hysteria associated with chip speeds as engineers and computer scientists chase the ever-improving speeds that consumers have enjoyed for some decades. The question looms: is there some point at which we can no longer improve the speed? Well, we haven’t reached that point yet according to a June 18, 2015 news item on Nanotechnology Now,

Stanford engineers find a simple yet clever way to boost chip speeds: Inside each chip are millions of tiny wires to transport data; wrapping them in a protective layer of graphene could boost speeds by up to 30 percent. [emphasis mine]

A June 16, 2015 Stanford University news release by Tom Abate (also on EurekAlert but dated June 17, 2015), which originated the news item, describes how computer chips are currently designed and the redesign which yields more speed,

A typical computer chip includes millions of transistors connected with an extensive network of copper wires. Although chip wires are unimaginably short and thin compared to household wires, both have one thing in common: in each case the copper is wrapped within a protective sheath.

For years, a material called tantalum nitride has formed the protective layer in chip wires.

Now Stanford-led experiments demonstrate that a different sheathing material, graphene, can help electrons scoot through tiny copper wires in chips more quickly.

Graphene is a single layer of carbon atoms arranged in a strong yet thin lattice. Stanford electrical engineer H.-S. Philip Wong says this modest fix, using graphene to wrap wires, could allow transistors to exchange data faster than is currently possible. And the advantages of using graphene would become greater in the future as transistors continue to shrink.

Wong led a team of six researchers, including two from the University of Wisconsin-Madison, who will present their findings at the Symposia on VLSI Technology and Circuits in Kyoto, a leading venue for the electronics industry.

Ling Li, a graduate student in electrical engineering at Stanford and first author of the research paper, explained why changing the exterior wrapper on connecting wires can have such a big impact on chip performance.

It begins with understanding the dual role of this protective layer: it isolates the copper from the silicon on the chip and also serves to conduct electricity.

On silicon chips, the transistors act like tiny gates to switch electrons on or off. That switching function is how transistors process data.

The copper wires between the transistors transport this data once it is processed.

The isolating material–currently tantalum nitride–keeps the copper from migrating into the silicon transistors and rendering them non-functional.

Why switch to graphene?

Two reasons, starting with the ceaseless desire to keep making electronic components smaller.

When the Stanford team used the thinnest possible layer of tantalum nitride needed to perform this isolating function, they found that the industry-standard material was eight times thicker than the graphene layer that did the same work.

Graphene had a second advantage as a protective sheathing, and here it’s important to differentiate how this outer layer functions in chip wires versus household wires.

In household wires the outer layer insulates the copper to prevent electrocution or fires.

In a chip the layer around the wires is a barrier to prevent copper atoms from infiltrating the silicon. Were that to happen, the transistors would cease to function. So the protective layer isolates the copper from the silicon.

The Stanford experiment showed that graphene could perform this isolating role while also serving as an auxiliary conductor of electrons. Its lattice structure allows electrons to leap from carbon atom to carbon atom straight down the wire, while effectively containing the copper atoms within the copper wire.

These benefits–the thinness of the graphene layer and its dual role as isolator and auxiliary conductor–allow this new wire technology to carry more data between transistors, speeding up overall chip performance in the process.

In today’s chips the benefits are modest; a graphene isolator would boost wire speeds by four percent to 17 percent, depending on the length of the wire. [emphasis mine]

But as transistors and wires continue to shrink in size, the benefits of the ultrathin yet conductive graphene isolator become greater. [emphasis mine] The Stanford engineers estimate that their technology could increase wire speeds by 30 percent in the next two generations.
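
To see why a thinner sheath helps at all, here’s a back-of-envelope sketch (my own toy model, not the Stanford team’s analysis): for a wire of fixed total width, a thinner barrier leaves more cross-section for copper, which lowers resistance and therefore the RC delay that limits signal speed. All dimensions below are hypothetical, and the toy geometry deliberately exaggerates the effect relative to the four-to-17-percent figure quoted above,

```python
# Toy RC-delay model of a chip wire: a copper core wrapped in a barrier layer.
RHO_CU = 1.7e-8   # bulk copper resistivity, ohm*m
LENGTH = 10e-6    # wire length, m (hypothetical)
WIDTH = 40e-9     # total wire width/height, m (hypothetical)
CAP = 2e-15       # wire capacitance, F (assumed unaffected by the barrier)

def rc_delay(barrier_m):
    """RC delay of a square wire whose copper core is WIDTH minus the barrier."""
    core = WIDTH - 2 * barrier_m            # side length of the copper core
    resistance = RHO_CU * LENGTH / core**2  # R = rho * L / A
    return resistance * CAP

tan_delay = rc_delay(8e-9)  # hypothetical 8 nm tantalum nitride barrier
gra_delay = rc_delay(1e-9)  # graphene barrier, ~8x thinner per the article
print(f"delay ratio (TaN / graphene): {tan_delay / gra_delay:.2f}")
```

The real gains are smaller, since barriers don’t wrap all four sides equally and other delay sources dominate, but the trend the model captures, that a fixed barrier thickness grows ever more costly as wires shrink, is exactly why the Stanford engineers expect the graphene advantage to grow in future generations.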

The Stanford researchers think the promise of faster computing will induce other researchers to get interested in wires, and help to overcome some of the hurdles that stand between this proof of principle and common practice.

This would include techniques to grow graphene, especially growing it directly onto wires while chips are being mass-produced. In addition to his University of Wisconsin collaborator Professor Michael Arnold, Wong cited Purdue University Professor Zhihong Chen. Wong noted that the idea of using graphene as an isolator was inspired by Cornell University Professor Paul McEuen and his pioneering research on the basic properties of this marvelous material. Alexander Balandin of the University of California-Riverside has also made contributions to using graphene in chips.

“Graphene has been promised to benefit the electronics industry for a long time, and using it as a copper barrier is perhaps the first realization of this promise,” Wong said.

I gather they’ve decided to highlight the most optimistic outcomes.