Tag Archives: Stanford University

Cambridge University researchers tell us why Spiderman can’t exist while Stanford University proves otherwise

A team of zoology researchers at Cambridge University (UK) find themselves in the unenviable position of having their peer-reviewed study used as a source of unintentional humour. I gather zoologists (Cambridge) and engineers (Stanford) don’t have much opportunity to share information.

A Jan. 18, 2016 news item on ScienceDaily announces the Cambridge research findings,

Latest research reveals why geckos are the largest animals able to scale smooth vertical walls — even larger climbers would require unmanageably large sticky footpads. Scientists estimate that a human would need adhesive pads covering 40% of their body surface in order to walk up a wall like Spiderman, and believe their insights have implications for the feasibility of large-scale, gecko-like adhesives.

A Jan. 18, 2016 Cambridge University press release (also on EurekAlert), which originated the news item, describes the research and the thinking that led to the researchers’ conclusions,

Dr David Labonte and his colleagues in the University of Cambridge’s Department of Zoology found that tiny mites use approximately 200 times less of their total body area for adhesive pads than geckos, nature’s largest adhesion-based climbers. And humans? We’d need about 40% of our total body surface, or roughly 80% of our front, to be covered in sticky footpads if we wanted to do a convincing Spiderman impression.

Once an animal is big enough to need a substantial fraction of its body surface to be covered in sticky footpads, the necessary morphological changes would make the evolution of this trait impractical, suggests Labonte.

“If a human, for example, wanted to walk up a wall the way a gecko does, we’d need impractically large sticky feet – our shoes would need to be a European size 145 or a US size 114,” says Walter Federle, senior author also from Cambridge’s Department of Zoology.

The researchers say that these insights into the size limits of sticky footpads could have profound implications for developing large-scale bio-inspired adhesives, which are currently only effective on very small areas.

“As animals increase in size, the amount of body surface area per volume decreases – an ant has a lot of surface area and very little volume, and a blue whale is mostly volume with not much surface area,” explains Labonte.

“This poses a problem for larger climbing species because, when they are bigger and heavier, they need more sticking power to be able to adhere to vertical or inverted surfaces, but they have comparatively less body surface available to cover with sticky footpads. This implies that there is a size limit to sticky footpads as an evolutionary solution to climbing – and that turns out to be about the size of a gecko.”

Larger animals have evolved alternative strategies to help them climb, such as claws and toes to grip with.

The researchers compared the weight and footpad size of 225 climbing animal species including insects, frogs, spiders, lizards and even a mammal.

“We compared animals covering more than seven orders of magnitude in weight, which is roughly the same as comparing a cockroach to the weight of Big Ben, for example,” says Labonte.

These investigations also gave the researchers greater insights into how the size of adhesive footpads is influenced and constrained by the animals’ evolutionary history.

“We were looking at vastly different animals – a spider and a gecko are about as different as a human is to an ant – but if you look at their feet, they have remarkably similar footpads,” says Labonte.

“Adhesive pads of climbing animals are a prime example of convergent evolution – where multiple species have independently, through very different evolutionary histories, arrived at the same solution to a problem. When this happens, it’s a clear sign that it must be a very good solution.”

The researchers believe we can learn from these evolutionary solutions in the development of large-scale manmade adhesives.

“Our study emphasises the importance of scaling for animal adhesion, and scaling is also essential for improving the performance of adhesives over much larger areas. There is a lot of interesting work still to do looking into the strategies that animals have developed in order to maintain the ability to scale smooth walls, which would likely also have very useful applications in the development of large-scale, powerful yet controllable adhesives,” says Labonte.

There is one other possible solution to the problem of how to stick when you’re a large animal, and that’s to make your sticky footpads even stickier.

“We noticed that within closely related species pad size was not increasing fast enough to match body size, probably a result of evolutionary constraints. Yet these animals can still stick to walls,” says Christofer Clemente, a co-author from the University of the Sunshine Coast [Australia].

“Within frogs, we found that they have switched to this second option of making pads stickier rather than bigger. It’s remarkable that we see two different evolutionary solutions to the problem of getting big and sticking to walls,” says Clemente.

“Across all species the problem is solved by evolving relatively bigger pads, but this does not seem possible within closely related species, probably since there is not enough morphological diversity to allow it. Instead, within these closely related groups, pads get stickier. This is a great example of evolutionary constraint and innovation.”
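As an aside, the surface-to-volume argument is easy to check with a back-of-envelope calculation. Here is a rough Python sketch assuming isometric scaling (weight ∝ length³, body surface ∝ length²) and a hypothetical baseline of a 50 g gecko devoting about 4% of its body surface to pads; the baseline numbers are my own illustrative guesses, not figures from the paper:

```python
# Back-of-envelope scaling: adhesive force ~ pad area, weight ~ L^3,
# body surface ~ L^2, so the pad fraction needed grows as L ~ mass^(1/3).
def pad_fraction(mass_g, ref_mass_g=50.0, ref_fraction=0.04):
    """Pad fraction required, scaled from a reference animal
    (a hypothetical ~50 g gecko using ~4% of its body surface)."""
    return ref_fraction * (mass_g / ref_mass_g) ** (1.0 / 3.0)

for label, mass_g in [("mite", 0.001), ("gecko", 50.0), ("human", 70000.0)]:
    print(f"{label:6s} {mass_g:>9g} g -> pad fraction ~ {pad_fraction(mass_g):.1%}")
```

With these toy numbers, a 70 kg human comes out needing pads over roughly 45% of their body surface, pleasingly close to the researchers’ 40% figure, while a mite needs a vanishingly small fraction.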

A researcher at Stanford University (US) took strong exception to the Cambridge team’s conclusions, from a Jan. 28, 2016 article by Michael Grothaus for Fast Company (Note: A link has been removed),

It seems the dreams of the web-slinger’s fans were crushed forever—that is until a rival university swooped in and saved the day. A team of engineers working with mechanical engineering graduate student Elliot Hawkes at Stanford University have announced [in 2014] that they’ve invented a device called “gecko gloves” that proves the Cambridge researchers wrong.

Hawkes has created a video outlining the nature of his dispute with Cambridge University and with US TV talk show host Stephen Colbert, who featured the Cambridge University research in one of his monologues,

To be fair to Hawkes, he does prove his point. A Nov. 21, 2014 Stanford University report by Bjorn Carey describes Hawkes’ ingenious ‘sticky pads’,

Each handheld gecko pad is covered with 24 adhesive tiles, and each of these is covered with sawtooth-shape polymer structures each 100 micrometers long (about the width of a human hair).

The pads are connected to special degressive springs, which become less stiff the further they are stretched. This characteristic means that when the springs are pulled upon, they apply an identical force to each adhesive tile and cause the sawtooth-like structures to flatten.

“When the pad first touches the surface, only the tips touch, so it’s not sticky,” said co-author Eric Eason, a graduate student in applied physics. “But when the load is applied, and the wedges turn over and come into contact with the surface, that creates the adhesion force.”

As with actual geckos, the adhesives can be “turned” on and off. Simply release the load tension, and the pad loses its stickiness. “It can attach and detach with very little wasted energy,” Eason said.

The ability of the device to scale up controllable adhesion to support large loads makes it attractive for several applications beyond human climbing, said Mark Cutkosky, the Fletcher Jones Chair in the School of Engineering and senior author on the paper.

“Some of the applications we’re thinking of involve manufacturing robots that lift large glass panels or liquid-crystal displays,” Cutkosky said. “We’re also working on a project with NASA’s Jet Propulsion Laboratory to apply these to the robotic arms of spacecraft that could gently latch on to orbital space debris, such as fuel tanks and solar panels, and move it to an orbital graveyard or pitch it toward Earth to burn up.”

Previous work on synthetic and gecko adhesives showed that adhesive strength decreased as the size increased. In contrast, the engineers have shown that the special springs in their device make it possible to maintain the same adhesive strength at all sizes from a square millimeter to the size of a human hand.

The current version of the device can support about 200 pounds, Hawkes said, but, theoretically, increasing its size by 10 times would allow it to carry almost 2,000 pounds.
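Since the report doesn’t spell out how degressive springs equalize the load, here is a toy Python sketch of the idea; the force law and numbers are mine, purely for illustration. A saturating force-extension curve means tiles stretched by quite different amounts still pull with nearly the same force, which is what lets total adhesion grow with the number of tiles:

```python
import math

def degressive_force(x, f_max=1.0, x0=0.5):
    """Toy degressive spring: stiffness (the slope) falls off as extension
    grows, so the force saturates toward f_max."""
    return f_max * (1.0 - math.exp(-x / x0))

def spread(forces):
    """Relative spread between the most- and least-loaded tiles."""
    return (max(forces) - min(forces)) / max(forces)

# Tiles on a rough surface end up stretched by different amounts.
extensions = [1.5, 2.0, 3.0]
degressive = [degressive_force(x) for x in extensions]
linear = [0.33 * x for x in extensions]  # ordinary (Hookean) spring, for contrast

print(f"degressive spread: {spread(degressive):.1%}")  # only a few percent
print(f"linear spread:     {spread(linear):.1%}")      # half the maximum load
```

With ordinary springs the most-stretched tile would carry far more than its share and peel first; the saturating springs keep the load nearly uniform.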

Here’s a link to and a citation for the Stanford paper,

Human climbing with efficiently scaled gecko-inspired dry adhesives by Elliot W. Hawkes, Eric V. Eason, David L. Christensen, Mark R. Cutkosky. Journal of the Royal Society Interface DOI: 10.1098/rsif.2014.0675 Published 19 November 2014

This paper is open access.

To be fair to the Cambridge researchers, it’s stretching it a bit to say that Hawkes’ gecko gloves allow someone to be like Spiderman. That was a very careful, slow climb sustained for only a relatively short period of time. Can the human body remain suspended that way for more than a few minutes? How big would your sticky pads have to be to match the wall-climbing ease of movement and staying power of either a gecko or Spiderman?

Here’s a link to and a citation for the Cambridge paper,

Extreme positive allometry of animal adhesive pads and the size limits of adhesion-based climbing by David Labonte, Christofer J. Clemente, Alex Dittrich, Chi-Yun Kuo, Alfred J. Crosby, Duncan J. Irschick, and Walter Federle. PNAS doi: 10.1073/pnas.1519459113

This paper is behind a paywall but there is an open access preprint version, which may differ from the PNAS version, available,

Extreme positive allometry of animal adhesive pads and the size limits of adhesion-based climbing by David Labonte, Christofer J Clemente, Alex Dittrich, Chi-Yun Kuo, Alfred J Crosby, Duncan J Irschick, Walter Federle. bioRxiv
doi: http://dx.doi.org/10.1101/033845

I hope that if the Cambridge researchers respond, they will be witty rather than huffy. Finally, there’s this gecko image (which I love) from the Cambridge researchers,

Caption: This image shows a gecko and ant. Credit: Image courtesy of A Hackmann and D Labonte

Simon Fraser University (Vancouver, Canada) and its president’s (Andrew Petter) dream colloquium: big data

Simon Fraser University (SFU) in Vancouver, Canada has a ‘big data’ start to 2016 planned for President Andrew Petter’s Dream Colloquium, according to a Jan. 5, 2016 news release,

Big data explained: SFU launches spring 2016 President’s Dream Colloquium

Speaker series tackles history, use and implications of collecting data


Canadians experience and interact with big data on a daily basis. Some interactions are as simple as buying coffee or as complex as filling out the Canadian government’s mandatory long-form census. But while big data may be one of the most important technological and social shifts in the past five years, many experts are still grappling with what to do with the massive amounts of information being gathered every day.


To help understand the implications of collecting, analyzing and using big data, Simon Fraser University is launching the President’s Dream Colloquium on Engaging Big Data on Tuesday, January 5.


“Big data affects all sectors of society from governments to businesses to institutions to everyday people,” says Peter Chow-White, SFU Associate Professor of Communication. “This colloquium brings together people from industry and scholars in computing and social sciences in a dialogue around one of the most important innovations of our time next to the Internet.”


This spring marks the first President’s Dream Colloquium where all faculty and guest lectures will be available to the public. The speaker series will give a historical overview of big data, specific case studies in how big data is used today and discuss what the implications are for this information’s usage in business, health and government in the future.


The series includes notable guest speakers such as managing director of Microsoft Research, Surajit Chaudhuri, and Tableau co-founder Pat Hanrahan.  


“Pat Hanrahan is a leader in a number of sectors and Tableau is a leader in accessing big data through visual analytics,” says Chow-White. “Rather than big data being available to only a small amount of professionals, Tableau makes it easier for everyday people to access and understand it in a visual way.”


The speaker series is free to attend with registration. Lectures will be webcast live and available on the President’s Dream Colloquium website.
  • By 2020, over 1/3 of all data will live in or pass through the cloud.
  • Data production will be 44 times greater in 2020 than it was in 2009.
  • More than 70 percent of the digital universe is generated by individuals. But enterprises have responsibility for the storage, protection and management of 80 percent of that.

(Statistics provided by CSC)
The course features lectures from notable guest speakers including:

  • Sasha Issenberg, Author and Journalist
    Tuesday, January 12, 2016
  • Surajit Chaudhuri, Scientist and Managing Director of XCG (Microsoft Research)
    Tuesday, January 19, 2016
  • Pat Hanrahan, Professor at the Stanford Computer Graphics Laboratory, Cofounder and Chief Scientist of Tableau, Founding member of Pixar
    Wednesday, February 3, 2016
  • Sheelagh Carpendale, Professor of Computing Science University of Calgary, Canada Research Chair in Information Visualization
    Tuesday, February 23, 2016, 3:30pm
  • Colin Hill, CEO of GNS Healthcare
    Tuesday, March 8, 2016
  • Chad Skelton, Award-winning Data Journalist and Consultant
    Tuesday, March 22, 2016

Not to worry, even though the first talk with Sasha Issenberg and Mark Pickup (strangely, Pickup, an SFU professor of political science, isn’t mentioned in the news release or on the event page) has taken place, a webcast is being posted to the event page here.

I watched the first event live (via a livestream webcast, which I accessed by clicking on the link on the event’s speaker page) and found it quite interesting, although I’m not sure about asking Issenberg to speak extemporaneously. He rambled and offered more detail about things that don’t matter much to a Canadian audience. I couldn’t tell if part of the problem might lie with the fact that his ‘big data’ book (The Victory Lab: The Secret Science of Winning Campaigns) was published a while back; he’s since published one on medical tourism and is about to publish one on same-sex marriage and the LGBTQ communities in the US. As someone else who moves from topic to topic, I know it’s an effort to ‘go back in time’, to remember the details, and to recapture the enthusiasm that made the piece interesting. Also, he has yet to get the latest scoop on big data and politics in the US, as the 2016 campaign trail doesn’t get underway until later in January.

So, thanks to Issenberg for managing to dredge up as much as he did. Happily, he did recognize that there are differences between Canada and the US and the type of election data that is gathered and other data that can accessed. He provided a capsule version of the data situation in the US where they can identify individuals and predict how they might vote, while Pickup focused on the Canadian scene. As one expects from Canadian political parties and Canadian agencies in general, no one really wants to share how much information they can actually access (yes, that’s true of the Liberals and the NDP [New Democrats] too). By contrast, political parties and strategists in the US quite openly shared information with Issenberg about where and how they get data.

Pickup made some interesting points about data and how more data does not lead to better predictions. He cited a study done on psychologists, which he replicated with undergraduate political science students. The psychologists and the political science students in the two separate studies were given data and asked to predict behaviour. They were then given more data about the same individuals and asked again to predict behaviour. In all, there were four sessions where the subjects were given successively more data and asked to predict behaviour based on it. You may have already guessed: prediction accuracy decreased each time more information was added. Meanwhile, the people making the predictions became more confident even as their predictive accuracy declined. A little disconcerting, non?

Pickup made another point noting that it may be easier to use big data to predict voting behaviour in a two-party system such as they have in the US but a multi-party system such as we have in Canada offers more challenges.

So, it was a good beginning and I look forward to more in the coming weeks (President’s Dream Colloquium on Engaging Big Data). Remember if you can’t listen to the live session, just click through to the event’s speaker’s page where they have hopefully posted the webcast.

The next dream colloquium takes place Tuesday, Jan. 19, 2016,

Big Data since 1854

Dr. Surajit Chaudhuri, Scientist and Managing Director of XCG (Microsoft Research)
Stanford University, PhD
Tuesday, January 19, 2016, 3:30–5 pm
IRMACS Theatre, ASB 10900, Burnaby campus [or by webcast]


No more kevlar-wrapped lithium-ion batteries?

Current lithium-ion batteries present a fire hazard, which is why, last year, a team of researchers at the University of Michigan came up with a plan to prevent fires by wrapping the batteries in kevlar. My Jan. 30, 2015 post describes the research and provides some information about airplane fires caused by the use of lithium-ion batteries.

This year, a team of researchers at Stanford University (US) has invented a lithium-ion (li-ion) battery that shuts itself down when it overheats, according to a Jan. 12, 2016 news item on Nanotechnology Now,

Stanford researchers have developed the first lithium-ion battery that shuts down before overheating, then restarts immediately when the temperature cools.

The new technology could prevent the kind of fires that have prompted recalls and bans on a wide range of battery-powered devices, from recliners and computers to navigation systems and hoverboards [and on airplanes].

“People have tried different strategies to solve the problem of accidental fires in lithium-ion batteries,” said Zhenan Bao, a professor of chemical engineering at Stanford. “We’ve designed the first battery that can be shut down and revived over repeated heating and cooling cycles without compromising performance.”

Stanford has produced a video of Dr. Bao discussing her latest work,

A Jan. 11, 2016 Stanford University news release by Mark Schwartz, which originated the news item, provides more detail about li-ion batteries and the new fire prevention technology,

A typical lithium-ion battery consists of two electrodes and a liquid or gel electrolyte that carries charged particles between them. Puncturing, shorting or overcharging the battery generates heat. If the temperature reaches about 300 degrees Fahrenheit (150 degrees Celsius), the electrolyte could catch fire and trigger an explosion.

Several techniques have been used to prevent battery fires, such as adding flame retardants to the electrolyte. In 2014, Stanford engineer Yi Cui created a “smart” battery that provides ample warning before it gets too hot.

“Unfortunately, these techniques are irreversible, so the battery is no longer functional after it overheats,” said study co-author Cui, an associate professor of materials science and engineering and of photon science. “Clearly, in spite of the many efforts made thus far, battery safety remains an important concern and requires a new approach.”


To address the problem, Cui, Bao and postdoctoral scholar Zheng Chen turned to nanotechnology. Bao recently invented a wearable sensor to monitor human body temperature. The sensor is made of a plastic material embedded with tiny particles of nickel with nanoscale spikes protruding from their surface.

For the battery experiment, the researchers coated the spiky nickel particles with graphene, an atom-thick layer of carbon, and embedded the particles in a thin film of elastic polyethylene.

“We attached the polyethylene film to one of the battery electrodes so that an electric current could flow through it,” said Chen, lead author of the study. “To conduct electricity, the spiky particles have to physically touch one another. But during thermal expansion, polyethylene stretches. That causes the particles to spread apart, making the film nonconductive so that electricity can no longer flow through the battery.”

When the researchers heated the battery above 160 F (70 C), the polyethylene film quickly expanded like a balloon, causing the spiky particles to separate and the battery to shut down. But when the temperature dropped back down to 160 F (70 C), the polyethylene shrunk, the particles came back into contact, and the battery started generating electricity again.

“We can even tune the temperature higher or lower depending on how many particles we put in or what type of polymer materials we choose,” said Bao, who is also a professor, by courtesy, of chemistry and of materials science and engineering. “For example, we might want the battery to shut down at 50 C or 100 C.”
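The shutdown mechanism amounts to a temperature-controlled switch, and the tuning Bao describes is simply a different trip point. Here is a toy model in Python; the trip temperature and current values are my own illustrative choices, not measurements from the paper:

```python
def battery_current(temp_c, trip_c=70.0, normal_a=1.0):
    """Toy model of the thermoresponsive film: below the trip temperature
    the graphene-coated nickel spikes touch and the film conducts; above it
    the polyethylene expands, the spikes separate, and current stops.
    Cooling back below the trip point restores conduction (reversible)."""
    return normal_a if temp_c < trip_c else 0.0

# A heat-and-cool cycle, like the hot-air-gun test described below:
for t in [25, 50, 80, 110, 65, 25]:
    print(f"{t:>3} C -> {battery_current(t):.1f} A")
```

Changing `trip_c` mimics the researchers’ point about tuning the shutdown temperature by choosing different particle loadings or polymers.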

Reversible strategy

To test the stability of the new material, the researchers repeatedly applied heat to the battery with a hot-air gun. Each time, the battery shut down when it got too hot and quickly resumed operating when the temperature cooled.

“Compared with previous approaches, our design provides a reliable, fast, reversible strategy that can achieve both high battery performance and improved safety,” Cui said. “This strategy holds great promise for practical battery applications.”

Here’s a link to and a citation for the paper,

Fast and reversible thermoresponsive polymer switching materials for safer batteries by Zheng Chen, Po-Chun Hsu, Jeffrey Lopez, Yuzhang Li, John W. F. To, Nan Liu, Chao Wang, Sean C. Andrews, Jia Liu, Yi Cui, & Zhenan Bao. Nature Energy 1, Article number: 15009 (2016) doi:10.1038/nenergy.2015.9 Published online: 11 January 2016

This paper appears to be open access.

Nanopores and a new technique for desalination

There’s been more than one piece here about water desalination and purification and/or remediation efforts, and at least one of them claims to have successfully overcome issues, such as reverse osmosis energy needs, which are hampering adoption of various technologies. Now, researchers at the University of Illinois at Urbana-Champaign have developed another new technique for desalinating water while sidestepping reverse osmosis issues, according to a Nov. 11, 2015 news item on Nanowerk (Note: A link has been removed),

University of Illinois engineers have found an energy-efficient material for removing salt from seawater that could provide a rebuttal to poet Samuel Taylor Coleridge’s lament, “Water, water, every where, nor any drop to drink.”

The material, a nanometer-thick sheet of molybdenum disulfide (MoS2) riddled with tiny holes called nanopores, is specially designed to let high volumes of water through but keep salt and other contaminants out, a process called desalination. In a study published in the journal Nature Communications (“Water desalination with a single-layer MoS2 nanopore”), the Illinois team modeled various thin-film membranes and found that MoS2 showed the greatest efficiency, filtering through up to 70 percent more water than graphene membranes. [emphasis mine]

I’ll get to the professor’s comments about graphene membranes in a minute. Meanwhile, a Nov. 11, 2015 University of Illinois news release (also on EurekAlert), which originated the news item, provides more information about the research,

“Even though we have a lot of water on this planet, there is very little that is drinkable,” said study leader Narayana Aluru, a U. of I. professor of mechanical science and engineering. “If we could find a low-cost, efficient way to purify sea water, we would be making good strides in solving the water crisis.

“Finding materials for efficient desalination has been a big issue, and I think this work lays the foundation for next-generation materials. These materials are efficient in terms of energy usage and fouling, which are issues that have plagued desalination technology for a long time,” said Aluru, who also is affiliated with the Beckman Institute for Advanced Science and Technology at the U. of I.

Most available desalination technologies rely on a process called reverse osmosis to push seawater through a thin plastic membrane to make fresh water. The membrane has holes in it small enough to not let salt or dirt through, but large enough to let water through. They are very good at filtering out salt, but yield only a trickle of fresh water. Although thin to the eye, these membranes are still relatively thick for filtering on the molecular level, so a lot of pressure has to be applied to push the water through.

“Reverse osmosis is a very expensive process,” Aluru said. “It’s very energy intensive. A lot of power is required to do this process, and it’s not very efficient. In addition, the membranes fail because of clogging. So we’d like to make it cheaper and make the membranes more efficient so they don’t fail as often. We also don’t want to have to use a lot of pressure to get a high flow rate of water.”

One way to dramatically increase the water flow is to make the membrane thinner, since the required force is proportional to the membrane thickness. Researchers have been looking at nanometer-thin membranes such as graphene. However, graphene presents its own challenges in the way it interacts with water.
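The appeal of an atomically thin membrane follows directly from that proportionality: at a fixed pressure, flux scales inversely with thickness. A quick sketch with assumed thicknesses (my illustrative numbers, not measurements from the study):

```python
def relative_flux(thickness_nm, pressure_rel=1.0):
    """Toy model: water flux ~ pressure / membrane thickness, so at a fixed
    pressure a thinner membrane passes proportionally more water."""
    return pressure_rel / thickness_nm

polymer_nm = 100.0   # assumed thickness of a conventional RO active layer
mos2_nm = 0.7        # assumed thickness of a single MoS2 sheet
ratio = relative_flux(mos2_nm) / relative_flux(polymer_nm)
print(f"MoS2 / polymer flux ratio: {ratio:.0f}x")
```

On this crude reckoning, a single MoS2 sheet passes on the order of a hundred times more water than a conventional polymer layer at the same pressure; the 70 percent advantage over graphene quoted above comes from pore chemistry and geometry rather than thickness.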

Aluru’s group has previously studied MoS2 nanopores as a platform for DNA sequencing and decided to explore its properties for water desalination. Using the Blue Waters supercomputer at the National Center for Supercomputing Applications at the U. of I., they found that a single-layer sheet of MoS2 outperformed its competitors thanks to a combination of thinness, pore geometry and chemical properties.

A MoS2 molecule has one molybdenum atom sandwiched between two sulfur atoms. A sheet of MoS2, then, has sulfur coating either side with the molybdenum in the center. The researchers found that creating a pore in the sheet that left an exposed ring of molybdenum around the center of the pore created a nozzle-like shape that drew water through the pore.

“MoS2 has inherent advantages in that the molybdenum in the center attracts water, then the sulfur on the other side pushes it away, so we have much higher rate of water going through the pore,” said graduate student Mohammad Heiranian, the first author of the study. “It’s inherent in the chemistry of MoS2 and the geometry of the pore, so we don’t have to functionalize the pore, which is a very complex process with graphene.”

In addition to the chemical properties, the single-layer sheets of MoS2 have the advantages of thinness, requiring much less energy, which in turn dramatically reduces operating costs. MoS2 also is a robust material, so even such a thin sheet is able to withstand the necessary pressures and water volumes.

The Illinois researchers are establishing collaborations to experimentally test MoS2 for water desalination and to test its rate of fouling, or clogging of the pores, a major problem for plastic membranes. MoS2 is a relatively new material, but the researchers believe that manufacturing techniques will improve as its high performance becomes more sought-after for various applications.

“Nanotechnology could play a great role in reducing the cost of desalination plants and making them energy efficient,” said Amir Barati Farimani, who worked on the study as a graduate student at Illinois and is now a postdoctoral fellow at Stanford University. “I’m in California now, and there’s a lot of talk about the drought and how to tackle it. I’m very hopeful that this work can help the designers of desalination plants. This type of thin membrane can increase return on investment because they are much more energy efficient.”

Here’s a link to and a citation for the paper,

Water desalination with a single-layer MoS2 nanopore by Mohammad Heiranian, Amir Barati Farimani, & Narayana R. Aluru. Nature Communications 6, Article number: 8616 doi:10.1038/ncomms9616 Published 14 October 2015

Graphene membranes

In a July 13, 2015 essay on Nanotechnology Now, Tim Harper provides an overview of the research into using graphene for water desalination and purification/remediation, about which he is quite hopeful. There is no mention of an issue with interactions between water and graphene. It should be noted that Tim Harper is the Chief Executive Officer of G20, a company that produces a graphene-based solution (graphene oxide sheets) capable of desalinating and purifying/remediating water. Tim is a scientist, and while you might have some hesitation given his fiscal interests, his essay is worthwhile reading as he supplies context and explanations of the science.

The sense of touch via artificial skin

Scientists have been working for years to allow artificial skin to transmit what the brain would recognize as the sense of touch. For anyone who has lost a limb and gotten a prosthetic replacement, the loss of touch is reputedly one of the more difficult losses to accept. The sense of touch is also vital in robotics if the field is to expand and include activities reliant on the sense of touch, e.g., how much pressure do you use to grasp a cup; how much strength do you apply when moving an object from one place to another?

For anyone interested in the ‘electronic skin and pursuit of touch’ story, I have a Nov. 15, 2013 posting which highlights the evolution of the research into e-skin and what was then some of the latest work.

This posting is a 2015 update of sorts featuring the latest e-skin research from Stanford University and Xerox PARC. (Dexter Johnson in an Oct. 15, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineering] site) provides a good research summary.) For anyone with an appetite for more, there’s this from an Oct. 15, 2015 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

Using flexible organic circuits and specialized pressure sensors, researchers have created an artificial “skin” that can sense the force of static objects. Furthermore, they were able to transfer these sensory signals to the brain cells of mice in vitro using optogenetics. For the many people around the world living with prosthetics, such a system could one day allow them to feel sensation in their artificial limbs. To create the artificial skin, Benjamin Tee et al. developed a specialized circuit out of flexible, organic materials. It translates static pressure into digital signals that depend on how much mechanical force is applied. A particular challenge was creating sensors that can “feel” the same range of pressure that humans can. Thus, on the sensors, the team used carbon nanotubes molded into pyramidal microstructures, which are particularly effective at tunneling the signals from the electric field of nearby objects to the receiving electrode in a way that maximizes sensitivity. Transferring the digital signal from the artificial skin system to the cortical neurons of mice proved to be another challenge, since conventional light-sensitive proteins used in optogenetics do not stimulate neural spikes for sufficient durations for these digital signals to be sensed. Tee et al. therefore engineered new optogenetic proteins able to accommodate longer intervals of stimulation. Applying these newly engineered optogenic proteins to fast-spiking interneurons of the somatosensory cortex of mice in vitro sufficiently prolonged the stimulation interval, allowing the neurons to fire in accordance with the digital stimulation pulse. These results indicate that the system may be compatible with other fast-spiking neurons, including peripheral nerves.

And, there’s an Oct. 15, 2015 Stanford University news release on EurekAlert describing this work from another perspective,

The heart of the technique is a two-ply plastic construct: the top layer creates a sensing mechanism and the bottom layer acts as the circuit to transport electrical signals and translate them into biochemical stimuli compatible with nerve cells. The top layer in the new work featured a sensor that can detect pressure over the same range as human skin, from a light finger tap to a firm handshake.

Five years ago, Bao’s [Zhenan Bao, a professor of chemical engineering at Stanford,] team members first described how to use plastics and rubbers as pressure sensors by measuring the natural springiness of their molecular structures. They then increased this natural pressure sensitivity by indenting a waffle pattern into the thin plastic, which further compresses the plastic’s molecular springs.

To exploit this pressure-sensing capability electronically, the team scattered billions of carbon nanotubes through the waffled plastic. Putting pressure on the plastic squeezes the nanotubes closer together and enables them to conduct electricity.

This allowed the plastic sensor to mimic human skin, which transmits pressure information as short pulses of electricity, similar to Morse code, to the brain. Increasing pressure on the waffled nanotubes squeezes them even closer together, allowing more electricity to flow through the sensor, and those varied impulses are sent as short pulses to the sensing mechanism. Remove pressure, and the flow of pulses relaxes, indicating light touch. Remove all pressure and the pulses cease entirely.
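The pressure-to-pulse-frequency behaviour described above can be sketched in a few lines of code. This is a toy linear model for illustration only, not the authors’ actual circuit: the function name, frequency range and saturation point are all assumptions.

```python
def pulse_frequency(pressure_kpa, f_max=200.0, p_max=50.0):
    """Map applied pressure to a pulse rate, mimicking a mechanoreceptor.

    More pressure squeezes the nanotubes closer together, more current
    flows, and pulses arrive faster; zero pressure means no pulses.
    (Illustrative linear model; the numeric ranges are invented.)
    """
    if pressure_kpa <= 0:
        return 0.0                    # remove all pressure and pulses cease
    p = min(pressure_kpa, p_max)      # sensor saturates at its upper range
    return f_max * (p / p_max)

# a light finger tap produces fewer pulses than a firm handshake
print(pulse_frequency(5))    # light touch
print(pulse_frequency(40))   # firm pressure
```

The key property, as in the press release’s Morse-code analogy, is that information is carried by how often pulses arrive rather than by their amplitude.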

The team then hooked this pressure-sensing mechanism to the second ply of their artificial skin, a flexible electronic circuit that could carry pulses of electricity to nerve cells.

Importing the signal

Bao’s team has been developing flexible electronics that can bend without breaking. For this project, team members worked with researchers from PARC, a Xerox company, which has a technology that uses an inkjet printer to deposit flexible circuits onto plastic. Covering a large surface is important to making artificial skin practical, and the PARC collaboration offered that prospect.

Finally the team had to prove that the electronic signal could be recognized by a biological neuron. It did this by adapting a technique developed by Karl Deisseroth, a fellow professor of bioengineering at Stanford who pioneered a field that combines genetics and optics, called optogenetics. Researchers bioengineer cells to make them sensitive to specific frequencies of light, then use light pulses to switch cells, or the processes being carried on inside them, on and off.

For this experiment the team members engineered a line of neurons to simulate a portion of the human nervous system. They translated the electronic pressure signals from the artificial skin into light pulses, which activated the neurons, proving that the artificial skin could generate a sensory output compatible with nerve cells.

Optogenetics was only used as an experimental proof of concept, Bao said, and other methods of stimulating nerves are likely to be used in real prosthetic devices. Bao’s team has already worked with Bianxiao Cui, an associate professor of chemistry at Stanford, to show that direct stimulation of neurons with electrical pulses is possible.

Bao’s team envisions developing different sensors to replicate, for instance, the ability to distinguish corduroy versus silk, or a cold glass of water from a hot cup of coffee. This will take time. There are six types of biological sensing mechanisms in the human hand, and the experiment described in Science reports success in just one of them.

But the current two-ply approach means the team can add sensations as it develops new mechanisms. And the inkjet printing fabrication process suggests how a network of sensors could be deposited over a flexible layer and folded over a prosthetic hand.

“We have a lot of work to take this from experimental to practical applications,” Bao said. “But after spending many years in this work, I now see a clear path where we can take our artificial skin.”

Here’s a link to and a citation for the paper,

A skin-inspired organic digital mechanoreceptor by Benjamin C.-K. Tee, Alex Chortos, Andre Berndt, Amanda Kim Nguyen, Ariane Tom, Allister McGuire, Ziliang Carter Lin, Kevin Tien, Won-Gyu Bae, Huiliang Wang, Ping Mei, Ho-Hsiu Chou, Bianxiao Cui, Karl Deisseroth, Tse Nga Ng, & Zhenan Bao. Science 16 October 2015 Vol. 350 no. 6258 pp. 313-316 DOI: 10.1126/science.aaa9306

This paper is behind a paywall.

Inside-out plants show researchers how cellulose forms

Strictly speaking, this story of tricking the cells that synthesize cellulose into growing on the surface rather than in the interior of a plant is not a nanotechnology topic, but I imagine that the folks who research nanocellulose materials will find this work of great interest. An Oct. 8, 2015 news item on ScienceDaily describes the research,

Researchers have been able to watch the interior cells of a plant synthesize cellulose for the first time by tricking the cells into growing on the plant’s surface.

“The bulk of the world’s cellulose is produced within the thickened secondary cell walls of tissues hidden inside the plant body,” says University of British Columbia Botany PhD candidate Yoichiro Watanabe, lead author of the paper published this week in Science.

“So we’ve never been able to image the cells in high resolution as they produce this all-important biological material inside living plants.”

An Oct. 8, 2015 University of British Columbia (UBC) news release on EurekAlert, which originated the news item, explains the interest in cellulose,

Cellulose, the structural component of cell walls that enables plants to stay upright, is the most abundant biopolymer on earth. It’s a critical resource for pulp and paper, textiles, building materials, and renewable biofuels.

“In order to be structurally sound, plants have to lay down their secondary cell walls very quickly once the plant has stopped growing, like a layer of concrete with rebar,” says UBC botanist Lacey Samuels, one of the senior authors on the paper.

“Based on our study, it appears plant cells need both a high density of the enzymes that create cellulose, and their rapid movement across the cell surface, to make this happen so quickly.”

This work, the culmination of years of research by four UBC graduate students supervised by UBC Forestry researcher Shawn Mansfield and Samuels, was facilitated by a collaboration with the Nara Institute of Science and Technology in Japan to create the special plant lines, and researchers at the Carnegie Institution for Science at Stanford University to conduct the live cell imaging.

“This is a major step forward in our understanding of how plants synthesize their walls, specifically cellulose,” says Mansfield. “It could have significant implications for the way plants are bred or selected for improved or altered cellulose ultrastructural traits – which could impact industries ranging from cellulose nanocrystals to toiletries to structural building products.”

The researchers used a modified line of Arabidopsis thaliana, a small flowering plant related to cabbage and mustard, to conduct the experiment. The resulting plants look exactly like their non-modified parents, until they are triggered to make secondary cell walls on their exterior.

One of the other partners in this research, the Carnegie Institution for Science at Stanford University, published an Oct. 8, 2015 news release on EurekAlert focusing on other aspects of the research (Note: Some of this is repetitive),

Now scientists, including Carnegie’s David Ehrhardt and Heather Cartwright, have exploited a new way to watch the trafficking of the proteins that make cellulose in the formation of cell walls in real time. They found that the organization of this trafficking by structural proteins called microtubules, combined with the high density and rapid movement of these cellulose-producing enzymes, explains how thick, high-strength secondary walls are built. This basic knowledge helps us understand how plants can stand upright, which was essential for the move of plants from the sea to the land, and may be useful for engineering plants with improved mechanical properties, to increase yields or to produce novel bio-materials. The research is published in Science.

The live-cell imaging was conducted at Carnegie with colleagues from the University of British Columbia (UBC) using customized high-end instrumentation. For the first time, it directly tracked cellulose production to observe how xylem cells, cells that transport water and some nutrients, make cellulose for their secondary cell walls. Strong walls are based on a high density of enzymes that catalyze the synthesis of cellulose (called cellulose synthase enzymes) and their rapid movement across the xylem cell surface.

Watching xylem cells lay down cellulose in real time has not been possible before, because the vascular tissues of plants are hidden inside the plant body. Lead author Yoichiro Watanabe of UBC applied a system developed by colleagues at the Nara Institute of Science and Technology to trick plants into making xylem cells on their surface. The researchers fluorescently tagged a cellulose synthase enzyme of the experimental plant Arabidopsis to track the activity using high-end microscopes.

“For me, one of the most exciting aspects of this study was being able to observe how the microtubule cytoskeleton was actively directing the synthesis of the new cell walls at the level of individual enzymes. We can guess how a complex cellular process works from static snapshots, which is what we usually have had to work from in biology, but you can’t really understand the process until you can see it in action,” remarked Carnegie’s David Ehrhardt.

Here’s a link to and a citation for the paper,

Visualization of cellulose synthases in Arabidopsis secondary cell walls by Y. Watanabe, M. J. Meents, L. M. McDonnell, S. Barkwill, A. Sampathkumar, H. N. Cartwright, T. Demura, D. W. Ehrhardt, A.L. Samuels, & S. D. Mansfield. Science 9 October 2015: Vol. 350 no. 6257 pp. 198-203 DOI: 10.1126/science.aac7446

This paper is behind a paywall.

With all of this talk of visualization, it’s only right that the researchers have made an image from their work available,

Caption: An image of artificially-produced cellulose in cells on the surface of a modified Arabidopsis thaliana plant. Credit: University of British Columbia.

$81M for US National Nanotechnology Coordinated Infrastructure (NNCI)

Academics, small business, and industry researchers are the big winners in a US National Science Foundation bonanza according to a Sept. 16, 2015 news item on Nanowerk,

To advance research in nanoscale science, engineering and technology, the National Science Foundation (NSF) will provide a total of $81 million over five years to support 16 sites and a coordinating office as part of a new National Nanotechnology Coordinated Infrastructure (NNCI).

The NNCI sites will provide researchers from academia, government, and companies large and small with access to university user facilities with leading-edge fabrication and characterization tools, instrumentation, and expertise within all disciplines of nanoscale science, engineering and technology.

A Sept. 16, 2015 NSF news release provides a brief history of US nanotechnology infrastructures and describes this latest effort in slightly more detail (Note: Links have been removed),

The NNCI framework builds on the National Nanotechnology Infrastructure Network (NNIN), which enabled major discoveries, innovations, and contributions to education and commerce for more than 10 years.

“NSF’s long-standing investments in nanotechnology infrastructure have helped the research community to make great progress by making research facilities available,” said Pramod Khargonekar, assistant director for engineering. “NNCI will serve as a nationwide backbone for nanoscale research, which will lead to continuing innovations and economic and societal benefits.”

The awards are up to five years and range from $500,000 to $1.6 million each per year. Nine of the sites have at least one regional partner institution. These 16 sites are located in 15 states and involve 27 universities across the nation.

Through a fiscal year 2016 competition, one of the newly awarded sites will be chosen to coordinate the facilities. This coordinating office will enhance the sites’ impact as a national nanotechnology infrastructure and establish a web portal to link the individual facilities’ websites to provide a unified entry point to the user community of overall capabilities, tools and instrumentation. The office will also help to coordinate and disseminate best practices for national-level education and outreach programs across sites.

New NNCI awards:

Mid-Atlantic Nanotechnology Hub for Research, Education and Innovation, University of Pennsylvania with partner Community College of Philadelphia, principal investigator (PI): Mark Allen

Texas Nanofabrication Facility, University of Texas at Austin, PI: Sanjay Banerjee

Northwest Nanotechnology Infrastructure, University of Washington with partner Oregon State University, PI: Karl Bohringer

Southeastern Nanotechnology Infrastructure Corridor, Georgia Institute of Technology with partners North Carolina A&T State University and University of North Carolina-Greensboro, PI: Oliver Brand

Midwest Nano Infrastructure Corridor, University of Minnesota Twin Cities with partner North Dakota State University, PI: Stephen Campbell

Montana Nanotechnology Facility, Montana State University with partner Carleton College, PI: David Dickensheets

Soft and Hybrid Nanotechnology Experimental Resource, Northwestern University with partner University of Chicago, PI: Vinayak Dravid

The Virginia Tech National Center for Earth and Environmental Nanotechnology Infrastructure, Virginia Polytechnic Institute and State University, PI: Michael Hochella

North Carolina Research Triangle Nanotechnology Network, North Carolina State University with partners Duke University and University of North Carolina-Chapel Hill, PI: Jacob Jones

San Diego Nanotechnology Infrastructure, University of California, San Diego, PI: Yu-Hwa Lo

Stanford Site, Stanford University, PI: Kathryn Moler

Cornell Nanoscale Science and Technology Facility, Cornell University, PI: Daniel Ralph

Nebraska Nanoscale Facility, University of Nebraska-Lincoln, PI: David Sellmyer

Nanotechnology Collaborative Infrastructure Southwest, Arizona State University with partners Maricopa County Community College District and Science Foundation Arizona, PI: Trevor Thornton

The Kentucky Multi-scale Manufacturing and Nano Integration Node, University of Louisville with partner University of Kentucky, PI: Kevin Walsh

The Center for Nanoscale Systems at Harvard University, Harvard University, PI: Robert Westervelt

The universities are trumpeting this latest nanotechnology funding,

NSF-funded network set to help businesses, educators pursue nanotechnology innovation (North Carolina State University, Duke University, and University of North Carolina at Chapel Hill)

Nanotech expertise earns Virginia Tech a spot in National Science Foundation network

ASU [Arizona State University] chosen to lead national nanotechnology site

UChicago, Northwestern awarded $5 million nanotechnology infrastructure grant

That is a lot of excitement.

Boosting chip speeds with graphene

There’s a certain hysteria associated with chip speeds as engineers and computer scientists try to achieve the ever-improving speeds that consumers have enjoyed for some decades. The question looms: is there some point at which we can no longer improve the speed? Well, we haven’t reached that point yet according to a June 18, 2015 news item on Nanotechnology Now,

Stanford engineers find a simple yet clever way to boost chip speeds: Inside each chip are millions of tiny wires to transport data; wrapping them in a protective layer of graphene could boost speeds by up to 30 percent. [emphasis mine]

A June 16, 2015 Stanford University news release by Tom Abate (also on EurekAlert but dated June 17, 2015), which originated the news item, describes how computer chips are currently designed and the redesign which yields more speed,

A typical computer chip includes millions of transistors connected with an extensive network of copper wires. Although chip wires are unimaginably short and thin compared to household wires, both have one thing in common: in each case the copper is wrapped within a protective sheath.

For years a material called tantalum nitride has formed the protective layer in chip wires.

Now Stanford-led experiments demonstrate that a different sheathing material, graphene, can help electrons scoot through tiny copper wires in chips more quickly.

Graphene is a single layer of carbon atoms arranged in a strong yet thin lattice. Stanford electrical engineer H.-S. Philip Wong says this modest fix, using graphene to wrap wires, could allow transistors to exchange data faster than is currently possible. And the advantages of using graphene would become greater in the future as transistors continue to shrink.

Wong led a team of six researchers, including two from the University of Wisconsin-Madison, who will present their findings at the Symposia of VLSI Technology and Circuits in Kyoto, a leading venue for the electronics industry.

Ling Li, a graduate student in electrical engineering at Stanford and first author of the research paper, explained why changing the exterior wrapper on connecting wires can have such a big impact on chip performance.

It begins with understanding the dual role of this protective layer: it isolates the copper from the silicon on the chip and also serves to conduct electricity.

On silicon chips, the transistors act like tiny gates to switch electrons on or off. That switching function is how transistors process data.

The copper wires between the transistors transport this data once it is processed.

The isolating material–currently tantalum nitride–keeps the copper from migrating into the silicon transistors and rendering them non-functional.

Why switch to graphene?

Two reasons, starting with the ceaseless desire to keep making electronic components smaller.

When the Stanford team used the thinnest possible layer of tantalum nitride needed to perform this isolating function, they found that the industry standard was eight times thicker than the graphene layer that did the same work.

Graphene had a second advantage as a protective sheathing, and here it’s important to differentiate how this outer layer functions in chip wires versus household wires.

In household wires the outer layer insulates the copper to prevent electrocution or fires.

In a chip the layer around the wires is a barrier to prevent copper atoms from infiltrating the silicon. Were that to happen the transistors would cease to function. So the protective layer isolates the copper from the silicon.

The Stanford experiment showed that graphene could perform this isolating role while also serving as an auxiliary conductor of electrons. Its lattice structure allows electrons to leap from carbon atom to carbon atom straight down the wire, while effectively containing the copper atoms within the copper wire.

These benefits–the thinness of the graphene layer and its dual role as isolator and auxiliary conductor–allow this new wire technology to carry more data between transistors, speeding up overall chip performance in the process.

In today’s chips the benefits are modest; a graphene isolator would boost wire speeds by four percent to 17 percent, depending on the length of the wire. [emphasis mine]

But as transistors and wires continue to shrink in size, the benefits of the ultrathin yet conductive graphene isolator become greater. [emphasis mine] The Stanford engineers estimate that their technology could increase wire speeds by 30 percent in the next two generations.
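As a rough illustration of what those percentages mean, a “wire speed” boost of b shortens a wire’s signal delay by a factor of 1/(1 + b). The baseline delay below is an invented figure purely for illustration; only the 4, 17 and 30 percent numbers come from the article.

```python
# Toy arithmetic for the quoted speed boosts. The 100 ps baseline delay
# is an assumed figure, not from the research.
def improved_delay(baseline_ps, boost):
    """Delay after a fractional speed boost (e.g. boost=0.30 for 30%)."""
    return baseline_ps / (1 + boost)

baseline_ps = 100.0
# short wires today, long wires today, and the future two-generation estimate
for boost in (0.04, 0.17, 0.30):
    print(f"{boost:.0%} boost -> {improved_delay(baseline_ps, boost):.1f} ps")
```

The point of the exercise: a 30 percent boost cuts roughly a quarter off the wire delay, which matters because wire delay, not transistor switching, increasingly dominates chip timing as features shrink.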

The Stanford researchers think the promise of faster computing will induce other researchers to get interested in wires, and help to overcome some of the hurdles in taking this proof of principle into common practice.

This would include techniques to grow graphene, especially growing it directly onto wires while chips are being mass-produced. In addition to his University of Wisconsin collaborator Professor Michael Arnold, Wong cited Purdue University Professor Zhihong Chen. Wong noted that the idea of using graphene as an isolator was inspired by Cornell University Professor Paul McEuen and his pioneering research on the basic properties of this marvelous material. Alexander Balandin of the University of California-Riverside has also made contributions to using graphene in chips.

“Graphene has been promised to benefit the electronics industry for a long time, and using it as a copper barrier is perhaps the first realization of this promise,” Wong said.

I gather they’ve decided to highlight the most optimistic outcomes.

Tanzanian research into nanotechnology-enabled water filters

Inexpensive 99.9999…% filtration of metals, bacteria, and viruses from water is an accomplishment worthy of a prize, as the UK’s Royal Academy of Engineering noted by awarding its first ever Africa Prize for Engineering Innovation of £25,000 ($38,348 [USD?]) to Askwar Hilonga, a Tanzanian academic and entrepreneur. A June 11, 2015 article by Sibusiso Tshabalala for Quartz.com describes the water situation in Tanzania and Hilonga’s accomplishment (Note: Links have been removed),

Despite Tanzania’s proximity to three major lakes, almost half of its population cannot access potable water.

Groundwater is often the alternative, but the supply is not always clean. Mining waste (pdf, pg 410) and toxic drainage systems easily leak into fresh groundwater, leaving the water contaminated.

Enter Askwar Hilonga: a 38-year-old chemical engineering PhD and entrepreneur. With 33 academic journal articles on nanotechnology to his name, Hilonga aims to solve Tanzania’s water contamination problems by using nanotechnology to customize water filters.

There are other filters available (according to Tshabalala’s article) but Hilonga’s has a unique characteristic in addition to being highly efficient and inexpensive,

Purifying water using nanotechnology is hardly a new thing. In 2010, researchers at the Yi Cui Lab at Stanford University developed a synthetic “nanoscavenger” made out of two silver layers that enable nanoparticles to disinfect water from contaminating bacteria.

What makes Hilonga’s water filter different from the Stanford-developed “nanoscavenger”, or the popular LifeStraw developed by the Swiss-based health innovation company Vestergaard 10 years ago?

“It is customized. The filter can be tailored for specific individual, household and communal use,” says Hilonga.

A June 2, 2015 news item about the award on BBC (British Broadcasting Corporation) online describes how the filter works,

The sand-based water filter that cleans contaminated drinking water using nanotechnology has already been trademarked.

“I put water through sand to trap debris and bacteria,” Mr Hilonga told the BBC’s Newsday programme about the filter.

“But sand cannot remove contaminants like fluoride and other heavy metals so I put them through nano materials to remove chemical contaminants.”

Hilonga describes the filter in a little more detail in his May 30, 2014 video submitted for the UK Royal Academy of Engineering’s prize (Africa Prize for Engineering Innovation).

Finalists for the prize (there were four) received a six-month mentorship, which included help to develop the technology further and with business plans. Hilonga has already enabled 23 entrepreneurs to develop nanofilter businesses, according to the Tshabalala article,

Through the Gongali Model Company, a university spin-off company which he co-founded, Hilonga has already enabled 23 entrepreneurs in Karatu to set up their businesses with the filters, and local schools to provide their learners with clean drinking water.

With this prize money, Hilonga will be able to lower the price of his filter ($130 [USD?]) according to the BBC news item.

Congratulations to Dr. Hilonga and his team! For anyone curious about the Gongali Model Company, you can go here.

Water-fueled computer

A computer fueled by water? A fascinating concept as described in a June 9, 2015 news item on ScienceDaily,

Computers and water typically don’t mix, but in Manu Prakash’s lab, the two are one and the same. Prakash, an assistant professor of bioengineering at Stanford, and his students have built a synchronous computer that operates using the unique physics of moving water droplets.

A June 8, 2015 Stanford University news release by Bjorn Carey, which originated the news item, details the ideas (new applications) and research (open access to the tools for creating water droplet-fueled circuits) further,

The computer is nearly a decade in the making, incubated from an idea that struck Prakash when he was a graduate student. The work combines his expertise in manipulating droplet fluid dynamics with a fundamental element of computer science – an operating clock.

“In this work, we finally demonstrate a synchronous, universal droplet logic and control,” Prakash said.

Because of its universal nature, the droplet computer can theoretically perform any operation that a conventional electronic computer can crunch, although at significantly slower rates. Prakash and his colleagues, however, have a more ambitious application in mind.

“We already have digital computers to process information. Our goal is not to compete with electronic computers or to operate word processors on this,” Prakash said. “Our goal is to build a completely new class of computers that can precisely control and manipulate physical matter. Imagine if when you run a set of computations that not only information is processed but physical matter is algorithmically manipulated as well. We have just made this possible at the mesoscale.”

The ability to precisely control droplets using fluidic computation could have a number of applications in high-throughput biology and chemistry, and possibly new applications in scalable digital manufacturing.

The crucial clock

For nearly a decade since he was in graduate school, an idea has been nagging at Prakash: What if he could use little droplets as bits of information and utilize the precise movement of those drops to process both information and physical materials simultaneously? Eventually, Prakash decided to build a rotating magnetic field that could act as a clock to synchronize all the droplets. The idea showed promise, and in the early stages of the project, Prakash recruited a graduate student, Georgios “Yorgos” Katsikis, who is the first author on the paper.

Computer clocks are responsible for nearly every modern convenience. Smartphones, DVRs, airplanes, the Internet – without a clock, none of these could operate without frequent and serious complications. Nearly every computer program requires several simultaneous operations, each conducted in a perfect step-by-step manner. A clock makes sure that these operations start and stop at the same times, thus ensuring that the information synchronizes.

The results are dire if a clock isn’t present. It’s like soldiers marching in formation: If one person falls dramatically out of time, it won’t be long before the whole group falls apart. The same is true if multiple simultaneous computer operations run without a clock to synchronize them, Prakash explained.

“The reason computers work so precisely is that every operation happens synchronously; it’s what made digital logic so powerful in the first place,” Prakash said.

A magnetic clock

Developing a clock for a fluid-based computer required some creative thinking. It needed to be easy to manipulate, and also able to influence multiple droplets at a time. The system needed to be scalable so that in the future, a large number of droplets could communicate amongst each other without skipping a beat. Prakash realized that a rotating magnetic field might do the trick.

Katsikis and Prakash built arrays of tiny iron bars on glass slides that look something like a Pac-Man maze. They laid a blank glass slide on top and sandwiched a layer of oil in between. Then they carefully injected into the mix individual water droplets that had been infused with tiny magnetic nanoparticles.

Next, they turned on the magnetic field. Every time the field flips, the polarity of the bars reverses, drawing the magnetized droplets in a new, predetermined direction, like slot cars on a track. Every rotation of the field counts as one clock cycle, like a second hand making a full circle on a clock face, and every drop marches exactly one step forward with each cycle.

A camera records the interactions between individual droplets, allowing observation of computation as it occurs in real time. The presence or absence of a droplet represents the 1s and 0s of binary code, and the clock ensures that all the droplets move in perfect synchrony, and thus the system can run virtually forever without any errors.

“Following these rules, we’ve demonstrated that we can make all the universal logic gates used in electronics, simply by changing the layout of the bars on the chip,” said Katsikis. “The actual design space in our platform is incredibly rich. Give us any Boolean logic circuit in the world, and we can build it with these little magnetic droplets moving around.”

The current paper describes the fundamental operating regime of the system and demonstrates building blocks for synchronous logic gates, feedback and cascadability – hallmarks of scalable computation. A simple state machine including 1-bit memory storage (known as a “flip-flop”) is also demonstrated using these basic building blocks.
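The clocked presence/absence scheme lends itself to a short abstract sketch. This models only the logic, not the magnetic-bar physics or the paper’s actual gate geometry; the function names and the merge-junction gate below are illustrative assumptions.

```python
# Abstract model of clocked droplet logic: a droplet's presence (1) or
# absence (0) at a track position is a bit, and every rotation of the
# magnetic field advances all droplets one step in lockstep.

def step(track):
    """One clock cycle: every droplet marches exactly one step forward."""
    return [0] + track[:-1]

def and_or_gate(a, b):
    """Two droplets meeting at a junction: one exit carries a droplet only
    if both inputs had one (AND); the other if either did (OR)."""
    return a and b, a or b

print(and_or_gate(1, 1))           # both droplets present -> (1, 1)
print(and_or_gate(1, 0))           # one droplet present  -> (0, 1)

track = [1, 0, 1, 0]
print(step(step(track)))           # two clock cycles shift every droplet
```

Because each cycle moves every droplet exactly one step, gates like this can be chained indefinitely without drops falling out of sync – the cascadability the press release emphasizes.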

A new way to manipulate matter

The current chips are about half the size of a postage stamp, and the droplets are smaller than poppy seeds, but Katsikis said that the physics of the system suggests it can be made even smaller. Combined with the fact that the magnetic field can control millions of droplets simultaneously, this makes the system exceptionally scalable.

“We can keep making it smaller and smaller so that it can do more operations per time, so that it can work with smaller droplet sizes and do more number of operations on a chip,” said graduate student and co-author Jim Cybulski. “That lends itself very well to a variety of applications.”

Prakash said the most immediate application might involve turning the computer into a high-throughput chemistry and biology laboratory. Instead of running reactions in bulk test tubes, each droplet can carry some chemicals and become its own test tube, and the droplet computer offers unprecedented control over these interactions.

From the perspective of basic science, part of why the work is so exciting, Prakash said, is that it opens up a new way of thinking about computation in the physical world. Although the physics of computation has previously been applied to understand the limits of computation, the physical aspects of bits of information have never been exploited as a new way to manipulate matter at the mesoscale (10 microns to 1 millimeter).

Because the system is extremely robust and the team has uncovered universal design rules, Prakash plans to make a design tool for these droplet circuits available to the public. Any group of people can now cobble together the basic logic blocks and make any complex droplet circuit they desire.

“We’re very interested in engaging anybody and everybody who wants to play, to enable everyone to design new circuits based on building blocks we describe in this paper or discover new blocks. Right now, anyone can put these circuits together to form a complex droplet processor with no external control – something that was a very difficult challenge previously,” Prakash said.

“If you look back at big advances in society, computation takes a special place. We are trying to bring the same kind of exponential scale up because of computation we saw in the digital world into the physical world.”

Here’s a link to and a citation for the paper,

Synchronous universal droplet logic and control by Georgios Katsikis, James S. Cybulski, & Manu Prakash. Nature Physics (2015) doi:10.1038/nphys3341 Published online 08 June 2015

This paper is behind a paywall.

For anyone interested in creating a water-fueled circuit, you could try contacting Manu Prakash, Bioengineering: manup@stanford.edu.