Category Archives: nanotechnology

Harvesting plants for electricity

A Feb. 27, 2017 article on Nanowerk describes research which could turn living plants into solar cells and panels (Note: Links have been removed),

Plants power life on Earth. They are the original food source supplying energy to almost all living organisms and the basis of the fossil fuels that feed the power demands of the modern world. But burning the remnants of long-dead forests is changing the world in dangerous ways. Can we better harness the power of living plants today?

One way might be to turn plants into natural solar power stations that could convert sunlight into energy far more efficiently. To do this, we’d need a way of getting the energy out in the form of electricity. One company has found a way to harvest electrons deposited by plants into the soil beneath them. But new research (PNAS, “In vivo polymerization and manufacturing of wires and supercapacitors in plants”) from Finland looks at tapping plants’ energy directly by turning their internal structures into electric circuits.

A Feb. 27, 2017 essay by Stuart Thompson for The Conversation (which originated the article) explains the principles underlying the research (Note: A link has been removed),

Plants contain water-filled tubes called “xylem elements” that carry water from their roots to their leaves. The water flow also carries and distributes dissolved nutrients and other things such as chemical signals. The Finnish researchers, whose work is published in PNAS, developed a chemical that was fed into a rose cutting to form a solid material that could carry and store electricity.

Previous experiments have used a chemical called PEDOT to form conducting wires in the xylem, but it didn’t penetrate further into the plant. For the new research, they designed a molecule called ETE-S that forms similar electrical conductors but can also be carried wherever the stream of water travelling through the xylem goes.

This flow is driven by the attraction between water molecules. When water in a leaf evaporates, it pulls on the chain of molecules left behind, dragging water up through the plant all the way from the roots. You can see this for yourself by placing a plant cutting in food colouring and watching the colour move up through the xylem. The researchers’ method was so similar to the food colouring experiment that they could see where in the plant their electrical conductor had travelled to from its colour.

The result was a complex electronic network permeating the leaves and petals, surrounding their cells and replicating their pattern. The wires that formed conducted electricity up to a hundred times better than those made from PEDOT and could also store electrical energy in the same way as an electronic component called a capacitor.

I recommend reading Thompson’s piece in its entirety.

Mimicking the architecture of materials like wood and bone

Caption: Microstructures like this one developed at Washington State University could be used in batteries, lightweight ultrastrong materials, catalytic converters, supercapacitors and biological scaffolds. Credit: Washington State University

A March 3, 2017 news item on Nanowerk features a new 3D manufacturing technique for creating biolike materials (Note: A link has been removed),

Washington State University nanotechnology researchers have developed a unique, 3-D manufacturing method that for the first time rapidly creates and precisely controls a material’s architecture from the nanoscale to centimeters. The results closely mimic the intricate architecture of natural materials like wood and bone.

They report on their work in the journal Science Advances (“Three-dimensional microarchitected materials and devices using nanoparticle assembly by pointwise spatial printing”) and have filed for a patent.

A March 3, 2017 Washington State University news release by Tina Hilding (also on EurekAlert), which originated the news item, expands on the theme,

“This is a groundbreaking advance in the 3-D architecturing of materials at nano- to macroscales with applications in batteries, lightweight ultrastrong materials, catalytic converters, supercapacitors and biological scaffolds,” said Rahul Panat, associate professor in the School of Mechanical and Materials Engineering, who led the research. “This technique can fill a lot of critical gaps for the realization of these technologies.”

The WSU research team used a 3-D printing method to create foglike microdroplets that contain nanoparticles of silver and to deposit them at specific locations. As the liquid in the fog evaporated, the nanoparticles remained, creating delicate structures. The tiny structures, which look similar to Tinkertoy constructions, are porous, have an extremely large surface area and are very strong.

Silver was used because it is easy to work with. However, Panat said, the method can be extended to any other material that can be crushed into nanoparticles – and almost all materials can be.

The researchers created several intricate and beautiful structures, including microscaffolds that contain solid truss members like a bridge, spirals, electronic connections that resemble accordion bellows or doughnut-shaped pillars.

The manufacturing method itself is similar to a rare, natural process in which tiny fog droplets that contain sulfur evaporate over the hot western Africa deserts and give rise to crystalline flower-like structures called “desert roses.”

Because it uses 3-D printing technology, the new method is highly efficient, creates minimal waste and allows for fast and large-scale manufacturing.

The researchers would like to use such nanoscale and porous metal structures for a number of industrial applications; for instance, the team is developing finely detailed, porous anodes and cathodes for batteries rather than the solid structures that are now used. This advance could transform the industry by significantly increasing battery speed and capacity and allowing the use of new and higher energy materials.

Here’s a link to and a citation for the paper,

Three-dimensional microarchitected materials and devices using nanoparticle assembly by pointwise spatial printing by Mohammad Sadeq Saleh, Chunshan Hu, and Rahul Panat. Science Advances 03 Mar 2017: Vol. 3, no. 3, e1601986 DOI: 10.1126/sciadv.1601986

This paper appears to be open access.

Finally, there is a video.

3D printed biomimetic blood vessel networks

An artificial blood vessel network that could lead the way to regenerating biological blood vessel networks has been 3D printed at the University of California San Diego (UCSD), according to a March 2, 2017 news item on ScienceDaily,

Nanoengineers at the University of California San Diego have 3D printed a lifelike, functional blood vessel network that could pave the way toward artificial organs and regenerative therapies.

The new research, led by nanoengineering professor Shaochen Chen, addresses one of the biggest challenges in tissue engineering: creating lifelike tissues and organs with functioning vasculature — networks of blood vessels that can transport blood, nutrients, waste and other biological materials — and do so safely when implanted inside the body.

A March 2, 2017 UCSD news release (also on EurekAlert), which originated the news item, explains why this is an important development,

Researchers from other labs have used different 3D printing technologies to create artificial blood vessels. But existing technologies are slow, costly and mainly produce simple structures, such as a single blood vessel — a tube, basically. These blood vessels also are not capable of integrating with the body’s own vascular system.

“Almost all tissues and organs need blood vessels to survive and work properly. This is a big bottleneck in making organ transplants, which are in high demand but in short supply,” said Chen, who leads the Nanobiomaterials, Bioprinting, and Tissue Engineering Lab at UC San Diego. “3D bioprinting organs can help bridge this gap, and our lab has taken a big step toward that goal.”

Chen’s lab has 3D printed a vasculature network that can safely integrate with the body’s own network to circulate blood. These blood vessels branch out into many series of smaller vessels, similar to the blood vessel structures found in the body. The work was published in Biomaterials.

Chen’s team developed an innovative bioprinting technology, using their own homemade 3D printers, to rapidly produce intricate 3D microstructures that mimic the sophisticated designs and functions of biological tissues. Chen’s lab has used this technology in the past to create liver tissue and microscopic fish that can swim in the body to detect and remove toxins.

Researchers first create a 3D model of the biological structure on a computer. The computer then transfers 2D snapshots of the model to millions of microscopic-sized mirrors, which are each digitally controlled to project patterns of UV light in the form of these snapshots. The UV patterns are shined onto a solution containing live cells and light-sensitive polymers that solidify upon exposure to UV light. The structure is rapidly printed one layer at a time, in a continuous fashion, creating a 3D solid polymer scaffold encapsulating live cells that will grow and become biological tissue.
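To make that projection-and-curing loop a little more concrete, here is a minimal sketch in Python of the general digital-light-processing idea (my own illustration, not Chen’s actual software; the voxel model, layer thickness and the projector/stage functions are hypothetical stand-ins),

import numpy as np

# Illustrative digital-light-processing print loop: slice a 3D voxel model
# into 2D masks and project each mask as a UV pattern while the stage steps
# down, so the scaffold builds up layer by layer.
model = np.zeros((100, 256, 256), dtype=bool)   # hypothetical voxel model (layers, y, x)
model[:, 100:156, 100:156] = True               # e.g. a simple square channel
layer_step_um = 6.0                             # assumed layer thickness

def project_uv_pattern(mask):
    """Stand-in for sending one binary 2D snapshot to the micromirror array."""
    pass

def lower_stage(distance_um):
    """Stand-in for advancing the stage so the next layer can be cured."""
    pass

for layer_mask in model:            # each 2D snapshot of the 3D model
    project_uv_pattern(layer_mask)  # patterned UV light solidifies the exposed polymer
    lower_stage(layer_step_um)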

“We can directly print detailed microvasculature structures in extremely high resolution. Other 3D printing technologies produce the equivalent of ‘pixelated’ structures in comparison and usually require sacrificial materials and additional steps to create the vessels,” said Wei Zhu, a postdoctoral scholar in Chen’s lab and a lead researcher on the project.

And this entire process takes just a few seconds — a vast improvement over competing bioprinting methods, which normally take hours just to print simple structures. The process also uses materials that are inexpensive and biocompatible.

Chen’s team used medical imaging to create a digital pattern of a blood vessel network found in the body. Using their technology, they printed a structure containing endothelial cells, which are cells that form the inner lining of blood vessels.

The entire structure fits onto a small area measuring 4 millimeters × 5 millimeters, 600 micrometers thick (as thick as a stack containing 12 strands of human hair).

Researchers cultured several structures in vitro for one day, then grafted the resulting tissues into skin wounds of mice. After two weeks, the researchers examined the implants and found that they had successfully grown into and merged with the host blood vessel network, allowing blood to circulate normally.

Chen noted that the implanted blood vessels are not yet capable of other functions, such as transporting nutrients and waste. “We still have a lot of work to do to improve these materials. This is a promising step toward the future of tissue regeneration and repair,” he said.

Moving forward, Chen and his team are working on building patient-specific tissues using human induced pluripotent stem cells, which would prevent transplants from being attacked by a patient’s immune system. And since these cells are derived from a patient’s skin cells, researchers won’t need to extract any cells from inside the body to build new tissue. The team’s ultimate goal is to move their work to clinical trials. “It will take at least several years before we reach that goal,” Chen said.

Here’s a link to and a citation for the paper,

Direct 3D bioprinting of prevascularized tissue constructs with complex microarchitecture by Wei Zhu, Xin Qu, Jie Zhu, Xuanyi Ma, Sherrina Patel, Justin Liu, Pengrui Wang, Cheuk Sun Edwin Lai, Maling Gou, Yang Xu, Kang Zhang, Shaochen Chen. Biomaterials 124 (April 2017) 106-15 http://dx.doi.org/10.1016/j.biomaterials.2017.01.042

This paper is behind a paywall.

There is also an open access copy on the university website, but I cannot confirm that it is identical to the version in the journal.

Singing posters and talking shirts can communicate with you via car radio or smartphones

Singing posters and talking shirts haven’t gone beyond the prototype stage yet, but I imagine University of Washington engineers are hoping this will happen sooner rather than later. In the meantime, they are presenting their work at a conference according to a March 1, 2017 news item on ScienceDaily,

Imagine you’re waiting in your car and a poster for a concert from a local band catches your eye. What if you could just tune your car to a radio station and actually listen to that band’s music? Or perhaps you see the poster on the side of a bus stop. What if it could send your smartphone a link for discounted tickets or give you directions to the venue?

Going further, imagine you go for a run, and your shirt can sense your perspiration and send data on your vital signs directly to your phone.

A new technique pioneered by University of Washington engineers makes these “smart” posters and clothing a reality by allowing them to communicate directly with your car’s radio or your smartphone. For instance, bus stop billboards could send digital content about local attractions. A street sign could broadcast the name of an intersection or notice that it is safe to cross a street, improving accessibility for the disabled. In addition, clothing with integrated sensors could monitor vital signs and send them to a phone. [emphasis mine]

“What we want to do is enable smart cities and fabrics where everyday objects in outdoor environments — whether it’s posters or street signs or even the shirt you’re wearing — can ‘talk’ to you by sending information to your phone or car,” said lead faculty and UW assistant professor of computer science and engineering Shyam Gollakota.

“The challenge is that radio technologies like WiFi, Bluetooth and conventional FM radios would last less than half a day with a coin cell battery when transmitting,” said co-author and UW electrical engineering doctoral student Vikram Iyer. “So we developed a new way of communication where we send information by reflecting ambient FM radio signals that are already in the air, which consumes close to zero power.”

The UW team has — for the first time — demonstrated how to apply a technique called “backscattering” to outdoor FM radio signals. The new system transmits messages by reflecting and encoding audio and data in these signals that are ubiquitous in urban environments, without affecting the original radio transmissions. Results are published in a paper to be presented in Boston at the 14th USENIX Symposium on Networked Systems Design and Implementation in March [2017].

The team demonstrated that a “singing poster” for the band Simply Three placed at a bus stop could transmit a snippet of the band’s music, as well as an advertisement for the band, to a smartphone at a distance of 12 feet or to a car over 60 feet away. They overlaid the audio and data on top of ambient news signals from a local NPR radio station.

The University of Washington has produced a video demonstration of the technology.

A March 1, 2017 University of Washington news release (also on EurekAlert), which originated the news item, explains further (Note: Links have been removed),

“FM radio signals are everywhere. You can listen to music or news in your car and it’s a common way for us to get our information,” said co-author and UW computer science and engineering doctoral student Anran Wang. “So what we do is basically make each of these everyday objects into a mini FM radio station at almost zero power.”

Such ubiquitous low-power connectivity can also enable smart fabric applications such as clothing integrated with sensors to monitor a runner’s gait and vital signs that transmits the information directly to a user’s phone. In a second demonstration, the researchers from the UW Networks & Mobile Systems Lab used conductive thread to sew an antenna into a cotton T-shirt, which was able to use ambient radio signals to transmit data to a smartphone at rates up to 3.2 kilobits per second.

The system works by taking an everyday FM radio signal broadcast from an urban radio tower. The “smart” poster or T-shirt uses a low-power reflector to manipulate the signal in a way that encodes the desired audio or data on top of the FM broadcast to send a “message” to the smartphone receiver on an unoccupied frequency in the FM radio band.

“Our system doesn’t disturb existing FM radio frequencies,” said co-author Joshua Smith, UW associate professor of computer science and engineering and of electrical engineering. “We send our messages on an adjacent band that no one is using — so we can piggyback on your favorite news or music channel without disturbing the original transmission.”

The team demonstrated three different methods for sending audio signals and data using FM backscatter: one simply overlays the new information on top of the existing signals, another takes advantage of unused portions of a stereo FM broadcast, and the third uses cooperation between two smartphones to decode the message.

“Because of the unique structure of FM radio signals, multiplying the original signal with the backscattered signal actually produces an additive frequency change,” said co-author Vamsi Talla, a UW postdoctoral researcher in computer science and engineering. “These frequency changes can be decoded as audio on the normal FM receivers built into cars and smartphones.”
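For readers who would like to see where that ‘additive frequency change’ comes from, the standard product-to-sum identity gives the gist (a simplified sketch of the mixing idea, not the team’s exact modulation scheme),

cos(2π f_c t) · cos(2π Δf t) = ½ cos(2π (f_c + Δf) t) + ½ cos(2π (f_c − Δf) t)

so toggling the reflector at a rate Δf shifts a copy of the broadcast carrier f_c to f_c ± Δf, which is why the backscattered message shows up on an adjacent, unoccupied slot in the FM band that an ordinary receiver can tune to.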

In the team’s demonstrations, the total power consumption of the backscatter system was 11 microwatts, which could be easily supplied by a tiny coin-cell battery for a couple of years, or powered using tiny solar cells.
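As a rough sanity check on that battery claim, here is my own back-of-the-envelope arithmetic in Python (the 225 mAh, 3 V figures are an assumed typical CR2032 coin-cell rating, not numbers from the paper),

# Rough lifetime estimate for an 11-microwatt backscatter tag on a coin cell.
capacity_mah = 225            # assumed typical CR2032 capacity
voltage_v = 3.0               # nominal cell voltage
power_w = 11e-6               # power consumption reported for the backscatter system

energy_j = (capacity_mah / 1000) * 3600 * voltage_v   # about 2,430 joules
lifetime_years = energy_j / power_w / (3600 * 24 * 365)
print(round(lifetime_years, 1))                       # about 7 years

Ignoring self-discharge and conversion losses, that works out to several years of continuous operation, comfortably consistent with the quote above.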

I cannot help but notice that much of the interest in this technology is for monitoring purposes, which could be benign or otherwise.

For anyone curious about the 14th USENIX Symposium on Networked Systems Design and Implementation being held March 27 – 29, 2017 in Boston, Massachusetts, you can find out more on the symposium website.

Magic nano ink

Colour changes © Nature Communications 2017 / MPI [Max Planck Institute] for Intelligent Systems

A March 1, 2017 news item on Nanowerk helps to explain the image seen above (Note: A link has been removed),

Plasmonic printing produces resolutions several times greater than conventional printing methods. In plasmonic printing, colours are formed on the surfaces of tiny metallic particles when light excites their electrons to oscillate. Researchers at the Max Planck Institute for Intelligent Systems in Stuttgart have now shown how the colours of such metallic particles can be altered with hydrogen (Nature Communications, “Dynamic plasmonic colour display”).

The technique could open the way for animating ultra-high-resolution images and for developing extremely sharp displays. At the same time, it provides new approaches for encrypting information and detecting counterfeits.

A March 1, 2017 Max Planck Institute press release, which originated the news item, provides more history and more detail about the research,

Glass artisans in medieval times exploited the effect long before it was even known. They coloured the magnificent windows of gothic cathedrals with nanoparticles of gold, which glowed red in the light. It was not until the middle of the 20th century that the underlying physical phenomenon was given a name: plasmons. These collective oscillations of free electrons are stimulated by the absorption of incident electromagnetic radiation. The smaller the metallic particles, the shorter the wavelength of the absorbed radiation. In some cases, the resonance frequency, i.e., the absorption maximum, falls within the visible light spectrum. The unabsorbed part of the spectrum is then scattered or reflected, creating an impression of colour. The metallic particles, which usually appear silvery, copper-coloured or golden, then take on entirely new colours.

A resolution of 100,000 dots per inch

Researchers are also taking advantage of the effect to develop plasmonic printing, in which tailor-made square metal particles are arranged in specific patterns on a substrate. The edge length of the particles is on the order of 100 nanometres (100 billionths of a metre) or less. This allows a resolution of 100,000 dots per inch – several times greater than what today’s printers and displays can achieve.
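That figure is easy to check with a little arithmetic of my own: one inch is 25,400,000 nanometres, so

25,400,000 nm ÷ 100,000 dots ≈ 254 nm per dot,

which is why square particles less than 100 nanometres on a side, spaced a couple of hundred nanometres apart, can serve as individual dots while the micrometre-scale dots of conventional printing cannot.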

For metallic particles measuring several hundred nanometres across, the resonance frequency of the plasmons lies within the visible light spectrum. When white light falls on such particles, they appear in a specific colour, for example red or blue. The colour of the metal in question is determined by the size of the particles and their distance from each other. These adjustment parameters therefore serve the same purpose in plasmonic printing as the palette of colours in painting.

The trick with the chemical reaction

The Smart Nanoplasmonics Research Group at the Max Planck Institute for Intelligent Systems in Stuttgart also makes use of this colour variability. They are currently working on making dynamic plasmonic printing. They have now presented an approach that allows them to alter the colours of the pixels predictably – even after an image has been printed. “The trick is to use magnesium. It can undergo a reversible chemical reaction in which the metallic character of the element is lost,” explains Laura Na Liu, who leads the Stuttgart research group. “Magnesium can absorb up to 7.6% of hydrogen by weight to form magnesium hydride, or MgH2”, Liu continues. The researchers coat the magnesium with palladium, which acts as a catalyst in the reaction.

During the continuous transition of metallic magnesium into non-metallic MgH2, the colour of some of the pixels changes several times. The colour change and the rate at which it proceeds follow a clear pattern. This is determined both by the size of and the distance between the individual magnesium particles and by the amount of hydrogen present.

In the case of total hydrogen saturation, the colour disappears completely, and the pixels reflect all the white light that falls on them. This is because the magnesium is no longer present in metallic form but only as MgH2. Hence, there are also no free metal electrons that can be made to oscillate.

Minerva’s vanishing act

The scientists demonstrated the effect of such dynamic colour behaviour on a plasmonic print of Minerva, the Roman goddess of wisdom, which also bore the logo of the Max Planck Society. They chose the size of their magnesium particles so that Minerva’s hair first appeared reddish, the head covering yellow, the feather crest red and the laurel wreath and outline of her face blue. They then washed the micro-print with hydrogen. A time-lapse film shows how the individual colours change. Yellow turns red, red turns blue, and blue turns white. After a few minutes all the colours disappear, revealing a white surface instead of Minerva.

The scientists also showed that this process is reversible by replacing the hydrogen stream with a stream of oxygen. The oxygen reacts with the hydrogen in the magnesium hydride to form water, so that the magnesium particles become metallic again. The pixels then change back in reverse order, and in the end Minerva appears in her original colours.

In a similar manner the researchers first made the micro image of a famous Van Gogh painting disappear and then reappear. They also produced complex animations that give the impression of fireworks.

The principle of a new encryption technique

Laura Na Liu can imagine using this principle in a new encryption technology. To demonstrate this, the group formed various letters with magnesium pixels. The addition of hydrogen then caused some letters to disappear over time, like the image of Minerva. “As for the rest of the letters, a thin oxide layer formed on the magnesium particles after exposing the sample in air for a short time before palladium deposition,” Liu explains. This layer is impermeable to hydrogen. The magnesium lying under the oxide layer therefore remains metallic − and visible − because light is able to excite the plasmons in the magnesium.

In this way it is possible to conceal a message, for example by mixing real and nonsensical information. Only the intended recipient is able to make the nonsensical information disappear and filter out the real message. For example, after decoding the message “Hartford” with hydrogen, only the words “art or” would remain visible. To make it more difficult to crack such encrypted messages, the group is currently working on a process that would require a precisely adjusted hydrogen concentration for deciphering.

Liu believes that the technology could also be used some day in the fight against counterfeiting. “For example, plasmonic security features could be printed on banknotes or pharmaceutical packs, which could later be checked or read only under specific conditions unknown to counterfeiters.”

It doesn’t necessarily have to be hydrogen

Laura Na Liu knows that the use of hydrogen makes some applications difficult and impractical for everyday use such as in mobile displays. “We see our work as a starting shot for a new principle: the use of chemical reactions for dynamic printing,” the Stuttgart physicist says. It is certainly conceivable that the research will soon lead to the discovery of chemical reactions for colour changes other than the phase transition between magnesium and magnesium hydride, for example, reactions that require no gaseous reactants.

Here’s a link to and a citation for the paper,

Dynamic plasmonic colour display by Xiaoyang Duan, Simon Kamin, & Na Liu. Nature Communications 8, Article number: 14606 (2017) doi:10.1038/ncomms14606 Published online: 24 February 2017

This paper is open access.

Making lead look like gold (so to speak)

Apparently you can make lead ‘look’ like gold if you can get it to reflect light in the same way. From a Feb. 28, 2017 news item on Nanowerk (Note: A link has been removed),

Since the Middle Ages, alchemists have sought to transmute elements, the most famous example being the long quest to turn lead into gold. Transmutation has been realized in modern times, but on a minute scale using a massive particle accelerator.

Now, theorists at Princeton University have proposed a different approach to this ancient ambition — just make one material behave like another. A computational theory published Feb. 24 [2017] in the journal Physical Review Letters (“How to Make Distinct Dynamical Systems Appear Spectrally Identical”) demonstrates that any two systems can be made to look alike, even if just for the smallest fraction of a second.

In this context, for two objects to “look” like each other, they need to reflect light in the same way. The Princeton researchers’ method involves using light to make non-permanent changes to a substance’s molecules so that they mimic the reflective properties of another substance’s molecules. This ability could have implications for optical computing, a type of computing in which electrons are replaced by photons that could greatly enhance processing power but has proven extremely difficult to engineer. It also could be applied to molecular detection and experiments in which expensive samples could be replaced by cheaper alternatives.

A Feb. 28, 2017 Princeton University news release (also on EurekAlert) by Tien Nguyen, which originated the news item, expands on the theme (Note: Links have been removed),

“It was a big shock for us that such a general statement as ‘any two objects can be made to look alike’ could be made,” said co-author Denys Bondar, an associate research scholar in the laboratory of co-author Herschel Rabitz, Princeton’s Charles Phelps Smyth ’16 *17 Professor of Chemistry.

The Princeton researchers posited that they could control the light that bounces off a molecule or any substance by controlling the light shone on it, which would allow them to alter how it looks. This type of manipulation requires a powerful light source such as an ultrafast laser and would last for only a femtosecond, or one quadrillionth of a second. Unlike normal light sources, this ultrafast laser pulse is strong enough to interact with molecules and distort their electron cloud while not actually changing their identity.

“The light emitted by a molecule depends on the shape of its electron cloud, which can be sculptured by modern lasers,” Bondar said. Using advanced computational theory, the research team developed a method called “spectral dynamic mimicry” that allowed them to calculate the laser pulse shape, which includes timing and wavelength, to produce any desired spectral output. In other words, making any two systems look alike.

Conversely, this spectral control could also be used to make two systems look as different from one another as possible. This differentiation, the researchers suggested, could prove valuable for applications of molecular detections such as identifying toxic versus safe chemicals.

Shaul Mukamel, a chemistry professor at the University of California-Irvine, said that the Princeton research is a step forward in an important and active research field called coherent control, in which light can be manipulated to control behavior at the molecular level. Mukamel, who has collaborated with the Rabitz lab but was not involved in the current work, said that the Rabitz group has had a prominent role in this field for decades, advancing technology such as quantum computing and using light to drive artificial chemical reactivity.

“It’s a very general and nice application of coherent control,” Mukamel said. “It demonstrates that you can, by shaping the optical paths, bring the molecules to do things that you want beforehand — it could potentially be very significant.”

Caption: The researchers are, left to right, Renan Cabrera, an associate research scholar in chemistry; Herschel Rabitz, Princeton’s Charles Phelps Smyth ’16 *17 Professor of Chemistry; associate research scholar in chemistry Denys Bondar; and graduate student Andre Campos. (Photo by C. Todd Reichart, Department of Chemistry)

Here’s a link to and a citation for the paper,

How to Make Distinct Dynamical Systems Appear Spectrally Identical by Andre G. Campos, Denys I. Bondar, Renan Cabrera, and Herschel A. Rabitz. Phys. Rev. Lett. 118, 083201 (Vol. 118, Iss. 8) DOI: https://doi.org/10.1103/PhysRevLett.118.083201 Published 24 February 2017

This paper is behind a paywall.

Peripheral nerves (a rat’s) regenerated when wrapped with nanomesh fiber

A Feb. 28, 2017 news item on Nanowerk announces a proposed nerve regeneration technique (Note: A link has been removed),

A research team consisting of Mitsuhiro Ebara, MANA associate principal investigator, Mechanobiology Group, NIMS, and Hiroyuki Tanaka, assistant professor, Orthopaedic Surgery, Osaka University Graduate School of Medicine, developed a mesh which can be wrapped around injured peripheral nerves to facilitate their regeneration and restore their functions (Acta Biomaterialia, “Electrospun nanofiber sheets incorporating methylcobalamin promote nerve regeneration and functional recovery in a rat sciatic nerve crush injury model”).

This mesh, which is very soft and degrades in the body, incorporates vitamin B12, a substance vital to the normal functioning of nervous systems. When the mesh was applied to injured sciatic nerves in rats, it promoted nerve regeneration and recovery of their motor and sensory functions.

A Feb. 27, 2017 Japan National Institute for Materials Science (NIMS) press release for Osaka University, which originated the news item, provides more detail,

Artificial nerve conduits have been developed in the past to treat peripheral nerve injuries, but they merely form a cross-link to the injury site and do not promote faster nerve regeneration. Moreover, their application is limited to relatively few patients suffering from a complete loss of nerve continuity. Vitamin B12 has been known to facilitate nerve regeneration, but oral administration of it has not proven to be very effective, and no devices capable of delivering vitamin B12 directly to affected sites had been available. Therefore, it had been hoped to develop such medical devices to actively promote nerve regeneration in the many patients who suffer from nerve injuries but have not lost nerve continuity.

The NIMS-Osaka University joint research team recently developed a special mesh that can be wrapped around an injured nerve which releases vitamin B12 (methylcobalamin) until the injury heals. By developing very fine mesh fibers (several hundred nanometers in diameter) and reducing the crystallinity of the fibers, the team successfully created a very soft mesh that can be wrapped around a nerve. This mesh is made of a biodegradable plastic which, when implanted in animals, is eventually eliminated from the body. In fact, experiments demonstrated that application of the mesh directly to injured sciatic nerves in rats resulted in regeneration of axons and recovery of motor and sensory functions within six weeks.

The team is currently negotiating with a pharmaceutical company and other organizations to jointly study clinical application of the mesh as a medical device to treat peripheral nerve disorders, such as carpal tunnel syndrome (CTS).

This study was supported by the JSPS KAKENHI program (Grant Number JP15K10405) and AMED’s Project for Japan Translational and Clinical Research Core Centers (also known as Translational Research Network Program).

Figure 1. Conceptual diagram showing a nanofiber mesh incorporating vitamin B12 and its application to treat a peripheral nerve injury.

Here’s a link to and a citation for the paper,

Electrospun nanofiber sheets incorporating methylcobalamin promote nerve regeneration and functional recovery in a rat sciatic nerve crush injury model by Koji Suzuki, Hiroyuki Tanaka, Mitsuhiro Ebara, Koichiro Uto, Hozo Matsuoka, Shunsuke Nishimoto, Kiyoshi Okada, Tsuyoshi Murase, Hideki Yoshikawa. Acta Biomaterialia http://dx.doi.org/10.1016/j.actbio.2017.02.004 Available online 5 February 2017

This paper is behind a paywall.

Bidirectional prosthetic-brain communication with light?

The possibility of a prosthetic that not only allows a tetraplegic to grab a coffee cup but also lets them feel that cup with their ‘hand’ is one step closer to reality according to a Feb. 22, 2017 news item on ScienceDaily,

Since the early seventies, scientists have been developing brain-machine interfaces; the main application being the use of neural prosthesis in paralyzed patients or amputees. A prosthetic limb directly controlled by brain activity can partially recover the lost motor function. This is achieved by decoding neuronal activity recorded with electrodes and translating it into robotic movements. Such systems however have limited precision due to the absence of sensory feedback from the artificial limb. Neuroscientists at the University of Geneva (UNIGE), Switzerland, asked whether it was possible to transmit this missing sensation back to the brain by stimulating neural activity in the cortex. They discovered that not only was it possible to create an artificial sensation of neuroprosthetic movements, but that the underlying learning process occurs very rapidly. These findings, published in the scientific journal Neuron, were obtained by resorting to modern imaging and optical stimulation tools, offering an innovative alternative to the classical electrode approach.

A Feb. 22, 2017 Université de Genève press release on EurekAlert, which originated the news item, provides more detail,

Motor function is at the heart of all behavior and allows us to interact with the world. Therefore, replacing a lost limb with a robotic prosthesis is the subject of much research, yet successful outcomes are rare. Why is that? Until this moment, brain-machine interfaces are operated by relying largely on visual perception: the robotic arm is controlled by looking at it. The direct flow of information between the brain and the machine remains thus unidirectional. However, movement perception is not only based on vision but mostly on proprioception, the sensation of where the limb is located in space. “We have therefore asked whether it was possible to establish a bidirectional communication in a brain-machine interface: to simultaneously read out neural activity, translate it into prosthetic movement and reinject sensory feedback of this movement back in the brain”, explains Daniel Huber, professor in the Department of Basic Neurosciences of the Faculty of Medicine at UNIGE.

Providing artificial sensations of prosthetic movements

In contrast to invasive approaches using electrodes, Daniel Huber’s team specializes in optical techniques for imaging and stimulating brain activity. Using a method called two-photon microscopy, they routinely measure the activity of hundreds of neurons with single cell resolution. “We wanted to test whether mice could learn to control a neural prosthesis by relying uniquely on an artificial sensory feedback signal”, explains Mario Prsa, researcher at UNIGE and the first author of the study. “We imaged neural activity in the motor cortex. When the mouse activated a specific neuron, the one chosen for neuroprosthetic control, we simultaneously applied stimulation proportional to this activity to the sensory cortex using blue light”. Indeed, neurons of the sensory cortex were rendered photosensitive to this light, allowing them to be activated by a series of optical flashes and thus integrate the artificial sensory feedback signal. The mouse was rewarded upon every above-threshold activation, and 20 minutes later, once the association learned, the rodent was able to more frequently generate the correct neuronal activity.
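For readers who like to see the logic of an experiment written out, here is a deliberately simplified sketch in Python of the closed loop described above (my own illustration, not the UNIGE group’s software; the readout, stimulation and reward functions are hypothetical placeholders and the threshold is invented),

import random

REWARD_THRESHOLD = 5.0        # hypothetical activity threshold (arbitrary units)

def read_motor_neuron():
    """Stand-in for the two-photon readout of the single chosen motor cortex neuron."""
    return random.uniform(0.0, 10.0)

def flash_sensory_cortex(intensity):
    """Stand-in for blue-light stimulation proportional to the recorded activity."""
    pass

def deliver_reward():
    """Stand-in for the reward given to the mouse."""
    pass

for trial in range(1000):
    activity = read_motor_neuron()
    flash_sensory_cortex(intensity=activity)   # artificial sensory feedback scaled to activity
    if activity > REWARD_THRESHOLD:            # above-threshold activation is rewarded
        deliver_reward()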

This means that the artificial sensation was not only perceived, but that it was successfully integrated as a feedback of the prosthetic movement. In this manner, the brain-machine interface functions bidirectionally. The Geneva researchers think that this fabricated sensation is assimilated so rapidly because it most likely taps into very basic brain functions. Feeling the position of our limbs occurs automatically, without much thought and probably reflects fundamental neural circuit mechanisms. In the future, this type of bidirectional interface might allow robotic arms to be moved more precisely, touched objects to be felt, or the force needed to grasp them to be perceived.

At present, the neuroscientists at UNIGE are examining how to produce a more efficient sensory feedback. They are currently capable of doing it for a single movement, but is it also possible to provide multiple feedback channels in parallel? This research sets the groundwork for developing a new generation of more precise, bidirectional neural prostheses.

Towards better understanding the neural mechanisms of neuroprosthetic control

By resorting to modern imaging tools, hundreds of neurons in the surrounding area could also be observed as the mouse learned the neuroprosthetic task. “We know that millions of neural connections exist. However, we discovered that the animal activated only the one neuron chosen for controlling the prosthetic action, and did not recruit any of the neighbouring neurons”, adds Daniel Huber. “This is a very interesting finding since it reveals that the brain can home in on and specifically control the activity of just one single neuron”. Researchers can potentially exploit this knowledge to not only develop more stable and precise decoding techniques, but also gain a better understanding of most basic neural circuit functions. It remains to be discovered what mechanisms are involved in routing signals to the uniquely activated neuron.

Caption: A novel optical brain-machine interface allows bidirectional communication with the brain. While a robotic arm is controlled by neuronal activity recorded with optical imaging (red laser), the position of the arm is fed back to the brain via optical microstimulation (blue laser). Credit: © Daniel Huber, UNIGE

Here’s a link to and a citation for the paper,

Rapid Integration of Artificial Sensory Feedback during Operant Conditioning of Motor Cortex Neurons by Mario Prsa, Gregorio L. Galiñanes, Daniel Huber. Neuron Volume 93, Issue 4, p929–939.e6, 22 February 2017 DOI: http://dx.doi.org/10.1016/j.neuron.2017.01.023 Open access funded by European Research Council

This paper is open access.

The inside scoop on beetle exoskeletons

In the past I’ve covered work on the Namib beetle, whose bumps allow it to access condensation from the air in one of the hottest places on earth, and work on jewel beetles and how their structural colo(u)r is derived. Now, there’s research into a beetle’s body armor from the University of Nebraska-Lincoln according to a Feb. 22, 2017 news item on ScienceDaily,

Beetles wear a body armor that should weigh them down — think medieval knights and turtles. In fact, those hard shells protecting delicate wings are surprisingly light, allowing even flight.

Better understanding the structure and properties of beetle exoskeletons could help scientists engineer lighter, stronger materials. Such materials could, for example, reduce gas-guzzling drag in vehicles and airplanes and reduce the weight of armor, lightening the load for the 21st-century knight.

But revealing exoskeleton architecture at the nanoscale has proven difficult. Nebraska’s Ruiguo Yang, assistant professor of mechanical and materials engineering, and his colleagues found a way to analyze the fibrous nanostructure. …

A Feb. 22, 2017 University of Nebraska-Lincoln news release by Gillian Klucas (also on EurekAlert), which originated the news item, describes the exoskeleton and the work in more detail,

The lightweight exoskeleton is composed of chitin fibers just around 20 nanometers in diameter (a human hair measures approximately 75,000 nanometers in diameter) and packed and piled into layers that twist in a spiral, like a spiral staircase. The small diameter and helical twisting, known as Bouligand, make the structure difficult to analyze.

Yang and his team developed a method of slicing down the spiral to reveal a surface of cross-sections of fibers at different orientations. From that viewpoint, the researchers were able to analyze the fibers’ mechanical properties with the aid of an atomic force microscope. This type of microscope applies a tiny force to a test sample, deforms the sample and monitors the sample’s response. Combining the experimental procedure and theoretical analysis, the researchers were able to reveal the nanoscale architecture of the exoskeleton and the material properties of the nanofibers.

Caption: Yang holds a piece of the atomic force microscope used to measure the beetle’s surface. A small wire can barely be seen in the middle of the piece. Unseen is a two-nano-size probe attached to the wire, which does the actual measuring. Credit: Craig Chandler | University Communication

They made their discoveries in the common figeater beetle, Cotinis mutabilis, a metallic green native of the western United States. But the technique can be used on other beetles and hard-shelled creatures and might also extend to artificial materials with fibrous structures, Yang said.

Comparing beetles with differing demands on their exoskeletons, such as defending against predators or environmental damage, could lead to evolutionary insights as well as a better understanding of the relationship between structural features and their properties.

Yang’s co-authors are Alireza Zaheri and Horacio Espinosa of Northwestern University; Wei Gao of the University of Texas at San Antonio; and Cheryl Hayashi of the University of California, Riverside.

Here’s a link to and a citation for the paper,

Exoskeletons: AFM Identification of Beetle Exocuticle: Bouligand Structure and Nanofiber Anisotropic Elastic Properties by Ruiguo Yang, Alireza Zaheri, Wei Gao, Cheryl Hayashi, Horacio D. Espinosa. Adv. Funct. Mater. vol. 27 (6) 2017 DOI: 10.1002/adfm.201770031 First published: 8 February 2017

This paper is behind a paywall.

Atomic force microscope (AFM) shrunk down to a dime-sized device?

Before getting to the announcement, here’s a little background from Dexter Johnson’s Feb. 21, 2017 posting on his NanoClast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website; Note: Links have been removed),

Ever since the 1980s, when Gerd Binnig of IBM first heard that “beautiful noise” made by the tip of the first scanning tunneling microscope (STM) dragging across the surface of an atom, and he later developed the atomic force microscope (AFM), these microscopy tools have been the bedrock of nanotechnology research and development.

AFMs have continued to evolve over the years, and at one time, IBM even looked into using them as the basis of a memory technology in the company’s Millipede project. Despite all this development, AFMs have remained bulky and expensive devices, costing as much as $50,000 [or more].

Now, here’s the announcement in a Feb. 15, 2017 news item on Nanowerk,

Researchers at The University of Texas at Dallas have created an atomic force microscope on a chip, dramatically shrinking the size — and, hopefully, the price tag — of a high-tech device commonly used to characterize material properties.

“A standard atomic force microscope is a large, bulky instrument, with multiple control loops, electronics and amplifiers,” said Dr. Reza Moheimani, professor of mechanical engineering at UT Dallas. “We have managed to miniaturize all of the electromechanical components down onto a single small chip.”

A Feb. 15, 2017 University of Texas at Dallas news release, which originated the news item, provides more detail,

An atomic force microscope (AFM) is a scientific tool that is used to create detailed three-dimensional images of the surfaces of materials, down to the nanometer scale — that’s roughly on the scale of individual molecules.

The basic AFM design consists of a tiny cantilever, or arm, that has a sharp tip attached to one end. As the apparatus scans back and forth across the surface of a sample, or the sample moves under it, the interactive forces between the sample and the tip cause the cantilever to move up and down as the tip follows the contours of the surface. Those movements are then translated into an image.

“An AFM is a microscope that ‘sees’ a surface kind of the way a visually impaired person might, by touching. You can get a resolution that is well beyond what an optical microscope can achieve,” said Moheimani, who holds the James Von Ehr Distinguished Chair in Science and Technology in the Erik Jonsson School of Engineering and Computer Science. “It can capture features that are very, very small.”

The UT Dallas team created its prototype on-chip AFM using a microelectromechanical systems (MEMS) approach.

“A classic example of MEMS technology are the accelerometers and gyroscopes found in smartphones,” said Dr. Anthony Fowler, a research scientist in Moheimani’s Laboratory for Dynamics and Control of Nanosystems and one of the article’s co-authors. “These used to be big, expensive, mechanical devices, but using MEMS technology, accelerometers have shrunk down onto a single chip, which can be manufactured for just a few dollars apiece.”

The MEMS-based AFM is about 1 square centimeter in size, or a little smaller than a dime. It is attached to a small printed circuit board, about half the size of a credit card, which contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device.

Conventional AFMs operate in various modes. Some map out a sample’s features by maintaining a constant force as the probe tip drags across the surface, while others do so by maintaining a constant distance between the two.

“The problem with using a constant height approach is that the tip is applying varying forces on a sample all the time, which can damage a sample that is very soft,” Fowler said. “Or, if you are scanning a very hard surface, you could wear down the tip.”

The MEMS-based AFM operates in “tapping mode,” which means the cantilever and tip oscillate up and down perpendicular to the sample, and the tip alternately contacts then lifts off from the surface. As the probe moves back and forth across a sample material, a feedback loop maintains the height of that oscillation, ultimately creating an image.

“In tapping mode, as the oscillating cantilever moves across the surface topography, the amplitude of the oscillation wants to change as it interacts with sample,” said Dr. Mohammad Maroufi, a research associate in mechanical engineering and co-author of the paper. “This device creates an image by maintaining the amplitude of oscillation.”
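Here is a toy numerical sketch in Python of what ‘maintaining the amplitude of oscillation’ looks like as a feedback loop (my own illustration; the surface profile, amplitude model and gain are invented and have nothing to do with the UT Dallas controller),

import numpy as np

# Toy tapping-mode feedback: adjust the tip height z so the measured oscillation
# amplitude stays at a setpoint; the record of z then traces the topography.
surface = 50e-9 * np.sin(np.linspace(0, 4 * np.pi, 200))  # invented sample profile (m)
free_amplitude = 100e-9       # cantilever amplitude far from the surface (m)
setpoint = 0.8 * free_amplitude
z = 150e-9                    # initial tip height (m)
gain = 0.3                    # integral feedback gain (arbitrary)
topography = []

for h in surface:
    gap = z - h
    # crude amplitude model: the oscillation is clipped once the gap is smaller
    # than the free amplitude
    amplitude = min(free_amplitude, max(gap, 0.0))
    error = amplitude - setpoint
    z -= gain * error         # move the tip down if the amplitude is too large, up if too small
    topography.append(z)

# 'topography' now approximately reproduces 'surface' plus a constant offset,
# which is essentially how the image is built up.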

Because conventional AFMs require lasers and other large components to operate, their use can be limited. They’re also expensive.

“An educational version can cost about $30,000 or $40,000, and a laboratory-level AFM can run $500,000 or more,” Moheimani said. “Our MEMS approach to AFM design has the potential to significantly reduce the complexity and cost of the instrument.

“One of the attractive aspects about MEMS is that you can mass produce them, building hundreds or thousands of them in one shot, so the price of each chip would only be a few dollars. As a result, you might be able to offer the whole miniature AFM system for a few thousand dollars.”

A reduced size and price tag also could expand the AFMs’ utility beyond current scientific applications.

“For example, the semiconductor industry might benefit from these small devices, in particular companies that manufacture the silicon wafers from which computer chips are made,” Moheimani said. “With our technology, you might have an array of AFMs to characterize the wafer’s surface to find micro-faults before the product is shipped out.”

The lab prototype is a first-generation device, Moheimani said, and the group is already working on ways to improve and streamline the fabrication of the device.

“This is one of those technologies where, as they say, ‘If you build it, they will come.’ We anticipate finding many applications as the technology matures,” Moheimani said.

In addition to the UT Dallas researchers, Michael Ruppert, a visiting graduate student from the University of Newcastle in Australia, was a co-author of the journal article. Moheimani was Ruppert’s doctoral advisor.

So, an AFM that could cost as much as $500,000 for a laboratory has been shrunk to this size and become far less expensive,

A MEMS-based atomic force microscope developed by engineers at UT Dallas is about 1 square centimeter in size (top center). Here it is attached to a small printed circuit board that contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device. Courtesy: University of Texas at Dallas

Of course, there’s still more work to be done as you’ll note when reading Dexter’s Feb. 21, 2017 posting where he features answers to questions he directed to the researchers.

Here’s a link to and a citation for the paper,

On-Chip Dynamic Mode Atomic Force Microscopy: A Silicon-on-Insulator MEMS Approach by Michael G. Ruppert, Anthony G. Fowler, Mohammad Maroufi, S. O. Reza Moheimani. IEEE Journal of Microelectromechanical Systems Volume: 26, Issue: 1, Feb. 2017 DOI: 10.1109/JMEMS.2016.2628890 Date of Publication: 06 December 2016

This paper is behind a paywall.