Tag Archives: Georgia Institute of Technology

Brain-friendly interface to replace neural prosthetics one day?

This research will not find itself occupying anyone’s brain for some time to come, but it is interesting to learn that neural prosthetics have some drawbacks and that work is being done to address them. From an Aug. 10, 2015 news item on Azonano,

Instead of using neural prosthetic devices–which suffer from immune-system rejection and are believed to fail due to a material and mechanical mismatch–a multi-institutional team, including Lohitash Karumbaiah of the University of Georgia’s Regenerative Bioscience Center, has developed a brain-friendly extracellular matrix environment of neuronal cells that contain very little foreign material. These by-design electrodes are shielded by a covering that the brain recognizes as part of its own composition.

An Aug. 5, 2015 University of Georgia news release, which originated the news item, describes the new approach and technique in more detail,

Although once believed to be devoid of immune cells and therefore of immune responses, the brain is now recognized to have its own immune system that protects it against foreign invaders.

“This is not by any means the device that you’re going to implant into a patient,” said Karumbaiah, an assistant professor of animal and dairy science in the UGA College of Agricultural and Environmental Sciences. “This is proof of concept that extracellular matrix can be used to ensheathe a functioning electrode without the use of any other foreign or synthetic materials.”

Implantable neural prosthetic devices in the brain have been around for almost two decades, helping people living with limb loss and spinal cord injury become more independent. However, not only do neural prosthetic devices suffer from immune-system rejection, but most are believed to eventually fail because of a mismatch between the soft brain tissue and the rigid devices.

The collaboration, led by Wen Shen and Mark Allen of the University of Pennsylvania, found that the extracellular matrix derived electrodes adapted to the mechanical properties of brain tissue and were capable of acquiring neural recordings from the brain cortex.

“Neural interface technology is literally mind boggling, considering that one might someday control a prosthetic limb with one’s own thoughts,” Karumbaiah said.

The study’s joint collaborators were Ravi Bellamkonda, who conceived the new approach and is chair of the Wallace H. Coulter Department of Biomedical Engineering at the Georgia Institute of Technology and Emory University, as well as Allen, who at the time was director of the Institute for Electronics and Nanotechnology.

“Hopefully, once we converge upon the nanofabrication techniques that would enable these to be clinically translational, this same methodology could then be applied in getting these extracellular matrix derived electrodes to be the next wave of brain implants,” Karumbaiah said.

Currently, one out of every 190 Americans is living with limb loss, according to the National Institutes of Health. There is a significant burden in cost of care and quality of life for people suffering from this disability.

The research team is one part of many in the prosthesis industry, which includes those who design the robotics for the artificial limbs, others who make the neural prosthetic devices and developers who design the software that decodes the neural signal.

“What neural prosthetic devices do is communicate seamlessly to an external prosthesis,” Karumbaiah said, “providing independence of function without having to have a person or a facility dedicated to their care.”

Karumbaiah hopes further collaboration will allow them to make positive changes in the industry, saying that, “it’s the researcher-to-industry kind of conversation that now needs to take place, where companies need to come in and ask: ‘What have you learned? How are the devices deficient, and how can we make them better?'”

Here’s a link to and a citation for the paper,

Extracellular matrix-based intracortical microelectrodes: Toward a microfabricated neural interface based on natural materials by Wen Shen, Lohitash Karumbaiah, Xi Liu, Tarun Saxena, Shuodan Chen, Radhika Patkar, Ravi V. Bellamkonda, & Mark G. Allen. Microsystems & Nanoengineering 1, Article number: 15010 (2015) doi:10.1038/micronano.2015.10

This appears to be an open access paper.

One final note, I have written frequently about prosthetics and neural prosthetics, which you can find by using either of those terms and/or human enhancement. Here’s my latest piece, a March 25, 2015 posting.

Controlling water with ‘stick-slip surfaces’

Controlling water could lead to better designed microfluidic devices such as ‘lab-on-a-chip’. A July 27, 2015 news item on Nanowerk announces a new technique for controlling water,

Coating the inside of glass microtubes with a polymer hydrogel material dramatically alters the way capillary forces draw water into the tiny structures, researchers have found. The discovery could provide a new way to control microfluidic systems, including popular lab-on-a-chip devices.

Capillary action draws water and other liquids into confined spaces such as tubes, straws, wicks and paper towels, and the flow rate can be predicted using a simple hydrodynamic analysis. But a chance observation by researchers at the Georgia Institute of Technology [US] will cause a recalculation of those predictions for conditions in which hydrogel films line the tubes carrying water-based liquids.

“Rather than moving according to conventional expectations, water-based liquids slip to a new location in the tube, get stuck, then slip again – and the process repeats over and over again,” explained Andrei Fedorov, a professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech. “Instead of filling the tube with a rate of liquid penetration that slows with time, the water propagates at a nearly constant speed into the hydrogel-coated capillary. This was very different from what we had expected.”

A July 27, 2015 Georgia Institute of Technology (Georgia Tech) news release (also on EurekAlert) by John Toon, which originated the news item, describes the work in more detail,

When the opening of a thin glass tube is exposed to a droplet of water, the liquid begins to flow into the tube, pulled by a combination of surface tension in the liquid and adhesion between the liquid and the walls of the tube. Leading the way is a meniscus, a curved surface of the water at the leading edge of the water column. An ordinary borosilicate glass tube fills by capillary action at a gradually decreasing rate, with the meniscus position advancing as the square root of time.

But when the inside of a tube is coated with a very thin layer of poly(N-isopropylacrylamide), a so-called “smart” polymer (PNIPAM), everything changes. Water entering a tube coated on the inside with a dry hydrogel film must first wet the film and allow it to swell before it can proceed farther into the tube. The wetting and swelling take place not continuously, but with discrete steps in which the water meniscus first sticks and its motion remains arrested while the polymer layer locally deforms. The meniscus then rapidly slides for a short distance before the process repeats. This “stick-slip” process forces the water to move into the tube in a step-by-step motion.

The flow rate measured by the researchers in the coated tube is three orders of magnitude less than the flow rate in an uncoated tube. A linear equation describes the time dependence of the filling process instead of a classical quadratic equation which describes filling of an uncoated tube.

“Instead of filling the capillary in a hundredth of a second, it might take tens of seconds to fill the same capillary,” said Fedorov. “Though there is some swelling of the hydrogel upon contact with water, the change in the tube diameter is negligible due to the small thickness of the hydrogel layer. This is why we were so surprised when we first observed such a dramatic slow-down of the filling process in our experiments.”
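For readers who like numbers, the contrast between the two filling laws is easy to reproduce. Here’s a minimal Python sketch comparing the classical Lucas–Washburn square-root-of-time filling of a bare tube against the near-constant-speed filling reported for the hydrogel-coated tube; all parameter values (tube radius, coated-tube speed, and so on) are my own illustrative assumptions, not numbers from the paper,

```python
import math

# Illustrative parameters (my assumptions, not values from the paper)
gamma = 0.072    # surface tension of water, N/m
theta = 0.0      # contact angle, radians (perfect wetting, for simplicity)
mu = 1.0e-3      # viscosity of water, Pa*s
r = 50e-6        # capillary radius, m
v_coated = 1e-4  # assumed constant filling speed in the coated tube, m/s

def washburn_position(t):
    """Lucas-Washburn law: meniscus position grows as sqrt(t) in a bare tube."""
    return math.sqrt(gamma * r * math.cos(theta) * t / (2 * mu))

def coated_position(t):
    """Hydrogel-coated tube: near-constant propagation speed, linear in t."""
    return v_coated * t

for t in (0.001, 0.01, 0.1, 1.0, 10.0):
    print(f"t = {t:6.3f} s   bare: {washburn_position(t)*1e3:7.2f} mm   "
          f"coated: {coated_position(t)*1e3:7.3f} mm")
```

With these assumed values the bare tube covers millimetres within hundredths of a second while the coated tube needs tens of seconds for the same distance, the same three-orders-of-magnitude slow-down Fedorov describes.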

The researchers – who included graduate students James Silva, Drew Loney and Ren Geryak and senior research engineer Peter Kottke – tried the experiment again using glycerol, a liquid that is not absorbed by the hydrogel. With glycerol, the capillary action proceeded through the hydrogel-coated microtube as with an uncoated tube in agreement with conventional theory. After using high-resolution optical visualization to study the meniscus propagation while the polymer swelled, the researchers realized they could put this previously-unknown behavior to good use.

Water absorption by the hydrogels occurs only when the materials remain below a specific transition temperature. When heated above that temperature, the materials no longer absorb water, eliminating the “stick-slip” phenomenon in the microtubes and allowing them to behave like ordinary tubes.

This ability to turn the stick-slip behavior on and off with temperature could provide a new way to control the flow of water-based liquid in microfluidic devices, including labs-on-a-chip. The transition temperature can be controlled by varying the chemical composition of the hydrogel.

“By locally heating or cooling the polymer inside a microfluidic chamber, you can either speed up the filling process or slow it down,” Fedorov said. “The time it takes for the liquid to travel the same distance can be varied up to three orders of magnitude. That would allow precise control of fluid flow on demand using external stimuli to change polymer film behavior.”

The heating or cooling could be done locally with lasers, tiny heaters, or thermoelectric devices placed at specific locations in the microfluidic devices.

That could allow precise timing of reactions in microfluidic devices by controlling the rate of reactant delivery and product removal, or allow a sequence of fast and slow reactions to occur. Another important application could be controlled drug release in which the desired rate of molecule delivery could be dynamically tuned over time to achieve the optimal therapeutic outcome.

In future work, Fedorov and his team hope to learn more about the physics of the hydrogel-modified capillaries and study capillary flow using partially-transparent microtubes. They also want to explore other “smart” polymers which change the flow rate in response to different stimuli, including the changing pH of the liquid, exposure to electromagnetic radiation, or the induction of mechanical stress – all of which can change the properties of a particular hydrogel designed to be responsive to those triggers.

“These experimental and theoretical results provide a new conceptual framework for liquid motion confined by soft, dynamically evolving polymer interfaces in which the system creates an energy barrier to further motion through elasto-capillary deformation, and then lowers the barrier through diffusive softening,” the paper’s authors wrote. “This insight has implications for optimal design of microfluidic and lab-on-a-chip devices based on stimuli-responsive smart polymers.”

In addition to those already mentioned, the research team included Professor Vladimir Tsukruk from the Georgia Tech School of Materials Science and Engineering and Rajesh Naik, Biotechnology Lead and Tech Advisor of the Nanostructured and Biological Materials Branch of the Air Force Research Laboratory (AFRL).

Here’s a link to and a citation for the paper,

Stick–slip water penetration into capillaries coated with swelling hydrogel by J. E. Silva, R. Geryak, D. A. Loney, P. A. Kottke, R. R. Naik, V. V. Tsukruk, and A. G. Fedorov. Soft Matter, 2015,11, 5933-5939 DOI: 10.1039/C5SM00660K First published online 23 Jun 2015

This paper is behind a paywall.

The perfect keyboard: it self-cleans, self-powers, and can identify its owner(s)

There’s a pretty nifty piece of technology being described in a Jan. 21, 2015 news item on Nanowerk, which focuses on the security aspects first (Note: A link has been removed),

In a novel twist in cybersecurity, scientists have developed a self-cleaning, self-powered smart keyboard that can identify computer users by the way they type. The device, reported in the journal ACS Nano (“Personalized Keystroke Dynamics for Self-Powered Human–Machine Interfacing”), could help prevent unauthorized users from gaining direct access to computers.

A Jan. 21, 2015 American Chemical Society (ACS) news release (also on EurekAlert), which originated the news item, continues with the keyboard’s security features before briefly mentioning the keyboard’s self-powering and self-cleaning capabilities,

Zhong Lin Wang and colleagues note that password protection is one of the most common ways we control who can log onto our computers — and see the private information we entrust to them. But as many recent high-profile stories about hacking and fraud have demonstrated, passwords are themselves vulnerable to theft. So Wang’s team set out to find a more secure but still cost-effective and user-friendly approach to safeguarding what’s on our computers.

The researchers developed a smart keyboard that can sense typing patterns — including the pressure applied to keys and speed — that can accurately distinguish one individual user from another. So even if someone knows your password, he or she cannot access your computer because that person types in a different way than you would. It also can harness the energy generated from typing to either power itself or another small device. And the special surface coating repels dirt and grime. The scientists conclude that the keyboard could provide an additional layer of protection to boost the security of our computer systems.

Here’s a link to and a citation for the paper,

Personalized Keystroke Dynamics for Self-Powered Human–Machine Interfacing by Jun Chen, Guang Zhu, Jin Yang, Qingshen Jing, Peng Bai, Weiqing Yang, Xuewei Qi, Yuanjie Su, and Zhong Lin Wang. ACS Nano, Article ASAP DOI: 10.1021/nn506832w Publication Date (Web): December 30, 2014

Copyright © 2014 American Chemical Society

This paper is behind a paywall. I did manage a peek at the paper and found that the keyboard is able to harvest the mechanical energy of typing and turn it into electricity so it can power itself. Self-cleaning is made possible by a nanostructure surface modification. An idle thought and a final comment. First, I wonder what happens if you want to or have to share your keyboard? Second, a Jan. 21, 2015 article about the intelligent keyboard by Luke Dormehl for Fast Company notes that the researchers are from the US and China and names two of the institutions involved in this collaboration, the Georgia Institute of Technology and the Beijing Institute of Nanoenergy and Nanosystems.

ETA Jan. 23, 2015: There’s a Georgia Institute of Technology Jan. 21, 2015 news release on EurekAlert about the intelligent keyboard which offers more technical details such as these,

Conventional keyboards record when a keystroke makes a mechanical contact, indicating the press of a specific key. The intelligent keyboard records each letter touched, but also captures information about the amount of force applied to the key and the length of time between one keystroke and the next. Such typing style is unique to individuals, and so could provide a new biometric for securing computers from unauthorized use.
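To make the biometric idea concrete, here’s a toy Python sketch of keystroke-dynamics verification: it reduces a typing sample to average dwell time and inter-key interval (stand-ins for the force and timing signals the device records) and accepts or rejects an attempt by comparison with an enrolled template. The features, threshold, and data are my own assumptions for illustration; the news release doesn’t describe the team’s actual classifier,

```python
from statistics import mean

# A typing sample: (key, press_time_s, release_time_s); data here is hypothetical
Sample = list[tuple[str, float, float]]

def features(sample: Sample) -> tuple[float, float]:
    """Reduce a sample to (mean dwell time, mean inter-key gap) in seconds."""
    dwells = [release - press for _, press, release in sample]
    gaps = [sample[i + 1][1] - sample[i][2] for i in range(len(sample) - 1)]
    return mean(dwells), mean(gaps)

def same_typist(template: Sample, attempt: Sample, tol: float = 0.05) -> bool:
    """Accept the attempt only if its timing features sit near the template's."""
    t_dwell, t_gap = features(template)
    a_dwell, a_gap = features(attempt)
    return abs(t_dwell - a_dwell) < tol and abs(t_gap - a_gap) < tol

enrolled = [("p", 0.00, 0.09), ("a", 0.21, 0.29), ("s", 0.40, 0.50)]
intruder = [("p", 0.00, 0.04), ("a", 0.09, 0.12), ("s", 0.16, 0.20)]
print(same_typist(enrolled, enrolled))  # True: a user matches their own template
print(same_typist(enrolled, intruder))  # False: a faster, lighter typing rhythm
```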

In addition to providing a small electrical current for registering the key presses, the new keyboard could also generate enough electricity to charge a small portable electronic device or power a transmitter to make the keyboard wireless.

An effect known as contact electrification generates current when the user’s fingertips touch a plastic material on which a layer of electrode material has been coated. Voltage is generated through the triboelectric and electrostatic induction effects. Using the triboelectric effect, a small charge can be produced whenever materials are brought into contact and then moved apart.

“Our skin is dielectric and we have electrostatic charges in our fingers,” Wang noted. “Anything we touch can become charged.”

Instead of individual mechanical keys as in traditional keyboards, Wang’s intelligent keyboard is made up of vertically-stacked transparent film materials. Researchers begin with a layer of polyethylene terephthalate between two layers of indium tin oxide (ITO) that form top and bottom electrodes.

Next, a layer of fluorinated ethylene propylene (FEP) is applied onto the ITO surface to serve as an electrification layer that generates triboelectric charges when touched by fingertips. FEP nanowire arrays are formed on the exposed FEP surface through reactive ion etching.

The keyboard’s operation is based on coupling between contact electrification and electrostatic induction, rather than the traditional mechanical switching. When a finger contacts the FEP, charge is transferred at the contact interface, injecting electrons from the skin into the material and creating a positive charge.

When the finger moves away, the negative charges on the FEP side induce positive charges on the top electrode, and equal amounts of negative charges on the bottom electrode. Consecutive keystrokes produce a periodic electrical field that drives reciprocating flows of electrons between the electrodes. Though eventually dissipating, the charges remain on the FEP surface for an extended period of time.
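For a sense of scale, a first-order model commonly used for contact-separation triboelectric generators puts the open-circuit voltage at V_oc = σx/ε0, where σ is the triboelectric surface charge density and x the gap between the charged surfaces. The sketch below plugs in assumed order-of-magnitude numbers (not figures from this paper) to show why even a light keystroke can produce a sizeable voltage,

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def open_circuit_voltage(sigma, gap):
    """First-order contact-separation model: V_oc = sigma * x / eps0."""
    return sigma * gap / EPS0

sigma = 10e-6             # assumed surface charge density, C/m^2 (~10 uC/m^2)
for gap_um in (1, 10, 100):
    v = open_circuit_voltage(sigma, gap_um * 1e-6)
    print(f"gap = {gap_um:3d} um -> V_oc ~ {v:7.1f} V")
```

Even a micrometre-scale separation yields on the order of a volt at this assumed charge density, which is why keystroke-scale motion suffices to register key presses and harvest some energy besides.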

Wang believes the new smart keyboard will be competitive with existing keyboards, in both cost and durability. The new device is based on inexpensive materials that are widely used in the electronics industry.

Bendable, stretchable, light-weight, and transparent: a new contender for the title of ‘thinnest electric generator’

An Oct. 15, 2014 Columbia University (New York, US) press release (also on EurekAlert), describes another contender for the title of the world’s thinnest electric generator,

Researchers from Columbia Engineering and the Georgia Institute of Technology [US] report today [Oct. 15, 2014] that they have made the first experimental observation of piezoelectricity and the piezotronic effect in an atomically thin material, molybdenum disulfide (MoS2), resulting in a unique electric generator and mechanosensation devices that are optically transparent, extremely light, and very bendable and stretchable.

In a paper published online October 15, 2014, in Nature, research groups from the two institutions demonstrate the mechanical generation of electricity from the two-dimensional (2D) MoS2 material. The piezoelectric effect in this material had previously been predicted theoretically.

Here’s a link to and a citation for the paper,

Piezoelectricity of single-atomic-layer MoS2 for energy conversion and piezotronics by Wenzhuo Wu, Lei Wang, Yilei Li, Fan Zhang, Long Lin, Simiao Niu, Daniel Chenet, Xian Zhang, Yufeng Hao, Tony F. Heinz, James Hone, & Zhong Lin Wang. Nature (2014) doi:10.1038/nature13792 Published online 15 October 2014

This paper is behind a paywall. There is a free preview available with ReadCube Access.

Getting back to the Columbia University press release, it offers a general description of piezoelectricity and some insight into this new research on molybdenum disulfide,

Piezoelectricity is a well-known effect in which stretching or compressing a material causes it to generate an electrical voltage (or the reverse, in which an applied voltage causes it to expand or contract). But for materials of only a few atomic thicknesses, no experimental observation of piezoelectricity has been made, until now. The observation reported today provides a new property for two-dimensional materials such as molybdenum disulfide, opening the potential for new types of mechanically controlled electronic devices.

“This material—just a single layer of atoms—could be made as a wearable device, perhaps integrated into clothing, to convert energy from your body movement to electricity and power wearable sensors or medical devices, or perhaps supply enough energy to charge your cell phone in your pocket,” says James Hone, professor of mechanical engineering at Columbia and co-leader of the research.

“Proof of the piezoelectric effect and piezotronic effect adds new functionalities to these two-dimensional materials,” says Zhong Lin Wang, Regents’ Professor in Georgia Tech’s School of Materials Science and Engineering and a co-leader of the research. “The materials community is excited about molybdenum disulfide, and demonstrating the piezoelectric effect in it adds a new facet to the material.”

Hone and his research group demonstrated in 2008 that graphene, a 2D form of carbon, is the strongest material. He and Lei Wang, a postdoctoral fellow in Hone’s group, have been actively exploring the novel properties of 2D materials like graphene and MoS2 as they are stretched and compressed.

Zhong Lin Wang and his research group pioneered the field of piezoelectric nanogenerators for converting mechanical energy into electricity. He and postdoctoral fellow Wenzhuo Wu are also developing piezotronic devices, which use piezoelectric charges to control the flow of current through the material just as gate voltages do in conventional three-terminal transistors.

There are two keys to using molybdenum disulfide for generating current: using an odd number of layers and flexing it in the proper direction. The material is highly polar, Zhong Lin Wang notes, but an even number of layers cancels out the piezoelectric effect. The material’s crystalline structure also is piezoelectric in only certain crystalline orientations.

For the Nature study, Hone’s team placed thin flakes of MoS2 on flexible plastic substrates and determined how their crystal lattices were oriented using optical techniques. They then patterned metal electrodes onto the flakes. In research done at Georgia Tech, Wang’s group installed measurement electrodes on samples provided by Hone’s group, then measured current flows as the samples were mechanically deformed. They monitored the conversion of mechanical to electrical energy, and observed voltage and current outputs.

The researchers also noted that the output voltage reversed sign when they changed the direction of applied strain, and that it disappeared in samples with an even number of atomic layers, confirming theoretical predictions published last year. The presence of the piezotronic effect in odd-layer MoS2 was also observed for the first time.

“What’s really interesting is we’ve now found that a material like MoS2, which is not piezoelectric in bulk form, can become piezoelectric when it is thinned down to a single atomic layer,” says Lei Wang.

To be piezoelectric, a material must break central symmetry. A single atomic layer of MoS2 has such a structure, and should be piezoelectric. However, in bulk MoS2, successive layers are oriented in opposite directions, and generate positive and negative voltages that cancel each other out and give zero net piezoelectric effect.
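That cancellation argument can be captured in a toy model: successive layers flip orientation, so their piezoelectric contributions alternate in sign and an even count sums to zero. A minimal sketch (the ±1 unit contribution is an arbitrary normalization of mine),

```python
def net_piezo_response(n_layers: int) -> int:
    """Toy model: layers alternate orientation, contributing +1, -1, +1, ..."""
    return sum((-1) ** i for i in range(n_layers))

for n in range(1, 7):
    print(f"{n} layer(s): net response = {net_piezo_response(n)}")
# Odd layer counts leave one uncancelled layer; even counts sum to zero,
# matching the vanishing piezoelectric output observed in even-layer samples.
```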

“This adds another member to the family of piezoelectric materials for functional devices,” says Wenzhuo Wu.

In fact, MoS2 is just one of a group of 2D semiconducting materials known as transition metal dichalcogenides, all of which are predicted to have similar piezoelectric properties. These are part of an even larger family of 2D materials whose piezoelectric properties remain unexplored. Importantly, as has been shown by Hone and his colleagues, 2D materials can be stretched much farther than conventional materials, particularly traditional ceramic piezoelectrics, which are quite brittle.

The research could open the door to development of new applications for the material and its unique properties.

“This is the first experimental work in this area and is an elegant example of how the world becomes different when the size of material shrinks to the scale of a single atom,” Hone adds. “With what we’re learning, we’re eager to build useful devices for all kinds of applications.”

Ultimately, Zhong Lin Wang notes, the research could lead to complete atomic-thick nanosystems that are self-powered by harvesting mechanical energy from the environment. This study also reveals the piezotronic effect in two-dimensional materials for the first time, which greatly expands the application of layered materials for human-machine interfacing, robotics, MEMS, and active flexible electronics.

I see there’s a reference in that last paragraph to “harvesting mechanical energy from the environment.” I’m not sure what they mean by that but I have written a few times about harvesting biomechanical energy. One of my earliest pieces is a July 12, 2010 post which features work by Zhong Lin Wang on harvesting energy from heart beats, blood flow, muscle stretching, or even irregular vibrations. One of my latest pieces is a Sept. 17, 2014 post about some work in Canada on harvesting energy from the jaw as you chew.

A final note, Dexter Johnson discusses this work in an Oct. 16, 2014 post on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

Cardiac pacemakers: Korea’s in vivo demonstration of a self-powered one* and UK’s breath-based approach

As best I can determine, the last mention of a self-powered pacemaker and the like on this blog was in a Nov. 5, 2012 posting (Developing self-powered batteries for pacemakers). This latest news from The Korea Advanced Institute of Science and Technology (KAIST) is, I believe, the first time that such a device has been successfully tested in vivo. From a June 23, 2014 news item on ScienceDaily,

As the number of pacemakers implanted each year reaches into the millions worldwide, improving the lifespan of pacemaker batteries has been of great concern for developers and manufacturers. Currently, pacemaker batteries last seven years on average, requiring replacements that may expose patients to the potential risks involved in repeated medical procedures.

A research team from the Korea Advanced Institute of Science and Technology (KAIST), headed by Professor Keon Jae Lee of the Department of Materials Science and Engineering at KAIST and Professor Boyoung Joung, M.D. of the Division of Cardiology at Severance Hospital of Yonsei University, has developed a self-powered artificial cardiac pacemaker that is operated semi-permanently by a flexible piezoelectric nanogenerator.

A June 23, 2014 KAIST news release on EurekAlert, which originated the news item, provides more details,

The artificial cardiac pacemaker is widely acknowledged as medical equipment that is integrated into the human body to regulate the heartbeats through electrical stimulation to contract the cardiac muscles of people who suffer from arrhythmia. However, repeated surgeries to replace pacemaker batteries have exposed elderly patients to health risks such as infections or severe bleeding during operations.

The team’s newly designed flexible piezoelectric nanogenerator directly stimulated a living rat’s heart using electrical energy converted from the small body movements of the rat. This technology could facilitate the use of self-powered flexible energy harvesters, not only prolonging the lifetime of cardiac pacemakers but also realizing real-time heart monitoring.

The research team fabricated high-performance flexible nanogenerators utilizing a bulk single-crystal PMN-PT thin film (iBULe Photonics). The harvested output reached up to 8.2 V and 0.22 mA under bending and pushing motions, values high enough to directly stimulate the rat’s heart.
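As a rough upper bound on the output those figures imply (my own back-of-the-envelope arithmetic; peak voltage and peak current generally don’t occur into the same load, so the real deliverable power is lower),

```python
v_peak = 8.2      # V, peak output voltage reported
i_peak = 0.22e-3  # A, peak output current reported
print(f"peak power bound ~ {v_peak * i_peak * 1e3:.1f} mW")  # ~1.8 mW
```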

Professor Keon Jae Lee said:

“For clinical purposes, the current achievement will benefit the development of self-powered cardiac pacemakers as well as prevent heart attacks via the real-time diagnosis of heart arrhythmia. In addition, the flexible piezoelectric nanogenerator could also be utilized as an electrical source for various implantable medical devices.”

This image illustrating a self-powered nanogenerator for a cardiac pacemaker has been provided by KAIST,

This picture shows that a self-powered cardiac pacemaker is enabled by a flexible piezoelectric energy harvester. Credit: KAIST

Here’s a link to and a citation for the paper,

Self-Powered Cardiac Pacemaker Enabled by Flexible Single Crystalline PMN-PT Piezoelectric Energy Harvester by Geon-Tae Hwang, Hyewon Park, Jeong-Ho Lee, SeKwon Oh, Kwi-Il Park, Myunghwan Byun, Hyelim Park, Gun Ahn, Chang Kyu Jeong, Kwangsoo No, HyukSang Kwon, Sang-Goo Lee, Boyoung Joung, and Keon Jae Lee. Advanced Materials DOI: 10.1002/adma.201400562
Article first published online: 17 APR 2014

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

There was a May 15, 2014 KAIST news release on EurekAlert announcing this same piece of research but from a technical perspective,

The energy efficiency of KAIST’s piezoelectric nanogenerator has increased by almost 40 times, one step closer toward the commercialization of flexible energy harvesters that can supply power infinitely to wearable, implantable electronic devices

NANOGENERATORS are innovative self-powered energy harvesters that convert kinetic energy created from vibrational and mechanical sources into electrical power, removing the need for external circuits or batteries for electronic devices. This innovation is vital in realizing sustainable energy generation in isolated, inaccessible, or indoor environments and even in the human body.

Nanogenerators, flexible and lightweight energy harvesters on plastic substrates, can scavenge energy from the extremely tiny movements of natural sources and the human body, such as wind, water flow, heartbeats, and diaphragm and respiration activity, to generate electrical signals. The generators are not only self-powered, flexible devices but can also provide permanent power sources to implantable biomedical devices, including cardiac pacemakers and deep brain stimulators.

However, poor energy efficiency and a complex fabrication process have posed challenges to the commercialization of nanogenerators. Keon Jae Lee, Associate Professor of Materials Science and Engineering at KAIST, and his colleagues have recently proposed a solution by developing a robust technique to transfer a high-quality piezoelectric thin film from bulk sapphire substrates to plastic substrates using laser lift-off (LLO).

Applying the inorganic-based laser lift-off (LLO) process, the research team produced large-area PZT thin-film nanogenerators on flexible substrates (2 cm x 2 cm).

“We were able to convert a high-output performance of ~250 V from the slight mechanical deformation of a single thin plastic substrate. Such output power is just enough to turn on 100 LED lights,” Keon Jae Lee explained.

The self-powered nanogenerators can also work with finger and foot motions. For example, under the irregular and slight bending motions of a human finger, the measured current signals reached ~8.7 μA. In addition, the piezoelectric nanogenerator has world-record power conversion efficiency, almost 40 times higher than previously reported results from similar research, addressing the drawbacks of fabrication complexity and low energy efficiency.

Lee further commented,

“Building on this concept, it is highly expected that tiny mechanical motions, including human body movements of muscle contraction and relaxation, can be readily converted into electrical energy and, furthermore, acted as eternal power sources.”

The research team is currently studying a method to build three-dimensional stacking of flexible piezoelectric thin films to enhance output power, as well as conducting a clinical experiment with a flexible nanogenerator.

In addition to the 2012 posting I mentioned earlier, there was also this July 12, 2010 posting which described research on harvesting energy from biomechanical movement (heart beat, blood flow, muscle stretching, or even irregular vibration) at the Georgia (US) Institute of Technology, where the lead researcher observed,

…  Wang [Professor Zhong Lin Wang at Georgia Tech] tells Nanowerk. “However, the applications of the nanogenerators under in vivo and in vitro environments are distinct. Some crucial problems need to be addressed before using these devices in the human body, such as biocompatibility and toxicity.”

Bravo to the KAIST researchers for getting this research to the in vivo testing stage.

Meanwhile, researchers at the University of Bristol and the University of Bath have received funding for a new approach to cardiac pacemakers, one designed with breathing in mind. From a June 24, 2014 news item on Azonano,

Pacemaker research from the Universities of Bath and Bristol could revolutionise the lives of over 750,000 people who live with heart failure in the UK.

The British Heart Foundation (BHF) is awarding funding to researchers developing a new type of heart pacemaker that modulates its pulses to match breathing rates.

A June 23, 2014 University of Bristol press release, which originated the news item, provides some context,

During 2012-13 in England, more than 40,000 patients had a pacemaker fitted.

Currently, the pulses from pacemakers are set at a constant rate when fitted which doesn’t replicate the natural beating of the human heart.

The normal healthy variation in heart rate during breathing is lost in cardiovascular disease and is an indicator for sleep apnoea, cardiac arrhythmia, hypertension, heart failure and sudden cardiac death.

The device is then briefly described (from the press release),

The novel device being developed by scientists at the Universities of Bath and Bristol uses synthetic neural technology to restore this natural variation of heart rate with lung inflation, and is targeted towards patients with heart failure.

The device works by saving the heart energy, improving its pumping efficiency and enhancing blood flow to the heart muscle itself.  Pre-clinical trials suggest the device gives a 25 per cent increase in the pumping ability, which is expected to extend the life of patients with heart failure.
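Conceptually, the device restores respiratory sinus arrhythmia: instead of a fixed pulse rate, pacing speeds up with lung inflation and slows on exhalation. Here’s a toy model of that modulation (all numbers are illustrative assumptions of mine, not device parameters),

```python
import math

def paced_rate(t, base_bpm=60.0, modulation_bpm=6.0, breath_period_s=4.0):
    """Toy respiratory sinus arrhythmia: pacing rate modulated in phase with
    breathing -- faster during inhalation, slower during exhalation."""
    phase = 2 * math.pi * t / breath_period_s
    return base_bpm + modulation_bpm * math.sin(phase)

for t in (0.0, 1.0, 2.0, 3.0):
    print(f"t = {t:.0f} s -> pacing rate {paced_rate(t):5.1f} bpm")
```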

One aim of the project is to miniaturise the pacemaker device to the size of a postage stamp and to develop an implant that could be used in humans within five years.

Dr Alain Nogaret, Senior Lecturer in Physics at the University of Bath, explained: “This is a multidisciplinary project with strong translational value. By combining fundamental science and nanotechnology we will be able to deliver a unique treatment for heart failure which is not currently addressed by mainstream cardiac rhythm management devices.”

The research team has already patented the technology and is working with NHS consultants at the Bristol Heart Institute, the University of California at San Diego and the University of Auckland. [emphasis mine]

Professor Julian Paton, from the University of Bristol, added: “We’ve known for almost 80 years that the heart beat is modulated by breathing but we have never fully understood the benefits this brings. The generous new funding from the BHF will allow us to reinstate this natural occurring synchrony between heart rate and breathing and understand how it brings therapy to hearts that are failing.”

Professor Jeremy Pearson, Associate Medical Director at the BHF, said: “This study is a novel and exciting first step towards a new generation of smarter pacemakers. More and more people are living with heart failure so our funding in this area is crucial. The work from this innovative research team could have a real impact on heart failure patients’ lives in the future.”

Given some current events (‘Tesla opens up its patents’, Mike Masnick’s June 12, 2014 posting on Techdirt), I wonder what the situation will be vis à vis patents by the time this device gets to market.

* ‘one’ added to title on Aug. 13, 2014.

Roadmap to neuromorphic engineering (digital and analog) for the creation of artificial brains *from the Georgia (US) Institute of Technology

While I didn’t mention neuromorphic engineering in my April 16, 2014 posting which focused on the more general aspect of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie ‘possible’ (assuming very large stretches of imagination bringing us into the realm implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,

Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they canbut [sic] if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.

In the film, Caster’s intelligence/consciousness is uploaded to the computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computer or artificial intelligence more humanlike is called neuromorphic engineering and according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,

In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.

A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.

The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”

An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,

Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.

Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.

In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.

Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”

Hasler and Marr’s 29-page paper traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.

For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.

“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”

Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.

Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”

Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”

Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain — seems technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”

Here’s a link to and a citation for the roadmap,

Finding a roadmap to achieve large neuromorphic hardware systems by Jennifer Hasler and Bo Marr.  Front. Neurosci. (Frontiers in Neuroscience), 10 September 2013 | doi: 10.3389/fnins.2013.00118

This is an open access article (at least, the HTML version is).

I have looked at Hasler’s roadmap and it provides a good and readable overview (even for an amateur like me; Note: you do need some tolerance for ‘not knowing’) of the state of neuromorphic engineering’s problems, and suggestions for overcoming them. Here’s a description of a human brain and its power requirements as compared to a computer’s (from the roadmap),

One of the amazing thing about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there is 10^12 neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10^15 MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10^11 neurons and 10,000 synapses is expected.

Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies, as well as takes advantage of the capabilities of IC technology that are currently visible, it will not be successful.
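The power-budget numbers in that excerpt hang together, as a quick check shows (just the arithmetic, using the roadmap’s own figures),

```python
brain_power_w = 20.0
neurons = 1e12
print(f"power per neuron ~ {brain_power_w / neurons * 1e12:.0f} pW")  # 20 pW

total_mac_per_s = 100_000 * 1e15  # 100,000 PMAC for the whole neural structure
print(f"efficiency ~ {total_mac_per_s / brain_power_w / 1e15:.0f} PMAC/W")  # 5000
```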

One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),

The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties to get to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering, the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).

I notice that the reference to the University of Michigan is relatively neutral in tone and that the memristor does not figure substantively in Hasler’s roadmap.

Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),

Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how will the technology developed for these large systems impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20 M in current prices, which although possible for large users, would not be common to be found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into commercial environment allows us to ride this powerful economic “engine” rather than pull.

In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.

The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which have included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality requiring ultra-low power levels. Further, we see in companies like Audience there is some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.

One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.

We expect progress in these neuromorphic systems and that should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at a power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, both in power, area, and communication, one can expect to see more and more of these issues going forward.

We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10^7 to 10^9 neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.

In the following subsections, we overview general guidelines to consider when considering using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.

I have a casual observation to make. While the authors of the roadmap came to this conclusion, “This study concludes that useful neural computation machines based on biological principles at the size of the human brain seems technically within our grasp,” they’re also leaving themselves some wiggle room because the truth is no one knows whether copying a human brain with circuits and various devices will lead to ‘thinking’ as we understand the concept.

For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.

One last observation about the movie ‘Transcendence’, has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?

* Space inserted between ‘brains’ and ‘from’ in head on April 21, 2014.

Good lignin, bad lignin: Florida researchers use plant waste to create lignin nanotubes while researchers in British Columbia develop trees with less lignin

An April 4, 2014 news item on Azonano describes some nanotube research at the University of Florida that reaches past carbon to a new kind of nanotube,

Researchers with the University of Florida’s [UF] Institute of Food and Agricultural Sciences took what some would consider garbage and made a remarkable scientific tool, one that could someday help to correct genetic disorders or treat cancer without chemotherapy’s nasty side effects.

Wilfred Vermerris, an associate professor in UF’s department of microbiology and cell science, and Elena Ten, a postdoctoral research associate, created from plant waste a novel nanotube, one that is much more flexible than rigid carbon nanotubes currently used. The researchers say the lignin nanotubes – about 500 times smaller than a human eyelash – can deliver DNA directly into the nucleus of human cells in tissue culture, where this DNA could then correct genetic conditions. Experiments with DNA injection are currently being done with carbon nanotubes, as well.

“That was a surprising result,” Vermerris said. “If you can do this in actual human beings you could fix defective genes that cause disease symptoms and replace them with functional DNA delivered with these nanotubes.”

An April 3, 2014 University of Florida’s Institute of Food and Agricultural Sciences news release, which originated the news item, describes the lignin nanotubes (LNTs) and future applications in more detail,

The nanotube is made up of lignin from plant material obtained from a UF biofuel pilot facility in Perry, Fla. Lignin is an integral part of the secondary cell walls of plants and enables water movement from the roots to the leaves, but it is not used to make biofuels and would otherwise be burned to generate heat or electricity at the biofuel plant. The lignin nanotubes can be made from a variety of plant residues, including sorghum, poplar, loblolly pine and sugar cane. [emphasis mine]

The researchers first tested to see if the nanotubes were toxic to human cells and were surprised to find that they were less toxic than carbon nanotubes. Thus, they could deliver a higher dose of medicine to the human cell tissue. Then they tested whether the nanotubes could deliver plasmid DNA to the same cells, and that was successful, too. A plasmid is a small DNA molecule that is physically separate from, and can replicate independently of, chromosomal DNA within a cell.

“It’s not a very smooth road because we had to try different experiments to confirm the results,” Ten said. “But it was very fruitful.”

In cases of genetic disorders, the nanotube would be loaded with a functioning copy of a gene, and injected into the body, where it would target the affected tissue, which then makes the missing protein and corrects the genetic disorder.

Although Vermerris cautioned that treatment in humans is many years away, the conditions that these gene-carrying nanotubes could correct include cystic fibrosis and muscular dystrophy. But, he added, patients would have to take the corrective DNA via nanotubes on a continuing basis.

Another application under consideration is to use the lignin nanotubes for the delivery of chemotherapy drugs in cancer patients. The nanotubes would ensure the drugs only get to the tumor without affecting healthy tissues.

Vermerris said they created different types of nanotubes, depending on the experiment. They could also adapt nanotubes to a patient’s specific needs, a process called customization.

“You can think about it as a chest of drawers and, depending on the application, you open one drawer or use materials from a different drawer to get things just right for your specific application,” he said.  “It’s not very difficult to do the customization.”

The next step in the research process is for Vermerris and Ten to begin experiments on mice. They are in the application process for those experiments, which would take several years to complete.  If those are successful, permits would need to be obtained for their medical school colleagues to conduct research on human patients, with Vermerris and Ten providing the nanotubes for that research.

“We are a long way from that point,” Vermerris said. “That’s the optimistic long-term trajectory.”

I hope they have good luck with this work. I have emphasized the plant waste the University of Florida scientists studied due to the inclusion of poplar, which is featured in the University of British Columbia research work also being mentioned in this post.

Getting back to Florida for a moment, here’s a link to and a citation for the paper,

Lignin Nanotubes As Vehicles for Gene Delivery into Human Cells by Elena Ten, Chen Ling, Yuan Wang, Arun Srivastava, Luisa Amelia Dempere, and Wilfred Vermerris. Biomacromolecules, 2014, 15 (1), pp 327–338 DOI: 10.1021/bm401555p Publication Date (Web): December 5, 2013
Copyright © 2013 American Chemical Society

This is an open access paper.

Meanwhile, researchers at the University of British Columbia (UBC) are trying to limit the amount of lignin in trees (specifically poplars, which are not mentioned in this excerpt but in the next). From an April 3, 2014 UBC news release,

Researchers have genetically engineered trees that will be easier to break down to produce paper and biofuel, a breakthrough that will mean using fewer chemicals, less energy and creating fewer environmental pollutants.

“One of the largest impediments for the pulp and paper industry as well as the emerging biofuel industry is a polymer found in wood known as lignin,” says Shawn Mansfield, a professor of Wood Science at the University of British Columbia.

Lignin makes up a substantial portion of the cell wall of most plants and is a processing impediment for pulp, paper and biofuel. Currently the lignin must be removed, a process that requires significant chemicals and energy and causes undesirable waste.

Researchers used genetic engineering to modify the lignin to make it easier to break down without adversely affecting the tree’s strength.

“We’re designing trees to be processed with less energy and fewer chemicals, and ultimately recovering more wood carbohydrate than is currently possible,” says Mansfield.

Researchers had previously tried to tackle this problem by reducing the quantity of lignin in trees by suppressing genes, which often resulted in trees that were stunted in growth or susceptible to wind, snow, pests and pathogens.

“It is truly a unique achievement to design trees for deconstruction while maintaining their growth potential and strength.”

The study, a collaboration between researchers at the University of British Columbia, the University of Wisconsin-Madison and Michigan State University that was funded by the Great Lakes Bioenergy Research Center, was published today in Science.

Here’s more about lignin and how a decrease would free up more material for biofuels in a more environmentally sustainable fashion, from the news release,

The structure of lignin naturally contains ether bonds that are difficult to degrade. Researchers used genetic engineering to introduce ester bonds into the lignin backbone that are easier to break down chemically.

The new technique means that the lignin may be recovered more effectively and used in other applications, such as adhesives, insulation, carbon fibres and paint additives.

Genetic modification

The genetic modification strategy employed in this study could also be applied to other plants, such as grasses, to create a new kind of fuel to replace petroleum.

Genetic modification can be a contentious issue, but there are ways to ensure that the genes do not spread to the forest. These techniques include growing crops away from native stands so cross-pollination isn’t possible; introducing genes to make both the male and female trees or plants sterile; and harvesting trees before they reach reproductive maturity.

In the future, genetically modified trees could be planted like an agricultural crop, not in our native forests. Poplar is a potential energy crop for the biofuel industry because the tree grows quickly and on marginal farmland. [emphasis mine] Lignin makes up 20 to 25 per cent of the tree.

“We’re a petroleum reliant society,” says Mansfield. “We rely on the same resource for everything from smartphones to gasoline. We need to diversify and take the pressure off of fossil fuels. Trees and plants have enormous potential to contribute carbon to our society.”

As noted earlier, the researchers in Florida mention poplars in their paper (Note: Links have been removed),

Gymnosperms such as loblolly pine (Pinus taeda L.) contain lignin that is composed almost exclusively of G-residues, whereas lignin from angiosperm dicots, including poplar (Populus spp.) contains a mixture of G- and S-residues. [emphasis mine] Due to the radical-mediated addition of monolignols to the growing lignin polymer, lignin contains a variety of interunit bonds, including aryl–aryl, aryl–alkyl, and alkyl–alkyl bonds.(3) This feature, combined with the association between lignin and cell-wall polysaccharides, which involves both physical and chemical interactions, make the isolation of lignin from plant cell walls challenging. Various isolation methods exist, each relying on breaking certain types of chemical bonds within the lignin, and derivatizations to solubilize the resulting fragments.(5) Several of these methods are used on a large scale in pulp and paper mills and biorefineries, where lignin needs to be removed from woody biomass and crop residues(6) in order to use the cellulose for the production of paper, biofuels, and biobased polymers. The lignin is present in the waste stream and has limited intrinsic economic value.(7)

Since hydroxyl and carboxyl groups in lignin facilitate functionalization, its compatibility with natural and synthetic polymers for different commercial applications have been extensively studied.(8-12) One of the promising directions toward the cost reduction associated with biofuel production is the use of lignin for low-cost carbon fibers.(13) Other recent studies reported development and characterization of lignin nanocomposites for multiple value-added applications. For example, cellulose nanocrystals/lignin nanocomposites were developed for improved optical, antireflective properties(14, 15) and thermal stability of the nanocomposites.(16) [emphasis mine] Model ultrathin bicomponent films prepared from cellulose and lignin derivatives were used to monitor enzyme binding and cellulolytic reactions for sensing platform applications.(17) Enzymes/“synthetic lignin” (dehydrogenation polymer (DHP)) interactions were also investigated to understand how lignin impairs enzymatic hydrolysis during the biomass conversion processes.(18)

The synthesis of lignin nanotubes and nanowires was based on cross-linking a lignin base layer to an alumina membrane, followed by peroxidase-mediated addition of DHP and subsequent dissolution of the membrane in phosphoric acid.(1) Depending upon monomers used for the deposition of DHP, solid nanowires, or hollow nanotubes could be manufactured and easily functionalized due to the presence of many reactive groups. Due to their autofluorescence, lignin nanotubes permit label-free detection under UV radiation.(1) These features make lignin nanotubes suitable candidates for numerous biomedical applications, such as the delivery of therapeutic agents and DNA to specific cells.

The synthesis of LNTs in a sacrificial template membrane is not limited to a single source of lignin or a single lignin isolation procedure. Dimensions of the LNTs and their cytotoxicity to HeLa cells appear to be determined primarily by the lignin isolation procedure, whereas the transfection efficiency is also influenced by the source of the lignin (plant species and genotype). This means that LNTs can be tailored to the application for which they are intended. [emphasis mine] The ability to design LNTs for specific purposes will benefit from a more thorough understanding of the relationship between the structure and the MW of the lignin used to prepare the LNTs, the nanomechanical properties, and the surface characteristics.

We have shown that DNA is physically associated with the LNTs and that the LNTs enter the cytosol, and in some case the nucleus. The LNTs made from NaOH-extracted lignin are of special interest, as they were the shortest in length, substantially reduced HeLa cell viability at levels above approximately 50 mg/mL, and, in the case of pine and poplar, were the most effective in the transfection [penetrating the cell with a bacterial plasmid to leave genetic material in this case] experiments. [emphasis mine]
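If it helps to see the ‘chest of drawers’ customization idea in a more schematic form, here’s a toy sketch in Python. The pairings are illustrative placeholders except for the NaOH-extracted pine and poplar lignin, which the excerpt singles out as most effective for transfection.

```python
# Toy sketch of tailoring lignin nanotubes (LNTs) to an application
# by choosing a lignin source and isolation procedure, per the
# excerpt: isolation drives dimensions and cytotoxicity, while the
# source also affects transfection efficiency. Pairings marked
# "hypothetical" are illustrative only.

LNT_RECIPES = {
    # application: (lignin source, isolation method)
    "gene_delivery": ("pine or poplar", "NaOH extraction"),   # per the paper
    "drug_delivery": ("sorghum", "organosolv"),               # hypothetical
    "label_free_imaging": ("sugar cane", "NaOH extraction"),  # hypothetical
}

def pick_recipe(application: str) -> tuple:
    """Return the (lignin source, isolation method) 'drawer'."""
    return LNT_RECIPES[application]

print(pick_recipe("gene_delivery"))
```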

As I see it, there are environmental and energy issues with extracting lignin, while some very promising medical applications seem possible with lignin ‘waste’. These two research efforts aren’t necessarily antithetical, but they do raise some very interesting questions about how we approach our use of resources and future policies.

ETA May 16, 2014: The beat goes on as the Georgia Institute of Technology (US) issues a roadmap for making money from lignin. From a Georgia Tech May 15, 2014 news release on EurekAlert,

When making cellulosic ethanol from plants, one problem is what to do with a woody agricultural waste product called lignin. The old adage in the pulp industry has been that one can make anything from lignin except money.

A new review article in the journal Science points the way toward a future where lignin is transformed from a waste product into valuable materials such as low-cost carbon fiber for cars or bio-based plastics. Using lignin in this way would create new markets for the forest products industry and make ethanol-to-fuel conversion more cost-effective.

“We’ve developed a roadmap for integrating genetic engineering with analytical chemistry tools to tailor the structure of lignin and its isolation so it can be used for materials, chemicals and fuels,” said Arthur Ragauskas, a professor in the School of Chemistry and Biochemistry at the Georgia Institute of Technology. Ragauskas is also part of the Institute for Paper Science and Technology at Georgia Tech.

The roadmap was published May 15 [2014] in the journal Science. …

Here’s a link to and citation for the ‘roadmap’,

Lignin Valorization: Improving Lignin Processing in the Biorefinery by Arthur J. Ragauskas, Gregg T. Beckham, Mary J. Biddy, Richard Chandra, Fang Chen, Mark F. Davis, Brian H. Davison, Richard A. Dixon, Paul Gilna, Martin Keller, Paul Langan, Amit K. Naskar, Jack N. Saddler, Timothy J. Tschaplinski, Gerald A. Tuskan, and Charles E. Wyman. Science 16 May 2014: Vol. 344 no. 6185 DOI: 10.1126/science.1246843

This paper is behind a paywall.

Mini Lisa made possible by ThermoChemical NanoLithography

One of the world’s most recognizable images has undergone a makeover of sorts. According to an Aug. 6, 2013 news item on Azonano, researchers at the Georgia Institute of Technology (Georgia Tech) in the US have created a mini Mona Lisa,

The world’s most famous painting has now been created on the world’s smallest canvas. Researchers at the Georgia Institute of Technology have “painted” the Mona Lisa on a substrate surface approximately 30 microns in width – or one-third the width of a human hair.

The team’s creation, the “Mini Lisa,” demonstrates a technique that could potentially be used to achieve nanomanufacturing of devices because the team was able to vary the surface concentration of molecules on such short-length scales.

The Aug. 5, 2013 Georgia Tech news release, which originated the news item, provides more technical details,

The image was created with an atomic force microscope and a process called ThermoChemical NanoLithography (TCNL). Going pixel by pixel, the Georgia Tech team positioned a heated cantilever at the substrate surface to create a series of confined nanoscale chemical reactions. By varying only the heat at each location, Ph.D. Candidate Keith Carroll controlled the number of new molecules that were created. The greater the heat, the greater the local concentration. More heat produced the lighter shades of gray, as seen on the Mini Lisa’s forehead and hands. Less heat produced the darker shades in her dress and hair seen when the molecular canvas is visualized using fluorescent dye. Each pixel is spaced by 125 nanometers.

“By tuning the temperature, our team manipulated chemical reactions to yield variations in the molecular concentrations on the nanoscale,” said Jennifer Curtis, an associate professor in the School of Physics and the study’s lead author. “The spatial confinement of these reactions provides the precision required to generate complex chemical images like the Mini Lisa.”

Production of chemical concentration gradients and variations on the sub-micrometer scale are difficult to achieve with other techniques, despite a wide range of applications the process could allow. The Georgia Tech TCNL research collaboration, which includes associate professor Elisa Riedo and Regents Professor Seth Marder, produced chemical gradients of amine groups, but expects that the process could be extended for use with other materials.

“We envision TCNL will be capable of patterning gradients of other physical or chemical properties, such as conductivity of graphene,” Curtis said. “This technique should enable a wide range of previously inaccessible experiments and applications in fields as diverse as nanoelectronics, optoelectronics and bioengineering.”

Another advantage, according to Curtis, is that atomic force microscopes are fairly common and the thermal control is relatively straightforward, making the approach accessible to both academic and industrial laboratories.  To facilitate their vision of nano-manufacturing devices with TCNL, the Georgia Tech team has recently integrated nanoarrays of five thermal cantilevers to accelerate the pace of production. Because the technique provides high spatial resolutions at a speed faster than other existing methods, even with a single cantilever, Curtis is hopeful that TCNL will provide the option of nanoscale printing integrated with the fabrication of large quantities of surfaces or everyday materials whose dimensions are more than one billion times larger than the TCNL features themselves.
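For the technically inclined, the pixel-by-pixel logic described in the release can be sketched in a few lines of Python. The canvas width and 125-nanometer pixel pitch come from the news release; the temperature range and the linear gray-to-heat mapping are my own illustrative assumptions, not values from the paper.

```python
# Toy sketch of the TCNL raster logic: map each gray level of a
# source image to a cantilever temperature set-point, pixel by pixel.
# The ~30 micron canvas and 125 nm pixel pitch are from the news
# release; the temperatures and linear mapping are assumptions.

CANVAS_WIDTH_NM = 30_000
PIXEL_PITCH_NM = 125
N_PIXELS = CANVAS_WIDTH_NM // PIXEL_PITCH_NM    # 240 pixels per row

T_MIN_C, T_MAX_C = 150.0, 300.0   # hypothetical tip temperatures

def gray_to_temperature(gray: float) -> float:
    """Map gray in [0, 1] (0 = dark, 1 = light) to a set-point.

    More heat -> higher local molecular concentration -> lighter
    shade, so lighter pixels get hotter set-points.
    """
    return T_MIN_C + gray * (T_MAX_C - T_MIN_C)

# Raster a single row of a left-to-right gradient test pattern.
row = [x / (N_PIXELS - 1) for x in range(N_PIXELS)]
setpoints = [gray_to_temperature(g) for g in row]
print(N_PIXELS, "pixels; set-points span",
      f"{setpoints[0]:.0f}-{setpoints[-1]:.0f} C")
```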

Here’s an image of the AFM and the cantilever used in the TCNL process to create the ‘Mini Lisa’,

Atomic force microscope (AFM) modified with a thermal cantilever. The AFM scanner allows for precise positioning on the nanoscale while the thermal cantilever induces local nanoscale chemical reactions. Courtesy Georgia Tech

Finally, the ‘Mini Lisa’,

Georgia Tech researchers have created the “Mini Lisa” on a substrate surface approximately 30 microns in width. The image demonstrates a technique that could potentially be used to achieve nano-manufacturing of devices because the team was able to vary the surface concentration of molecules on such short length scales. Courtesy Georgia Tech

For those who can’t get enough of the ‘Mini Lisa’ or TCNL, here’s a link to and a citation for the research team’s published paper,

Fabricating Nanoscale Chemical Gradients with ThermoChemical NanoLithography by Keith M. Carroll, Anthony J. Giordano, Debin Wang, Vamsi K. Kodali, Jan Scrimgeour, William P. King, Seth R. Marder, Elisa Riedo, and Jennifer E. Curtis. Langmuir, 2013, 29 (27), pp 8675–8682 DOI: 10.1021/la400996w Publication Date (Web): June 10, 2013
Copyright © 2013 American Chemical Society

This article is behind a paywall.

Solar cells made even more leaflike with inclusion of nanocellulose fibers

Researchers at the Georgia Institute of Technology (Georgia Tech) in the US and Purdue University (Indiana) have used cellulose nanocrystals (CNC), which are also known as nanocrystalline cellulose (NCC), to create efficient solar cells that can be recycled. From the Mar. 26, 2013 news item on Nanowerk,

Georgia Institute of Technology and Purdue University researchers have developed efficient solar cells using natural substrates derived from plants such as trees. Just as importantly, by fabricating them on cellulose nanocrystal (CNC) substrates, the solar cells can be quickly recycled in water at the end of their lifecycle.

The Georgia Tech Mar. 25, 2013 news release, which originated the news item, provides more details,

The researchers report that the organic solar cells reach a power conversion efficiency of 2.7 percent, an unprecedented figure for cells on substrates derived from renewable raw materials. The CNC substrates on which the solar cells are fabricated are optically transparent, enabling light to pass through them before being absorbed by a very thin layer of an organic semiconductor. During the recycling process, the solar cells are simply immersed in water at room temperature. Within only minutes, the CNC substrate dissolves and the solar cell can be separated easily into its major components.

Georgia Tech College of Engineering Professor Bernard Kippelen led the study and says his team’s project opens the door for a truly recyclable, sustainable and renewable solar cell technology.

“The development and performance of organic substrates in solar technology continues to improve, providing engineers with a good indication of future applications,” said Kippelen, who is also the director of Georgia Tech’s Center for Organic Photonics and Electronics (COPE). “But organic solar cells must be recyclable. Otherwise we are simply solving one problem, less dependence on fossil fuels, while creating another, a technology that produces energy from renewable sources but is not disposable at the end of its lifecycle.”

To date, organic solar cells have been typically fabricated on glass or plastic. Neither is easily recyclable, and petroleum-based substrates are not very eco-friendly. For instance, if cells fabricated on glass were to break during manufacturing or installation, the useless materials would be difficult to dispose of. Paper substrates are better for the environment, but have shown limited performance because of high surface roughness or porosity. However, cellulose nanomaterials made from wood are green, renewable and sustainable. The substrates have a low surface roughness of only about two nanometers.

“Our next steps will be to work toward improving the power conversion efficiency over 10 percent, levels similar to solar cells fabricated on glass or petroleum-based substrates,” said Kippelen. The group plans to achieve this by optimizing the optical properties of the solar cell’s electrode.
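Since power conversion efficiency figures come up repeatedly in solar cell coverage (and I find them confusing at the best of times), here’s the standard definition in a small Python sketch. The current-voltage values are hypothetical, chosen only to land near the 2.7 per cent reported for the CNC cells.

```python
# Standard solar-cell power conversion efficiency (PCE):
#   PCE = (Jsc * Voc * FF) / P_in
# under AM1.5G illumination (P_in = 100 mW/cm^2). The example values
# below are hypothetical, chosen only to land near the reported 2.7%.

def pce(jsc_ma_cm2: float, voc_v: float, ff: float,
        p_in_mw_cm2: float = 100.0) -> float:
    """Return PCE as a fraction, given Jsc (mA/cm^2), Voc (V), FF."""
    return (jsc_ma_cm2 * voc_v * ff) / p_in_mw_cm2

print(f"PCE = {pce(7.5, 0.6, 0.6):.1%}")   # hypothetical inputs -> 2.7%
```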

The news release also notes the impact that using cellulose nanomaterials could have economically,

There’s also another positive impact of using natural products to create cellulose nanomaterials. The nation’s forest product industry projects that tens of millions of tons of them could be produced once large-scale production begins, potentially in the next five years.

One might almost  suspect that the forest products industry is experiencing financial difficulty.

The researchers’ paper was published by Scientific Reports, an open access journal from the Nature Publishing Group,

Recyclable organic solar cells on cellulose nanocrystal substrates by Yinhua Zhou, Canek Fuentes-Hernandez, Talha M. Khan, Jen-Chieh Liu, James Hsu, Jae Won Shim, Amir Dindar, Jeffrey P. Youngblood, Robert J. Moon, & Bernard Kippelen. Scientific Reports  3, Article number: 1536  doi:10.1038/srep01536 Published 25 March 2013

In closing, the news release notes that a provisional patent has been filed at the US Patent Office. And one final note: I have previously commented on how confusing the reported power conversion rates are. You’ll find a recent comment in my Mar. 8, 2013 posting about Ted Sargent’s work with colloidal quantum dots and solar cells.

Samsung ‘GROs’ graphene-based micro-antennas and a brief bit about the business of nanotechnology

A Feb. 22, 2013 news item on Nanowerk highlights a Samsung university research grant programme, the Global Research Outreach (GRO) programme, which announced funding for graphene-based micro-antennas,

The Graphene-Enabled Wireless Communication project, one of the award-winning proposals under the Samsung Global Research Outreach (GRO) programme, aims to use graphene antennas to implement wireless communication over very short distances (no more than a centimetre) with high-capacity information transmission (tens or hundreds of gigabits per second). Antennas made of graphene could radiate electromagnetic waves in the terahertz band and would allow for high-speed information transmission. Thanks to the unique properties of this nanomaterial, the new graphene-based antenna technology would also make it possible to manufacture antennas a thousand times smaller than those currently used.

The GRO programme—an annual call for research proposals by the Samsung Advanced Institute of Technology (Seoul, South Korea)—has provided the UPC-led project with US$120,000 in financial support.

The Graphene-Enabled Wireless Communication project is a joint project (from the news item; Note: A link has been removed),

“Graphene-Enabled Wireless Communications” – a proposal submitted by an interdepartmental team based at the Universitat Politècnica de Catalunya, BarcelonaTech (UPC) and the Georgia Institute of Technology (Georgia Tech)—will receive US$120,000 to develop micrometre-scale graphene antennas capable of transmitting information at a high speed over very short distances. The project will be carried out in the coming months.

There’s more about the Graphene-Enabled Wireless Communication project here,

A remarkably promising application of graphene is that of Graphene-enabled Wireless Communications (GWC). GWC advocates the use of graphene-based plasmonic antennas (graphennas; see Fig. 1) whose plasmonic effects allow them to radiate EM waves in the terahertz band (0.1–10 THz). Moreover, preliminary results suggest that this frequency band is up to two orders of magnitude below the optical frequencies at which metallic antennas of the same size resonate, thereby enhancing the transmission range of graphene-based antennas and lowering the requirements on the corresponding transceivers. In short, graphene enables the implementation of nano-antennas just a few micrometers in size that are not doable with traditional metallic materials.

Thanks to both the reduced size and unique radiation capabilities of graphennas, GWC may represent a breakthrough in the ultra-short range communications research area. In this project we will study the application of GWC within the scenario of off-chip communication, which includes communication between different chips of a given device, e.g. a cell phone.
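To see why graphene shrinks the antenna, here’s a rough half-wave resonance estimate in Python. The 1/100 plasmon-velocity factor is my stand-in for the ‘two orders of magnitude’ claim in the excerpt, not a measured value.

```python
# Rough half-wave dipole resonance, f = v / (2 * L), illustrating the
# excerpt's scaling claim: plasmons in graphene travel far slower than
# light, so a same-size graphenna resonates ~two orders of magnitude
# lower in frequency. The 1/100 velocity factor is an assumption.

C = 3.0e8   # speed of light, m/s

def half_wave_resonance(length_m: float, wave_speed: float) -> float:
    """Resonant frequency of a simple half-wave dipole."""
    return wave_speed / (2.0 * length_m)

L = 1e-6    # a 1-micrometre antenna
f_metal = half_wave_resonance(L, C)            # ~150 THz (optical)
f_graphenna = half_wave_resonance(L, C / 100)  # ~1.5 THz (THz band)

print(f"metal: {f_metal / 1e12:.0f} THz;",
      f"graphenna: {f_graphenna / 1e12:.1f} THz")
```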

A new term, graphenna, appears to have been coined. The news item goes on to offer more detail about the project and about the number of collaborating institutions,

The first stage of the project, launched in October 2012, focuses on the theoretical foundations of wireless communications over short distances using graphene antennas. In particular, the group is analysing the behaviour of electromagnetic waves in the terahertz band for very short distances, and investigating how coding and modulation schemes can be adapted to achieve high transmission rates while maintaining low power consumption.

The group believes the main benefits of the project in the medium term will derive from its application for internal communication in multicore processors. Processors of this type have a number of sub-processors that share and execute tasks in parallel. The application of wireless communication in this area will make it possible to integrate thousands of sub-processors within a single processor, which is not feasible with current communication systems.

The results of the project will lead to an increase in the computational performance of these devices. This improvement would allow large amounts of data to be processed at very high speed, which would be very useful for streamlining data management at processing centres (“big data”) used, for example, in systems like Facebook and Google. The project, which builds on previous results obtained with the collaboration of the University of Wuppertal in Germany, the Royal Institute of Technology (KTH) in Sweden, and Georgia Tech in the United States, is expected to yield its first results in April 2013.

The project is being carried out by the NaNoNetworking Centre in Catalonia (N3Cat), a network formed at the initiative of researchers with the UPC’s departments of Electronic Engineering and Computer Architecture, together with colleagues at Georgia Tech.
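As a back-of-envelope illustration of why on-chip wireless appeals for multicore communication, here’s a toy Python comparison of average hop counts in a wired 2D mesh versus a one-hop wireless broadcast. It assumes a square mesh and is not based on any specific architecture from the project.

```python
# Toy motivation for on-chip wireless links: in a wired 2D mesh
# network-on-chip the average hop count grows with core count, while
# a broadcast over a shared wireless channel reaches every core in
# one hop. Illustrative only; not from the UPC/Georgia Tech project.

import math

def mesh_avg_hops(n_cores: int) -> float:
    """Average Manhattan distance between two random nodes on a
    sqrt(n) x sqrt(n) mesh is roughly (2/3) * sqrt(n)."""
    return (2.0 / 3.0) * math.isqrt(n_cores)

for cores in (64, 1024, 4096):
    print(f"{cores} cores: mesh ~{mesh_avg_hops(cores):.1f} hops,"
          " wireless broadcast: 1 hop")
```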

Anyone interested in  Samsung’s GRO programme can find more here,

The SAMSUNG Global Research Outreach (GRO) program, open to leading universities around the world, is Samsung Electronics, Co., Ltd. & related Samsung companies (SAMSUNG)’s annual call for research proposals.

As this Samsung-funded research project is being announced, Dexter Johnson details the business failure of NanoInk in a Feb. 22, 2013 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), Note: Links have been removed,

One of the United States’ first nanotechnology companies, NanoInk, has gone belly up, joining a host of high-profile nanotechnology-based companies that have shuttered their doors in the last 12 months: Konarka, A123 Systems and Ener1.

These other three companies were all tied to the energy markets (solar in the case of Konarka and batteries for both A123 and Ener1), which are typically volatile, with a fair number of shuttered businesses dotting their landscapes. But NanoInk is a venerable old company in comparison to these other three and is more in what could be characterized as the “picks-and-shovels” side of the nanotechnology business, microscopy tools.

Dexter goes on to provide an analysis of the NanoInk situation which makes for some very interesting reading along with the comments—some feisty, some not—his posting has provoked.

I am juxtaposing the Samsung funding announcement with this mention of Dexter’s piece regarding a ‘nanotechnology’ business failure in an effort to provide some balance between enthusiasm for the research and the realities of developing businesses and products based on that research.