Tag Archives: Georgia Tech

I sing the body cyber: two projects funded by the US National Science Foundation

Points to anyone who recognized the reference to Walt Whitman’s poem, “I sing the body electric,” from his classic collection, Leaves of Grass (1867 edition; h/t Wikipedia entry). I wonder if the cyber-physical systems (CPS) work being funded by the US National Science Foundation (NSF) will occasion poetry too.

More practically, a May 15, 2015 news item on Nanowerk describes two cyber-physical systems (CPS) research projects newly funded by the NSF,

Today [May 12, 2015] the National Science Foundation (NSF) announced two, five-year, center-scale awards totaling $8.75 million to advance the state-of-the-art in medical and cyber-physical systems (CPS).

One project will develop “Cyberheart”–a platform for virtual, patient-specific human heart models and associated device therapies that can be used to improve and accelerate medical-device development and testing. The other project will combine teams of microrobots with synthetic cells to perform functions that may one day lead to tissue and organ regeneration.

CPS are engineered systems that are built from, and depend upon, the seamless integration of computation and physical components. Often called the “Internet of Things,” CPS enable capabilities that go beyond the embedded systems of today.

“NSF has been a leader in supporting research in cyber-physical systems, which has provided a foundation for putting the ‘smart’ in health, transportation, energy and infrastructure systems,” said Jim Kurose, head of Computer & Information Science & Engineering at NSF. “We look forward to the results of these two new awards, which paint a new and compelling vision for what’s possible for smart health.”

Cyber-physical systems have the potential to benefit many sectors of our society, including healthcare. While advances in sensors and wearable devices have the capacity to improve aspects of medical care, from disease prevention to emergency response, and synthetic biology and robotics hold the promise of regenerating and maintaining the body in radical new ways, little is known about how advances in CPS can integrate these technologies to improve health outcomes.

These new NSF-funded projects will investigate two very different ways that CPS can be used in the biological and medical realms.

A May 12, 2015 NSF news release (also on EurekAlert), which originated the news item, describes the two CPS projects,

Bio-CPS for engineering living cells

A team of leading computer scientists, roboticists and biologists from Boston University, the University of Pennsylvania and MIT have come together to develop a system that combines the capabilities of nano-scale robots with specially designed synthetic organisms. Together, they believe this hybrid “bio-CPS” will be capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.

“We bring together synthetic biology and micron-scale robotics to engineer the emergence of desired behaviors in populations of bacterial and mammalian cells,” said Calin Belta, a professor of mechanical engineering, systems engineering and bioinformatics at Boston University and principal investigator on the project. “This project will impact several application areas ranging from tissue engineering to drug development.”

The project builds on previous research by each team member in diverse disciplines and early proof-of-concept designs of bio-CPS. According to the team, the research is also driven by recent advances in the emerging field of synthetic biology, in particular the ability to rapidly incorporate new capabilities into simple cells. Researchers so far have not been able to control and coordinate the behavior of synthetic cells in isolation, but the introduction of microrobots that can be externally controlled may be transformative.

In this new project, the team will focus on bio-CPS with the ability to sense, transport and work together. As a demonstration of their idea, they will develop teams of synthetic cell/microrobot hybrids capable of constructing a complex, fabric-like surface.

Vijay Kumar (University of Pennsylvania), Ron Weiss (MIT), and Douglas Densmore (BU) are co-investigators of the project.

Medical-CPS and the ‘Cyberheart’

CPS such as wearable sensors and implantable devices are already being used to assess health, improve quality of life, provide cost-effective care and potentially speed up disease diagnosis and prevention. [emphasis mine]

Extending these efforts, researchers from seven leading universities and centers are working together to develop far more realistic cardiac and device models than currently exist. This so-called “Cyberheart” platform can be used to test and validate medical devices faster and at a far lower cost than existing methods. CyberHeart also can be used to design safe, patient-specific device therapies, thereby lowering the risk to the patient.

“Innovative ‘virtual’ design methodologies for implantable cardiac medical devices will speed device development and yield safer, more effective devices and device-based therapies than is currently possible,” said Scott Smolka, a professor of computer science at Stony Brook University and one of the principal investigators on the award.

The group’s approach combines patient-specific computational models of heart dynamics with advanced mathematical techniques for analyzing how these models interact with medical devices. The analytical techniques can be used to detect potential flaws in device behavior early on during the device-design phase, before animal and human trials begin. They also can be used in a clinical setting to optimize device settings on a patient-by-patient basis before devices are implanted.

“We believe that our coordinated, multi-disciplinary approach, which balances theoretical, experimental and practical concerns, will yield transformational results in medical-device design and foundations of cyber-physical system verification,” Smolka said.

The team will develop virtual device models which can be coupled together with virtual heart models to realize a full virtual development platform that can be subjected to computational analysis and simulation techniques. Moreover, they are working with experimentalists who will study the behavior of virtual and actual devices on animals’ hearts.

Co-investigators on the project include Edmund Clarke (Carnegie Mellon University), Elizabeth Cherry (Rochester Institute of Technology), W. Rance Cleaveland (University of Maryland), Flavio Fenton (Georgia Tech), Rahul Mangharam (University of Pennsylvania), Arnab Ray (Fraunhofer Center for Experimental Software Engineering [Germany]) and James Glimm and Radu Grosu (Stony Brook University). Richard A. Gray of the U.S. Food and Drug Administration is another key contributor.

It is fascinating to observe how terminology is shifting from pacemakers and deep brain stimulators as implants to “CPS such as wearable sensors and implantable devices … .” A new category has been created, CPS, which conjoins medical devices with other sensing devices such as the wearable fitness monitors found in the consumer market. I imagine it’s an attempt to quell fears about injecting strange things into or adding strange things to your body—microrobots and nanorobots partially derived from synthetic biology research, which are “… capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.” They’ve also sneaked in a reference to synthetic biology, an area of research where some concerns have been expressed. From my March 19, 2013 post about a poll on synthetic biology concerns,

In our latest survey, conducted in January 2013, three-fourths of respondents say they have heard little or nothing about synthetic biology, a level consistent with that measured in 2010. While initial impressions about the science are largely undefined, these feelings do not necessarily become more positive as respondents learn more. The public has mixed reactions to specific synthetic biology applications, and almost one-third of respondents favor a ban “on synthetic biology research until we better understand its implications and risks,” while 61 percent think the science should move forward.

I imagine that for scientists, 61% in favour of more research is not particularly comforting given how easily and quickly public opinion can shift.

Bendable, stretchable, lightweight, and transparent: a new contender for the title of ‘thinnest electric generator’

An Oct. 15, 2014 Columbia University (New York, US) press release (also on EurekAlert) describes another contender for the title of the world’s thinnest electric generator,

Researchers from Columbia Engineering and the Georgia Institute of Technology [US] report today [Oct. 15, 2014] that they have made the first experimental observation of piezoelectricity and the piezotronic effect in an atomically thin material, molybdenum disulfide (MoS2), resulting in a unique electric generator and mechanosensation devices that are optically transparent, extremely light, and very bendable and stretchable.

In a paper published online October 15, 2014, in Nature, research groups from the two institutions demonstrate the mechanical generation of electricity from the two-dimensional (2D) MoS2 material. The piezoelectric effect in this material had previously been predicted theoretically.

Here’s a link to and a citation for the paper,

Piezoelectricity of single-atomic-layer MoS2 for energy conversion and piezotronics by Wenzhuo Wu, Lei Wang, Yilei Li, Fan Zhang, Long Lin, Simiao Niu, Daniel Chenet, Xian Zhang, Yufeng Hao, Tony F. Heinz, James Hone, & Zhong Lin Wang. Nature (2014) doi:10.1038/nature13792 Published online 15 October 2014

This paper is behind a paywall. There is a free preview available with ReadCube Access.

Getting back to the Columbia University press release, it offers a general description of piezoelectricity and some insight into this new research on molybdenum disulfide,

Piezoelectricity is a well-known effect in which stretching or compressing a material causes it to generate an electrical voltage (or the reverse, in which an applied voltage causes it to expand or contract). But for materials of only a few atomic thicknesses, no experimental observation of piezoelectricity has been made, until now. The observation reported today provides a new property for two-dimensional materials such as molybdenum disulfide, opening the potential for new types of mechanically controlled electronic devices.

“This material—just a single layer of atoms—could be made as a wearable device, perhaps integrated into clothing, to convert energy from your body movement to electricity and power wearable sensors or medical devices, or perhaps supply enough energy to charge your cell phone in your pocket,” says James Hone, professor of mechanical engineering at Columbia and co-leader of the research.

“Proof of the piezoelectric effect and piezotronic effect adds new functionalities to these two-dimensional materials,” says Zhong Lin Wang, Regents’ Professor in Georgia Tech’s School of Materials Science and Engineering and a co-leader of the research. “The materials community is excited about molybdenum disulfide, and demonstrating the piezoelectric effect in it adds a new facet to the material.”

Hone and his research group demonstrated in 2008 that graphene, a 2D form of carbon, is the strongest material. He and Lei Wang, a postdoctoral fellow in Hone’s group, have been actively exploring the novel properties of 2D materials like graphene and MoS2 as they are stretched and compressed.

Zhong Lin Wang and his research group pioneered the field of piezoelectric nanogenerators for converting mechanical energy into electricity. He and postdoctoral fellow Wenzhuo Wu are also developing piezotronic devices, which use piezoelectric charges to control the flow of current through the material just as gate voltages do in conventional three-terminal transistors.

There are two keys to using molybdenum disulfide for generating current: using an odd number of layers and flexing it in the proper direction. The material is highly polar, but, Zhong Lin Wang notes, an even number of layers cancels out the piezoelectric effect. The material’s crystalline structure is also piezoelectric in only certain crystalline orientations.

For the Nature study, Hone’s team placed thin flakes of MoS2 on flexible plastic substrates and determined how their crystal lattices were oriented using optical techniques. They then patterned metal electrodes onto the flakes. In research done at Georgia Tech, Wang’s group installed measurement electrodes on samples provided by Hone’s group, then measured current flows as the samples were mechanically deformed. They monitored the conversion of mechanical to electrical energy, and observed voltage and current outputs.

The researchers also noted that the output voltage reversed sign when they changed the direction of applied strain, and that it disappeared in samples with an even number of atomic layers, confirming theoretical predictions published last year. The presence of the piezotronic effect in odd-layer MoS2 was also observed for the first time.

“What’s really interesting is we’ve now found that a material like MoS2, which is not piezoelectric in bulk form, can become piezoelectric when it is thinned down to a single atomic layer,” says Lei Wang.

To be piezoelectric, a material must break central symmetry. A single atomic layer of MoS2 has such a structure, and should be piezoelectric. However, in bulk MoS2, successive layers are oriented in opposite directions, and generate positive and negative voltages that cancel each other out and give zero net piezoelectric effect.
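To make that even/odd cancellation concrete, here’s a minimal sketch in Python (my own toy model, not the researchers’ calculation) that treats each layer as contributing an alternating polarization and sums the stack,

# Toy model: successive MoS2 layers are oriented in opposite directions,
# so each layer contributes an alternating piezoelectric polarization.
def net_polarization(num_layers, p=1.0):
    # Sum alternating per-layer contributions: +p, -p, +p, ...
    return sum(p if i % 2 == 0 else -p for i in range(num_layers))

for n in range(1, 7):
    print(n, "layer(s):", net_polarization(n))
# Odd layer counts leave a net polarization of p; even counts cancel
# to zero, matching the reported observations.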

“This adds another member to the family of piezoelectric materials for functional devices,” says Wenzhuo Wu.

In fact, MoS2 is just one of a group of 2D semiconducting materials known as transition metal dichalcogenides, all of which are predicted to have similar piezoelectric properties. These are part of an even larger family of 2D materials whose piezoelectric properties remain unexplored. Importantly, as has been shown by Hone and his colleagues, 2D materials can be stretched much farther than conventional materials, particularly traditional ceramic piezoelectrics, which are quite brittle.

The research could open the door to development of new applications for the material and its unique properties.

“This is the first experimental work in this area and is an elegant example of how the world becomes different when the size of material shrinks to the scale of a single atom,” Hone adds. “With what we’re learning, we’re eager to build useful devices for all kinds of applications.”

Ultimately, Zhong Lin Wang notes, the research could lead to complete atomic-thick nanosystems that are self-powered by harvesting mechanical energy from the environment. This study also reveals the piezotronic effect in two-dimensional materials for the first time, which greatly expands the application of layered materials for human-machine interfacing, robotics, MEMS, and active flexible electronics.

I see there’s a reference in that last paragraph to “harvesting mechanical energy from the environment.” I’m not sure what they mean by that, but I have written a few times about harvesting biomechanical energy. One of my earliest pieces is a July 12, 2010 post which features work by Zhong Lin Wang on harvesting energy from heart beats, blood flow, muscle stretching, or even irregular vibrations. One of my latest pieces is a Sept. 17, 2014 post about some work in Canada on harvesting energy from the jaw as you chew.

A final note, Dexter Johnson discusses this work in an Oct. 16, 2014 post on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

Cardiac pacemakers: Korea’s in vivo demonstration of a self-powered one* and UK’s breath-based approach

As best I can determine, the last mention of a self-powered pacemaker and the like on this blog was in a Nov. 5, 2012 posting (Developing self-powered batteries for pacemakers). This latest news from The Korea Advanced Institute of Science and Technology (KAIST) is, I believe, the first time that such a device has been successfully tested in vivo. From a June 23, 2014 news item on ScienceDaily,

As the number of pacemakers implanted each year reaches into the millions worldwide, improving the lifespan of pacemaker batteries has been of great concern for developers and manufacturers. Currently, pacemaker batteries last seven years on average, requiring frequent replacements, which may expose patients to the risks involved in medical procedures.

A research team from the Korea Advanced Institute of Science and Technology (KAIST), headed by Professor Keon Jae Lee of the Department of Materials Science and Engineering at KAIST and Professor Boyoung Joung, M.D. of the Division of Cardiology at Severance Hospital of Yonsei University, has developed a self-powered artificial cardiac pacemaker that is operated semi-permanently by a flexible piezoelectric nanogenerator.

A June 23, 2014 KAIST news release on EurekAlert, which originated the news item, provides more details,

The artificial cardiac pacemaker is widely acknowledged as medical equipment that is integrated into the human body to regulate the heartbeats through electrical stimulation to contract the cardiac muscles of people who suffer from arrhythmia. However, repeated surgeries to replace pacemaker batteries have exposed elderly patients to health risks such as infections or severe bleeding during operations.

The team’s newly designed flexible piezoelectric nanogenerator directly stimulated a living rat’s heart using electrical energy converted from the small body movements of the rat. This technology could facilitate the use of self-powered flexible energy harvesters, not only prolonging the lifetime of cardiac pacemakers but also realizing real-time heart monitoring.

The research team fabricated high-performance flexible nanogenerators utilizing a bulk single-crystal PMN-PT thin film (iBULe Photonics). The harvested output reached up to 8.2 V and 0.22 mA from bending and pushing motions, values high enough to directly stimulate the rat’s heart.

Professor Keon Jae Lee said:

“For clinical purposes, the current achievement will benefit the development of self-powered cardiac pacemakers as well as prevent heart attacks via the real-time diagnosis of heart arrhythmia. In addition, the flexible piezoelectric nanogenerator could also be utilized as an electrical source for various implantable medical devices.”
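As a rough check on those output figures, multiplying the reported peak voltage and current gives the instantaneous electrical power. This is my own back-of-the-envelope arithmetic in Python, not a figure from KAIST,

# Back-of-the-envelope: peak electrical power from the reported output.
voltage = 8.2      # volts (reported peak)
current = 0.22e-3  # amperes (0.22 mA, reported peak)
power_mw = voltage * current * 1e3
print(f"Peak power: {power_mw:.2f} mW")  # ~1.80 mW

Pacemakers are usually described as drawing mere microwatts, so a peak of roughly 1.8 mW looks ample, although peak output and sustained delivery are very different things.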

This image illustrating a self-powered nanogenerator for a cardiac pacemaker has been provided by KAIST,

This picture shows that a self-powered cardiac pacemaker is enabled by a flexible piezoelectric energy harvester. Credit: KAIST

Here’s a link to and a citation for the paper,

Self-Powered Cardiac Pacemaker Enabled by Flexible Single Crystalline PMN-PT Piezoelectric Energy Harvester by Geon-Tae Hwang, Hyewon Park, Jeong-Ho Lee, SeKwon Oh, Kwi-Il Park, Myunghwan Byun, Hyelim Park, Gun Ahn, Chang Kyu Jeong, Kwangsoo No, HyukSang Kwon, Sang-Goo Lee, Boyoung Joung, and Keon Jae Lee. Advanced Materials DOI: 10.1002/adma.201400562
Article first published online: 17 APR 2014

This paper is behind a paywall.

There was a May 15, 2014 KAIST news release on EurekAlert announcing this same piece of research but from a technical perspective,

The energy efficiency of KAIST’s piezoelectric nanogenerator has increased by almost 40 times, one step closer toward the commercialization of flexible energy harvesters that can supply power infinitely to wearable, implantable electronic devices

NANOGENERATORS are innovative self-powered energy harvesters that convert kinetic energy created from vibrational and mechanical sources into electrical power, removing the need for external circuits or batteries in electronic devices. This innovation is vital in realizing sustainable energy generation in isolated, inaccessible, or indoor environments and even in the human body.

Nanogenerators, flexible and lightweight energy harvesters on plastic substrates, can scavenge energy from the extremely tiny movements of natural sources and the human body, such as wind, water flow, heartbeats, and diaphragm and respiration activities, to generate electrical signals. The generators are not only self-powered, flexible devices but can also provide permanent power sources to implantable biomedical devices, including cardiac pacemakers and deep brain stimulators.

However, poor energy efficiency and a complex fabrication process have posed challenges to the commercialization of nanogenerators. Keon Jae Lee, Associate Professor of Materials Science and Engineering at KAIST, and his colleagues have recently proposed a solution by developing a robust technique to transfer a high-quality piezoelectric thin film from bulk sapphire substrates to plastic substrates using laser lift-off (LLO).

Applying the inorganic-based laser lift-off (LLO) process, the research team produced large-area PZT thin-film nanogenerators on flexible substrates (2 cm x 2 cm).

“We were able to convert a high-output performance of ~250 V from the slight mechanical deformation of a single thin plastic substrate. Such output power is just enough to turn on 100 LED lights,” Keon Jae Lee explained.

The self-powered nanogenerators can also work with finger and foot motions. For example, under the irregular and slight bending motions of a human finger, the measured current signals reached a high output of ~8.7 μA. In addition, the piezoelectric nanogenerator has world-record power conversion efficiency, almost 40 times higher than previously reported results from similar research, addressing the drawbacks of fabrication complexity and low energy efficiency.

Lee further commented,

“Building on this concept, it is highly expected that tiny mechanical motions, including human body movements of muscle contraction and relaxation, can be readily converted into electrical energy and, furthermore, acted as eternal power sources.”

The research team is currently studying a method to build three-dimensional stacking of flexible piezoelectric thin films to enhance output power, as well as conducting a clinical experiment with a flexible nanogenerator.

In addition to the 2012 posting I mentioned earlier, there was also this July 12, 2010 posting, which described research on harvesting energy from biomechanical movement (heart beat, blood flow, muscle stretching, or even irregular vibration) at the Georgia (US) Institute of Technology, where the lead researcher observed,

… Wang [Professor Zhong Lin Wang at Georgia Tech] tells Nanowerk. “However, the applications of the nanogenerators under in vivo and in vitro environments are distinct. Some crucial problems need to be addressed before using these devices in the human body, such as biocompatibility and toxicity.”

Bravo to the KAIST researchers for getting this research to the in vivo testing stage.

Meanwhile, researchers at the University of Bristol and the University of Bath have received funding for a new approach to cardiac pacemakers, designed with the breath in mind. From a June 24, 2014 news item on Azonano,

Pacemaker research from the Universities of Bath and Bristol could revolutionise the lives of over 750,000 people who live with heart failure in the UK.

The British Heart Foundation (BHF) is awarding funding to researchers developing a new type of heart pacemaker that modulates its pulses to match breathing rates.

A June 23, 2014 University of Bristol press release, which originated the news item, provides some context,

During 2012-13 in England, more than 40,000 patients had a pacemaker fitted.

Currently, the pulses from pacemakers are set at a constant rate when fitted which doesn’t replicate the natural beating of the human heart.

The normal healthy variation in heart rate during breathing is lost in cardiovascular disease and is an indicator for sleep apnoea, cardiac arrhythmia, hypertension, heart failure and sudden cardiac death.

The device is then briefly described (from the press release),

The novel device being developed by scientists at the Universities of Bath and Bristol uses synthetic neural technology to restore this natural variation of heart rate with lung inflation, and is targeted towards patients with heart failure.

The device works by saving the heart energy, improving its pumping efficiency and enhancing blood flow to the heart muscle itself. Pre-clinical trials suggest the device gives a 25 per cent increase in pumping ability, which is expected to extend the life of patients with heart failure.

One aim of the project is to miniaturise the pacemaker device to the size of a postage stamp and to develop an implant that could be used in humans within five years.

Dr Alain Nogaret, Senior Lecturer in Physics at the University of Bath, explained: “This is a multidisciplinary project with strong translational value. By combining fundamental science and nanotechnology we will be able to deliver a unique treatment for heart failure which is not currently addressed by mainstream cardiac rhythm management devices.”

The research team has already patented the technology and is working with NHS consultants at the Bristol Heart Institute, the University of California at San Diego and the University of Auckland. [emphasis mine]

Professor Julian Paton, from the University of Bristol, added: “We’ve known for almost 80 years that the heart beat is modulated by breathing but we have never fully understood the benefits this brings. The generous new funding from the BHF will allow us to reinstate this naturally occurring synchrony between heart rate and breathing and understand how it brings therapy to hearts that are failing.”

Professor Jeremy Pearson, Associate Medical Director at the BHF, said: “This study is a novel and exciting first step towards a new generation of smarter pacemakers. More and more people are living with heart failure so our funding in this area is crucial. The work from this innovative research team could have a real impact on heart failure patients’ lives in the future.”

Given some current events (‘Tesla opens up its patents’, Mike Masnick’s June 12, 2014 posting on Techdirt), I wonder what the situation will be vis à vis patents by the time this device gets to market.

* ‘one’ added to title on Aug. 13, 2014.

Roadmap to neuromorphic engineering (digital and analog) for the creation of artificial brains *from the Georgia (US) Institute of Technology

While I didn’t mention neuromorphic engineering in my April 16, 2014 posting, which focused on the more general aspect of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie ‘possible’ (assuming very large stretches of imagination bringing us into the realm of implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,

Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they canbut [sic] if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.

In the film, Caster’s intelligence/consciousness is uploaded to the computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computers or artificial intelligence more humanlike is called neuromorphic engineering, and according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,

In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.

A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.

The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”

An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,

Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.

Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.

In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.

Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”

Hasler and Marr’s 29-page paper traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.

For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.

“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”

Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.

Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”

Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”

Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain – seems technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”

Here’s a link to and a citation for the roadmap,

Finding a roadmap to achieve large neuromorphic hardware systems by Jennifer Hasler and Bo Marr.  Front. Neurosci. (Frontiers in Neuroscience), 10 September 2013 | doi: 10.3389/fnins.2013.00118

This is an open access article (at least, the HTML version is).

I have looked at Hasler’s roadmap and it provides a good and readable overview (even for an amateur like me; note: you do need some tolerance for ‘not knowing’) of the state of neuromorphic engineering’s problems, and suggestions for overcoming them. Here’s a description of a human brain and its power requirements as compared to a computer’s (from the roadmap),

One of the amazing things about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there are 10^12 neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10^15 MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10^11 neurons and 10,000 synapses is expected.

Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies, as well as takes advantage of the capabilities of IC technology that are currently visible, it will not be successful.
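The arithmetic in the first of those two excerpts is easy to verify. Here’s a quick check of the roadmap’s own numbers in Python (using my reading of its units),

# Checking the roadmap's brain-efficiency arithmetic.
total_power = 20.0     # watts, whole brain
neurons = 1e12         # neuron count used in the roadmap
print(f"Per neuron: {total_power / neurons * 1e12:.0f} pW")  # 20 pW

mac_rate = 1e5 * 1e15  # 100,000 PMAC = 1e20 MAC/s
efficiency = mac_rate / total_power  # MAC/s per watt
print(f"Efficiency: {efficiency / 1e15:.0f} PMAC/W")  # 5000 PMAC/W
# 5000 PMAC/W is the same as 5 TMAC/uW, just as the roadmap states.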

One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),

The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties to get to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering; the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).

I notice that the reference to the University of Michigan is relatively neutral in tone and the memristor does not figure substantively in Hasler’s roadmap.

Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),

Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how will the technology developed for these large systems impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20 M in current prices, which although possible for large users, would not be common to be found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into commercial environment allows us to ride this powerful economic “engine” rather than pull.

In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.

The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which have included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality requiring ultra-low power levels. Further, we see in companies like Audience there is some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.

One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.

We expect progress in these neuromorphic systems and that should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at a power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, both in power, area, and communication, one can expect to see more and more of these issues going forward.

We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10^7 to 10^9 neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.

In the following subsections, we overview general guidelines to consider when considering using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.
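Out of curiosity, the conservative efficiency assumed in that passage (10 TMAC/mW) can be combined with the roadmap’s earlier brain-scale estimate of roughly 10^20 MAC/s. The extrapolation below is mine, not the paper’s,

# Extrapolating the roadmap's assumptions (my arithmetic, not the paper's).
efficiency = 10e12 / 1e-3  # MAC/s per watt (10 TMAC per mW)
brain_scale_rate = 1e20    # MAC/s, from the roadmap's earlier estimate
power_kw = brain_scale_rate / efficiency / 1e3
print(f"Power at brain scale: {power_kw:.0f} kW")  # ~10 kW
# Far above the brain's 20 W, but an order of magnitude below the
# "hundred thousand watts of electricity or more" cited earlier for an
# all-digital approach.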

I have a casual observation to make. While the authors of the roadmap came to this conclusion, “This study concludes that useful neural computation machines based on biological principles at the size of the human brain seems technically within our grasp,” they’re also leaving themselves some wiggle room because the truth is no one knows if copying a human brain with circuits and various devices will lead to ‘thinking’ as we understand the concept.

For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.

One last observation about the movie ‘Transcendence’: has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?

* Space inserted between ‘brains’ and ‘from’ in head on April 21, 2014.

Good lignin, bad lignin: Florida researchers use plant waste to create lignin nanotubes while researchers in British Columbia develop trees with less lignin

An April 4, 2014 news item on Azonano describes some nanotube research at the University of Florida that reaches past carbon to a new kind of nanotube,

Researchers with the University of Florida’s [UF] Institute of Food and Agricultural Sciences took what some would consider garbage and made a remarkable scientific tool, one that could someday help to correct genetic disorders or treat cancer without chemotherapy’s nasty side effects.

Wilfred Vermerris, an associate professor in UF’s department of microbiology and cell science, and Elena Ten, a postdoctoral research associate, created from plant waste a novel nanotube, one that is much more flexible than rigid carbon nanotubes currently used. The researchers say the lignin nanotubes – about 500 times smaller than a human eyelash – can deliver DNA directly into the nucleus of human cells in tissue culture, where this DNA could then correct genetic conditions. Experiments with DNA injection are currently being done with carbon nanotubes, as well.

“That was a surprising result,” Vermerris said. “If you can do this in actual human beings you could fix defective genes that cause disease symptoms and replace them with functional DNA delivered with these nanotubes.”

An April 3, 2014 University of Florida’s Institute of Food and Agricultural Sciences news release, which originated the news item, describes the lignin nanotubes (LNTs) and future applications in more detail,

The nanotube is made up of lignin from plant material obtained from a UF biofuel pilot facility in Perry, Fla. Lignin is an integral part of the secondary cell walls of plants and enables water movement from the roots to the leaves, but it is not used to make biofuels and would otherwise be burned to generate heat or electricity at the biofuel plant. The lignin nanotubes can be made from a variety of plant residues, including sorghum, poplar, loblolly pine and sugar cane. [emphasis mine]

The researchers first tested to see if the nanotubes were toxic to human cells and were surprised to find that they were less so than carbon nanotubes. Thus, they could deliver a higher dose of medicine to the human cell tissue. Then they tested whether the nanotubes could deliver plasmid DNA to the same cells, and that was successful, too. A plasmid is a small DNA molecule that is physically separate from, and can replicate independently of, chromosomal DNA within a cell.

“It’s not a very smooth road because we had to try different experiments to confirm the results,” Ten said. “But it was very fruitful.”

In cases of genetic disorders, the nanotube would be loaded with a functioning copy of a gene, and injected into the body, where it would target the affected tissue, which then makes the missing protein and corrects the genetic disorder.

Vermerris cautioned that treatment in humans is many years away, but among the conditions that these gene-carrying nanotubes could correct are cystic fibrosis and muscular dystrophy. He added that patients would have to take the corrective DNA via nanotubes on a continuing basis.

Another application under consideration is to use the lignin nanotubes for the delivery of chemotherapy drugs in cancer patients. The nanotubes would ensure the drugs only get to the tumor without affecting healthy tissues.

Vermerris said they created different types of nanotubes, depending on the experiment. They could also adapt nanotubes to a patient’s specific needs, a process called customization.

“You can think about it as a chest of drawers and, depending on the application, you open one drawer or use materials from a different drawer to get things just right for your specific application,” he said.  “It’s not very difficult to do the customization.”

The next step in the research process is for Vermerris and Ten to begin experiments on mice. They are in the application process for those experiments, which would take several years to complete.  If those are successful, permits would need to be obtained for their medical school colleagues to conduct research on human patients, with Vermerris and Ten providing the nanotubes for that research.

“We are a long way from that point,” Vermerris said. “That’s the optimistic long-term trajectory.”

I hope they have good luck with this work. I have emphasized the plant waste the University of Florida scientists studied due to the inclusion of poplar, which is featured in the University of British Columbia research work also being mentioned in this post.

Getting back to Florida for a moment, here’s a link to and a citation for the paper,

Lignin Nanotubes As Vehicles for Gene Delivery into Human Cells by Elena Ten, Chen Ling, Yuan Wang, Arun Srivastava, Luisa Amelia Dempere, and Wilfred Vermerris. Biomacromolecules, 2014, 15 (1), pp 327–338 DOI: 10.1021/bm401555p Publication Date (Web): December 5, 2013

This is an open access paper.

Meanwhile, researchers at the University of British Columbia (UBC) are trying to limit the amount of lignin in trees (specifically poplars, which are not mentioned in this excerpt but in the next). From an April 3, 2014 UBC news release,

Researchers have genetically engineered trees that will be easier to break down to produce paper and biofuel, a breakthrough that will mean using fewer chemicals, less energy and creating fewer environmental pollutants.

“One of the largest impediments for the pulp and paper industry as well as the emerging biofuel industry is a polymer found in wood known as lignin,” says Shawn Mansfield, a professor of Wood Science at the University of British Columbia.

Lignin makes up a substantial portion of the cell wall of most plants and is a processing impediment for pulp, paper and biofuel. Currently the lignin must be removed, a process that requires significant chemicals and energy and causes undesirable waste.

Researchers used genetic engineering to modify the lignin to make it easier to break down without adversely affecting the tree’s strength.

“We’re designing trees to be processed with less energy and fewer chemicals, and ultimately recovering more wood carbohydrate than is currently possible,” says Mansfield.

Researchers had previously tried to tackle this problem by reducing the quantity of lignin in trees by suppressing genes, which often resulted in trees that are stunted in growth or were susceptible to wind, snow, pests and pathogens.

“It is truly a unique achievement to design trees for deconstruction while maintaining their growth potential and strength.”

The study, a collaboration between researchers at the University of British Columbia, the University of Wisconsin-Madison and Michigan State University, funded by the Great Lakes Bioenergy Research Center, was published today in Science.

Here’s more about lignin and how a decrease would free up more material for biofuels in a more environmentally sustainable fashion, from the news release,

The structure of lignin naturally contains ether bonds that are difficult to degrade. Researchers used genetic engineering to introduce ester bonds into the lignin backbone that are easier to break down chemically.

The new technique means that the lignin may be recovered more effectively and used in other applications, such as adhesives, insulation, carbon fibres and paint additives.

Genetic modification

The genetic modification strategy employed in this study could also be applied to other plants, such as grasses, to produce a new kind of fuel to replace petroleum.

Genetic modification can be a contentious issue, but there are ways to ensure that the genes do not spread to the forest. These techniques include growing crops away from native stands so cross-pollination isn’t possible; introducing genes to make both the male and female trees or plants sterile; and harvesting trees before they reach reproductive maturity.

In the future, genetically modified trees could be planted like an agricultural crop, not in our native forests. Poplar is a potential energy crop for the biofuel industry because the tree grows quickly and on marginal farmland. [emphasis mine] Lignin makes up 20 to 25 per cent of the tree.

“We’re a petroleum reliant society,” says Mansfield. “We rely on the same resource for everything from smartphones to gasoline. We need to diversify and take the pressure off of fossil fuels. Trees and plants have enormous potential to contribute carbon to our society.”

As noted earlier, the researchers in Florida mention poplars in their paper (Note: Links have been removed),

Gymnosperms such as loblolly pine (Pinus taeda L.) contain lignin that is composed almost exclusively of G-residues, whereas lignin from angiosperm dicots, including poplar (Populus spp.) contains a mixture of G- and S-residues. [emphasis mine] Due to the radical-mediated addition of monolignols to the growing lignin polymer, lignin contains a variety of interunit bonds, including aryl–aryl, aryl–alkyl, and alkyl–alkyl bonds.(3) This feature, combined with the association between lignin and cell-wall polysaccharides, which involves both physical and chemical interactions, make the isolation of lignin from plant cell walls challenging. Various isolation methods exist, each relying on breaking certain types of chemical bonds within the lignin, and derivatizations to solubilize the resulting fragments.(5) Several of these methods are used on a large scale in pulp and paper mills and biorefineries, where lignin needs to be removed from woody biomass and crop residues(6) in order to use the cellulose for the production of paper, biofuels, and biobased polymers. The lignin is present in the waste stream and has limited intrinsic economic value.(7)

Since hydroxyl and carboxyl groups in lignin facilitate functionalization, its compatibility with natural and synthetic polymers for different commercial applications have been extensively studied.(8-12) One of the promising directions toward the cost reduction associated with biofuel production is the use of lignin for low-cost carbon fibers.(13) Other recent studies reported development and characterization of lignin nanocomposites for multiple value-added applications. For example, cellulose nanocrystals/lignin nanocomposites were developed for improved optical, antireflective properties(14, 15) and thermal stability of the nanocomposites.(16) [emphasis mine] Model ultrathin bicomponent films prepared from cellulose and lignin derivatives were used to monitor enzyme binding and cellulolytic reactions for sensing platform applications.(17) Enzymes/“synthetic lignin” (dehydrogenation polymer (DHP)) interactions were also investigated to understand how lignin impairs enzymatic hydrolysis during the biomass conversion processes.(18)

The synthesis of lignin nanotubes and nanowires was based on cross-linking a lignin base layer to an alumina membrane, followed by peroxidase-mediated addition of DHP and subsequent dissolution of the membrane in phosphoric acid.(1) Depending upon monomers used for the deposition of DHP, solid nanowires, or hollow nanotubes could be manufactured and easily functionalized due to the presence of many reactive groups. Due to their autofluorescence, lignin nanotubes permit label-free detection under UV radiation.(1) These features make lignin nanotubes suitable candidates for numerous biomedical applications, such as the delivery of therapeutic agents and DNA to specific cells.

The synthesis of LNTs in a sacrificial template membrane is not limited to a single source of lignin or a single lignin isolation procedure. Dimensions of the LNTs and their cytotoxicity to HeLa cells appear to be determined primarily by the lignin isolation procedure, whereas the transfection efficiency is also influenced by the source of the lignin (plant species and genotype). This means that LNTs can be tailored to the application for which they are intended. [emphasis mine] The ability to design LNTs for specific purposes will benefit from a more thorough understanding of the relationship between the structure and the MW of the lignin used to prepare the LNTs, the nanomechanical properties, and the surface characteristics.

We have shown that DNA is physically associated with the LNTs and that the LNTs enter the cytosol, and in some case the nucleus. The LNTs made from NaOH-extracted lignin are of special interest, as they were the shortest in length, substantially reduced HeLa cell viability at levels above approximately 50 mg/mL, and, in the case of pine and poplar, were the most effective in the transfection [penetrating the cell with a bacterial plasmid to leave genetic material in this case] experiments. [emphasis mine]

As I see it, these two research efforts pull in different directions: there are environmental and energy issues around extracting lignin, while the lignin ‘waste’ itself seems to hold some very promising medical applications. The two efforts aren’t necessarily antithetical, but together they raise some very interesting questions about how we approach our use of resources and future policies.

ETA May 16, 2014: The beat goes on, with the Georgia Institute of Technology (Georgia Tech) in the US issuing a roadmap for making money from lignin. From a Georgia Tech May 15, 2014 news release on EurekAlert,

When making cellulosic ethanol from plants, one problem is what to do with a woody agricultural waste product called lignin. The old adage in the pulp industry has been that one can make anything from lignin except money.

A new review article in the journal Science points the way toward a future where lignin is transformed from a waste product into valuable materials such as low-cost carbon fiber for cars or bio-based plastics. Using lignin in this way would create new markets for the forest products industry and make ethanol-to-fuel conversion more cost-effective.

“We’ve developed a roadmap for integrating genetic engineering with analytical chemistry tools to tailor the structure of lignin and its isolation so it can be used for materials, chemicals and fuels,” said Arthur Ragauskas, a professor in the School of Chemistry and Biochemistry at the Georgia Institute of Technology. Ragauskas is also part of the Institute for Paper Science and Technology at Georgia Tech.

The roadmap was published May 15 [2014] in the journal Science. …

Here’s a link to and citation for the ‘roadmap’,

Lignin Valorization: Improving Lignin Processing in the Biorefinery by Arthur J. Ragauskas, Gregg T. Beckham, Mary J. Biddy, Richard Chandra, Fang Chen, Mark F. Davis, Brian H. Davison, Richard A. Dixon, Paul Gilna, Martin Keller, Paul Langan, Amit K. Naskar, Jack N. Saddler, Timothy J. Tschaplinski, Gerald A. Tuskan, and Charles E. Wyman. Science 16 May 2014: Vol. 344 no. 6185 DOI: 10.1126/science.1246843

This paper is behind a paywall.

Mini Lisa made possible by ThermoChemical NanoLithography

One of the world’s most recognizable images has undergone a makeover of sorts. According to an Aug. 6, 2013 news item on Azonano, researchers at the Georgia Institute of Technology (Georgia Tech) in the US have created a mini Mona Lisa,

The world’s most famous painting has now been created on the world’s smallest canvas. Researchers at the Georgia Institute of Technology have “painted” the Mona Lisa on a substrate surface approximately 30 microns in width – or one-third the width of a human hair.

The team’s creation, the “Mini Lisa,” demonstrates a technique that could potentially be used to achieve nanomanufacturing of devices because the team was able to vary the surface concentration of molecules on such short-length scales.

The Aug. 5, 2013 Georgia Tech news release, which originated the news item, provides more technical details,

The image was created with an atomic force microscope and a process called ThermoChemical NanoLithography (TCNL). Going pixel by pixel, the Georgia Tech team positioned a heated cantilever at the substrate surface to create a series of confined nanoscale chemical reactions. By varying only the heat at each location, Ph.D. Candidate Keith Carroll controlled the number of new molecules that were created. The greater the heat, the greater the local concentration. More heat produced the lighter shades of gray, as seen on the Mini Lisa’s forehead and hands. Less heat produced the darker shades in her dress and hair seen when the molecular canvas is visualized using fluorescent dye. Each pixel is spaced by 125 nanometers.

“By tuning the temperature, our team manipulated chemical reactions to yield variations in the molecular concentrations on the nanoscale,” said Jennifer Curtis, an associate professor in the School of Physics and the study’s lead author. “The spatial confinement of these reactions provides the precision required to generate complex chemical images like the Mini Lisa.”

Production of chemical concentration gradients and variations on the sub-micrometer scale are difficult to achieve with other techniques, despite a wide range of applications the process could allow. The Georgia Tech TCNL research collaboration, which includes associate professor Elisa Riedo and Regents Professor Seth Marder, produced chemical gradients of amine groups, but expects that the process could be extended for use with other materials.

“We envision TCNL will be capable of patterning gradients of other physical or chemical properties, such as conductivity of graphene,” Curtis said. “This technique should enable a wide range of previously inaccessible experiments and applications in fields as diverse as nanoelectronics, optoelectronics and bioengineering.”

Another advantage, according to Curtis, is that atomic force microscopes are fairly common and the thermal control is relatively straightforward, making the approach accessible to both academic and industrial laboratories.  To facilitate their vision of nano-manufacturing devices with TCNL, the Georgia Tech team has recently integrated nanoarrays of five thermal cantilevers to accelerate the pace of production. Because the technique provides high spatial resolutions at a speed faster than other existing methods, even with a single cantilever, Curtis is hopeful that TCNL will provide the option of nanoscale printing integrated with the fabrication of large quantities of surfaces or everyday materials whose dimensions are more than one billion times larger than the TCNL features themselves.
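For readers who like to think in code, here’s a minimal sketch of the patterning idea described above: treat a greyscale image as a map of target molecular concentrations and convert each pixel into a cantilever temperature, with more heat yielding the lighter shades. The 125-nanometre pixel spacing comes from the news release; the linear heat-to-concentration mapping and the temperature range are my own illustrative assumptions, not Georgia Tech’s numbers.

```python
# Toy model of ThermoChemical NanoLithography (TCNL) patterning.
# Assumption: local concentration rises linearly with cantilever
# temperature between T_MIN_C and T_MAX_C; the real chemistry is
# more complicated than this.

PIXEL_PITCH_NM = 125              # pixel spacing cited in the news release
T_MIN_C, T_MAX_C = 100.0, 200.0   # hypothetical cantilever temperatures

def temperature_for_pixel(grey):
    """Map a greyscale value (0 = dark, 1 = light) to a temperature.

    Lighter shades need more heat, per the release: 'More heat
    produced the lighter shades of gray.'
    """
    return T_MIN_C + grey * (T_MAX_C - T_MIN_C)

def scan_plan(image):
    """Yield (x_nm, y_nm, temperature) for each pixel of a 2-D image."""
    for row, line in enumerate(image):
        for col, grey in enumerate(line):
            yield (col * PIXEL_PITCH_NM,
                   row * PIXEL_PITCH_NM,
                   temperature_for_pixel(grey))

# A 3x3 'image': corners dark, centre light.
demo = [[0.0, 0.2, 0.0],
        [0.2, 1.0, 0.2],
        [0.0, 0.2, 0.0]]

for x, y, t in scan_plan(demo):
    print(f"pixel at ({x:4.0f} nm, {y:4.0f} nm) -> {t:5.1f} degC")
```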

Here’s an image of the AFM and the cantilever used in the TCNL process to create the ‘Mini Lisa’,

Atomic force microscope (AFM) modified with a thermal cantilever. The AFM scanner allows for precise positioning on the nanoscale while the thermal cantilever induces local nanoscale chemical reactions. Courtesy Georgia Tech

Finally, the ‘Mini Lisa’,

Georgia Tech researchers have created the “Mini Lisa” on a substrate surface approximately 30 microns in width. The image demonstrates a technique that could potentially be used to achieve nano-manufacturing of devices because the team was able to vary the surface concentration of molecules on such short length scales. Courtesy Georgia Tech

For those who can’t get enough of the ‘Mini Lisa’ or TCNL, here’s a link to and a citation for the research team’s published paper,

Fabricating Nanoscale Chemical Gradients with ThermoChemical NanoLithography by Keith M. Carroll, Anthony J. Giordano, Debin Wang, Vamsi K. Kodali, Jan Scrimgeour, William P. King, Seth R. Marder, Elisa Riedo, and Jennifer E. Curtis. Langmuir, 2013, 29 (27), pp 8675–8682 DOI: 10.1021/la400996w Publication Date (Web): June 10, 2013
Copyright © 2013 American Chemical Society

This article is behind a paywall.

Solar cells made even more leaflike with inclusion of nanocellulose fibers

Researchers at the US Georgia Institute of Technology (Georgia Tech) and Purdue University (Indiana) have used cellulose nanocrystals (CNC), also known as nanocrystalline cellulose (NCC), to create efficient solar cells that can be recycled. From the Mar. 26, 2013 news item on Nanowerk,

Georgia Institute of Technology and Purdue University researchers have developed efficient solar cells using natural substrates derived from plants such as trees. Just as importantly, by fabricating them on cellulose nanocrystal (CNC) substrates, the solar cells can be quickly recycled in water at the end of their lifecycle.

The Georgia Tech Mar. 25, 2013 news release, which originated the news item, provides more detail,

The researchers report that the organic solar cells reach a power conversion efficiency of 2.7 percent, an unprecedented figure for cells on substrates derived from renewable raw materials. The CNC substrates on which the solar cells are fabricated are optically transparent, enabling light to pass through them before being absorbed by a very thin layer of an organic semiconductor. During the recycling process, the solar cells are simply immersed in water at room temperature. Within only minutes, the CNC substrate dissolves and the solar cell can be separated easily into its major components.

Georgia Tech College of Engineering Professor Bernard Kippelen led the study and says his team’s project opens the door for a truly recyclable, sustainable and renewable solar cell technology.

“The development and performance of organic substrates in solar technology continues to improve, providing engineers with a good indication of future applications,” said Kippelen, who is also the director of Georgia Tech’s Center for Organic Photonics and Electronics (COPE). “But organic solar cells must be recyclable. Otherwise we are simply solving one problem, less dependence on fossil fuels, while creating another, a technology that produces energy from renewable sources but is not disposable at the end of its lifecycle.”

To date, organic solar cells have been typically fabricated on glass or plastic. Neither is easily recyclable, and petroleum-based substrates are not very eco-friendly. For instance, if cells fabricated on glass were to break during manufacturing or installation, the useless materials would be difficult to dispose of. Paper substrates are better for the environment, but have shown limited performance because of high surface roughness or porosity. However, cellulose nanomaterials made from wood are green, renewable and sustainable. The substrates have a low surface roughness of only about two nanometers.

“Our next steps will be to work toward improving the power conversion efficiency over 10 percent, levels similar to solar cells fabricated on glass or petroleum-based substrates,” said Kippelen. The group plans to achieve this by optimizing the optical properties of the solar cell’s electrode.
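Since reported power conversion efficiencies can be confusing (more on that below), here’s a quick sketch of the standard calculation behind a figure like ‘2.7 percent’: output power is the product of the open-circuit voltage, the short-circuit current density, and the fill factor, divided by the incident solar power, conventionally 100 mW/cm² under standard AM1.5G illumination. The specific voltage, current, and fill-factor values below are hypothetical, chosen only to land near the reported number; they are not the paper’s measured parameters.

```python
# Standard power-conversion-efficiency (PCE) calculation for a solar cell.
# Input values are hypothetical, picked to land near the 2.7 per cent
# reported for the CNC-substrate cells; see the paper for measured data.

P_IN_MW_PER_CM2 = 100.0   # AM1.5G standard illumination, 100 mW/cm^2

def pce(v_oc, j_sc, fill_factor, p_in=P_IN_MW_PER_CM2):
    """PCE (%) from open-circuit voltage (V), short-circuit current
    density (mA/cm^2), and the dimensionless fill factor."""
    p_out = v_oc * j_sc * fill_factor   # mW/cm^2
    return 100.0 * p_out / p_in

# Hypothetical parameters for an organic cell on a CNC substrate:
print(f"PCE = {pce(v_oc=0.55, j_sc=9.8, fill_factor=0.50):.1f} %")
# -> PCE = 2.7 %
```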

The news release also notes the impact that using cellulose nanomaterials could have economically,

There’s also another positive impact of using natural products to create cellulose nanomaterials. The nation’s forest product industry projects that tens of millions of tons of them could be produced once large-scale production begins, potentially in the next five years.

One might almost suspect that the forest products industry is experiencing financial difficulty.

The researchers’ paper was published by Scientific Reports, an open access journal from the Nature Publishing Group,

Recyclable organic solar cells on cellulose nanocrystal substrates by Yinhua Zhou, Canek Fuentes-Hernandez, Talha M. Khan, Jen-Chieh Liu, James Hsu, Jae Won Shim, Amir Dindar, Jeffrey P. Youngblood, Robert J. Moon, & Bernard Kippelen. Scientific Reports  3, Article number: 1536  doi:10.1038/srep01536 Published 25 March 2013

In closing, the news release notes that a provisional patent has been filed at the US Patent Office. And one final note: I have previously commented on how confusing the reported power conversion rates are. You’ll find a recent comment in my Mar. 8, 2013 posting about Ted Sargent’s work with colloidal quantum dots and solar cells.

Samsung ‘GROs’ graphene-based micro-antennas and a brief bit about the business of nanotechnology

A Feb. 22, 2013 news item on Nanowerk highlights Samsung’s university grant programme, the Global Research Outreach (GRO) programme, which has announced funding for graphene-based micro-antennas,

The Graphene-Enabled Wireless Communication project, one of the award-winning proposals under the Samsung Global Research Outreach (GRO) programme, aims to use graphene antennas to implement wireless communication over very short distances (no more than a centimetre) with high-capacity information transmission (tens or hundreds of gigabits per second). Antennas made of graphene could radiate electromagnetic waves in the terahertz band and would allow for high-speed information transmission. Thanks to the unique properties of this nanomaterial, the new graphene-based antenna technology would also make it possible to manufacture antennas a thousand times smaller than those currently used.

The GRO programme—an annual call for research proposals by the Samsung Advanced Institute of Technology (Seoul, South Korea)—has provided the UPC-led project with US$120,000 in financial support.

The Graphene-Enabled Wireless Communication project is a joint project (from the news item; Note: A link has been removed),

“Graphene-Enabled Wireless Communications” – a proposal submitted by an interdepartmental team based at the Universitat Politècnica de Catalunya, BarcelonaTech (UPC) and the Georgia Institute of Technology (Georgia Tech)—will receive US$120,000 to develop micrometre-scale graphene antennas capable of transmitting information at a high speed over very short distances. The project will be carried out in the coming months.

There’s more about the Graphene-Enabled Wireless Communication project here,

A remarkably promising application of graphene is that of Graphene-enabled Wireless Communications (GWC). GWC advocates the use of graphene-based plasmonic antennas –graphennas, see Fig. 1 [of the project description]– whose plasmonic effects allow them to radiate EM waves in the terahertz band (0.1 – 10 THz). Moreover, preliminary results sustain that this frequency band is up to two orders of magnitude below the optical frequencies at which metallic antennas of the same size resonate, thereby enhancing the transmission range of graphene-based antennas and lowering the requirements on the corresponding transceivers. In short, graphene enables the implementation of nano-antennas just a few micrometers in size that are not doable with traditional metallic materials.

Thanks to both the reduced size and unique radiation capabilities of graphennas, GWC may represent a breakthrough in the ultra-short range communications research area. In this project we will study the application of GWC within the scenario of off-chip communication, which includes communication between different chips of a given device, e.g. a cell phone.

A new term, graphenna, appears to have been coined. The news item goes on to offer more detail about the project and about the number of collaborating institutions,

The first stage of the project, launched in October 2012, focuses on the theoretical foundations of wireless communications over short distances using graphene antennas. In particular, the group is analysing the behaviour of electromagnetic waves in the terahertz band for very short distances, and investigating how coding and modulation schemes can be adapted to achieve high transmission rates while maintaining low power consumption.

The group believes the main benefits of the project in the medium term will derive from its application for internal communication in multicore processors. Processors of this type have a number of sub-processors that share and execute tasks in parallel. The application of wireless communication in this area will make it possible to integrate thousands of sub-processors within a single processor, which is not feasible with current communication systems.

The results of the project will lead to an increase in the computational performance of these devices. This improvement would allow large amounts of data to be processed at very high speed, which would be very useful for streamlining data management at processing centres (“big data”) used, for example, in systems like Facebook and Google. The project, which builds on previous results obtained with the collaboration of the University of Wuppertal in Germany, the Royal Institute of Technology (KTH) in Sweden, and Georgia Tech in the United States, is expected to yield its first results in April 2013.

The project is being carried out by the NaNoNetworking Centre in Catalonia (N3Cat), a network formed at the initiative of researchers with the UPC’s departments of Electronic Engineering and Computer Architecture, together with colleagues at Georgia Tech.
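To make the size-and-frequency claim in the project description concrete, here’s a back-of-envelope sketch (mine, not the project’s). A metallic half-wave dipole in free space resonates at roughly f = c/(2L), so a one-micrometre metallic antenna sits near 150 THz, in the optical/infrared range; the quoted text puts a same-size graphenna up to two orders of magnitude lower, which lands it in the 0.1–10 THz band. The factor of 100 is taken straight from the quote; the exact value depends on graphene’s plasmonic properties.

```python
# Back-of-envelope: resonant frequency of a metallic half-wave dipole
# versus a graphene plasmonic antenna ('graphenna') of the same length.

C = 3.0e8  # speed of light, m/s

def metallic_dipole_freq_hz(length_m):
    """Free-space half-wave dipole: resonance near f = c / (2 * L)."""
    return C / (2.0 * length_m)

def graphenna_freq_hz(length_m, plasmonic_factor=100.0):
    """The project quote puts graphenna resonance up to two orders of
    magnitude below a same-size metallic antenna; the exact factor
    depends on graphene's plasmonic properties and is assumed here."""
    return metallic_dipole_freq_hz(length_m) / plasmonic_factor

L = 1e-6  # a 1-micrometre antenna
print(f"metallic dipole: {metallic_dipole_freq_hz(L) / 1e12:6.1f} THz")  # ~150 THz
print(f"graphenna:       {graphenna_freq_hz(L) / 1e12:6.1f} THz")        # ~1.5 THz
```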

Anyone interested in  Samsung’s GRO programme can find more here,

The SAMSUNG Global Research Outreach (GRO) program, open to leading universities around the world, is Samsung Electronics, Co., Ltd. & related Samsung companies (SAMSUNG)’s annual call for research proposals.

As this Samsung-funded research project is being announced, Dexter Johnson details the business failure of NanoInk in a Feb. 22, 2013 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), Note: Links have been removed,

One of the United States’ first nanotechnology companies, NanoInk, has gone belly up, joining a host of high-profile nanotechnology-based companies that have shuttered their doors in the last 12 months: Konarka, A123 Systems and Ener1.

These other three companies were all tied to the energy markets (solar in the case of Konarka and batteries for both A123 and Ener1), which are typically volatile, with a fair number of shuttered businesses dotting their landscapes. But NanoInk is a venerable old company in comparison to these other three and is more in what could be characterized as the “picks-and-shovels” side of the nanotechnology business, microscopy tools.

Dexter goes on to provide an analysis of the NanoInk situation which makes for some very interesting reading, along with the comments—some feisty, some not—his posting has provoked.

I am juxtaposing the Samsung funding announcement with this mention of Dexter’s piece regarding a ‘nanotechnology’ business failure in an effort to provide some balance between enthusiasm for the research and the realities of developing businesses and products based on that research.

Developing self-powered batteries for pacemakers

Imagine having your chest cracked open every time your pacemaker needs its battery changed. It’s not a pleasant thought, and researchers are working on a number of approaches to change that situation. Scientists from the University of Michigan have presented the results of some preliminary testing of a device that harvests energy from heartbeats (from the Nov. 4, 2012 news release on EurekAlert),

In a preliminary study, researchers tested an energy-harvesting device that uses piezoelectricity — electrical charge generated from motion. The approach is a promising technological solution for pacemakers, because they require only small amounts of power to operate, said M. Amin Karami, Ph.D., lead author of the study and research fellow in the Department of Aerospace Engineering at the University of Michigan in Ann Arbor.

Piezoelectricity might also power other implantable cardiac devices like defibrillators, which also have minimal energy needs, he said.

Today’s pacemakers must be replaced every five to seven years when their batteries run out, which is costly and inconvenient, Karami said.

A University of Michigan at Ann Arbor March 2, 2012 news release provides more technical detail about this energy-harvesting battery, which the researchers had not yet tested at the time,

… A hundredth-of-an-inch thin slice of a special “piezoelectric” ceramic material would essentially catch heartbeat vibrations and briefly expand in response. Piezoelectric materials’ claim to fame is that they can convert mechanical stress (which causes them to expand) into an electric voltage.

Karami and his colleague Daniel Inman, chair of Aerospace Engineering at U-M, have precisely engineered the ceramic layer to a shape that can harvest vibrations across a broad range of frequencies. They also incorporated magnets, whose additional force field can drastically boost the electric signal that results from the vibrations.

The new device could generate 10 microwatts of power, which is about eight times the amount a pacemaker needs to operate, Karami said. It always generates more energy than the pacemaker requires, and it performs at heart rates from 7 to 700 beats per minute. That’s well below and above the normal range.

Karami and Inman originally designed the harvester for light unmanned airplanes, where it could generate power from wing vibrations.

Since March 2012, the researchers have tested the prototype (from the Nov. 4, 2012 news release on EurekAlert),

Researchers measured heartbeat-induced vibrations in the chest. Then, they used a “shaker” to reproduce the vibrations in the laboratory and connected it to a prototype cardiac energy harvester they developed. Measurements of the prototype’s performance, based on sets of 100 simulated heartbeats at various heart rates, showed the energy harvester performed as the scientists had predicted — generating more than 10 times the power than modern pacemakers require. The next step will be implanting the energy harvester, which is about half the size of batteries now used in pacemakers, Karami said. Researchers hope to integrate their technology into commercial pacemakers.
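The numbers quoted in these releases invite a little arithmetic. If the harvester generates about 10 microwatts and that is roughly eight times what a pacemaker needs, the implied pacemaker draw is on the order of 1.25 microwatts, and the 7-to-700-beats-per-minute operating range corresponds to fundamental vibration frequencies of roughly 0.12 to 11.7 Hz. The sketch below simply restates those figures; the implied draw is my inference from the release, not a measured value.

```python
# Sanity-check arithmetic on the figures quoted in the releases.

HARVESTER_POWER_UW = 10.0   # ~10 microwatts generated
POWER_MARGIN = 8.0          # 'about eight times the amount a pacemaker needs'

# Implied pacemaker draw (an inference from the release, not a datum):
pacemaker_draw_uw = HARVESTER_POWER_UW / POWER_MARGIN
print(f"implied pacemaker draw: {pacemaker_draw_uw:.2f} microwatts")

def bpm_to_hz(beats_per_minute):
    """Convert a heart rate to its fundamental vibration frequency."""
    return beats_per_minute / 60.0

for bpm in (7, 60, 700):   # the quoted range, plus a typical resting rate
    print(f"{bpm:3d} bpm -> {bpm_to_hz(bpm):5.2f} Hz")
```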

There are other teams working on energy-harvesting batteries. In my July 12, 2010 posting, I mentioned a team led by Professor Zhong Lin Wang at Georgia Tech (the Georgia Institute of Technology in the US), which is working on batteries that harvest energy from biomechanical motion such as heartbeats, finger tapping, and breathing.

Nanotechnology’s economic impacts and full lifecycle assessments

A paper presented at the International Symposium on Assessing the Economic Impact of Nanotechnology, held March 27 – 28, 2012 in Washington, D.C., advises that assessments of the economic impacts of nanotechnology need to be more inclusive. From the March 28, 2012 news item on Nanowerk,

“Nanotechnology promises to foster green and sustainable growth in many product and process areas,” said Shapira [Philip Shapira], a professor with Georgia Tech’s [US] School of Public Policy and the Manchester Institute of Innovation Research at the Manchester Business School in the United Kingdom. “Although nanotechnology commercialization is still in its early phases, we need now to get a better sense of what markets will grow and how new nanotechnology products will impact sustainability. This includes balancing gains in efficiency and performance against the net energy, environmental, carbon and other costs associated with the production, use and end-of-life disposal or recycling of nanotechnology products.”

But because nanotechnology underlies many different industries, assessing and forecasting its impact won’t be easy. “Compared to information technology and biotechnology, for example, nanotechnology has more of the characteristics of a general technology such as the development of electric power,” said Youtie [Jan Youtie], director of policy research services at Georgia Tech’s Enterprise Innovation Institute. “That makes it difficult to analyze the value of products and processes that are enabled by the technology. We hope that our paper will provide background information and help frame the discussion about making those assessments.”
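Shapira’s point about ‘balancing gains … against the net energy, environmental, carbon and other costs’ is, at bottom, a lifecycle accounting exercise. Here’s a toy sketch of the shape of that calculation for a hypothetical nano-enabled product: sum the impacts over the production, use, and end-of-life phases and compare the total against a conventional alternative. Every figure below is invented for illustration; a real assessment would rest on measured lifecycle-inventory data.

```python
# Toy lifecycle comparison: a hypothetical nano-enabled product versus
# a conventional one. All figures are invented for illustration only.

# Energy cost (MJ) per lifecycle phase.
nano_product = {"production": 120.0, "use": 40.0, "end_of_life": 15.0}
conventional = {"production": 80.0, "use": 110.0, "end_of_life": 10.0}

def lifecycle_total(phases):
    """Net impact = the sum of impacts over all lifecycle phases."""
    return sum(phases.values())

nano_total = lifecycle_total(nano_product)
conv_total = lifecycle_total(conventional)
print(f"nano-enabled: {nano_total:6.1f} MJ")
print(f"conventional: {conv_total:6.1f} MJ")
print(f"net saving:   {conv_total - nano_total:6.1f} MJ "
      "(positive means the nano product wins over its full lifecycle)")
```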

From the March 27, 2012 Georgia Institute of Technology news release,

For their paper, co-authors Shapira and Youtie examined a subset of green nanotechnologies that aim to enable sustainable energy, improve environmental quality, and provide healthy drinking water for areas of the world that now lack it. They argue that the lifecycle of nanotechnology products must be included in the assessment.

I was hoping for a bit more detail about how one would go about including nanotechnology-enabled products in this type of economic impact assessment but this is all I could find (from the news release),

In their paper, Youtie and Shapira cite several examples of green nanotechnology, discuss the potential impacts of the technology, and review forecasts that have been made. Examples of green nanotechnology they cite include:

  • Nano-enabled solar cells that use lower-cost organic materials, as opposed to current photovoltaic technologies that require rare materials such as platinum;
  • Nanogenerators that use piezoelectric materials such as zinc oxide nanowires to convert human movement into energy;
  • Energy storage applications in which nanotechnology materials improve existing batteries and nano-enabled fuel cells;
  • Thermal energy applications, such as nano-enabled insulation;
  • Fuel catalysis in which nanoparticles improve the production and refining of fuels and reduce emissions from automobiles;
  • Technologies used to provide safe drinking water through improved water treatment, desalination and reuse.

I checked both Philip Shapira’s webpage and Jan Youtie’s at Georgia Tech and found that neither yet lists this latest work, which I hope includes additional detail. I’m hopeful there’ll be a document published in the proceedings for this symposium and that access will be possible.

On another note, I did mention this symposium in my Jan. 27, 2012 posting, where I speculated about the Canadian participation. I did get a response (March 5, 2012) from Vanessa Clive, Nanotechnology File, Industry Sector, Industry Canada, who kindly cleared up my confusion,

A colleague forwarded the extract from your blog below. Thank you for your interest in the OECD Working Party on Nanotechnology (WPN) work, and giving some additional public profile to its work is welcome. However, some correction is needed, please, to keep the record straight.

“It’s a lot to infer from a list of speakers but I’m going to do it anyway. Given that the only Canadian listed as an invited speaker for a prestigious (OECD/AAAS/NNI as hosts) symposium about nanotechnology’s economic impacts, is someone strongly associated with NCC, it would seem to confirm that Canadians do have an important R&D (research and development) lead in an area of international interest.

One thing about this symposium does surprise and that’s the absence of Vanessa Clive from Industry Canada. She co-authored the OECD’s 2010 report, The Impacts of Nanotechnology on Companies: Policy Insights from Case Studies and would seem a natural choice as one of the speakers on the economic impacts that nanotechnology might have in the future.”

I am a member of the organizing committee, on the OECD WPN side, for the Washington Symposium in March which will focus on the need and, in turn, options for development of metrics for evaluation of the economic impacts of nano. As committee member, I was actively involved in identifying potential Canadian speakers for agenda slots. Apart from the co-sponsors whose generosity made the event possible, countries were limited to one or two speakers in order to bring in experts from as many interested countries as possible. The second Canadian expert which we had invited to participate had to pull out, unfortunately.

Also, the OECD project on nano impacts on business was co-designed and co-led by me, another colleague here at the time, and our Swiss colleague, but the report itself was written by OECD staff.

I did send a follow-up email (March 5, 2012) with more questions, but I gather time was tight, as I’ve not heard back.

In any event, I’m looking forward to hearing more about this symposium, however that occurs, in the coming weeks and months.