Tag Archives: Georgia Institute of Technology

Roadmap to neuromorphic engineering (digital and analog) for the creation of artificial brains* from the Georgia (US) Institute of Technology

While I didn’t mention neuromorphic engineering in my April 16, 2014 posting, which focused on the more general aspect of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie ‘possible’ (assuming very large stretches of imagination bringing us into the realm of implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,

Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they canbut [sic] if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.

In the film, Caster’s intelligence/consciousness is uploaded to a computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computer or artificial intelligence more humanlike is called neuromorphic engineering and, according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,

In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.

A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.

The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”

An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,

Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.

Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.

In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.

Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”

The 29-page paper by Hasler and her co-author Bo Marr traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.

For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.

“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”

Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.

Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”

Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”

Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain — seems technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”

Here’s a link to and a citation for the roadmap,

Finding a roadmap to achieve large neuromorphic hardware systems by Jennifer Hasler and Bo Marr. Front. Neurosci. (Frontiers in Neuroscience), 10 September 2013. doi: 10.3389/fnins.2013.00118

This is an open access article (at least, the HTML version is).

I have looked at Hasler’s roadmap and it provides a good and readable overview (even for an amateur like me; note: you do need some tolerance for ‘not knowing’) of the state of neuromorphic engineering’s problems, and suggestions for overcoming them. Here’s a description of a human brain and its power requirements as compared to a computer’s (from the roadmap),

One of the amazing things about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there are 10^12 neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10^15 MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10^11 neurons and 10,000 synapses is expected.

Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies, as well as takes advantage of the capabilities of IC technology that are currently visible, it will not be successful.
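The arithmetic in that first excerpt goes by quickly, so here’s a quick back-of-the-envelope check in Python. It’s my own illustration using only the constants quoted above, nothing from the paper itself,

```python
# Back-of-the-envelope check of the brain-power figures quoted above.
# The constants come from the quoted passage; the script is my own
# illustration, not code from Hasler and Marr's paper.

BRAIN_POWER_W = 20.0               # average power of a human brain
NUM_NEURONS = 1e12                 # neuron count used in the quote
TOTAL_MAC_PER_S = 100_000 * 1e15   # 100,000 PMAC (1 PMAC = 1e15 MAC/s)

power_per_neuron_w = BRAIN_POWER_W / NUM_NEURONS
print(f"Power per neuron: {power_per_neuron_w * 1e12:.0f} pW")    # 20 pW

efficiency_mac_per_w = TOTAL_MAC_PER_S / BRAIN_POWER_W
print(f"Efficiency: {efficiency_mac_per_w / 1e15:,.0f} PMAC/W")   # 5,000 PMAC/W
# 5000 PMAC/W is the same figure as the 5 TMAC/uW quoted in the roadmap.
```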

One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),

The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties to get to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering; the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).

I notice that the reference to the University of Michigan work is relatively neutral in tone and that the memristor does not figure substantively in Hasler’s roadmap.

Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),

Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how will the technology developed for these large systems impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20 M in current prices, which although possible for large users, would not be common to be found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into commercial environment allows us to ride this powerful economic “engine” rather than pull.

In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.

The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which have included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality requiring ultra-low power levels. Further, we see in companies like Audience there is some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.

One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.

We expect progress in these neuromorphic systems and that should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at a power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, both in power, area, and communication, one can expect to see more and more of these issues going forward.

We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10^7 to 10^9 neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.

In the following subsections, we overview general guidelines to consider when considering using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.
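To get a feel for what those numbers imply for a single die, here’s a rough calculation of my own; the 10 TMAC/mW efficiency and the roughly three million neurons per die come from the excerpts above, and the per-neuron MAC rate is carried over from the earlier brain-power excerpt,

```python
# Rough per-die power estimate for a hypothetical neuromorphic chip,
# using only figures quoted above. My own illustration, not the paper's.

MAC_PER_S_PER_NEURON = 1e8     # 100,000 PMAC spread across 1e12 neurons
EFFICIENCY_MAC_PER_MW = 10e12  # the "10 TMAC/mW" assumed in the excerpt
NEURONS_PER_DIE = 3.3e6        # ~1e9 neurons over ~300 dies, per the excerpt

compute_power_mw = NEURONS_PER_DIE * MAC_PER_S_PER_NEURON / EFFICIENCY_MAC_PER_MW
print(f"Estimated compute power per die: ~{compute_power_mw:.0f} mW")  # ~33 mW
```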

I have a casual observation to make. While the authors of the roadmap came to this conclusion, “This study concludes that useful neural computation machines based on biological principles at the size of the human brain seems technically within our grasp,” they’re also leaving themselves some wiggle room, because the truth is no one knows if copying a human brain with circuits and various devices will lead to ‘thinking’ as we understand the concept.

For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.

One last observation about the movie ‘Transcendence’, has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?

* Space inserted between ‘brains’ and ‘from’ in head on April 21, 2014.

Mini Lisa made possible by ThermoChemical NanoLithography

One of the world’s most recognizable images has undergone a makeover of sorts. According to an Aug. 6, 2013 news item on Azonano, researchers at the Georgia Institute of Technology (Georgia Tech) in the US have created a mini Mona Lisa,

The world’s most famous painting has now been created on the world’s smallest canvas. Researchers at the Georgia Institute of Technology have “painted” the Mona Lisa on a substrate surface approximately 30 microns in width – or one-third the width of a human hair.

The team’s creation, the “Mini Lisa,” demonstrates a technique that could potentially be used to achieve nanomanufacturing of devices because the team was able to vary the surface concentration of molecules on such short-length scales.

The Aug. 5, 2013 Georgia Tech news release, which originated the news item, provides more technical details,

The image was created with an atomic force microscope and a process called ThermoChemical NanoLithography (TCNL). Going pixel by pixel, the Georgia Tech team positioned a heated cantilever at the substrate surface to create a series of confined nanoscale chemical reactions. By varying only the heat at each location, Ph.D. Candidate Keith Carroll controlled the number of new molecules that were created. The greater the heat, the greater the local concentration. More heat produced the lighter shades of gray, as seen on the Mini Lisa’s forehead and hands. Less heat produced the darker shades in her dress and hair seen when the molecular canvas is visualized using fluorescent dye. Each pixel is spaced by 125 nanometers.

“By tuning the temperature, our team manipulated chemical reactions to yield variations in the molecular concentrations on the nanoscale,” said Jennifer Curtis, an associate professor in the School of Physics and the study’s lead author. “The spatial confinement of these reactions provides the precision required to generate complex chemical images like the Mini Lisa.”

Production of chemical concentration gradients and variations on the sub-micrometer scale are difficult to achieve with other techniques, despite a wide range of applications the process could allow. The Georgia Tech TCNL research collaboration, which includes associate professor Elisa Riedo and Regents Professor Seth Marder, produced chemical gradients of amine groups, but expects that the process could be extended for use with other materials.

“We envision TCNL will be capable of patterning gradients of other physical or chemical properties, such as conductivity of graphene,” Curtis said. “This technique should enable a wide range of previously inaccessible experiments and applications in fields as diverse as nanoelectronics, optoelectronics and bioengineering.”

Another advantage, according to Curtis, is that atomic force microscopes are fairly common and the thermal control is relatively straightforward, making the approach accessible to both academic and industrial laboratories.  To facilitate their vision of nano-manufacturing devices with TCNL, the Georgia Tech team has recently integrated nanoarrays of five thermal cantilevers to accelerate the pace of production. Because the technique provides high spatial resolutions at a speed faster than other existing methods, even with a single cantilever, Curtis is hopeful that TCNL will provide the option of nanoscale printing integrated with the fabrication of large quantities of surfaces or everyday materials whose dimensions are more than one billion times larger than the TCNL features themselves.
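To picture how the pixel-by-pixel writing described above might be driven, here’s a toy sketch in Python. The 125-nanometre pixel pitch comes from the news release; the temperature range and the linear grayscale-to-temperature mapping are placeholders of my own, since the release doesn’t specify them,

```python
# Toy sketch of the pixel-by-pixel TCNL idea described above: map each
# grayscale pixel of a target image to a cantilever temperature, then
# "write" the pixels on a 125 nm grid. The temperature range and linear
# mapping are made-up placeholders; the release gives neither.

import numpy as np

PIXEL_PITCH_NM = 125          # pixel spacing quoted in the news release
T_MIN_C, T_MAX_C = 100, 300   # hypothetical cantilever temperature range

def pixel_temperatures(gray: np.ndarray) -> np.ndarray:
    """Map 0..255 grayscale to tip temperatures: hotter -> lighter shade."""
    return T_MIN_C + (gray / 255.0) * (T_MAX_C - T_MIN_C)

# A made-up 4x4 test pattern stands in for the real Mona Lisa scan:
image = np.array([[0, 64, 128, 255]] * 4, dtype=float)
temps = pixel_temperatures(image)
for (row, col), t in np.ndenumerate(temps):
    x_nm, y_nm = col * PIXEL_PITCH_NM, row * PIXEL_PITCH_NM
    # A real controller would position the heated tip at (x_nm, y_nm) here.
    print(f"pixel at ({x_nm:4d} nm, {y_nm:4d} nm): hold tip at {t:5.1f} C")
```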

Here’s an image of the AFM and the cantilever used in the TCNL process to create the ‘Mini Lisa’,

Atomic force microscope (AFM) modified with a thermal cantilever. The AFM scanner allows for precise positioning on the nanoscale while the thermal cantilever induces local nanoscale chemical reactions. Courtesy Georgia Tech

Finally, the ‘Mini Lisa’,

Georgia Tech researchers have created the “Mini Lisa” on a substrate surface approximately 30 microns in width. The image demonstrates a technique that could potentially be used to achieve nano-manufacturing of devices because the team was able to vary the surface concentration of molecules on such short length scales. Courtesy Georgia Tech

For those who can’t get enough of the ‘Mini Lisa’ or TCNL, here’s a link to and a citation for the research team’s published paper,

Fabricating Nanoscale Chemical Gradients with ThermoChemical NanoLithography by Keith M. Carroll, Anthony J. Giordano, Debin Wang, Vamsi K. Kodali, Jan Scrimgeour, William P. King, Seth R. Marder, Elisa Riedo, and Jennifer E. Curtis. Langmuir, 2013, 29 (27), pp. 8675–8682. doi: 10.1021/la400996w. Published online June 10, 2013.

This article is behind a paywall.

Solar cells made even more leaflike with inclusion of nanocellulose fibers

Researchers at the US Georgia Institute of Technology (Georgia Tech) and Purdue University (Indiana) have used cellulose nanocrystals (CNC), also known as nanocrystalline cellulose (NCC), to create solar cells that have greater efficiency and can be recycled. From the Mar. 26, 2013 news item on Nanowerk,

Georgia Institute of Technology and Purdue University researchers have developed efficient solar cells using natural substrates derived from plants such as trees. Just as importantly, by fabricating them on cellulose nanocrystal (CNC) substrates, the solar cells can be quickly recycled in water at the end of their lifecycle.

The Georgia Tech Mar. 25, 2013 news release, which originated the news item, provides more detail,

The researchers report that the organic solar cells reach a power conversion efficiency of 2.7 percent, an unprecedented figure for cells on substrates derived from renewable raw materials. The CNC substrates on which the solar cells are fabricated are optically transparent, enabling light to pass through them before being absorbed by a very thin layer of an organic semiconductor. During the recycling process, the solar cells are simply immersed in water at room temperature. Within only minutes, the CNC substrate dissolves and the solar cell can be separated easily into its major components.

Georgia Tech College of Engineering Professor Bernard Kippelen led the study and says his team’s project opens the door for a truly recyclable, sustainable and renewable solar cell technology.

“The development and performance of organic substrates in solar technology continues to improve, providing engineers with a good indication of future applications,” said Kippelen, who is also the director of Georgia Tech’s Center for Organic Photonics and Electronics (COPE). “But organic solar cells must be recyclable. Otherwise we are simply solving one problem, less dependence on fossil fuels, while creating another, a technology that produces energy from renewable sources but is not disposable at the end of its lifecycle.”

To date, organic solar cells have been typically fabricated on glass or plastic. Neither is easily recyclable, and petroleum-based substrates are not very eco-friendly. For instance, if cells fabricated on glass were to break during manufacturing or installation, the useless materials would be difficult to dispose of. Paper substrates are better for the environment, but have shown limited performance because of high surface roughness or porosity. However, cellulose nanomaterials made from wood are green, renewable and sustainable. The substrates have a low surface roughness of only about two nanometers.

“Our next steps will be to work toward improving the power conversion efficiency over 10 percent, levels similar to solar cells fabricated on glass or petroleum-based substrates,” said Kippelen. The group plans to achieve this by optimizing the optical properties of the solar cell’s electrode.

The news release also notes the impact that using cellulose nanomaterials could have economically,

There’s also another positive impact of using natural products to create cellulose nanomaterials. The nation’s forest product industry projects that tens of millions of tons of them could be produced once large-scale production begins, potentially in the next five years.

One might almost suspect that the forest products industry is experiencing financial difficulty.

The researchers’ paper was published by Scientific Reports, an open access journal from the Nature Publishing Group,

Recyclable organic solar cells on cellulose nanocrystal substrates by Yinhua Zhou, Canek Fuentes-Hernandez, Talha M. Khan, Jen-Chieh Liu, James Hsu, Jae Won Shim, Amir Dindar, Jeffrey P. Youngblood, Robert J. Moon, & Bernard Kippelen. Scientific Reports 3, Article number: 1536. doi:10.1038/srep01536. Published 25 March 2013.

In closing, the news release notes that a provisional patent has been filed at the US Patent Office. And one final note: I have previously commented on how confusing reported power conversion rates can be. You’ll find a recent comment in my Mar. 8, 2013 posting about Ted Sargent’s work with colloidal quantum dots and solar cells.

Samsung ‘GROs’ graphene-based micro-antennas and a brief bit about the business of nanotechnology

A Feb. 22, 2013 news item on Nanowerk highlights a Samsung university grant programme (GRO) that has announced funding for graphene-based micro-antennas,

The Graphene-Enabled Wireless Communication project, one of the award-winning proposals under the Samsung Global Research Outreach (GRO) programme, aims to use graphene antennas to implement wireless communication over very short distances (no more than a centimetre) with high-capacity information transmission (tens or hundreds of gigabits per second). Antennas made of graphene could radiate electromagnetic waves in the terahertz band and would allow for high-speed information transmission. Thanks to the unique properties of this nanomaterial, the new graphene-based antenna technology would also make it possible to manufacture antennas a thousand times smaller than those currently used.

The GRO programme—an annual call for research proposals by the Samsung Advanced Institute of Technology (Seoul, South Korea)—has provided the UPC-led project with US$120,000 in financial support.

The Graphene-Enabled Wireless Communication project is a joint project (from the news item; Note: A link has been removed),

“Graphene-Enabled Wireless Communications” — a proposal submitted by an interdepartmental team based at the Universitat Politècnica de Catalunya, BarcelonaTech (UPC) and the Georgia Institute of Technology (Georgia Tech) — will receive US$120,000 to develop micrometre-scale graphene antennas capable of transmitting information at a high speed over very short distances. The project will be carried out in the coming months.

There’s more about the Graphene-Enabled Wireless Communication project here,

A remarkably promising application of graphene is that of Graphene-enabled Wireless Communications (GWC). GWC advocates the use of graphene-based plasmonic antennas (graphennas, see Fig. 1) whose plasmonic effects allow them to radiate EM waves in the terahertz band (0.1–10 THz). Moreover, preliminary results sustain that this frequency band is up to two orders of magnitude below the optical frequencies at which metallic antennas of the same size resonate, thereby enhancing the transmission range of graphene-based antennas and lowering the requirements on the corresponding transceivers. In short, graphene enables the implementation of nano-antennas just a few micrometers in size that are not doable with traditional metallic materials.

Thanks to both the reduced size and unique radiation capabilities of graphennas, GWC may represent a breakthrough in the ultra-short range communications research area. In this project we will study the application of GWC within the scenario of off-chip communication, which includes communication between different chips of a given device, e.g. a cell phone.

A new term, graphenna, appears to have been coined. The news item goes on to offer more detail about the project and about the number of collaborating institutions,

The first stage of the project, launched in October 2012, focuses on the theoretical foundations of wireless communications over short distances using graphene antennas. In particular, the group is analysing the behaviour of electromagnetic waves in the terahertz band for very short distances, and investigating how coding and modulation schemes can be adapted to achieve high transmission rates while maintaining low power consumption.

The group believes the main benefits of the project in the medium term will derive from its application for internal communication in multicore processors. Processors of this type have a number of sub-processors that share and execute tasks in parallel. The application of wireless communication in this area will make it possible to integrate thousands of sub-processors within a single processor, which is not feasible with current communication systems.

The results of the project will lead to an increase in the computational performance of these devices. This improvement would allow large amounts of data to be processed at very high speed, which would be very useful for streamlining data management at processing centres (“big data”) used, for example, in systems like Facebook and Google. The project, which builds on previous results obtained with the collaboration of the University of Wuppertal in Germany, the Royal Institute of Technology (KTH) in Sweden, and Georgia Tech in the United States, is expected to yield its first results in April 2013.

The project is being carried out by the NaNoNetworking Centre in Catalonia (N3Cat), a network formed at the initiative of researchers with the UPC’s departments of Electronic Engineering and Computer Architecture, together with colleagues at Georgia Tech.
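Out of curiosity, here’s the arithmetic that seems to sit behind the micrometre-scale antenna claim: the half-wave dipole rule of thumb for a metallic antenna, followed by the roughly hundredfold plasmonic size reduction mentioned in the project description. This sketch is mine, not the project’s model,

```python
# Quick size estimate behind the "micrometre-scale antenna" claim above.
# Half-wave dipole rule for a metallic antenna, then the ~100x plasmonic
# size reduction quoted ("two orders of magnitude"). My own arithmetic.

C = 3.0e8   # speed of light, m/s

def half_wave_length_m(freq_hz: float) -> float:
    """Half-wavelength size of a resonant metallic dipole antenna."""
    return C / (2 * freq_hz)

for f_thz in (0.1, 1.0, 10.0):                  # the quoted 0.1-10 THz band
    metallic_m = half_wave_length_m(f_thz * 1e12)
    graphenna_m = metallic_m / 100              # quoted ~100x size reduction
    print(f"{f_thz:5.1f} THz: metallic ~{metallic_m * 1e6:7.1f} um, "
          f"graphenna ~{graphenna_m * 1e6:5.2f} um")
```

At 1 THz this lands a graphenna at roughly 1.5 micrometres, which is consistent with the “just a few micrometers in size” claim in the quoted description.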

Anyone interested in Samsung’s GRO programme can find more here,

The SAMSUNG Global Research Outreach (GRO) program, open to leading universities around the world, is Samsung Electronics, Co., Ltd. & related Samsung companies (SAMSUNG)’s annual call for research proposals.

As this Samsung-funded research project is being announced, Dexter Johnson details the business failure of NanoInk in a Feb. 22, 2013 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), Note: Links have been removed,

One of the United States’ first nanotechnology companies, NanoInk, has gone belly up, joining a host of high-profile nanotechnology-based companies that have shuttered their doors in the last 12 months: Konarka, A123 Systems and Ener1.

These other three companies were all tied to the energy markets (solar in the case of Konarka and batteries for both A123 and Ener1), which are typically volatile, with a fair number of shuttered businesses dotting their landscapes. But NanoInk is a venerable old company in comparison to these other three and is more in what could be characterized as the “picks-and-shovels” side of the nanotechnology business, microscopy tools.

Dexter goes on to provide an analysis of the NanoInk situation which makes for some very interesting reading, along with the comments—some feisty, some not—his posting has provoked.

I am juxtaposing the Samsung funding announcement with this mention of Dexter’s piece regarding a ‘nanotechnology’ business failure in an effort to provide some balance between enthusiasm for the research and the realities of developing businesses and products based on that research.

Developing self-powered batteries for pacemakers

Imagine having your chest cracked open every time your pacemaker needs its battery changed. It’s not a pleasant thought, and researchers are working on a number of approaches to change that situation. Scientists from the University of Michigan have presented results from preliminary testing of a device that harvests energy from heartbeats (from the Nov. 4, 2012 news release on EurekAlert),

In a preliminary study, researchers tested an energy-harvesting device that uses piezoelectricity — electrical charge generated from motion. The approach is a promising technological solution for pacemakers, because they require only small amounts of power to operate, said M. Amin Karami, Ph.D., lead author of the study and research fellow in the Department of Aerospace Engineering at the University of Michigan in Ann Arbor.

Piezoelectricity might also power other implantable cardiac devices like defibrillators, which also have minimal energy needs, he said.

Today’s pacemakers must be replaced every five to seven years when their batteries run out, which is costly and inconvenient, Karami said.

A University of Michigan at Ann Arbor March 2, 2012 news release provides more technical detail about this energy-harvesting battery, which the researchers had not yet tested at that point,

… A hundredth-of-an-inch thin slice of a special “piezoelectric” ceramic material would essentially catch heartbeat vibrations and briefly expand in response. Piezoelectric materials’ claim to fame is that they can convert mechanical stress (which causes them to expand) into an electric voltage.

Karami and his colleague Daniel Inman, chair of Aerospace Engineering at U-M, have precisely engineered the ceramic layer to a shape that can harvest vibrations across a broad range of frequencies. They also incorporated magnets, whose additional force field can drastically boost the electric signal that results from the vibrations.

The new device could generate 10 microwatts of power, which is about eight times the amount a pacemaker needs to operate, Karami said. It always generates more energy than the pacemaker requires, and it performs at heart rates from 7 to 700 beats per minute. That’s well below and above the normal range.

Karami and Inman originally designed the harvester for light unmanned airplanes, where it could generate power from wing vibrations.

Since March 2012, the researchers have tested the prototype (from the Nov. 4, 2012 news release on EurekAlert),

Researchers measured heartbeat-induced vibrations in the chest. Then, they used a “shaker” to reproduce the vibrations in the laboratory and connected it to a prototype cardiac energy harvester they developed. Measurements of the prototype’s performance, based on sets of 100 simulated heartbeats at various heart rates, showed the energy harvester performed as the scientists had predicted — generating more than 10 times the power that modern pacemakers require. The next step will be implanting the energy harvester, which is about half the size of batteries now used in pacemakers, Karami said. Researchers hope to integrate their technology into commercial pacemakers.
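As an aside, the pacemaker’s implied power draw can be back-calculated from the quoted figures; the division below is my own arithmetic, not a measured number,

```python
# Back-calculating the pacemaker's power draw from the figures above:
# the harvester produces 10 microwatts, described as about eight times
# what a pacemaker needs. My own arithmetic, not a measured figure.

HARVESTED_UW = 10.0   # microwatts generated by the prototype design
MARGIN = 8.0          # "about eight times the amount a pacemaker needs"

print(f"Implied pacemaker draw: ~{HARVESTED_UW / MARGIN:.2f} uW")  # ~1.25 uW
```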

There are other teams working on energy-harvesting batteries; in my July 12, 2010 posting I mentioned a team led by Professor Zhong Lin Wang at Georgia Tech (the Georgia Institute of Technology in the US), which is working on batteries that harvest energy from biomechanical motion such as heartbeats, finger tapping, breathing, etc.

Fish and Chips: Singapore style and Australia style

A*STAR’s Institute of Bioengineering and Nanotechnology (IBN), located in Singapore, has announced a new platform for testing drug applications. From the April 4, 2012 news item on Nanowerk,

A cheaper, faster and more efficient platform for preclinical drug discovery applications has been invented by scientists at the Institute of Bioengineering and Nanotechnology (IBN), the world’s first bioengineering and nanotechnology research institute. Called ‘Fish and Chips’, the novel multi-channel microfluidic perfusion platform can grow and monitor the development of various tissues and organs inside zebrafish embryos for drug toxicity testing. This research, published recently in Lab on a Chip (“Fish and Chips: a microfluidic perfusion platform for monitoring zebrafish development”) …

From the IBN April 4, 2012 media release,

The conventional way of visualizing tissues and organs in embryos is a laborious process, which includes first mounting the embryos in a viscous medium such as gel, and then manually orienting the embryos using fine needles. The embryos also need to be anesthetized to restrict their motion and a drop of saline needs to be continuously applied to prevent the embryos from drying. These additional precautions could further complicate the drug testing results.

The IBN ‘Fish and Chips’ has been designed for dynamic long-term culturing and live imaging of the zebrafish embryos. The microfluidic platform comprises three parts: 1) a row of eight fish tanks, in which the embryos are placed and covered with an oxygen permeable membrane, 2) a fluidic concentration gradient generator to dispense the growth medium and drugs, and 3) eight output channels for the removal of the waste products (see Image 2). The novelty of the ‘Fish and Chips’ lies in its unique diagonal flow architecture, which allows the embryos to be continually submerged in a uniform and consistent flow of growth medium and drugs (…), and the attached gradient generator, which can dispense different concentrations of drugs to eight different embryos at the same time for dose-dependent drug studies.

Professor Hanry Yu, IBN Group Leader, who led the research efforts at IBN, said, “Toxicity is a major cause of drug failures in clinical trials and our novel ‘Fish and Chips’ device can be used as the first step in drug screening during the preclinical phase to complement existing animal models and improve toxicity testing. The design of our platform can also be modified to accommodate more zebrafish embryos, as well as the embryos of other animal models. Our next step will involve investigating cardiotoxicity and hepatoxicity on the chip.”
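For anyone wondering what the gradient generator part of the design does, here’s a toy model. Tree-style microfluidic gradient generators often approximate a linear ramp across their outlets, but the linearity, and the stock concentration, are my assumptions; the media release doesn’t specify the profile,

```python
# Toy model of the 8-channel concentration gradient generator described
# above. The linear ramp and the stock concentration are my assumptions;
# the IBN media release does not specify the actual profile.

STOCK_CONC_UM = 100.0   # hypothetical drug stock concentration, micromolar
N_CHANNELS = 8          # one outlet per "fish tank" in the quoted design

concentrations = [STOCK_CONC_UM * i / (N_CHANNELS - 1) for i in range(N_CHANNELS)]
for channel, conc in enumerate(concentrations, start=1):
    print(f"fish tank {channel}: {conc:6.2f} uM")
```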

As a pragmatist I realize that, to date, we have no substitute for testing drugs on animals prior to human clinical trials, so this type of platform is necessary, but it always gives me pause. Just as the relationship between humans and animals did the first time I came across a ‘Fish and Chips’ project, in the context of a performance at the 2001 Ars Electronica event in Linz, Austria. As I recall, Fish and Chips was made up of fish neurons grown on silicon chips, then hooked up to hardware and software to create a performance both visual and auditory.

Here’s an image of the 2001 Fish and Chips performance at Ars Electronica,

Ars Electronica Festival 2001: Fish & Chips / SymbioticA Research Group, Oron Catts, Ionat Zurr, Guy Ben-Ary

You can find a full size version of the image here on Flickr along with the Creative Commons Licence.

The Fish and Chips performance was developed at SymbioticA (University of Western Australia). From SymbioticA’s Research page,

SymbioticA is a research facility dedicated to artistic inquiry into knowledge and technology in the life sciences.

Our research embodies:

  • identifying and developing new materials and subjects for artistic manipulation
  • researching strategies and implications of presenting living-art in different contexts
  • developing technologies and protocols as artistic tool kits.

Having access to scientific laboratories and tools, SymbioticA is in a unique position to offer these resources for artistic research. Therefore, SymbioticA encourages and favours research projects that involve hands on development of technical skills and the use of scientific tools.

The research undertaken at SymbioticA is speculative in nature. SymbioticA strives to support non-utilitarian, curiosity based and philosophically motivated research.

They list six research areas:

  • Art and biology
  • Art and ecology
  • Bioethics
  • Neuroscience
  • Tissue engineering
  • Sleep science

SymbioticA’s Fish and Chips project has since been retitled MEART, from the SymbioticA Research Group (SARG) page,

Meart – The semi-living artist

The project was originally entitled Fish and Chips and later evolved into MEART – the semi-living artist. The project is by the SymbioticA Research Group in collaboration with the Potter Lab.

The Potter Lab or Potter Group is located at the Georgia (US) Institute of Technology. Here’s some more information about MEART from the Potter Group MEART page,

The Semi living artist

Its ‘brain’ of dissociated rat neurons is cultured on an MEA [multielectrode array] in our lab in Atlanta while the geographically detached ‘body’ lives in Perth. The body itself is a set of pneumatically actuated robotic arms moving pens on a piece of paper …

A camera located above the workspace captures the progress of drawings created by the neurally-controlled movement of the arms. The visual data then instructed stimulation frequencies for the 60 electrodes on the MEA.

The brain and body talk through the internet over TCP/IP in real time providing closed loop communication for a neurally controlled ‘semi-living artist’. We see this as a medium from which to address various scientific, philosophical, and artistic questions.
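Purely for illustration, here’s what that closed loop might look like in skeletal form. Every function name and the brightness-to-frequency mapping are my inventions; none of this is actual MEART or Potter Lab code,

```python
# Illustrative skeleton of the closed loop described above: camera frames
# of the drawing become stimulation frequencies for the 60 MEA electrodes,
# and the evoked neural activity drives the arms. All names and the
# brightness-to-frequency mapping are invented for this sketch.

import random

N_ELECTRODES = 60   # electrode count on the MEA, from the quote

def capture_drawing_brightness() -> list[float]:
    """Stand-in for the camera: one 0..1 brightness value per electrode region."""
    return [random.random() for _ in range(N_ELECTRODES)]

def brightness_to_stim_hz(brightness: float) -> float:
    """Invented mapping: darker (less finished) regions -> faster stimulation."""
    return 1.0 + (1.0 - brightness) * 9.0   # 1-10 Hz, made-up range

def closed_loop_step() -> list[float]:
    frame = capture_drawing_brightness()
    stim_frequencies = [brightness_to_stim_hz(b) for b in frame]
    # In MEART, these frequencies travel to Atlanta over TCP/IP, and the
    # neural activity they evoke comes back to drive the arms in Perth.
    return stim_frequencies

print([f"{hz:.1f}" for hz in closed_loop_step()[:5]])
```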

Getting back to SymbioticA, my most recent mention of it was in a Dec. 28, 2011 posting about Transjuicer, an installation by Boo Chapple (a resident at SymbioticA) at Dublin’s Science Gallery (I’ve excerpted a portion of an interview in which Chapple describes what she’s doing),

I’m not sure that Transjuicer is so much about science as it is about belief, the economy of human-animal relations, and the politics of material transformation.

On that note I leave you with these fish and chips (from the Wikipedia essay about the menu item Fish and Chips),

Cod and chips in Horseshoe Bay, B.C., Canada, December 2010. Credit: Robin Miller

Nanotechnology’s economic impacts and full lifecycle assessments

A paper presented at the International Symposium on Assessing the Economic Impact of Nanotechnology, held March 27–28, 2012 in Washington, D.C., advises that assessments of the economic impacts of nanotechnology need to be more inclusive. From the March 28, 2012 news item on Nanowerk,

“Nanotechnology promises to foster green and sustainable growth in many product and process areas,” said Shapira [Philip Shapira], a professor with Georgia Tech’s [US]  School of Public Policy and the Manchester Institute of Innovation Research at the Manchester Business School in the United Kingdom. “Although nanotechnology commercialization is still in its early phases, we need now to get a better sense of what markets will grow and how new nanotechnology products will impact sustainability. This includes balancing gains in efficiency and performance against the net energy, environmental, carbon and other costs associated with the production, use and end-of-life disposal or recycling of nanotechnology products.”

But because nanotechnology underlies many different industries, assessing and forecasting its impact won’t be easy. “Compared to information technology and biotechnology, for example, nanotechnology has more of the characteristics of a general technology such as the development of electric power,” said Youtie [Jan Youtie], director of policy research services at Georgia Tech’s Enterprise Innovation Institute. “That makes it difficult to analyze the value of products and processes that are enabled by the technology. We hope that our paper will provide background information and help frame the discussion about making those assessments.”

From the March 27, 2012 Georgia Institute of Technology news release,

For their paper, co-authors Shapira and Youtie examined a subset of green nanotechnologies that aim to enable sustainable energy, improve environmental quality, and provide healthy drinking water for areas of the world that now lack it. They argue that the lifecycle of nanotechnology products must be included in the assessment.

I was hoping for a bit more detail about how one would go about including nanotechnology-enabled products in this type of economic impact assessment but this is all I could find (from the news release),

In their paper, Youtie and Shapira cite several examples of green nanotechnology, discuss the potential impacts of the technology, and review forecasts that have been made. Examples of green nanotechnology they cite include:

  • Nano-enabled solar cells that use lower-cost organic materials, as opposed to current photovoltaic technologies that require rare materials such as platinum;
  • Nanogenerators that use piezoelectric materials such as zinc oxide nanowires to convert human movement into energy;
  • Energy storage applications in which nanotechnology materials improve existing batteries and nano-enabled fuel cells;
  • Thermal energy applications, such as nano-enabled insulation;
  • Fuel catalysis in which nanoparticles improve the production and refining of fuels and reduce emissions from automobiles;
  • Technologies used to provide safe drinking water through improved water treatment, desalination and reuse.

I checked both Philip Shapira‘s webpage and Jan Youtie‘s at Georgia Tech and found that neither lists this latest work, which I hope includes additional detail. Perhaps a document will be published in the proceedings for this symposium and access will be possible.

On another note, I did mention this symposium in my Jan. 27, 2012 posting, where I speculated about the Canadian participation. I did get a response (March 5, 2012) from Vanessa Clive, Nanotechnology File, Industry Sector, Industry Canada, who kindly cleared up my confusion,

A colleague forwarded the extract from your blog below. Thank you for your interest in the OECD Working Party on Nanotechnology (WPN) work, and giving some additional public profile to its work is welcome. However, some correction is needed, please, to keep the record straight.

“It’s a lot to infer from a list of speakers but I’m going to do it anyway. Given that the only Canadian listed as an invited speaker for a prestigious (OECD/AAAS/NNI as hosts) symposium about nanotechnology’s economic impacts, is someone strongly associated with NCC, it would seem to confirm that Canadians do have an important R&D (research and development) lead in an area of international interest.

One thing about this symposium does surprise and that’s the absence of Vanessa Clive from Industry Canada. She co-authored the OECD’s 2010 report, The Impacts of Nanotechnology on Companies: Policy Insights from Case Studies and would seem a natural choice as one of the speakers on the economic impacts that nanotechnology might have in the future.”

I am a member of the organizing committee, on the OECD WPN side, for the Washington Symposium in March which will focus on the need and, in turn, options for development of metrics for evaluation of the economic impacts of nano. As committee member, I was actively involved in identifying potential Canadian speakers for agenda slots. Apart from the co-sponsors whose generosity made the event possible, countries were limited to one or two speakers in order to bring in experts from as many interested countries as possible. The second Canadian expert which we had invited to participate had to pull out, unfortunately.

Also, the OECD project on nano impacts on business was co-designed and co-led by me, another colleague here at the time, and our Swiss colleague, but the report itself was written by OECD staff.

I did send a followup email (March 5, 2012) with more questions, but I gather time was tight, as I’ve not heard back.

In any event, I’m looking forward to hearing more about this symposium, however that occurs, in the coming weeks and months.

Microneedles from Tufts University

Here’s some very exciting news from Tufts University in a Dec. 21, 2011 news item on Nanowerk,

Bioengineers at Tufts University School of Engineering have developed a new silk-based microneedle system able to deliver precise amounts of drugs over time and without need for refrigeration. The tiny needles can be fabricated under normal temperature and pressure and from water, so they can be loaded with sensitive biochemical compounds and maintain their activity prior to use. They are also biodegradable and biocompatible.

I have previously written about a microneedle project at the Georgia Institute of Technology in a Nov. 9, 2011 posting and about Mark Kendall’s nano vaccine patch on more than one occasion, most recently in my Aug. 3, 2011 posting.

This new drug delivery project surprised me; I didn’t realize that horseradish could also be a drug,

The Tufts researchers successfully demonstrated the ability of the silk microneedles to deliver a large-molecule, enzymatic model drug, horseradish peroxidase (HRP), at controlled rates while maintaining bioactivity. In addition, silk microneedles loaded with tetracycline were found to inhibit the growth of Staphylococcus aureus, demonstrating the potential of the microneedles to prevent local infections while also delivering therapeutics.

“By adjusting the post-processing conditions of the silk protein and varying the drying time of the silk protein, we were able to precisely control the drug release rates in laboratory experiments,” said Fiorenzo Omenetto, Ph.D., senior author on the paper. “The new system addresses long-standing drug delivery challenges, and we believe that the technology could also be applied to other biological storage applications.”
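To picture what tuning release rates by varying drying time could mean in practice, here’s a toy first-order release model. The exponential form and the rate constants standing in for different drying times are my assumptions, not the Tufts group’s measured kinetics,

```python
# Toy first-order release model for the "tunable release rate" idea in
# the quote. The exponential form and the rate constants (standing in
# for different silk drying times) are illustrative assumptions, not
# the Tufts group's measured kinetics.

import math

def released_fraction(t_hours: float, k_per_hour: float) -> float:
    """Fraction of loaded drug released after t hours, first-order model."""
    return 1.0 - math.exp(-k_per_hour * t_hours)

# Hypothetical: longer drying -> denser silk -> slower release (smaller k).
for label, k in [("short drying", 0.30), ("long drying", 0.05)]:
    profile = [released_fraction(t, k) for t in (1, 6, 24)]
    print(label, [f"{f:.0%}" for f in profile])
```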

If we’re all lucky, it won’t be too long before syringes are a museum item and we’ll be getting our medication with far less discomfort/pain and, in some cases, fear.

Petman and lifelike movement

Thanks to the Nov. 7, 2011 posting on the Foresight Institute blog, I’ve found Petman,

Last month we noted the impressive progress achieved by Boston Dynamics’ AlphaDog project to develop a robot “pack animal” for the US military. Apparently there has been equally impressive progress in developing a humanoid robot capable of faithfully mimicking human movements to test protective suits for use by the military, and ultimately, to replace humans in a variety of arduous and dangerous tasks. This month IEEE Spectrum gave us this update: “Stunning Video of PETMAN Humanoid Robot From Boston Dynamics”, by Erico Guizzo.

I have written about Boston Dynamics and its military robots before, most recently about Big Dog in my Feb. 2, 2010 posting [scroll down a paragraph or two]. It’s amazing to see how much smoother the movement has become, although I notice that the robot is tethered. From the Oct. 31, 2011 IEEE Spectrum article by Erico Guizzo,

It can walk, squat, kneel, and even do push-ups.

PETMAN is an adult-sized humanoid robot developed by Boston Dynamics, the robotics firm best known for the BigDog quadruped.

Today, the company is unveiling footage of the robot’s latest capabilities. It’s stunning.

The humanoid, which will certainly be compared to the Terminator Series 800 model, can perform various movements and maintain its balance much like a real person.

Boston Dynamics is building PETMAN, short for Protection Ensemble Test Mannequin, for the U.S. Army, which plans to use the robot to test chemical suits and other protective gear used by troops. It has to be capable of moving just like a soldier — walking, running, bending, reaching, army crawling — to test the suit’s durability in a full range of motion.

Marc Raibert, the founder and president of Boston Dynamics, tells me that the biggest challenge was to engineer the robot, which uses a hydraulic actuation system, to have the approximate size of a person. “There was a great deal of mechanical design we had to do to get everything to fit,” he says.

The Guizzo article features a number of images and a video demonstrating Petman’s abilities along with more details about the robot’s full capabilities. I went on YouTube to find this Petman mashup,

The Japanese have featured some robots that look like and dance like people, as I noted in my Oct. 18, 2010 posting, where I also discussed the ‘uncanny valley’ in relation to those robots. Keeping on the ‘humanoid’ robot theme, I also posted about Geminoid robots in the context of a Danish philosopher who commissioned, for a philosophy project, a Geminoid that looked like himself and whose facial features are expressive. In that same posting, March 10, 2011, I wrote about some work at the Georgia Institute of Technology (US) where they too are developing robots that move like humans. The March 2011 posting features more information about the ‘uncanny valley’, including a diagram.

I wonder what it will be like to encounter one of these humanoid robots in the flesh, as it were.

Micro needle patches project gets Grand Challenges Explorations grant

The project being funded with a Grand Challenges Explorations grant (from the Bill & Melinda Gates Foundation) reminds me a lot of the nanopatch that Mark Kendall and his team have been developing in Australia (a project last mentioned in my Aug. 3, 2011 posting). This new initiative comes from the Georgia Institute of Technology and is aimed at the eradication of polio. From the Nov. 7, 2011 news item on Nanowerk,

The Georgia Institute of Technology will receive funding through Grand Challenges Explorations, an initiative created by the Bill & Melinda Gates Foundation that enables researchers worldwide to test unorthodox ideas that address persistent health and development challenges. Mark Prausnitz, Regents’ professor in Georgia Tech’s School of Chemical and Biomolecular Engineering, will pursue an innovative global health research project focused on using microneedle patches for the low-cost administration of polio vaccine through the skin in collaboration with researchers Steve Oberste and Mark Pallansch of the US Centers for Disease Control and Prevention (CDC).

The goal of the Georgia Tech/CDC project is to demonstrate the scientific and economic feasibility for using microneedle patches in vaccination programs aimed at eradicating the polio virus. Current vaccination programs use an oral polio vaccine that contains a modified live virus. This vaccine is inexpensive and can be administered in door-to-door immunization campaigns, but in rare cases the vaccine can cause polio. There is an alternative injected vaccine that uses killed virus, which carries no risk of polio transmission, but is considerably more expensive than the oral vaccine, requires refrigeration for storage and must be administered by trained personnel. To eradicate polio from the world, health officials will have to discontinue use of the oral vaccine with its live virus, replacing it with the more expensive and logistically-complicated injected vaccine.

Prausnitz and his CDC collaborators believe the use of microneedle patches could reduce the cost and simplify administration of the injected vaccine.

I wonder if this team, working at the microscale rather than the nanoscale as Kendall’s team does, is finding some of the same benefits. From my August 3, 2011 posting,

Early stage testing in animals so far has shown a Nanopatch-delivered flu vaccine is effective with only 1/150th of the dose compared to a syringe and the adjuvants currently required to boost the immunogenicity of vaccines may not be needed. [emphases mine]

I find the notion that only 1/150th of a standard syringe dosage can be effective quite extraordinary. I wonder if this will hold true in human clinical trials.

If they get similar efficiencies at the microscale as they do at the nanoscale, the expense associated with vaccines using killed viruses should plummet dramatically. I do have one thought: do we have to eradicate the polio virus in a ‘search and destroy mission’? Couldn’t we learn to live with it peacefully while discouraging its noxious effects on our own biology?