Category Archives: electronics

Move over laser—the graphene/carbon nanotube spaser is here, on your t-shirt

This graphene/carbon nanotube research comes from Australia, according to an April 16, 2014 news item on Nanowerk,

A team of researchers from Monash University’s [Australia] Department of Electrical and Computer Systems Engineering (ECSE) has modelled the world’s first spaser …

An April 16, 2014 Monash University news release, which originated the news item, describes the spaser and its relationship to lasers,

A new version of “spaser” technology being investigated could mean that mobile phones become so small, efficient, and flexible they could be printed on clothing.

A spaser is effectively a nanoscale laser or nanolaser. It emits a beam of light through the vibration of free electrons, rather than the space-consuming electromagnetic wave emission process of a traditional laser.

The news release also provides more details about the graphene/carbon nanotube spaser research and the possibility of turning t-shirts into telephones,

PhD student and lead researcher Chanaka Rupasinghe said the modelled spaser design using carbon would offer many advantages.

“Other spasers designed to date are made of gold or silver nanoparticles and semiconductor quantum dots while our device would be comprised of a graphene resonator and a carbon nanotube gain element,” Chanaka said.

“The use of carbon means our spaser would be more robust and flexible, would operate at high temperatures, and be eco-friendly.

“Because of these properties, there is the possibility that in the future an extremely thin mobile phone could be printed on clothing.”

Spaser-based devices can be used as an alternative to current transistor-based devices such as microprocessors, memory, and displays to overcome current miniaturising and bandwidth limitations.

The researchers chose to develop the spaser using graphene and carbon nanotubes, which are more than a hundred times stronger than steel and can conduct heat and electricity much better than copper. They can also withstand high temperatures.

Their research showed for the first time that graphene and carbon nanotubes can interact and transfer energy to each other through light. These optical interactions are very fast and energy-efficient, and so are suitable for applications such as computer chips.

“Graphene and carbon nanotubes can be used in applications where you need strong, lightweight, conducting, and thermally stable materials due to their outstanding mechanical, electrical and optical properties. They have been tested as nanoscale antennas, electric conductors and waveguides,” Chanaka said.

Chanaka said a spaser generated high-intensity electric fields concentrated into a nanoscale space. These are much stronger than those generated by illuminating metal nanoparticles by a laser in applications such as cancer therapy.

“Scientists have already found ways to guide nanoparticles close to cancer cells. We can move graphene and carbon nanotubes following those techniques and use the highly concentrated fields generated through the spasing phenomenon to destroy individual cancer cells without harming the healthy cells in the body,” Chanaka said.

Here’s a link to and a citation for the paper,

Spaser Made of Graphene and Carbon Nanotubes by Chanaka Rupasinghe, Ivan D. Rukhlenko, and Malin Premaratne. ACS Nano, 2014, 8 (3), pp 2431–2438. DOI: 10.1021/nn406015d Publication Date (Web): February 23, 2014
Copyright © 2014 American Chemical Society

This paper is behind a paywall.

Roadmap to neuromorphic engineering (digital and analog) for the creation of artificial brains* from the Georgia (US) Institute of Technology

While I didn’t mention neuromorphic engineering in my April 16, 2014 posting, which focused on the more general aspect of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie ‘possible’ (assuming very large stretches of imagination bringing us into the realm of implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,

Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they canbut [sic] if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.

In the film, Caster’s intelligence/consciousness is uploaded to the computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computer or artificial intelligence more humanlike is called neuromorphic engineering and, according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,

In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.

A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.

The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”

An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,

Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.

Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.

In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.

Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”

Hasler and Marr’s 29-page paper traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.

For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.

“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”

Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.

Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”

Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”

Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain — seems technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”

Here’s a link to and a citation for the roadmap,

Finding a roadmap to achieve large neuromorphic hardware systems by Jennifer Hasler and Bo Marr.  Front. Neurosci. (Frontiers in Neuroscience), 10 September 2013 | doi: 10.3389/fnins.2013.00118

This is an open access article (at least, the HTML version is).

I have looked at Hasler’s roadmap and it provides a good and readable overview (even for an amateur like me; Note: you do need some tolerance for ‘not knowing’) of the state of neuromorphic engineering’s problems, and suggestions for overcoming them. Here’s a description of a human brain and its power requirements as compared to a computer’s (from the roadmap),

One of the amazing things about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there are 10¹² neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10¹⁵ MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10¹¹ neurons and 10,000 synapses is expected.
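As an aside, the arithmetic in that quoted passage checks out; a few lines of Python reproduce the figures (every input number below comes from the passage itself, none are mine):

```python
# Sanity check of the roadmap's brain-power arithmetic (all inputs quoted).
NEURONS = 1e12        # neurons assumed in the passage
BRAIN_W = 20.0        # average brain power, in watts

# 20 W spread over 10^12 neurons, converted from watts to picowatts.
pw_per_neuron = BRAIN_W / NEURONS * 1e12
print(f"per-neuron power: {pw_per_neuron:.0f} pW")      # 20 pW

# 100,000 PMAC to duplicate the neural structure, done on 20 W.
TOTAL_PMAC = 1e5
pmac_per_w = TOTAL_PMAC / BRAIN_W
print(f"efficiency: {pmac_per_w:.0f} PMAC/W")           # 5000 PMAC/W

# The same efficiency per microwatt, in TMAC (1 PMAC = 1000 TMAC).
tmac_per_uw = pmac_per_w * 1e3 * 1e-6
print(f"efficiency: {tmac_per_uw:.0f} TMAC/uW")         # 5 TMAC/uW
```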

Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies, as well as takes advantage of the capabilities of IC technology that are currently visible, it will not be successful.

One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),

The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties to get to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering; the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).

I notice that the reference to the University of Michigan is relatively neutral in tone and that the memristor does not figure substantively in Hasler’s roadmap.

Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),

Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how will the technology developed for these large systems impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20 M in current prices, which although possible for large users, would not be common to be found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into commercial environment allows us to ride this powerful economic “engine” rather than pull.

In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.

The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which have included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality requiring ultra-low power levels. Further, we see in companies like Audience there is some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.

One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.

We expect progress in these neuromorphic systems and that should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at a power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, both in power, area, and communication, one can expect to see more and more of these issues going forward.

We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10⁷ to 10⁹ neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.
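For what it’s worth, the chip counts quoted there imply a consistent per-die density; a quick sketch (using only the numbers in the passage above):

```python
# Neurons-per-die implied by the quoted system sizes:
# 10 chips for 3e7 neurons at the low end, 300 chips for 1e9 at the high end.
systems = [(10, 3e7), (300, 1e9)]   # (chips, neurons)

for chips, neurons in systems:
    print(f"{chips:>3} chips -> {neurons / chips:.2e} neurons per die")

# Both ends of the range work out to roughly 3 million neurons per die,
# so the quoted chip counts scale consistently with one die design.
```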

In the following subsections, we overview general guidelines to consider when considering using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.

I have a casual observation to make. While the authors of the roadmap came to this conclusion, “This study concludes that useful neural computation machines based on biological principles at the size of the human brain seems technically within our grasp,” they’re also leaving themselves some wiggle room because the truth is no one knows if copying a human brain with circuits and various devices will lead to ‘thinking’ as we understand the concept.

For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.

One last observation about the movie ‘Transcendence’, has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?

* Space inserted between ‘brains’ and ‘from’ in head on April 21, 2014.

StoreDot scores a coup with bio-organic nanodots that recharge smartphone batteries in 30 seconds or less

Where can you get this magical battery? Unfortunately, when something is a prototype, it means we’re a long way from purchasing the device, which comes from Israeli start-up StoreDot (mentioned in my Dec. 3, 2012 posting about their bio-organic nanodots).

The prototype was well received at a Microsoft conference held in Tel Aviv according to an April 8, 2014 news item on BBC (British Broadcasting Corporation) news online,

Israeli start-up StoreDot displayed the device – made of biological structures – at Microsoft’s Think Next Conference [held in Tel Aviv on April 8, 2014].

A Samsung S4 smartphone went from a dead battery to full power in 26 seconds in the demonstration.

The battery is currently only a prototype and the firm predicts it will take three years to become a commercially viable product.

In the demonstration, a battery pack the size of a cigarette packet was attached to a smartphone.

“We think we can integrate a battery into a smartphone within a year and have a commercially ready device in three years,” founder Dr Doron Myersdorf told the BBC.

The bio-organic battery utilises tiny self-assembling nano-crystals that were first identified in research being done into Alzheimer’s disease at Tel Aviv University 10 years ago.

An April 8, 2014 news item on Azonano provides more technical details,

… StoreDot specializes in technology that is inspired by natural processes, cost-effective and environmentally-friendly. The company produces “nanodots” derived from bio-organic material that, due to their size, have both increased electrode capacitance and electrolyte performance, resulting in batteries that can be fully charged in minutes rather than hours.

For the more technically-minded, here’s how it actually works. Those multifunctional nanodots are chemically synthesized bio-organic peptide molecules that change the rules of mobile device capabilities. These nanocrystals are made from peptides, short chains of amino acids, the building blocks of proteins. Still with us? Here comes the really cool part.

StoreDot’s bio-organic devices, such as smartphone displays, provide much more efficient power consumption and are eco-friendly; while other nanodot and quantum-dot technologies currently in use are based on heavy metals, like cadmium, and are, therefore, toxic, StoreDot nanodots are biocompatible and superior to all previous discoveries in this field. StoreDot’s technology will allow them to synthesize new nanomaterials that can be used in a wide variety of applications.

Manufacturing Nanodots is also relatively inexpensive as they originate naturally, and utilize a basic biological mechanism of self-assembly. They can be made from a vast range of bio-organic raw materials that are readily available and environmentally friendly.

You can find out more about StoreDot on its website. By the way, those nanodot batteries are likely to be twice as expensive to purchase, once they come to market, as standard batteries according to the BBC news item.

Brain-on-a-chip 2014 survey/overview

Michael Berger has written another of his Nanowerk Spotlight articles focusing on neuromorphic engineering and the concept of a brain-on-a-chip, bringing it up to date, April 2014 style.

It’s a topic he and I have been following (separately) for years. Berger’s April 4, 2014 Brain-on-a-chip Spotlight article provides a very welcome overview of the international neuromorphic engineering effort (Note: Links have been removed),

Constructing realistic simulations of the human brain is a key goal of the Human Brain Project, a massive European-led research project that commenced in 2013.

The Human Brain Project is a large-scale, scientific collaborative project, which aims to gather all existing knowledge about the human brain, build multi-scale models of the brain that integrate this knowledge and use these models to simulate the brain on supercomputers. The resulting “virtual brain” offers the prospect of a fundamentally new and improved understanding of the human brain, opening the way for better treatments for brain diseases and for novel, brain-like computing technologies.

Several years ago, another European project named FACETS (Fast Analog Computing with Emergent Transient States) completed an exhaustive study of neurons to find out exactly how they work, how they connect to each other and how the network can ‘learn’ to do new things. One of the outcomes of the project was PyNN, a simulator-independent language for building neuronal network models.

Scientists have great expectations that nanotechnologies will bring them closer to the goal of creating computer systems that can simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size – basically a brain-on-a-chip. Already, scientists are working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.

Several research projects funded with millions of dollars are at work with the goal of developing brain-inspired computer architectures or virtual brains: DARPA’s SyNAPSE, the EU’s BrainScaleS (a successor to FACETS), or the Blue Brain project (one of the predecessors of the Human Brain Project) at Switzerland’s EPFL [École Polytechnique Fédérale de Lausanne].

Berger goes on to describe the raison d’être for neuromorphic engineering (attempts to mimic biological brains),

Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications – but useful and practical implementations do not yet exist.

Researchers are mostly interested in emulating neural plasticity (aka synaptic plasticity), from Berger’s April 4, 2014 article,

Independent from military-inspired research like DARPA’s, nanotechnology researchers in France have developed a hybrid nanoparticle-organic transistor that can mimic the main functionalities of a synapse. This organic transistor, based on pentacene and gold nanoparticles and termed NOMFET (Nanoparticle Organic Memory Field-Effect Transistor), has opened the way to new generations of neuro-inspired computers, capable of responding in a manner similar to the nervous system  (read more: “Scientists use nanotechnology to try building computers modeled after the brain”).

One of the key components of any neuromorphic effort, and its starting point, is the design of artificial synapses. Synapses dominate the architecture of the brain and are responsible for massive parallelism, structural plasticity, and robustness of the brain. They are also crucial to biological computations that underlie perception and learning. Therefore, a compact nanoelectronic device emulating the functions and plasticity of biological synapses will be the most important building block of brain-inspired computational systems.

In 2011, a team at Stanford University demonstrated a new single-element nanoscale device, based on the successfully commercialized phase change material technology, emulating the functionality and the plasticity of biological synapses. In their work, the Stanford team demonstrated a single element electronic synapse with the capability of both the modulation of the time constant and the realization of the different synaptic plasticity forms while consuming picojoule level energy for its operation (read more: “Brain-inspired computing with nanoelectronic programmable synapses”).

Berger does mention memristors but not in any great detail in this article,

Researchers have also suggested that memristor devices are capable of emulating the biological synapses with properly designed CMOS neuron components. A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. It has the special property that its resistance can be programmed (resistor) and subsequently remains stored (memory).
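For readers who like to see behaviour in code, here is a toy sketch of the memristive property Berger describes: conductance modulated by the charge passed through the two terminals, then retained once the stimulus stops. To be clear, this is an illustrative model of my own with made-up parameters, not any published device equation:

```python
class ToyMemristor:
    """Toy two-terminal element: conductance drifts with the charge that
    flows through it and is retained when the stimulus is removed, i.e.,
    it can be 'programmed' (resistor) and then 'stored' (memory).
    Illustrative only; parameters and dynamics are invented."""

    def __init__(self, g_min=1e-6, g_max=1e-3, g0=1e-5, k=1e4):
        self.g_min, self.g_max = g_min, g_max  # conductance bounds (siemens)
        self.g = g0                            # stored conductance state
        self.k = k                             # charge-to-conductance gain

    def apply_pulse(self, volts, seconds):
        # Charge moved during the pulse: q = i * t, with i = g * v.
        charge = self.g * volts * seconds
        # Conductance drifts in proportion to that charge, within bounds.
        self.g = min(self.g_max, max(self.g_min, self.g + self.k * charge))

m = ToyMemristor()
g_start = m.g
m.apply_pulse(+1.0, 1e-3)   # 'potentiating' pulse raises conductance
assert m.g > g_start
g_stored = m.g              # no stimulus: the state simply persists
m.apply_pulse(-1.0, 1e-3)   # 'depressing' pulse lowers it again
assert m.g < g_stored
print(f"{g_start:.3e} -> {g_stored:.3e} -> {m.g:.3e}")
```

The retained, analog-valued conductance is what makes the device attractive as a synaptic weight when paired with CMOS neuron circuits.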

One research project already demonstrated that a memristor can connect conventional circuits and support a process that is the basis for memory and learning in biological systems (read more: “Nanotechnology’s road to artificial brains”).

You can find a number of memristor articles here including these: Memristors have always been with us from June 14, 2013; How to use a memristor to create an artificial brain from Feb. 26, 2013; Electrochemistry of memristors in a critique of the 2008 discovery from Sept. 6, 2012; and many more (type ‘memristor’ into the blog search box and you should receive many postings or alternatively, you can try ‘artificial brains’ if you want everything I have on artificial brains).

Getting back to Berger’s April 4, 2014 article, he mentions one more approach and this one stands out,

A completely different – and revolutionary – human brain model has been designed by researchers in Japan who introduced the concept of a new class of computer which does not use any circuit or logic gate. This artificial brain-building project differs from all others in the world. It does not use logic-gate-based computing within the framework of Turing. The decision-making protocol is not a logical reduction of decision but rather a projection of frequency fractal operations in a real space; it is an engineering perspective on Gödel’s incompleteness theorem.

Berger wrote about this work in much more detail in a Feb. 10, 2014 Nanowerk Spotlight article titled: Brain jelly – design and construction of an organic, brain-like computer (Note: Links have been removed),

In a previous Nanowerk Spotlight we reported on the concept of a full-fledged massively parallel organic computer at the nanoscale that uses extremely low power (“Will brain-like evolutionary circuit lead to intelligent computers?”). In this work, the researchers created a process of circuit evolution similar to the human brain in an organic molecular layer. This was the first time that such a brain-like ‘evolutionary’ circuit had been realized.

The research team, led by Dr. Anirban Bandyopadhyay, a senior researcher at the Advanced Nano Characterization Center at the National Institute of Materials Science (NIMS) in Tsukuba, Japan, has now finalized their human brain model and introduced the concept of a new class of computer which does not use any circuit or logic gate.

In a new open-access paper published online on January 27, 2014, in Information (“Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System”), Bandyopadhyay and his team now describe the fundamental computing principle of a frequency fractal brain like computer.

“Our artificial brain-building project differs from all others in the world for several reasons,” Bandyopadhyay explains to Nanowerk. He lists the four major distinctions:
1) We do not use logic gate based computing within the framework of Turing, our decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.
2) We do not need to write any software, the argument and basic phase transition for decision-making, ‘if-then’ arguments and the transformation of one set of arguments into another self-assemble and expand spontaneously, the system holds an astronomically large number of ‘if’ arguments and its associative ‘then’ situations.
3) We use ‘spontaneous reply back’, via wireless communication using a unique resonance band coupling mode, not conventional antenna-receiver model, since fractal based non-radiative power management is used, the power expense is negligible.
4) We have carried out our own single DNA, single protein molecule and single brain microtubule neurophysiological study to develop our own Human brain model.

I encourage people to read Berger’s articles on this topic as they provide excellent information and links to much more. Curiously (mind you, it is easy to miss something), he does not mention James Gimzewski’s work at the University of California at Los Angeles (UCLA). Working with colleagues from the National Institute for Materials Science in Japan, Gimzewski published a paper about “two-, three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions”. You can find out more about the paper in my Dec. 24, 2012 posting titled: Synaptic electronics.

As for the ‘brain jelly’ paper, here’s a link to and a citation for it,

Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System by Subrata Ghosh, Krishna Aswani, Surabhi Singh, Satyajit Sahu, Daisuke Fujita and Anirban Bandyopadhyay. Information 2014, 5(1), 28-100; doi:10.3390/info5010028

It’s an open access paper.

As for anyone who’s curious about why the US BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies, also referred to as the Brain Activity Map Project) is not mentioned, I believe that’s because it’s focussed on biological brains exclusively at this point (you can check its Wikipedia entry to confirm).

Anirban Bandyopadhyay was last mentioned here in a January 16, 2014 posting titled: Controversial theory of consciousness confirmed (maybe) in the context of a presentation in Amsterdam, Netherlands.

Learn to love slime; it may help you to compute in the future

Eeeewww! Slime or slime mold is not well loved and yet scientists seem to retain a certain affection for it, if their efforts at researching ways to make it useful could be termed affection. A March 27, 2014 news item on Nanowerk highlights a project where scientists have used slime and nanoparticles to create logic units (precursors to computers; Note: A link has been removed),

A future computer might be a lot slimier than the solid silicon devices we have today. In a study published in the journal Materials Today (“Slime mold microfluidic logical gates”), European researchers reveal details of logic units built using living slime molds, which might act as the building blocks for computing devices and sensors.

The March 27, 2014 Elsevier press release, which originated the news item, describes the researchers and their work in more detail,

Andrew Adamatzky (University of the West of England, Bristol, UK) and Theresa Schubert (Bauhaus-University Weimar, Germany) have constructed logical circuits that exploit networks of interconnected slime mold tubes to process information.

One is more likely to find the slime mold Physarum polycephalum living somewhere dark and damp rather than in a computer science lab. In its “plasmodium” or vegetative state, the organism spans its environment with a network of tubes that absorb nutrients. The tubes also allow the organism to respond to light and changing environmental conditions that trigger the release of reproductive spores.

In earlier work, the team demonstrated that such a tube network could absorb and transport different colored dyes. They then fed it edible nutrients – oat flakes – to attract tube growth and common salt to repel them, so that they could grow a network with a particular structure. They then demonstrated how this system could mix two dyes to make a third color as an “output”.

Using the dyes with magnetic nanoparticles and tiny fluorescent beads allowed them to use the slime mold network as a biological “lab-on-a-chip” device. This represents a new way to build microfluidic devices for processing environmental or medical samples on the very small scale for testing and diagnostics, the work suggests. Extension to a much larger network of slime mold tubes could process nanoparticles and carry out the sophisticated Boolean logic operations used by computer circuitry. The team has so far demonstrated that a slime mold network can carry out XOR and NOR Boolean operations. Chaining together arrays of such logic gates might allow a slime mold computer to carry out binary operations for computation.
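Since NOR by itself is functionally complete, the two demonstrated gate types are, in principle, enough to build arbitrary binary logic. A quick Python sketch of the chaining idea (ordinary code, not a model of the slime mold itself):

```python
# The two gate types reported for the slime mold network:
def NOR(a, b): return int(not (a or b))
def XOR(a, b): return int(a != b)

# NOR is functionally complete, so the other basic gates follow from it:
def NOT(a):    return NOR(a, a)
def OR(a, b):  return NOT(NOR(a, b))
def AND(a, b): return NOR(NOT(a), NOT(b))

# A half adder from the demonstrated gates: XOR gives the sum bit,
# AND (built here purely from NORs) gives the carry bit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

A half adder built this way is the kind of binary operation the press release alludes to; the open question is whether large arrays of slime-mold gates can be wired together reliably.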

“The slime mold based gates are non-electronic, simple and inexpensive, and several gates can be realized simultaneously at the sites where protoplasmic tubes merge,” conclude Adamatzky and Schubert.

Are we entering the age of the biological computer? Stewart Bland, Editor of Materials Today, believes that “although more traditional electronic materials are here to stay, research such as this is helping to push and blur the boundaries of materials science, computer science and biology, and represents an exciting prospect for the future.”

I did look at the researchers’ paper and it is fascinating even to someone (me) who doesn’t understand the science very well. Here’s a link to and a citation for the paper,

Slime mold microfluidic logical gates by Andrew Adamatzky and Theresa Schubert. Materials Today, Volume 17, Issue 2, March 2014, Pages 86–91 (2014) published by Elsevier. http://dx.doi.org/10.1016/j.mattod.2014.01.018 The article is available for free at www.materialstoday.com

Yes, it’s an open access paper published by Elsevier, good on them!

Transition metal dichalcogenides (molybdenum disulfide and tungsten diselenide) rock the graphene boat

Anyone who’s read stories about scientific discovery knows that the early stages are characterized by a number of possibilities so the current race to unseat graphene as the wonder material of the nanoworld is a ‘business as usual’ sign although I imagine it can be confusing for investors and others hoping to make their fortunes. As for the contenders to the ‘wonder nanomaterial throne’, they are transition metal dichalcogenides: molybdenum disulfide and tungsten diselenide both of which have garnered some recent attention.

A March 12, 2014 news item on Nanowerk features research on molybdenum disulfide from Poland,

Will one-atom-thick layers of molybdenum disulfide, a compound that occurs naturally in rocks, prove to be better than graphene for electronic applications? There are many signs that might prove to be the case. But physicists from the Faculty of Physics at the University of Warsaw have shown that the nature of the phenomena occurring in layered materials is still ill-understood and requires further research.

….

Researchers at the University of Warsaw, Faculty of Physics (FUW) have shown that the phenomena occurring in the crystal network of molybdenum disulfide sheets are of a slightly different nature than previously thought. A report describing the discovery, achieved in collaboration with Laboratoire National des Champs Magnétiques Intenses in Grenoble, has recently been published in Applied Physics Letters.

“It will not become possible to construct complex electronic systems consisting of individual atomic sheets until we have a sufficiently good understanding of the physics involved in the phenomena occurring within the crystal network of those materials. Our research shows, however, that research still has a long way to go in this field”, says Prof. Adam Babiński at the UW Faculty of Physics.

A March 12, 2014 Dept. of Physics University of Warsaw (FUW) news release, which originated the news item, describes the researchers’ ideas about graphene and alternative materials such as molybdenum disulfide,

The simplest method of creating graphene is called exfoliation: a piece of scotch tape is first stuck to a piece of graphite, then peeled off. Among the particles that remain stuck to the tape, one can find microscopic layers of graphene. This is because graphite consists of many graphene sheets adjacent to one another. The carbon atoms within each layer are very strongly bound to one another (by covalent bonds, to which graphene owes its legendary resilience), but the individual layers are held together by significantly weaker van der Waals bonds. Ordinary scotch tape is strong enough to break the latter and to tear individual graphene sheets away from the graphite crystal.

A few years ago it was noticed that just as graphene can be obtained from graphite, sheets a single atom thick can similarly be obtained from many other crystals. This has been successfully done, for instance, with transition metals chalcogenides (sulfides, selenides, and tellurides). Layers of molybdenum disulfide (MoS2), in particular, have proven to be a very interesting material. This compound exists in nature as molybdenite, a crystal material found in rocks around the world, frequently taking the characteristic form of silver-colored hexagonal plates. For years molybdenite has been used in the manufacturing of lubricants and metal alloys. Like in the case of graphite, the properties of single-atom sheets of MoS2 long went unnoticed.

From the standpoint of applications in electronics, molybdenum disulfide sheets exhibit a significant advantage over graphene: they have an energy gap, an energy range within which no electron states can exist. By applying an electric field, the material can be switched between a state that conducts electricity and one that behaves like an insulator. By current calculations, a switched-off molybdenum disulfide transistor could consume as little as several hundred thousand times less energy than a silicon transistor. Graphene, on the other hand, has no energy gap, and transistors made of graphene cannot be fully switched off.
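The link between an energy gap and full switch-off can be made concrete with a standard back-of-envelope estimate: thermally activated off-state leakage scales roughly as exp(-Eg/2kT). The sketch below is my own illustration, not a calculation from the news release; the ~1.8 eV gap is a commonly quoted value for monolayer MoS2.

```python
import math

def off_current_ratio(eg_ev, kt_ev=0.0259):
    """Rough off-state leakage relative to a gapless material (Eg = 0),
    using the thermal-activation estimate exp(-Eg / 2kT) at room temperature."""
    return math.exp(-eg_ev / (2.0 * kt_ev))

# Monolayer MoS2 (gap ~1.8 eV) versus graphene (no gap): leakage is
# suppressed by many orders of magnitude, so the transistor can truly turn off.
print(1.0 / off_current_ratio(1.8))
```

With no gap at all the ratio is 1: nothing suppresses the leakage, which is why graphene transistors cannot be fully switched off.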

The news release goes on to describe how the researchers refined their understanding of molybdenum disulfide and its properties,

Valuable information about a crystal’s structure and phenomena occurring within it can be obtained by analyzing how light gets scattered within the material. Photons of a given energy are usually absorbed by the atoms and molecules of the material, then reemitted at the same energy. In the spectrum of the scattered light one can then see a distinctive peak, corresponding to that energy. It turns out, however, that one out of many millions of photons is able to use some of its energy otherwise, for instance to alter the vibration or rotation of a molecule. The reverse situation also sometimes occurs: a photon may take away some of the energy of a molecule, and so its own energy slightly increases. In this situation, known as Raman scattering, two smaller peaks are observed to either side of the main peak.
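A toy spectrum makes the geometry of those peaks concrete: an elastic (Rayleigh) line at the laser energy, a Stokes line one vibrational quantum below it, and a weaker anti-Stokes line the same distance above. All numbers below are illustrative; only the structure matters.

```python
import math

def gaussian(x, center, sigma, amplitude):
    """Simple Gaussian line shape."""
    return amplitude * math.exp(-0.5 * ((x - center) / sigma) ** 2)

def toy_spectrum(x, laser=0.0, phonon=0.05, width=0.002):
    """Elastic peak at the laser energy, a Stokes peak one phonon energy
    below, and a weaker anti-Stokes peak one phonon energy above.
    Energies are in arbitrary units; the ~1e-6 side-peak amplitude echoes
    the 'one out of many millions of photons' figure in the release."""
    return (gaussian(x, laser, width, 1.0)                 # elastic peak
            + gaussian(x, laser - phonon, width, 1e-6)     # Stokes
            + gaussian(x, laser + phonon, width, 3e-7))    # anti-Stokes

# The Stokes peak (photon loses energy) is stronger than the anti-Stokes peak.
print(toy_spectrum(-0.05), toy_spectrum(0.05))
```

Real Stokes/anti-Stokes intensity ratios follow the thermal population of the vibration, which is one reason low-temperature measurements, as in the Warsaw work, sharpen the analysis.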

The scientists at the UW Faculty of Physics analyzed the Raman spectra of molybdenum disulfide by carrying out low-temperature microscopic measurements. The higher sensitivity of the equipment and detailed analysis methods enabled the team to propose a more precise model of the phenomena occurring in the crystal network of molybdenum disulfide.

“In the case of single-layer materials, the shape of the Raman lines has previously been explained in terms of phenomena involving certain characteristic vibrations of the crystal network. We have shown for molybdenum disulfide sheets that the effects ascribed to those vibrations must actually, at least in part, be due to other network vibrations not previously taken into account”, explains Katarzyna Gołasa, a doctoral student at the UW Faculty of Physics.

The presence of the new type of vibration in single-sheet materials has an impact on how electrons behave. As a consequence, these materials must have somewhat different electronic properties than previously anticipated.

Here’s what the rocks look like,

Molybdenum disulfide occurs in nature as molybdenite, crystalline material that frequently takes the characteristic form of silver-colored hexagonal plates. (Source: FUW)

I am not able to find the published research at this time (March 13, 2014).

The tungsten diselenide story is specifically application-centric. Dexter Johnson, in a March 11, 2014 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), describes the differing perspectives and potential applications suggested by the three teams that cooperated to produce papers united by a joint theme,

The three research groups focused on optoelectronics applications of tungsten diselenide, but each with a slightly different emphasis.

The University of Washington scientists highlighted applications of the material for a light emitting diode (LED). The Vienna University of Technology group focused on the material’s photovoltaic applications. And, finally, the MIT [Massachusetts Institute of Technology] group looked at all of the optoelectronic applications for the material that would result from the way it can be switched from being a p-type to a n-type semiconductor.

Here are some details of the research from each of the institutions’ news releases.

A March 10, 2014 University of Washington (state) news release highlights their LED work,

University of Washington [UW] scientists have built the thinnest-known LED that can be used as a source of light energy in electronics. The LED is based on two-dimensional, flexible semiconductors, making it possible to stack the devices or use them in much smaller and more diverse applications than current technology allows.

“We are able to make the thinnest-possible LEDs, only three atoms thick yet mechanically strong. Such thin and foldable LEDs are critical for future portable and integrated electronic devices,” said Xiaodong Xu, a UW assistant professor in materials science and engineering and in physics.

The UW’s LED is made from flat sheets of the molecular semiconductor known as tungsten diselenide, a member of a group of two-dimensional materials that have been recently identified as the thinnest-known semiconductors. Researchers use regular adhesive tape to extract a single sheet of this material from thick, layered pieces in a method inspired by the 2010 Nobel Prize in Physics, awarded to University of Manchester researchers for isolating one-atom-thick flakes of carbon, called graphene, from a piece of graphite.

In addition to light-emitting applications, this technology could open doors for using light as interconnects to run nano-scale computer chips instead of standard devices that operate off the movement of electrons, or electricity. The latter process creates a lot of heat and wastes power, whereas sending light through a chip to achieve the same purpose would be highly efficient.

“A promising solution is to replace the electrical interconnect with optical ones, which will maintain the high bandwidth but consume less energy,” Xu said. “Our work makes it possible to make highly integrated and energy-efficient devices in areas such as lighting, optical communication and nano lasers.”

Here’s a link to and a citation for this team’s paper,

Electrically tunable excitonic light-emitting diodes based on monolayer WSe2 p–n junctions by Jason S. Ross, Philip Klement, Aaron M. Jones, Nirmal J. Ghimire, Jiaqiang Yan, D. G. Mandrus, Takashi Taniguchi, Kenji Watanabe, Kenji Kitamura, Wang Yao, David H. Cobden, & Xiaodong Xu. Nature Nanotechnology (2014) doi:10.1038/nnano.2014.26 Published online 09 March 2014

This paper is behind a paywall.

A March 9, 2014 Vienna University of Technology news release highlights their work on tungsten diselenide and its possible application in solar cells,

… With graphene as a light detector, optical signals can be transformed into electric pulses on extremely short timescales.

For one very similar application, however, graphene is not well suited: building solar cells. “The electronic states in graphene are not very practical for creating photovoltaics”, says Thomas Mueller. Therefore, he and his team started to look for other materials which, similarly to graphene, can be arranged in ultrathin layers, but have even better electronic properties.

The material of choice was tungsten diselenide: It consists of one layer of tungsten atoms, which are connected by selenium atoms above and below the tungsten plane. The material absorbs light, much like graphene, but in tungsten diselenide, this light can be used to create electrical power.

The layer is so thin that 95% of the light just passes through – but a tenth of the remaining five percent, which is absorbed by the material, is converted into electrical power. Therefore, the internal efficiency is quite high. A larger portion of the incident light can be used if several of the ultrathin layers are stacked on top of each other – but sometimes the high transparency can be a useful side effect. “We are envisioning solar cell layers on glass facades, which let part of the light into the building while at the same time creating electricity”, says Thomas Mueller.
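Those percentages compound in a simple way when layers are stacked. A sketch using the figures quoted above (95% transmission per layer, with roughly a tenth of the absorbed light converted):

```python
def stack_output(n_layers, transmit=0.95, conversion=0.10):
    """Fraction of incident light converted to power, and fraction still
    transmitted, for n stacked monolayers. Defaults follow the release's
    figures: 95% transmitted per layer, ~10% of absorbed light converted."""
    light, power = 1.0, 0.0
    for _ in range(n_layers):
        absorbed = light * (1.0 - transmit)  # what this layer captures
        power += absorbed * conversion       # its contribution to output
        light *= transmit                    # what reaches the next layer
    return power, light

print(stack_output(1))   # single layer: ~0.5% converted, 95% transmitted
print(stack_output(20))  # stacking boosts conversion, reduces transparency
```

One layer converts about half a percent of the incident light while staying almost transparent; twenty layers convert over 6% but pass only about a third of the light, which is the trade-off behind the glass-facade idea.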

Today, standard solar cells are mostly made of silicon; they are rather bulky and inflexible. Organic materials are also used for opto-electronic applications, but they age rather quickly. “A big advantage of two-dimensional structures of single atomic layers is their crystallinity. Crystal structures lend stability”, says Thomas Mueller.

Here’s a link to and a citation for the Vienna University of Technology paper,

Solar-energy conversion and light emission in an atomic monolayer p–n diode by Andreas Pospischil, Marco M. Furchi, & Thomas Mueller. Nature Nanotechnology (2014) doi:10.1038/nnano.2014.14 Published online 09 March 2014

This paper is behind a paywall.

Finally, a March 10, 2014 MIT news release details their work on a material able to switch from a p-type (p = positive) to an n-type (n = negative) semiconductor,

The material they used, called tungsten diselenide (WSe2), is part of a class of single-molecule-thick materials under investigation for possible use in new optoelectronic devices — ones that can manipulate the interactions of light and electricity. In these experiments, the MIT researchers were able to use the material to produce diodes, the basic building block of modern electronics.

Typically, diodes (which allow electrons to flow in only one direction) are made by “doping,” which is a process of injecting other atoms into the crystal structure of a host material. By using different materials for this irreversible process, it is possible to make either of the two basic kinds of semiconducting materials, p-type or n-type.

But with the new material, either p-type or n-type functions can be obtained just by bringing the vanishingly thin film into very close proximity with an adjacent metal electrode, and tuning the voltage in this electrode from positive to negative. That means the material can easily and instantly be switched from one type to the other, which is rarely the case with conventional semiconductors.

In their experiments, the MIT team produced a device with a sheet of WSe2 material that was electrically doped half n-type and half p-type, creating a working diode that has properties “very close to the ideal,” Jarillo-Herrero says.
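“Very close to the ideal” can be read against the textbook Shockley diode equation, where an ideality factor n near 1 is the benchmark. A sketch with assumed parameters (the saturation current and ideality factor below are illustrative, not values from the MIT paper):

```python
import math

def shockley_current(v, i_s=1e-12, n=1.0, kt_q=0.0259):
    """Shockley diode equation: I = I_s * (exp(V / (n * kT/q)) - 1).
    i_s (saturation current, amps) and n (ideality factor) are assumed
    illustrative values; n close to 1 is what 'close to ideal' means."""
    return i_s * (math.exp(v / (n * kt_q)) - 1.0)

# Forward bias passes exponentially growing current; reverse bias leaks
# only about the tiny saturation current, so current flows one way.
print(shockley_current(0.6), shockley_current(-0.6))
```

Forward bias gives exponentially growing current while reverse bias leaks only about the saturation current, which is the one-way behaviour the article describes.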

By making diodes, it is possible to produce all three basic optoelectronic devices — photodetectors, photovoltaic cells, and LEDs; the MIT team has demonstrated all three, Jarillo-Herrero says. While these are proof-of-concept devices, and not designed for scaling up, the successful demonstration could point the way toward a wide range of potential uses, he says.

“It’s known how to make very large-area materials” of this type, Churchill says. While further work will be required, he says, “there’s no reason you wouldn’t be able to do it on an industrial scale.”

In principle, Jarillo-Herrero says, because this material can be engineered to produce different values of a key property called bandgap, it should be possible to make LEDs that produce any color — something that is difficult to do with conventional materials. And because the material is so thin, transparent, and lightweight, devices such as solar cells or displays could potentially be built into building or vehicle windows, or even incorporated into clothing, he says.

While selenium is not as abundant as silicon or other promising materials for electronics, the thinness of these sheets is a big advantage, Churchill points out: “It’s thousands or tens of thousands of times thinner” than conventional diode materials, “so you’d use thousands of times less material” to make devices of a given size.

Here’s a link to and a citation for the MIT paper,

Optoelectronic devices based on electrically tunable p–n diodes in a monolayer dichalcogenide by Britton W. H. Baugher, Hugh O. H. Churchill, Yafang Yang, & Pablo Jarillo-Herrero. Nature Nanotechnology (2014) doi:10.1038/nnano.2014.25 Published online 09 March 2014

This paper is behind a paywall.

These are very exciting, if not to say, electrifying times. (Couldn’t resist the wordplay.)

Self-healing supercapacitors from Singapore

Michael Berger has written up the latest and greatest regarding self-healing capacitors and carbon nanotubes (which could have more relevance to your life than you realize) in a March 10, 2014 Nanowerk Spotlight article,

If you ever had problems with the (non-removable) battery in your iPhone or iPad then you well know that the energy storage or power source is a key component in a tightly integrated electronic device. Any damage to the power source will usually result in the breakdown of the entire device, generating at best inconvenience and cost and in the worst case a safety hazard and your latest contribution to the mountains of electronic waste.

A solution to this problem might now be at hand thanks to researchers in Singapore who have successfully fabricated the first mechanically and electrically self-healing supercapacitor.

Reporting their findings in Advanced Materials (“A Mechanically and Electrically Self-Healing Supercapacitor”), a team led by Xiaodong Chen, an associate professor in the School of Materials Science & Engineering at Nanyang Technological University, has designed and fabricated the first integrated, mechanically and electrically self-healing supercapacitor by spreading functionalized single-walled carbon nanotube (SWCNT) films on self-healing substrates.

Inspired by the intrinsic self-repairing ability of biological systems, a class of artificial ‘smart’ materials, called self-healing materials, which can repair internal or external damage, has been developed over the past decade …

Berger goes on to describe how the researchers addressed the issue of restoring electrical conductivity, as well as restoring mechanical properties, to self-healing materials meant to be used as supercapacitors.

Here’s a link to and a citation for the team’s latest paper,

A Mechanically and Electrically Self-Healing Supercapacitor by Hua Wang, Bowen Zhu, Wencao Jiang, Yun Yang, Wan Ru Leow, Hong Wang, & Xiaodong Chen. Advanced Materials Article first published online: 19 FEB 2014 DOI: 10.1002/adma.201305682

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

Xiaodong Chen and his team were last mentioned here in a Jan. 9, 2014 posting in connection with their work on memristive nanodevices derived from protein.

New method of Rayleigh scattering for better semiconductors

Rayleigh scattering, first described in the 19th century, is the scientific explanation for why the sky is blue during the day and why it turns red in the evening. A March 4, 2014 news item on Nanowerk describes some research into measuring semiconductor nanowires with a new Rayleigh scattering technique,

A new twist on a very old physics technique could have a profound impact on one of the most buzzed-about aspects of nanotechnology.

Researchers at the University of Cincinnati [UC] have found that their unique method of light-matter interaction analysis appears to be a good way of helping make better semiconductor nanowires.

The March 4, 2014 University of Cincinnati news release, which originated the news item, has the researcher describing his work in further detail (Note: Links have been removed),

“Semiconductor nanowires are one of the hottest topics in the nanoscience research field in the recent decade,” says Yuda Wang, a UC doctoral student. “Due to the unique geometry compared to conventional bulk semiconductors, nanowires have already shown many advantageous properties, particularly in novel applications in such fields as nanoelectronics, nanophotonics, nanobiochemistry and nanoenergy.”

Wang will present the team’s research “Transient Rayleigh Scattering Spectroscopy Measurement of Carrier Dynamics in Zincblende and Wurtzite Indium Phosphide Nanowires” at the American Physical Society (APS) meeting to be held March 3-7 [2014] in Denver. …

Key to this research is UC’s new method of Rayleigh scattering, a phenomenon first described in 1871 and the scientific explanation for why the sky is blue in the daytime and turns red at sunset. The researchers’ Rayleigh scattering technique probes the band structures and electron-hole dynamics inside a single indium phosphide nanowire, allowing them to observe the response with a time resolution in the femtosecond range – or one quadrillionth of a second.
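The blue-sky connection comes down to the 1/λ^4 wavelength dependence of Rayleigh scattering intensity, which a one-line function makes explicit:

```python
def rayleigh_ratio(lambda_a_nm, lambda_b_nm):
    """Relative Rayleigh scattering intensity of wavelength a versus
    wavelength b: intensity scales as 1 / wavelength**4."""
    return (lambda_b_nm / lambda_a_nm) ** 4

# Blue light (~450 nm) versus red light (~650 nm):
print(rayleigh_ratio(450, 650))
```

Blue light at roughly 450 nm scatters about four times more strongly than red at 650 nm, which colours the daytime sky blue; once the blue has been scattered out of the long sunset light path, the red remains.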

“Basically, we can generate a live picture of how the electrons and holes are excited and slowly return to their original states, and the mechanism behind that can be analyzed and understood,” says Wang, of UC’s Department of Physics. “It’s all critical in characterizing the optical or electronic properties of a semiconducting nanowire.”

Semiconductors are at the center of modern electronics. Computers, TVs and cellphones have them. They’re made from the crystalline form of elements that have scientifically beneficial electrical conductivity properties.

Wang says the burgeoning range of semiconductor nanowire applications – such as smaller, more energy-efficient electronics – has brought rapid improvement to nanowire fabrication techniques. He says his team’s research could offer makers of nanotechnology a new and highly effective option for measuring the physics inside nanowires.

“The key to a good optimization process is an excellent feedback, or a characterization method,” Wang says. “Rayleigh scattering appears to be an exceptional way to measure several nanowire properties simultaneously in a non-invasive and high-quality manner.”

Additional contributors to this research are UC alumnus Mohammad Montazeri; UC physics professors Howard Jackson and Leigh Smith and adjunct associate professor Jan Yarrison-Rice, all of the McMicken College of Arts and Sciences; and Tim Burgess, Suriati Paiman, Hoe Tan, Qiang Gao and Chennupati Jagadish of Australian National University.

You can get more information about the American Physical Society March 3 – 7, 2014 meeting in Denver, Colorado here.

Making nanoelectronic devices last longer in the body could lead to ‘cyborg’ tissue

An American Chemical Society (ACS) Feb. 19, 2014 news release (also on EurekAlert), describes some research devoted to extending a nanoelectronic device’s ‘life’ when implanted in the body,

The debut of cyborgs who are part human and part machine may be a long way off, but researchers say they now may be getting closer. In a study published in ACS’ journal Nano Letters, they report development of a coating that makes nanoelectronics much more stable in conditions mimicking those in the human body. [emphases mine] The advance could also aid in the development of very small implanted medical devices for monitoring health and disease.

Charles Lieber and colleagues note that nanoelectronic devices with nanowire components have unique abilities to probe and interface with living cells. They are much smaller than most implanted medical devices used today. For example, a pacemaker that regulates the heart is the size of a U.S. 50-cent coin, but nanoelectronics are so small that several hundred such devices would fit in the period at the end of this sentence. Laboratory versions made of silicon nanowires can detect disease biomarkers and even single virus particles, or record heart cells as they beat. Lieber’s team also has integrated nanoelectronics into living tissues in three dimensions — creating a “cyborg tissue.” One obstacle to the practical, long-term use of these devices is that they typically fall apart within weeks or days when implanted. In the current study, the researchers set out to make them much more stable.

They found that coating silicon nanowires with a metal oxide shell allowed nanowire devices to last for several months. This was in conditions that mimicked the temperature and composition of the inside of the human body. In preliminary studies, one shell material appears to extend the lifespan of nanoelectronics to about two years.

Depending on how you define the term cyborg, it could be said there are already cyborgs amongst us as I noted in an April 20, 2012 posting titled: My mother is a cyborg. Personally I’m fascinated by the news release’s mention of ‘cyborg tissue’ although there’s no further explanation of what the term might mean.

For the curious, here’s a link to and a citation for the paper,

Long Term Stability of Nanowire Nanoelectronics in Physiological Environments by Wei Zhou, Xiaochuan Dai, Tian-Ming Fu, Chong Xie, Jia Liu, and Charles M. Lieber. Nano Lett., Article ASAP. DOI: 10.1021/nl500070h. Publication Date (Web): January 30, 2014.
Copyright © 2014 American Chemical Society

This paper is behind a paywall.

Injectable and more powerful* batteries for live salmon

Today’s live salmon may sport a battery for monitoring purposes, and now scientists have developed one that is significantly more powerful, according to a Feb. 17, 2014 Pacific Northwest National Laboratory (PNNL) news release (dated Feb. 18, 2014 on EurekAlert),

Scientists have created a microbattery that packs twice the energy compared to current microbatteries used to monitor the movements of salmon through rivers in the Pacific Northwest and around the world.

The battery, a cylinder just slightly larger than a long grain of rice, is certainly not the world’s smallest battery, as engineers have created batteries far tinier than the width of a human hair. But those smaller batteries don’t hold enough energy to power acoustic fish tags. The new battery is small enough to be injected into an organism and holds much more energy than similar-sized batteries.

Here’s a photo of the battery as it rests amongst grains of rice,

The microbattery created by Jie Xiao and Daniel Deng and colleagues, amid grains of rice. Courtesy PNNL

The news release goes on to explain why scientists are developing a lighter battery for salmon and how they achieved their goal,

For scientists tracking the movements of salmon, the lighter battery translates to a smaller transmitter which can be inserted into younger, smaller fish. That would allow scientists to track their welfare earlier in the life cycle, oftentimes in the small streams that are crucial to their beginnings. The new battery also can power signals over longer distances, allowing researchers to track fish further from shore or from dams, or deeper in the water.

“The invention of this battery essentially revolutionizes the biotelemetry world and opens up the study of earlier life stages of salmon in ways that have not been possible before,” said M. Brad Eppard, a fisheries biologist with the Portland District of the U.S. Army Corps of Engineers.

“For years the chief limiting factor to creating a smaller transmitter has been the battery size. That hurdle has now been overcome,” added Eppard, who manages the Portland District’s fisheries research program.

The Corps and other agencies use the information from tags to chart the welfare of endangered fish and to help determine the optimal manner to operate dams. Three years ago the Corps turned to Z. Daniel Deng, a PNNL engineer, to create a smaller transmitter, one small enough to be injected, instead of surgically implanted, into fish. Injection is much less invasive and stressful for the fish, and it’s a faster and less costly process.

“This was a major challenge which really consumed us these last three years,” said Deng. “There’s nothing like this available commercially, that can be injected. Either the batteries are too big, or they don’t last long enough to be useful. That’s why we had to design our own.”

Deng turned to materials science expert Jie Xiao to create the new battery design.

To pack more energy into a small area, Xiao’s team improved upon the “jellyroll” technique commonly used to make larger household cylindrical batteries. Xiao’s team laid down layers of the battery materials one on top of the other in a process known as lamination, then rolled them up together, similar to how a jellyroll is created. The layers include a separating material sandwiched by a cathode made of carbon fluoride and an anode made of lithium.

The technique allowed her team to increase the area of the electrodes without increasing their thickness or the overall size of the battery. The increased area addresses one of the chief problems when making such a small battery — keeping the impedance, which is a lot like resistance, from getting too high. High impedance occurs when so many electrons are packed into a small place that they don’t flow easily or quickly along the routes required in a battery, instead getting in each other’s way. The smaller the battery, the bigger the problem.

Using the jellyroll technique allowed Xiao’s team to create a larger area for the electrons to interact, reducing impedance so much that the capacity of the material is about double that of traditional microbatteries used in acoustic fish tags.

“It’s a bit like flattening wads of Play-Doh, one layer at a time, and then rolling them up together, like a jelly roll,” says Xiao. “This allows you to pack more of your active materials into a small space without increasing the resistance.”

The new battery is a little more than half the weight of batteries currently used in acoustic fish tags — just 70 milligrams, compared to about 135 milligrams — and measures six millimeters long by three millimeters wide. The battery has an energy density of about 240 watt hours per kilogram, compared to around 100 for commercially available silver oxide button microbatteries.
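For a rough sense of what those figures mean in total stored energy, here is a back-of-the-envelope calculation. The arithmetic is mine, not the researchers’; it uses only the masses and energy densities quoted in the news release:

```python
def battery_energy_mwh(density_wh_per_kg: float, mass_mg: float) -> float:
    """Total stored energy in milliwatt-hours, from gravimetric
    energy density (Wh/kg) and battery mass (mg)."""
    return density_wh_per_kg * (mass_mg * 1e-6) * 1e3

# New PNNL microbattery: ~240 Wh/kg at 70 mg
new_mwh = battery_energy_mwh(240, 70)    # 16.8 mWh
# Silver oxide button microbattery: ~100 Wh/kg at 135 mg
old_mwh = battery_energy_mwh(100, 135)   # 13.5 mWh

print(f"new: {new_mwh:.1f} mWh, old: {old_mwh:.1f} mWh")
print(f"energy ratio: {new_mwh / old_mwh:.2f}x at about half the weight")
```

In other words, the new cell holds roughly 24 per cent more total energy than the heavier button cell it replaces, which is where the “smaller transmitter, longer range” claims come from.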

The battery holds enough energy to send out an acoustic signal strong enough to be useful for fish-tracking studies even in noisy environments such as near large dams. The battery can power a 744-microsecond signal sent every three seconds for about three weeks, or about every five seconds for a month. It’s the smallest battery the researchers know of with enough energy capacity to maintain that level of signaling.
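The signaling budget those numbers imply can be sketched the same way. Again, this is my arithmetic rather than the researchers’, and it assumes the transmissions are the only drain on the battery:

```python
def transmit_budget(pulse_s: float, interval_s: float, days: float):
    """Return (pulse count, total on-air seconds, duty cycle)
    for an acoustic tag pinging at a fixed interval."""
    pulses = days * 24 * 3600 / interval_s
    return pulses, pulses * pulse_s, pulse_s / interval_s

# 744-microsecond pulse every 3 seconds for 3 weeks
pulses, on_air_s, duty = transmit_budget(744e-6, 3.0, 21)
print(f"{pulses:,.0f} pulses")            # 604,800 pulses
print(f"{on_air_s:.0f} s on air in total")
print(f"duty cycle: {duty:.4%}")
```

Over the whole three weeks the tag is actually transmitting for only about 450 seconds, a duty cycle of roughly 0.025 per cent, which is why such a tiny energy store can last so long.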

The batteries also work better in cold water where salmon often live, sending clearer signals at low temperatures compared to current batteries. That’s because their active ingredients are lithium and carbon fluoride, a chemistry that is promising for other applications but has not been common for microbatteries.

Last summer in Xiao’s laboratory, scientists Samuel Cartmell and Terence Lozano made by hand more than 1,000 of the rice-sized batteries. It’s a painstaking process, cutting and forming tiny snippets of sophisticated materials, putting them through a flattening device that resembles a pasta maker, binding them together, and rolling them by hand into tiny capsules. Their skilled hands rival those of surgeons, working not with tissue but with sensitive electronic materials.

A PNNL team led by Deng surgically implanted 700 of the tags into salmon in a field trial in the Snake River last summer. Preliminary results show that the tags performed extremely well. The results of that study and more details about the smaller, enhanced fish tags equipped with the new microbattery will come out in a forthcoming publication. Battelle, which operates PNNL, has applied for a patent on the technology.

I notice that while the second paragraph of the news release (in the first excerpt) says the battery is injectable, the final paragraph (in the second excerpt) says the team “surgically implanted” the tags with their new batteries into the salmon.

Here’s a link to and a citation for the newly published article in Scientific Reports,

Micro-battery Development for Juvenile Salmon Acoustic Telemetry System Applications by Honghao Chen, Samuel Cartmell, Qiang Wang, Terence Lozano, Z. Daniel Deng, Huidong Li, Xilin Chen, Yong Yuan, Mark E. Gross, Thomas J. Carlson, & Jie Xiao. Scientific Reports 4, Article number: 3790. DOI: 10.1038/srep03790. Published 21 January 2014.

This paper is open access.

* I changed the headline from ‘Injectable batteries for live salmon made more powerful’ to ‘Injectable and more powerful batteries for live salmon’  to better reflect the information in the news release. Feb. 19, 2014 at 11:43 am PST.

ETA Feb. 20, 2014: Dexter Johnson has weighed in on this very engaging and practical piece of research in a Feb. 19, 2014 posting on his Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website (Note: Links have been removed),

There’s no denying that building the world’s smallest battery is a notable achievement. But while they may lay the groundwork for future battery technologies, today such microbatteries are mostly laboratory curiosities.

Developing a battery that’s no bigger than a grain of rice—and that’s actually useful in the real world—is quite another kind of achievement. Researchers at Pacific Northwest National Laboratory (PNNL) have done just that, creating a battery based on graphene that has successfully been used in monitoring the movements of salmon through rivers.

The microbattery is being heralded as a breakthrough in biotelemetry and should give researchers never before insights into the movements and the early stages of life of the fish.

The battery is partly made from a fluorinated graphene that was described last year …