Category Archives: electronics

Implantable living pharmacy

I stumbled across a very interesting US Defense Advanced Research Projects Agency (DARPA) project (from an August 30, 2021 posting on Northwestern University’s Rivnay Lab [a laboratory for organic bioelectronics] blog),

Our lab has received a cooperative agreement with DARPA to develop a wireless, fully implantable ‘living pharmacy’ device that could help regulate human sleep patterns. The project is through DARPA’s BTO (biotechnology office)’s Advanced Acclimation and Protection Tool for Environmental Readiness (ADAPTER) program, meant to address physical challenges of travel, such as jetlag and fatigue.

The device, called NTRAIN (Normalizing Timing of Rhythms Across Internal Networks of Circadian Clocks), would control the body’s circadian clock, reducing the time it takes for a person to recover from disrupted sleep/wake cycles by as much as half.

The project spans 5 institutions including Northwestern, Rice University, Carnegie Mellon, University of Minnesota, and Blackrock Neurotech.

Prior to the Aug. 30, 2021 posting, Amanda Morris wrote a May 13, 2021 article for Northwestern NOW (university magazine), which provides more details about the project, Note: A link has been removed,

The first phase of the highly interdisciplinary program will focus on developing the implant. The second phase, contingent on the first, will validate the device. If that milestone is met, then researchers will test the device in human trials, as part of the third phase. The full funding corresponds to $33 million over four-and-a-half years. 

Nicknamed the “living pharmacy,” the device could be a powerful tool for military personnel, who frequently travel across multiple time zones, and shift workers including first responders, who vacillate between overnight and daytime shifts.

Combining synthetic biology with bioelectronics, the team will engineer cells to produce the same peptides that the body makes to regulate sleep cycles, precisely adjusting timing and dose with bioelectronic controls. When the engineered cells are exposed to light, they will generate precisely dosed peptide therapies. 

“This control system allows us to deliver a peptide of interest on demand, directly into the bloodstream,” said Northwestern’s Jonathan Rivnay, principal investigator of the project. “No need to carry drugs, no need to inject therapeutics and — depending on how long we can make the device last — no need to refill the device. It’s like an implantable pharmacy on a chip that never runs out.” 

Beyond controlling circadian rhythms, the researchers believe this technology could be modified to release other types of therapies with precise timing and dosing for potentially treating pain and disease. The DARPA program also will help researchers better understand sleep/wake cycles, in general.

“The experiments carried out in these studies will enable new insights into how internal circadian organization is maintained,” said Turek [Fred W. Turek], who co-leads the sleep team with Vitaterna [Martha Hotz Vitaterna]. “These insights will lead to new therapeutic approaches for sleep disorders as well as many other physiological and mental disorders, including those associated with aging where there is often a spontaneous breakdown in temporal organization.” 

For those who like to dig even deeper, Dieynaba Young’s June 17, 2021 article for Smithsonian Magazine (GetPocket.com link to article) provides greater context and greater satisfaction, Note: Links have been removed,

In 1926, Fritz Kahn completed Man as Industrial Palace, the preeminent lithograph in his five-volume publication The Life of Man. The illustration shows a human body bustling with tiny factory workers. They cheerily operate a brain filled with switchboards, circuits and manometers. Below their feet, an ingenious network of pipes, chutes and conveyer belts make up the blood circulatory system. The image epitomizes a central motif in Kahn’s oeuvre: the parallel between human physiology and manufacturing, or the human body as a marvel of engineering.

An apparatus in the embryonic stage of development at the time of this writing in June of 2021—the so-called “implantable living pharmacy”—could have easily originated in Kahn’s fervid imagination. The concept is being developed by the Defense Advanced Research Projects Agency (DARPA) in conjunction with several universities, notably Northwestern and Rice. Researchers envision a miniaturized factory, tucked inside a microchip, that will manufacture pharmaceuticals from inside the body. The drugs will then be delivered to precise targets at the command of a mobile application. …

The implantable living pharmacy, which is still in the “proof of concept” stage of development, is actually envisioned as two separate devices—a microchip implant and an armband. The implant will contain a layer of living synthetic cells, along with a sensor that measures temperature, a short-range wireless transmitter and a photo detector. The cells are sourced from a human donor and reengineered to perform specific functions. They’ll be mass produced in the lab, and slathered onto a layer of tiny LED lights.

The microchip will be set with a unique identification number and encryption key, then implanted under the skin in an outpatient procedure. The chip will be controlled by a battery-powered hub attached to an armband. That hub will receive signals transmitted from a mobile app.

If a soldier wishes to reset their internal clock, they’ll simply grab their phone, log onto the app and enter their upcoming itinerary—say, a flight departing at 5:30 a.m. from Arlington, Virginia, and arriving 16 hours later at Fort Buckner in Okinawa, Japan. Using short-range wireless communications, the hub will receive the signal and activate the LED lights inside the chip. The lights will shine on the synthetic cells, stimulating them to generate two compounds that are naturally produced in the body. The compounds will be released directly into the bloodstream, heading towards targeted locations, such as a tiny, centrally-located structure in the brain called the suprachiasmatic nucleus (SCN) that serves as master pacemaker of the circadian rhythm. Whatever the target location, the flow of biomolecules will alter the natural clock. When the soldier arrives in Okinawa, their body will be perfectly in tune with local time.

The synthetic cells will be kept isolated from the host’s immune system by a membrane constructed of novel biomaterials, allowing only nutrients and oxygen in and only the compounds out. Should anything go wrong, they would swallow a pill that would kill the cells inside the chip only, leaving the rest of their body unaffected.

If you have the time, I recommend reading Young’s June 17, 2021 Smithsonian Magazine article (GetPocket.com link to article) in its entirety. Young goes on to discuss hacking, malware, ethical/societal issues, and more.

There is an animation of Kahn’s original poster in a June 23, 2011 posting on openculture.com (also found on Vimeo; Der Mensch als Industriepalast [Man as Industrial Palace]).

Credits: Idea & Animation: Henning M. Lederer / led-r-r.net; Sound-Design: David Indge; and original poster art: Fritz Kahn.

New chip for neuromorphic computing runs at a fraction of the energy of today’s systems

An August 17, 2022 news item on Nanowerk announces big (so to speak) claims from a team researching neuromorphic (brainlike) computer chips,

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of artificial intelligence (AI) applications–all at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, and range from smart watches, to VR headsets, smart earbuds, smart sensors in factories and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as the state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that runs computations in memory, it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

…

An August 17, 2022 University of California at San Diego (UCSD) news release (also on EurekAlert), which originated the news item, provides more detail than usually found in a news release,

“The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility,” said Weier Wan, the paper’s first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering. 

The research team, co-led by bioengineers at the University of California San Diego, presents their results in the Aug. 17 [2022] issue of Nature.

Currently, AI computing is both power hungry and computationally expensive. Most AI applications on edge devices involve moving data from the devices to the cloud, where the AI processes and analyzes it. Then the results are moved back to the device. That’s because most edge devices are battery-powered and as a result only have a limited amount of power that can be dedicated to computing. 

By reducing power consumption needed for AI inference at the edge, this NeuRRAM chip could lead to more robust, smarter and accessible edge devices and smarter manufacturing. It could also lead to better data privacy as the transfer of data from devices to the cloud comes with increased security risks. 

On AI chips, moving data from memory to computing units is one major bottleneck. 

“It’s the equivalent of doing an eight-hour commute for a two-hour work day,” Wan said. 

To solve this data transfer issue, researchers used what is known as resistive random-access memory, a type of non-volatile memory that allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan’s advisor at Stanford and a main contributor to this work. Computation with RRAM chips is not necessarily new, but generally it leads to a decrease in the accuracy of the computations performed on the chip and a lack of flexibility in the chip’s architecture. 

“Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago,” Cauwenberghs said.  “What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms.”

A carefully crafted methodology was key to the work with multiple levels of “co-optimization” across the abstraction layers of hardware and software, from the design of the chip to its configuration to run various AI tasks. In addition, the team made sure to account for various constraints that span from memory device physics to circuits and network architecture. 

“This chip now provides us with a platform to address these problems across the stack from devices and circuits to algorithms,” said Siddharth Joshi, an assistant professor of computer science and engineering at the University of Notre Dame, who started working on the project as a Ph.D. student and postdoctoral researcher in Cauwenberghs’ lab at UC San Diego. 

Chip performance

Researchers measured the chip’s energy efficiency by a measure known as energy-delay product, or EDP. EDP combines both the amount of energy consumed for every operation and the amount of time it takes to complete the operation. By this measure, the NeuRRAM chip achieves 1.6 to 2.3 times lower EDP (lower is better) and 7 to 13 times higher computational density than state-of-the-art chips. 
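For readers who want to see how that metric works, here is a minimal Python sketch. Only the EDP formula (energy per operation multiplied by time per operation) reflects the definition above; the energy and delay numbers are made-up placeholders, not measurements from the paper.

    # Energy-delay product (EDP): energy per operation x time per operation (lower is better).
    # The numbers below are illustrative placeholders, not figures from the NeuRRAM paper.

    def energy_delay_product(energy_joules, delay_seconds):
        return energy_joules * delay_seconds

    conventional = energy_delay_product(2e-12, 10e-9)   # hypothetical conventional digital chip
    in_memory = energy_delay_product(1e-12, 5e-9)       # hypothetical compute-in-memory chip

    print(f"conventional EDP: {conventional:.2e} J*s")
    print(f"in-memory EDP:    {in_memory:.2e} J*s")
    print(f"EDP ratio:        {conventional / in_memory:.1f}x lower")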

Researchers ran various AI tasks on the chip. It achieved 99% accuracy on a handwritten digit recognition task; 85.7% on an image classification task; and 84.7% on a Google speech command recognition task. In addition, the chip also achieved a 70% reduction in image-reconstruction error on an image-recovery task. These results are comparable to existing digital chips that perform computation under the same bit-precision, but with drastic savings in energy. 

Researchers point out that one key contribution of the paper is that all the results featured are obtained directly on the hardware. In many previous works of compute-in-memory chips, AI benchmark results were often obtained partially by software simulation. 

Next steps include improving architectures and circuits and scaling the design to more advanced technology nodes. Researchers also plan to tackle other applications, such as spiking neural networks.

“We can do better at the device level, improve circuit design to implement additional features and address diverse applications with our dynamic NeuRRAM platform,” said Rajkumar Kubendran, an assistant professor for the University of Pittsburgh, who started work on the project while a Ph.D. student in Cauwenberghs’ research group at UC San Diego.

In addition, Wan is a founding member of a startup that works on productizing the compute-in-memory technology. “As a researcher and  an engineer, my ambition is to bring research innovations from labs into practical use,” Wan said. 

New architecture 

The key to NeuRRAM’s energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. 

In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of RRAM weights. The neuron’s connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure. 

To make sure that accuracy of the AI computations can be preserved across various neural network architectures, researchers developed a set of hardware algorithm co-optimization techniques. The techniques were verified on various neural networks including convolutional neural networks, long short-term memory, and restricted Boltzmann machines. 

As a neuromorphic AI chip, NeuRRAM performs parallel distributed processing across 48 neurosynaptic cores. To simultaneously achieve high versatility and high efficiency, NeuRRAM supports data-parallelism by mapping a layer in the neural network model onto multiple cores for parallel inference on multiple data. Also, NeuRRAM offers model-parallelism by mapping different layers of a model onto different cores and performing inference in a pipelined fashion.
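To make the two mapping strategies concrete, here is a small Python sketch of my own (not the NeuRRAM software toolchain): model parallelism assigns each layer to its own core and pipelines data through them, while data parallelism copies one layer across cores and splits the input batch between them.

    # Illustrative only: two ways of mapping a 4-layer network onto 4 cores.
    layers = ["conv1", "conv2", "fc1", "fc2"]
    num_cores = 4

    # Model parallelism: one layer per core; inputs flow through the cores as a pipeline.
    model_parallel = {f"core{i}": layer for i, layer in enumerate(layers)}

    # Data parallelism: the same layer is duplicated on every core and each core
    # processes a different slice of the input batch.
    batch = list(range(8))
    data_parallel = {f"core{i}": batch[i::num_cores] for i in range(num_cores)}

    print("model-parallel mapping:", model_parallel)
    print("data-parallel batch slices:", data_parallel)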

An international research team

The work is the result of an international team of researchers. 

The UC San Diego team designed the CMOS circuits that implement the neural functions interfacing with the RRAM arrays to support the synaptic functions in the chip’s architecture, for high efficiency and versatility. Wan, working closely with the entire team, implemented the design; characterized the chip; trained the AI models; and executed the experiments. Wan also developed a software toolchain that maps AI applications onto the chip. 

The RRAM synapse array and its operating conditions were extensively characterized and optimized at Stanford University. 

The RRAM array was fabricated and integrated onto CMOS at Tsinghua University. 

The Team at Notre Dame contributed to both the design and architecture of the chip and the subsequent machine learning model design and training.

The research started as part of the National Science Foundation funded Expeditions in Computing project on Visual Cortex on Silicon at Penn State University, with continued funding support from the Office of Naval Research Science of AI program, the Semiconductor Research Corporation and DARPA [{US} Defense Advanced Research Projects Agency] JUMP program, and Western Digital Corporation. 

Here’s a link to and a citation for the paper,

A compute-in-memory chip based on resistive random-access memory by Weier Wan, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H.-S. Philip Wong & Gert Cauwenberghs. Nature volume 608, pages 504–512 (2022) DOI: https://doi.org/10.1038/s41586-022-04992-8 Published: 17 August 2022 Issue Date: 18 August 2022

This paper is open access.

Synaptic transistors for brainlike computers based on (more environmentally friendly) graphene

An August 9, 2022 news item on ScienceDaily describes research investigating materials other than silicon for neuromorphic (brainlike) computing purposes,

Computers that think more like human brains are inching closer to mainstream adoption. But many unanswered questions remain. Among the most pressing: What types of materials can serve as the best building blocks to unlock the potential of this new style of computing?

For most traditional computing devices, silicon remains the gold standard. However, there is a movement to use more flexible, efficient and environmentally friendly materials for these brain-like devices.

In a new paper, researchers from The University of Texas at Austin developed synaptic transistors for brain-like computers using the thin, flexible material graphene. These transistors are similar to synapses in the brain, which connect neurons to each other.

An August 8, 2022 University of Texas at Austin news release (also on EurekAlert but published August 9, 2022), which originated the news item, provides more detail about the research,

“Computers that think like brains can do so much more than today’s devices,” said Jean Anne Incorvia, an assistant professor in the Cockrell School of Engineering’s Department of Electrical and Computer Engineering and the lead author on the paper published today in Nature Communications. “And by mimicking synapses, we can teach these devices to learn on the fly, without requiring huge training methods that take up so much power.”

The Research: A combination of graphene and Nafion, a polymer membrane material, makes up the backbone of the synaptic transistor. Together, these materials demonstrate key synaptic-like behaviors — most importantly, the ability for the pathways to strengthen over time as they are used more often, a type of neural muscle memory. In computing, this means that devices will be able to get better at tasks like recognizing and interpreting images over time and do it faster.
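As a rough illustration of that “strengthens with use” behaviour (my own toy model, not the device physics reported in the paper), a synaptic weight can be nudged toward a maximum value a little more each time the pathway is activated:

    # Toy potentiation model: each activation nudges the weight toward w_max,
    # with diminishing returns, loosely mimicking "neural muscle memory".
    def potentiate(weight, pulses, rate=0.2, w_max=1.0):
        for _ in range(pulses):
            weight += rate * (w_max - weight)
        return weight

    for n in (1, 5, 10):
        print(f"after {n:2d} activations: weight = {potentiate(0.1, n):.3f}")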

Another important finding is that these transistors are biocompatible, which means they can interact with living cells and tissue. That is key for potential applications in medical devices that come into contact with the human body. Most materials used for these early brain-like devices are toxic, so they would not be able to contact living cells in any way.

Why It Matters: With new high-tech concepts like self-driving cars, drones and robots, we are reaching the limits of what silicon chips can efficiently do in terms of data processing and storage. For these next-generation technologies, a new computing paradigm is needed. Neuromorphic devices mimic processing capabilities of the brain, a powerful computer for immersive tasks.

“Biocompatibility, flexibility, and softness of our artificial synapses is essential,” said Dmitry Kireev, a post-doctoral researcher who co-led the project. “In the future, we envision their direct integration with the human brain, paving the way for futuristic brain prosthesis.”

Will It Really Happen: Neuromorphic platforms are starting to become more common. Leading chipmakers such as Intel and Samsung have either produced neuromorphic chips already or are in the process of developing them. However, current chip materials place limitations on what neuromorphic devices can do, so academic researchers are working hard to find the perfect materials for soft brain-like computers.

“It’s still a big open space when it comes to materials; it hasn’t been narrowed down to the next big solution to try,” Incorvia said. “And it might not be narrowed down to just one solution, with different materials making more sense for different applications.”

The Team: The research was led by Incorvia and Deji Akinwande, professor in the Department of Electrical and Computer Engineering. The two have collaborated many times together in the past, and Akinwande is a leading expert in graphene, using it in multiple research breakthroughs, most recently as part of a wearable electronic tattoo for blood pressure monitoring.

The idea for the project was conceived by Samuel Liu, a Ph.D. student and first author on the paper, in a class taught by Akinwande. Kireev then suggested the specific project. Harrison Jin, an undergraduate electrical and computer engineering student, measured the devices and analyzed data.

The team collaborated with T. Patrick Xiao and Christopher Bennett of Sandia National Laboratories, who ran neural network simulations and analyzed the resulting data.

Here’s a link to and a citation for the ‘graphene transistor’ paper,

Metaplastic and energy-efficient biocompatible graphene artificial synaptic transistors for enhanced accuracy neuromorphic computing by Dmitry Kireev, Samuel Liu, Harrison Jin, T. Patrick Xiao, Christopher H. Bennett, Deji Akinwande & Jean Anne C. Incorvia. Nature Communications volume 13, Article number: 4386 (2022) DOI: https://doi.org/10.1038/s41467-022-32078-6 Published: 28 July 2022

This paper is open access.

Neuromorphic computing and liquid-light interaction

Simulation result of light affecting liquid geometry, which in turn affects reflection and transmission properties of the optical mode, thus constituting a two-way light–liquid interaction mechanism. The degree of deformation serves as an optical memory, allowing it to store the power magnitude of the previous optical pulse and to use fluid dynamics to affect the subsequent optical pulse at the same actuation region, thus constituting an architecture where memory is part of the computation process. Credit: Gao et al., doi 10.1117/1.AP.4.4.046005

This is a fascinating approach to neuromorphic (brainlike) computing and, given my recent post (August 29, 2022) about human cells being incorporated into computer chips, it’s part of my recent spate of posts about neuromorphic computing. From a July 25, 2022 news item on phys.org,

Sunlight sparkling on water evokes the rich phenomena of liquid-light interaction, spanning spatial and temporal scales. While the dynamics of liquids have fascinated researchers for decades, the rise of neuromorphic computing has sparked significant efforts to develop new, unconventional computational schemes based on recurrent neural networks, crucial to supporting a wide range of modern technological applications, such as pattern recognition and autonomous driving. As biological neurons also rely on a liquid environment, a convergence may be attained by bringing nanoscale nonlinear fluid dynamics to neuromorphic computing.

A July 25, 2022 SPIE (International Society for Optics and Photonics) press release (also on EurekAlert), which originated the news item, provides more detail,

Researchers from University of California San Diego recently proposed a novel paradigm where liquids, which usually do not strongly interact with light on a micro- or nanoscale, support significant nonlinear response to optical fields. As reported in Advanced Photonics, the researchers predict a substantial light–liquid interaction effect through a proposed nanoscale gold patch operating as an optical heater and generating thickness changes in a liquid film covering the waveguide.

The liquid film functions as an optical memory. Here’s how it works: Light in the waveguide affects the geometry of the liquid surface, while changes in the shape of the liquid surface affect the properties of the optical mode in the waveguide, thus constituting a mutual coupling between the optical mode and the liquid film. Importantly, as the liquid geometry changes, the properties of the optical mode undergo a nonlinear response; after the optical pulse stops, the magnitude of liquid film’s deformation indicates the power of the previous optical pulse.

Remarkably, unlike traditional computational approaches, the nonlinear response and the memory reside in the same spatial region, thus suggesting realization of a compact (beyond von-Neumann) architecture where memory and computational unit occupy the same space. The researchers demonstrate that the combination of memory and nonlinearity allows the possibility of “reservoir computing” capable of performing digital and analog tasks, such as nonlinear logic gates and handwritten image recognition.
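For readers unfamiliar with reservoir computing, here is a minimal software sketch (a generic echo state network in Python with NumPy). It is not a model of the optofluidic device; it only illustrates the scheme: a fixed nonlinear system with fading memory is driven by the input, and only a simple linear readout is trained.

    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_reservoir, n_steps = 1, 100, 500

    # Fixed random reservoir; in the paper this role is played by light-liquid dynamics.
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # scale for fading memory

    u = rng.uniform(-1, 1, (n_steps, 1))            # random input signal
    target = np.roll(u[:, 0], 3)                    # task: recall the input from 3 steps ago

    x = np.zeros(n_reservoir)
    states = []
    for t in range(n_steps):
        x = np.tanh(W @ x + W_in @ u[t])            # nonlinear state with memory
        states.append(x.copy())
    states = np.array(states)

    # Only the linear readout is trained (ridge regression); the reservoir stays fixed.
    X, y = states[50:], target[50:]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_reservoir), X.T @ y)
    print("mean squared readout error:", float(np.mean((X @ W_out - y) ** 2)))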

Their model also exploits another significant liquid feature: nonlocality. This enables them to predict computation enhancement that is simply not possible in solid state material platforms with limited nonlocal spatial scale. Despite nonlocality, the model does not quite achieve the levels of modern solid-state optics-based reservoir computing systems, yet the work nonetheless presents a clear roadmap for future experimental works aiming to validate the predicted effects and explore intricate coupling mechanisms of various physical processes in a liquid environment for computation.

Using multiphysics simulations to investigate coupling between light, fluid dynamics, heat transport, and surface tension effects, the researchers predict a family of novel nonlinear and nonlocal optical effects. They go a step further by indicating how these can be used to realize versatile, nonconventional computational platforms. Taking advantage of a mature silicon photonics platform, they suggest improvements to state-of-the-art liquid-assisted computation platforms by around five orders of magnitude in space and at least two orders of magnitude in speed.

Here’s a link to and a citation for the paper,

Thin liquid film as an optical nonlinear-nonlocal medium and memory element in integrated optofluidic reservoir computer by Chengkuan Gao, Prabhav Gaur, Shimon Rubin, Yeshaiahu Fainman. Advanced Photonics, 4(4), 046005 (2022). https://doi.org/10.1117/1.AP.4.4.046005 Published: 1 July 2022

This paper is open access.

Guide for memristive hardware design

An August 15, 2022 news item on ScienceDaily announces a type of guide for memristive hardware design,

They are many times faster than flash memory and require significantly less energy: memristive memory cells could revolutionize the energy efficiency of neuromorphic [brainlike] computers. In these computers, which are modeled on the way the human brain works, memristive cells function like artificial synapses. Numerous groups around the world are working on the use of corresponding neuromorphic circuits — but often with a lack of understanding of how they work and with faulty models. Jülich researchers have now summarized the physical principles and models in a comprehensive review article in the renowned journal Advances in Physics.

An August 15, 2022 Forschungszentrum Juelich press release (also on EurekAlert), which originated the news item, describes two papers designed to help researchers better understand and design memristive hardware,

Certain tasks – such as recognizing patterns and language – are performed highly efficiently by a human brain, requiring only about one ten-thousandth of the energy of a conventional, so-called “von Neumann” computer. One of the reasons lies in the structural differences: In a von Neumann architecture, there is a clear separation between memory and processor, which requires constant moving of large amounts of data. This is time and energy consuming – the so-called von Neumann bottleneck. In the brain, the computational operation takes place directly in the data memory and the biological synapses perform the tasks of memory and processor at the same time.

In Jülich, scientists have been working for more than 15 years on special data storage devices and components that can have similar properties to the synapses in the human brain. So-called memristive memory devices, also known as memristors, are considered to be extremely fast, energy-saving and can be miniaturized very well down to the nanometer range. The functioning of memristive cells is based on a very special effect: Their electrical resistance is not constant, but can be changed and reset again by applying an external voltage, theoretically continuously. The change in resistance is controlled by the movement of oxygen ions. If these move out of the semiconducting metal oxide layer, the material becomes more conductive and the electrical resistance drops. This change in resistance can be used to store information.
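For a sense of how such a resistance change is modelled, here is a toy version of a textbook-style linear ion-drift memristor model in Python. It is emphatically not one of the Jülich JART compact models mentioned below, and all parameter values (including the unrealistically long pulses) are chosen only so the state change is visible in a quick run.

    R_ON, R_OFF = 100.0, 16_000.0   # low/high resistance limits (ohms), illustrative
    D = 10e-9                       # oxide thickness (m)
    MU_V = 1e-14                    # ion mobility (m^2 s^-1 V^-1)

    def simulate(voltage_pulses, dt=0.05, w=0.5 * D):
        """Return the resistance seen at each applied voltage step."""
        history = []
        for v in voltage_pulses:
            r = R_ON * (w / D) + R_OFF * (1 - w / D)
            i = v / r
            # The current drives ion drift, moving the doped/undoped boundary w
            # and therefore changing the resistance (clamped to the device thickness).
            w = min(max(w + MU_V * (R_ON / D) * i * dt, 0.0), D)
            history.append(round(r, 1))
        return history

    # Positive pulses lower the resistance (SET); negative pulses raise it again (RESET).
    print(simulate([1.0] * 5 + [-1.0] * 5))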

The processes that can occur in cells are very complex and vary depending on the material system. Three researchers from the Jülich Peter Grünberg Institute – Prof. Regina Dittmann, Dr. Stephan Menzel, and Prof. Rainer Waser – have therefore compiled their research results in a detailed review article, “Nanoionic memristive phenomena in metal oxides: the valence change mechanism”. They explain in detail the various physical and chemical effects in memristors and shed light on the influence of these effects on the switching properties of memristive cells and their reliability.

“If you look at current research activities in the field of neuromorphic memristor circuits, they are often based on empirical approaches to material optimization,” said Rainer Waser, director at the Peter Grünberg Institute. “Our goal with our review article is to give researchers something to work with in order to enable insight-driven material optimization.” The team of authors worked on the approximately 200-page article for ten years and naturally had to keep incorporating advances in knowledge.

“The analogous functioning of memristive cells required for their use as artificial synapses is not the normal case. Usually, there are sudden jumps in resistance, generated by the mutual amplification of ionic motion and Joule heat,” explains Regina Dittmann of the Peter Grünberg Institute. “In our review article, we provide researchers with the necessary understanding of how to change the dynamics of the cells to enable an analog operating mode.”

“You see time and again that groups simulate their memristor circuits with models that don’t take into account high dynamics of the cells at all. These circuits will never work.” said Stephan Menzel, who leads modeling activities at the Peter Grünberg Institute and has developed powerful compact models that are now in the public domain (www.emrl.de/jart.html). “In our review article, we provide the basics that are extremely helpful for a correct use of our compact models.”

Roadmap neuromorphic computing

The “Roadmap of Neuromorphic Computing and Engineering”, which was published in May 2022, shows how neuromorphic computing can help to reduce the enormous energy consumption of IT globally. In it, researchers from the Peter Grünberg Institute (PGI-7), together with leading experts in the field, have compiled the various technological possibilities, computational approaches, learning algorithms and fields of application. 

According to the study, applications in the field of artificial intelligence, such as pattern recognition or speech recognition, are likely to benefit in a special way from the use of neuromorphic hardware. This is because they are based – much more so than classical numerical computing operations – on the shifting of large amounts of data. Memristive cells make it possible to process these gigantic data sets directly in memory without transporting them back and forth between processor and memory. This could improve the energy efficiency of artificial neural networks by orders of magnitude.
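The “processing in memory” idea can be illustrated with a few lines of Python (my own sketch, not from the roadmap): if each memristive cell’s conductance stores a weight, then applying input voltages to the rows of a crossbar gives column currents that are already the weighted sums a neural network layer needs.

    import numpy as np

    # Conductances programmed into a 2x3 memristive crossbar act as the weight matrix.
    G = np.array([[0.2, 0.8, 0.1],
                  [0.5, 0.3, 0.9]])
    v = np.array([1.0, 0.4])        # input voltages applied to the rows

    # Ohm's law and Kirchhoff's current law sum the contributions along each column,
    # so the matrix-vector multiply happens in the memory array itself.
    column_currents = v @ G
    print("column currents (weighted sums):", column_currents)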

Memristive cells can also be interconnected to form high-density matrices that enable neural networks to learn locally. This so-called edge computing thus shifts computations from the data center to the factory floor, the vehicle, or the home of people in need of care. Thus, monitoring and controlling processes or initiating rescue measures can be done without sending data via a cloud. “This achieves two things at the same time: you save energy, and at the same time, personal data and data relevant to security remain on site,” says Prof. Dittmann, who played a key role in creating the roadmap as editor.

Here’s a link to and a citation for the ‘roadmap’,

2022 roadmap on neuromorphic computing and engineering by Dennis V Christensen, Regina Dittmann, Bernabe Linares-Barranco, Abu Sebastian, Manuel Le Gallo, Andrea Redaelli, Stefan Slesazeck, Thomas Mikolajick, Sabina Spiga, Stephan Menzel, Ilia Valov, Gianluca Milano, Carlo Ricciardi, Shi-Jun Liang, Feng Miao, Mario Lanza, Tyler J Quill, Scott T Keene, Alberto Salleo, Julie Grollier, Danijela Marković, Alice Mizrahi, Peng Yao, J Joshua Yang, Giacomo Indiveri, John Paul Strachan, Suman Datta, Elisa Vianello, Alexandre Valentian, Johannes Feldmann, Xuan Li, Wolfram H P Pernice, Harish Bhaskaran, Steve Furber, Emre Neftci, Franz Scherr, Wolfgang Maass, Srikanth Ramaswamy, Jonathan Tapson, Priyadarshini Panda, Youngeun Kim, Gouhei Tanaka, Simon Thorpe, Chiara Bartolozzi, Thomas A Cleland, Christoph Posch, ShihChii Liu, Gabriella Panuccio, Mufti Mahmud, Arnab Neelim Mazumder, Morteza Hosseini, Tinoosh Mohsenin, Elisa Donati, Silvia Tolu, Roberto Galeazzi, Martin Ejsing Christensen, Sune Holm, Daniele Ielmini and N Pryds. Neuromorphic Computing and Engineering , Volume 2, Number 2 DOI: 10.1088/2634-4386/ac4a83 20 May 2022 • © 2022 The Author(s)

This paper is open access.

Here’s the most recent paper,

Nanoionic memristive phenomena in metal oxides: the valence change mechanism by Regina Dittmann, Stephan Menzel & Rainer Waser. Advances in Physics, Volume 70, 2021, Issue 2, Pages 155-349 DOI: https://doi.org/10.1080/00018732.2022.2084006 Published online: 06 Aug 2022

This paper is behind a paywall.

Memristive forming strategy

This is highly technical and it’s here since I’m informally collecting all the research that I stumble across concerning memristors and neuromorphic engineering.

From a Sept. 5, 2022 news item on Nanowerk, Note: A link has been removed,

The silicon-based CMOS [complementary metal-oxide-semiconductor] technology is fast approaching its physical limits, and the electronics industry is urgently calling for new techniques to keep the long-term development. Two-dimensional (2D) semiconductors, like transition-metal dichalcogenides (TMDs), have become a competitive alternative to traditional semiconducting materials in the post-Moore era, and caused worldwide interest. However, before they can be used in practical applications, some key obstacles must be resolved. One of them is the large electrical contact resistances at the metal-semiconductor interfaces.

The large contact resistances mainly come from two aspects: the high tunneling barrier caused by the wide van der Waals (vdW) gap between the 2D material and the metal electrode; the high Schottky barrier accompanied by strong Fermi level pinning at the metal-semiconductor interface.

Four strategies including edge contact, doping TMDs, phase engineering, and using special metals, have been developed to address this problem. However, they all have shortcomings.

In a new work (Nano Letters, “Van der Waals Epitaxy and Photoresponse of Hexagonal Tellurium Nanoplates on Flexible Mica Sheets”) coming out of Zhenxing Wang’s group at the National Center for Nanoscience and Technology [located in Beijing, China], the researchers have proposed a brand-new contact resistance lowering strategy of 2D semiconductors with a good feasibility, a wide generality and a high stability.

You can fill in the blanks at Nanowerk or there’s this link to and citation for the paper,

Van der Waals Epitaxy and Photoresponse of Hexagonal Tellurium Nanoplates on Flexible Mica Sheets by Qisheng Wang, Muhammad Safdar, Kai Xu, Misbah Mirza, Zhenxing Wang, and Jun He. ACS Nano 2014, 8, 7, 7497–7505 DOI: https://doi.org/10.1021/nn5028104 Publication Date:July 2, 2014 Copyright © 2014 American Chemical Society

This paper is behind a paywall.

Reconfiguring a LEGO-like AI chip with light

MIT engineers have created a reconfigurable AI chip that comprises alternating layers of sensing and processing elements that can communicate with each other. Credit: Figure courtesy of the researchers and edited by MIT News

This image certainly challenges any ideas I have about what Lego looks like. It seems they see things differently at the Massachusetts Institute of Technology (MIT). From a June 13, 2022 MIT news release (also on EurekAlert),

Imagine a more sustainable future, where cellphones, smartwatches, and other wearable devices don’t have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip — like LEGO bricks incorporated into an existing build. Such reconfigurable chipware could keep devices up to date while reducing our electronic waste. 

Now MIT engineers have taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip.

The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow for the chip’s layers to communicate optically. Other modular chip designs employ conventional wiring to relay signals between layers. Such intricate connections are difficult if not impossible to sever and rewire, making such stackable designs not reconfigurable.

The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.

“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”

The researchers are eager to apply the design to edge computing devices — self-sufficient sensors and other electronics that work independently from any central or distributed resources such as supercomputers or cloud-based computing.

“As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”

The team’s results are published today in Nature Electronics. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere.

Lighting the way

The team’s design is currently configured to carry out basic image-recognition tasks. It does so via a layering of image sensors, LEDs, and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the team previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on a chip, without the need for external software or an Internet connection.

In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters — in this case, M, I, and T. While a conventional approach would be to relay a sensor’s signals to a processor via physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers, without requiring a physical connection. 

“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”

The team’s optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. Photodetectors constitute an image sensor for receiving data, and LEDs transmit data to the next layer. As a signal (for instance an image of a letter) reaches the image sensor, the image’s light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.

Stacking up

The team fabricated a single chip, with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixellated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the larger the chance that the image is indeed the letter that the particular array is trained to recognize.)
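The read-out rule described above amounts to a one-line decision: whichever letter block produces the largest current is taken as the answer. A tiny Python sketch with made-up current values:

    # Hypothetical currents (arbitrary units) from the three recognition blocks.
    currents = {"M": 0.42, "I": 0.87, "T": 0.31}
    predicted = max(currents, key=currents.get)
    print("predicted letter:", predicted)   # -> "I"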

The team found that the chip correctly classified clear images of each letter, but it was less able to distinguish between blurry images, for instance between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better “denoising” processor, and found the chip then accurately identified the images.

“We showed stackability, replaceability, and the ability to insert a new function into the chip,” notes MIT postdoc Min-Kyu Song.

The researchers plan to add more sensing and processing capabilities to the chip, and they envision the applications to be boundless.

“We can add layers to a cellphone’s camera so it could recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” offers Choi, who along with Kim previously developed a “smart” skin for monitoring vital signs.

Another idea, he adds, is for modular chips, built into electronics, that consumers can choose to build up with the latest sensor and processor “bricks.”

“We can make a general chip platform, and each layer could be sold separately like a video game,” Jeehwan Kim says. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”

This research was supported, in part, by the Ministry of Trade, Industry, and Energy (MOTIE) from South Korea; the Korea Institute of Science and Technology (KIST); and the Samsung Global Research Outreach Program.

Here’s a link to and a citation for the paper,

Reconfigurable heterogeneous integration using stackable chips with embedded artificial intelligence by Chanyeol Choi, Hyunseok Kim, Ji-Hoon Kang, Min-Kyu Song, Hanwool Yeon, Celesta S. Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Jaeyong Lee, Ikbeom Jang, Subeen Pang, Kanghyun Ryu, Sang-Hoon Bae, Yifan Nie, Hyun S. Kum, Min-Chul Park, Suyoun Lee, Hyung-Jun Kim, Huaqiang Wu, Peng Lin & Jeehwan Kim. Nature Electronics volume 5, pages 386–393 (2022) 05 May 2022 Issue Date: June 2022 Published: 13 June 2022 DOI: https://doi.org/10.1038/s41928-022-00778-y

This paper is behind a paywall.

Photonic synapses with low power consumption (and a few observations)

This work on brainlike (neuromorphic) computing was announced in a June 30, 2022 Compuscript Ltd news release on EurekAlert,

Photonic synapses with low power consumption and high sensitivity are expected to integrate sensing-memory-preprocessing capabilities

A new publication from Opto-Electronic Advances; DOI 10.29026/oea.2022.210069 discusses how photonic synapses with low power consumption and high sensitivity are expected to integrate sensing-memory-preprocessing capabilities.

Neuromorphic photonics/electronics is the future of ultralow energy intelligent computing and artificial intelligence (AI). In recent years, inspired by the human brain, artificial neuromorphic devices have attracted extensive attention, especially in simulating visual perception and memory storage. Because of its advantages of high bandwidth, high interference immunity, ultrafast signal transmission and lower energy consumption, neuromorphic photonic devices are expected to realize real-time response to input data. In addition, photonic synapses can realize non-contact writing strategy, which contributes to the development of wireless communication. The use of low-dimensional materials provides an opportunity to develop complex brain-like systems and low-power memory logic computers. For example, large-scale, uniform and reproducible transition metal dichalcogenides (TMDs) show great potential for miniaturization and low-power biomimetic device applications due to their excellent charge-trapping properties and compatibility with traditional CMOS processes. The von Neumann architecture with discrete memory and processor leads to high power consumption and low efficiency of traditional computing. Therefore, the sensor-memory fusion or sensor-memory- processor integration neuromorphic architecture system can meet the increasingly developing demands of big data and AI for low power consumption and high performance devices. Artificial synaptic devices are the most important components of neuromorphic systems. The performance evaluation of synaptic devices will help to further apply them to more complex artificial neural networks (ANN).

Chemical vapor deposition (CVD)-grown TMDs inevitably introduce defects or impurities, showed a persistent photoconductivity (PPC) effect. TMDs photonic synapses integrating synaptic properties and optical detection capabilities show great advantages in neuromorphic systems for low-power visual information perception and processing as well as brain memory.

The research Group of Optical Detection and Sensing (GODS) have reported a three-terminal photonic synapse based on the large-area, uniform multilayer MoS2 films. The reported device realized ultrashort optical pulse detection within 5 μs and ultralow power consumption about 40 aJ, which means its performance is much better than the current reported properties of photonic synapses. Moreover, it is several orders of magnitude lower than the corresponding parameters of biological synapses, indicating that the reported photonic synapse can be further used for more complex ANN. The photoconductivity of MoS2 channel grown by CVD is regulated by photostimulation signal, which enables the device to simulate short-term synaptic plasticity (STP), long-term synaptic plasticity (LTP), paired-pulse facilitation (PPF) and other synaptic properties. Therefore, the reported photonic synapse can simulate human visual perception, and the detection wavelength can be extended to near infrared light. As the most important system of human learning, visual perception system can receive 80% of learning information from the outside. With the continuous development of AI, there is an urgent need for low-power and high sensitivity visual perception system that can effectively receive external information. In addition, with the assistant of gate voltage, this photonic synapse can simulate the classical Pavlovian conditioning and the regulation of different emotions on memory ability. For example, positive emotions enhance memory ability and negative emotions weaken memory ability. Furthermore, a significant contrast in the strength of STP and LTP based on the reported photonic synapse suggests that it can preprocess the input light signal. These results indicate that the photo-stimulation and backgate control can effectively regulate the conductivity of MoS2 channel layer by adjusting carrier trapping/detrapping processes. Moreover, the photonic synapse presented in this paper is expected to integrate sensing-memory-preprocessing capabilities, which can be used for real-time image detection and in-situ storage, and also provides the possibility to break the von Neumann bottleneck. 
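One of the synaptic behaviours mentioned above, paired-pulse facilitation (PPF), is easy to illustrate with a toy model (mine, not the paper’s): the boost produced by a second light pulse decays exponentially with the interval since the first pulse. The constants below are illustrative fitting parameters, not measured values.

    import math

    def ppf_index(interval_ms, c=80.0, tau_ms=40.0):
        """PPF index (%) = 100 * (A2 - A1) / A1, modelled as an exponential decay
        in the inter-pulse interval; c and tau_ms are illustrative constants."""
        return c * math.exp(-interval_ms / tau_ms)

    for interval in (10, 50, 200):
        print(f"interval {interval:3d} ms -> facilitation {ppf_index(interval):5.1f} %")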

Here’s a link to and a citation for the paper,

Photonic synapses with ultralow energy consumption for artificial visual perception and brain storage by Caihong Li, Wen Du, Yixuan Huang, Jihua Zou, Lingzhi Luo, Song Sun, Alexander O. Govorov, Jiang Wu, Hongxing Xu, Zhiming Wang. Opto-Electron Adv Vol 5, No 9 210069 (2022). doi: 10.29026/oea.2022.210069

This paper is open access.

Observations

I don’t have much to say about the research itself other than that I believe this is the first time I’ve seen a news release about neuromorphic computing research from China.

It’s China that most interests me, especially these bits from the June 30, 2022 Compuscript Ltd news release on EurekAlert,

Group of Optical Detection and Sensing (GODS) [emphasis mine] was established in 2019. It is a research group focusing on compound semiconductors, lasers, photodetectors, and optical sensors. GODS has established a well-equipped laboratory with research facilities such as Molecular Beam Epitaxy system, IR detector test system, etc. GODS is leading several research projects funded by NSFC and National Key R&D Programmes. GODS have published more than 100 research articles in Nature Electronics, Light: Science and Applications, Advanced Materials and other international well-known high-level journals with the total citations beyond 8000.

Jiang Wu obtained his Ph.D. from the University of Arkansas Fayetteville in 2011. After his Ph.D., he joined UESTC as associate professor and later professor. He joined University College London [UCL] as a research associate in 2012 and then lecturer in the Department of Electronic and Electrical Engineering at UCL from 2015 to 2018. He is now a professor at UESTC [University of Electronic Science and Technology of China] [emphases mine]. His research interests include optoelectronic applications of semiconductor heterostructures. He is a Fellow of the Higher Education Academy and Senior Member of IEEE.

Opto-Electronic Advances (OEA) is a high-impact, open access, peer reviewed monthly SCI journal with an impact factor of 9.682 (Journals Citation Reports for IF 2020). Since its launch in March 2018, OEA has been indexed in SCI, EI, DOAJ, Scopus, CA and ICI databases over the time and expanded its Editorial Board to 36 members from 17 countries and regions (average h-index 49). [emphases mine]

The journal is published by The Institute of Optics and Electronics, Chinese Academy of Sciences, aiming at providing a platform for researchers, academicians, professionals, practitioners, and students to impart and share knowledge in the form of high quality empirical and theoretical research papers covering the topics of optics, photonics and optoelectronics.

The research group’s awkward name was almost certainly developed with the rather grandiose acronym, GODS, in mind. I don’t think you could get away with doing this in an English-speaking country as your colleagues would mock you mercilessly.

It’s Jiang Wu’s academic and work history that’s of most interest as it might provide insight into China’s Young Thousand Talents program. A January 5, 2023 American Association for the Advancement of Science (AAAS) news release describes the program,

In a systematic evaluation of China’s Young Thousand Talents (YTT) program, which was established in 2010, researchers find that China has been successful in recruiting and nurturing high-caliber Chinese scientists who received training abroad. Many of these individuals outperform overseas peers in publications and access to funding, the study shows, largely due to access to larger research teams and better research funding in China. Not only do the findings demonstrate the program’s relative success, but they also hold policy implications for the increasing number of governments pursuing means to tap expatriates for domestic knowledge production and talent development. China is a top sender of international students to United States and European Union science and engineering programs. The YTT program was created to recruit and nurture the productivity of high-caliber, early-career, expatriate scientists who return to China after receiving Ph.Ds. abroad. Although there has been a great deal of international attention on the YTT, some associated with the launch of the U.S.’s controversial China Initiative and federal investigations into academic researchers with ties to China, there has been little evidence-based research on the success, impact, and policy implications of the program itself. Dongbo Shi and colleagues evaluated the YTT program’s first 4 cohorts of scholars and compared their research productivity to that of their peers that remained overseas. Shi et al. found that China’s YTT program successfully attracted high-caliber – but not top-caliber – scientists. However, those young scientists that did return outperformed others in publications across journal-quality tiers – particularly in last-authored publications. The authors suggest that this is due to YTT scholars’ greater access to larger research teams and better research funding in China. The authors say the dearth of such resources in the U.S. and E.U. “may not only expedite expatriates’ return decisions but also motivate young U.S.- and E.U.-born scientists to seek international research opportunities.” They say their findings underscore the need for policy adjustments to allocate more support for young scientists.

Here’s a link to and a citation for the paper,

Has China’s Young Thousand Talents program been successful in recruiting and nurturing top-caliber scientists? by Dongbo Shi, Weichen Liu, and Yanbo Wang. Science 5 Jan 2023 Vol 379, Issue 6627 pp. 62-65 DOI: 10.1126/science.abq1218

This paper is behind a paywall.

Kudos to the folks behind China’s Young Thousand Talents program! Jiang Wu’s career appears to be a prime example of the program’s success. Perhaps Canadian policy makers will be inspired.

Making graphite from coal and a few graphite facts

Canada is the 10th largest (1.2%) producer of graphite in the world with China leading the way in the top spot at 68.1%. That’s right, 1.2% can get you into the top 10.

If you’re curious about which countries fill out the other eight spots, Natural Resources Canada has a handy webpage titled, Graphite Facts,

Graphite is a non-metallic mineral that has properties similar to metals, such as a good ability to conduct heat and electricity. Graphite occurs naturally or can be produced synthetically. Purified natural graphite has higher crystalline structure and offers better electrical and thermal conductivity than synthetic material.

Among the many applications, natural and synthetic graphite are used for electrodes, refractories, batteries and lubricants and by foundries. Coated spherical graphite is used to manufacture the anode in lithium-ion batteries. High-grade graphite is also used in fuel cells, semiconductors, LEDs and nuclear reactors.

The Lac des Iles mine is the only mine in Canada that is producing graphite. However, many other companies are working on graphite projects.

Canada’s graphite shipments reached 11,937 tonnes in 2020, up slightly from 11,045 tonnes in 2020 [sic].

Global production and demand for graphite are anticipated to increase in the coming years, largely because of the use of graphite in the batteries of electric vehicles. In 2020, global consumption of graphite reached 2.7 million tonnes. Synthetic graphite accounted for about two-thirds of the graphite consumption, which was largely concentrated in Asia.

In 2020, the value of Canada’s exports of graphite was $31.6 million, a 9% decrease compared to the previous year. Imports also decreased in 2020, by 33% to $20.9 million.

Natural graphite accounted for 46.7% ($14.8 million) of the value of Canada’s exports of graphite and 13.5% ($2.8 million) of Canada’s imports of graphite in 2020. Synthetic graphite accounted for 53.3% ($16.9 million) of Canada’s exports of graphite and 86.5% ($18.0 million) of Canada’s imports of graphite in 2020.

In 2020, the United States was the primary destination for Canada’s exports of natural and synthetic graphite, accounting for 85% and 42% of the total exports, respectively.

I think the writer meant that shipments were up slightly from 2019. The page was last updated on February 4, 2022.
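
While I was at it, I checked how the natural/synthetic split lines up with the export and import totals. Here’s a minimal back-of-the-envelope sketch in Python using only the 2020 figures quoted above; the small mismatches in the output are just rounding in the published numbers.

# Quick check of the 2020 export/import split from the Graphite Facts page.
# Figures are in millions of Canadian dollars; shares are as quoted above.
exports_total = 31.6
imports_total = 20.9

natural_export_share, synthetic_export_share = 0.467, 0.533
natural_import_share, synthetic_import_share = 0.135, 0.865

print(f"natural exports:   ${exports_total * natural_export_share:.1f}M (page says $14.8M)")
print(f"synthetic exports: ${exports_total * synthetic_export_share:.1f}M (page says $16.9M)")
print(f"natural imports:   ${imports_total * natural_import_share:.1f}M (page says $2.8M)")
print(f"synthetic imports: ${imports_total * synthetic_import_share:.1f}M (page says $18.0M)")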

The news from Ohio

A June 10, 2022 news item on Nanowerk about research into a new type of graphite (Note: A link has been removed),

As the world’s appetite for carbon-based materials like graphite increases, Ohio University researchers presented evidence this week for a new carbon solid they named “amorphous graphite.”

Physicist David Drabold and engineer Jason Trembly started with the question, “Can we make graphite from coal?”

“Graphite is an important carbon material with many uses. A burgeoning application for graphite is for battery anodes in lithium-ion batteries, and it is crucial for the electric vehicle industry — a Tesla Model S on average needs 54 kg of graphite. Such electrodes are best if made with pure carbon materials, which are becoming more difficult to obtain owing to spiraling technological demand,” they write in their paper that published in Physical Review Letters (“Ab initio simulation of amorphous graphite”).

Ab initio means from the beginning, and their work pursues novel paths to synthetic forms of graphite from naturally occurring carbonaceous material. What they found, with several different calculations, was a layered material that forms at very high temperatures (about 3000 degrees Kelvin). Its layers stay together due to the formation of an electron gas between the layers, but they’re not the perfect layers of hexagons that make up ideal graphene. This new material has plenty of hexagons, but also pentagons and heptagons. That ring disorder reduces the electrical conductivity of the new material compared with graphene, but the conductivity is still high in the regions dominated largely by hexagons.

A June 10, 2022 Ohio University news release (also on EurekAlert), which originated the news item, delves further into the research (Note: Links have been removed),

Not all hexagons

“In chemistry, the process of converting carbonaceous materials to a layered graphitic structure by thermal treatment at high temperature is called graphitization. In this letter, we show from ab initio and machine learning molecular dynamic simulations that pure carbon networks have an overwhelming proclivity to convert to a layered structure in a significant density and temperature window with the layering occurring even for random starting configurations. The flat layers are amorphous graphene: topologically disordered three-coordinated carbon atoms arranged in planes with pentagons, hexagons and heptagons of carbon,” said Drabold, Distinguished Professor of Physics and Astronomy in the College of Arts and Sciences at Ohio University.

“Since this phase is topologically disordered, the usual ‘stacking registry’ of graphite is only statistically respected,” Drabold said. “The layering is observed without Van der Waals corrections to density functional (LDA and PBE) forces, and we discuss the formation of a delocalized electron gas in the galleries (voids between planes) and show that interplane cohesion is partly due to this low-density electron gas. The in-plane electronic conductivity is dramatically reduced relative to graphene.”

The researchers expect their announcement to spur experimentation and studies addressing the existence of amorphous graphite, which may be testable from exfoliation and/or experimental surface structural probes.

Trembly, Russ Professor of Mechanical Engineering and director of the Institute for Sustainable Energy and the Environment in the Russ College of Engineering and Technology at Ohio University, has been working in part on green uses of coal. He and Drabold — along with physics doctoral students Rajendra Thapa, Chinonso Ugwumadu and Kishor Nepal — collaborated on the research. Drabold also is part of the Nanoscale & Quantum Phenomena Institute at OHIO, and he has published a series of papers on the theory of amorphous carbon and amorphous graphene. Drabold also emphasized the excellent work of his graduate students in carrying out this research.

Surprising interplane cohesion

“The question that led us to this is whether we could make graphite from coal,” Drabold said. “This paper does not fully answer that question, but it shows that carbon has an overwhelming tendency to layer — like graphite, but with many ‘defects’ such as pentagons and heptagons (five- and seven-member rings of carbon atoms), which fit quite naturally into the network. We present evidence that amorphous graphite exists, and we describe its process of formation. It has been suspected from experiments that graphitization occurs near 3,000K, but the details of the formation process and nature of disorder in the planes was unknown,” he added.

The Ohio University researchers’ work is also a prediction of a new phase of carbon.

“Until we did this, it was not at all obvious that layers of amorphous graphene (the planes including pentagons and heptagons) would stick together in a layered structure. I find that quite surprising, and it is likely that experimentalists will go hunting for this stuff now that its existence is predicted,” Drabold said. “Carbon is the miracle element — you can make life, diamond, graphite, Bucky Balls, nanotubes, graphene, [emphasis mine] and now this. There is a lot of interesting basic physics in this, too — for example how and why the planes bind, this by itself is quite surprising for technical reasons.”
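
The ring disorder the researchers describe (planes of three-coordinated carbon containing pentagons, hexagons and heptagons) is something you can count directly from a simulated structure. Here’s a minimal, hypothetical Python sketch of that kind of analysis. It is not the authors’ code; the 1.85 Å bond cutoff and the random coordinates are illustrative placeholders, and a real analysis would use the team’s molecular dynamics snapshots with periodic boundary conditions,

import numpy as np
import networkx as nx

def bond_graph(positions, cutoff=1.85):
    """Treat atoms as nodes and any pair closer than `cutoff` (angstroms) as a C-C bond."""
    graph = nx.Graph()
    graph.add_nodes_from(range(len(positions)))
    for i in range(len(positions)):
        distances = np.linalg.norm(positions[i + 1:] - positions[i], axis=1)
        for offset in np.nonzero(distances < cutoff)[0]:
            graph.add_edge(i, i + 1 + offset)
    return graph

def ring_counts(graph, sizes=(5, 6, 7)):
    """Count 5-, 6- and 7-membered rings from a cycle basis of the bond network."""
    counts = {size: 0 for size in sizes}
    for cycle in nx.cycle_basis(graph):
        if len(cycle) in counts:
            counts[len(cycle)] += 1
    return counts

# Illustrative use with random coordinates; random points will mostly produce an
# empty bond graph, so a real run would load an MD snapshot instead.
rng = np.random.default_rng(seed=0)
positions = rng.uniform(0.0, 20.0, size=(200, 3))
network = bond_graph(positions)
coordination = [degree for _, degree in network.degree()]
print("mean coordination number:", np.mean(coordination))
print("ring counts (5-, 6-, 7-membered):", ring_counts(network))

A cycle basis is only a rough stand-in for proper shortest-path ring statistics, but it gives the flavour of how pentagon, hexagon and heptagon counts are pulled out of a disordered carbon network.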

Here’s a link to and a citation for the paper,

Ab Initio Simulation of Amorphous Graphite by R. Thapa, C. Ugwumadu, K. Nepal, J. Trembly, and D. A. Drabold. Phys. Rev. Lett. 128, 236402 DOI: https://doi.org/10.1103/PhysRevLett.128.236402 Published 10 June 2022 © 2022 American Physical Society

This paper is behind a paywall.

There is an earlier version of the paper which is open access at ArXiv (hosted by Cornell University),

[Submitted on 22 Feb 2022 (v1), last revised 23 Apr 2022 (this version, v2)]

Ab initio simulation of amorphous graphite by Rajendra Thapa, Chinonso Ugwumadu, Kishor Nepal, Jason Trembly, David Drabold

About graphite and Canadian mines

A July 25, 2011 posting, titled “Canadians as hewers of graphite?”, marks the earliest appearance of graphite on this blog. It featured Northern Graphite Corporation, which today (June 21, 2022) is the largest North American graphite producer according to the company’s homepage,

  • Only North American producer
  • Will be 3rd largest non-Chinese producer
  • Two large development projects
  • All projects:
    • In politically stable countries
    • Have “battery quality” graphite
    • Close to infrastructure

There’s also this from the company’s homepage,

Northern owns the Lac des Iles (LDI) mine in Quebec, the only significant graphite producer in North America. Northern plans to increase production and extend the mine life.

Northern is currently upgrading its Okorusu processing plant in Namibia. It will be back on line in 1H 2023 and make Northern the third largest non Chinese graphite producer.

Northern plans to develop its advanced stage Bissett Creek project in Ontario which has a full Feasibility Study. It has been rated as the highest margin graphite deposit in the world.

The Okanjande deposit in Namibia has a very large measured and indicated resource. Northern intends to study building a 150,000tpa plant to supply battery markets in Europe.

I notice the involvement in Namibia. I hope this is a ‘good’ mining company. Canadian mining companies have been known to breach human rights and environmental regulations when operating internationally. There’s a recent tragedy described in this June 20, 2022 news article on the Canadian Broadcasting Corporation (CBC) online news site (Note: A link has been removed),

Trevali Mining Corp. says it has recovered the bodies of the final two of eight workers killed after its Perkoa Mine in Burkina Faso flooded following heavy rainfall on Apr. 16 [2022].

The bodies of the other six workers were recovered by search teams late last month.

The Vancouver-based zinc miner says it is working alongside Burkinabe authorities to coordinate the dewatering and rehabilitation of the mine.

The flooding event is under investigation by the company and government authorities.

MiningWatch Canada, an Ottawa-based industry watchdog, has questioned how well the company was prepared for disaster and criticized the federal government’s lack of regulations on how Canadian mining companies operate internationally. [emphasis mine]

They say tighter rules are necessary for companies operating abroad. 

A May 10, 2022 article for The Tyee by Amanda Follett Hosgood provides more details about the disaster and asks some very pertinent and uncomfortable questions. (Yes, The Tyee is a very ‘left wing’ journalistic effort and they have a point where Canadian mining companies are concerned.)

Getting back to Northern Graphite, there’s this from their Governance page,

Northern Graphite is committed to conducting its activities in a manner that meets best international industry practices regardless of the country or location of operation.  The Company will operate with the highest standards of honesty, integrity, and ethical behaviour.  It will conduct its business in a manner that meets or exceeds all applicable laws, rules, and regulations and meets its social and moral obligations.  This policy applies to all Board members, officers and other employees, contractors, and other third parties working on behalf of or representing the Company.

The company gets more specific, from their Governance page,

  1. Taking all reasonable precautions to ensure the health and safety of workers and others affected by the Company’s operations.
  2. Managing and minimizing the environmental impact of the Company’s operations by following best international practices and standards and meeting stakeholder expectations while recognizing that mining will always have some unavoidable impacts on the environment. 
  3. Utilizing practices and technologies that minimize the Company’s water and carbon footprints.
  4. Respecting the rights, culture and development of local and Indigenous communities.
  5. The elimination of fraud, bribery, and corruption.
  6.  The protection and respect of human rights.
  7. Providing an adequate return to shareholders and investors while ensuring that all stakeholders benefit from the extraction of the earth’s resources through fair labour and compensation practices, local hiring and contracting, community support, and the payment of all applicable government taxes and royalties.

There are two other Canadian mining companies (that I know of) in pursuit of graphite: Lomiko Metals (British Columbia) and Focus Graphite (Ontario). All the mines in Canada, whether they are producing or not, are in either Québec or Ontario.

As for the research team in Ohio, congratulations on your very exciting work!

400 nm thick glucose fuel cell uses body’s own sugar

This May 12, 2022 news item on Nanowerk reminds me of bioenergy harvesting (using the body’s own processes rather than batteries to power implants),

Glucose is the sugar we absorb from the foods we eat. It is the fuel that powers every cell in our bodies. Could glucose also power tomorrow’s medical implants?

Engineers at MIT [Massachusetts Institute of Technology] and the Technical University of Munich think so. They have designed a new kind of glucose fuel cell that converts glucose directly into electricity. The device is smaller than other proposed glucose fuel cells, measuring just 400 nanometers thick. The sugary power source generates about 43 microwatts per square centimeter of electricity, achieving the highest power density of any glucose fuel cell to date under ambient conditions.

Caption: Silicon chip with 30 individual glucose micro fuel cells, seen as small silver squares inside each gray rectangle. Credit: Kent Dayton

A May 12, 2022 MIT news release (also on EurekAlert) by Jennifer Chu, which originated the news item, describes the technology in more detail, Note: A link has been removed,

The new device is also resilient, able to withstand temperatures up to 600 degrees Celsius. If incorporated into a medical implant, the fuel cell could remain stable through the high-temperature sterilization process required for all implantable devices.

The heart of the new device is made from ceramic, a material that retains its electrochemical properties even at high temperatures and miniature scales. The researchers envision the new design could be made into ultrathin films or coatings and wrapped around implants to passively power electronics, using the body’s abundant glucose supply.

“Glucose is everywhere in the body, and the idea is to harvest this readily available energy and use it to power implantable devices,” says Philipp Simons, who developed the design as part of his PhD thesis in MIT’s Department of Materials Science and Engineering (DMSE). “In our work we show a new glucose fuel cell electrochemistry.”

“Instead of using a battery, which can take up 90 percent of an implant’s volume, you could make a device with a thin film, and you’d have a power source with no volumetric footprint,” says Jennifer L.M. Rupp, Simons’ thesis supervisor and a DMSE visiting professor, who is also an associate professor of solid-state electrolyte chemistry at Technical University Munich in Germany.

Simons and his colleagues detail their design today in the journal Advanced Materials. Co-authors of the study include Rupp, Steven Schenk, Marco Gysel, and Lorenz Olbrich.

A “hard” separation

The inspiration for the new fuel cell came in 2016, when Rupp, who specializes in ceramics and electrochemical devices, went to take a routine glucose test toward the end of her pregnancy.

“In the doctor’s office, I was a very bored electrochemist, thinking what you could do with sugar and electrochemistry,” Rupp recalls. “Then I realized, it would be good to have a glucose-powered solid state device. And Philipp and I met over coffee and wrote out on a napkin the first drawings.”

The team is not the first to conceive of a glucose fuel cell, which was initially introduced in the 1960s and showed potential for converting glucose’s chemical energy into electrical energy. But glucose fuel cells at the time were based on soft polymers and were quickly eclipsed by lithium-iodide batteries, which would become the standard power source for medical implants, most notably the cardiac pacemaker.

However, batteries have a limit to how small they can be made, as their design requires the physical capacity to store energy.

“Fuel cells directly convert energy rather than storing it in a device, so you don’t need all that volume that’s required to store energy in a battery,” Rupp says.

In recent years, scientists have taken another look at glucose fuel cells as potentially smaller power sources, fueled directly by the body’s abundant glucose.

A glucose fuel cell’s basic design consists of three layers: a top anode, a middle electrolyte, and a bottom cathode. The anode reacts with glucose in bodily fluids, transforming the sugar into gluconic acid. This electrochemical conversion releases a pair of protons and a pair of electrons. The middle electrolyte acts to separate the protons from the electrons, conducting the protons through the fuel cell, where they combine with air to form molecules of water — a harmless byproduct that flows away with the body’s fluid. Meanwhile, the isolated electrons flow to an external circuit, where they can be used to power an electronic device.

The team looked to improve on existing materials and designs by modifying the electrolyte layer, which is often made from polymers. But polymer properties, along with their ability to conduct protons, easily degrade at high temperatures, are difficult to retain when scaled down to the dimension of nanometers, and are hard to sterilize. The researchers wondered if a ceramic — a heat-resistant material which can naturally conduct protons — could be made into an electrolyte for glucose fuel cells.

“When you think of ceramics for such a glucose fuel cell, they have the advantage of long-term stability, small scalability, and silicon chip integration,” Rupp notes. “They’re hard and robust.”

Peak power

The researchers designed a glucose fuel cell with an electrolyte made from ceria, a ceramic material that possesses high ion conductivity, is mechanically robust, and as such, is widely used as an electrolyte in hydrogen fuel cells. It has also been shown to be biocompatible.

“Ceria is actively studied in the cancer research community,” Simons notes. “It’s also similar to zirconia, which is used in tooth implants, and is biocompatible and safe.”

The team sandwiched the electrolyte with an anode and cathode made of platinum, a stable material that readily reacts with glucose. They fabricated 150 individual glucose fuel cells on a chip, each about 400 nanometers thin, and about 300 micrometers wide (about the width of 30 human hairs). They patterned the cells onto silicon wafers, showing that the devices can be paired with a common semiconductor material. They then measured the current produced by each cell as they flowed a solution of glucose over each wafer in a custom-fabricated test station.

They found many cells produced a peak voltage of about 80 millivolts. Given the tiny size of each cell, this output is the highest power density of any existing glucose fuel cell design.

“Excitingly, we are able to draw power and current that’s sufficient to power implantable devices,” Simons says.

“It is the first time that proton conduction in electroceramic materials can be used for glucose-to-power conversion, defining a new type of electrochemistry,” Rupp says. “It extends the material use-cases from hydrogen fuel cells to new, exciting glucose-conversion modes.”
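
Out of curiosity, I worked out what those numbers imply for a single cell. This is a minimal back-of-the-envelope Python sketch using only the figures quoted above (43 microwatts per square centimeter, a peak voltage of about 80 millivolts, and cells about 300 micrometers wide); the assumption that each cell is a 300 μm × 300 μm square is mine, since the release only gives the width.

# Back-of-the-envelope estimate for one glucose micro fuel cell, based on the
# figures quoted in the MIT news release. The square-cell assumption is mine.
#
# The chemistry, as described above: at the anode, glucose is oxidized to
# gluconic acid, releasing two protons and two electrons (roughly
# C6H12O6 + H2O -> C6H12O7 + 2H+ + 2e-); at the cathode, those protons and
# electrons combine with oxygen to form water.
power_density_w_per_cm2 = 43e-6   # 43 microwatts per square centimeter
peak_voltage_v = 0.080            # about 80 millivolts
cell_width_cm = 300e-4            # about 300 micrometers, i.e. 0.03 cm

cell_area_cm2 = cell_width_cm ** 2                  # assuming a square cell
cell_power_w = power_density_w_per_cm2 * cell_area_cm2
cell_current_a = cell_power_w / peak_voltage_v      # P = V * I, so I = P / V

print(f"area per cell:  {cell_area_cm2:.1e} cm^2")                         # ~9e-4 cm^2
print(f"power per cell: {cell_power_w * 1e9:.0f} nW")                      # ~39 nW
print(f"current per cell at 80 mV: {cell_current_a * 1e6:.2f} microamps")  # ~0.48

Tens of nanowatts per cell sounds tiny on its own; the point of the thin-film approach is that many cells can be fabricated on a wafer and, eventually, wrapped around an implant so the output adds up.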

Here’s a link to and a citation for the paper,

A Ceramic-Electrolyte Glucose Fuel Cell for Implantable Electronics by Philipp Simons, Steven A. Schenk, Marco A. Gysel, Lorenz F. Olbrich, Jennifer L. M. Rupp. Advanced Materials https://doi.org/10.1002/adma.202109075 First published: 05 April 2022

This paper is open access.