Category Archives: electronics

A biohybrid artificial synapse that can communicate with living cells

As I noted in my June 16, 2020 posting, we may have more than one kind of artificial brain in our future. This latest work features a biohybrid. From a June 15, 2020 news item on ScienceDaily,

In 2017, Stanford University researchers presented a new device that mimics the brain’s efficient and low-energy neural learning process [see my March 8, 2017 posting for more]. It was an artificial version of a synapse — the gap across which neurotransmitters travel to communicate between neurons — made from organic materials. In 2019, the researchers assembled nine of their artificial synapses together in an array, showing that they could be simultaneously programmed to mimic the parallel operation of the brain [see my Sept. 17, 2019 posting].

Now, in a paper published June 15 [2020] in Nature Materials, they have tested the first biohybrid version of their artificial synapse and demonstrated that it can communicate with living cells. Future technologies stemming from this device could function by responding directly to chemical signals from the brain. The research was conducted in collaboration with researchers at Istituto Italiano di Tecnologia (Italian Institute of Technology — IIT) in Italy and at Eindhoven University of Technology (Netherlands).

“This paper really highlights the unique strength of the materials that we use in being able to interact with living matter,” said Alberto Salleo, professor of materials science and engineering at Stanford and co-senior author of the paper. “The cells are happy sitting on the soft polymer. But the compatibility goes deeper: These materials work with the same molecules neurons use naturally.”

While other brain-integrated devices require an electrical signal to detect and process the brain’s messages, the communications between this device and living cells occur through electrochemistry — as though the material were just another neuron receiving messages from its neighbor.

A June 15, 2020 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, delves further into this recent work,

How neurons learn

The biohybrid artificial synapse consists of two soft polymer electrodes, separated by a trench filled with electrolyte solution – which plays the part of the synaptic cleft that separates communicating neurons in the brain. When living cells are placed on top of one electrode, neurotransmitters that those cells release can react with that electrode to produce ions. Those ions travel across the trench to the second electrode and modulate the conductive state of this electrode. Some of that change is preserved, simulating the learning process occurring in nature.
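
The mechanism described above can be caricatured numerically. The following Python sketch is purely illustrative: the function name, the gain and retention parameters, and the linear update rule are my own inventions, not the authors' device model. It only captures the qualitative idea that each neurotransmitter-driven reaction shifts the second electrode's conductance and that a fraction of each shift persists.

```python
# Toy model of the biohybrid synapse described above (illustrative only;
# the parameters and the linear retention rule are assumptions, not taken
# from the Nature Materials paper).

def update_conductance(g, neurotransmitter_pulse, gain=0.05, retention=0.8):
    """A pulse reacts at the first electrode, sending ions across the
    electrolyte 'cleft' that modulate the second electrode's conductance;
    a fraction of that change is preserved, mimicking learning."""
    delta = gain * neurotransmitter_pulse   # ion-mediated change
    g_transient = g + delta                 # immediate modulation
    g_retained = g + retention * delta      # part of the change persists
    return g_transient, g_retained

g = 1.0
for pulse in [1.0, 1.0, 0.5]:               # successive dopamine releases
    _, g = update_conductance(g, pulse)
print(round(g, 3))                          # conductance drifts upward
```

Because the dopamine reaction at the electrode is irreversible (as Keene notes below), the retained term never relaxes back in this sketch, which is the "long-term learning" analogy.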

“In a biological synapse, essentially everything is controlled by chemical interactions at the synaptic junction. Whenever the cells communicate with one another, they’re using chemistry,” said Scott Keene, a graduate student at Stanford and co-lead author of the paper. “Being able to interact with the brain’s natural chemistry gives the device added utility.”

This process mimics the same kind of learning seen in biological synapses, which is highly efficient in terms of energy because computing and memory storage happen in one action. In more traditional computer systems, the data is processed first and then later moved to storage.

To test their device, the researchers used rat neuroendocrine cells that release the neurotransmitter dopamine. Before they ran their experiment, they were unsure how the dopamine would interact with their material – but they saw a permanent change in the state of their device upon the first reaction.

“We knew the reaction is irreversible, so it makes sense that it would cause a permanent change in the device’s conductive state,” said Keene. “But, it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab. That was when we realized the potential this has for emulating the long-term learning process of a synapse.”

A first step

This biohybrid design is in such early stages that the main focus of the current research was simply to make it work.

“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”

Now that the researchers have successfully tested their design, they are figuring out the best paths for future research, which could include work on brain-inspired computers, brain-machine interfaces, medical devices or new research tools for neuroscience. Already, they are working on how to make the device function better in more complex biological settings that contain different kinds of cells and neurotransmitters.

Here’s a link to and a citation for the paper,

A biohybrid synapse with neurotransmitter-mediated plasticity by Scott T. Keene, Claudia Lubrano, Setareh Kazemzadeh, Armantas Melianas, Yaakov Tuchman, Giuseppina Polino, Paola Scognamiglio, Lucio Cinà, Alberto Salleo, Yoeri van de Burgt & Francesca Santoro. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0703-y Published: 15 June 2020

This paper is behind a paywall.

Living with a mind-controlled prosthetic

This could be described as the second half of an October 10, 2014 post (Mind-controlled prostheses ready for real world activities). Five and a half years later, Sweden’s Chalmers University of Technology has announced mind-controlled prosthetics in daily use that feature the sense of touch. From an April 30, 2020 Chalmers University of Technology press release (also on EurekAlert but published April 29, 2020) by Johanna Wilde,

For the first time, people with arm amputations can experience sensations of touch in a mind-controlled arm prosthesis that they use in everyday life. A study in the New England Journal of Medicine reports on three Swedish patients who have lived, for several years, with this new technology – one of the world’s most integrated interfaces between human and machine.

See the film: “The most natural robotic prosthesis in the world” [Should you not have Swedish language skills, you can click on the subtitle option in the video’s settings field]

The advance is unique: the patients have used a mind-controlled prosthesis in their everyday life for up to seven years. For the last few years, they have also lived with a new function – sensations of touch in the prosthetic hand. This is a new concept for artificial limbs, which are called neuromusculoskeletal prostheses – as they are connected to the user’s nerves, muscles, and skeleton.

The research was led by Max Ortiz Catalan, Associate Professor at Chalmers University of Technology, in collaboration with Sahlgrenska University Hospital, University of Gothenburg, and Integrum AB, all in Gothenburg, Sweden. Researchers at Medical University of Vienna in Austria and the Massachusetts Institute of Technology in the USA were also involved.

“Our study shows that a prosthetic hand, attached to the bone and controlled by electrodes implanted in nerves and muscles, can operate much more precisely than conventional prosthetic hands. We further improved the use of the prosthesis by integrating tactile sensory feedback that the patients use to mediate how hard to grab or squeeze an object. Over time, the ability of the patients to discern smaller changes in the intensity of sensations has improved,” says Max Ortiz Catalan.

“The most important contribution of this study was to demonstrate that this new type of prosthesis is a clinically viable replacement for a lost arm. No matter how sophisticated a neural interface becomes, it can only deliver real benefit to patients if the connection between the patient and the prosthesis is safe and reliable in the long term. Our results are the product of many years of work, and now we can finally present the first bionic arm prosthesis that can be reliably controlled using implanted electrodes, while also conveying sensations to the user in everyday life”, continues Max Ortiz Catalan.

Since receiving their prostheses, the patients have used them daily in all their professional and personal activities.

The new concept of a neuromusculoskeletal prosthesis is unique in that it delivers several different features which have not been presented together in any other prosthetic technology in the world:

[1] It has a direct connection to a person’s nerves, muscles, and skeleton.

[2] It is mind-controlled and delivers sensations that are perceived by the user as arising from the missing hand.

[3] It is self-contained; all electronics needed are contained within the prosthesis, so patients do not need to carry additional equipment or batteries.

[4] It is safe and stable in the long term; the technology has been used without interruption by patients during their everyday activities, without supervision from the researchers, and it is not restricted to confined or controlled environments.

The newest part of the technology, the sensation of touch, is possible through stimulation of the nerves that used to be connected to the biological hand before the amputation. Force sensors located in the thumb of the prosthesis measure contact and pressure applied to an object while grasping. This information is transmitted to the patients’ nerves leading to their brains. Patients can thus feel when they are touching an object, its characteristics, and how hard they are pressing it, which is crucial for imitating a biological hand.

“Currently, the sensors are not the obstacle for restoring sensation,” says Max Ortiz Catalan. “The challenge is creating neural interfaces that can seamlessly transmit large amounts of artificially collected information to the nervous system, in a way that the user can experience sensations naturally and effortlessly.”

The implantation of this new technology took place at Sahlgrenska University Hospital, led by Professor Rickard Brånemark and Doctor Paolo Sassu. Over a million people worldwide suffer from limb loss, and the end goal for the research team, in collaboration with Integrum AB, is to develop a widely available product suitable for as many of these people as possible.

“Right now, patients in Sweden are participating in the clinical validation of this new prosthetic technology for arm amputation,” says Max Ortiz Catalan. “We expect this system to become available outside Sweden within a couple of years, and we are also making considerable progress with a similar technology for leg prostheses, which we plan to implant in a first patient later this year.”

More about: How the technology works:

The implant system for the arm prosthesis is called e-OPRA and is based on the OPRA implant system created by Integrum AB. The implant system anchors the prosthesis to the skeleton in the stump of the amputated limb, through a process called osseointegration (osseo = bone). Electrodes are implanted in muscles and nerves inside the amputation stump, and the e-OPRA system sends signals in both directions between the prosthesis and the brain, just like in a biological arm.

The prosthesis is mind-controlled, via the electrical muscle and nerve signals sent through the arm stump and captured by the electrodes. The signals are passed into the implant, which goes through the skin and connects to the prosthesis. The signals are then interpreted by an embedded control system developed by the researchers. The control system is small enough to fit inside the prosthesis and it processes the signals using sophisticated artificial intelligence algorithms, resulting in control signals for the prosthetic hand’s movements.
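
As a rough sketch of the control pipeline just described, here is a hypothetical Python fragment: implanted electrodes produce muscle (EMG) signals, a feature is extracted, and a decoder turns it into a hand command. The function names, the mean-absolute-value feature, and the single threshold are all illustrative simplifications; the actual embedded system uses trained machine-learning models, not a one-line rule.

```python
# Hypothetical sketch of the control path described above:
# implanted electrodes -> signal feature -> decoded hand command.
# The feature and threshold are invented for illustration; the real
# controller runs sophisticated AI algorithms inside the prosthesis.

def mean_absolute_value(window):
    """A classic, simple EMG feature: average rectified amplitude."""
    return sum(abs(s) for s in window) / len(window)

def decode_intent(emg_window, activation_threshold=0.5):
    """Toy decoder: strong muscle activation -> 'close' the hand,
    weak activation -> 'open' it."""
    if mean_absolute_value(emg_window) > activation_threshold:
        return "close"
    return "open"

print(decode_intent([0.9, -0.8, 0.7, -0.6]))  # strong activation
```

In the real device this decode step runs continuously on the embedded control system, producing proportional movement commands rather than a binary choice.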

The touch sensations arise from force sensors in the prosthetic thumb. The signals from the sensors are converted by the control system in the prosthesis into electrical signals which are sent to stimulate a nerve in the arm stump. The nerve leads to the brain, which then perceives the pressure levels against the hand.
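
The feedback direction can be sketched the same way: a thumb force reading is mapped to a nerve-stimulation intensity. Again, everything here (the threshold, the linear mapping, the normalized amplitude) is a hypothetical stand-in, not Integrum's actual stimulation scheme.

```python
# Illustrative sketch of the sensory-feedback path described above:
# prosthetic-thumb force reading -> normalized nerve-stimulation amplitude.
# The mapping, threshold, and ranges are assumptions for illustration only.

def force_to_stimulation(force_n, threshold=0.2, max_force=20.0,
                         min_amp=0.1, max_amp=1.0):
    """Map a force-sensor reading (newtons) onto a stimulation
    amplitude in [min_amp, max_amp]; below threshold, no stimulation."""
    if force_n < threshold:
        return 0.0                          # too light to feel
    frac = min(force_n, max_force) / max_force
    return min_amp + (max_amp - min_amp) * frac

print(force_to_stimulation(0.1))            # light contact: no stimulation
print(force_to_stimulation(10.0))           # firm grasp: mid-range amplitude
```

The graded output mirrors what the press release describes below: pressure like a pencil tip at the low end, with the sensation growing stronger as the force increases.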

The neuromusculoskeletal implant can connect to any commercially available arm prosthesis, allowing them to operate more effectively.

More about: How the artificial sensation is experienced:

People who lose an arm or leg often experience phantom sensations, as if the missing body part remains although not physically present. When the force sensors in the prosthetic thumb react, the patients in the study feel that the sensation comes from their phantom hand. Precisely where on the phantom hand varies between patients, depending on which nerves in the stump receive the signals. The lowest level of pressure can be compared to touching the skin with the tip of a pencil. As the pressure increases, the feeling becomes stronger and increasingly ‘electric’.

I have read elsewhere that one of the most difficult aspects of dealing with a prosthetic is the loss of touch. This has to be exciting news for a lot of people. Here’s a link to and a citation for the paper,

Self-Contained Neuromusculoskeletal Arm Prostheses by Max Ortiz-Catalan, Enzo Mastinu, Paolo Sassu, Oskar Aszmann, and Rickard Brånemark. N Engl J Med 2020; 382:1732-1738 DOI: 10.1056/NEJMoa1917537 Published: April 30, 2020

This paper is behind a paywall.

Nanocellulose films made with liquid-phase fabrication method

I always appreciate a reference to Star Trek and three-dimensional chess was one of my favourite concepts. You’ll find that and more in a May 19, 2020 news item on Nanowerk,

Researchers at The Institute of Scientific and Industrial Research at Osaka University [Japan] introduced a new liquid-phase fabrication method for producing nanocellulose films with multiple axes of alignment. Using 3D-printing methods for increased control, this work may lead to cheaper and more environmentally friendly optical and thermal devices.

Ever since appearing on the original Star Trek TV show in the 1960s, the game of “three-dimensional chess” has been used as a metaphor for sophisticated thinking. Now, researchers at Osaka University can say that they have added their own version, with potential applications in advanced optics and inexpensive smartphone displays.

It’s not exactly three-dimensional chess but this nanocellulose film was produced by 3D printing methods,

Caption: Developed multiaxis nanocellulose-oriented film. Credit: Osaka University

A May 20, 2020 Osaka University press release (also on EurekAlert but dated May 19, 2020), which originated the news item, provides more detail,

Many existing optical devices, including liquid-crystal displays (LCDs) found in older flat-screen televisions, rely on long needle-shaped molecules aligned in the same direction. However, getting fibers to line up in multiple directions on the same device is much more difficult. Having a method that can reliably and cheaply produce optical fibers would accelerate the manufacture of low-cost displays or even “paper electronics”–computers that could be printed from biodegradable materials on demand.

Cellulose, the primary component of cotton and wood, is an abundant renewable resource made of long molecules. Nanocelluloses are nanofibers made of uniaxially aligned cellulose molecular chains that have different optical and heat conduction properties along one direction compared to the other.

In newly published research from the Institute of Scientific and Industrial Research at Osaka University, the team harvested nanocellulose from sea pineapples, a kind of sea squirt. They then used liquid-phase 3D-patterning, which combines the wet spinning of nanofibers with the precision of 3D-printing. A custom-made triaxial robot dispensed a nanocellulose aqueous suspension into an acetone coagulation bath.

“We developed this liquid-phase three-dimensional patterning technique to allow for nanocellulose alignment along any preferred axis,” says first author Kojiro Uetani. The direction of the patterns could be programmed so that it formed an alternating checkerboard pattern of vertically- and horizontally-aligned fibers.

To demonstrate the method, a film was sandwiched between two orthogonal polarizing films. Under the proper viewing conditions, a birefringent checkerboard pattern appeared. They also measured the thermal transfer and optical retardation properties.

“Our findings could aid in the development of next-generation optical materials and paper electronics,” says senior author Masaya Nogi. “This could be the start of bottom-up techniques for building sophisticated and energy-efficient optical and thermal materials.”

Here’s a link to and a citation for the paper,

Checkered Films of Multiaxis Oriented Nanocelluloses by Liquid-Phase Three-Dimensional Patterning by Kojiro Uetani, Hirotaka Koga and Masaya Nogi. Nanomaterials 2020, 10(5), 958; DOI: https://doi.org/10.3390/nano10050958 Published: 18 May 2020

This is an open access paper.

China’s neuromorphic chips: Darwin and Tianjic

I believe that China has more than two neuromorphic chips. The two being featured here are the ones for which I was easily able to find information.

The Darwin chip

The first information (that I stumbled across) about China and a neuromorphic chip (Darwin) was in a December 22, 2015 Science China Press news release on EurekAlert,

An Artificial Neural Network (ANN) is a type of information processing system based on mimicking the principles of biological brains, and has been broadly applied in domains such as pattern recognition, automatic control, signal processing, decision support systems and artificial intelligence. A Spiking Neural Network (SNN) is a type of biologically inspired ANN that performs information processing based on discrete-time spikes. It is more biologically realistic than classic ANNs, and can potentially achieve a much better performance-power ratio. Recently, researchers from Zhejiang University and Hangzhou Dianzi University in Hangzhou, China successfully developed the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on Spiking Neural Networks, fabricated with standard CMOS technology.

With the rapid development of the Internet-of-Things and intelligent hardware systems, a variety of intelligent devices are pervasive in today’s society, providing many services and conveniences to people’s lives, but they also raise the challenge of running complex intelligent algorithms on small devices. Sponsored by the College of Computer Science of Zhejiang University, the research group led by Dr. De Ma from Hangzhou Dianzi University and Dr. Xiaolei Zhu from Zhejiang University has developed a co-processor named Darwin. The Darwin NPU aims to provide hardware acceleration of intelligent algorithms, with a target application domain of resource-constrained, low-power small embedded devices. It has been fabricated in a 180nm standard CMOS process, supporting a maximum of 2048 neurons, more than 4 million synapses and 15 different possible synaptic delays. It is highly configurable, supporting reconfiguration of the SNN topology and many parameters of neurons and synapses. Figure 1 shows photos of the die and the prototype development board, which supports input/output in the form of neural spike trains via USB port.

The successful development of Darwin demonstrates the feasibility of real-time execution of Spiking Neural Networks in resource-constrained embedded systems. It supports flexible configuration of a multitude of parameters of the neural network, hence it can be used to implement different functionalities as configured by the user. Its potential applications include intelligent hardware systems, robotics, brain-computer interfaces, and others. Since it uses spikes for information processing and transmission, similar to biological neural networks, it may be suitable for analysis and processing of biological spiking neural signals, and for building brain-computer interface systems by interfacing with animal or human brains. As a prototype application in Brain-Computer Interfaces, Figure 2 [not included here] describes an application example of recognizing the user’s motor imagery intention via real-time decoding of EEG signals, i.e., whether he is thinking of left or right, and using it to control the movement direction of a basketball in the virtual environment. Different from conventional EEG signal analysis algorithms, the input and output to Darwin are both neural spikes: the input is spike trains that encode EEG signals; after processing by the neural network, the output neuron with the highest firing rate is chosen as the classification result.
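
That final decision rule (pick the output neuron that fired most) is simple enough to show in a few lines of Python. The spike counts below are invented for illustration; only the winner-take-all rule itself comes from the description above.

```python
# Sketch of the decoding rule described above: after the SNN processes the
# spike-encoded EEG, the output neuron with the highest firing rate wins.
# The labels and spike counts here are invented for illustration.

def classify_by_firing_rate(spike_counts):
    """spike_counts: dict mapping output label -> spikes observed in the
    decoding window; returns the label of the most active neuron."""
    return max(spike_counts, key=spike_counts.get)

output_spikes = {"left": 42, "right": 17}   # hypothetical window tally
print(classify_by_firing_rate(output_spikes))  # -> left
```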

The most recent development for this chip was announced in a September 2, 2019 Zhejiang University press release (Note: Links have been removed),

The second generation of the Darwin Neural Processing Unit (Darwin NPU 2), as well as its corresponding toolchain and micro-operating system, was released in Hangzhou recently. This research was led by Zhejiang University, with Hangzhou Dianzi University and Huawei Central Research Institute participating in the development of the chip and its algorithms. The Darwin NPU 2 can be primarily applied to the smart Internet of Things (IoT). It can support up to 150,000 neurons, the largest neuron count of any such chip in the country.

The Darwin NPU 2 is fabricated by standard 55nm CMOS technology. Every “neuromorphic” chip is made up of 576 kernels, each of which can support 256 neurons. It contains over 10 million synapses which can construct a powerful brain-inspired computing system.
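
The two capacity figures quoted above are mutually consistent, as a line of arithmetic shows:

```python
# Consistency check of the press release's figures: 576 kernels at
# 256 neurons each gives the "up to 150,000 neurons" (rounded) capacity.
kernels = 576
neurons_per_kernel = 256
print(kernels * neurons_per_kernel)  # 147456
```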

“A brain-inspired chip can work like the neurons inside a human brain and it is remarkably unique in image recognition, visual and audio comprehension and natural language processing,” said MA De, an associate professor at the College of Computer Science and Technology on the research team.

“In comparison with traditional chips, brain-inspired chips are more adept at processing ambiguous data, say, perception tasks. Another prominent advantage is their low energy consumption. In the process of information transmission, only those neurons that receive and process spikes will be activated while other neurons will stay dormant. In this case, energy consumption can be extremely low,” said Dr. ZHU Xiaolei at the School of Microelectronics.

To cater to the demands for voice business, Huawei Central Research Institute designed an efficient spiking neural network algorithm in accordance with the defining feature of the Darwin NPU 2 architecture, thereby increasing computing speeds and improving recognition accuracy tremendously.

Scientists have developed a host of applications, including gesture recognition, image recognition, voice recognition and decoding of electroencephalogram (EEG) signals, on the Darwin NPU 2 and reduced energy consumption by at least two orders of magnitude.

In comparison with the first generation of the Darwin NPU, which was developed in 2015, the Darwin NPU 2 has increased the number of neurons by two orders of magnitude from the original 2048 and augmented the flexibility and plasticity of the chip configuration, thus expanding the potential for applications appreciably. The improvement in the brain-inspired chip will bring in its wake a revolution in computer technology and artificial intelligence. At present, the brain-inspired chip adopts a relatively simplified neuron model, but neurons in a real brain are far more sophisticated and many biological mechanisms have yet to be explored by neuroscientists and biologists. It is expected that in the not-too-distant future, a fascinating improvement on the Darwin NPU 2 will come over the horizon.

I haven’t been able to find a recent (i.e., post 2017) research paper featuring Darwin but there is another chip and research on that one was published in July 2019. First, the news.

The Tianjic chip

A July 31, 2019 article in the New York Times by Cade Metz describes the research and offers what seems to be a jaundiced perspective about the field of neuromorphic computing (Note: A link has been removed),

As corporate giants like Ford, G.M. and Waymo struggle to get their self-driving cars on the road, a team of researchers in China is rethinking autonomous transportation using a souped-up bicycle.

This bike can roll over a bump on its own, staying perfectly upright. When the man walking just behind it says “left,” it turns left, angling back in the direction it came.

It also has eyes: It can follow someone jogging several yards ahead, turning each time the person turns. And if it encounters an obstacle, it can swerve to the side, keeping its balance and continuing its pursuit.

… Chinese researchers who built the bike believe it demonstrates the future of computer hardware. It navigates the world with help from what is called a neuromorphic chip, modeled after the human brain.

Here’s a video, released by the researchers, demonstrating the chip’s abilities,

Now back to Metz’s July 31, 2019 article (Note: A link has been removed),

The short video did not show the limitations of the bicycle (which presumably tips over occasionally), and even the researchers who built the bike admitted in an email to The Times that the skills on display could be duplicated with existing computer hardware. But in handling all these skills with a neuromorphic processor, the project highlighted the wider effort to achieve new levels of artificial intelligence with novel kinds of chips.

This effort spans myriad start-up companies and academic labs, as well as big-name tech companies like Google, Intel and IBM. And as the Nature paper demonstrates, the movement is gaining significant momentum in China, a country with little experience designing its own computer processors, but which has invested heavily in the idea of an “A.I. chip.”

If you can get past what seems to be a patronizing attitude, there are some good explanations and cogent criticisms in the piece (Metz’s July 31, 2019 article, Note: Links have been removed),

… it faces significant limitations.

A neural network doesn’t really learn on the fly. Engineers train a neural network for a particular task before sending it out into the real world, and it can’t learn without enormous numbers of examples. OpenAI, a San Francisco artificial intelligence lab, recently built a system that could beat the world’s best players at a complex video game called Dota 2. But the system first spent months playing the game against itself, burning through millions of dollars in computing power.

Researchers aim to build systems that can learn skills in a manner similar to the way people do. And that could require new kinds of computer hardware. Dozens of companies and academic labs are now developing chips specifically for training and operating A.I. systems. The most ambitious projects are the neuromorphic processors, including the Tianjic chip under development at Tsinghua University in China.

Such chips are designed to imitate the network of neurons in the brain, not unlike a neural network but with even greater fidelity, at least in theory.

Neuromorphic chips typically include hundreds of thousands of faux neurons, and rather than just processing 1s and 0s, these neurons operate by trading tiny bursts of electrical signals, “firing” or “spiking” only when input signals reach critical thresholds, as biological neurons do.
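
The spiking behavior Metz describes is usually modeled as a leaky integrate-and-fire (LIF) neuron, a textbook abstraction rather than any particular chip's circuit. A minimal Python sketch, with parameters chosen only for illustration:

```python
# Minimal leaky integrate-and-fire neuron illustrating "firing only when
# input signals reach critical thresholds" (a textbook model; the
# threshold and leak values here are arbitrary illustrative choices).

def lif_run(inputs, threshold=1.0, leak=0.9):
    """Integrate input each step with leaky decay; emit a spike and
    reset the membrane potential when the threshold is crossed."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x                 # leaky integration
        if v >= threshold:
            spikes.append(1)             # fire
            v = 0.0                      # reset after the spike
        else:
            spikes.append(0)             # stay silent
    return spikes

print(lif_run([0.3, 0.3, 0.6, 0.0, 1.2]))  # -> [0, 0, 1, 0, 1]
```

Note how sub-threshold inputs accumulate (the third input triggers a spike only because of earlier charge), which is what distinguishes this event-driven style from processing a stream of 1s and 0s.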

Tiernan Ray’s August 3, 2019 article about the chip for ZDNet.com offers some thoughtful criticism with a side dish of snark (Note: Links have been removed),

Nature magazine’s cover story [July 31, 2019] is about a Chinese chip [the Tianjic chip] that can run traditional deep learning code and also perform “neuromorphic” operations in the same circuitry. The work’s value seems obscured by a lot of hype about “artificial general intelligence” that has no real justification.

The term “artificial general intelligence,” or AGI, doesn’t actually refer to anything, at this point, it is merely a placeholder, a kind of Rorschach Test for people to fill the void with whatever notions they have of what it would mean for a machine to “think” like a person.

Despite that fact, or perhaps because of it, AGI is an ideal marketing term to attach to a lot of efforts in machine learning. Case in point, a research paper featured on the cover of this week’s Nature magazine about a new kind of computer chip developed by researchers at China’s Tsinghua University that could “accelerate the development of AGI,” they claim.

The chip is a strange hybrid of approaches, and is intriguing, but the work leaves unanswered many questions about how it’s made, and how it achieves what researchers claim of it. And some longtime chip observers doubt the impact will be as great as suggested.

“This paper is an example of the good work that China is doing in AI,” says Linley Gwennap, longtime chip-industry observer and principal analyst with chip analysis firm The Linley Group. “But this particular idea isn’t going to take over the world.”

The premise of the paper, “Towards artificial general intelligence with hybrid Tianjic chip architecture,” is that to achieve AGI, computer chips need to change. That’s an idea supported by fervent activity these days in the land of computer chips, with lots of new chip designs being proposed specifically for machine learning.

The Tsinghua authors specifically propose that the mainstream machine learning of today needs to be merged in the same chip with what’s called “neuromorphic computing.” Neuromorphic computing, first conceived by Caltech professor Carver Mead in the early ’80s, has been an obsession for firms including IBM for years, with little practical result.

[Missing details about the chip] … For example, the part is said to have “reconfigurable” circuits, but how the circuits are to be reconfigured is never specified. It could be so-called “field programmable gate array,” or FPGA, technology or something else. Code for the project is not provided by the authors as it often is for such research; the authors offer to provide the code “on reasonable request.”

More important is the fact the chip may have a hard time stacking up to a lot of competing chips out there, says analyst Gwennap. …

“What the paper calls ANN and SNN are two very different means of solving similar problems, kind of like rotating (helicopter) and fixed wing (airplane) are for aviation,” says Gwennap. “Ultimately, I expect ANN [?] and SNN [spiking neural network] to serve different end applications, but I don’t see a need to combine them in a single chip; you just end up with a chip that is OK for two things but not great for anything.”

But you also end up generating a lot of buzz, and given the tension between the U.S. and China over all things tech, and especially A.I., the notion China is stealing a march on the U.S. in artificial general intelligence — whatever that may be — is a summer sizzler of a headline.

ANN could be either artificial neural network or something mentioned earlier in Ray’s article, a shortened version of CANN [continuous attractor neural network].

Shelly Fan’s August 7, 2019 article for the SingularityHub is almost as enthusiastic about the work as the podcasters for Nature magazine were (a little more about that later),

The study shows that China is readily nipping at the heels of Google, Facebook, NVIDIA, and other tech behemoths investing in developing new AI chip designs—hell, with billions in government investment it may have already had a head start. A sweeping AI plan from 2017 looks to catch up with the US on AI technology and application by 2020. By 2030, China’s aiming to be the global leader—and a champion for building general AI that matches humans in intellectual competence.

The country’s ambition is reflected in the team’s parting words.

“Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” said the authors, led by Dr. Luping Shi at Tsinghua University.

Using nanoscale fabrication, the team arranged 156 FCores, containing roughly 40,000 neurons and 10 million synapses, onto a chip less than a fifth of an inch in length and width. Initial tests showcased the chip’s versatility, in that it can run both SNNs and deep learning algorithms such as the popular convolutional neural networks (CNNs) often used in machine vision.

Compared to IBM TrueNorth, the density of Tianjic’s cores increased by 20 percent, speeding up performance ten times and increasing bandwidth at least 100-fold, the team said. When pitted against GPUs, the current hardware darling of machine learning, the chip increased processing throughput up to 100 times, while using just a sliver (1/10,000) of energy.
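For the curious, here’s a quick back-of-envelope combination of those last two figures (my own arithmetic, not the team’s, and it assumes both numbers apply to the same workload, which the article doesn’t confirm):

```python
# Combining the reported GPU comparison figures for the Tianjic chip.
gpu_throughput = 1.0      # normalized GPU baseline
gpu_energy = 1.0

tianjic_throughput = 100 * gpu_throughput  # "up to 100 times" the throughput
tianjic_energy = gpu_energy / 10_000       # "a sliver (1/10,000) of energy"

# Performance per watt multiplies the two gains together.
perf_per_watt_gain = (tianjic_throughput / gpu_throughput) / (tianjic_energy / gpu_energy)
print(f"{perf_per_watt_gain:,.0f}x")  # about a million-fold, if both claims hold
```

A million-fold efficiency gain would be extraordinary, which is one reason analysts like Gwennap urge caution about best-case benchmark numbers.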

BTW, Fan is a neuroscientist (from her SingularityHub profile page),

Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF [University of California at San Francisco] to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, “Will AI Replace Us?” (Thames & Hudson) will be out April 2019.

Onto Nature. Here’s a link to and a citation for the paper,

Towards artificial general intelligence with hybrid Tianjic chip architecture by Jing Pei, Lei Deng, Sen Song, Mingguo Zhao, Youhui Zhang, Shuang Wu, Guanrui Wang, Zhe Zou, Zhenzhi Wu, Wei He, Feng Chen, Ning Deng, Si Wu, Yu Wang, Yujie Wu, Zheyu Yang, Cheng Ma, Guoqi Li, Wentao Han, Huanglong Li, Huaqiang Wu, Rong Zhao, Yuan Xie & Luping Shi. Nature volume 572, pages 106–111 (2019) DOI: https://doi.org/10.1038/s41586-019-1424-8 Published: 31 July 2019 Issue Date: 01 August 2019

This paper is behind a paywall.

The July 31, 2019 Nature podcast includes a segment about the Tianjic chip research from China; it starts at the 9 min. 13 sec. mark (AI hardware). Alternatively, you can scroll down about 55% of the way to the transcript of the interview with Luke Fleet, the Nature editor who dealt with the paper.

Some thoughts

The pundits put me in mind of my own reaction when I heard about phones that could take pictures. I didn’t see the point but, as it turned out, there was a perfectly good reason for combining what had been two separate activities into one device. It was no longer just a telephone and I had completely missed the point.

This too may be the case with the Tianjic chip. I think it’s too early to say whether or not it represents a new type of chip or if it’s a dead end.

A tangle of silver nanowires for brain-like action

I’ve been meaning to get to this news item from late 2019 as it features work from a team that I’ve been following for a number of years now. First mentioned here in an October 17, 2011 posting, James Gimzewski has been working with researchers at the University of California at Los Angeles (UCLA) and researchers at Japan’s National Institute for Materials Science (NIMS) on neuromorphic computing.

This particular research had a protracted rollout with the paper being published in October 2019 and the last news item about it being published in mid-December 2019.

A December 17, 2019 news item on Nanowerk was the first to alert me to this new work (Note: A link has been removed),

UCLA scientists James Gimzewski and Adam Stieg are part of an international research team that has taken a significant stride toward the goal of creating thinking machines.

Led by researchers at Japan’s National Institute for Materials Science, the team created an experimental device that exhibited characteristics analogous to certain behaviors of the brain — learning, memorization, forgetting, wakefulness and sleep. The paper, published in Scientific Reports (“Emergent dynamics of neuromorphic nanowire networks”), describes a network in a state of continuous flux.

A December 16, 2019 UCLA news release, which originated the news item, offers more detail (Note: A link has been removed),

“This is a system between order and chaos, on the edge of chaos,” said Gimzewski, a UCLA distinguished professor of chemistry and biochemistry, a member of the California NanoSystems Institute at UCLA and a co-author of the study. “The way that the device constantly evolves and shifts mimics the human brain. It can come up with different types of behavior patterns that don’t repeat themselves.”

The research is one early step along a path that could eventually lead to computers that physically and functionally resemble the brain — machines that may be capable of solving problems that contemporary computers struggle with, and that may require much less power than today’s computers do.

The device the researchers studied is made of a tangle of silver nanowires — with an average diameter of just 360 nanometers. (A nanometer is one-billionth of a meter.) The nanowires were coated in an insulating polymer about 1 nanometer thick. Overall, the device itself measured about 10 square millimeters — so small that it would take 25 of them to cover a dime.

Allowed to randomly self-assemble on a silicon wafer, the nanowires formed highly interconnected structures that are remarkably similar to those that form the neocortex, the part of the brain involved with higher functions such as language, perception and cognition.

One trait that differentiates the nanowire network from conventional electronic circuits is that electrons flowing through them cause the physical configuration of the network to change. In the study, electrical current caused silver atoms to migrate from within the polymer coating and form connections where two nanowires overlap. The system had about 10 million of these junctions, which are analogous to the synapses where brain cells connect and communicate.

The researchers attached two electrodes to the brain-like mesh to profile how the network performed. They observed “emergent behavior,” meaning that the network displayed characteristics as a whole that could not be attributed to the individual parts that make it up. This is another trait that makes the network resemble the brain and sets it apart from conventional computers.

After current flowed through the network, the connections between nanowires persisted for as much as one minute in some cases, which resembled the process of learning and memorization in the brain. Other times, the connections shut down abruptly after the charge ended, mimicking the brain’s process of forgetting.

In other experiments, the research team found that with less power flowing in, the device exhibited behavior that corresponds to what neuroscientists see when they use functional MRI scanning to take images of the brain of a sleeping person. With more power, the nanowire network’s behavior corresponded to that of the wakeful brain.
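That persist-then-fade behaviour can be caricatured as an exponentially decaying junction conductance; this is a common shorthand in the memristor literature, not the team’s actual model, and the time constants below are illustrative only:

```python
import math

def junction_conductance(g_boost, t, tau):
    """Toy model: conductance of a nanowire junction t seconds after
    stimulation, decaying back toward zero with time constant tau."""
    return g_boost * math.exp(-t / tau)

# A "remembering" junction: a time constant of tens of seconds leaves
# a measurable trace a minute later, as the release describes.
slow = junction_conductance(g_boost=1.0, t=60, tau=30)   # ~14% of the boost remains

# A "forgetting" junction: a sub-second time constant wipes the trace out.
fast = junction_conductance(g_boost=1.0, t=60, tau=0.1)  # effectively zero

print(slow, fast)
```

The interesting part of the UCLA/NIMS result is that the real network doesn’t decay this cleanly; the junctions interact, which is where the emergent, brain-like dynamics come from.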

The paper is the latest in a series of publications examining nanowire networks as a brain-inspired system, an area of research that Gimzewski helped pioneer along with Stieg, a UCLA research scientist and an associate director of CNSI.

“Our approach may be useful for generating new types of hardware that are both energy-efficient and capable of processing complex datasets that challenge the limits of modern computers,” said Stieg, a co-author of the study.

The borderline-chaotic activity of the nanowire network resembles not only signaling within the brain but also other natural systems such as weather patterns. That could mean that, with further development, future versions of the device could help model such complex systems.

In other experiments, Gimzewski and Stieg already have coaxed a silver nanowire device to successfully predict statistical trends in Los Angeles traffic patterns based on previous years’ traffic data.

Because of their similarities to the inner workings of the brain, future devices based on nanowire technology could also demonstrate energy efficiency like the brain’s own processing. The human brain operates on power roughly equivalent to what’s used by a 20-watt incandescent bulb. By contrast, computer servers where work-intensive tasks take place — from training for machine learning to executing internet searches — can use the equivalent of many households’ worth of energy, with the attendant carbon footprint.

“In our studies, we have a broader mission than just reprogramming existing computers,” Gimzewski said. “Our vision is a system that will eventually be able to handle tasks that are closer to the way the human being operates.”

The study’s first author, Adrian Diaz-Alvarez, is from the International Center for Material Nanoarchitectonics at Japan’s National Institute for Materials Science. Co-authors include Tomonobu Nakayama and Rintaro Higuchi, also of NIMS; and Zdenka Kuncic at the University of Sydney in Australia.

Caption: (a) Micrograph of the neuromorphic network fabricated by this research team. The network contains numerous junctions between nanowires, which operate as synaptic elements. When voltage is applied to the network (between the green probes), current pathways (orange) are formed in the network. (b) A human brain and one of its neuronal networks. The brain is known to have a complex network structure and to operate by means of electrical signal propagation across the network. Credit: NIMS

A November 11, 2019 National Institute for Materials Science (Japan) press release (also on EurekAlert but dated December 25, 2019) first announced the news,

An international joint research team led by NIMS succeeded in fabricating a neuromorphic network composed of numerous metallic nanowires. Using this network, the team was able to generate electrical characteristics similar to those associated with higher order brain functions unique to humans, such as memorization, learning, forgetting, becoming alert and returning to calm. The team then clarified the mechanisms that induced these electrical characteristics.

The development of artificial intelligence (AI) techniques has been rapidly advancing in recent years and has begun impacting our lives in various ways. Although AI processes information in a manner similar to the human brain, the mechanisms by which human brains operate are still largely unknown. Fundamental brain components, such as neurons and the junctions between them (synapses), have been studied in detail. However, many questions concerning the brain as a collective whole need to be answered. For example, we still do not fully understand how the brain performs such functions as memorization, learning and forgetting, and how the brain becomes alert and returns to calm. In addition, live brains are difficult to manipulate in experimental research. For these reasons, the brain remains a “mysterious organ.” A different approach to brain research, in which materials and systems capable of performing brain-like functions are created and their mechanisms are investigated, may be effective in identifying new applications of brain-like information processing and advancing brain science.

The joint research team recently built a complex brain-like network by integrating numerous silver (Ag) nanowires coated with a polymer (PVP) insulating layer approximately 1 nanometer in thickness. A junction between two nanowires forms a variable resistive element (i.e., a synaptic element) that behaves like a neuronal synapse. This nanowire network, which contains a large number of intricately interacting synaptic elements, forms a “neuromorphic network”. When a voltage was applied to the neuromorphic network, it appeared to “struggle” to find optimal current pathways (i.e., the most electrically efficient pathways). The research team measured the processes of current pathway formation, retention and deactivation while electric current was flowing through the network and found that these processes always fluctuate as they progress, similar to the human brain’s memorization, learning, and forgetting processes. The observed temporal fluctuations also resemble the processes by which the brain becomes alert or returns to calm. Brain-like functions simulated by the neuromorphic network were found to occur as the huge number of synaptic elements in the network collectively work to optimize current transport, in other words, as a result of self-organized and emerging dynamic processes.

The research team is currently developing a brain-like memory device using the neuromorphic network material. The team intends to design the memory device to operate using fundamentally different principles than those used in current computers. For example, while computers are currently designed to spend as much time and electricity as necessary in pursuit of absolutely optimum solutions, the new memory device is intended to make a quick decision within particular limits even though the solution generated may not be absolutely optimum. The team also hopes that this research will facilitate understanding of the brain’s information processing mechanisms.

This project was carried out by an international joint research team led by Tomonobu Nakayama (Deputy Director, International Center for Materials Nanoarchitectonics (WPI-MANA), NIMS), Adrian Diaz Alvarez (Postdoctoral Researcher, WPI-MANA, NIMS), Zdenka Kuncic (Professor, School of Physics, University of Sydney, Australia) and James K. Gimzewski (Professor, California NanoSystems Institute, University of California Los Angeles, USA).

Here at last is a link to and a citation for the paper,

Emergent dynamics of neuromorphic nanowire networks by Adrian Diaz-Alvarez, Rintaro Higuchi, Paula Sanz-Leon, Ido Marcus, Yoshitaka Shingaya, Adam Z. Stieg, James K. Gimzewski, Zdenka Kuncic & Tomonobu Nakayama. Scientific Reports volume 9, Article number: 14920 (2019) DOI: https://doi.org/10.1038/s41598-019-51330-6 Published: 17 October 2019

This paper is open access.

Of sleep, electric sheep, and thousands of artificial synapses on a chip

A close-up view of a new neuromorphic “brain-on-a-chip” that includes tens of thousands of memristors, or memory transistors. Credit: Peng Lin. Courtesy: MIT

It’s hard to believe that a brain-on-a-chip might need sleep but that seems to be the case as far as the US Dept. of Energy’s Los Alamos National Laboratory is concerned. Before pursuing that line of thought, here’s some work from the Massachusetts Institute of Technology (MIT) involving memristors and a brain-on-a-chip. From a June 8, 2020 news item on ScienceDaily,

MIT engineers have designed a “brain-on-a-chip,” smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors — silicon-based components that mimic the information-transmitting synapses in the human brain.

The researchers borrowed from principles of metallurgy to fabricate each memristor from alloys of silver and copper, along with silicon. When they ran the chip through several visual tasks, the chip was able to “remember” stored images and reproduce them many times over, in versions that were crisper and cleaner compared with existing memristor designs made with unalloyed elements.

Their results, published today in the journal Nature Nanotechnology, demonstrate a promising new memristor design for neuromorphic devices — electronics that are based on a new type of circuit that processes information in a way that mimics the brain’s neural architecture. Such brain-inspired circuits could be built into small, portable devices, and would carry out complex computational tasks that only today’s supercomputers can handle.

This ‘metallurgical’ approach differs somewhat from the protein nanowire approach used by the University of Massachusetts at Amherst team mentioned in my June 15, 2020 posting. Scientists are pursuing multiple pathways and we may find that we arrive not at a single artificial brain but at many types of artificial brains.

A June 8, 2020 MIT news release (also on EurekAlert) provides more detail about this brain-on-a-chip,

“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”

Wandering ions

Memristors, or memory transistors [Note: Memristors are usually described as memory resistors; this is the first time I’ve seen ‘memory transistor’], are an essential element in neuromorphic computing. In a neuromorphic device, a memristor would serve as the transistor in a circuit, though its workings would more closely resemble a brain synapse — the junction between two neurons. The synapse receives signals from one neuron, in the form of ions, and sends a corresponding signal to the next neuron.

A transistor in a conventional circuit transmits information by switching between one of only two values, 0 and 1, and doing so only when the signal it receives, in the form of an electric current, is of a particular strength. In contrast, a memristor would work along a gradient, much like a synapse in the brain. The signal it produces would vary depending on the strength of the signal that it receives. This would enable a single memristor to have many values, and therefore carry out a far wider range of operations than binary transistors.

Like a brain synapse, a memristor would also be able to “remember” the value associated with a given current strength, and produce the exact same signal the next time it receives a similar current. This could ensure that the answer to a complex equation, or the visual classification of an object, is reliable — a feat that normally involves multiple transistors and capacitors.
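To make that binary-versus-gradient distinction concrete, here’s a toy sketch (mine, not a model of MIT’s device) of an analog memory cell next to a thresholded binary one:

```python
class ToyMemristor:
    """Caricature of an analog memristor: its conductance is set by the
    programming stimulus and read back later. Not MIT's device physics."""
    def __init__(self):
        self.conductance = 0.0

    def program(self, current):
        # Analog: the stored state tracks the stimulus strength directly.
        self.conductance = current

    def read(self, voltage=1.0):
        # The output current varies continuously with the stored conductance.
        return self.conductance * voltage

def binary_cell(current, threshold=0.5):
    """A transistor-style cell, by contrast, snaps to 0 or 1."""
    return 1 if current >= threshold else 0

m = ToyMemristor()
m.program(0.37)
print(m.read())           # 0.37 -- the analog value is retained
print(binary_cell(0.37))  # 0 -- the sub-threshold detail is lost
```

The practical consequence, as the release notes, is that one memristor can stand in for the several transistors and capacitors a conventional circuit would need to hold the same graded value.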

Ultimately, scientists envision that memristors would require far less chip real estate than conventional transistors, enabling powerful, portable computing devices that do not rely on supercomputers, or even connections to the Internet.

Existing memristor designs, however, are limited in their performance. A single memristor is made of a positive and negative electrode, separated by a “switching medium,” or space between the electrodes. When a voltage is applied to one electrode, ions from that electrode flow through the medium, forming a “conduction channel” to the other electrode. The received ions make up the electrical signal that the memristor transmits through the circuit. The size of the ion channel (and the signal that the memristor ultimately produces) should be proportional to the strength of the stimulating voltage.

Kim says that existing memristor designs work pretty well in cases where voltage stimulates a large conduction channel, or a heavy flow of ions from one electrode to the other. But these designs are less reliable when memristors need to generate subtler signals, via thinner conduction channels.

The thinner a conduction channel, and the lighter the flow of ions from one electrode to the other, the harder it is for individual ions to stay together. Instead, they tend to wander from the group, disbanding within the medium. As a result, it’s difficult for the receiving electrode to reliably capture the same number of ions, and therefore transmit the same signal, when stimulated with a certain low range of current.

Borrowing from metallurgy

Kim and his colleagues found a way around this limitation by borrowing a technique from metallurgy, the science of melding metals into alloys and studying their combined properties.

“Traditionally, metallurgists try to add different atoms into a bulk matrix to strengthen materials, and we thought, why not tweak the atomic interactions in our memristor, and add some alloying element to control the movement of ions in our medium,” Kim says.

Engineers typically use silver as the material for a memristor’s positive electrode. Kim’s team looked through the literature to find an element that they could combine with silver to effectively hold silver ions together, while allowing them to flow quickly through to the other electrode.

The team landed on copper as the ideal alloying element, as it is able to bind both with silver, and with silicon.

“It acts as a sort of bridge, and stabilizes the silver-silicon interface,” Kim says.

To make memristors using their new alloy, the group first fabricated a negative electrode out of silicon, then made a positive electrode by depositing a slight amount of copper, followed by a layer of silver. They sandwiched the two electrodes around an amorphous silicon medium. In this way, they patterned a millimeter-square silicon chip with tens of thousands of memristors.

As a first test of the chip, they recreated a gray-scale image of the Captain America shield. They equated each pixel in the image to a corresponding memristor in the chip. They then modulated the conductance of each memristor that was relative in strength to the color in the corresponding pixel.

The chip produced the same crisp image of the shield, and was able to “remember” the image and reproduce it many times, compared with chips made of other materials.

The team also ran the chip through an image processing task, programming the memristors to alter an image, in this case of MIT’s Killian Court, in several specific ways, including sharpening and blurring the original image. Again, their design produced the reprogrammed images more reliably than existing memristor designs.

“We’re using artificial synapses to do real inference tests,” Kim says. “We would like to develop this technology further to have larger-scale arrays to do image recognition tasks. And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”

Here’s a link to and a citation for the paper,

Alloying conducting channels for reliable neuromorphic computing by Hanwool Yeon, Peng Lin, Chanyeol Choi, Scott H. Tan, Yongmo Park, Doyoon Lee, Jaeyong Lee, Feng Xu, Bin Gao, Huaqiang Wu, He Qian, Yifan Nie, Seyoung Kim & Jeehwan Kim. Nature Nanotechnology (2020) DOI: https://doi.org/10.1038/s41565-020-0694-5 Published: 08 June 2020

This paper is behind a paywall.

Electric sheep and sleeping androids

I find it impossible to mention that androids might need sleep without reference to Philip K. Dick’s 1968 novel, “Do Androids Dream of Electric Sheep?”; its Wikipedia entry is here.

Intelligent machines of the future may need to sleep as much as we do. Courtesy: Los Alamos National Laboratory

As it happens, I’m not the only one who felt the need to reference the novel, from a June 8, 2020 news item on ScienceDaily,

No one can say whether androids will dream of electric sheep, but they will almost certainly need periods of rest that offer benefits similar to those that sleep provides to living brains, according to new research from Los Alamos National Laboratory.

“We study spiking neural networks, which are systems that learn much as living brains do,” said Los Alamos National Laboratory computer scientist Yijing Watkins. “We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”

Watkins and her research team found that the network simulations became unstable after continuous periods of unsupervised learning. When they exposed the networks to states that are analogous to the waves that living brains experience during sleep, stability was restored. “It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.

A June 8, 2020 Los Alamos National Laboratory (LANL) news release (also on EurekAlert), which originated the news item, describes the research team’s presentation,

The discovery came about as the research team worked to develop neural networks that closely approximate how humans and other biological systems learn to see. The group initially struggled with stabilizing simulated neural networks undergoing unsupervised dictionary training, which involves classifying objects without having prior examples to compare them to.

“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Los Alamos computer scientist and study coauthor Garrett Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”

The researchers characterize the decision to expose the networks to an artificial analog of sleep as nearly a last ditch effort to stabilize them. They experimented with various types of noise, roughly comparable to the static you might encounter between stations while tuning a radio. The best results came when they used waves of so-called Gaussian noise, which includes a wide range of frequencies and amplitudes. They hypothesize that the noise mimics the input received by biological neurons during slow-wave sleep. The results suggest that slow-wave sleep may act, in part, to ensure that cortical neurons maintain their stability and do not hallucinate.
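A crude way to see why an injected “rest” phase can rescue runaway unsupervised learning is with a toy Hebbian loop; this is my own loose caricature of the idea (periodic noise-driven probing plus homeostatic rescaling), not the LANL team’s actual spiking model or their sinusoidally modulated noise:

```python
import random

random.seed(0)

def train(steps, sleep_every=None):
    """Toy Hebbian weight. Without 'sleep' it grows without bound;
    periodic noise-driven rest phases pull it back toward a set point."""
    w = 1.0
    for step in range(1, steps + 1):
        w *= 1.05  # runaway Hebbian growth: strong weights get stronger
        if sleep_every and step % sleep_every == 0:
            # "Sleep": probe the unit with Gaussian noise, then rescale the
            # weight so its average response drifts back toward 1 (a form
            # of homeostatic synaptic scaling).
            probe = sum(abs(random.gauss(0, 1)) for _ in range(100)) / 100
            response = w * probe
            w *= (1.0 / response) ** 0.5
    return w

no_sleep = train(200)
with_sleep = train(200, sleep_every=10)
print(no_sleep)    # astronomically large: the system has destabilized
print(with_sleep)  # stays of order 1: rest phases kept it bounded
```

The real result is subtler (the noise statistics matter, which is why Gaussian noise beat the other kinds they tried), but the toy captures the headline: without some regulating rest signal, purely local unsupervised learning runs away.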

The groups’ next goal is to implement their algorithm on Intel’s Loihi neuromorphic chip. They hope allowing Loihi to sleep from time to time will enable it to stably process information from a silicon retina camera in real time. If the findings confirm the need for sleep in artificial brains, we can probably expect the same to be true of androids and other intelligent machines that may come about in the future.

Watkins will be presenting the research at the Women in Computer Vision Workshop on June 14 [2020] in Seattle.

The 2020 Women in Computer Vision Workshop (WiCV) website is here. As is becoming standard practice for these times, the workshop was held in a virtual environment. Here’s a link to and a citation for the poster presentation paper,

Using Sinusoidally-Modulated Noise as a Surrogate for Slow-Wave Sleep to Accomplish Stable Unsupervised Dictionary Learning in a Spike-Based Sparse Coding Model by Yijing Watkins, Edward Kim, Andrew Sornborger and Garrett T. Kenyon. Women in Computer Vision Workshop on June 14, 2020 in Seattle, Washington (state)

This paper is open access for now.

Neuromorphic computing with voltage usage comparable to human brains

Part of neuromorphic computing’s appeal is the promise of using less energy because, as it turns out, the human brain uses small amounts of energy very efficiently. A team of researchers at the University of Massachusetts at Amherst has developed a memristor device that functions in the same voltage range as the human brain. From an April 20, 2020 news item on ScienceDaily,

Only 10 years ago, scientists working on what they hoped would open a new frontier of neuromorphic computing could only dream of a device using miniature tools called memristors that would function/operate like real brain synapses.

But now a team at the University of Massachusetts Amherst has discovered, while on their way to better understanding protein nanowires, how to use these biological, electricity conducting filaments to make a neuromorphic memristor, or “memory transistor,” device. It runs extremely efficiently on very low power, as brains do, to carry signals between neurons. Details are in Nature Communications.

An April 20, 2020 University of Massachusetts at Amherst news release (also on EurekAlert), which originated the news item, dives into detail about how these researchers were able to achieve bio-voltages,

As first author Tianda Fu, a Ph.D. candidate in electrical and computer engineering, explains, one of the biggest hurdles to neuromorphic computing, and one that made it seem unreachable, is that most conventional computers operate at over 1 volt, while the brain sends signals called action potentials between neurons at around 80 millivolts – many times lower. Today, a decade after early experiments, memristor voltage has been achieved in the range similar to conventional computer, but getting below that seemed improbable, he adds.

Fu reports that using protein nanowires developed at UMass Amherst from the bacterium Geobacter by microbiologist and co-author Derek Lovley, he has now conducted experiments where memristors have reached neurological voltages. Those tests were carried out in the lab of electrical and computer engineering researcher and co-author Jun Yao.

Yao says, “This is the first time that a device can function at the same voltage level as the brain. People probably didn’t even dare to hope that we could create a device that is as power-efficient as the biological counterparts in a brain, but now we have realistic evidence of ultra-low power computing capabilities. It’s a concept breakthrough and we think it’s going to cause a lot of exploration in electronics that work in the biological voltage regime.”

Lovley points out that Geobacter’s electrically conductive protein nanowires offer many advantages over expensive silicon nanowires, which require toxic chemicals and high-energy processes to produce. Protein nanowires also are more stable in water or bodily fluids, an important feature for biomedical applications. For this work, the researchers shear nanowires off the bacteria so only the conductive protein is used, he adds.

Fu says that he and Yao had set out to put the purified nanowires through their paces, to see what they are capable of at different voltages, for example. They experimented with a pulsing on-off pattern of positive-negative charge sent through a tiny metal thread in a memristor, which creates an electrical switch.

They used a metal thread because protein nanowires facilitate metal reduction, changing metal ion reactivity and electron transfer properties. Lovley says this microbial ability is not surprising, because wild bacterial nanowires breathe and chemically reduce metals to get their energy the way we breathe oxygen.

As the on-off pulses create changes in the metal filaments, new branching and connections are created in the tiny device, which is 100 times smaller than the diameter of a human hair, Yao explains. It creates an effect similar to learning – new connections – in a real brain. He adds, “You can modulate the conductivity, or the plasticity of the nanowire-memristor synapse so it can emulate biological components for brain-inspired computing. Compared to a conventional computer, this device has a learning capability that is not software-based.”

Fu recalls, “In the first experiments we did, the nanowire performance was not satisfying, but it was enough for us to keep going.” Over two years, he saw improvement until one fateful day when his and Yao’s eyes were riveted by voltage measurements appearing on a computer screen.

“I remember the day we saw this great performance. We watched the computer as current voltage sweep was being measured. It kept going down and down and we said to each other, ‘Wow, it’s working.’ It was very surprising and very encouraging.”

Fu, Yao, Lovley and colleagues plan to follow up this discovery with more research on mechanisms, and to “fully explore the chemistry, biology and electronics” of protein nanowires in memristors, Fu says, plus possible applications, which might include a device to monitor heart rate, for example. Yao adds, “This offers hope in the feasibility that one day this device can talk to actual neurons in biological systems.”

That last comment has me wondering about why you would want to have your device talk to actual neurons. For neuroprosthetics perhaps?
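The pulse-driven ‘learning’ Yao describes (conductance that ratchets up or down as successive voltage pulses grow or prune filament connections) can be sketched with a toy model. The bounded update rule and every parameter value below are illustrative assumptions, not measurements from the protein-nanowire device:

```python
# Toy model of pulse-driven plasticity in a memristive synapse.
# The bounded update rule and all parameters are illustrative
# assumptions, not measurements from the protein-nanowire device.

def apply_pulses(g, pulses, g_min=0.0, g_max=1.0, rate=0.2):
    """Return the conductance history for a train of +/-1 pulses.

    Positive pulses potentiate (new filament branches form, so
    conductance rises); negative pulses depress. Updates shrink
    near the bounds, mimicking saturation in real devices.
    """
    history = []
    for p in pulses:
        if p > 0:
            g += rate * (g_max - g)   # potentiation
        else:
            g -= rate * (g - g_min)   # depression
        history.append(g)
    return history

# Five positive pulses strengthen the 'synapse'; five negative ones weaken it.
trace = apply_pulses(0.5, [+1] * 5 + [-1] * 5)
print([round(g, 3) for g in trace])
```

The state left behind by past pulses is the “learning capability that is not software-based”: the history lives in the device itself.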

Here’s a link to and a citation for the paper,

Bioinspired bio-voltage memristors by Tianda Fu, Xiaomeng Liu, Hongyan Gao, Joy E. Ward, Xiaorong Liu, Bing Yin, Zhongrui Wang, Ye Zhuo, David J. F. Walker, J. Joshua Yang, Jianhan Chen, Derek R. Lovley & Jun Yao. Nature Communications volume 11, Article number: 1861 (2020) DOI: https://doi.org/10.1038/s41467-020-15759-y Published: 20 April 2020

This paper is open access.

There is an illustration of the work

Caption: A graphic depiction of protein nanowires (green) harvested from microbe Geobacter (orange) facilitate the electronic memristor device (silver) to function with biological voltages, emulating the neuronal components (blue junctions) in a brain. Credit: UMass Amherst/Yao lab

Plants as a source of usable electricity

A friend sent me a link to this interview with Iftach Yacoby of Tel Aviv University talking about some new research into plants and electricity. From a June 8, 2020 article by Omer Kabir for Calcalist (CTech) on the Algemeiner website,

For years, scientists have been trying to understand the evolutionary capabilities of plants to produce energy and have had only partial success. But a recent Tel Aviv University [TAU] study seems to make the impossible possible, proving that any plant can be transformed into an electrical source, producing a variety of materials that can revolutionize the global economy — from using hydrogen as fuel to clean ammonia to replace the pollutants in the agriculture industry.

“People are unaware that their plant pots have an electric current for everything,” Iftach Yacoby, head of the Laboratory of Renewable Energy Studies at Tel Aviv University’s Faculty of Life Sciences said in a recent interview with Calcalist.

“Our study opens the door to a new field of agriculture, equivalent to wheat or corn production for food security — generating energy,” he said. However, Yacoby makes it clear that it will take at least a decade before the research findings can be transferred to the commercial level.

At the heart of the research is the understanding that plants have particularly efficient capacities when it comes to electricity generation. “Anything green that is not dollars, but rather leaves, grass, and seaweed for example, contains solar panels that are completely identical to the panels the entire country is now building,” Yacoby explained. “They know how to take in solar radiation and make electrons flow out of it. That’s the essence of photosynthesis. Most people think of oxygen and food production, but the most basic phase of photosynthesis is the same as silicon panels in the Negev and on rooftops — taking in sunlight and generating electric current.”

… “At home, an electric current can be wired to many devices. Just plug the device into a power outlet. But when you want to do it in plants, it’s on the order of nanometers. We had no idea where to plug the plugs. That’s what we did in this study. In plant cells, we found sites that can be used as a socket for anything, at just a nanometer size. We have an enzyme, which is equivalent to a biological machine that can produce hydrogen. We took this enzyme and fitted it so that it sits in the socket in the plant cell, which was previously only hypothetical. When the enzyme started to produce hydrogen, we proved that we had a socket for everything, though nanometer-sized. Now we can take any plant or kelp and engineer it so that its electrical outlet can be used for production purposes,” Yacoby explained.

“If you attach an enzyme that produces hydrogen you get hydrogen, it’s the cleanest fuel that can be,” he said. “There are already electric cars and bicycles with a range of 150 km that travel on hydrogen. There are many types of enzymes in nature that produce valuable substances, such as ammonia needed for the fertilizer industry and today is still produced by a very toxic and harmful method that consumes a lot of energy. We can provide a plant-based alternative for the production of materials that are made in chemical manufacturing facilities. It’s an electric platform inside a living plant cell.”
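Yacoby's comparison of leaves to silicon panels can be made quantitative with a quick photon-energy calculation. The numbers below (680 nm for the red absorption peak of chlorophyll a, 1.12 eV for silicon's band gap) are textbook values, not figures from the TAU study:

```python
# Back-of-envelope comparison: the photons photosynthesis harvests
# versus silicon's band gap. 680 nm (chlorophyll a absorption) and
# 1.12 eV (silicon) are textbook values, not figures from the study.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def photon_energy_ev(wavelength_nm):
    """Energy of a photon of the given wavelength, in electronvolts."""
    return HC_EV_NM / wavelength_nm

chlorophyll_ev = photon_energy_ev(680)  # red absorption peak of chlorophyll a
silicon_gap_ev = 1.12                   # room-temperature band gap of silicon

print(f"680 nm photon:    {chlorophyll_ev:.2f} eV")
print(f"silicon band gap: {silicon_gap_ev:.2f} eV")
```

Both sit in the 1-2 eV range, which is the sense in which a leaf's ‘solar panels’ and rooftop silicon work with light of comparable quality.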

You might find it helpful to read Kabir’s article in its entirety before moving on to the news release about the work. The work was conducted with researchers from Arizona State University (ASU; US) and a researcher from Yogi Vemana University (India), as well as Yacoby. There’s a May 7, 2020 ASU news release (also on EurekAlert but published on May 6, 2020) detailing the work,

Hydrogen is an essential commodity with over 60 million tons produced globally every year. However, over 95 percent of it is made by steam reformation of fossil fuels, a process that is energy intensive and produces carbon dioxide. If we could replace even a part of that with algal biohydrogen that is made via light and water, it would have a substantial impact.

This is essentially what has just been achieved in the lab of Kevin Redding, professor in the School of Molecular Sciences and director of the Center for Bioenergy and Photosynthesis. Their research, entitled Rewiring photosynthesis: a Photosystem I-hydrogenase chimera that makes hydrogen in vivo, was published very recently in the high impact journal Energy and Environmental Science.

“What we have done is to show that it is possible to intercept the high energy electrons from photosynthesis and use them to drive alternate chemistry, in a living cell” explained Redding. “We have used hydrogen production here as an example.”

“Kevin Redding and his group have made a true breakthrough in re-engineering the Photosystem I complex,” explained Ian Gould, interim director of the School of Molecular Sciences, which is part of The College of Liberal Arts and Sciences. “They didn’t just find a way to redirect a complex protein structure that nature designed for one purpose to perform a different, but equally critical process, but they found the best way to do it at the molecular level.”

It is common knowledge that plants and algae, as well as cyanobacteria, use photosynthesis to produce oxygen and “fuels,” the latter being oxidizable substances like carbohydrates and hydrogen. There are two pigment-protein complexes that orchestrate the primary reactions of light in oxygenic photosynthesis: Photosystem I (PSI) and Photosystem II (PSII).

Algae (in this work the single-celled green alga Chlamydomonas reinhardtii, or ‘Chlamy’ for short) possess an enzyme called hydrogenase that uses electrons it gets from the protein ferredoxin, which is normally used to ferry electrons from PSI to various destinations. A problem is that the algal hydrogenase is rapidly and irreversibly inactivated by oxygen that is constantly produced by PSII.

In this study, doctoral student and first author Andrey Kanygin has created a genetic chimera of PSI and the hydrogenase such that they co-assemble and are active in vivo. This new assembly redirects electrons away from carbon dioxide fixation to the production of biohydrogen.

“We thought that some radically different approaches needed to be taken — thus, our crazy idea of hooking up the hydrogenase enzyme directly to Photosystem I in order to divert a large fraction of the electrons from water splitting (by Photosystem II) to make molecular hydrogen,” explained Redding.

Cells expressing the new photosystem (PSI-hydrogenase) make hydrogen at high rates in a light dependent fashion, for several days.

This important result will also be featured in an upcoming article in Chemistry World – a monthly chemistry news magazine published by the Royal Society of Chemistry. The magazine addresses current developments in the world of chemistry including research, international business news and government policy as it affects the chemical science community.

The NSF grant funding this research is part of the U.S.-Israel Binational Science Foundation (BSF). In this arrangement, a U.S. scientist and Israeli scientist join forces to form a joint project. The U.S. partner submits a grant on the joint project to the NSF, and the Israeli partner submits the same grant to the ISF (Israel Science Foundation). Both agencies must agree to fund the project in order to obtain the BSF funding. Professor Iftach Yacoby of Tel Aviv University, Redding’s partner on the BSF project, is a young scientist who first started at TAU about eight years ago and has focused on different ways to increase algal biohydrogen production.

In summary, re-engineering the fundamental processes of photosynthetic microorganisms offers a cheap and renewable platform for creating bio-factories capable of driving difficult electron reactions, powered only by the sun and using water as the electron source.
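The news release's opening claim (60 million tons of hydrogen per year, over 95 percent from fossil fuels) can be turned into a rough CO2 figure with reaction stoichiometry. This is a minimal sketch: it counts only the stoichiometric CO2 of steam methane reforming and ignores the process energy that makes real reformers emit more:

```python
# Stoichiometric CO2 from steam methane reforming plus water-gas shift:
#   CH4 + 2 H2O -> CO2 + 4 H2
# This is the chemical floor; real reformers emit more once process
# energy is counted. The 60 Mt and 95% figures are from the news release.

M_CO2 = 44.01            # molar mass of CO2, g/mol
M_H2 = 2.016             # molar mass of H2, g/mol

co2_per_h2 = M_CO2 / (4 * M_H2)   # kg CO2 per kg H2, about 5.5
global_h2_mt = 60                 # global hydrogen production, Mt/year
fossil_fraction = 0.95            # share made from fossil fuels

co2_mt = global_h2_mt * fossil_fraction * co2_per_h2
print(f"{co2_per_h2:.2f} kg CO2 per kg H2 (stoichiometric minimum)")
print(f"~{co2_mt:.0f} Mt CO2 per year from hydrogen production, at minimum")
```

Even this floor, roughly 300 Mt of CO2 a year, gives a sense of why a light-and-water route to hydrogen would matter.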

Here’s a link to and a citation for the paper,

Rewiring photosynthesis: a photosystem I-hydrogenase chimera that makes H2 in vivo by Andrey Kanygin, Yuval Milrad, Chandrasekhar Thummala, Kiera Reifschneider, Patricia Baker, Pini Marco, Iftach Yacoby and Kevin E. Redding. Energy Environ. Sci., 2020, Advance Article DOI: https://doi.org/10.1039/C9EE03859K First published: 17 Apr 2020

In order to gain access to the paper, you must have or sign up for a free account.

This image was used to illustrate the research,

A model of Photosystem 1 core subunits Courtesy: ASU

Canadian and Italian researchers go beyond graphene with 2D polymers

According to a May 20, 2020 McGill University news release (also on EurekAlert), a team of Canadian and Italian researchers has broken new ground in materials science (Note: There’s a press release I found a bit more accessible and therefore informative coming up after this one),

A study by a team of researchers from Canada and Italy recently published in Nature Materials could usher in a revolutionary development in materials science, leading to big changes in the way companies create modern electronics.

The goal was to develop two-dimensional materials, which are a single atomic layer thick, with added functionality to extend the revolutionary developments in materials science that started with the discovery of graphene in 2004.

In total, 19 authors worked on this paper from INRS [Institut National de la Recherche Scientifique], McGill [University], Lakehead [University], and Consiglio Nazionale delle Ricerche, the national research council in Italy.

This work opens exciting new directions, both theoretical and experimental. The integration of this system into a device (e.g. transistors) may lead to outstanding performances. In addition, these results will foster more studies on a wide range of two-dimensional conjugated polymers with different lattice symmetries, thereby gaining further insights into the structure vs. properties of these systems.

The Italian/Canadian team demonstrated the synthesis of large-scale two-dimensional conjugated polymers, also thoroughly characterizing their electronic properties. They achieved success by combining the complementary expertise of organic chemists and surface scientists.

“This work represents an exciting development in the realization of functional two-dimensional materials beyond graphene,” said Mark Gallagher, a Physics professor at Lakehead University.

“I found it particularly rewarding to participate in this collaboration, which allowed us to combine our expertise in organic chemistry, condensed matter physics, and materials science to achieve our goals.”

Dmytro Perepichka, a professor and chair of Chemistry at McGill University, said they have been working on this research for a long time.

“Structurally reconfigurable two-dimensional conjugated polymers can give a new breadth to applications of two-dimensional materials in electronics,” Perepichka said.

“We started dreaming of them more than 15 years ago. It’s only through this four-way collaboration, across the country and between the continents, that this dream has become the reality.”

Federico Rosei, a professor at the Énergie Matériaux Télécommunications Research Centre of the Institut National de la Recherche Scientifique (INRS) in Varennes who holds the Canada Research Chair in Nanostructured Materials since 2016, said they are excited about the results of this collaboration.

“These results provide new insights into mechanisms of surface reactions at a fundamental level and simultaneously yield a novel material with outstanding properties, whose existence had only been predicted theoretically until now,” he said.

About this study

“Synthesis of mesoscale ordered two-dimensional π-conjugated polymers with semiconducting properties” by G. Galeotti et al. was published in Nature Materials.

This research was partially supported by a project Grande Rilevanza Italy-Quebec of the Italian Ministero degli Affari Esteri e della Cooperazione Internazionale, Direzione Generale per la Promozione del Sistema Paese, the Natural Sciences and Engineering Research Council of Canada, the Fonds Québécois de la recherche sur la nature et les technologies and the US Army Research Office. Federico Rosei is also grateful to the Canada Research Chairs program for funding and partial salary support.

About McGill University

Founded in Montreal, Quebec, in 1821, McGill is a leading Canadian post-secondary institution. It has two campuses, 11 faculties, 13 professional schools, 300 programs of study and over 40,000 students, including more than 10,200 graduate students. McGill attracts students from over 150 countries around the world, its 12,800 international students making up 31 per cent of the student body. Over half of McGill students claim a first language other than English, including approximately 19% of our students who say French is their mother tongue.

About the INRS
The Institut National de la Recherche Scientifique (INRS) is the only institution in Québec dedicated exclusively to graduate level university research and training. The impacts of its faculty and students are felt around the world. INRS proudly contributes to societal progress in partnership with industry and community stakeholders, both through its discoveries and by training new researchers and technicians to deliver scientific, social, and technological breakthroughs in the future.

Lakehead University
Lakehead University is a fully comprehensive university with approximately 9,700 full-time equivalent students and over 2,000 faculty and staff at two campuses in Orillia and Thunder Bay, Ontario. Lakehead has 10 faculties, including Business Administration, Education, Engineering, Graduate Studies, Health & Behavioural Sciences, Law, Natural Resources Management, the Northern Ontario School of Medicine, Science & Environmental Studies, and Social Sciences & Humanities. In 2019, Maclean’s 2020 University Rankings, once again, included Lakehead University among Canada’s Top 10 primarily undergraduate universities, while Research Infosource named Lakehead ‘Research University of the Year’ in its category for the fifth consecutive year. Visit www.lakeheadu.ca

I’m a little surprised there wasn’t a quote from one of the Italian researchers in the McGill news release but then there isn’t a quote in this slightly more accessible May 18, 2020 Consiglio Nazionale delle Ricerche press release either,

Graphene’s isolation took the world by surprise and was meant to revolutionize modern electronics. However, it was soon realized that its intrinsic properties limit its use in our daily electronic devices. When a concept from mathematics, namely topology, met the field of on-surface chemistry, new materials with exotic features were theoretically discovered. Topological materials exhibit technologically relevant properties, such as quantum Hall conductivity, that are protected by a concept similar to the comparison of a coffee mug and a donut. These structures can be synthesized with the versatile molecular engineering toolbox that surface reactions provide. Realizing such a material would give access to properties that suit the figures of merit for modern electronic applications and could, for example, eventually help solve the ever-increasing heat problem in chip design. However, problems such as low crystallinity and defect-rich structures prevented experimental observation and kept the field, for more than a decade, a playground investigated only theoretically.

An international team of scientists from the Institut National de la Recherche Scientifique (Centre Energie, Matériaux et Télécommunications), McGill University and Lakehead University, all located in Canada, and the SAMOS laboratory of the Istituto di Struttura della Materia (Cnr), led by Giorgio Contini, demonstrates in a recent publication in Nature Materials that the synthesis of two-dimensional π-conjugated polymers with topological Dirac cones and flat bands has become a reality, allowing a sneak peek into the world of organic topological materials.

Complementary work by organic chemists and surface scientists led to two-dimensional polymers on a mesoscopic scale and granted access to their electronic properties. The band structure of the topological polymer reveals both flat bands and a Dirac cone, confirming the predictions of theory. The observed coexistence of the two structures is of particular interest: whereas Dirac cones yield the massless charge carriers (a band velocity of the same order of magnitude as graphene’s has been obtained) necessary for technological applications, flat bands quench the kinetic energy of charge carriers and could give rise to intriguing phenomena such as the anomalous Hall effect, surface superconductivity or superfluid transport.

This work opens multiple new roads, both theoretical and experimental in nature. The integration of this topological polymer into a device such as a transistor could reveal immense performance. It should also spur many researchers to explore a wide range of two-dimensional polymers with different lattice symmetries, providing insight into the relationship between geometrical and electrical topology, which would in turn help fine-tune a priori theoretical studies. These materials, beyond graphene, could then be used both for their intrinsic properties and for their interplay in new heterostructure designs.

The authors are currently exploring the practical use of the realized material trying to integrate it into transistors, pushing toward a complete designing of artificial topological lattices.

This work was partially supported by a project Grande Rilevanza Italy-Quebec of the Italian Ministero degli Affari Esteri e della Cooperazione Internazionale (MAECI), Direzione Generale per la Promozione del Sistema Paese.
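The coexistence of Dirac cones and flat bands that the press release highlights is a generic consequence of certain lattice geometries. As an illustration (a textbook kagome tight-binding model, not the lattice of the Nature Materials paper), a few lines of code verify numerically that one band stays perfectly flat while the others disperse:

```python
# Nearest-neighbor tight-binding model on a kagome lattice, a textbook
# geometry whose spectrum combines Dirac cones with a perfectly flat
# band. An illustration of the general phenomenon only; the polymer in
# the paper has its own lattice.
import numpy as np

t = 1.0                                   # hopping amplitude (arbitrary units)
d1 = np.array([0.5, 0.0])                 # vectors between the three
d2 = np.array([0.25, np.sqrt(3) / 4])     # sublattice sites of the
d3 = d2 - d1                              # kagome unit cell

def bands(k):
    """Eigenvalues (ascending) of the 3x3 Bloch Hamiltonian at k."""
    c1, c2, c3 = np.cos(k @ d1), np.cos(k @ d2), np.cos(k @ d3)
    h = -2 * t * np.array([[0, c1, c2],
                           [c1, 0, c3],
                           [c2, c3, 0]])
    return np.linalg.eigvalsh(h)

# Sample the Brillouin zone: the top band sits at E = 2t everywhere,
# while the two lower bands disperse (and touch in Dirac cones).
ks = [np.random.uniform(-np.pi, np.pi, size=2) for _ in range(200)]
top_band = [bands(k)[-1] for k in ks]
print(f"top band: min = {min(top_band):.6f}, max = {max(top_band):.6f}")
```

In this geometry the flat band arises from destructive interference of hopping paths, the same kind of mechanism that makes flat bands possible in 2D polymer lattices.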

The Italians also included an image to accompany their press release,

Image of the synthesized material and its band structure Courtesy: Consiglio Nazionale delle Ricerche

My heart sank when I saw the number of authors for this paper (WordPress no longer [since their Christmas 2018 update] makes it easy to add the author’s names quickly to the ‘tags field’). Regardless and in keeping with my practice, here’s a link to and a citation for the paper,

Synthesis of mesoscale ordered two-dimensional π-conjugated polymers with semiconducting properties by G. Galeotti, F. De Marchi, E. Hamzehpoor, O. MacLean, M. Rajeswara Rao, Y. Chen, L. V. Besteiro, D. Dettmann, L. Ferrari, F. Frezza, P. M. Sheverdyaeva, R. Liu, A. K. Kundu, P. Moras, M. Ebrahimi, M. C. Gallagher, F. Rosei, D. F. Perepichka & G. Contini. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0682-z Published 18 May 2020

This paper is behind a paywall.

Brain-inspired electronics with organic memristors for wearable computing

I went down a rabbit hole while trying to figure out the difference between ‘organic’ memristors and standard memristors. I have put the results of my investigation at the end of this post. First, there’s the news.

An April 21, 2020 news item on ScienceDaily explains why researchers are so focused on memristors and brainlike computing,

The advent of artificial intelligence, machine learning and the internet of things is expected to change modern electronics and bring forth the fourth Industrial Revolution. The pressing question for many researchers is how to handle this technological revolution.

“It is important for us to understand that the computing platforms of today will not be able to sustain at-scale implementations of AI algorithms on massive datasets,” said Thirumalai Venkatesan, one of the authors of a paper published in Applied Physics Reviews, from AIP Publishing.

“Today’s computing is way too energy-intensive to handle big data. We need to rethink our approaches to computation on all levels: materials, devices and architecture that can enable ultralow energy computing.”

An April 21, 2020 American Institute of Physics (AIP) news release (also on EurekAlert), which originated the news item, describes the authors’ approach to the problems with organic memristors,

Brain-inspired electronics with organic memristors could offer a functionally promising and cost- effective platform, according to Venkatesan. Memristive devices are electronic devices with an inherent memory that are capable of both storing data and performing computation. Since memristors are functionally analogous to the operation of neurons, the computing units in the brain, they are optimal candidates for brain-inspired computing platforms.

Until now, oxides have been the leading candidate as the optimum material for memristors. Different material systems have been proposed but none have been successful so far.

“Over the last 20 years, there have been several attempts to come up with organic memristors, but none of those have shown any promise,” said Sreetosh Goswami, lead author on the paper. “The primary reason behind this failure is their lack of stability, reproducibility and ambiguity in mechanistic understanding. At a device level, we are now able to solve most of these problems.”

This new generation of organic memristors is developed based on metal azo complex devices, which are the brainchild of Sreebrata Goswami, a professor at the Indian Association for the Cultivation of Science in Kolkata and another author on the paper.

“In thin films, the molecules are so robust and stable that these devices can eventually be the right choice for many wearable and implantable technologies or a body net, because these could be bendable and stretchable,” said Sreebata Goswami. A body net is a series of wireless sensors that stick to the skin and track health.

The next challenge will be to produce these organic memristors at scale, said Venkatesan.

“Now we are making individual devices in the laboratory. We need to make circuits for large-scale functional implementation of these devices.”

Caption: The device structure at a molecular level. The gold nanoparticles on the bottom electrode enhance the field enabling an ultra-low energy operation of the molecular device. Credit: Sreetosh Goswami, Sreebrata Goswami and Thirumalai Venky Venkatesan

Here’s a link to and a citation for the paper,

An organic approach to low energy memory and brain inspired electronics by Sreetosh Goswami, Sreebrata Goswami, and T. Venkatesan. Applied Physics Reviews 7, 021303 (2020) DOI: https://doi.org/10.1063/1.5124155

This paper is open access.

Basics about memristors and organic memristors

This undated article on Nanowerk provides a relatively complete and technical description of memristors in general (Note: A link has been removed),

A memristor (named as a portmanteau of memory and resistor) is a non-volatile electronic memory device that was first theorized by Leon Ong Chua in 1971 as the fourth fundamental two-terminal circuit element following the resistor, the capacitor, and the inductor (IEEE Transactions on Circuit Theory, “Memristor-The missing circuit element”).

Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function). Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if the device loses power.

However, it was only almost 40 years later that the first practical device was fabricated. This was in 2008, when a group led by Stanley Williams at HP Research Labs realized that switching of the resistance between a conducting and less conducting state in metal-oxide thin-film devices was showing Leon Chua’s memristor behavior. …
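The switching behavior Williams’ group observed can be reproduced qualitatively with the linear ionic-drift model from their 2008 work. The parameter values below are illustrative, not the HP device’s; the point is the memristor’s signature pinched hysteresis loop, where current and voltage always cross zero together:

```python
# Linear ionic-drift memristor model (after Strukov et al., 2008).
# All parameter values are illustrative, not the HP device's.
import math

R_ON, R_OFF = 100.0, 16e3   # limiting resistances (ohms): doped / undoped
D = 10e-9                   # film thickness, m
MU = 1e-14                  # dopant mobility, m^2 / (V s)

def simulate(v_amp=1.0, freq=1.0, w0=0.1 * D, steps=20000, t_end=2.0):
    """Euler-integrate the state w under a sinusoidal drive.

    Returns (voltage, current) samples. The resistance is a mix of
    R_ON and R_OFF weighted by the doped-region width w, and the
    current itself drifts that boundary: the device's memory.
    """
    dt = t_end / steps
    w, out = w0, []
    for n in range(steps):
        v = v_amp * math.sin(2 * math.pi * freq * n * dt)
        r = R_ON * (w / D) + R_OFF * (1 - w / D)
        i = v / r
        w += MU * R_ON / D * i * dt       # linear drift of the boundary
        w = min(max(w, 0.0), D)           # boundary stays inside the film
        out.append((v, i))
    return out

samples = simulate()
# Plotting i against v traces two lobes that pinch through the origin,
# the fingerprint Chua predicted and the HP team measured.
```

Because the resistance depends on the accumulated history of the current, the device “remembers” its state with no power applied, which is the non-volatility described above.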

The article on Nanowerk includes an embedded video presentation on memristors given by Stanley Williams (also known as R. Stanley Williams).

Mention of an ‘organic’ memristor can be found in an October 31, 2017 article by Ryan Whitwam,

The memristor is composed of the transition metal ruthenium complexed with “azo-aromatic ligands.” [emphasis mine] The theoretical work enabling this material was performed at Yale, and the organic molecules were synthesized at the Indian Association for the Cultivation of Sciences. …

I highlighted ‘ligands’ because that appears to be the difference. However, there is more than one type of ligand on Wikipedia.

First, there’s the Ligand (biochemistry) entry (Note: Links have been removed),

In biochemistry and pharmacology, a ligand is a substance that forms a complex with a biomolecule to serve a biological purpose. …

Then, there’s the Ligand entry,

In coordination chemistry, a ligand[help 1] is an ion or molecule (functional group) that binds to a central metal atom to form a coordination complex …

Finally, there’s the Ligand (disambiguation) entry (Note: Links have been removed),

  • Ligand, an atom, ion, or functional group that donates one or more of its electrons through a coordinate covalent bond to one or more central atoms or ions
  • Ligand (biochemistry), a substance that binds to a protein
  • a ‘guest’ in host–guest chemistry

I did take a look at the paper and did not see any references to proteins or other biomolecules that I could recognize as such. I’m not sure why the researchers are describing their device as an ‘organic’ memristor but this may reflect a shortcoming in the definitions I have found or shortcomings in my reading of the paper rather than an error on their parts.

Hopefully, more research will be forthcoming and it will be possible to better understand the terminology.