Category Archives: neuromorphic engineering

Brainlike transistor and human intelligence

This brainlike transistor (not a memristor) is important because it functions at room temperature, unlike earlier devices of its kind, which require cryogenic temperatures.

A December 20, 2023 Northwestern University news release (received via email; also on EurekAlert) fills in the details,

  • Researchers develop transistor that simultaneously processes and stores information like the human brain
  • Transistor goes beyond categorization tasks to perform associative learning
  • Transistor identified similar patterns, even when given imperfect input
  • Previous similar devices could only operate at cryogenic temperatures; new transistor operates at room temperature, making it more practical

EVANSTON, Ill. — Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.

Designed by researchers at Northwestern University, Boston College and the Massachusetts Institute of Technology (MIT), the device simultaneously processes and stores information just like the human brain. In new experiments, the researchers demonstrated that the transistor goes beyond simple machine-learning tasks to categorize data and is capable of performing associative learning.

Although previous studies have leveraged similar strategies to develop brain-like computing devices, those transistors cannot function outside cryogenic temperatures. The new device, by contrast, is stable at room temperatures. It also operates at fast speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.

The study was published today (Dec. 20 [2023]) in the journal Nature.

“The brain has a fundamentally different architecture than a digital computer,” said Northwestern’s Mark C. Hersam, who co-led the research. “In a digital computer, data move back and forth between a microprocessor and memory, which consumes a lot of energy and creates a bottleneck when attempting to perform multiple tasks at the same time. On the other hand, in the brain, memory and information processing are co-located and fully integrated, resulting in orders of magnitude higher energy efficiency. Our synaptic transistor similarly achieves concurrent memory and information processing functionality to more faithfully mimic the brain.”

Hersam is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering. He also is chair of the department of materials science and engineering, director of the Materials Research Science and Engineering Center and member of the International Institute for Nanotechnology. Hersam co-led the research with Qiong Ma of Boston College and Pablo Jarillo-Herrero of MIT.

Recent advances in artificial intelligence (AI) have motivated researchers to develop computers that operate more like the human brain. Conventional digital computing systems have separate processing and storage units, causing data-intensive tasks to devour large amounts of energy. With smart devices continuously collecting vast quantities of data, researchers are scrambling to uncover new ways to process it all without consuming an increasing amount of power. Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function. But memristors still suffer from energy-costly switching.

“For several decades, the paradigm in electronics has been to build everything out of transistors and use the same silicon architecture,” Hersam said. “Significant progress has been made by simply packing more and more transistors into integrated circuits. You cannot deny the success of that strategy, but it comes at the cost of high power consumption, especially in the current era of big data where digital computing is on track to overwhelm the grid. We have to rethink computing hardware, especially for AI and machine-learning tasks.”

To rethink this paradigm, Hersam and his team explored new advances in the physics of moiré patterns, a type of geometrical design that arises when two patterns are layered on top of one another. When two-dimensional materials are stacked, new properties emerge that do not exist in one layer alone. And when those layers are twisted to form a moiré pattern, unprecedented tunability of electronic properties becomes possible.

For the new device, the researchers combined two different types of atomically thin materials: bilayer graphene and hexagonal boron nitride. When stacked and purposefully twisted, the materials formed a moiré pattern. By rotating one layer relative to the other, the researchers could achieve different electronic properties in each graphene layer even though they are separated by only atomic-scale dimensions. With the right choice of twist, researchers harnessed moiré physics for neuromorphic functionality at room temperature.

“With twist as a new design parameter, the number of permutations is vast,” Hersam said. “Graphene and hexagonal boron nitride are very similar structurally but just different enough that you get exceptionally strong moiré effects.”
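For readers who like numbers, the period of a moiré superlattice can be estimated from the twist angle and the small lattice mismatch between graphene and hexagonal boron nitride. Here's a minimal Python sketch, assuming the standard rigid-lattice formula; the function name and sample values are mine, not the researchers',

```python
import numpy as np

def moire_period(theta_deg, a=0.246, delta=0.018):
    """Moire superlattice period (nm) for two hexagonal lattices with
    lattice mismatch delta and relative twist theta (rigid-lattice
    approximation). a = graphene lattice constant in nm; delta ~ 1.8%
    is the graphene/hBN mismatch."""
    theta = np.radians(theta_deg)
    return (1 + delta) * a / np.sqrt(2 * (1 + delta) * (1 - np.cos(theta)) + delta**2)

for twist in (0.0, 0.5, 1.0, 2.0):
    print(f"twist {twist:3.1f} deg -> period {moire_period(twist):5.2f} nm")
# Zero twist gives the familiar ~14 nm graphene/hBN moire pattern; even
# half a degree of twist shrinks it noticeably, which is why the twist
# angle is such a sensitive knob on the electronic properties.
```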

To test the transistor, Hersam and his team trained it to recognize similar — but not identical — patterns. Just earlier this month, Hersam introduced a new nanoelectronic device capable of analyzing and categorizing data in an energy-efficient manner, but his new synaptic transistor takes machine learning and AI one leap further.

“If AI is meant to mimic human thought, one of the lowest-level tasks would be to classify data, which is simply sorting into bins,” Hersam said. “Our goal is to advance AI technology in the direction of higher-level thinking. Real-world conditions are often more complicated than current AI algorithms can handle, so we tested our new devices under more complicated conditions to verify their advanced capabilities.”

First the researchers showed the device one pattern: 000 (three zeros in a row). Then, they asked the AI to identify similar patterns, such as 111 or 101. “If we trained it to detect 000 and then gave it 111 and 101, it knows 111 is more similar to 000 than 101,” Hersam explained. “000 and 111 are not exactly the same, but both are three digits in a row. Recognizing that similarity is a higher-level form of cognition known as associative learning.”

In experiments, the new synaptic transistor successfully recognized similar patterns, displaying its associative memory. Even when the researchers threw curveballs — like giving it incomplete patterns — it still successfully demonstrated associative learning.
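To be clear, the little sketch below has nothing to do with the transistor's physics. It's just a plain-Python illustration, with a made-up feature of my own choosing, of what it means to judge similarity by structure ("digits in a row") rather than by exact bit overlap, and to tolerate missing input,

```python
def runs(p):
    """Made-up structural feature: count adjacent positions that agree.
    A '?' (missing input) is given the benefit of the doubt.
    '000' and '111' both score 2; '101' scores 0."""
    return sum(a == b or '?' in (a, b) for a, b in zip(p, p[1:]))

def similarity(stored, probe):
    # Compare the probe to the stored pattern by structure, not bit overlap.
    return -abs(runs(stored) - runs(probe))

stored = "000"
for probe in ("111", "101", "0?0"):
    print(probe, "->", similarity(stored, probe))
# '111' scores as more similar to '000' than '101' does (both are "three
# digits in a row"), and the incomplete probe '0?0' still matches,
# loosely mirroring the associative behaviour described above.
```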

“Current AI can be easy to confuse, which can cause major problems in certain contexts,” Hersam said. “Imagine if you are using a self-driving vehicle, and the weather conditions deteriorate. The vehicle might not be able to interpret the more complicated sensor data as well as a human driver could. But even when we gave our transistor imperfect input, it could still identify the correct response.”

The study, “Moiré synaptic transistor with room-temperature neuromorphic functionality,” was primarily supported by the National Science Foundation.

Here’s a link to and a citation for the paper,

Moiré synaptic transistor with room-temperature neuromorphic functionality by Xiaodong Yan, Zhiren Zheng, Vinod K. Sangwan, Justin H. Qian, Xueqiao Wang, Stephanie E. Liu, Kenji Watanabe, Takashi Taniguchi, Su-Yang Xu, Pablo Jarillo-Herrero, Qiong Ma & Mark C. Hersam. Nature volume 624, pages 551–556 (2023) DOI: https://doi.org/10.1038/s41586-023-06791-1 Published online: 20 December 2023 Issue Date: 21 December 2023

This paper is behind a paywall.

Striking similarity between memory processing of artificial intelligence (AI) models and hippocampus of the human brain

A December 18, 2023 news item on ScienceDaily shifted my focus from hardware to software when considering memory in brainlike (neuromorphic) computing,

An interdisciplinary team consisting of researchers from the Center for Cognition and Sociality and the Data Science Group within the Institute for Basic Science (IBS) [Korea] revealed a striking similarity between the memory processing of artificial intelligence (AI) models and the hippocampus of the human brain. This new finding provides a novel perspective on memory consolidation, which is a process that transforms short-term memories into long-term ones, in AI systems.

A November 28 (?), 2023 IBS press release (also on EurekAlert but published December 18, 2023), which originated the news item, describes how the team went about its research,

In the race towards developing Artificial General Intelligence (AGI), with influential entities like OpenAI and Google DeepMind leading the way, understanding and replicating human-like intelligence has become an important research interest. Central to these technological advancements is the Transformer model [Figure 1], whose fundamental principles are now being explored in new depth.

The key to powerful AI systems is grasping how they learn and remember information. The team applied principles of human brain learning, specifically concentrating on memory consolidation through the NMDA receptor in the hippocampus, to AI models.

The NMDA receptor is like a smart door in your brain that facilitates learning and memory formation. When a brain chemical called glutamate is present, the nerve cell undergoes excitation. On the other hand, a magnesium ion acts as a small gatekeeper blocking the door. Only when this ionic gatekeeper steps aside are substances allowed to flow into the cell. This is the process that allows the brain to create and keep memories, and the gatekeeper (the magnesium ion) plays quite a specific role in the whole process.

The team made a fascinating discovery: the Transformer model seems to use a gatekeeping process similar to the brain’s NMDA receptor [see Figure 1]. This revelation led the researchers to investigate if the Transformer’s memory consolidation can be controlled by a mechanism similar to the NMDA receptor’s gating process.

In the animal brain, a low magnesium level is known to weaken memory function. The researchers found that long-term memory in the Transformer can be improved by mimicking the NMDA receptor. Just like in the brain, where changing magnesium levels affect memory strength, tweaking the Transformer’s parameters to reflect the gating action of the NMDA receptor led to enhanced memory in the AI model. This breakthrough finding suggests that how AI models learn can be explained with established knowledge in neuroscience.
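For the curious, one common biophysical way to write that magnesium gate (a Jahr-Stevens-style block) fits in a few lines of Python. This is my own sketch of the general idea, not the paper's exact parameterization; treat alpha as a stand-in for the magnesium-like knob,

```python
import numpy as np

def nmda_like(x, alpha=1.0):
    """Pass input x through a sigmoidal 'gate' whose openness depends on
    x itself. alpha stands in for the magnesium concentration: larger
    alpha = stronger block = more drive needed to open the gate."""
    return x / (1.0 + alpha * np.exp(-x))

x = np.linspace(-4, 4, 9)
for a in (0.1, 1.0, 10.0):
    print(f"alpha={a:4.1f}:", np.round(nmda_like(x, a), 2))
# Small alpha: nearly linear (the gate is mostly open). Large alpha: small
# and negative inputs are suppressed, echoing the blocked NMDA receptor's
# current-voltage curve. Note that alpha = 1 reduces to the SiLU-type
# nonlinearity already common in Transformer feed-forward layers.
```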

C. Justin LEE, a neuroscientist and director at the institute, said, “This research makes a crucial step in advancing AI and neuroscience. It allows us to delve deeper into the brain’s operating principles and develop more advanced AI systems based on these insights.”

CHA Meeyoung, a data scientist on the team who is also at KAIST [Korea Advanced Institute of Science and Technology], notes, “The human brain is remarkable in how it operates with minimal energy, unlike the large AI models that need immense resources. Our work opens up new possibilities for low-cost, high-performance AI systems that learn and remember information like humans.”

What sets this study apart is its initiative to incorporate brain-inspired nonlinearity into an AI construct, signifying a significant advancement in simulating human-like memory consolidation. The convergence of human cognitive mechanisms and AI design not only holds promise for creating low-cost, high-performance AI systems but also provides valuable insights into the workings of the brain through AI models.

Fig. 1: (a) Diagram illustrating the ion channel activity in post-synaptic neurons. AMPA receptors are involved in the activation of post-synaptic neurons, while NMDA receptors are blocked by magnesium ions (Mg²⁺) but induce synaptic plasticity through the influx of calcium ions (Ca²⁺) when the post-synaptic neuron is sufficiently activated. (b) Flow diagram representing the computational process within the Transformer AI model. Information is processed sequentially through stages such as feed-forward layers, layer normalization, and self-attention layers. The graph depicting the current-voltage relationship of the NMDA receptors is very similar to the nonlinearity of the feed-forward layer. The input-output graph, based on the concentration of magnesium (α), shows the changes in the nonlinearity of the NMDA receptors. Courtesy: IBS

This research was presented at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023) before being published in the proceedings. I found a PDF of the presentation and an early online copy of the paper before locating the paper in the published proceedings.

PDF of presentation: Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity

PDF copy of paper:

Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity by Dong-Kyum Kim, Jea Kwon, Meeyoung Cha, C. Justin Lee.

This paper was made available on OpenReview.net:

OpenReview is a platform for open peer review, open publishing, open access, open discussion, open recommendations, open directory, open API and open source.

It’s not clear to me if this paper is finalized or not and I don’t know if its presence on OpenReview constitutes publication.

Finally, the paper published in the proceedings,

Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity by Dong Kyum Kim, Jea Kwon, Meeyoung Cha, C. Justin Lee. Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track

This link will take you to the abstract, access the paper by clicking on the Paper tab.

Brain-inspired (neuromorphic) computing with twisted magnets and a patent for manufacturing permanent magnets without rare earths

I have two news bits, both of them concerned with magnets.

Patent for magnets that can be made without rare earths

I’m starting with the patent news first since this is (as the company notes in its news release) a “Landmark Patent Issued for Technology Critically Needed to Combat Chinese Monopoly.”

For those who don’t know, China supplies most of the rare earths used in computers, smart phones, and other devices. On general principles, having a single supplier dominate production of and access to a necessary material for devices that most of us rely on can raise tensions. Plus, you can’t mine for resources forever.

This December 19, 2023 Nanocrystal Technology LP news release heralds an exciting development (for the impatient, further down the page I have highlighted the salient sections),

Nanotechnology Discovery by 2023 Nobel Prize Winner Became Launch Pad to Create Permanent Magnets without Rare Earths from China

NEW YORK, NY, UNITED STATES, December 19, 2023 /EINPresswire.com/ — Integrated Nano-Magnetics Corp, a wholly owned subsidiary of Nanocrystal Technology LP, was awarded a patent for technology built upon a fundamental nanoscience discovery made by Aleksey Yekimov, its former Chief Scientific Officer.

This patent will enable the creation of strong permanent magnets which are critically needed for both industrial and military applications but cannot be manufactured without certain “rare earth” elements available mostly from China.

At a glittering awards ceremony held in Stockholm on December 10, 2023, three scientists, Aleksey Yekimov, Louis Brus (Professor at Columbia University) and Moungi Bawendi (Professor at MIT), were honored with the Nobel Prize in Chemistry for their discovery of the “quantum dot,” which is now fueling practical applications in tuning the colors of LEDs, increasing the resolution of TV screens, and improving MRI imaging.

As stated by the Royal Swedish Academy of Sciences, “Quantum dots are … bringing the greatest benefits to humankind. Researchers believe that in the future they could contribute to flexible electronics, tiny sensors, thinner solar cells, and encrypted quantum communications – so we have just started exploring the potential of these tiny particles.”

Aleksey Yekimov worked for over 19 years until his retirement as Chief Scientific Officer of Nanocrystals Technology LP, an R & D company in New York founded by two Indian-American entrepreneurs, Rameshwar Bhargava and Rajan Pillai.

Yekimov, who was born in Russia, had already received the highest scientific honors for his work before he immigrated to the USA in 1999. Yekimov was greatly intrigued by Nanocrystal Technology’s research project and chose to join the company as its Chief Scientific Officer.

During its early years, the company worked on efficient light generation by doping host nanoparticles about the same size as a quantum dot with an additional impurity atom. Bhargava came up with the novel idea of incorporating a single impurity atom, a dopant, into a quantum-dot-sized host, thus achieving an extraordinary change in the host material’s properties, such as inducing strong permanent magnetism in weak, readily available paramagnetic materials. To get a sense of the scale at which nanotechnology works, and as vividly illustrated by the Nobel Foundation, the difference in size between a quantum dot and a soccer ball is about the same as the difference between a soccer ball and planet Earth.
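That size analogy checks out with rough numbers (mine, for illustration: a ~5 nm dot, a 22 cm ball, and Earth's ~12,700 km diameter),

```python
quantum_dot = 5e-9    # m, a typical quantum dot diameter (assumed)
soccer_ball = 0.22    # m, regulation ball diameter
earth       = 1.27e7  # m, Earth's mean diameter

print(f"ball/dot   ratio: {soccer_ball / quantum_dot:.1e}")  # ~4e7
print(f"earth/ball ratio: {earth / soccer_ball:.1e}")        # ~6e7
# The two ratios agree to well within an order of magnitude.
```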

Currently, strong permanent magnets are manufactured from “rare earths” available mostly in China which has established a near monopoly on the supply of rare-earth based strong permanent magnets. Permanent magnets are a fundamental building block for electro-mechanical devices such as motors found in all automobiles including electric vehicles, trucks and tractors, military tanks, wind turbines, aircraft engines, missiles, etc. They are also required for the efficient functioning of audio equipment such as speakers and cell phones as well as certain magnetic storage media.

The existing market for permanent magnets is $28 billion and is projected to reach $50 billion by 2030 in view of the huge increase in usage of electric vehicles. China’s overwhelming dominance in this field has become a matter of great concern to governments of all Western and other industrialized nations. As the Wall St. Journal put it, China now has a “stranglehold” on the economies and security of other countries.

The possibility of making permanent magnets without the use of any rare earths mined in China has intrigued leading physicists and chemists for nearly 30 years. On December 19, 2023, a U.S. patent with the title “Strong Non Rare Earth Permanent Magnets from Double Doped Magnetic Nanoparticles” was granted to Integrated Nano-Magnetics Corp. [emphasis mine] Referring to this major accomplishment Bhargava said, “The pioneering work done by Yekimov, Brus and Bawendi has provided the foundation for us to make other discoveries in nanotechnology which will be of great benefit to the world.”

I was not able to find any company websites. The best I could find is a Nanocrystals Technology LinkedIn webpage and some limited corporate data for Integrated Nano-Magnetics on opencorporates.com.

Twisted magnets and brain-inspired computing

This research offers a pathway to neuromorphic (brainlike) computing with chiral (or twisted) magnets, which, as best as I understand it, do not require rare earths. From a November 13, 2023 news item on ScienceDaily,

A form of brain-inspired computing that exploits the intrinsic physical properties of a material to dramatically reduce energy use is now a step closer to reality, thanks to a new study led by UCL [University College London] and Imperial College London [ICL] researchers.

In the new study, published in the journal Nature Materials, an international team of researchers used chiral (twisted) magnets as their computational medium and found that, by applying an external magnetic field and changing temperature, the physical properties of these materials could be adapted to suit different machine-learning tasks.

A November 9, 2023 UCL press release (also on EurekAlert but published November 13, 2023), which originated the news item, fills in a few more details about the research,

Dr Oscar Lee (London Centre for Nanotechnology at UCL and UCL Department of Electronic & Electrical Engineering), the lead author of the paper, said: “This work brings us a step closer to realising the full potential of physical reservoirs to create computers that not only require significantly less energy, but also adapt their computational properties to perform optimally across various tasks, just like our brains.

“The next step is to identify materials and device architectures that are commercially viable and scalable.”

Traditional computing consumes large amounts of electricity. This is partly because it has separate units for data storage and processing, meaning information has to be shuffled constantly between the two, wasting energy and producing heat. This is particularly a problem for machine learning, which requires vast datasets for processing. Training one large AI model can generate hundreds of tonnes of carbon dioxide.

Physical reservoir computing is one of several neuromorphic (or brain inspired) approaches that aims to remove the need for distinct memory and processing units, facilitating more efficient ways to process data. In addition to being a more sustainable alternative to conventional computing, physical reservoir computing could be integrated into existing circuitry to provide additional capabilities that are also energy efficient.

In the study, involving researchers in Japan and Germany, the team used a vector network analyser to determine the energy absorption of chiral magnets at different magnetic field strengths and temperatures ranging from -269 °C to room temperature.

They found that different magnetic phases of chiral magnets excelled at different types of computing task. The skyrmion phase, where magnetised particles are swirling in a vortex-like pattern, had a potent memory capacity apt for forecasting tasks. The conical phase, meanwhile, had little memory, but its non-linearity was ideal for transformation tasks and classification – for instance, identifying if an animal is a cat or dog.
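For anyone new to reservoir computing, the key point is that the "reservoir" itself (here, the chiral magnet) is never trained; only a simple linear readout of its measured responses is. Here's a minimal sketch of that training scheme, with a fixed random numerical map standing in for the magnet; all names and numbers are mine,

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the physical reservoir: a fixed random nonlinear map whose
# state retains a fading memory of past inputs. In the experiment, this
# role is played by the chiral magnet's measured microwave absorption.
N = 50
W_in = rng.normal(size=N)
W = rng.normal(scale=0.1, size=(N, N))

def run_reservoir(u):
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)  # the fixed, untrained "physics"
        states.append(x)
    return np.array(states)

# Task: one-step-ahead forecasting, the kind of memory-hungry task the
# skyrmion phase is reported to suit.
u = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)
X, y = run_reservoir(u)[:-1], u[1:]

# The only trained component: a linear (ridge-regression) readout.
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```

Switching the material into a different magnetic phase amounts to swapping in a reservoir with different memory and nonlinearity, while the cheap readout training stays the same.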

Co-author Dr Jack Gartside, of Imperial College London, said: “Our collaborators at UCL in the group of Professor Hidekazu Kurebayashi recently identified a promising set of materials for powering unconventional computing. These materials are special as they can support an especially rich and varied range of magnetic textures. Working with the lead author Dr Oscar Lee, the Imperial College London group [led by Dr Gartside, Kilian Stenning and Professor Will Branford] designed a neuromorphic computing architecture to leverage the complex material properties to match the demands of a diverse set of challenging tasks. This gave great results, and showed how reconfiguring physical phases can directly tailor neuromorphic computing performance.”

The work also involved researchers at the University of Tokyo and Technische Universität München and was supported by the Leverhulme Trust, Engineering and Physical Sciences Research Council (EPSRC), Imperial College London President’s Excellence Fund for Frontier Research, Royal Academy of Engineering, the Japan Science and Technology Agency, Katsu Research Encouragement Award, Asahi Glass Foundation, and the DFG (German Research Foundation).

Here’s a link to and a citation for the paper,

Task-adaptive physical reservoir computing by Oscar Lee, Tianyi Wei, Kilian D. Stenning, Jack C. Gartside, Dan Prestwood, Shinichiro Seki, Aisha Aqeel, Kosuke Karube, Naoya Kanazawa, Yasujiro Taguchi, Christian Back, Yoshinori Tokura, Will R. Branford & Hidekazu Kurebayashi. Nature Materials volume 23, pages 79–87 (2024) DOI: https://doi.org/10.1038/s41563-023-01698-8 Published online: 13 November 2023 Issue Date: January 2024

This paper is open access.

Physical neural network based on nanowires can learn and remember ‘on the fly’

A November 1, 2023 news item on Nanowerk announced new work on neuromorphic engineering from Australia,

For the first time, a physical neural network has successfully been shown to learn and remember ‘on the fly’, in a way inspired by and similar to how the brain’s neurons work.

The result opens a pathway for developing efficient and low-energy machine intelligence for more complex, real-world learning and memory tasks.

Key Takeaways
*The nanowire-based system can learn and remember ‘on the fly,’ processing dynamic, streaming data for complex learning and memory tasks.

*This advancement overcomes the challenge of heavy memory and energy usage commonly associated with conventional machine learning models.

*The technology achieved a 93.4% accuracy rate in image recognition tasks, using real-time data from the MNIST database of handwritten digits.

*The findings promise a new direction for creating efficient, low-energy machine intelligence applications, such as real-time sensor data processing.

Caption: Electron microscope image of the nanowire neural network that arranges itself like ‘Pick Up Sticks’. The junctions where the nanowires overlap act in a way similar to how our brain’s synapses operate, responding to electric current. Credit: The University of Sydney

A November 1, 2023 University of Sydney news release (also on EurekAlert), which originated the news item, elaborates on the research,

Published today [November 1, 2023] in Nature Communications, the research is a collaboration between scientists at the University of Sydney and University of California at Los Angeles.

Lead author Ruomin Zhu, a PhD student from the University of Sydney Nano Institute and School of Physics, said: “The findings demonstrate how brain-inspired learning and memory functions using nanowire networks can be harnessed to process dynamic, streaming data.”

Nanowire networks are made up of tiny wires that are just billionths of a metre in diameter. The wires arrange themselves into patterns reminiscent of the children’s game ‘Pick Up Sticks’, mimicking neural networks, like those in our brains. These networks can be used to perform specific information processing tasks.

Memory and learning tasks are achieved using simple algorithms that respond to changes in electronic resistance at junctions where the nanowires overlap. Known as ‘resistive memory switching’, this function is created when electrical inputs encounter changes in conductivity, similar to what happens with synapses in our brain.

In this study, researchers used the network to recognise and remember sequences of electrical pulses corresponding to images, inspired by the way the human brain processes information.

Supervising researcher Professor Zdenka Kuncic said the memory task was similar to remembering a phone number. The network was also used to perform a benchmark image recognition task, accessing images in the MNIST database of handwritten digits, a collection of 70,000 small greyscale images used in machine learning.

“Our previous research established the ability of nanowire networks to remember simple tasks. This work has extended these findings by showing tasks can be performed using dynamic data accessed online,” she said.

“This is a significant step forward as achieving an online learning capability is challenging when dealing with large amounts of data that can be continuously changing. A standard approach would be to store data in memory and then train a machine learning model using that stored information. But this would chew up too much energy for widespread application.

“Our novel approach allows the nanowire neural network to learn and remember ‘on the fly’, sample by sample, extracting data online, thus avoiding heavy memory and energy usage.”

Mr Zhu said there were other advantages when processing information online.

“If the data is being streamed continuously, such as it would be from a sensor for instance, machine learning that relied on artificial neural networks would need to have the ability to adapt in real-time, which they are currently not optimised for,” he said.

In this study, the nanowire neural network displayed a benchmark machine learning capability, scoring 93.4 percent in correctly identifying test images. The memory task involved recalling sequences of up to eight digits. For both tasks, data was streamed into the network to demonstrate its capacity for online learning and to show how memory enhances that learning.
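To make the batch-versus-streaming distinction concrete in software terms, here is a minimal sketch of sample-by-sample (online) learning: a plain logistic learner on made-up data. It shares no physics with the nanowire network; the point is only that each sample updates the model once and is then thrown away,

```python
import numpy as np

rng = np.random.default_rng(1)
w, b, lr = np.zeros(64), 0.0, 0.05   # tiny 8x8 'images', two classes

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))   # logistic score in [0, 1]

correct = 0
for t in range(2000):
    label = t % 2                                   # alternate the classes
    x = rng.normal(loc=label, scale=1.0, size=64)   # stand-in streamed sample
    p = predict(x)
    correct += int((p > 0.5) == label)
    grad = p - label                 # online (stochastic gradient) update:
    w -= lr * grad * x               # the sample is used once, then discarded,
    b -= lr * grad                   # so nothing accumulates in memory
print(f"running accuracy: {correct / 2000:.1%}")
```

A batch learner would instead have to store all 2,000 samples before training; the press release's point is that the nanowire network gets the streaming behaviour directly from its physics rather than from code.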

Here’s a link to and a citation for the paper,

Online dynamical learning and sequence memory with neuromorphic nanowire networks by Ruomin Zhu, Sam Lilak, Alon Loeffler, Joseph Lizier, Adam Stieg, James Gimzewski & Zdenka Kuncic. Nature Communications volume 14, Article number: 6697 (2023) DOI: https://doi.org/10.1038/s41467-023-42470-5 Published: 01 November 2023

This paper is open access.

You’ll notice a number of this team’s members are also listed in the citation in my June 21, 2023 posting “Learning and remembering like a human brain: nanowire networks” and you’ll see some familiar names in the citation in my June 17, 2020 posting “A tangle of silver nanowires for brain-like action.”

Adaptive neural connectivity with an event-based architecture using photonic processors

At first glance it looked like a set of matches. With a little more dimension, it could also have been a set of pencils, but no,

Caption: The chip contains almost 8,400 functioning artificial neurons from waveguide-coupled phase-change material. The researchers trained this neural network to distinguish between German and English texts on the basis of vowel frequency. Credit: Jonas Schütte / Pernice Group Courtesy: University of Münster

An October 23, 2023 news item on Nanowerk introduces research into a new approach to optical neural networks,

A team of researchers headed by physicists Prof. Wolfram Pernice and Prof. Martin Salinga and computer specialist Prof. Benjamin Risse, all from the University of Münster, has developed a so-called event-based architecture, using photonic processors. In a similar way to the brain, this makes possible the continuous adaptation of the connections within the neural network.

Key Takeaways

Researchers have created a new computing architecture that mimics biological neural networks, using photonic processors for data transportation and processing.

The new system enables continuous adaptation of connections within the neural network, crucial for learning processes. This is known as both synaptic and structural plasticity.

Unlike traditional studies, the connections or synapses in this photonic neural network are not hardware-based but are coded based on optical pulse properties, allowing for a single chip to hold several thousand neurons.

Light-based processors in this system offer a much higher bandwidth and lower energy consumption compared to traditional electronic processors.

The researchers successfully tested the system using an evolutionary algorithm to differentiate between German and English texts based on vowel count, highlighting its potential for rapid and energy-efficient AI applications.

The Research

Modern computer models – for example for complex, potent AI applications – push traditional digital computer processes to their limits.

The person who edited the original press release, which is included in the news item above, is not credited.

Here’s the unedited original October 23, 2023 University of Münster press release (also on EurekAlert),

Modern computer models – for example for complex, potent AI applications – push traditional digital computer processes to their limits. New types of computing architecture, which emulate the working principles of biological neural networks, hold the promise of faster, more energy-efficient data processing. A team of researchers has now developed a so-called event-based architecture, using photonic processors with which data are transported and processed by means of light. In a similar way to the brain, this makes possible the continuous adaptation of the connections within the neural network. These changeable connections are the basis for learning processes. For the purposes of the study, a team working at Collaborative Research Centre 1459 (“Intelligent Matter”) – headed by physicists Prof. Wolfram Pernice and Prof. Martin Salinga and computer specialist Prof. Benjamin Risse, all from the University of Münster – joined forces with researchers from the Universities of Exeter and Oxford in the UK. The study has been published in the journal “Science Advances”.

What is needed for a neural network in machine learning are artificial neurons which are activated by external excitatory signals, and which have connections to other neurons. The connections between these artificial neurons are called synapses – just like the biological original. For their study, the team of researchers in Münster used a network consisting of almost 8,400 optical neurons made of waveguide-coupled phase-change material, and the team showed that the connection between any two of these neurons can indeed become stronger or weaker (synaptic plasticity), and that new connections can be formed, or existing ones eliminated (structural plasticity). In contrast to other similar studies, the synapses were not hardware elements but were coded as a result of the properties of the optical pulses – in other words, as a result of the respective wavelength and of the intensity of the optical pulse. This made it possible to integrate several thousand neurons on one single chip and connect them optically.

In comparison with traditional electronic processors, light-based processors offer a significantly higher bandwidth, making it possible to carry out complex computing tasks, and with lower energy consumption. This new approach constitutes basic research. “Our aim is to develop an optical computing architecture which in the long term will make it possible to compute AI applications in a rapid and energy-efficient way,” says Frank Brückerhoff-Plückelmann, one of the lead authors.

Methodology: The non-volatile phase-change material can be switched between an amorphous structure and a crystalline structure with a highly ordered atomic lattice. This feature allows permanent data storage even without an energy supply. The researchers tested the performance of the neural network by using an evolutionary algorithm to train it to distinguish between German and English texts. The recognition parameter they used was the number of vowels in the text.
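The vowel-counting task is simple enough to mock up in a few lines. Below is a toy software analogue (the snippets and parameters are mine; the real system encodes synapses in optical pulse wavelength and intensity and evolves the network on-chip): an evolutionary loop that mutates a single decision threshold on vowel fraction and keeps any improvement,

```python
import random
random.seed(42)

VOWELS = set("aeiou" + "äöü")   # count German umlauts as vowels too

def vowel_fraction(text):
    letters = [c for c in text.lower() if c.isalpha()]
    return sum(c in VOWELS for c in letters) / len(letters)

# Toy labelled snippets (illustrative only; the study used real texts).
samples = [("the quick brown fox jumps over the lazy dog", "en"),
           ("this chip computes with pulses of light", "en"),
           ("der schnelle braune fuchs springt gern", "de"),
           ("dieser chip rechnet mit lichtpulsen", "de")]

def fitness(threshold, sign):
    # Score the rule: predict 'en' if sign*vowel_fraction > sign*threshold.
    return sum((sign * vowel_fraction(t) > sign * threshold) == (lang == "en")
               for t, lang in samples)

# Minimal (1+1) evolutionary loop: mutate, keep the candidate if no worse.
best = (0.40, 1)
for _ in range(200):
    cand = (best[0] + random.gauss(0, 0.02), random.choice([1, -1]))
    if fitness(*cand) >= fitness(*best):
        best = cand
print("evolved rule:", best, "fitness:", fitness(*best), "/", len(samples))
```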

The researchers received financial support from the German Research Foundation (DFG), the European Commission and “UK Research and Innovation”.

Here’s a link to and a citation for the paper,

Event-driven adaptive optical neural network by Frank Brückerhoff-Plückelmann, Ivonne Bente, Marlon Becker, Niklas Vollmar, Nikolaos Farmakidis, Emma Lomonte, Francesco Lenzini, C. David Wright, Harish Bhaskaran, Martin Salinga, Benjamin Risse, and Wolfram H. P. Pernice. Science Advances 20 Oct 2023 Vol 9, Issue 42 DOI: 10.1126/sciadv.adi9127

This paper is open access.

Living technology possibilities

Before launching into the possibilities, here are two descriptions of ‘living technology’ from the European Centre for Living Technology’s (ECLT) homepage,

Goals

Promote, carry out and coordinate research activities and the diffusion of scientific results in the field of living technology. The scientific areas for living technology are the nano-bio-technologies, self-organizing and evolving information and production technologies, and adaptive complex systems.

History

Founded in 2004 the European Centre for Living Technology is an international and interdisciplinary research centre established as an inter-university consortium, currently involving 18 European and extra-European institutional affiliates.

The Centre is devoted to the study of technologies that exhibit life-like properties including self-organization, adaptability and the capacity to evolve.

Despite the reference to “nano-bio-technologies,” this October 11, 2023 news item on ScienceDaily focuses on microscale living technology,

In a recent article in the high-profile journal “Advanced Materials,” researchers in Chemnitz show just how close and necessary the transition to sustainable living technology is, based on the morphogenesis of self-assembling microelectronic modules, strengthening the recent membership of Chemnitz University of Technology in the European Centre for Living Technology (ECLT) in Venice.

An October 11, 2023 Chemnitz University of Technology (Technische Universität Chemnitz; TU Chemnitz) press release (also on EurekAlert), which originated the news item, delves further into the topic, Note: Links have been removed,

It is now apparent that the mass-produced artefacts of technology in our increasingly densely populated world – whether electronic devices, cars, batteries, phones, household appliances, or industrial robots – are increasingly at odds with the sustainable bounded ecosystems achieved by living organisms based on cells over millions of years. Cells provide organisms with soft and sustainable environmental interactions with complete recycling of material components, except in a few notable cases like the creation of oxygen in the atmosphere, and of the fossil fuel reserves of oil and coal (as a result of missing biocatalysts). However, the fantastic information content of biological cells (gigabits of information in DNA alone) and the complexities of protein biochemistry for metabolism seem to place a cellular approach well beyond the current capabilities of technology, and prevent the development of intrinsically sustainable technology.

SMARTLETs: tiny shape-changing modules that collectively self-organize to larger more complex systems

A recent perspective review published in the very high impact journal Advanced Materials this month [October 2023] by researchers at the Research Center for Materials, Architectures and Integration of Nanomembranes (MAIN) of Chemnitz University of Technology, shows how a novel form of high-information-content Living Technology is now within reach, based on microrobotic electronic modules called SMARTLETs, which will soon be capable of self-assembling into complex artificial organisms. The research belongs to the new field of Microelectronic Morphogenesis, the creation of form under microelectronic control, and builds on work over the previous years at Chemnitz University of Technology to construct self-folding and self-locomoting thin film electronic modules, now carrying tiny silicon chiplets between the folds, for a massive increase in information processing capabilities. Sufficient information can now be stored in each module to encode not only complex functions but fabrication recipes (electronic genomes) for clean rooms to allow the modules to be copied and evolved like cells, but safely because of the gating of reproduction through human operated clean room facilities.

Electrical self-awareness during self-assembly

In addition, the chiplets can provide neuromorphic learning capabilities allowing them to improve performance during operation. A further key feature of the specific self-assembly of these modules, based on matching physical bar codes, is that electrical and fluidic connections can be achieved between modules. These can then be employed to make the electronic chiplets on board “aware” of the state of assembly and of potential errors, allowing them to direct repair, correct mis-assembly, induce disassembly and form collective functions spanning many modules. Such functions include extended communication (antennae), power harvesting and redistribution, remote sensing, material redistribution, etc.

So why is this technology vital for sustainability?

The complete digital fab description for modules, for which actually only a limited number of types are required even for complex organisms, allows their material content, responsible originator and environmentally relevant exposure all to be read out. Prof. Dagmar Nuissl-Gesmann from the Law Department at Chemnitz University of Technology observes that “this fine-grained documentation of responsibility intrinsic down to microscopic scales will be a game changer in allowing legal assignment of environmental and social responsibility for our technical artefacts”.

Furthermore, the self-locomotion and self-assembly-disassembly capabilities allows the modules to self-sort for recycling. Modules can be regained, reused, reconfigured, and redeployed in different artificial organisms. If they are damaged, then their limited and documented types facilitate efficient custom recycling of materials with established and optimized protocols for these sorted and now identical entities. These capabilities complement the other more obvious advantages in terms of design development and reuse in this novel reconfigurable media. As Prof. Marlen Arnold, an expert in Sustainability of the Faculty of Economics and Business Administration observes, “Even at high volumes of deployment use, these properties could provide this technology with a hitherto unprecedented level of sustainability which would set the bar for future technologies to share our planet safely with us.”

Contribution to European Living Technology

“This research is a first contribution of MAIN/Chemnitz University of Technology, as a new member of the European Centre for Living Technology ECLT, based in Venice,” says Prof. Oliver G. Schmidt, Scientific Director of the Research Center MAIN, and adds that “It’s fantastic to see that our deep collaboration with ECLT is paying off so quickly with immediate transdisciplinary benefit for several scientific communities.” “Theoretical research at the ECLT has been urgently in need of novel technology systems able to implement the core properties of living systems,” comments Prof. John McCaskill, coauthor of the paper, and a founding director of the ECLT in 2004.

Here’s a link to and a citation for the researchers’ perspective paper,

Microelectronic Morphogenesis: Smart Materials with Electronics Assembling into Artificial Organisms by John S. McCaskill, Daniil Karnaushenko, Minshen Zhu, Oliver G. Schmidt. Advanced Materials DOI: https://doi.org/10.1002/adma.202306344 First published: 09 October 2023

This paper is open access.

A formal theory for neuromorphic (brainlike) computing hardware needed

This is one of my older pieces, as the information dates back to October 2023, but neuromorphic computing is one of my key interests and I’m particularly interested to see the upsurge in the discussion of hardware. Here goes. From an October 17, 2023 news item on Nanowerk,

There is an intense, worldwide search for novel materials to build computer microchips with that are not based on classic transistors but on much more energy-saving, brain-like components. However, whereas the theoretical basis for classic transistor-based digital computers is solid, there are no real theoretical guidelines for the creation of brain-like computers.

Such a theory would be absolutely necessary to put the efforts that go into engineering new kinds of microchips on solid ground, argues Herbert Jaeger, Professor of Computing in Cognitive Materials at the University of Groningen [Netherlands].

Key Takeaways
Scientists worldwide are searching for new materials to build energy-saving, brain-like computer microchips as classic transistor miniaturization reaches its physical limit.

Theoretical guidelines for brain-like computers are lacking, making it crucial for advancements in the field.

The brain’s versatility and robustness serve as an inspiration, despite limited knowledge about its exact workings.

A recent paper suggests that a theory for non-digital computers should focus on continuous, analogue signals and consider the characteristics of new materials.

Bridging gaps between diverse scientific fields is vital for developing a foundational theory for neuromorphic computing.

An October 17, 2023 University of Groningen press release (also on EurekAlert), which originated the news item, provides more context for this proposal,

Computers have, so far, relied on stable switches that can be off or on, usually transistors. These digital computers are logical machines and their programming is also based on logical reasoning. For decades, computers have become more powerful by further miniaturization of the transistors, but this process is now approaching a physical limit. That is why scientists are working to find new materials to make more versatile switches, which could use more values than just the digital 0 or 1.

Dangerous pitfall

Jaeger is part of the Groningen Cognitive Systems and Materials Center (CogniGron), which aims to develop neuromorphic (i.e. brain-like) computers. CogniGron is bringing together scientists who have very different approaches: experimental materials scientists and theoretical modelers from fields as diverse as mathematics, computer science, and AI. Working closely with materials scientists has given Jaeger a good idea of the challenges that they face when trying to come up with new computational materials, while it has also made him aware of a dangerous pitfall: there is no established theory for the use of non-digital physical effects in computing systems.

Our brain is not a logical system. We can reason logically, but that is only a small part of what our brain does. Most of the time, it must work out how to bring a hand to a teacup or wave to a colleague on passing them in a corridor. ‘A lot of the information-processing that our brain does is this non-logical stuff, which is continuous and dynamic. It is difficult to formalize this in a digital computer,’ explains Jaeger. Furthermore, our brains keep working despite fluctuations in blood pressure, external temperature, hormone balance, and so on. How is it possible to create a computer that is as versatile and robust? Jaeger is optimistic: ‘The simple answer is: the brain is proof of principle that it can be done.’

Neurons

The brain is, therefore, an inspiration for materials scientists. Jaeger: ‘They might produce something that is made from a few hundred atoms and that will oscillate, or something that will show bursts of activity. And they will say: “That looks like how neurons work, so let’s build a neural network”.’ But they are missing a vital bit of knowledge here. ‘Even neuroscientists don’t know exactly how the brain works. This is where the lack of a theory for neuromorphic computers is problematic. Yet, the field doesn’t appear to see this.’

In a paper published in Nature Communications on 16 August, Jaeger and his colleagues Beatriz Noheda (scientific director of CogniGron) and Wilfred G. van der Wiel (University of Twente) present a sketch of what a theory for non-digital computers might look like. They propose that instead of stable 0/1 switches, the theory should work with continuous, analogue signals. It should also accommodate the wealth of non-standard nanoscale physical effects that the materials scientists are investigating.

Sub-theories

Something else that Jaeger has learned from listening to materials scientists is that devices from these new materials are difficult to construct. Jaeger: ‘If you make a hundred of them, they will not all be identical.’ This is actually very brain-like, as our neurons are not all exactly identical either. Another possible issue is that the devices are often brittle and temperature-sensitive, continues Jaeger. ‘Any theory for neuromorphic computing should take such characteristics into account.’

Importantly, a theory underpinning neuromorphic computing will not be a single theory but will be constructed from many sub-theories (see image below). Jaeger: ‘This is in fact how digital computer theory works as well, it is a layered system of connected sub-theories.’ Creating such a theoretical description of neuromorphic computers will require close collaboration of experimental materials scientists and formal theoretical modellers. Jaeger: ‘Computer scientists must be aware of the physics of all these new materials [emphasis mine] and materials scientists should be aware of the fundamental concepts in computing.’

Blind spots

Bridging this divide between materials science, neuroscience, computing science, and engineering is exactly why CogniGron was founded at the University of Groningen: it brings these different groups together. ‘We all have our blind spots,’ concludes Jaeger. ‘And the biggest gap in our knowledge is a foundational theory for neuromorphic computing. Our paper is a first attempt at pointing out how such a theory could be constructed and how we can create a common language.’

Here’s a link to and a citation for the paper,

Toward a formal theory for computing machines made out of whatever physics offers by Herbert Jaeger, Beatriz Noheda & Wilfred G. van der Wiel. Nature Communications volume 14, Article number: 4911 (2023) DOI: https://doi.org/10.1038/s41467-023-40533-1 Published: 16 August 2023

This paper is open access and there’s a 76 pp. version, “Toward a formal theory for computing machines made out of whatever physics offers: extended version” (emphasis mine) available on arXiv.

Caption: A general theory of physical computing systems would comprise existing theories as special cases. Figure taken from an extended version of the Nature Comm paper on arXiv. Credit: Jaeger et al. / University of Groningen

With regard to new materials for neuromorphic computing, my January 4, 2024 posting highlights a proposed quantum material for this purpose.

A hardware (neuromorphic and quantum) proposal for handling increased AI workload

It’s been a while since I’ve featured anything from Purdue University (Indiana, US). From a November 7, 2023 news item on Nanowerk, Note: Links have been removed,

Technology is edging closer and closer to the super-speed world of computing with artificial intelligence. But is the world equipped with the proper hardware to be able to handle the workload of new AI technological breakthroughs?

Key Takeaways
Current AI technologies are strained by the limitations of silicon-based computing hardware, necessitating new solutions.

Research led by Erica Carlson [Purdue University] suggests that neuromorphic [brainlike] architectures, which replicate the brain’s neurons and synapses, could revolutionize computing efficiency and power.

Vanadium oxides have been identified as a promising material for creating artificial neurons and synapses, crucial for neuromorphic computing.

Innovative non-volatile memory, observed in vanadium oxides, could be the key to more energy-efficient and capable AI hardware.

Future research will explore how to optimize the synaptic behavior of neuromorphic materials by controlling their memory properties.

The colored landscape above shows a transition temperature map of VO2 (pink surface) as measured by optical microscopy. This reveals the unique way that this neuromorphic quantum material [emphasis mine] stores memory like a synapse. Image credit: Erica Carlson, Alexandre Zimmers, and Adobe Stock

An October 13, 2023 Purdue University news release (also on EurekAlert but published November 6, 2023) by Cheryl Pierce, which originated the news item, provides more detail about the work, Note: A link has been removed,

“The brain-inspired codes of the AI revolution are largely being run on conventional silicon computer architectures which were not designed for it,” explains Erica Carlson, 150th Anniversary Professor of Physics and Astronomy at Purdue University.

Physicists from Purdue University, the University of California San Diego (UCSD) and École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris, France, believe they may have discovered a way to rework the hardware by mimicking the synapses of the human brain. They published their findings, “Spatially Distributed Ramp Reversal Memory in VO2,” in Advanced Electronic Materials, which is featured on the back cover of the October 2023 edition.

New paradigms in hardware will be necessary to handle the complexity of tomorrow’s computational advances. According to Carlson, lead theoretical scientist of this research, “neuromorphic architectures hold promise for lower energy consumption processors, enhanced computation, fundamentally different computational modes, native learning and enhanced pattern recognition.”

Neuromorphic architecture basically boils down to computer chips mimicking brain behavior. Neurons are cells in the brain that transmit information. Neurons have small gaps at their ends, called synapses, that allow signals to pass from one neuron to the next. In biological brains, these synapses encode memory. This team of scientists concludes that vanadium oxides show tremendous promise for neuromorphic computing because they can be used to make both artificial neurons and synapses.

“The dissonance between hardware and software is the origin of the enormously high energy cost of training, for example, large language models like ChatGPT,” explains Carlson. “By contrast, neuromorphic architectures hold promise for lower energy consumption by mimicking the basic components of a brain: neurons and synapses. Whereas silicon is good at memory storage, the material does not easily lend itself to neuron-like behavior. Ultimately, to provide efficient, feasible neuromorphic hardware solutions requires research into materials with radically different behavior from silicon – ones that can naturally mimic synapses and neurons. Unfortunately, the competing design needs of artificial synapses and neurons mean that most materials that make good synaptors fail as neuristors, and vice versa. Only a handful of materials, most of them quantum materials, have the demonstrated ability to do both.”

The team relied on a recently discovered type of non-volatile memory which is driven by repeated partial temperature cycling through the insulator-to-metal transition. This memory was discovered in vanadium oxides.

Alexandre Zimmers, lead experimental scientist from Sorbonne University and École Supérieure de Physique et de Chimie Industrielles, Paris, explains, “Only a few quantum materials are good candidates for future neuromorphic devices, i.e., mimicking artificial synapses and neurons. For the first time, in one of them, vanadium dioxide, we can see optically what is changing in the material as it operates as an artificial synapse. We find that memory accumulates throughout the entirety of the sample, opening new opportunities on how and where to control this property.”

“The microscopic videos show that, surprisingly, the repeated advance and retreat of metal and insulator domains causes memory to be accumulated throughout the entirety of the sample, rather than only at the boundaries of domains,” explains Carlson. “The memory appears as shifts in the local temperature at which the material transitions from insulator to metal upon heating, or from metal to insulator upon cooling. We propose that these changes in the local transition temperature accumulate due to the preferential diffusion of point defects into the metallic domains that are interwoven through the insulator as the material is cycled partway through the transition.”
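Here's my attempt at a cartoon of that mechanism, emphatically not the authors' model, just a toy showing how repeated partial cycling can leave a memory that is distributed across the whole sample (VO2's insulator-to-metal transition sits near 68 °C; everything else below is invented),

```python
import numpy as np

rng = np.random.default_rng(2)
Tc = rng.normal(68.0, 1.0, size=200)   # local transition temperatures (deg C)

def partial_cycle(Tc, T_turn=68.0, shift=0.05):
    """Ramp the temperature up to T_turn (part-way through the transition)
    and back down. Sites that went metallic get their local Tc nudged,
    a stand-in for defects preferentially diffusing into metallic domains."""
    metallic = Tc < T_turn
    Tc[metallic] += shift * rng.random(metallic.sum())
    return Tc

start = Tc.mean()
for _ in range(50):                    # fifty partial temperature cycles
    Tc = partial_cycle(Tc)
print(f"mean local Tc: {start:.2f} -> {Tc.mean():.2f} deg C")
# The accumulated drift of local transition temperatures across the whole
# array is the toy analogue of the distributed ramp reversal memory.
```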

Now that the team has established that vanadium oxides are possible candidates for future neuromorphic devices, they plan to move forward in the next phase of their research.

“Now that we have established a way to see inside this neuromorphic material, we can locally tweak and observe the effects of, for example, ion bombardment on the material’s surface,” explains Zimmers. “This could allow us to guide the electrical current through specific regions in the sample where the memory effect is at its maximum. This has the potential to significantly enhance the synaptic behavior of this neuromorphic material.”

There’s a very interesting 16 min. 52 sec. video embedded in the October 13, 2023 Purdue University news release. In an interview with Dr. Erica Carlson, who hosts The Quantum Age website and video interviews on its YouTube channel, Alexandre Zimmers takes you from an amusing phenomenon observed by 19th-century scientists, through the 20th century, when it became of more interest as the nanoscale phenomenon could be exploited (sonar, scanning tunneling microscopes, singing birthday cards, etc.), to the 21st century, where we are integrating this new information into a quantum* material for neuromorphic hardware.

Here’s a link to and a citation for the paper,

Spatially Distributed Ramp Reversal Memory in VO2 by Sayan Basak, Yuxin Sun, Melissa Alzate Banguero, Pavel Salev, Ivan K. Schuller, Lionel Aigouy, Erica W. Carlson, Alexandre Zimmers. Advanced Electronic Materials, Volume 9, Issue 10, October 2023, 2300085. DOI: https://doi.org/10.1002/aelm.202300085 First published: 10 July 2023

This paper is open access.

There’s a lot of research into neuromorphic hardware; here’s a sampling of some of my most recent posts on the topic,

There’s more, just use ‘neuromorphic hardware’ for your search term.

*’meta’ changed to ‘quantum’ on January 8, 2024.

FrogHeart’s 2023 comes to an end as 2024 comes into view

My personal theme for this last year (2023) and for the coming year was and is: catching up. On the plus side, my 2023 backlog of postings to be published (roughly six months’ worth) was whittled down considerably. On the minus side, I start 2024 with a backlog of two to three months.

2023 on this blog had a lot in common with 2022 (see my December 31, 2022 posting), which may be due to what’s going on in the world of emerging science and technology or to my personal interests or possibly a bit of both. On to 2023 and a further blurring of boundaries:

Energy, computing and the environment

The argument against paper is that it uses up resources, it’s polluting, it’s affecting the environment, etc. Somehow, the fact that electricity, which underpins so much of our ‘smart’ society, does the same thing is left out of the discussion.

Neuromorphic (brainlike) computing and lower energy

Before launching into the stories about lowering energy usage, here’s my October 16, 2023 posting, “The cost of building ChatGPT,” which gives you some idea of the consequences of our insatiable desire for more computing and more ‘smart’ devices,

In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. [emphases mine]

“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, [emphasis mine] a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.

Why it matters: Microsoft’s five WDM [West Des Moines in Iowa] data centers — the “epicenter for advancing AI” — represent more than $5 billion in investments in the last 15 years.

Yes, but: They consumed as much as 11.5 million gallons of water a month for cooling, or about 6% of WDM’s total usage during peak summer usage during the last two years, according to information from West Des Moines Water Works.
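As a quick sanity check on that swimming-pool comparison (my own arithmetic, not the article’s; it assumes the standard minimum Olympic pool volume of 2,500 cubic metres, roughly 660,000 US gallons),

```python
# Back-of-envelope check: does 1.7 billion US gallons exceed
# 2,500 Olympic-sized swimming pools?
GALLONS_PER_M3 = 264.172   # US gallons per cubic metre
pool_m3 = 2_500            # minimum Olympic pool: 50 m x 25 m x 2 m deep
pool_gallons = pool_m3 * GALLONS_PER_M3       # ~660,000 gallons
print(f"{1.7e9 / pool_gallons:.0f} pools")    # ~2,574, so 'more than 2,500' holds
```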

The focus is AI but it doesn’t take long to realize that all computing has energy and environmental costs. I have more about Ren’s work and about water shortages in the “The cost of building ChatGPT” posting.

This next posting would usually be included with my other art/sci postings, but it touches on these energy issues. My October 13, 2023 posting about Toronto’s Art/Sci Salon events includes, in particular, the Streaming Carbon Footprint event (just scroll down to the appropriate subhead). For the interested, I also found this 2022 paper, “The Carbon Footprint of Streaming Media: Problems, Calculations, Solutions,” co-authored by one of the artist/researchers (Laura U. Marks, philosopher and scholar of new media and film at Simon Fraser University) who presented at the Toronto event.

I’m late to the party; Thomas Daigle posted a January 2, 2020 article about energy use and our appetite for computing and ‘smart’ devices for the Canadian Broadcasting Corporation’s online news,

For those of us binge-watching TV shows, installing new smartphone apps or sharing family photos on social media over the holidays, it may seem like an abstract predicament.

The gigabytes of data we’re using — although invisible — come at a significant cost to the environment. Some experts say it rivals that of the airline industry. 

And as more smart devices rely on data to operate (think internet-connected refrigerators or self-driving cars), their electricity demands are set to skyrocket.

“We are using an immense amount of energy to drive this data revolution,” said Jane Kearns, an environment and technology expert at MaRS Discovery District, an innovation hub in Toronto.

“It has real implications for our climate.”

Some good news

Researchers are working on ways to lower the energy and environmental costs; here’s a sampling of 2023 posts, with an emphasis on brainlike computing, that attest to those efforts,

If there’s an industry that can make neuromorphic computing and energy savings sexy, it’s the automotive industry,

On the energy front,

Most people are familiar with nuclear fission and some of its attendant issues. There is an alternative form of nuclear energy, fusion, which is considered ‘green,’ or at least greener. General Fusion is a local (Vancouver area) company focused on developing fusion energy, alongside competitors from all over the planet.

Part of what makes fusion energy attractive is that salt water or sea water can be used in its production and, according to that December posting, there are other applications for salt water power,

More encouraging developments in environmental science

Again, this is a selection. You’ll find a number of nano cellulose research projects and a couple of seaweed projects (seaweed research seems to be of increasing interest).

All by myself (neuromorphic engineering)

Neuromorphic computing is a subset of neuromorphic engineering and I stumbled across an article that outlines the similarities and differences. My ‘summary’ of the main points and a link to the original article can be found here,

Oops! I did it again. More AI panic

I included an overview of the various ‘recent’ panics (in my May 25, 2023 posting below) along with a few other posts about concerning developments, but it’s not all doom and gloom.

Governments have realized that regulation might be a good idea. The European Union has an AI Act, the UK held an AI Safety Summit in November 2023, the US has been discussing AI regulation in its various hearings, and there’s impending legislation in Canada (see professor and lawyer Michael Geist’s blog for more).

A long time coming, a nanomedicine comeuppance

Paolo Macchiarini is now infamous for his untested, dangerous approach to medicine. Like a lot of people, I was fooled too, as you can see in my August 2, 2011 posting, “Body parts nano style,”

In early July 2011, there were reports of a new kind of transplant involving a body part made of a biocomposite. Andemariam Teklesenbet Beyene underwent a trachea transplant that required an artificial windpipe crafted by UK experts then flown to Sweden where Beyene’s stem cells were used to coat the windpipe before being transplanted into his body.

It is an extraordinary story not least because Beyene, a patient in a Swedish hospital planning to return to Eritrea after his PhD studies in Iceland, illustrates the international cooperation that made the transplant possible.

The scaffolding material for the artificial windpipe was developed by Professor Alex Seifalian at the University College London in a landmark piece of nanotechnology-enabled tissue engineering. …

Five years later I stumbled across problems with Macchiarini’s work as outlined in my April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 1 of 2)” and my other April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 2 of 2)“.

This year, Gretchen Vogel (whose work was featured in my 2016 posts) has written a June 21, 2023 update about the Macchiarini affair for Science magazine, Note: Links have been removed,

Surgeon Paolo Macchiarini, who was once hailed as a pioneer of stem cell medicine, was found guilty of gross assault against three of his patients today and sentenced to 2 years and 6 months in prison by an appeals court in Stockholm. The ruling comes a year after a Swedish district court found Macchiarini guilty of bodily harm in two of the cases and gave him a suspended sentence. After both the prosecution and Macchiarini appealed that ruling, the Svea Court of Appeal heard the case in April and May. Today’s ruling from the five-judge panel is largely a win for the prosecution—it had asked for a 5-year sentence whereas Macchiarini’s lawyer urged the appeals court to acquit him of all charges.

Macchiarini performed experimental surgeries on the three patients in 2011 and 2012 while working at the renowned Karolinska Institute. He implanted synthetic windpipes seeded with stem cells from the patients’ own bone marrow, with the hope the cells would multiply over time and provide an enduring replacement. All three patients died when the implants failed. One patient died suddenly when the implant caused massive bleeding just 4 months after it was implanted; the two others survived for 2.5 and nearly 5 years, respectively, but suffered painful and debilitating complications before their deaths.

In the ruling released today, the appeals judges disagreed with the district court’s decision that the first two patients were treated under “emergency” conditions. Both patients could have survived for a significant length of time without the surgeries, they said. The third case was an “emergency,” the court ruled, but the treatment was still indefensible because by then Macchiarini was well aware of the problems with the technique. (One patient had already died and the other had suffered severe complications.)

A fictionalized TV series (part of the Dr. Death anthology series) based on Macchiarini’s deceptions and a Dr. Death documentary are being broadcast/streamed in the US during January 2024. These come on the heels of a November 2023 Macchiarini documentary also broadcast/streamed on US television.

Dr. Death (anthology), based on the previews I’ve seen, is heavily US-centric, which is to be expected since Adam Ciralsky is involved in the production. Ciralsky wrote an exposé about Macchiarini for Vanity Fair published in 2016 (also featured in my 2016 postings). From a December 20, 2023 article by Julie Miller for Vanity Fair, Note: A link has been removed,

Seven years ago [2016], world-renowned surgeon Paolo Macchiarini was the subject of an ongoing Vanity Fair investigation. He had seduced award-winning NBC producer Benita Alexander while she was making a special about him, proposed, and promised her a wedding officiated by Pope Francis and attended by political A-listers. It was only after her designer wedding gown was made that Alexander learned Macchiarini was still married to his wife, and seemingly had no association with the famous names on their guest list.

Vanity Fair contributor Adam Ciralsky was in the midst of reporting the story for this magazine in the fall of 2015 when he turned to Dr. Ronald Schouten, a Harvard psychiatry professor. Ciralsky sought expert insight into the kind of fabulist who would invent and engage in such an audacious lie.

“I laid out the story to him, and he said, ‘Anybody who does this in their private life engages in the same conduct in their professional life,’” recalls Ciralsky, in a phone call with Vanity Fair. “I think you ought to take a hard look at his CVs.”

That was the turning point in the story for Ciralsky, a former CIA lawyer who soon learned that Macchiarini was more dangerous as a surgeon than a suitor. …

Here’s a link to Ciralsky’s original article, which I described this way, from my April 19, 2016 posting (part 2 of the Macchiarini controversy),

For some bizarre frosting on this disturbing cake (see part 1 of the Macchiarini controversy and synthetic trachea transplants for the medical science aspects), a January 5, 2016 Vanity Fair article by Adam Ciralsky documents Macchiarini’s courtship of an NBC ([US] National Broadcasting Corporation) news producer who was preparing a documentary about him and his work.

[from Ciralsky’s article]

“Macchiarini, 57, is a magnet for superlatives. He is commonly referred to as “world-renowned” and a “super-surgeon.” He is credited with medical miracles, including the world’s first synthetic organ transplant, which involved fashioning a trachea, or windpipe, out of plastic and then coating it with a patient’s own stem cells. That feat, in 2011, appeared to solve two of medicine’s more intractable problems—organ rejection and the lack of donor organs—and brought with it major media exposure for Macchiarini and his employer, Stockholm’s Karolinska Institute, home of the Nobel Prize in Physiology or Medicine. Macchiarini was now planning another first: a synthetic-trachea transplant on a child, a two-year-old Korean-Canadian girl named Hannah Warren, who had spent her entire life in a Seoul hospital. … “

Other players in the Macchiarini story

Pierre Delaere, a trachea expert and professor of head and neck surgery at KU Leuven (a university in Belgium), was one of the first to draw attention to Macchiarini’s dangerous and unethical practices. To give you an idea of how difficult it was to get attention for the issue, there’s a September 1, 2017 article by John Rasko and Carl Power for the Guardian illustrating the problem. Here’s what they had to say about Delaere and other early critics of the work, Note: Links have been removed,

Delaere was one of the earliest and harshest critics of Macchiarini’s engineered airways. Reports of their success always seemed like “hot air” to him. He could see no real evidence that the windpipe scaffolds were becoming living, functioning airways – in which case, they were destined to fail. The only question was how long it would take – weeks, months or a few years.

Delaere’s damning criticisms appeared in major medical journals, including the Lancet, but weren’t taken seriously by Karolinska’s leadership. Nor did they impress the institute’s ethics council when Delaere lodged a formal complaint. [emphases mine]

Support for Macchiarini remained strong, even as his patients began to die. In part, this is because the field of windpipe repair is a niche area. Few people at Karolinska, especially among those in power, knew enough about it to appreciate Delaere’s claims. Also, in such a highly competitive environment, people are keen to show allegiance to their superiors and wary of criticising them. The official report into the matter dubbed this the “bandwagon effect”.

With Macchiarini’s exploits endorsed by management and breathlessly reported in the media, it was all too easy to jump on that bandwagon.

And difficult to jump off. In early 2014, four Karolinska doctors defied the reigning culture of silence [emphasis mine] by complaining about Macchiarini. In their view, he was grossly misrepresenting his results and the health of his patients. An independent investigator agreed. But the vice-chancellor of Karolinska Institute, Anders Hamsten, wasn’t bound by this judgement. He officially cleared Macchiarini of scientific misconduct, allowing merely that he’d sometimes acted “without due care”.

For their efforts, the whistleblowers were punished. [emphasis mine] When Macchiarini accused one of them, Karl-Henrik Grinnemo, of stealing his work in a grant application, Hamsten found him guilty. As Grinnemo recalls, it nearly destroyed his career: “I didn’t receive any new grants. No one wanted to collaborate with me. We were doing good research, but it didn’t matter … I thought I was going to lose my lab, my staff – everything.”

This went on for three years until, just recently [2017], Grinnemo was cleared of all wrongdoing.

It is fitting that Macchiarini’s career unravelled at the Karolinska Institute. As the home of the Nobel prize in physiology or medicine, one of its ambitions is to create scientific celebrities. Every year, it gives science a show-business makeover, picking out from the mass of medical researchers those individuals deserving of superstardom. The idea is that scientific progress is driven by the genius of a few.

It’s a problematic idea with unfortunate side effects. A genius is a revolutionary by definition, a risk-taker and a law-breaker. Wasn’t something of this idea behind the special treatment Karolinska gave Macchiarini? Surely, he got away with so much because he was considered an exception to the rules with more than a whiff of the Nobel about him. At any rate, some of his most powerful friends were themselves Nobel judges until, with his fall from grace, they fell too.

The September 1, 2017 article by Rasko and Power is worth the read if you have the interest and the time. And Delaere has written up a comprehensive analysis, which includes basic information about tracheas and more: “The Biggest Lie in Medical History” (2020, PDF, 164 pp., Creative Commons Licence).

I also want to mention Leonid Schneider, science journalist and molecular cell biologist, whose work on the Macchiarini scandal on his ‘For Better Science’ website was also featured in my 2016 pieces. Schneider’s site has a page titled ‘Macchiarini’s trachea transplant patients: the full list,’ started in 2017, which he continues to update with new information about the patients. The latest update was made on December 20, 2023.

Promising nanomedicine research but no promises and a caveat

Most of the research mentioned here is still in the laboratory. I don’t often come across work that has made its way to clinical trials since the focus of this blog is emerging science and technology,

*If you’re interested in the business of neurotechnology, the July 17, 2023 posting highlights a very good UNESCO report on the topic.

Funky music (sound and noise)

I have a couple of stories about using sound for wound healing, bioinspiration for soundproofing applications, detecting seismic activity, more data sonification, etc.

Same old, same old CRISPR

2023 was relatively quiet (no panics) where CRISPR developments are concerned but still quite active.

Art/Sci: a pretty active year

I didn’t realize how active the year was, art/sci-wise, including events and other projects, until I reviewed this year’s postings. This is a selection from 2023, but there’s a lot more on the blog; just use the search term “art/sci,” or “art/science,” or “sciart.”

While I often feature events and projects from these groups (e.g., June 2, 2023 posting, “Metacreation Lab’s greatest hits of Summer 2023“), it’s possible for me to miss a few. So, you can check out Toronto’s Art/Sci Salon’s website (strong focus on visual art) and Simon Fraser University’s Metacreation Lab for Creative Artificial Intelligence website (strong focus on music).

My selection of this year’s postings is more heavily weighted to the ‘writing’ end of things.

Boundaries: life/nonlife

Last year, I subtitled this section ‘Aliens on earth: machinic biology and/or biological machinery?’ Here’s this year’s selection,

Canada’s 2023 budget … military

2023 featured an unusual budget where military expenditures were going to be increased, something which could have implications for our science and technology research.

Then things changed as Murray Brewster’s November 21, 2023 article for the Canadian Broadcasting Corporation’s (CBC) news online website comments, Note: A link has been removed,

There was a revelatory moment on the weekend as Defence Minister Bill Blair attempted to bridge the gap between rhetoric and reality in the Liberal government’s spending plans for his department and the Canadian military.

Asked about an anticipated (and long overdue) update to the country’s defence policy (supposedly made urgent two years ago by Russia’s full-on invasion of Ukraine), Blair acknowledged that the reset is now being viewed through a fiscal lens.

“We said we’re going to bring forward a new defence policy update. We’ve been working through that,” Blair told CBC’s Rosemary Barton Live on Sunday.

“The current fiscal environment that the country faces itself does require (that) that defence policy update … recognize (the) fiscal challenges. And so it’ll be part of … our future budget processes.”

One policy goal of the existing defence plan, Strong, Secure and Engaged, was to require that the military be able to concurrently deliver “two sustained deployments of 500 [to] 1,500 personnel in two different theaters of operation, including one as a lead nation.”

In a footnote, the recent estimates said the Canadian military is “currently unable to conduct multiple operations concurrently per the requirements laid out in the 2017 Defence Policy. Readiness of CAF force elements has continued to decrease over the course of the last year, aggravated by decreasing number of personnel and issues with equipment and vehicles.”

Some analysts say they believe that even if the federal government hits its overall budget reduction targets, what has been taken away from defence — and what’s about to be taken away — won’t be coming back, the minister’s public assurances notwithstanding.

10 years: Graphene Flagship Project and Human Brain Project

“Graphene and Human Brain Project win biggest research award in history (& this is the 2000th post)” on January 28, 2013 was how I announced the results of a European Union (EU) competition that stretched out over several years and many stages as projects were evaluated and either fell by the wayside or were allowed onto the next stage. The two finalists received €1B each, to be paid out over ten years.

Future or not

As you can see, there was plenty of interesting stuff going on in 2023 but no watershed moments in the areas I follow. (Please do let me know in the Comments should you disagree with this or any other part of this posting.) Nanotechnology seems less and less an emerging science/technology in itself and more like a foundational element of our science and technology sectors. On that note, you may find my upcoming (in 2024) post about a report on the economic impact of the US National Nanotechnology Initiative (NNI) from 2002 to 2022 of interest.

Following on the commercialization theme, I have noticed an increase of interest in commercializing brain and brainlike engineering technologies, as well as more discussion about ethics.

Colonizing the brain?

UNESCO held events such as, this noted in my July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” and this noted in my July 7, 2023 posting “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” An August 21, 2023 posting, “Ethical nanobiotechnology” adds to the discussion.

Meanwhile, Australia has been producing some very interesting mind/robot research, my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story.” I have more of this kind of research (mind control or mind reading) from Australia to be published in early 2024. The Australians are not alone, there’s also this April 12, 2023 posting, “Mind-reading prosthetic limbs” from Germany.

My May 12, 2023 posting, “Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023” shows Canada is entering the discussion. Unfortunately, the Canadian Science Policy Centre (CSPC), which held the event, has not posted a video online even though it has a YouTube channel featuring its other events.

As for neuromorphic engineering, China has produced a roadmap for its research in this area, as noted in my March 20, 2023 posting, “A nontraditional artificial synaptic device and roadmap for Chinese research into neuromorphic devices.”

Quantum anybody?

I haven’t singled it out in this end-of-year posting, but there is a great deal of interest in quantum computing, both here in Canada and elsewhere. There is a 2023 report from the Council of Canadian Academies on the topic of quantum computing in Canada, which I hope to comment on soon.

Final words

I have a shout out for the Canadian Science Policy Centre, which celebrated its 15th anniversary in 2023. Congratulations!

For everyone, I wish peace on earth and all the best for you and yours in 2024!

Youthful Canadian inventors win awards

Two teenagers stand next to each other displaying their inventions. One holds a laptop, while the other holds a wireless headset.
Vinny Gu, left, and Anush Mutyala, right, hope to continue to work to improve their inventions. (Niza Lyapa Nondo/CBC)

This November 28, 2023 article by Philip Drost for the Canadian Broadcasting Corporation’s (CBC) The Current radio programme highlights two youthful inventors, Note: Links have been removed,

Anush Mutyala [emphasis mine] may only be in Grade 12, but he already has hopes that his innovations and inventions will rival that of Elon Musk.

“I always tell my friends something that would be funny is if I’m competing head-to-head with Elon Musk in the race to getting people [neural] implants,” Mutyala told Matt Galloway on The Current

Mutyala, a student at Chinguacousy Secondary School in Brampton, Ont., created a brain imaging system that he says opens the future for permanent wireless neural implants. 

For his work, he received an award from Youth Science Canada at the National Fair in 2023, which highlights young people pushing innovation. 

Mutyala wanted to create a way for neural implants to last longer. Implants can help people hear better, or move parts of the body they otherwise couldn’t, but neural implants in particular face issues with regard to power consumption, and traditionally must be replaced by surgery after their batteries die. That can be every five years. 

But Mutyala thinks his system, Enerspike, can change that. The algorithm he designed lowers the energy consumption needed for implants to process and translate brain signals into making a limb move.

“You would essentially never need to replace wireless implants again for the purpose of battery replacement,” said Mutyala. 

Mutyala was inspired by Stephen Hawking, who famously spoke with the use of a speech synthesizer.

“What if we used technology like this and we were able to restore his complete communication ability? He would have been able to communicate at a much faster rate and he would have had a much greater impact on society,” said Mutyala. 
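Drost’s article doesn’t describe how Enerspike actually works, so, purely as an illustration of the general idea, here’s a toy sketch of event-driven (spike-based) processing, the standard neuromorphic strategy for cutting the energy an implant spends decoding neural signals; all numbers and thresholds are invented,

```python
# Illustration only: compares always-on decoding of an electrode signal
# with event-driven decoding, which runs the decoder only when a 'spike'
# crossing a threshold occurs. Fewer operations means less energy.
import numpy as np

rng = np.random.default_rng(seed=7)
samples = rng.normal(0.0, 1.0, size=100_000)  # fake electrode readings
THRESHOLD = 2.5                               # only 'spikes' cross this

ops_dense = samples.size                      # decode every sample
ops_event = int((np.abs(samples) > THRESHOLD).sum())  # decode spike events only

print(f"dense decoding: {ops_dense} operations")
print(f"event decoding: {ops_event} operations")
print(f"~{ops_dense / ops_event:.0f}x fewer operations")
```

The point of the sketch: if the decoder runs only when a spike event occurs, the operation count (and, roughly, the energy) drops by whatever factor the signal is sparse.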

… Mutyala isn’t the only innovator. Vinny Gu [emphasis mine], a Grade 11 student at Markville Secondary School in Markham, Ont., also received an award for creating DermaScan, an online application that can look at a photo and predict whether the person photographed has skin cancer or not.

“There has [sic] been some attempts at this problem in the past. However, they usually result in very low accuracy. However, I incorporated a technology to help my model better detect the minor small details in the image in order for it to get a better prediction,” said Gu. 

He says it doesn’t replace visiting a dermatologist — but it can give people an option to do pre-screenings with ease, which can help them decide if they need to go see a dermatologist. He says his model is 90-per-cent accurate. 

He is currently testing Dermascan, and he hopes to one day make it available for free to anyone who needs it. 
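The article doesn’t name the technology Gu incorporated, but a common recipe for this kind of two-class, photo-based screening is transfer learning on a pretrained convolutional network. Here’s a minimal, hedged sketch along those lines (untrained, with a random tensor standing in for a photo; real use would need labelled data, training, and clinical validation),

```python
# Illustration only: a typical transfer-learning setup for benign vs
# suspicious skin-lesion screening, not Gu's actual DermaScan code.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes
model.eval()

image = torch.rand(1, 3, 224, 224)              # stand-in for a lesion photo
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)[0]
print(f"benign: {probs[0]:.2f}  suspicious: {probs[1]:.2f}")
```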

Drost’s November 28, 2023 article hosts an embedded audio file of the radio interview and more.

You can find out about Anush Mutyala and his work on his LinkedIn profile (in addition to being a high school student, since October 2023 he’s also a neuromorphics researcher at York University). If my link to his profile fails, search Mutyala’s name online and access his public page at the LinkedIn website. There’s something else: Mutyala has an eponymous website.

My online searches for more about Vinny (or Vincent) Gu were not successful.

You can find a bit more information about Mutyala’s Enerspike here and Gu’s DermaScan here. Youth Science Canada can be found here.

Not to forget, there’s grade nine student Arushi Nath and her work on planetary defence, which is being recognized in a number of ways. (See my November 17, 2023 posting, “Arushi Nath gives the inside story about the 2023 Natural Sciences and Engineering Research Council of Canada (NSERC) Awards,” and my November 23, 2023 posting, “Margot Lee Shetterly [Hidden Figures author] in Toronto, Canada and a little more STEM [science, technology, engineering, and mathematics] information.”) I must say November 2023 has been quite the banner month for youth science in Canada.