Category Archives: graphene

A graphene joke (of sorts): What did the electron ‘say’ to the phonon in the graphene sandwich?

Unfortunately, there isn’t a punch line but I appreciate the effort to inject a little lightness into the description of a fairly technical achievement, from a February 12, 2024 news item on Nanowerk, Note: A link has been removed,

Electrons carry electrical energy, while vibrational energy is carried by phonons. Understanding how they interact with each other in certain materials, like in a sandwich of two graphene layers, will have implications for future optoelectronic devices.

Key Takeaways

Twisted graphene layers exhibit unique electrical properties.

Electron-phonon interactions crucial for energy loss in graphene.

Discovery of a new physical process involving electron-phonon Umklapp scattering.

Potential implications for ultrafast optoelectronics and quantum applications.

A February 9, 2024 Eindhoven University of Technology (TU/e; Netherlands) press release, which originated the news item, is reproduced here in its entirety, Note: Links have been removed,

Electrons carry electrical energy, while vibrational energy is carried by phonons. Understanding how they interact with each other in certain materials, like in a sandwich of two graphene layers, will have implications for future optoelectronic devices. Recent work has revealed that graphene layers twisted relative to each other by a small ‘magic angle’ can act as a perfect insulator or superconductor. But the physics of the electron-phonon interactions is a mystery. As part of a worldwide collaboration, TU/e researcher Klaas-Jan Tielrooij has led a study on electron-phonon interactions in graphene layers. And they have made a startling discovery.

What did the electron say to the phonon between two layers of graphene?

This might sound like the start of a physics meme with a hilarious punchline to follow. But that’s not the case according to Klaas-Jan Tielrooij. He’s an associate professor at the Department of Applied Physics and Science Education at TU/e and the research lead of the new work published in Science Advances.

“We sought to understand how electrons and phonons ‘talk’ to each other within two twisted graphene layers,” says Tielrooij.

Electrons are the well-known charge and energy carriers associated with electricity, while a phonon is linked to the emergence of vibrations between atoms in an atomic crystal.

“Phonons aren’t particles like electrons though, they’re quasiparticles. Yet, their interaction with electrons in certain materials and how they affect energy loss in electrons has been a mystery for some time,” notes Tielrooij.

But why would it be interesting to learn more about electron-phonon interactions? “These interactions can have a major effect on the electronic and optoelectronic properties of devices, made from materials like graphene, which we are going to see more of in the future.”

Twistronics: Breakthrough of the Year 2018

Tielrooij and his collaborators, who are based around the world in Spain, Germany, Japan, and the US, decided to study electron-phonon interactions in a very particular case – within two layers of graphene where the layers are ever-so-slightly misaligned.

Graphene is a two-dimensional layer of carbon atoms arranged in a honeycomb lattice that has several impressive properties such as high electrical conductivity, high flexibility, and high thermal conductivity, and it is also nearly transparent.

Back in 2018, the Physics World Breakthrough of the Year award went to Pablo Jarillo-Herrero and colleagues at MIT [Massachusetts Institute of Technology] for their pioneering work on twistronics, where adjacent layers of graphene are rotated very slightly relative to each other to change the electronic properties of the graphene.

Twist and astound!

“Depending on how the layers of graphene are rotated and doped with electrons, contrasting outcomes are possible. For certain dopings, the layers act as an insulator, which prevents the movement of electrons. For other dopings, the material behaves as a superconductor – a material with zero resistance that allows the dissipation-less movement of electrons,” says Tielrooij.

Better known as twisted bilayer graphene, these outcomes occur at the so-called magic angle of misalignment, which is just over one degree of rotation. “The misalignment between the layers is tiny, but the possibility for a superconductor or an insulator is an astounding result.”
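For a sense of scale, the period of the moiré superlattice produced by a small twist can be estimated with the standard small-angle formula L ≈ a / (2 sin(θ/2)). The sketch below is a back-of-the-envelope calculation (not a number from the paper), using graphene’s lattice constant of 0.246 nm:

```python
import math

def moire_period(lattice_const_nm: float, twist_deg: float) -> float:
    """Approximate moiré superlattice period for a small twist angle."""
    theta = math.radians(twist_deg)
    return lattice_const_nm / (2 * math.sin(theta / 2))

a_graphene = 0.246   # graphene lattice constant, nm
magic_angle = 1.1    # degrees, approximate magic angle

print(f"Moiré period at {magic_angle} deg: {moire_period(a_graphene, magic_angle):.1f} nm")
```

At the roughly 1.1-degree magic angle this gives a superlattice period of about 13 nm – dozens of times larger than the atomic spacing, which is why such a tiny misalignment can rearrange the material’s electronic behaviour so dramatically.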

How electrons lose energy

For their study, Tielrooij and the team wanted to learn more about how electrons lose energy in magic-angle twisted bilayer graphene, or MATBG for short.

To achieve this, they used a material consisting of two sheets of monolayer graphene (each 0.3 nanometers thick), placed on top of each other, and misaligned relative to each other by about one degree.

Then using two optoelectronic measurement techniques, the researchers were able to probe the electron-phonon interactions in detail, and they made some staggering discoveries.

“We observed that the energy vanishes very quickly in the MATBG – it occurs on the picosecond timescale, which is one-millionth of one-millionth of a second!” says Tielrooij.

This energy loss is much faster than in a single layer of graphene, especially at ultracold temperatures (specifically, below -73 degrees Celsius). “At these temperatures, it’s very difficult for electrons to lose energy to phonons, yet it happens in the MATBG.”

Why electrons lose energy

So, why are the electrons losing the energy so quickly through interaction with the phonons? Well, it turns out the researchers have uncovered a whole new physical process.

“The strong electron-phonon interaction is a completely new physical process and involves so-called electron-phonon Umklapp scattering,” adds Hiroaki Ishizuka from Tokyo Institute of Technology in Japan, who developed the theoretical understanding of this process together with Leonid Levitov from Massachusetts Institute of Technology in the US.

Umklapp scattering between phonons is a process that often affects heat transfer in materials, because it enables relatively large amounts of momentum to be transferred between phonons.
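In momentum terms, the distinction can be written compactly. A normal scattering event conserves crystal momentum directly, while an Umklapp event is folded back into the first Brillouin zone by a reciprocal-lattice vector, which is what permits the large momentum transfer. This is the textbook statement of the process, not notation taken from the paper itself:

```latex
% Normal process: electron momentum k absorbs phonon momentum q
\mathbf{k}' = \mathbf{k} + \mathbf{q}
% Umklapp process: the sum leaves the first Brillouin zone and is
% folded back by a reciprocal-lattice vector G \neq 0
\mathbf{k}' = \mathbf{k} + \mathbf{q} + \mathbf{G}
```

In MATBG the moiré superlattice has a much smaller Brillouin zone than graphene itself, which is one reason such electron-phonon Umklapp events can become accessible.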

“We see the effects of phonon-phonon Umklapp scattering all the time as it affects the ability for (non-metallic) materials at room temperature to conduct heat. Just think of an insulating material on the handle of a pot for example,” says Ishizuka. “However, electron-phonon Umklapp scattering is rare. Here though we have observed for the first time how electrons and phonons interact via Umklapp scattering to dissipate electron energy.”

Challenges solved together

Tielrooij and collaborators may have completed most of the work while he was based in Spain at the Catalan Institute of Nanoscience and Nanotechnology (ICN2), but as Tielrooij notes: “The international collaboration proved pivotal to making this discovery.”

So, how did all the collaborators contribute to the research? Tielrooij: “First, we needed advanced fabrication techniques to make the MATBG samples. But we also needed a deep theoretical understanding of what’s happening in the samples. Added to that, ultrafast optoelectronic measurement setups were required to measure what’s happening in the samples too.”

Tielrooij and the team received the magic-angle twisted samples from Dmitri Efetov’s group at Ludwig-Maximilians-Universität in Munich, who were the first group in Europe able to make such samples and who also performed photomixing measurements, while theoretical work at MIT in the US and at Tokyo Institute of Technology in Japan proved crucial to the success of the research.

At ICN2, Tielrooij and his team members Jake Mehew and Alexander Block used cutting-edge equipment, particularly time-resolved photovoltage microscopy, to perform their measurements of electron-phonon dynamics in the samples.

The future

So, what does the future look like for these materials then? According to Tielrooij, don’t expect anything too soon.

“As the material has only been studied for a few years, we’re still some way from seeing magic-angle twisted bilayer graphene having an impact on society.”

But there is a great deal to be explored about energy loss in the material.

“Future discoveries could have implications for charge transport dynamics, which could have implications for future ultrafast optoelectronics devices,” says Tielrooij. “In particular, they would be very useful at low temperatures, so that makes the material suitable for space and quantum applications.”

The research from Tielrooij and the international team is a real breakthrough when it comes to how electrons and phonons interact with each other.

But we’ll have to wait a little longer to fully understand the consequences of what the electron said to the phonon in the graphene sandwich.

Illustration showing the control of energy relaxation with twist angle. Image: Authors

Here’s a link to and a citation for the paper,

Ultrafast Umklapp-assisted electron-phonon cooling in magic-angle twisted bilayer graphene by Jake Dudley Mehew, Rafael Luque Merino, Hiroaki Ishizuka, Alexander Block, Jaime Díez Mérida, Andrés Díez Carlón, Kenji Watanabe, Takashi Taniguchi, Leonid S. Levitov, Dmitri K. Efetov, and Klaas-Jan Tielrooij. Science Advances 9 Feb 2024 Vol 10, Issue 6 DOI: 10.1126/sciadv.adj1361

This paper is open access.

‘Frozen smoke’ sensors can detect toxic formaldehyde in homes and offices

I love the fact that ‘frozen smoke’ is another term for aerogel (which has multiple alternative terms) and the latest work on this interesting material is from the University of Cambridge (UK) according to a February 9, 2024 news item on ScienceDaily,

Researchers have developed a sensor made from ‘frozen smoke’ that uses artificial intelligence techniques to detect formaldehyde in real time at concentrations as low as eight parts per billion, far beyond the sensitivity of most indoor air quality sensors.

The researchers, from the University of Cambridge, developed sensors made from highly porous materials known as aerogels. By precisely engineering the shape of the holes in the aerogels, the sensors were able to detect the fingerprint of formaldehyde, a common indoor air pollutant, at room temperature.

The proof-of-concept sensors, which require minimal power, could be adapted to detect a wide range of hazardous gases, and could also be miniaturised for wearable and healthcare applications. The results are reported in the journal Science Advances.

A February 9, 2024 University of Cambridge press release (also on EurekAlert), which originated the news item, describes the problem and the proposed solution in more detail, Note: Links have been removed,

Volatile organic compounds (VOCs) are a major source of indoor air pollution, causing watery eyes, burning in the eyes and throat, and difficulty breathing at elevated levels. High concentrations can trigger attacks in people with asthma, and prolonged exposure may cause certain cancers.

Formaldehyde is a common VOC and is emitted by household items including pressed wood products (such as MDF), wallpapers and paints, and some synthetic fabrics. For the most part, the levels of formaldehyde emitted by these items are low, but levels can build up over time, especially in garages where paints and other formaldehyde-emitting products are more likely to be stored.

According to a 2019 report from the campaign group Clean Air Day, a fifth of households in the UK showed notable concentrations of formaldehyde, with 13% of residences surpassing the recommended limit set by the World Health Organization (WHO).

“VOCs such as formaldehyde can lead to serious health problems with prolonged exposure even at low concentrations, but current sensors don’t have the sensitivity or selectivity to distinguish between VOCs that have different impacts on health,” said Professor Tawfique Hasan from the Cambridge Graphene Centre, who led the research.

“We wanted to develop a sensor that is small and doesn’t use much power, but can selectively detect formaldehyde at low concentrations,” said Zhuo Chen, the paper’s first author.

The researchers based their sensors on aerogels: ultra-light materials sometimes referred to as ‘liquid smoke’, since they are more than 99% air by volume. The open structure of aerogels allows gases to easily move in and out. By precisely engineering the shape, or morphology, of the holes, the aerogels can act as highly effective sensors.

Working with colleagues at Warwick University, the Cambridge researchers optimised the composition and structure of the aerogels to increase their sensitivity to formaldehyde, making them into filaments about three times the width of a human hair. The researchers 3D printed lines of a paste made from graphene, a two-dimensional form of carbon, and then freeze-dried the graphene paste to form the holes in the final aerogel structure. The aerogels also incorporate tiny semiconductors known as quantum dots.

The sensors they developed were able to detect formaldehyde at concentrations as low as eight parts per billion, which is 0.4 percent of the level deemed safe in UK workplaces. The sensors also work at room temperature, consuming very low power.
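Those two figures pin down the implied workplace limit. A quick check (my arithmetic, not a number stated in the release):

```python
detection_limit_ppb = 8.0   # sensor detection limit from the press release
fraction_of_limit = 0.004   # "0.4 percent of the level deemed safe"

# Implied UK workplace exposure limit for formaldehyde:
workplace_limit_ppb = detection_limit_ppb / fraction_of_limit
print(workplace_limit_ppb)          # 2000.0 ppb
print(workplace_limit_ppb / 1000)   # i.e. 2 ppm
```

The result, 2 ppm, matches the formaldehyde workplace exposure limit commonly cited for the UK, so the sensor sits some 250 times below that threshold.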

“Traditional gas sensors need to be heated up, but because of the way we’ve engineered the materials, our sensors work incredibly well at room temperature, so they use between 10 and 100 times less power than other sensors,” said Chen.

To improve selectivity, the researchers then incorporated machine learning algorithms into the sensors. The algorithms were trained to detect the ‘fingerprint’ of different gases, so that the sensor was able to distinguish the fingerprint of formaldehyde from other VOCs.
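The release doesn’t say which algorithms were used, but the idea of matching a gas ‘fingerprint’ can be sketched with a toy nearest-centroid classifier over made-up sensor response vectors. All names and numbers below are illustrative assumptions, not data from the paper:

```python
import math

# Toy "fingerprints": each gas is a vector of responses from several
# aerogel sensing channels (illustrative numbers, not measured data).
training = {
    "formaldehyde": [(0.9, 0.2, 0.1), (0.8, 0.25, 0.15)],
    "toluene":      [(0.2, 0.8, 0.3), (0.25, 0.7, 0.35)],
}

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

centroids = {gas: centroid(vecs) for gas, vecs in training.items()}

def classify(sample):
    # Assign the sample to the gas with the nearest centroid (Euclidean).
    return min(centroids, key=lambda g: math.dist(sample, centroids[g]))

print(classify((0.85, 0.22, 0.12)))  # -> formaldehyde
```

A real system would learn from many noisy response curves rather than three-channel snapshots, but the principle – compare a new response pattern against learned per-gas signatures – is the same.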

“Existing VOC detectors are blunt instruments – you only get one number for the overall concentration in the air,” said Hasan. “By building a sensor that is able to detect specific VOCs at very low concentrations in real time, it can give home and business owners a more accurate picture of air quality and any potential health risks.”

The researchers say that the same technique could be used to develop sensors to detect other VOCs. In theory, a device the size of a standard household carbon monoxide detector could incorporate multiple different sensors within it, providing real-time information about a range of different hazardous gases. The team at Warwick are developing a low-cost multi-sensor platform that will incorporate these new aerogel materials and, coupled with AI algorithms, detect different VOCs.

“By using highly porous materials as the sensing element, we’re opening up whole new ways of detecting hazardous materials in our environment,” said Chen.

The research was supported in part by the Henry Royce Institute, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Tawfique Hasan is a Fellow of Churchill College, Cambridge.

Here’s a link to and a citation for the paper,

Real-time, noise and drift resilient formaldehyde sensing at room temperature with aerogel filaments by Zhuo Chen, Binghan Zhou, Mingfei Xiao, Tynee Bhowmick, Padmanathan Karthick Kannan, Luigi G. Occhipinti, Julian William Gardner, and Tawfique Hasan. Science Advances 9 Feb 2024 Vol 10, Issue 6 DOI: 10.1126/sciadv.adk6856

This paper is open access.

Butterfly mating inspires neuromorphic (brainlike) computing

Michael Berger writes about a multisensory approach to neuromorphic computing inspired by butterflies in his February 2, 2024 Nanowerk Spotlight article, Note: Links have been removed,

Artificial intelligence systems have historically struggled to integrate and interpret information from multiple senses the way animals intuitively do. Humans and other species rely on combining sight, sound, touch, taste and smell to better understand their surroundings and make decisions. However, the field of neuromorphic computing has largely focused on processing data from individual senses separately.

This unisensory approach stems in part from the lack of miniaturized hardware able to co-locate different sensing modules and enable in-sensor and near-sensor processing. Recent efforts have targeted fusing visual and tactile data. However, visuochemical integration, which merges visual and chemical information to emulate complex sensory processing such as that seen in nature—for instance, butterflies integrating visual signals with chemical cues for mating decisions—remains relatively unexplored. Smell can potentially alter visual perception, yet current AI leans heavily on visual inputs alone, missing a key aspect of biological cognition.

Now, researchers at Penn State University have developed bio-inspired hardware that embraces heterogeneous integration of nanomaterials to allow the co-location of chemical and visual sensors along with computing elements. This facilitates efficient visuochemical information processing and decision-making, taking cues from the courtship behaviors of a species of tropical butterfly.

In the paper published in Advanced Materials (“A Butterfly-Inspired Multisensory Neuromorphic Platform for Integration of Visual and Chemical Cues”), the researchers describe creating their visuochemical integration platform inspired by Heliconius butterflies. During mating, female butterflies rely on integrating visual signals like wing color from males along with chemical pheromones to select partners. Specialized neurons combine these visual and chemical cues to enable informed mate choice.

To emulate this capability, the team constructed hardware encompassing monolayer molybdenum disulfide (MoS2) memtransistors serving as visual capture and processing components. Meanwhile, graphene chemitransistors functioned as artificial olfactory receptors. Together, these nanomaterials provided the sensing, memory and computing elements necessary for visuochemical integration in a compact architecture.

While mating butterflies served as inspiration, the developed technology has much wider relevance. It represents a significant step toward overcoming the reliance of artificial intelligence on single data modalities. Enabling integration of multiple senses can greatly improve situational understanding and decision-making for autonomous robots, vehicles, monitoring devices and other systems interacting with complex environments.

The work also helps progress neuromorphic computing approaches seeking to emulate biological brains for next-generation ML acceleration, edge deployment and reduced power consumption. In nature, cross-modal learning underpins animals’ adaptable behavior and intelligence emerging from brains organizing sensory inputs into unified percepts. This research provides a blueprint for hardware co-locating sensors and processors to more closely replicate such capabilities.

It’s fascinating to me how many times butterflies inspire science,

Butterfly-inspired visuo-chemical integration. a) A simplified abstraction of visual and chemical stimuli from male butterflies and visuo-chemical integration pathway in female butterflies. b) Butterfly-inspired neuromorphic hardware comprising monolayer MoS2 memtransistor-based visual afferent neuron, graphene-based chemoreceptor neuron, and MoS2 memtransistor-based neuro-mimetic mating circuits. Courtesy: Wiley/Penn State University Researchers

Here’s a link to and a citation for the paper,

A Butterfly-Inspired Multisensory Neuromorphic Platform for Integration of Visual and Chemical Cues by Yikai Zheng, Subir Ghosh, Saptarshi Das. Advanced Materials First published: 09 December 2023

This paper is open access.

Brainlike transistor and human intelligence

This brainlike transistor (not a memristor) is important because it functions at room temperature as opposed to others, which require cryogenic temperatures.

A December 20, 2023 Northwestern University news release (received via email; also on EurekAlert) fills in the details,

  • Researchers develop transistor that simultaneously processes and stores information like the human brain
  • Transistor goes beyond categorization tasks to perform associative learning
  • Transistor identified similar patterns, even when given imperfect input
  • Previous similar devices could only operate at cryogenic temperatures; new transistor operates at room temperature, making it more practical

EVANSTON, Ill. — Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.

Designed by researchers at Northwestern University, Boston College and the Massachusetts Institute of Technology (MIT), the device simultaneously processes and stores information just like the human brain. In new experiments, the researchers demonstrated that the transistor goes beyond simple machine-learning tasks to categorize data and is capable of performing associative learning.

Although previous studies have leveraged similar strategies to develop brain-like computing devices, those transistors cannot function outside cryogenic temperatures. The new device, by contrast, is stable at room temperatures. It also operates at fast speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.

The study was published today (Dec. 20 [2023]) in the journal Nature.

“The brain has a fundamentally different architecture than a digital computer,” said Northwestern’s Mark C. Hersam, who co-led the research. “In a digital computer, data move back and forth between a microprocessor and memory, which consumes a lot of energy and creates a bottleneck when attempting to perform multiple tasks at the same time. On the other hand, in the brain, memory and information processing are co-located and fully integrated, resulting in orders of magnitude higher energy efficiency. Our synaptic transistor similarly achieves concurrent memory and information processing functionality to more faithfully mimic the brain.”

Hersam is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering. He also is chair of the department of materials science and engineering, director of the Materials Research Science and Engineering Center and member of the International Institute for Nanotechnology. Hersam co-led the research with Qiong Ma of Boston College and Pablo Jarillo-Herrero of MIT.

Recent advances in artificial intelligence (AI) have motivated researchers to develop computers that operate more like the human brain. Conventional, digital computing systems have separate processing and storage units, causing data-intensive tasks to devour large amounts of energy. With smart devices continuously collecting vast quantities of data, researchers are scrambling to uncover new ways to process it all without consuming an increasing amount of power. Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function. But memristors still suffer from energy-costly switching.

“For several decades, the paradigm in electronics has been to build everything out of transistors and use the same silicon architecture,” Hersam said. “Significant progress has been made by simply packing more and more transistors into integrated circuits. You cannot deny the success of that strategy, but it comes at the cost of high power consumption, especially in the current era of big data where digital computing is on track to overwhelm the grid. We have to rethink computing hardware, especially for AI and machine-learning tasks.”

To rethink this paradigm, Hersam and his team explored new advances in the physics of moiré patterns, a type of geometrical design that arises when two patterns are layered on top of one another. When two-dimensional materials are stacked, new properties emerge that do not exist in one layer alone. And when those layers are twisted to form a moiré pattern, unprecedented tunability of electronic properties becomes possible.

For the new device, the researchers combined two different types of atomically thin materials: bilayer graphene and hexagonal boron nitride. When stacked and purposefully twisted, the materials formed a moiré pattern. By rotating one layer relative to the other, the researchers could achieve different electronic properties in each graphene layer even though they are separated by only atomic-scale dimensions. With the right choice of twist, researchers harnessed moiré physics for neuromorphic functionality at room temperature.

“With twist as a new design parameter, the number of permutations is vast,” Hersam said. “Graphene and hexagonal boron nitride are very similar structurally but just different enough that you get exceptionally strong moiré effects.”

To test the transistor, Hersam and his team trained it to recognize similar — but not identical — patterns. Just earlier this month, Hersam introduced a new nanoelectronic device capable of analyzing and categorizing data in an energy-efficient manner, but his new synaptic transistor takes machine learning and AI one leap further.

“If AI is meant to mimic human thought, one of the lowest-level tasks would be to classify data, which is simply sorting into bins,” Hersam said. “Our goal is to advance AI technology in the direction of higher-level thinking. Real-world conditions are often more complicated than current AI algorithms can handle, so we tested our new devices under more complicated conditions to verify their advanced capabilities.”

First the researchers showed the device one pattern: 000 (three zeros in a row). Then, they asked the AI to identify similar patterns, such as 111 or 101. “If we trained it to detect 000 and then gave it 111 and 101, it knows 111 is more similar to 000 than 101,” Hersam explained. “000 and 111 are not exactly the same, but both are three digits in a row. Recognizing that similarity is a higher-level form of cognition known as associative learning.”
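It’s worth noting that a plain bit-by-bit (Hamming) comparison would actually rank 101 closer to 000 than 111 is, so the similarity Hersam describes is structural: both 000 and 111 are “three identical digits in a row.” A toy illustration of that idea – purely my analogy, not the device’s actual physics:

```python
def run_length_feature(pattern: str) -> int:
    """Length of the longest run of identical digits in the pattern."""
    best = cur = 1
    for a, b in zip(pattern, pattern[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

def similarity_to(trained: str, candidate: str) -> int:
    # Similar if the candidate shares the trained pattern's run structure
    # (0 = same structure; more negative = less similar).
    return -abs(run_length_feature(trained) - run_length_feature(candidate))

for p in ("111", "101"):
    print(p, similarity_to("000", p))  # 111 scores 0, 101 scores -2
```

Recognizing a shared abstract structure rather than matching digits one by one is what lifts the task from simple classification toward associative learning.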

In experiments, the new synaptic transistor successfully recognized similar patterns, displaying its associative memory. Even when the researchers threw curveballs — like giving it incomplete patterns — it still successfully demonstrated associative learning.

“Current AI can be easy to confuse, which can cause major problems in certain contexts,” Hersam said. “Imagine if you are using a self-driving vehicle, and the weather conditions deteriorate. The vehicle might not be able to interpret the more complicated sensor data as well as a human driver could. But even when we gave our transistor imperfect input, it could still identify the correct response.”

The study, “Moiré synaptic transistor with room-temperature neuromorphic functionality,” was primarily supported by the National Science Foundation.

Here’s a link to and a citation for the paper,

Moiré synaptic transistor with room-temperature neuromorphic functionality by Xiaodong Yan, Zhiren Zheng, Vinod K. Sangwan, Justin H. Qian, Xueqiao Wang, Stephanie E. Liu, Kenji Watanabe, Takashi Taniguchi, Su-Yang Xu, Pablo Jarillo-Herrero, Qiong Ma & Mark C. Hersam. Nature volume 624, pages 551–556 (2023). Published online: 20 December 2023; issue date: 21 December 2023.

This paper is behind a paywall.

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here’s some of the latest research. If by ‘non-invasive’ you mean that electrodes are not being planted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI), Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study, participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 
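The press release doesn’t detail how DeWave’s discrete units are produced, but a common way to turn a continuous signal into discrete tokens is vector quantization: each feature window is mapped to its nearest entry in a learned codebook. A minimal sketch, with an entirely hypothetical two-dimensional codebook:

```python
import math

# Hypothetical codebook: each entry is a prototype EEG feature vector.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def encode(window):
    """Map one EEG feature window to the index of its nearest codebook entry."""
    return min(range(len(codebook)), key=lambda i: math.dist(window, codebook[i]))

# A stream of feature windows becomes a sequence of discrete tokens,
# which a language model can then learn to map to words and sentences.
windows = [(0.1, 0.05), (0.9, 0.2), (0.2, 0.95)]
print([encode(w) for w in windows])  # -> [0, 1, 2]
```

In a real system the codebook itself would be learned from large quantities of EEG data, and the token sequence would feed a trained brain-to-text model.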

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word level segments without additional aids such as eye-tracking, which restrict the practical application of these systems. The new technology is able to be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
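BLEU-1 is essentially clipped unigram precision: what fraction of the predicted words also appear in the reference, counting each reference word at most as often as it occurs there. A minimal sketch (ignoring the brevity penalty the full metric includes), reusing the release’s ‘the man’ / ‘the author’ example:

```python
from collections import Counter

def bleu1(candidate: str, reference: str) -> float:
    """Clipped unigram (BLEU-1) precision; brevity penalty omitted."""
    cand = candidate.lower().split()
    ref = Counter(reference.lower().split())
    matched = sum(min(count, ref[word]) for word, count in Counter(cand).items())
    return matched / len(cand)

print(bleu1("the man reads a book", "the author reads a book"))  # 0.8
```

Note how the synonym substitution costs only one word out of five here: a decoder can score around 40% on BLEU-1 while still capturing much of a sentence’s gist.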

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.
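The "blurring" trick described above amounts to spatially smoothing the high-resolution fMRI data until it carries no more detail than an fNIRS headset could record. Here is a toy one-dimensional illustration; the moving-average filter, the window size, and the made-up voxel values are all assumptions for illustration — the researchers' actual preprocessing is not specified in the article:

```python
def spatial_blur(signal, radius):
    """Toy 1-D moving-average blur: each voxel becomes the mean of its
    neighborhood, discarding spatial detail finer than the window --
    a crude stand-in for degrading fMRI data to fNIRS-like resolution."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A sharp spatial feature survives only as a broad, shallow bump after blurring
sharp = [0, 0, 0, 10, 0, 0, 0]
print(spatial_blur(sharp, 2))  # → [0.0, 2.5, 2.0, 2.0, 2.0, 2.5, 0.0]
```

The point of the simulation is visible even in this cartoon: the blurred signal still marks roughly *where* the activity is, just less precisely — which is why the decoded result "doesn't get that much worse."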

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article), Note: Links have been removed,

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant of Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley's January 18, 2024 article, originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!

Communicating thoughts by means of brain implants?

The Australian military announced mind-controlled robots in Spring 2023 (see my June 13, 2023 posting) and, recently, scientists at Duke University (North Carolina, US) have announced research that may allow people who are unable to speak to communicate their thoughts, from a November 6, 2023 news item on ScienceDaily,

A speech prosthetic developed by a collaborative team of Duke neuroscientists, neurosurgeons, and engineers can translate a person’s brain signals into what they’re trying to say.

Appearing Nov. 6 [2023] in the journal Nature Communications, the new technology might one day help people unable to talk due to neurological disorders regain the ability to communicate through a brain-computer interface.

One more plastic brain for this blog,

Caption: A device no bigger than a postage stamp (dotted portion within white band) packs 128 microscopic sensors that can translate brain cell activity into what someone intends to say. Credit: Dan Vahaba/Duke University

A November 6, 2023 Duke University news release (also on EurekAlert), which originated the news item, provides more detail, Note: Links have been removed,

“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine and one of the lead researchers involved in the project. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”

Imagine listening to an audiobook at half-speed. That’s the best speech decoding rate currently available, which clocks in at about 78 words per minute. People, however, speak around 150 words per minute.

The lag between spoken and decoded speech rates is partially due to the relatively few brain activity sensors that can be fused onto a paper-thin piece of material that lies atop the surface of the brain. Fewer sensors provide less decipherable information to decode.

To improve on past limitations, Cogan teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in making high-density, ultra-thin, and flexible brain sensors.

For this project, Viventi and his team packed an impressive 256 microscopic brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. Neurons just a grain of sand apart can have wildly different activity patterns when coordinating speech, so it’s necessary to distinguish signals from neighboring brain cells to help make accurate predictions about intended speech.

After fabricating the new implant, Cogan and Viventi teamed up with several Duke University Hospital neurosurgeons, including Derek Southwell, M.D., Ph.D., Nandan Lad, M.D., Ph.D., and Allan Friedman, M.D., who helped recruit four patients to test the implants. The experiment required the researchers to place the device temporarily in patients who were undergoing brain surgery for some other condition, such as treating Parkinson's disease or having a tumor removed. Time was limited for Cogan and his team to test drive their device in the OR.

“I like to compare it to a NASCAR pit crew,” Cogan said. “We don’t want to add any extra time to the operating procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and the medical team said ‘Go!’ we rushed into action and the patient performed the task.”

The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, like “ava,” “kug,” or “vip,” and then spoke each one aloud. The device recorded activity from each patient’s speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and larynx.

Afterwards, Suseendrakumar Duraivel, the first author of the new report and a biomedical engineering graduate student at Duke, took the neural and speech data from the surgery suite and fed it into a machine learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.

For some sounds and participants, like /g/ in the word "gak," the decoder got it right 84% of the time when it was the first sound in a string of three that made up a given nonsense word.

Accuracy dropped, though, as the decoder parsed out sounds in the middle or at the end of a nonsense word. It also struggled if two sounds were similar, like /p/ and /b/.

Overall, the decoder was accurate 40% of the time. That may seem like a humble test score, but it was quite impressive given that similar brain-to-speech technical feats require hours' or days' worth of data to draw from. The speech decoding algorithm Duraivel used, however, was working with only 90 seconds of spoken data from the 15-minute test.
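The decoding step described in the release — predicting which sound was made from brain activity recordings — can be caricatured with a toy nearest-centroid classifier. To be clear, the feature vectors, phoneme labels, and classifier choice below are all illustrative assumptions; the press release does not specify the Duke team's actual model:

```python
# Hypothetical toy: classify which phoneme produced a neural feature vector
# by finding the nearest class centroid in feature space.
def train_centroids(samples):
    """samples: {phoneme: list of feature vectors} -> {phoneme: centroid}"""
    centroids = {}
    for phoneme, vectors in samples.items():
        dims = len(vectors[0])
        centroids[phoneme] = [sum(v[d] for v in vectors) / len(vectors)
                              for d in range(dims)]
    return centroids

def predict(centroids, vector):
    """Return the phoneme whose centroid is closest (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda p: dist2(centroids[p], vector))

# Made-up 2-D "neural features" for two phonemes
training = {
    "/g/": [[0.9, 0.1], [1.1, 0.2]],
    "/k/": [[0.1, 0.9], [0.2, 1.1]],
}
centroids = train_centroids(training)
print(predict(centroids, [1.0, 0.0]))  # → /g/
```

Even this cartoon hints at why the real decoder struggled with similar sounds like /p/ and /b/: when two phonemes produce nearly overlapping activity patterns, their centroids sit close together and small amounts of noise flip the prediction.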

Duraivel and his mentors are excited about making a cordless version of the device with a recent $2.4M grant from the National Institutes of Health.

“We’re now developing the same kind of recording devices, but without any wires,” Cogan said. “You’d be able to move around, and you wouldn’t have to be tied to an electrical outlet, which is really exciting.”

While their work is encouraging, there’s still a long way to go for Viventi and Cogan’s speech prosthetic to hit the shelves anytime soon.

“We’re at the point where it’s still much slower than natural speech,” Viventi said in a recent Duke Magazine piece about the technology, “but you can see the trajectory where you might be able to get there.”

Here’s a link to and a citation for the paper,

High-resolution neural recordings improve the accuracy of speech decoding by Suseendrakumar Duraivel, Shervin Rahimpour, Chia-Han Chiang, Michael Trumpis, Charles Wang, Katrina Barth, Stephen C. Harward, Shivanand P. Lad, Allan H. Friedman, Derek G. Southwell, Saurabh R. Sinha, Jonathan Viventi & Gregory B. Cogan. Nature Communications volume 14, Article number: 6938 (2023) DOI: Published: 06 November 2023

This paper is open access.

FrogHeart’s 2023 comes to an end as 2024 comes into view

My personal theme for this last year (2023) and for the coming year was and is: catching up. On the plus side, my 2023 backlog (roughly six months) to be published was whittled down considerably. On the minus side, I start 2024 with a backlog of two to three months.

2023 on this blog had a lot in common with 2022 (see my December 31, 2022 posting), which may be due to what’s going on in the world of emerging science and technology or to my personal interests or possibly a bit of both. On to 2023 and a further blurring of boundaries:

Energy, computing and the environment

The argument against paper is that it uses up resources, it's polluting, it's affecting the environment, etc. Somehow the fact that electricity, which underpins so much of our 'smart' society, does the same thing is left out of the discussion.

Neuromorphic (brainlike) computing and lower energy

Before launching into the stories about lowering energy usage, here’s an October 16, 2023 posting “The cost of building ChatGPT” that gives you some idea of the consequences of our insatiable desire for more computing and more ‘smart’ devices,

In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. [emphases mine]

“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, [emphasis mine] a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.

Why it matters: Microsoft’s five WDM [West Des Moines in Iowa] data centers — the “epicenter for advancing AI” — represent more than $5 billion in investments in the last 15 years.

Yes, but: They consumed as much as 11.5 million gallons of water a month for cooling, or about 6% of WDM’s total usage during peak summer usage during the last two years, according to information from West Des Moines Water Works.

The focus is AI but it doesn’t take long to realize that all computing has energy and environmental costs. I have more about Ren’s work and about water shortages in the “The cost of building ChatGPT” posting.

This next posting would usually be included with my other art/sci postings but it touches on the issues: my October 13, 2023 posting about Toronto's Art/Sci Salon events features, in particular, the Streaming Carbon Footprint event (just scroll down to the appropriate subhead). For the interested, I also found this 2022 paper, "The Carbon Footprint of Streaming Media: Problems, Calculations, Solutions," co-authored by one of the artist/researchers (Laura U. Marks, philosopher and scholar of new media and film at Simon Fraser University) who presented at the Toronto event.

I’m late to the party; Thomas Daigle posted a January 2, 2020 article about energy use and our appetite for computing and ‘smart’ devices for the Canadian Broadcasting Corporation’s online news,

For those of us binge-watching TV shows, installing new smartphone apps or sharing family photos on social media over the holidays, it may seem like an abstract predicament.

The gigabytes of data we’re using — although invisible — come at a significant cost to the environment. Some experts say it rivals that of the airline industry. 

And as more smart devices rely on data to operate (think internet-connected refrigerators or self-driving cars), their electricity demands are set to skyrocket.

“We are using an immense amount of energy to drive this data revolution,” said Jane Kearns, an environment and technology expert at MaRS Discovery District, an innovation hub in Toronto.

“It has real implications for our climate.”

Some good news

Researchers are working on ways to lower the energy and environmental costs; here’s a sampling of 2023 posts, with an emphasis on brainlike computing, that attest to it,

If there’s an industry that can make neuromorphic computing and energy savings sexy, it’s the automotive industry,

On the energy front,

Most people are familiar with nuclear fission and some of its attendant issues. There is an alternative nuclear energy, fusion, which is considered ‘green’ or greener anyway. General Fusion is a local (Vancouver area) company focused on developing fusion energy, alongside competitors from all over the planet.

Part of what makes fusion energy attractive is that salt water or sea water can be used in its production and, according to that December posting, there are other applications for salt water power,

More encouraging developments in environmental science

Again, this is a selection. You’ll find a number of nano cellulose research projects and a couple of seaweed projects (seaweed research seems to be of increasing interest).

All by myself (neuromorphic engineering)

Neuromorphic computing is a subset of neuromorphic engineering and I stumbled across an article that outlines the similarities and differences. My ‘summary’ of the main points and a link to the original article can be found here,

Oops! I did it again. More AI panic

I included an overview of the various ‘recent’ panics (in my May 25, 2023 posting below) along with a few other posts about concerning developments, but it’s not all doom and gloom.

Governments have realized that regulation might be a good idea. The European Union has an AI Act, the UK held an AI Safety Summit in November 2023, the US has been discussing AI regulation in its various hearings, and there’s impending legislation in Canada (see professor and lawyer Michael Geist’s blog for more).

A long time coming, a nanomedicine comeuppance

Paolo Macchiarini is now infamous for his untested, dangerous approach to medicine. Like a lot of people, I was fooled too as you can see in my August 2, 2011 posting, “Body parts nano style,”

In early July 2011, there were reports of a new kind of transplant involving a body part made of a biocomposite. Andemariam Teklesenbet Beyene underwent a trachea transplant that required an artificial windpipe crafted by UK experts then flown to Sweden where Beyene’s stem cells were used to coat the windpipe before being transplanted into his body.

It is an extraordinary story not least because Beyene, a patient in a Swedish hospital planning to return to Eritrea after his PhD studies in Iceland, illustrates the international cooperation that made the transplant possible.

The scaffolding material for the artificial windpipe was developed by Professor Alex Seifalian at the University College London in a landmark piece of nanotechnology-enabled tissue engineering. …

Five years later I stumbled across problems with Macchiarini’s work as outlined in my April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 1 of 2)” and my other April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 2 of 2)“.

This year, Gretchen Vogel (whose work was featured in my 2016 posts) has written a June 21, 2023 update about the Macchiarini affair for Science magazine, Note: Links have been removed,

Surgeon Paolo Macchiarini, who was once hailed as a pioneer of stem cell medicine, was found guilty of gross assault against three of his patients today and sentenced to 2 years and 6 months in prison by an appeals court in Stockholm. The ruling comes a year after a Swedish district court found Macchiarini guilty of bodily harm in two of the cases and gave him a suspended sentence. After both the prosecution and Macchiarini appealed that ruling, the Svea Court of Appeal heard the case in April and May. Today’s ruling from the five-judge panel is largely a win for the prosecution—it had asked for a 5-year sentence whereas Macchiarini’s lawyer urged the appeals court to acquit him of all charges.

Macchiarini performed experimental surgeries on the three patients in 2011 and 2012 while working at the renowned Karolinska Institute. He implanted synthetic windpipes seeded with stem cells from the patients’ own bone marrow, with the hope the cells would multiply over time and provide an enduring replacement. All three patients died when the implants failed. One patient died suddenly when the implant caused massive bleeding just 4 months after it was implanted; the two others survived for 2.5 and nearly 5 years, respectively, but suffered painful and debilitating complications before their deaths.

In the ruling released today, the appeals judges disagreed with the district court’s decision that the first two patients were treated under “emergency” conditions. Both patients could have survived for a significant length of time without the surgeries, they said. The third case was an “emergency,” the court ruled, but the treatment was still indefensible because by then Macchiarini was well aware of the problems with the technique. (One patient had already died and the other had suffered severe complications.)

A fictionalized TV series (part of the Dr. Death anthology series) based on Macchiarini’s deceptions and a Dr. Death documentary are being broadcast/streamed in the US during January 2024. These come on the heels of a November 2023 Macchiarini documentary also broadcast/streamed on US television.

Dr. Death (anthology), based on the previews I’ve seen, is heavily US-centric, which is to be expected since Adam Ciralsky is involved in the production. Ciralsky wrote an exposé about Macchiarini for Vanity Fair published in 2016 (also featured in my 2016 postings). From a December 20, 2023 article by Julie Miller for Vanity Fair, Note: A link has been removed,

Seven years ago [2016], world-renowned surgeon Paolo Macchiarini was the subject of an ongoing Vanity Fair investigation. He had seduced award-winning NBC producer Benita Alexander while she was making a special about him, proposed, and promised her a wedding officiated by Pope Francis and attended by political A-listers. It was only after her designer wedding gown was made that Alexander learned Macchiarini was still married to his wife, and seemingly had no association with the famous names on their guest list.

Vanity Fair contributor Adam Ciralsky was in the midst of reporting the story for this magazine in the fall of 2015 when he turned to Dr. Ronald Schouten, a Harvard psychiatry professor. Ciralsky sought expert insight into the kind of fabulist who would invent and engage in such an audacious lie.

“I laid out the story to him, and he said, ‘Anybody who does this in their private life engages in the same conduct in their professional life,’” recalls Ciralsky, in a phone call with Vanity Fair. “I think you ought to take a hard look at his CVs.”

That was the turning point in the story for Ciralsky, a former CIA lawyer who soon learned that Macchiarini was more dangerous as a surgeon than a suitor. …

Here’s a link to Ciralsky’s original article, which I described this way, from my April 19, 2016 posting (part 2 of the Macchiarini controversy),

For some bizarre frosting on this disturbing cake (see part 1 of the Macchiarini controversy and synthetic trachea transplants for the medical science aspects), a January 5, 2016 Vanity Fair article by Adam Ciralsky documents Macchiarini’s courtship of an NBC ([US] National Broadcasting Corporation) news producer who was preparing a documentary about him and his work.

[from Ciralsky’s article]

“Macchiarini, 57, is a magnet for superlatives. He is commonly referred to as “world-renowned” and a “super-surgeon.” He is credited with medical miracles, including the world’s first synthetic organ transplant, which involved fashioning a trachea, or windpipe, out of plastic and then coating it with a patient’s own stem cells. That feat, in 2011, appeared to solve two of medicine’s more intractable problems—organ rejection and the lack of donor organs—and brought with it major media exposure for Macchiarini and his employer, Stockholm’s Karolinska Institute, home of the Nobel Prize in Physiology or Medicine. Macchiarini was now planning another first: a synthetic-trachea transplant on a child, a two-year-old Korean-Canadian girl named Hannah Warren, who had spent her entire life in a Seoul hospital. … “

Other players in the Macchiarini story

Pierre Delaere, a trachea expert and professor of head and neck surgery at KU Leuven (a university in Belgium), was one of the first to draw attention to Macchiarini’s dangerous and unethical practices. A September 1, 2017 article by John Rasko and Carl Power for the Guardian gives you an idea of how difficult it was to get attention for this issue. Here’s what they had to say about Delaere and other early critics of the work, Note: Links have been removed,

Delaere was one of the earliest and harshest critics of Macchiarini’s engineered airways. Reports of their success always seemed like “hot air” to him. He could see no real evidence that the windpipe scaffolds were becoming living, functioning airways – in which case, they were destined to fail. The only question was how long it would take – weeks, months or a few years.

Delaere’s damning criticisms appeared in major medical journals, including the Lancet, but weren’t taken seriously by Karolinska’s leadership. Nor did they impress the institute’s ethics council when Delaere lodged a formal complaint. [emphases mine]

Support for Macchiarini remained strong, even as his patients began to die. In part, this is because the field of windpipe repair is a niche area. Few people at Karolinska, especially among those in power, knew enough about it to appreciate Delaere’s claims. Also, in such a highly competitive environment, people are keen to show allegiance to their superiors and wary of criticising them. The official report into the matter dubbed this the “bandwagon effect”.

With Macchiarini’s exploits endorsed by management and breathlessly reported in the media, it was all too easy to jump on that bandwagon.

And difficult to jump off. In early 2014, four Karolinska doctors defied the reigning culture of silence [emphasis mine] by complaining about Macchiarini. In their view, he was grossly misrepresenting his results and the health of his patients. An independent investigator agreed. But the vice-chancellor of Karolinska Institute, Anders Hamsten, wasn’t bound by this judgement. He officially cleared Macchiarini of scientific misconduct, allowing merely that he’d sometimes acted “without due care”.

For their efforts, the whistleblowers were punished. [emphasis mine] When Macchiarini accused one of them, Karl-Henrik Grinnemo, of stealing his work in a grant application, Hamsten found him guilty. As Grinnemo recalls, it nearly destroyed his career: “I didn’t receive any new grants. No one wanted to collaborate with me. We were doing good research, but it didn’t matter … I thought I was going to lose my lab, my staff – everything.”

This went on for three years until, just recently [2017], Grinnemo was cleared of all wrongdoing.

It is fitting that Macchiarini’s career unravelled at the Karolinska Institute. As the home of the Nobel prize in physiology or medicine, one of its ambitions is to create scientific celebrities. Every year, it gives science a show-business makeover, picking out from the mass of medical researchers those individuals deserving of superstardom. The idea is that scientific progress is driven by the genius of a few.

It’s a problematic idea with unfortunate side effects. A genius is a revolutionary by definition, a risk-taker and a law-breaker. Wasn’t something of this idea behind the special treatment Karolinska gave Macchiarini? Surely, he got away with so much because he was considered an exception to the rules with more than a whiff of the Nobel about him. At any rate, some of his most powerful friends were themselves Nobel judges until, with his fall from grace, they fell too.

The September 1, 2017 article by Rasko and Power is worth the read if you have the interest and the time. And Delaere has written up a comprehensive analysis, which includes basic information about tracheas and more (“The Biggest Lie in Medical History,” 2020, PDF, 164 pp., Creative Commons Licence).

I also want to mention Leonid Schneider, science journalist and molecular cell biologist, whose work on the Macchiarini scandal on his ‘For Better Science’ website was also featured in my 2016 pieces. Schneider’s site has a page titled ‘Macchiarini’s trachea transplant patients: the full list,’ started in 2017, which he continues to update with new information about the patients. The latest update was made on December 20, 2023.

Promising nanomedicine research but no promises and a caveat

Most of the research mentioned here is still in the laboratory. I don’t often come across work that has made its way to clinical trials since the focus of this blog is emerging science and technology,

*If you’re interested in the business of neurotechnology, the July 17, 2023 posting highlights a very good UNESCO report on the topic.

Funky music (sound and noise)

I have a couple of stories about using sound for wound healing, bioinspiration for soundproofing applications, detecting seismic activity, more data sonification, etc.

Same old, same old CRISPR

2023 was relatively quiet (no panics) where CRISPR developments are concerned but still quite active.

Art/Sci: a pretty active year

I didn’t realize how active the year was art/sciwise including events and other projects until I reviewed this year’s postings. This is a selection from 2023 but there’s a lot more on the blog, just use the search term, “art/sci,” or “art/science,” or “sciart.”

While I often feature events and projects from these groups (e.g., June 2, 2023 posting, “Metacreation Lab’s greatest hits of Summer 2023“), it’s possible for me to miss a few. So, you can check out Toronto’s Art/Sci Salon’s website (strong focus on visual art) and Simon Fraser University’s Metacreation Lab for Creative Artificial Intelligence website (strong focus on music).

My selection of this year’s postings is more heavily weighted to the ‘writing’ end of things.

Boundaries: life/nonlife

Last year I subtitled this section, ‘Aliens on earth: machinic biology and/or biological machinery?’ Here’s this year’s selection,

Canada’s 2023 budget … military

2023 featured an unusual budget where military expenditures were going to be increased, something which could have implications for our science and technology research.

Then things changed as Murray Brewster’s November 21, 2023 article for the Canadian Broadcasting Corporation’s (CBC) news online website comments, Note: A link has been removed,

There was a revelatory moment on the weekend as Defence Minister Bill Blair attempted to bridge the gap between rhetoric and reality in the Liberal government’s spending plans for his department and the Canadian military.

Asked about an anticipated (and long overdue) update to the country’s defence policy (supposedly made urgent two years ago by Russia’s full-on invasion of Ukraine), Blair acknowledged that the reset is now being viewed through a fiscal lens.

“We said we’re going to bring forward a new defence policy update. We’ve been working through that,” Blair told CBC’s Rosemary Barton Live on Sunday.

“The current fiscal environment that the country faces itself does require (that) that defence policy update … recognize (the) fiscal challenges. And so it’ll be part of … our future budget processes.”

One policy goal of the existing defence plan, Strong, Secure and Engaged, was to require that the military be able to concurrently deliver “two sustained deployments of 500 [to] 1,500 personnel in two different theaters of operation, including one as a lead nation.”

In a footnote, the recent estimates said the Canadian military is “currently unable to conduct multiple operations concurrently per the requirements laid out in the 2017 Defence Policy. Readiness of CAF force elements has continued to decrease over the course of the last year, aggravated by decreasing number of personnel and issues with equipment and vehicles.”

Some analysts say they believe that even if the federal government hits its overall budget reduction targets, what has been taken away from defence — and what’s about to be taken away — won’t be coming back, the minister’s public assurances notwithstanding.

10 years: Graphene Flagship Project and Human Brain Project

“Graphene and Human Brain Project win biggest research award in history (& this is the 2000th post)” on January 28, 2013 was how I announced the results of what had been a European Union (EU) competition that stretched out over several years and many stages, as projects were evaluated and fell by the wayside or were allowed onto the next stage. The two finalists received €1B each, to be paid out over ten years.

Future or not

As you can see, there was plenty of interesting stuff going on in 2023 but no watershed moments in the areas I follow. (Please do let me know in the Comments should you disagree with this or any other part of this posting.) Nanotechnology seems less and less an emerging science/technology in itself and more like a foundational element of our science and technology sectors. On that note, you may find my upcoming (in 2024) post about a report concerning the economic impact of the US National Nanotechnology Initiative (NNI) from 2002 to 2022 of interest.

Following on the commercialization theme, I have noticed an increase of interest in commercializing brain and brainlike engineering technologies, as well as, more discussion about ethics.

Colonizing the brain?

UNESCO held events such as, this noted in my July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” and this noted in my July 7, 2023 posting “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” An August 21, 2023 posting, “Ethical nanobiotechnology” adds to the discussion.

Meanwhile, Australia has been producing some very interesting mind/robot research, my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story.” I have more of this kind of research (mind control or mind reading) from Australia to be published in early 2024. The Australians are not alone, there’s also this April 12, 2023 posting, “Mind-reading prosthetic limbs” from Germany.

My May 12, 2023 posting, “Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023” shows Canada is entering the discussion. Unfortunately, the Canadian Science Policy Centre (CSPC), which held the event, has not posted a video online even though they have a YouTube channel featuring others of their events.

As for neuromorphic engineering, China has produced a roadmap for its research in this area as noted in my March 20, 2023 posting, “A nontraditional artificial synaptic device and roadmap for Chinese research into neuromorphic devices.”

Quantum anybody?

I haven’t singled it out in this end-of-year posting but there is a great deal of interest in quantum computing, both here in Canada and elsewhere. There is a 2023 report from the Council of Canadian Academies on the topic of quantum computing in Canada, which I hope to comment on soon.

Final words

I have a shout out for the Canadian Science Policy Centre, which celebrated its 15th anniversary in 2023. Congratulations!

For everyone, I wish peace on earth and all the best for you and yours in 2024!

Reducing toxicity of Alzheimer’s proteins with graphene oxide

Nobody really knows what causes Alzheimer’s disease (a form of dementia), so researchers continue to investigate the cause(s) and, also, possible remedies. An October 4, 2023 news item on ScienceDaily announces some of the latest research,

A probable early driver of Alzheimer’s disease is the accumulation of molecules called amyloid peptides. These cause cell death, and are commonly found in the brains of Alzheimer’s patients. Researchers at Chalmers University of Technology, Sweden, have now shown that yeast cells that accumulate these misfolded amyloid peptides can recover after being treated with graphene oxide nanoflakes.

An October 4, 2023 Chalmers University of Technology press release (also received via email and on EurekAlert) by Susanne Nilsson Lindh & Johanna Wilde, which originated the news item, delves into the topic,

Alzheimer’s disease is an incurable brain disease, leading to dementia and death, that causes suffering for both the patients and their families. It is estimated that over 40 million people worldwide are living with the disease or a related form of dementia. According to Alzheimer’s News Today, the estimated global cost of these diseases is one percent of the global gross domestic product.

Misfolded amyloid-beta peptides, Aβ peptides, that accumulate and aggregate in the brain, are believed to be the underlying cause of Alzheimer’s disease. They trigger a series of harmful processes in the neurons (brain cells) – causing the loss of many vital cell functions or cell death, and thus a loss of brain function in the affected area. To date, there are no effective strategies to treat amyloid accumulation in the brain.

Researchers at Chalmers University of Technology have now shown that treatment with graphene oxide leads to reduced levels of aggregated amyloid peptides in a yeast cell model.

“This effect of graphene oxide has recently also been shown by other researchers, but not in yeast cells”, says Xin Chen, Researcher in Systems Biology at Chalmers and first author of the study. “Our study also explains the mechanism behind the effect. Graphene oxide affects the metabolism of the cells, in a way that increases their resistance to misfolded proteins and oxidative stress. This has not been previously reported.”

Investigating the mechanisms using baker’s yeast affected by Alzheimer’s disease
In Alzheimer’s disease, the amyloid aggregates exert their neurotoxic effects by causing various cellular metabolic disorders, such as stress in the endoplasmic reticulum – a major part of the cell, in which many of its proteins are produced. This can reduce cells’ ability to handle misfolded proteins, and consequently increase the accumulation of these proteins.

The aggregates also affect the function of the mitochondria, the cells’ powerhouses. Therefore, the neurons are exposed to increased oxidative stress (reactive molecules called oxygen radicals, which damage other molecules); something to which brain cells are particularly sensitive.

The Chalmers researchers have conducted the study by a combination of protein analysis (proteomics) and follow-up experiments. They have used baker’s yeast, Saccharomyces cerevisiae, as an in vivo model for human cells. Both cell types have very similar systems for controlling protein quality. This yeast cell model was previously established by the research group to mimic human neurons affected by Alzheimer’s disease.

“The yeast cells in our model resemble neurons affected by the accumulation of amyloid-beta42, which is the form of amyloid peptide most prone to aggregate formation”, says Xin Chen. “These cells age faster than normal, show endoplasmic reticulum stress and mitochondrial dysfunction, and have elevated production of harmful reactive oxygen radicals.”

High hopes for graphene oxide nanoflakes
Graphene oxide nanoflakes are two-dimensional carbon nanomaterials with unique properties, including outstanding conductivity and high biocompatibility. They are used extensively in various research projects, including the development of cancer treatments, drug delivery systems and biosensors.

The nanoflakes are hydrophilic (water soluble) and interact well with biomolecules such as proteins. When graphene oxide enters living cells, it is able to interfere with the self-assembly processes of proteins.

“As a result, it can hinder the formation of protein aggregates and promote the disintegration of existing aggregates”, says Santosh Pandit, Researcher in Systems Biology at Chalmers and co-author of the study. “We believe that the nanoflakes act via two independent pathways to mitigate the toxic effects of amyloid-beta42 in the yeast cells.”

In one pathway, graphene oxide acts directly to prevent amyloid-beta42 accumulation. In the other, graphene oxide acts indirectly by a (currently unknown) mechanism, in which specific genes for stress response are activated. This increases the cell’s ability to handle misfolded proteins and oxidative stress.

How to treat Alzheimer’s patients is still a question for the future. However, according to the research group at Chalmers, graphene oxide holds great potential for future research in the field of neurodegenerative diseases. The research group has already been able to show that treatment with graphene oxide also reduces the toxic effects of protein aggregates specific to Huntington’s disease in a yeast model.

“The next step is to investigate whether it is possible to develop a drug delivery system based on graphene oxide for Alzheimer’s disease.” says Xin Chen. “We also want to test whether graphene oxide has beneficial effects in additional models of neurodegenerative diseases, such as Parkinson’s disease.”

More about: proteins and peptides
Proteins and peptides are fundamentally the same type of molecule and are made up of amino acids. Peptide molecules are smaller – typically containing less than 50 amino acids – and have a less complicated structure. Proteins and peptides can both become deformed if they fold in the wrong way during formation in the cell. When many amyloid-beta peptides accumulate in the brain, the aggregates are classified as proteins.

Here’s a link to and a citation for the paper,

Graphene Oxide Attenuates Toxicity of Amyloid-β Aggregates in Yeast by Promoting Disassembly and Boosting Cellular Stress Response by Xin Chen, Santosh Pandit, Lei Shi, Vaishnavi Ravikumar, Julie Bonne Køhler, Ema Svetlicic, Zhejian Cao, Abhroop Garg, Dina Petranovic, Ivan Mijakovic. Advanced Functional Materials, Volume 33, Issue 45, November 2, 2023, 2304053. First published online: July 7, 2023.

This paper is open access.

10 years of the European Union’s roll of the dice: €1B or 1billion euros each for the Human Brain Project (HBP) and the Graphene Flagship

“Graphene and Human Brain Project win biggest research award in history (& this is the 2000th post)” on January 28, 2013 was how I announced the results of what had been a European Union (EU) competition that stretched out over several years and many stages, as projects were evaluated and fell by the wayside or were allowed onto the next stage. The two finalists received €1B each, to be paid out over ten years.

Human Brain Project (HBP)

A September 12, 2023 Human Brain Project (HBP) press release (also on EurekAlert) summarizes the ten year research effort and the achievements,

The EU-funded Human Brain Project (HBP) comes to an end in September and celebrates its successful conclusion today with a scientific symposium at Forschungszentrum Jülich (FZJ). The HBP was one of the first flagship projects and, with 155 cooperating institutions from 19 countries and a total budget of 607 million euros, one of the largest research projects in Europe. Forschungszentrum Jülich, with its world-leading brain research institute and the Jülich Supercomputing Centre, played an important role in the ten-year project.

“Understanding the complexity of the human brain and explaining its functionality are major challenges of brain research today”, says Astrid Lambrecht, Chair of the Board of Directors of Forschungszentrum Jülich. “The instruments of brain research have developed considerably in the last ten years. The Human Brain Project has been instrumental in driving this development – and not only gained new insights for brain research, but also provided important impulses for information technologies.”

HBP researchers have employed highly advanced methods from computing, neuroinformatics and artificial intelligence in a truly integrative approach to understanding the brain as a multi-level system. The project has contributed to a deeper understanding of the complex structure and function of the brain and enabled novel applications in medicine and technological advances.

Among the project’s highlight achievements are a three-dimensional, digital atlas of the human brain with unprecedented detail, personalised virtual models of patient brains with conditions like epilepsy and Parkinson’s, breakthroughs in the field of artificial intelligence, and an open digital research infrastructure – EBRAINS – that will remain an invaluable resource for the entire neuroscience community beyond the end of the HBP.

Researchers at the HBP have presented scientific results in over 3000 publications, as well as advanced medical and technical applications and over 160 freely accessible digital tools for neuroscience research.

“The Human Brain Project has a pioneering role for digital brain research with a unique interdisciplinary approach at the interface of neuroscience, computing and technology,” says Katrin Amunts, Director of the HBP and of the Institute for Neuroscience and Medicine at FZJ. “EBRAINS will continue to power this new way of investigating the brain and foster developments in brain medicine.”

“The impact of what you achieved in digital science goes beyond the neuroscientific community”, said Gustav Kalbe, CNECT, Acting Director of Digital Excellence and Science Infrastructures at the European Commission during the opening of the event. “The infrastructure that the Human Brain Project has established is already seen as a key building block to facilitate cooperation and research across geographical boundaries, but also across communities.”

Further information about the Human Brain Project as well as photos from research can be found here:

Results highlights and event photos in the online press release.

Results overviews:
– “Human Brain Project: Spotlights on major achievements” and “A closer Look on Scientific …”

– “Human Brain Project: An extensive guide to the tools developed”

Examples of results from the Human Brain Project:

As the “Google Maps of the brain” [emphasis mine], the Human Brain Project makes the most comprehensive digital brain atlas to date available to all researchers worldwide. The atlas by Jülich researchers and collaborators combines high-resolution data of neurons, fibre connections, receptors and functional specialisations in the brain, and is designed as a constantly growing system.

13 hospitals in France are currently testing the new “Virtual Epileptic Patient” – a platform developed at the University of Marseille [Aix-Marseille University?] in the Human Brain Project. It creates personalised simulation models of brain dynamics to provide surgeons with predictions for the success of different surgical treatment strategies. The approach was presented this year in the journals Science Translational Medicine and The Lancet Neurology.

SpiNNaker2 is a “neuromorphic” [brainlike] computer developed by the University of Manchester and TU Dresden within the Human Brain Project. The company SpiNNcloud Systems in Dresden is commercialising the approach for AI applications.

As an openly accessible digital infrastructure, EBRAINS offers scientists easy access to the best techniques for complex research questions.


There was a Canadian connection at one time; Montréal Neuro at Canada’s McGill University was involved in developing a computational platform for neuroscience (CBRAIN) for HBP according to an announcement in my January 29, 2013 posting. However, there’s no mention of the EU project on the CBRAIN website nor is there mention of a Canadian partner on the EBRAINS website, which seemed the most likely successor to the CBRAIN portion of the HBP project originally mentioned in 2013.

I couldn’t resist “Google maps of the brain.”

In any event, the statement from Astrid Lambrecht offers an interesting contrast to that offered by the leader of the other project.

Graphene Flagship

In fact, the Graphene Flagship has been celebrating its 10th anniversary since last year; see my September 1, 2022 posting titled “Graphene Week (September 5 – 9, 2022) is a celebration of 10 years of the Graphene Flagship.”

The flagship’s lead institution, Chalmers University of Technology in Sweden, issued an August 28, 2023 press release by Lisa Gahnertz (also on the Graphene Flagship website but published September 4, 2023) touting its achievement with an ebullience I am more accustomed to seeing in US news releases,

Chalmers steers Europe’s major graphene venture to success

For the past decade, the Graphene Flagship, the EU’s largest ever research programme, has been coordinated from Chalmers with Jari Kinaret at the helm. As the project reaches the ten-year mark, expectations have been realised, a strong European research field on graphene has been established, and the journey will continue.

‘Have we delivered what we promised?’ asks Graphene Flagship Director Jari Kinaret from his office in the physics department at Chalmers, overlooking the skyline of central Gothenburg.

‘Yes, we have delivered more than anyone had a right to expect,’ [emphasis mine] he says. ‘In our analysis for the conclusion of the project, we read the documents that were written at the start. What we promised then were over a hundred specific things. Some of them were scientific and technological promises, and they have all been fulfilled. Others were for specific applications, and here 60–70 per cent of what was promised has been delivered. We have also delivered applications we did not promise from the start, but these are more difficult to quantify.’

The autumn of 2013 saw the launch of the massive ten-year Science, Technology and Innovation research programme on graphene and other related two-dimensional materials. Joint funding from the European Commission and EU Member States totalled a staggering €1,000 million. A decade later, it is clear that the large-scale initiative has succeeded in its endeavours. According to a report by the research institute WifOR, the Graphene Flagship will have created a total contribution to GDP of €3,800 million and 38,400 new jobs in the 27 EU countries between 2014 and 2030.

Exceeded expectations

‘Per euro invested and compared to other EU projects, the flagship has performed 13 times better than expected in terms of patent applications, and seven times better for scientific publications. We have 17 spin-off companies that have received over €130 million in private funding – people investing their own money is a real example of trust in the fact that the technology works,’ says Jari Kinaret.

He emphasises that the long time span has been crucial in developing the concepts of the various flagship projects.

‘When it comes to new projects, the ability to work on a long timescale is a must and is more important than a large budget. It takes a long time to build trust, both in one another within a team and in the technology on the part of investors, industry and the wider community. The size of the project has also been significant. There has been an ecosystem around the material, with many graphene manufacturers and other organisations involved. It builds robustness, which means you have the courage to invest in the material and develop it.’

From lab to application

In 2010, Andre Geim and Konstantin Novoselov of the University of Manchester won the Nobel Prize in Physics for their pioneering experiments isolating the ultra-light and ultra-thin material graphene. It was the first known 2D material and stunned the world with its ‘exceptional properties originating in the strange world of quantum physics’ according to the Nobel Foundation’s press release. Many potential applications were identified for this electrically conductive, heat-resistant and light-transmitting material. Jari Kinaret’s research team had been exploring the material since 2006, and when Kinaret learned of the European Commission’s call for a ten-year research programme, it prompted him to submit an application. The Graphene Flagship was initiated to ensure that Europe would maintain its leading position in graphene research and innovation, and its coordination and administration fell to Chalmers.

Is it a staggering thought that your initiative became the biggest EU research project of all time?

‘The fact that the three-minute presentation I gave at a meeting in Brussels has grown into an activity in 22 countries, with 170 organisations and 1,300 people involved … You can’t think about things like that because it can easily become overwhelming. Sometimes you just have to go for it,’ says Jari Kinaret.

One of the objectives of the Graphene Flagship was to take the hopes for this material and move them from lab to application. What has happened so far?

‘We are well on track with 100 products priced and on their way to the market. Many of them are business-to-business products that are not something we ordinary consumers are going to buy, but which may affect us indirectly.’

‘It’s important to remember that getting products to the application stage is a complex process. For a researcher, it may take ten working prototypes; for industry, ten million. Everything has to click into place, on a large scale. All components must work identically and in exactly the same way, and be compatible with existing production in manufacturing as you cannot rebuild an entire factory for a new material. In short, it requires reliability, reproducibility and manufacturability.’

Applications in a wide range of areas

Graphene’s extraordinary properties are being used to deliver the next generation of technologies in a wide range of fields, such as sensors for self-driving cars, advanced batteries, new water purification methods and sophisticated instruments for use in neuroscience. When asked if there are any applications that Jari Kinaret himself would like to highlight, he mentions, among other things, the applications that are underway in the automotive industry – such as sensors to detect obstacles for self-driving cars. Thanks to graphene, they will be so cost-effective to produce that it will be possible to make them available in more than just the most expensive car models.

He also highlights the aerospace industry, where a graphene material for removing ice from aircraft and helicopter wings is under development for the Airbus company. Another favourite, which he has followed from basic research to application, is the development of an air cleaner for Lufthansa passenger aircraft, based on a kind of ‘graphene foam’. Because graphene foam is very light, it can be heated extremely quickly. A pulse of electricity lasting one thousandth of a second is enough to raise the temperature to 300 degrees, thus killing micro-organisms and effectively cleaning the air in the aircraft.

He also mentions the Swedish company ABB, which has developed a graphene composite for circuit breakers in switchgear. These circuit breakers are used to protect the electricity network and must be safe to use. The graphene composite replaces the manual lubrication of the circuit breakers, resulting in significant cost savings.

‘We also see graphene being used in medical technology, but its application requires many years of testing and approval by various bodies. For example, graphene technology can more effectively map the brain before neurosurgery, as it provides a more detailed image. Another aspect of graphene is that it is soft and pliable. This means it can be used for electrodes that are implanted in the brain to treat tremors in Parkinson’s patients, without the electrodes causing scarring,’ says Jari Kinaret.

Coordinated by Chalmers

Jari Kinaret sees the fact that the EU chose Chalmers as the coordinating university as a favourable factor for the Graphene Flagship.

‘Hundreds of millions of SEK [Swedish kronor] have gone into Chalmers research, but what has perhaps been more important is that we have become well-known and visible in certain areas. We also have the 2D-Tech competence centre and the SIO Grafen programme, both funded by Vinnova and coordinated by Chalmers and Chalmers industriteknik respectively. I think it is excellent that Chalmers was selected, as there could have been too much focus on the coordinating organisation if it had been more firmly established in graphene research at the outset.’

What challenges have been encountered during the project?

‘With so many stakeholders involved, we are not always in agreement. But that is a good thing. A management book I once read said that if two parties always agree, then one is redundant. At the start of the project, it was also interesting to see the major cultural differences we had in our communications and that different cultures read different things between the lines; it took time to realise that we should be brutally straightforward in our communications with one another.’

What has it been like to have the coordinating role that you have had?

‘Obviously, I’ve had to worry about things an ordinary physics professor doesn’t have to worry about, like a phone call at four in the morning after the Brexit vote or helping various parties with intellectual property rights. I have read more legal contracts than I thought I would ever have to read as a professor. As a researcher, your approach when you go into a role is narrow and deep, here it was rather all about breadth. I would have liked to have both, but there are only 26 hours in a day,’ jokes Jari Kinaret.

New phase for the project and EU jobs to come

A new assignment now awaits Jari Kinaret outside Chalmers as Chief Executive Officer of the EU initiative KDT JU (Key Digital Technologies Joint Undertaking, soon to become Chips JU), where industry and the public sector interact to drive the development of new electronic components and systems.

The Graphene Flagship may have reached its destination in its current form, but the work started is progressing in a form more akin to a flotilla. About a dozen projects will continue to live on under the auspices of the European Commission’s Horizon Europe programme. Chalmers is going to coordinate a smaller CSA project called GrapheneEU, where CSA stands for ‘Coordination and Support Action’. It will act as a cohesive force between the research and innovation projects that make up the next phase of the flagship, offering them a range of support and services, including communication, innovation and standardisation.

The Graphene Flagship is about to turn ten. If the project had been a ten-year-old child, what kind of child would it have been?

‘It would have been a very diverse organism. Different aspirations are beginning to emerge – perhaps it is adolescence that is approaching. In addition, within the project we have also studied other related 2D materials, and we found that there are 6,000 distinct materials of this type, of which only about 100 have been studied. So, it’s the younger siblings that are starting to arrive now.’

Facts about the Graphene Flagship:

The Graphene Flagship is the first European flagship for future and emerging technologies. It has been coordinated and administered from the Department of Physics at Chalmers, and as the project enters its next phase, GrapheneEU, coordination will continue to be carried out by staff currently working on the flagship led by Chalmers Professor Patrik Johansson.

The project has proved highly successful in developing graphene-based technology in Europe, resulting in 17 new companies, around 100 new products, nearly 500 patent applications and thousands of scientific papers. All in all, the project has exceeded the EU’s targets for utilisation from research projects by a factor of ten. According to the assessment of the EU research programme Horizon 2020, Chalmers’ coordination of the flagship has been identified as one of the key factors behind its success.

Graphene Week will be held at the Svenska Mässan in Gothenburg from 4 to 8 September 2023. Graphene Week is an international conference, which also marks the finale of the ten-year anniversary of the Graphene Flagship. The conference will be jointly led by academia and industry – Professor Patrik Johansson from Chalmers and Dr Anna Andersson from ABB – and is expected to attract over 400 researchers from Sweden, Europe and the rest of the world. The programme includes an exhibition, press conference and media activities, special sessions on innovation, diversity and ethics, and several technical sessions. The full programme is available here.

Read the press release on Graphene Week from 4 to 8 September and the overall results of the Graphene Flagship. …

Ten years and €1B each. Congratulations to the organizers on such massive undertakings. As for whether (and how) they’ve been successful, I imagine time will tell.

Nanobiotics and artificial intelligence (AI)

Antibiotics at the nanoscale = nanobiotics. For a more complete explanation, there’s this (Note: the video runs a little longer than most of the others embedded on this blog),

Before pushing further into this research, a note about antibiotic resistance. In a sense, we’ve created the problem we (those scientists in particular) are trying to solve.

Antibiotics and cleaning products kill 99.9% of the bacteria, leaving the 0.1% that are immune. As so many living things on earth do, bacteria reproduce. Then a new antibiotic is needed and discovered; it too kills 99.9% of the bacteria, and the 0.1% left are immune to two antibiotics. And so it goes.
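The selection process above can be sketched in a few lines of code. The numbers here are illustrative placeholders, not epidemiology: each new antibiotic kills 99.9% of the bacteria, the surviving 0.1% regrow to full population size, and every survivor’s descendants keep the resistance.

```python
# Toy model of antibiotic selection pressure (illustrative numbers only).
population = 1_000_000_000   # hypothetical starting population
kill_rate = 0.999            # each new drug kills 99.9% of the bacteria
resistances = []             # drugs the current population can survive

for drug in ["drug_A", "drug_B", "drug_C"]:
    survivors = int(population * (1 - kill_rate))  # only 0.1% survive the new drug
    resistances.append(drug)                       # ...and they now resist it
    population = 1_000_000_000                     # survivors regrow to full size
    print(f"{drug}: {survivors:,} survivors regrew; resistant to {resistances}")
```

After the loop, the full-sized population carries resistance to all three drugs, which is the blog’s point in miniature: each round of treatment breeds a population the previous drugs can no longer touch.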

As the scientists have made clear, we’re running out of options using standard methods and they’re hoping this ‘nanoparticle approach’ as described in a June 5, 2023 news item on Nanowerk will work, Note: A link has been removed,

Identifying whether and how a nanoparticle and protein will bind with one another is an important step toward being able to design antibiotics and antivirals on demand, and a computer model developed at the University of Michigan can do it.

The new tool could help find ways to stop antibiotic-resistant infections and new viruses—and aid in the design of nanoparticles for different purposes.

“Just in 2019, the number of people who died of antimicrobial resistance was 4.95 million. Even before COVID, which worsened the problem, studies showed that by 2050, the number of deaths by antibiotic resistance will be 10 million,” said Angela Violi, an Arthur F. Thurnau Professor of mechanical engineering, and corresponding author of the study that made the cover of Nature Computational Science (“Domain-agnostic predictions of nanoscale interactions in proteins and nanoparticles”).

“In my ideal scenario, 20 or 30 years from now, I would like—given any superbug—to be able to quickly produce the best nanoparticles that can treat it.”

A June 5, 2023 University of Michigan news release (also on EurekAlert), which originated the news item, provides more technical details, Note: A link has been removed,

Much of the work within cells is done by proteins. Interaction sites on their surfaces can stitch molecules together, break them apart and perform other modifications—opening doorways into cells, breaking sugars down to release energy, building structures to support groups of cells and more. If we could design medicines that target crucial proteins in bacteria and viruses without harming our own cells, that would enable humans to fight new and changing diseases quickly.

The new [computer] model, named NeCLAS [Nanoparticle-Computed Ligand Affinity Scoring], uses machine learning—the AI technique that powers the virtual assistant on your smartphone and ChatGPT. But instead of learning to process language, it absorbs structural models of proteins and their known interaction sites. From this information, it learns to extrapolate how proteins and nanoparticles might interact, predict binding sites and the likelihood of binding between them—as well as predicting interactions between two proteins or two nanoparticles.
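The press release describes NeCLAS only at a high level. As a rough illustration of the general idea—scoring a candidate interaction with a learned classifier—here is a minimal sketch. The bead features (charge, hydrophobicity, radius), the weights, and the logistic scorer are all hypothetical placeholders, not NeCLAS’s actual architecture; in a real model the weights would be learned from known protein–protein contacts.

```python
import math

# Hypothetical per-bead features: (charge, hydrophobicity, radius_nm)
bead_a = (0.5, -0.2, 0.30)
bead_b = (-0.4, 0.1, 0.25)

def pair_features(a, b):
    """Combine two beads' features into one interaction feature vector."""
    return [a[0] * b[0],        # electrostatic complementarity
            a[1] * b[1],        # hydrophobic matching
            abs(a[2] - b[2])]   # size mismatch

def binding_probability(features, weights, bias):
    """Logistic score: estimated probability that the bead pair binds."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Placeholder weights; a trained model would fit these to real contact data.
weights, bias = [-2.0, 1.5, -3.0], 0.1
p = binding_probability(pair_features(bead_a, bead_b), weights, bias)
print(f"predicted contact probability: {p:.3f}")
```

The point of the sketch is the shape of the pipeline—features per candidate pair in, a probability of binding out—which is what lets the same machinery handle protein–protein, protein–nanoparticle, and nanoparticle–nanoparticle pairs.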

“Other models exist, but ours is the best for predicting interactions between proteins and nanoparticles,” said Paolo Elvati, U-M associate research scientist in mechanical engineering.

AlphaFold, for example, is a widely used tool for predicting the 3D structure of a protein based on its building blocks, called amino acids. While this capacity is crucial, this is only the beginning: Discovering how these proteins assemble into larger structures and designing practical nanoscale systems are the next steps.

“That’s where NeCLAS comes in,” said Jacob Saldinger, U-M doctoral student in chemical engineering and first author of the study. “It goes beyond AlphaFold by showing how nanostructures will interact with one another, and it’s not limited to proteins. This enables researchers to understand the potential applications of nanoparticles and optimize their designs.”

The team tested three case studies for which they had additional data: 

  • Molecular tweezers, in which a molecule binds to a particular site on another molecule. This approach can stop harmful biological processes, such as the aggregation of protein plaques in diseases of the brain like Alzheimer’s.
  • How graphene quantum dots break up the biofilm produced by staph bacteria. These nanoparticles are flakes of carbon, no more than a few atomic layers thick and 0.0001 millimeters to a side. Breaking up biofilms is likely a crucial tool in fighting antibiotic-resistant infections—including the superbug methicillin-resistant Staphylococcus aureus (MRSA), commonly acquired at hospitals.
  • Whether graphene quantum dots would disperse in water, demonstrating the model’s ability to predict nanoparticle-nanoparticle binding even though it had been trained exclusively on protein-protein data.

While many protein-protein models set amino acids as the smallest unit that the model must consider, this doesn’t work for nanoparticles. Instead, the team set the size of that smallest feature to be roughly the size of the amino acid but then let the computer model decide where the boundaries between these minimum features were. The result is representations of proteins and nanoparticles that look a bit like collections of interconnected beads, providing more flexibility in exploring small scale interactions.
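To make the “collections of interconnected beads” idea concrete, here is a sketch of one naive way to coarse-grain atom coordinates into beads: fixed grid binning with an assumed bead size of about 0.5 nm (roughly amino-acid scale). Note the difference from the paragraph above—NeCLAS reportedly lets the model decide where the bead boundaries fall, whereas this sketch fixes them on a grid.

```python
from collections import defaultdict

def coarse_grain(atoms, bead_size=0.5):
    """Group 3D atom coordinates (nm) into grid cells of side `bead_size`
    and return one centroid bead per occupied cell."""
    cells = defaultdict(list)
    for x, y, z in atoms:
        key = (int(x // bead_size), int(y // bead_size), int(z // bead_size))
        cells[key].append((x, y, z))
    beads = []
    for members in cells.values():
        n = len(members)
        # Place the bead at the centroid of the atoms it absorbs.
        beads.append(tuple(sum(c[i] for c in members) / n for i in range(3)))
    return beads

atoms = [(0.1, 0.1, 0.1), (0.2, 0.3, 0.1),   # these two collapse into one bead
         (1.4, 1.2, 0.9)]                     # this one forms its own bead
print(coarse_grain(atoms))
```

Each bead then carries aggregate properties of its atoms, which is what gives the “interconnected beads” representation its flexibility for exploring small-scale interactions.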

“Besides being more general, NeCLAS also uses way less training data than AlphaFold. We only have 21 nanoparticles to look at, so we have to use protein data in a clever way,” said Matt Raymond, U-M doctoral student in electrical and computer engineering and study co-author.  

Next, the team intends to explore other biofilms and microorganisms, including viruses.

The Nature Computational Science study was funded by the University of Michigan Blue Sky Initiative, the Army Research Office and the National Science Foundation. 

Here’s a link to and a citation for the paper,

Domain-agnostic predictions of nanoscale interactions in proteins and nanoparticles by Jacob Charles Saldinger, Matt Raymond, Paolo Elvati & Angela Violi. Nature Computational Science volume 3, pages 393–402 (2023). Published: 01 May 2023. Issue Date: May 2023.

This paper is behind a paywall.