Category Archives: electronics

Follow up to the Charles M. Lieber affair and US government efforts to prosecute nanotech scientists

Rebecca Trager in a March 5, 2021 news article for Chemistry World highlights support for Charles M. Lieber (Harvard professor and chair of the chemistry department) from his colleagues (Note: Links have been removed),

More than a year after the chair of Harvard University’s chemistry department was arrested for allegedly hiding his receipt of millions of dollars in research funding from China from his university and the US government, dozens of prominent researchers – including many Nobel Prize winners – are coming to Charles Lieber’s defence. They are calling the US Department of Justice (DOJ) case against him ‘unjust’ and urging the agency to drop it.

Following his January 2020 arrest, Lieber was placed on ‘indefinite’ paid administrative leave. The nanoscience pioneer was indicted in June [2020] on charges of making false statements to federal authorities regarding his participation in China’s Thousand Talents plan – the country’s programme to attract, recruit and cultivate high-level scientific talent from abroad. Lieber faces up to five years in prison and a fine of $250,000 (£179,000) if convicted.

A 1 March [2021] open letter, drafted and coordinated by Harvard chemist Stuart Schreiber, co-founder of the Broad Institute, and professor emeritus Elias Corey, winner of the 1990 chemistry Nobel prize, says Lieber became the target of a ‘tragically misguided government campaign’. The letter refers to Lieber as ‘one of the great scientists of his generation’ and warns such government actions are discouraging US scientists from collaborating with peers in other countries, particularly China. The open letter also notes that Lieber is fighting to salvage his reputation while suffering from incurable lymphoma.

Trager goes on to contrast Lieber’s treatment by Harvard with another embattled colleague’s treatment by his home institution (Note: Links have been removed),

Harvard’s treatment of Lieber stands in contrast to how the Massachusetts Institute of Technology (MIT) handled the more recent case of nanotechnologist Gang Chen, who was arrested in January [2021] for failing to report his ties to the Chinese government. MIT agreed to cover his legal fees, and more than 100 faculty members signed a letter to their university’s president that picked apart the DOJ’s allegations against Chen.

I have more details about the case against Lieber (as it was presented at the time) in a January 28, 2020 posting.

As for Professor Chen, I found this MIT statement dated January 14, 2021 (the date of his arrest) and this January 14, 2021 statement from the United States Attorney’s Office for the District of Massachusetts.

Mechano-photonic artificial synapse is bio-inspired

The word ‘memristor’ usually pops up when there’s research into artificial synapses but not in this new piece of research. I didn’t see any mention of the memristor in the paper’s references either, but I did find James Gimzewski from the University of California at Los Angeles (UCLA), whose research into brainlike computing (neuromorphic computing) runs parallel to, but separate from, the memristor research.

Dr. Thamarasee Jeewandara has written a March 25, 2021 article for phys.org about the latest neuromorphic computing research (Note: Links have been removed),

Multifunctional and diverse artificial neural systems can incorporate multimodal plasticity, memory and supervised learning functions to assist neuromorphic computation. In a new report, Jinran Yu and a research team in nanoenergy, nanoscience and materials science in China and the US, presented a bioinspired mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The team used an optoelectronic transistor made of graphene/molybdenum disulphide (MoS2) heterostructure and an integrated triboelectric nanogenerator to compose the artificial synapse. They controlled the charge transfer/exchange in the heterostructure with triboelectric potential and modulated the optoelectronic synapse behaviors readily, including postsynaptic photocurrents, photosensitivity and photoconductivity. The mechano-photonic artificial synapse is a promising implementation to mimic the complex biological nervous system and promote the development of interactive artificial intelligence. The work is now published in Science Advances.

The human brain can integrate cognition, learning and memory tasks via auditory, visual, olfactory and somatosensory interactions. This process is difficult to mimic using conventional von Neumann architectures that require additional sophisticated functions. Brain-inspired neural networks are made of various synaptic devices that transmit and process information using the synaptic weight. Emerging photonic synapses combine optical and electric neuromorphic modulation and computation to offer a favorable option with high bandwidth, fast speed and low cross-talk to significantly reduce power consumption. Biomechanical motions including touch, eye blinking and arm waving are other ubiquitous triggers or interactive signals to operate electronics during artificial synapse plasticization. In this work, Yu et al. presented a mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The device contained an optoelectronic transistor and an integrated triboelectric nanogenerator (TENG) in contact-separation mode. The mechano-optical artificial synapses have huge functional potential as interactive optoelectronic interfaces, synthetic retinas and intelligent robots. [emphasis mine]

As you can see, Jeewandara has written quite a technical summary of the work. Here’s an image from the Science Advances paper,

Fig. 1 Biological tactile/visual neurons and mechano-photonic artificial synapse. (A) Schematic illustrations of biological tactile/visual sensory system. (B) Schematic diagram of the mechano-photonic artificial synapse based on graphene/MoS2 (Gr/MoS2) heterostructure. (i) Top-view scanning electron microscope (SEM) image of the optoelectronic transistor; scale bar, 5 μm. The cyan area indicates the MoS2 flake, while the white strip is graphene. (ii) Illustration of charge transfer/exchange for Gr/MoS2 heterostructure. (iii) Output mechano-photonic signals from the artificial synapse for image recognition.
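For readers who like to see an idea as a toy model, here’s a minimal Python sketch of the kind of behaviour being described: a synaptic ‘weight’ that decays on its own, is potentiated by light pulses, and has that potentiation scaled by a mechanical (triboelectric) gating factor. The code and every number in it are my own illustration, not anything taken from the paper,

# Toy model of a mechano-photonic synapse (illustrative only).
# A synaptic "weight" decays over time and is potentiated by light pulses;
# a triboelectric (mechanical) gate scales how strongly each pulse counts.

import math

def simulate(light_pulses, mech_gate, dt=1e-3, tau=0.05, gain=0.2, steps=200):
    # light_pulses: set of step indices at which a light pulse arrives
    # mech_gate: factor between 0 and 1 standing in for the triboelectric potential
    w = 0.0                        # postsynaptic response (arbitrary units)
    history = []
    for t in range(steps):
        w *= math.exp(-dt / tau)   # spontaneous decay (short-term forgetting)
        if t in light_pulses:
            w += gain * mech_gate  # light potentiates; mechanics scales the gain
        history.append(w)
    return history

# Same optical input, two different mechanical states:
weak = simulate(light_pulses={10, 20, 30}, mech_gate=0.2)
strong = simulate(light_pulses={10, 20, 30}, mech_gate=1.0)
print("peak response, weak mechanical gating:  ", round(max(weak), 3))
print("peak response, strong mechanical gating:", round(max(strong), 3))

Running it shows the same train of light pulses producing a much larger cumulative response when the mechanical gate is ‘on’, which is the synergy the paper is getting at, in cartoon form.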

You can find the paper here,

Bioinspired mechano-photonic artificial synapse based on graphene/MoS2 heterostructure by Jinran Yu, Xixi Yang, Guoyun Gao, Yao Xiong, Yifei Wang, Jing Han, Youhui Chen, Huai Zhang, Qijun Sun and Zhong Lin Wang. Science Advances 17 Mar 2021: Vol. 7, no. 12, eabd9117 DOI: 10.1126/sciadv.abd9117

This appears to be open access.

Cortical spheroids (like mini-brains) could unlock (larger) brain’s mysteries

A March 19, 2021 Northwestern University news release on EurekAlert announces the creation of a device designed to monitor brain organoids (for anyone unfamiliar with brain organoids there’s more information after the news),

A team of scientists, led by researchers at Northwestern University, Shirley Ryan AbilityLab and the University of Illinois at Chicago (UIC), has developed novel technology promising to increase understanding of how brains develop, and offer answers on repairing brains in the wake of neurotrauma and neurodegenerative diseases.

Their research is the first to combine the most sophisticated 3-D bioelectronic systems with highly advanced 3-D human neural cultures. The goal is to enable precise studies of how human brain circuits develop and repair themselves in vitro. The study is the cover story for the March 19 [March 17, 2021 according to the citation] issue of Science Advances.

The cortical spheroids used in the study, akin to “mini-brains,” were derived from human-induced pluripotent stem cells. Leveraging a 3-D neural interface system that the team developed, scientists were able to create a “mini laboratory in a dish” specifically tailored to study the mini-brains and collect different types of data simultaneously. Scientists incorporated electrodes to record electrical activity. They added tiny heating elements to either keep the brain cultures warm or, in some cases, intentionally overheat the cultures to stress them. They also incorporated tiny probes — such as oxygen sensors and small LED lights — to perform optogenetic experiments. For instance, they introduced genes into the cells that allowed them to control the neural activity using different-colored light pulses.

This platform then enabled scientists to perform complex studies of human tissue without directly involving humans or performing invasive testing. In theory, any person could donate a limited number of their cells (e.g., blood sample, skin biopsy). Scientists can then reprogram these cells to produce a tiny brain spheroid that shares the person’s genetic identity. The authors believe that, by combining this technology with a personalized medicine approach using human stem cell-derived brain cultures, they will be able to glean insights faster and generate better, novel interventions.

“The advances spurred by this research will offer a new frontier in the way we study and understand the brain,” said Shirley Ryan AbilityLab’s Dr. Colin Franz, co-lead author on the paper who led the testing of the cortical spheroids. “Now that the 3-D platform has been developed and validated, we will be able to perform more targeted studies on our patients recovering from neurological injury or battling a neurodegenerative disease.”

Yoonseok Park, postdoctoral fellow at Northwestern University and co-lead author, added, “This is just the beginning of an entirely new class of miniaturized, 3-D bioelectronic systems that we can construct to expand the capacity of the regenerative medicine field. For example, our next generation of device will support the formation of even more complex neural circuits from brain to muscle, and increasingly dynamic tissues like a beating heart.”

Current electrode arrays for tissue cultures are 2-D, flat and unable to match the complex structural designs found throughout nature, such as those found in the human brain. Moreover, even when a system is 3-D, it is extremely challenging to incorporate more than one type of material into a small 3-D structure. With this advance, however, an entire class of 3-D bioelectronics devices has been tailored for the field of regenerative medicine.

“Now, with our small, soft 3-D electronics, the capacity to build devices that mimic the complex biological shapes found in the human body is finally possible, providing a much more holistic understanding of a culture,” said Northwestern’s John Rogers, who led the technology development using technology similar to that found in phones and computers. “We no longer have to compromise function to achieve the optimal form for interfacing with our biology.”

As a next step, scientists will use the devices to better understand neurological disease, test drugs and therapies that have clinical potential, and compare different patient-derived cell models. This understanding will then enable a better grasp of individual differences that may account for the wide variation of outcomes seen in neurological rehabilitation.

“As scientists, our goal is to make laboratory research as clinically relevant as possible,” said Kristen Cotton, research assistant in Dr. Franz’s lab. “This 3-D platform opens the door to new experiments, discovery and scientific advances in regenerative neurorehabilitation medicine that have never been possible.”

Caption: Three dimensional multifunctional neural interfaces for cortical spheroids and engineered assembloids Credit: Northwestern University

As for what brain organoids might be, Carl Zimmer in an Aug. 29, 2019 article for the New York Times provides an explanation,

Organoids Are Not Brains. How Are They Making Brain Waves?

Two hundred and fifty miles over Alysson Muotri’s head, a thousand tiny spheres of brain cells were sailing through space.

The clusters, called brain organoids, had been grown a few weeks earlier in the biologist’s lab here at the University of California, San Diego. He and his colleagues altered human skin cells into stem cells, then coaxed them to develop as brain cells do in an embryo.

The organoids grew into balls about the size of a pinhead, each containing hundreds of thousands of cells in a variety of types, each type producing the same chemicals and electrical signals as those cells do in our own brains.

In July, NASA packed the organoids aboard a rocket and sent them to the International Space Station to see how they develop in zero gravity.

Now the organoids were stowed inside a metal box, fed by bags of nutritious broth. “I think they are replicating like crazy at this stage, and so we’re going to have bigger organoids,” Dr. Muotri said in a recent interview in his office overlooking the Pacific.

What, exactly, are they growing into? That’s a question that has scientists and philosophers alike scratching their heads.

On Thursday, Dr. Muotri and his colleagues reported that they have recorded simple brain waves in these organoids. In mature human brains, such waves are produced by widespread networks of neurons firing in synchrony. Particular wave patterns are linked to particular forms of brain activity, like retrieving memories and dreaming.

As the organoids mature, the researchers also found, the waves change in ways that resemble the changes in the developing brains of premature babies.

“It’s pretty amazing,” said Giorgia Quadrato, a neurobiologist at the University of Southern California who was not involved in the new study. “No one really knew if that was possible.”

But Dr. Quadrato stressed it was important not to read too much into the parallels. What she, Dr. Muotri and other brain organoid experts build are clusters of replicating brain cells, not actual brains.

If you have the time, I recommend reading Zimmer’s article in its entirety. Perhaps not coincidentally, Zimmer has an excerpt titled “Lab-Grown Brain Organoids Aren’t Alive. But They’re Not Not Alive, Either.” published in Slate.com,

From Life’s Edge: The Search For What It Means To Be Alive by Carl Zimmer, published by Dutton, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2021 by Carl Zimmer.

Cleber Trujillo led me to a windowless room banked with refrigerators, incubators, and microscopes. He extended his blue-gloved hands to either side and nearly touched the walls. “This is where we spend half our day,” he said.

In that room Trujillo and a team of graduate students raised a special kind of life. He opened an incubator and picked out a clear plastic box. Raising it above his head, he had me look up at it through its base. Inside the box were six circular wells, each the width of a cookie and filled with what looked like watered-down grape juice. In each well 100 pale globes floated, each the size of a housefly head.

Getting back to the research about monitoring brain organoids, here’s a link to and a citation for the paper about cortical spheroids,

Three-dimensional, multifunctional neural interfaces for cortical spheroids and engineered assembloids by Yoonseok Park, Colin K. Franz, Hanjun Ryu, Haiwen Luan, Kristen Y. Cotton, Jong Uk Kim, Ted S. Chung, Shiwei Zhao, Abraham Vazquez-Guardado, Da Som Yang, Kan Li, Raudel Avila, Jack K. Phillips, Maria J. Quezada, Hokyung Jang, Sung Soo Kwak, Sang Min Won, Kyeongha Kwon, Hyoyoung Jeong, Amay J. Bandodkar, Mengdi Han, Hangbo Zhao, Gabrielle R. Osher, Heling Wang, KunHyuck Lee, Yihui Zhang, Yonggang Huang, John D. Finan and John A. Rogers. Science Advances 17 Mar 2021: Vol. 7, no. 12, eabf9153 DOI: 10.1126/sciadv.abf9153

This paper appears to be open access.

According to a March 22, 2021 posting on the Shirley Ryan AbilityLab website, the paper is featured on the front cover of Science Advances (vol. 7 no. 12).

Memristor artificial neural network learning based on phase-change memory (PCM)

Caption: Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST. Credit: UNIST

I’m pretty sure that Professor Hongsik Jeong is the one on the right. He seems more relaxed, like he’s accustomed to posing for pictures highlighting his work.

Now on to the latest memristor news, which features the number 8.

For anyone unfamiliar with the term memristor, it’s a device (of sorts) that scientists involved in neuromorphic computing (computers that operate like human brains) are researching as they attempt to replicate brainlike processes for computers.

From a January 22, 2021 Ulsan National Institute of Science and Technology (UNIST) press release (also on EurekAlert but published March 15, 2021),

An international team of researchers, affiliated with UNIST, has unveiled a novel technology that could improve the learning ability of artificial neural networks (ANNs).

Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST, in collaboration with researchers from Tsinghua University in China, proposed a new learning method to improve the learning ability of ANN chips by challenging its instability.

Artificial neural network chips are capable of mimicking the structural, functional and biological features of human neural networks, and thus have been considered the technology of the future. In this study, the research team demonstrated the effectiveness of the proposed learning method by building phase change memory (PCM) memristor arrays that operate like ANNs. This learning method is also advantageous in that its learning ability can be improved without additional power consumption, since PCM undergoes a spontaneous resistance increase due to the structural relaxation after amorphization.

ANNs, like human brains, use less energy even when performing computation and memory tasks simultaneously. However, artificial neural network chips, which integrate large numbers of physical devices, inevitably contain device errors. Existing learning methods assume a perfect, error-free chip, which limits the learning ability of the artificial neural network.

The research team developed a memristor-based artificial neural network learning method built on phase-change memory, reasoning that the real human brain does not require near-perfect operation. The learning method incorporates the “resistance drift” (a gradual increase in electrical resistance) of the phase-change material in the memory semiconductor into learning. Because the information-update pattern is recorded during learning as increasing electrical resistance in the memristor, which serves as a synapse, the synapse additionally learns the association between the pattern of its own changes and the data it is learning.

In an experiment classifying handwritten digits (0-9), the research team showed that the new learning method improved learning ability by about 3%. In particular, accuracy on the digit 8, which is difficult to classify, improved significantly. [emphasis mine] The learning ability improved thanks to the synaptic update pattern, which changes according to the difficulty of the handwriting-classification task.

The researchers expect their findings to promote learning algorithms that exploit the intrinsic properties of memristor devices, opening a new direction for the development of neuromorphic computing chips.
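I can’t reproduce the team’s actual algorithm, but here’s a rough Python sketch of the general idea as I understand it: a tiny perceptron-style classifier whose weights are stored as PCM-like conductances that spontaneously drift (resistance creeping up, conductance down) after every update, and which keeps learning anyway. All of the numbers, and the synthetic data, are invented for illustration,

# Illustrative sketch: a tiny linear classifier whose weights are stored as
# PCM-like conductances that drift (resistance rises, conductance falls)
# a little after every update. Values and data are invented for illustration.

import random

random.seed(0)
DRIFT = 0.995      # fraction of conductance retained per step (resistance drift)
LR = 0.05          # learning rate
DIM = 8

# Synthetic two-class data: the label depends on the sign of a hidden direction.
hidden = [random.uniform(-1, 1) for _ in range(DIM)]
def sample():
    x = [random.uniform(-1, 1) for _ in range(DIM)]
    y = 1 if sum(h * xi for h, xi in zip(hidden, x)) > 0 else 0
    return x, y

w = [0.0] * DIM
for step in range(2000):
    x, y = sample()
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
    # Perceptron-style update written into the stored "conductances".
    w = [wi + LR * (y - pred) * xi for wi, xi in zip(w, x)]
    # Spontaneous drift: every stored conductance relaxes a little afterwards.
    w = [wi * DRIFT for wi in w]

correct = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0) == y
    for x, y in (sample() for _ in range(500))
)
print(f"accuracy with drifting weights: {correct / 500:.2%}")

The point of the sketch is only that drift need not be fatal to learning; the UNIST/Tsinghua result goes further, using the drift itself as extra information during training.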

Here’s a link to and a citation for the paper,

Spontaneous sparse learning for PCM-based memristor neural networks by Dong-Hyeok Lim, Shuang Wu, Rong Zhao, Jung-Hoon Lee, Hongsik Jeong & Luping Shi. Nature Communications volume 12, Article number: 319 (2021) DOI: https://doi.org/10.1038/s41467-020-20519-z Published 12 January 2021

This paper is open access.

Plug me in: how to power up ingestible and implantable electronics

From time to time I’ve featured ‘vampire technology’, a name I vastly prefer to energy harvesting or any of its variants. The focus has usually been on implantable electronic devices such as pacemakers and deep brain stimulators.

In this February 16, 2021 Nanowerk Spotlight article, Michael Berger broadens the focus to include other electronic devices,

Imagine edible medical devices that can be safely ingested by patients, perform a test or release a drug, and then transmit feedback to your smartphone; or an ingestible, Jell-O-like pill that monitors the stomach for up to a month.

Devices like these, as well as a wide range of implantable biomedical electronic devices such as pacemakers, neurostimulators, subdermal blood sensors, capsule endoscopes, and drug pumps, can be useful tools for detecting physiological and pathophysiological signals, and providing treatments performed inside the body.

Advances in wireless communication enable medical devices to be untethered when in the human body. Advances in minimally invasive or semi-invasive surgical implantation procedures have enabled biomedical devices to be implanted in locations where clinically important biomarkers and physiological signals can be detected; it has also enabled direct administration of medication or treatment to a target location.

However, one major challenge in the development of these devices is the limited lifetime of their power sources. The energy requirements of biomedical electronic devices are highly dependent on their application and the complexity of the required electrical systems.

Berger’s commentary was occasioned by a review article in Advanced Functional Materials (link and citation to follow at the end of this post). Based on this review, the February 16, 2021 Nanowerk Spotlight article provides insight into the current state of affairs and challenges,

Biomedical electronic devices can be divided into three main categories depending on their application: diagnostic, therapeutic, and closed-loop systems. Each category has a different degree of complexity in the electronic system.

… most biomedical electronic devices are composed of a common set of components, including a power unit, sensors, actuators, a signal processing and control unit, and a data storage unit. Implantable and ingestible devices that require a great deal of data manipulation or large quantities of data logging also need to be wirelessly connected to an external device so that data can be transmitted to an external receiver and signal processing, data storage, and display can be performed more efficiently.

The power unit, which is composed of one or more energy sources – batteries, energy-harvesting, and energy transfer – as well as power management circuits, supplies electrical energy to the whole system.

Implantable medical devices such as cardiac pacemakers, neurostimulators and drug delivery devices are major medical tools to support life activity and provide new therapeutic strategies. Most such devices are powered by lithium batteries whose service life is as low as 10 years. Hence, many patients must undergo a major surgery to check the battery performance and replace the batteries as necessary.

In the last few decades, new battery technology has led to increases in the performance, reliability, and lifetime of batteries. However, challenges remain, especially in terms of volumetric energy density and safety.

Electronic miniaturization allows more functionalities to be added to devices, which increases power requirements. Recently, new material-based battery systems have been developed with higher energy densities.

Different locations and organ systems in the human body have access to different types of energy sources, such as mechanical, chemical, and electromagnetic energies.

Energy transfer technologies can deliver energy from outside the body to implanted or ingested devices. If devices are implanted at the locations where there are no accessible endogenous energies, exogenous energies in the form of ultrasonic or electromagnetic waves can penetrate through the biological barriers and wirelessly deliver the energies to the devices.
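The roughly ten-year battery life mentioned above is easy to sanity-check with back-of-the-envelope arithmetic. Here it is as a few lines of Python; the capacity and current-draw figures are hypothetical numbers chosen for illustration, not values taken from the review,

# Rough implant-battery lifetime estimate (hypothetical numbers, for illustration).
capacity_mah = 1000      # assumed pacemaker-class battery capacity, in mAh
voltage = 3.0            # typical lithium primary-cell voltage, in volts
avg_current_ua = 10      # assumed average device current draw, in microamps

energy_wh = capacity_mah / 1000 * voltage        # stored energy in watt-hours
hours = capacity_mah * 1000 / avg_current_ua     # convert mAh to uAh, then divide by uA
years = hours / (24 * 365)
print(f"stored energy: {energy_wh:.1f} Wh, estimated lifetime: {years:.1f} years")

With those made-up but plausible-looking numbers the estimate lands at a bit over 11 years, which is why shrinking devices (less room for the battery) or adding functions (more current draw) pushes researchers toward harvesting and wireless energy transfer.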

Both images embedded in the February 16, 2021 Nanowerk Spotlight article are informative. I’m particularly taken with the timeline, which follows the development of batteries, energy harvesting/transfer devices, ingestible electronics, and implantable electronics. The first battery appeared in 1800, followed by ingestible and implantable electronics in the 1950s.

Berger’s commentary ends on this,

Concluding their review, the authors [in Advanced Functional Materials] note that low energy conversion efficiency and power output are the fundamental bottlenecks of energy harvesting and transfer devices. They suggest that additional studies are needed to improve the power output of energy harvesting and transfer devices so that they can be used to power various biomedical electronics.

Furthermore, durability studies of promising energy harvesters should be performed to evaluate their use in long-term applications. For degradable energy harvesting devices, such as friction-based energy harvesters and galvanic cells, improving the device lifetime is essential for use in real-life applications.

Finally, manufacturing cost is another factor to consider when commercializing novel batteries, energy harvesters, or energy transfer devices as power sources for medical devices.

Here’s a link to and a citation for the paper,

Powering Implantable and Ingestible Electronics by So‐Yoon Yang, Vitor Sencadas, Siheng Sean You, Neil Zi‐Xun Jia, Shriya Sruthi Srinivasan, Hen‐Wei Huang, Abdelsalam Elrefaey Ahmed, Jia Ying Liang, Giovanni Traverso. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202009289 First published: 04 February 2021

This paper is behind a paywall.

It may be possible to receive a full text PDF of the article from the authors. Try here.

There are others but here are two of my posts about ‘vampire energy’,

Harvesting the heart’s kinetic energy to power implants (July 26, 2019)

Vampire nanogenerators: 2017 (October 19, 2017)

Exotic magnetism: a quantum simulation from D-Wave Systems

Vancouver (Canada) area company D-Wave Systems is trumpeting itself (with good reason) again. This 2021 ‘milestone’ achievement builds on work from 2018 (see my August 23, 2018 posting for the earlier work). For me, the big excitement was finding the best explanation for quantum annealing and D-Wave’s quantum computers that I’ve seen yet (that explanation and a link to more are at the end of this posting).

A February 18, 2021 news item on phys.org announces the latest achievement,

D-Wave Systems Inc. today [February 18, 2021] published a milestone study in collaboration with scientists at Google, demonstrating a computational performance advantage, increasing with both simulation size and problem hardness, to over 3 million times that of corresponding classical methods. Notably, this work was achieved on a practical application with real-world implications, simulating the topological phenomena behind the 2016 Nobel Prize in Physics. This performance advantage, exhibited in a complex quantum simulation of materials, is a meaningful step in the journey toward applications advantage in quantum computing.

A February 18, 2021 D-Wave Systems press release (also on EurekAlert), which originated the news item, describes the work in more detail,

The work by scientists at D-Wave and Google also demonstrates that quantum effects can be harnessed to provide a computational advantage in D-Wave processors, at problem scale that requires thousands of qubits. Recent experiments performed on multiple D-Wave processors represent by far the largest quantum simulations carried out by existing quantum computers to date.

The paper, entitled “Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets”, was published in the journal Nature Communications (DOI 10.1038/s41467-021-20901-5, February 18, 2021). D-Wave researchers programmed the D-Wave 2000Q™ system to model a two-dimensional frustrated quantum magnet using artificial spins. The behavior of the magnet was described by the Nobel-prize winning work of theoretical physicists Vadim Berezinskii, J. Michael Kosterlitz and David Thouless. They predicted a new state of matter in the 1970s characterized by nontrivial topological properties. This new research is a continuation of previous breakthrough work published by D-Wave’s team in a 2018 Nature paper entitled “Observation of topological phenomena in a programmable lattice of 1,800 qubits” (Vol. 560, Issue 7719, August 22, 2018). In this latest paper, researchers from D-Wave, alongside contributors from Google, utilize D-Wave’s lower noise processor to achieve superior performance and glean insights into the dynamics of the processor never observed before.

“This work is the clearest evidence yet that quantum effects provide a computational advantage in D-Wave processors,” said Dr. Andrew King, principal investigator for this work at D-Wave. “Tying the magnet up into a topological knot and watching it escape has given us the first detailed look at dynamics that are normally too fast to observe. What we see is a huge benefit in absolute terms, with the scaling advantage in temperature and size that we would hope for. This simulation is a real problem that scientists have already attacked using the algorithms we compared against, marking a significant milestone and an important foundation for future development. This wouldn’t have been possible today without D-Wave’s lower noise processor.”

“The search for quantum advantage in computations is becoming increasingly lively because there are special problems where genuine progress is being made. These problems may appear somewhat contrived even to physicists, but in this paper from a collaboration between D-Wave Systems, Google, and Simon Fraser University [SFU], it appears that there is an advantage for quantum annealing using a special purpose processor over classical simulations for the more ‘practical’ problem of finding the equilibrium state of a particular quantum magnet,” said Prof. Dr. Gabriel Aeppli, professor of physics at ETH Zürich and EPF Lausanne, and head of the Photon Science Division of the Paul Scherrer Institute. “This comes as a surprise given the belief of many that quantum annealing has no intrinsic advantage over path integral Monte Carlo programs implemented on classical processors.”

“Nascent quantum technologies mature into practical tools only when they leave classical counterparts in the dust in solving real-world problems,” said Hidetoshi Nishimori, Professor, Institute of Innovative Research, Tokyo Institute of Technology. “A key step in this direction has been achieved in this paper by providing clear evidence of a scaling advantage of the quantum annealer over an impregnable classical computing competitor in simulating dynamical properties of a complex material. I send sincere applause to the team.”

“Successfully demonstrating such complex phenomena is, on its own, further proof of the programmability and flexibility of D-Wave’s quantum computer,” said D-Wave CEO Alan Baratz. “But perhaps even more important is the fact that this was not demonstrated on a synthetic or ‘trick’ problem. This was achieved on a real problem in physics against an industry-standard tool for simulation–a demonstration of the practical value of the D-Wave processor. We must always be doing two things: furthering the science and increasing the performance of our systems and technologies to help customers develop applications with real-world business value. This kind of scientific breakthrough from our team is in line with that mission and speaks to the emerging value that it’s possible to derive from quantum computing today.”

The scientific achievements presented in Nature Communications further underpin D-Wave’s ongoing work with world-class customers to develop over 250 early quantum computing applications, with a number piloting in production applications, in diverse industries such as manufacturing, logistics, pharmaceutical, life sciences, retail and financial services. In September 2020, D-Wave brought its next-generation Advantage™ quantum system to market via the Leap™ quantum cloud service. The system includes more than 5,000 qubits and 15-way qubit connectivity, as well as an expanded hybrid solver service capable of running business problems with up to one million variables. The combination of Advantage’s computing power and scale with the hybrid solver service gives businesses the ability to run performant, real-world quantum applications for the first time.

That last paragraph seems more sales pitch than research oriented. It’s not unexpected in a company’s press release but I was surprised that the editors at EurekAlert didn’t remove it.

Here’s a link to and a citation for the latest paper,

Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets by Andrew D. King, Jack Raymond, Trevor Lanting, Sergei V. Isakov, Masoud Mohseni, Gabriel Poulin-Lamarre, Sara Ejtemaee, William Bernoudy, Isil Ozfidan, Anatoly Yu. Smirnov, Mauricio Reis, Fabio Altomare, Michael Babcock, Catia Baron, Andrew J. Berkley, Kelly Boothby, Paul I. Bunyk, Holly Christiani, Colin Enderud, Bram Evert, Richard Harris, Emile Hoskinson, Shuiyuan Huang, Kais Jooya, Ali Khodabandelou, Nicolas Ladizinsky, Ryan Li, P. Aaron Lott, Allison J. R. MacDonald, Danica Marsden, Gaelen Marsden, Teresa Medina, Reza Molavi, Richard Neufeld, Mana Norouzpour, Travis Oh, Igor Pavlov, Ilya Perminov, Thomas Prescott, Chris Rich, Yuki Sato, Benjamin Sheldan, George Sterling, Loren J. Swenson, Nicholas Tsai, Mark H. Volkmann, Jed D. Whittaker, Warren Wilkinson, Jason Yao, Hartmut Neven, Jeremy P. Hilton, Eric Ladizinsky, Mark W. Johnson, Mohammad H. Amin. Nature Communications volume 12, Article number: 1113 (2021) DOI: https://doi.org/10.1038/s41467-021-20901-5 Published: 18 February 2021

This paper is open access.

Quantum annealing and more

Dr. Andrew King, one of the D-Wave researchers, has written a February 18, 2021 article on Medium explaining some of the work. I’ve excerpted one of King’s points,

Insight #1: We observed what actually goes on under the hood in the processor for the first time

Quantum annealing — the approach adopted by D-Wave from the beginning — involves setting up a simple but purely quantum initial state, and gradually reducing the “quantumness” until the system is purely classical. This takes on the order of a microsecond. If you do it right, the classical system represents a hard (NP-complete) computational problem, and the state has evolved to an optimal, or at least near-optimal, solution to that problem.

What happens at the beginning and end of the computation are about as simple as quantum computing gets. But the action in the middle is hard to get a handle on, both theoretically and experimentally. That’s one reason these experiments are so important: they provide high-fidelity measurements of the physical processes at the core of quantum annealing. Our 2018 Nature article introduced the same simulation, but without measuring computation time. To benchmark the experiment this time around, we needed lower-noise hardware (in this case, we used the D-Wave 2000Q lower noise quantum computer), and we needed, strangely, to slow the simulation down. Since the quantum simulation happens so fast, we actually had to make things harder. And we had to find a way to slow down both quantum and classical simulation in an equitable way. The solution? Topological obstruction.
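King is describing quantum annealing; the classical cousin is much easier to put on a page. Here’s a toy Python sketch of classical simulated annealing on the smallest geometrically frustrated magnet there is (three Ising spins with antiferromagnetic couplings on a triangle), purely to illustrate what ‘frustration’ and ‘annealing’ mean. It is not the paper’s path-integral Monte Carlo and it is certainly not the quantum processor,

# Toy classical simulated annealing on a geometrically frustrated magnet:
# three Ising spins with antiferromagnetic couplings arranged on a triangle.
# At best only two of the three bonds can be satisfied at once; that is frustration.

import math, random

random.seed(1)
J = 1.0                                   # antiferromagnetic coupling strength
bonds = [(0, 1), (1, 2), (2, 0)]          # the frustrated triangle

def energy(spins):
    return sum(J * spins[i] * spins[j] for i, j in bonds)

spins = [random.choice([-1, 1]) for _ in range(3)]
for temperature in [2.0 * 0.95 ** k for k in range(200)]:   # slow cooling schedule
    i = random.randrange(3)
    trial = spins[:]
    trial[i] *= -1
    dE = energy(trial) - energy(spins)
    if dE <= 0 or random.random() < math.exp(-dE / temperature):
        spins = trial                      # accept downhill moves, or uphill ones thermally

print("final spins:", spins, "energy:", energy(spins))
# The best achievable energy is -1 (one bond is always left unsatisfied), not -3.

A quantum annealer tackles the same kind of energy landscape, but it starts from a quantum superposition and lets tunneling, rather than thermal kicks alone, carry the system out of bad configurations; the paper’s benchmark pits the quantum processor against a much more sophisticated classical method (path-integral Monte Carlo), not against anything as simple as this.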

If you have time and the inclination, I encourage you to read King’s piece.

Graphene and its magnetism

I have two news bits about graphene and magnetism. If I understood what I was reading, one is more focused on applications and the other is focused on further establishing the field of valleytronics.

University of Cambridge and superconductivity

A February 8, 2021 news item on Nanowerk announces ‘magnetic work’ from the University of Cambridge (Note: A link has been removed),

The researchers, led by the University of Cambridge, were able to control the conductivity and magnetism of iron thiophosphate (FePS3), a two-dimensional material which undergoes a transition from an insulator to a metal when compressed. This class of magnetic materials offers new routes to understanding the physics of new magnetic states and superconductivity.

Using new high-pressure techniques, the researchers have shown what happens to magnetic graphene during the transition from insulator to conductor and into its unconventional metallic state, realised only under ultra-high pressure conditions. When the material becomes metallic, it remains magnetic, which is contrary to previous results and provides clues as to how the electrical conduction in the metallic phase works. The newly discovered high-pressure magnetic phase likely forms a precursor to superconductivity so understanding its mechanisms is vital.

Their results, published in the journal Physical Review X, also suggest a way that new materials could be engineered to have combined conduction and magnetic properties, which could be useful in the development of new technologies such as spintronics, which could transform the way in which computers process information.

A February 8, 2021 University of Cambridge press release (also on EurekAlert), which originated the news item, delves into the topic,

Properties of matter can alter dramatically with changing dimensionality. For example, graphene, carbon nanotubes, graphite and diamond are all made of carbon atoms, but have very different properties due to their different structure and dimensionality.

“But imagine if you were also able to change all of these properties by adding magnetism,” said first author Dr Matthew Coak, who is jointly based at Cambridge’s Cavendish Laboratory and the University of Warwick. “A material which could be mechanically flexible and form a new kind of circuit to store information and perform computation. This is why these materials are so interesting, and because they drastically change their properties when put under pressure so we can control their behaviour.”

In a previous study by Sebastian Haines of Cambridge’s Cavendish Laboratory and the Department of Earth Sciences, researchers established that the material becomes a metal at high pressure, and outlined how the crystal structure and arrangement of atoms in the layers of this 2D material change through the transition.

“The missing piece has remained however, the magnetism,” said Coak. “With no experimental techniques able to probe the signatures of magnetism in this material at pressures this high, our international team had to develop and test our own new techniques to make it possible.”

The researchers used new techniques to measure the magnetic structure up to record-breaking high pressures, using specially designed diamond anvils and neutrons to act as the probe of magnetism. They were then able to follow the evolution of the magnetism into the metallic state.

“To our surprise, we found that the magnetism survives and is in some ways strengthened,” said co-author Dr Siddharth Saxena, group leader at the Cavendish Laboratory. “This is unexpected, as the newly-freely-roaming electrons in a newly conducting material can no longer be locked to their parent iron atoms, generating magnetic moments there – unless the conduction is coming from an unexpected source.”

In their previous paper, the researchers showed these electrons were ‘frozen’ in a sense. But when they made them flow or move, they started interacting more and more. The magnetism survives, but gets modified into new forms, giving rise to new quantum properties in a new type of magnetic metal.

How a material behaves, whether conductor or insulator, is mostly based on how the electrons, or charge, move around. However, the ‘spin’ of the electrons has been shown to be the source of magnetism. Spin makes electrons behave a bit like tiny bar magnets and point a certain way. Magnetism from the arrangement of electron spins is used in most memory devices: harnessing and controlling it is important for developing new technologies such as spintronics, which could transform the way in which computers process information.

“The combination of the two, the charge and the spin, is key to how this material behaves,” said co-author Dr David Jarvis from the Institut Laue-Langevin, France, who carried out this work as the basis of his PhD studies at the Cavendish Laboratory. “Finding this sort of quantum multi-functionality is another leap forward in the study of these materials.”

“We don’t know exactly what’s happening at the quantum level, but at the same time, we can manipulate it,” said Saxena. “It’s like those famous ‘unknown unknowns’: we’ve opened up a new door to properties of quantum information, but we don’t yet know what those properties might be.”

There are more potential chemical compounds to synthesise than could ever be fully explored and characterised. But by carefully selecting and tuning materials with special properties, it is possible to show the way towards the creation of compounds and systems, but without having to apply huge amounts of pressure.

Additionally, gaining fundamental understanding of phenomena such as low-dimensional magnetism and superconductivity allows researchers to make the next leaps in materials science and engineering, with particular potential in energy efficiency, generation and storage.

As for the case of magnetic graphene, the researchers next plan to continue the search for superconductivity within this unique material. “Now that we have some idea what happens to this material at high pressure, we can make some predictions about what might happen if we try to tune its properties through adding free electrons by compressing it further,” said Coak.

“The thing we’re chasing is superconductivity,” said Saxena. “If we can find a type of superconductivity that’s related to magnetism in a two-dimensional material, it could give us a shot at solving a problem that’s gone back decades.”

The citation and link to the paper are at the end of this blog posting.

Aalto University’s valleytronics

Further north in Finland, researchers at Aalto University make some advances applicable to the field of valleytronics, from a February 5, 2021 Aalto University press release (also on EurekAlert but published February 8, 2021),

Electrons in materials have a property known as ‘spin’, which is responsible for a variety of properties, the most well-known of which is magnetism. Permanent magnets, like the ones used for refrigerator doors, have all the spins in their electrons aligned in the same direction. Scientists refer to this behaviour as ferromagnetism, and the research field of trying to manipulate spin as spintronics.

Down in the quantum world, spins can arrange in more exotic ways, giving rise to frustrated states and entangled magnets. Interestingly, a property similar to spin, known as “the valley,” appears in graphene materials. This unique feature has given rise to the field of valleytronics, which aims to exploit the valley property for emergent physics and information processing, very much like spintronics relies on pure spin physics.

‘Valleytronics would potentially allow encoding information in the quantum valley degree of freedom, similar to how electronics do it with charge and spintronics with the spin,’ explains Professor Jose Lado, from Aalto’s Department of Applied Physics, and one of the authors of the work. ‘What’s more, valleytronic devices would offer a dramatic increase in processing speeds in comparison with electronics, and much higher stability towards magnetic field noise in comparison with spintronic devices.’

Structures made of rotated, ultra-thin materials provide a rich solid-state platform for designing novel devices. In particular, slightly twisted graphene layers have recently been shown to have exciting unconventional properties that can ultimately lead to a new family of materials for quantum technologies. These unconventional states, which are already being explored, depend on electrical charge or spin. The open question is whether the valley can also lead to its own family of exciting states.

Making materials for valleytronics

For this goal, it turns out that conventional ferromagnets play a vital role, pushing graphene to the realms of valley physics. In a recent work, Ph.D. student Tobias Wolf, together with Profs. Oded Zilberberg and Gianni Blatter at ETH Zurich, and Prof. Jose Lado at Aalto University, showed a new direction for correlated physics in magnetic van der Waals materials.

The team showed that sandwiching two slightly rotated layers of graphene between a ferromagnetic insulator provides a unique setting for new electronic states. The combination of ferromagnets, graphene’s twist engineering, and relativistic effects forces the “valley” property to dominate the electrons’ behaviour in the material. In particular, the researchers showed how these valley-only states can be tuned electrically, providing a materials platform in which valley-only states can be generated. Building on top of the recent breakthrough in spintronics and van der Waals materials, valley physics in magnetic twisted van der Waals multilayers opens the door to the new realm of correlated twisted valleytronics.

‘Demonstrating these states represents the starting point towards new exotic entangled valley states,’ said Professor Lado. ‘Ultimately, engineering these valley states can allow realizing quantum entangled valley liquids and fractional quantum valley Hall states. These two exotic states of matter have not been found in nature yet, and would open exciting possibilities towards a potentially new graphene-based platform for topological quantum computing.’

Citations and links

Here’s a link to and a citation for the University of Cambridge research,

Emergent Magnetic Phases in Pressure-Tuned van der Waals Antiferromagnet FePS3 by Matthew J. Coak, David M. Jarvis, Hayrullo Hamidov, Andrew R. Wildes, Joseph A. M. Paddison, Cheng Liu, Charles R. S. Haines, Ngoc T. Dang, Sergey E. Kichanov, Boris N. Savenko, Sungmin Lee, Marie Kratochvílová, Stefan Klotz, Thomas C. Hansen, Denis P. Kozlenko, Je-Geun Park, and Siddharth S. Saxena. Phys. Rev. X 11, 011024 DOI: https://doi.org/10.1103/PhysRevX.11.011024 Published 5 February 2021

This article appears to be open access.

Here’s a link to and a citation for the Aalto University research,

Spontaneous Valley Spirals in Magnetically Encapsulated Twisted Bilayer Graphene by Tobias M. R. Wolf, Oded Zilberberg, Gianni Blatter, and Jose L. Lado. Phys. Rev. Lett. 126, 056803 DOI: https://doi.org/10.1103/PhysRevLett.126.056803 Published 4 February 2021

This paper is behind a paywall.

The need for Wi-Fi speed

Yes, it’s a ‘Top Gun’ movie quote (1986) or, more accurately, a paraphrasing of Tom Cruise’s line “I feel the need for speed.” I understand there’s a sequel, which is due to arrive in movie theatres or elsewhere sometime this decade.

Where wireless and Wi-Fi are concerned, I think there is a dog/poodle situation. ‘Dog’ is a general description where ‘poodle’ is a specific description. All poodles (specific) are dogs (general) but not all dogs are poodles. So, wireless is a general description and Wi-Fi is a specific type of wireless communication. All Wi-Fi is wireless but not all wireless is Wi-Fi. That said, on to the research.

Given what seems to be an insatiable desire for speed in the wireless world, the quote seems quite à propos in relation to the latest work on quantum tunneling and its impact on Wi-Fi speed from the Moscow Institute of Physics and Technology. From a February 3, 2021 news item on phys.org,

Scientists from MIPT (Moscow Institute of Physics and Technology), Moscow Pedagogical State University and the University of Manchester have created a highly sensitive terahertz detector based on the effect of quantum-mechanical tunneling in graphene. The sensitivity of the device is already superior to commercially available analogs based on semiconductors and superconductors, which opens up prospects for applications of the graphene detector in wireless communications, security systems, radio astronomy, and medical diagnostics. The research results are published in Nature Communications.

A February 3, 2021 MIPT press release (also on EurekAlert), which originated the news item, provides more technical detail about the work and its relation to Wi-Fi,

Information transfer in wireless networks is based on transformation of a high-frequency continuous electromagnetic wave into a discrete sequence of bits. This technique is known as signal modulation. To transfer the bits faster, one has to increase the modulation frequency. However, this requires synchronous increase in carrier frequency. A common FM-radio transmits at frequencies of around a hundred megahertz, a Wi-Fi receiver uses signals of roughly five gigahertz frequency, while the 5G mobile networks can transmit up to 20 gigahertz signals. This is far from the limit, and further increase in carrier frequency admits a proportional increase in data transfer rates. Unfortunately, picking up signals with hundred gigahertz frequencies and higher is an increasingly challenging problem.

A typical receiver used in wireless communications consists of a transistor-based amplifier of weak signals and a demodulator that rectifies the sequence of bits from the modulated signal. This scheme originated in the age of radio and television, and becomes inefficient at frequencies of hundreds of gigahertz desirable for mobile systems. The fact is that most of the existing transistors aren’t fast enough to recharge at such a high frequency.

An evolutionary way to solve this problem is just to increase the maximum operation frequency of a transistor. Most specialists in the area of nanoelectronics work hard in this direction. A revolutionary way to solve the problem was theoretically proposed in the beginning of 1990’s by physicists Michael Dyakonov and Michael Shur, and realized, among others, by the group of authors in 2018. It implies abandoning active amplification by transistor, and abandoning a separate demodulator. What’s left in the circuit is a single transistor, but its role is now different. It transforms a modulated signal into bit sequence or voice signal by itself, due to non-linear relation between its current and voltage drop.

In the present work, the authors have proved that the detection of a terahertz signal is very efficient in the so-called tunneling field-effect transistor. To understand its work, one can just recall the principle of an electromechanical relay, where the passage of current through control contacts leads to a mechanical connection between two conductors and, hence, to the emergence of current. In a tunneling transistor, applying voltage to the control contact (termed the “gate”) leads to alignment of the energy levels of the source and channel. This also leads to the flow of current. A distinctive feature of a tunneling transistor is its very strong sensitivity to control voltage. Even a small “detuning” of energy levels is enough to interrupt the subtle process of quantum mechanical tunneling. Similarly, a small voltage at the control gate is able to “connect” the levels and initiate the tunneling current.

“The idea of a strong reaction of a tunneling transistor to low voltages is known for about fifteen years,” says Dr. Dmitry Svintsov, one of the authors of the study, head of the laboratory for optoelectronics of two-dimensional materials at the MIPT center for photonics and 2D materials. “But it’s been known only in the community of low-power electronics. No one realized before us that the same property of a tunneling transistor can be applied in the technology of terahertz detectors. Georgy Alymov (co-author of the study) and I were lucky to work in both areas. We realized then: if the transistor is opened and closed at a low power of the control signal, then it should also be good in picking up weak signals from the ambient surrounding.”

The created device is based on bilayer graphene, a unique material in which the position of energy levels (more strictly, the band structure) can be controlled using an electric voltage. This allowed the authors to switch between classical transport and quantum tunneling transport within a single device, with just a change in the polarities of the voltage at the control contacts. This possibility is of extreme importance for an accurate comparison of the detecting ability of a classical and quantum tunneling transistor.

The experiment showed that the sensitivity of the device in the tunnelling mode is a few orders of magnitude higher than that in the classical transport mode. The minimum signal distinguishable by the detector against the noisy background already competes with that of commercially available superconducting and semiconductor bolometers. However, this is not the limit – the sensitivity of the detector can be further increased in “cleaner” devices with a low concentration of residual impurities. The developed detection theory, tested by the experiment, shows that the sensitivity of the “optimal” detector can be a hundred times higher.

“The current characteristics give rise to great hopes for the creation of fast and sensitive detectors for wireless communications,” says the author of the work, Dr. Denis Bandurin. “And this area is not limited to graphene and is not limited to tunnel transistors. We expect that, with the same success, a remarkable detector can be created, for example, based on an electrically controlled phase transition. Graphene turned out to be just a good launching pad here, just a door, behind which is a whole world of exciting new research.”

The results presented in this paper are an example of a successful collaboration between several research groups. The authors note that it is this format of work that allows them to obtain world-class scientific results. For example, earlier, the same team of scientists demonstrated how waves in the electron sea of graphene can contribute to the development of terahertz technology. “In an era of rapidly evolving technology, it is becoming increasingly difficult to achieve competitive results,” comments Dr. Georgy Fedorov, deputy head of the nanocarbon materials laboratory, MIPT. “Only by combining the efforts and expertise of several groups can we successfully realize the most difficult tasks and achieve the most ambitious goals, which we will continue to do.”
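The key physical point in the excerpt above is that a sufficiently non-linear current-voltage relation can do the demodulation all by itself. As a purely illustrative Python sketch (an idealized square-law detector with arbitrary numbers, not the physics of the actual graphene tunneling transistor), here’s how squaring an amplitude-modulated carrier and low-pass filtering the result recovers the bit sequence,

# Illustrative only: a square-law (quadratic) current-voltage characteristic
# rectifies an amplitude-modulated carrier, so the slow bit pattern can be
# recovered by low-pass filtering the current. All numbers are arbitrary.

import math

CARRIER_HZ = 100e9          # fast carrier (a stand-in for sub-terahertz signals)
BIT_RATE = 1e9              # much slower modulation (data) rate
SAMPLES_PER_BIT = 2000
bits = [1, 0, 1, 1, 0]

dt = 1.0 / (BIT_RATE * SAMPLES_PER_BIT)
current = []
for n, bit in enumerate(bits):
    for k in range(SAMPLES_PER_BIT):
        t = (n * SAMPLES_PER_BIT + k) * dt
        v = (0.2 + 0.8 * bit) * math.cos(2 * math.pi * CARRIER_HZ * t)
        current.append(v * v)            # square-law detector: current ~ voltage squared

# Crude low-pass filter: average the detector current over each bit period.
recovered = []
for n in range(len(bits)):
    chunk = current[n * SAMPLES_PER_BIT:(n + 1) * SAMPLES_PER_BIT]
    recovered.append(1 if sum(chunk) / len(chunk) > 0.25 else 0)

print("sent:     ", bits)
print("recovered:", recovered)

The tunneling transistor’s appeal, as the release explains, is that its current-voltage curve responds far more sharply to small voltages than this toy quadratic, so far weaker signals can be rectified above the noise.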

Here’s a link to and a citation for the latest paper,

Tunnel field-effect transistors for sensitive terahertz detection by I. Gayduchenko, S. G. Xu, G. Alymov, M. Moskotin, I. Tretyakov, T. Taniguchi, K. Watanabe, G. Goltsman, A. K. Geim, G. Fedorov, D. Svintsov & D. A. Bandurin. Nature Communications volume 12, Article number: 543 (2021) DOI: https://doi.org/10.1038/s41467-020-20721-z Published: 22 January 2021

This paper is open access.

One last comment: I’m assuming, since the University of Manchester is mentioned, that A. K. Geim is Sir Andre K. Geim (you can look him up here if you’re not familiar with his role in the graphene research community).

Baby steps toward a quantum brain

My first quantum brain posting! (Well, I do have something that seems loosely related in a July 5, 2017 posting about quantum entanglement and machine learning and more. Also, I have lots of items on brainlike or neuromorphic computing.)

Getting to the latest news, a February 1, 2021 news item on Nanowerk announces research into new intelligent materials that could lead to a ‘quantum brain’,

An intelligent material that learns by physically changing itself, similar to how the human brain works, could be the foundation of a completely new generation of computers. Radboud [university in the Netherlands] physicists working toward this so-called “quantum brain” have made an important step. They have demonstrated that they can pattern and interconnect a network of single atoms, and mimic the autonomous behaviour of neurons and synapses in a brain.

If I understand the difference between the work in 2017 and this latest work, it’s that in 2017 they were looking at quantum states and their possible effect on machine learning, while this work in 2021 is focused on a new material with some special characteristics.

A February 1, 2021 Radboud University press release (also on EurekAlert), which originated the news item, provides information on the case supporting the need for a quantum brain and some technical details about how it might be achieved,

Considering the growing global demand for computing capacity, more and more data centres are necessary, all of which leave an ever-expanding energy footprint. ‘It is clear that we have to find new strategies to store and process information in an energy efficient way’, says project leader Alexander Khajetoorians, Professor of Scanning Probe Microscopy at Radboud University.

‘This requires not only improvements to technology, but also fundamental research in game changing approaches. Our new idea of building a ‘quantum brain’ based on the quantum properties of materials could be the basis for a future solution for applications in artificial intelligence.’

Quantum brain

For artificial intelligence to work, a computer needs to be able to recognise patterns in the world and learn new ones. Today’s computers do this via machine learning software that controls the storage and processing of information on a separate computer hard drive. ‘Until now, this technology, which is based on a century-old paradigm, worked sufficiently. However, in the end, it is a very energy-inefficient process’, says co-author Bert Kappen, Professor of Neural networks and machine intelligence.

The physicists at Radboud University researched whether a piece of hardware could do the same, without the need of software. They discovered that by constructing a network of cobalt atoms on black phosphorus they were able to build a material that stores and processes information in similar ways to the brain, and, even more surprisingly, adapts itself.

Self-adapting atoms

In 2018, Khajetoorians and collaborators showed that it is possible to store information in the state of a single cobalt atom. By applying a voltage to the atom, they could induce "firing", where the atom shuttles between a value of 0 and 1 randomly, much like a single neuron. They have now discovered a way to create tailored ensembles of these atoms, and found that the firing behaviour of these ensembles mimics the behaviour of a brain-like model used in artificial intelligence.
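
A loose illustration of my own, not the authors' model or code: the "firing" described here is essentially a stochastic two-state switch whose probability of sitting in state 1 depends on the applied voltage. The toy below samples such a unit with a sigmoid probability, which is the basic ingredient of the Boltzmann-machine-style models used in this kind of brain-inspired computing.

```python
import math
import random

# Toy stochastic "firing" unit: it hops between 0 and 1, and the probability of
# being found in state 1 is set by an applied bias through a sigmoid.
# Generic Boltzmann-machine-style sketch, not the Radboud group's model.

def p_state_one(bias, temperature=1.0):
    """Probability of finding the unit in state 1 for a given bias."""
    return 1.0 / (1.0 + math.exp(-bias / temperature))

def fraction_in_state_one(bias, n_steps=10_000, temperature=1.0, seed=0):
    rng = random.Random(seed)
    hits = sum(rng.random() < p_state_one(bias, temperature) for _ in range(n_steps))
    return hits / n_steps

for bias in (-2.0, 0.0, 2.0):  # hypothetical "voltages"
    print(f"bias {bias:+.1f} -> fraction of time in state 1: {fraction_in_state_one(bias):.2f}")
```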

In addition to observing the behaviour of spiking neurons, they were able to create the smallest synapse known to date. Unknowingly, they observed that these ensembles had an inherent adaptive property: their synapses changed their behaviour depending on what input they “saw”. ‘When stimulating the material over a longer period of time with a certain voltage, we were very surprised to see that the synapses actually changed. The material adapted its reaction based on the external stimuli that it received. It learned by itself’, says Khajetoorians.
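
Again, a toy of my own rather than the published model: one way to picture a self-adapting synapse is a coupling weight that drifts toward the co-activity it is exposed to, so that prolonged stimulation gradually changes how the unit responds.

```python
import random

# Hebbian-like toy: the "synaptic weight" drifts toward the running average of
# the pre- and post-synaptic co-activity it sees. Purely illustrative.

def adapt(weight, pre, post, rate=0.01):
    """Nudge the weight toward the observed co-activity."""
    return weight + rate * (pre * post - weight)

rng = random.Random(1)
weight = 0.0
for _ in range(2000):
    pre = 1 if rng.random() < 0.8 else 0   # stimulus applied most of the time
    post = 1 if rng.random() < 0.7 else 0  # unit fires often under stimulation
    weight = adapt(weight, pre, post)

print(f"weight after prolonged stimulation: {weight:.2f}")  # settles near 0.8 * 0.7 = 0.56
```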

Exploring and developing the quantum brain

The researchers now plan to scale up the system and build a larger network of atoms, as well as dive into new “quantum” materials that can be used. Also, they need to understand why the atom network behaves as it does. ‘We are at a state where we can start to relate fundamental physics to concepts in biology, like memory and learning’, says Khajetoorians.

'If we could eventually construct a real machine from this material, we would be able to build self-learning computing devices that are more energy efficient and smaller than today's computers. Yet, only when we understand how it works – and that is still a mystery – will we be able to tune its behaviour and start developing it into a technology. It is a very exciting time.'

Here is a charming image illustrating the reasons for a quantum brain,

Courtesy: Radboud University

Here’s a link to and a citation for the paper,

An atomic Boltzmann machine capable of self-adaption by Brian Kiraly, Elze J. Knol, Werner M. J. van Weerdenburg, Hilbert J. Kappen & Alexander A. Khajetoorians. Nature Nanotechnology (2021) DOI: https://doi.org/10.1038/s41565-020-00838-4 Published: 01 February 2021

This paper is behind a paywall.

Printing paper loudspeakers

When I was working on my undergraduate communications degree, we spent a fair chunk of time discussing the printed word; this introduction (below in the excerpt) brings back memories. I am going to start with an excerpt from the study (link and citation to follow at the end of this post) before moving on to the news item and press release. It's a good introduction (Note: Links have been removed),

For a long time, paper has been used as storing medium for written information only [emphasis mine]. In combination with the development of printing technologies, it became one of the most relevant materials as information could be reproduced multiple times and brought to millions of people in a simple, cheap, and fast way. However, with the digital revolution the end of paper has been forecasted.

However, paper still has its big advantages. The yearly production is still huge with over 400 million tons worldwide[1] for a wide application range going much beyond conventional books, newspapers, packages, or sanitary products. It is a natural light‐weight, flexible, recyclable, multi‐functional material making it an ideal candidate as part of novel electronic devices, especially based on printed electronics.[2] During the last decade, a wide variety of electronic functionalities have been demonstrated with paper as the common substrate platform. It has been used as basis for organic circuits,[3] microwave and digital electronics,[4] sensors,[5-7] actuators,[8, 9] and many more.

My first posting about this work from Chemnitz University of Technology with paper, loudspeakers, and printed electronics was a May 4, 2012 posting.

Enough of that trip down memory lane. A January 26, 2021 news item on Nanowerk announces research into printing loudspeakers on paper in a roll-to-roll process,

If the Institute for Print and Media Technology at Chemnitz University of Technology [Germany] has its way, many loudspeakers of the future will not only be as thin as paper, but will also sound impressive. This is a reality in the laboratories of the Chemnitz researchers, who back in 2015 developed the multiple award-winning T-Book – a large-format illustrated book equipped with printed electronics. If you turn a page, it begins to sound through a speaker invisibly located inside the sheet of paper.

“The T-Book was and is a milestone in the development of printed electronics, but development is continuing all the time,” says Prof. Dr. Arved C. Hübler, under whose leadership this technology trend, which is becoming increasingly important worldwide, has been driven forward for more than 20 years.

A January 26, 2021 Chemnitz University of Technology press release by Mario Steinebach/Translator: Chelsea Burris, which originated the news item, delves further into the topic,

From single-sheet production to roll-to-roll printing

Five years ago, the sonorous paper loudspeakers from Chemnitz were still manufactured in a semi-automatic single-sheet production process. In this process, ordinary paper or foils are printed with two layers of a conductive organic polymer as electrodes. A piezoelectric layer is sandwiched between them as the active element, which causes the paper or film to vibrate. Loud and clear sound is produced by air displacement. The two sides of the speaker paper can be printed in color. Since this was only possible in individual sheets in limited formats, the efficiency of this relatively slow manufacturing process was very low. That's why researchers at the Institute of Print and Media Technology have been looking for a new way towards cost-effective mass production since May 2017.
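
A rough aside from me on the physics of that sandwich, with assumed numbers rather than the project's: in the thickness mode the strain in the piezoelectric layer is S = d33 · E, where E is the electric field across the layer, so the thickness change works out to d33 · V. The vibration itself is tiny, on the order of nanometres; the audible output comes from that motion driving the comparatively large, light paper membrane.

```python
# Back-of-the-envelope estimate of piezoelectric actuation in a printed layer.
# All values are assumptions for illustration, not figures from the T-Paper project.

d33 = 30e-12      # m/V, assumed magnitude for a PVDF-type piezoelectric polymer
thickness = 5e-6  # m, assumed printed layer thickness (5 micrometres)
voltage = 100.0   # V, assumed drive amplitude

field = voltage / thickness        # electric field across the layer, V/m
strain = d33 * field               # dimensionless strain, S = d33 * E
displacement = strain * thickness  # thickness change, equal to d33 * voltage

print(f"field: {field:.1e} V/m")
print(f"strain: {strain:.1e}")
print(f"thickness change: {displacement * 1e9:.1f} nm")
```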

The aim of their latest project, roll-to-roll printed speaker paper (T-Paper for short), was therefore to convert sheet production into roll production. “Researchers from the fields of print media technology, chemistry, physics, acoustics, electrical engineering, and economics from six nations developed a continuous, highly productive, and reliable roll production of loudspeaker webs,” reports project manager Georg C. Schmidt. Not only did they use the roll-to-roll (R2R) printing process for this, but they also developed inline technologies for other process steps, such as the lamination of functional layers. “This allows electronics to be embedded in the paper – invisibly and protected,” says Hübler. In addition, he says, inline polarization of piezoelectric polymer layers has been achieved for the first time and complete inline process monitoring of the printed functional layers is possible. The final project results were published in the renowned journal Advanced Materials in January 2021.

Long and lightweight paper loudspeaker webs for museums, the advertising industry, and Industry 4.0

The potential of loudspeaker paper was extended to other areas of application in the T-Paper project. For example, meter-long loudspeaker installations can now be manufactured in web form or as a circle (T-RING). “In our T-RING prototype, an almost four-meter-long track with 56 individual loudspeakers was connected to form seven segments and shaped into a circle, making a 360° surround sound installation possible,” says Schmidt. The speaker track, including printed circuitry, weighs just 150 grams and consists of 90 percent conventional paper that can be printed in color on both sides. “This means that low-cost infotainment solutions are now possible in museums, at trade shows and in the advertising industry, for example. In public buildings, for example, very homogeneous sound reinforcement of long stretches such as corridors is possible. But the process technology itself could also become interesting for other areas, such as the production of inline measurement systems for Industry 4.0,” says the project manager, looking to the future.
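
Taking the quoted "almost four metres" as exactly 4 m for arithmetic's sake, the press-release figures imply roughly the following:

```python
import math

# Rough arithmetic on the T-RING figures quoted in the press release,
# treating "almost four metres" as 4.0 m for simplicity.

web_length_m = 4.0
speakers = 56
segments = 7
mass_g = 150.0

print(f"speakers per segment: {speakers // segments}")                           # 8
print(f"ring diameter: {web_length_m / math.pi:.2f} m")                          # ~1.27 m
print(f"speaker spacing along the web: {web_length_m / speakers * 100:.1f} cm")  # ~7.1 cm
print(f"mass per metre of web: {mass_g / web_length_m:.0f} g/m")                 # ~38 g/m
```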

The T-Paper project was funded by the German Federal Ministry of Education and Research from 2017 to 2020 with 1.37 million euros as part of the "Validation of the technological and societal innovation potential of scientific research – VIP+" funding measure.

Here’s a link to and a citation for the paper,

Paper‐Embedded Roll‐to‐Roll Mass Printed Piezoelectric Transducers by Georg C. Schmidt, Pramul M. Panicker, Xunlin Qiu, Aravindan J. Benjamin, Ricardo A. Quintana Soler, Issac Wils, Arved C. Hübler. Advanced Materials DOI: https://doi.org/10.1002/adma.202006437 First published: 18 January 2021

This paper is open access.

For anyone curious about the T-Paper project, you can find it here.