Tag Archives: Massachusetts Institute of Technology (MIT)

Answer to why Roman concrete was so durable

Roman concrete lasts for millennia while our ‘modern’ concrete doesn’t, and that’s what makes the Roman stuff so fascinating. There’s a very good January 6, 2023 Massachusetts Institute of Technology (MIT) news release (also on EurekAlert) which may provide an answer to the mystery of this material’s longevity,

The ancient Romans were masters of engineering, constructing vast networks of roads, aqueducts, ports, and massive buildings, whose remains have survived for two millennia. Many of these structures were built with concrete: Rome’s famed Pantheon, which has the world’s largest unreinforced concrete dome and was dedicated in A.D. 128, is still intact, and some ancient Roman aqueducts still deliver water to Rome today. Meanwhile, many modern concrete structures have crumbled after a few decades.

Researchers have spent decades trying to figure out the secret of this ultradurable ancient construction material, particularly in structures that endured especially harsh conditions, such as docks, sewers, and seawalls, or those constructed in seismically active locations.

Now, a team of investigators from MIT, Harvard University, and laboratories in Italy and Switzerland, has made progress in this field, discovering ancient concrete-manufacturing strategies that incorporated several key self-healing functionalities. The findings are published in the journal Science Advances, in a paper by MIT professor of civil and environmental engineering Admir Masic, former doctoral student Linda Seymour, and four others.

For many years, researchers have assumed that the key to the ancient concrete’s durability was based on one ingredient: pozzolanic material such as volcanic ash from the area of Pozzuoli, on the Bay of Naples. [emphasis mine] This specific kind of ash was even shipped all across the vast Roman empire to be used in construction, and was described as a key ingredient for concrete in accounts by architects and historians at the time.

Under closer examination, these ancient samples also contain small, distinctive, millimeter-scale bright white mineral features, which have been long recognized as a ubiquitous component of Roman concretes. These white chunks, often referred to as “lime clasts,” originate from lime, another key component of the ancient concrete mix. “Ever since I first began working with ancient Roman concrete, I’ve always been fascinated by these features,” says Masic. “These are not found in modern concrete formulations, so why are they present in these ancient materials?”

Previously disregarded as merely evidence of sloppy mixing practices or poor-quality raw materials, these tiny lime clasts, the new study suggests, gave the concrete a previously unrecognized self-healing capability. [emphasis mine] “The idea that the presence of these lime clasts was simply attributed to low quality control always bothered me,” says Masic. “If the Romans put so much effort into making an outstanding construction material, following all of the detailed recipes that had been optimized over the course of many centuries, why would they put so little effort into ensuring the production of a well-mixed final product? There has to be more to this story.”

Upon further characterization of these lime clasts, using high-resolution multiscale imaging and chemical mapping techniques pioneered in Masic’s research lab, the researchers gained new insights into the potential functionality of these lime clasts.

Historically, it had been assumed that when lime was incorporated into Roman concrete, it was first combined with water to form a highly reactive paste-like material, in a process known as slaking. But this process alone could not account for the presence of the lime clasts. Masic wondered: “Was it possible that the Romans might have actually directly used lime in its more reactive form, known as quicklime?”

Studying samples of this ancient concrete, he and his team determined that the white inclusions were, indeed, made out of various forms of calcium carbonate. And spectroscopic examination provided clues that these had been formed at extreme temperatures, as would be expected from the exothermic reaction produced by using quicklime instead of, or in addition to, the slaked lime in the mixture. Hot mixing, the team has now concluded, was actually the key to the super-durable nature.
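For the chemically inclined, the slaking reaction behind hot mixing is textbook lime chemistry (this equation is my addition, not part of the news release). Mixing quicklime with water is strongly exothermic, which is what heats the fresh concrete:

```latex
% Slaking of quicklime releases heat (roughly -63 kJ per mole of CaO)
\mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2} + \text{heat}
```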

“The benefits of hot mixing are twofold,” Masic says. “First, when the overall concrete is heated to high temperatures, it allows chemistries that are not possible if you only used slaked lime, producing high-temperature-associated compounds that would not otherwise form. Second, this increased temperature significantly reduces curing and setting times since all the reactions are accelerated, allowing for much faster construction.”

During the hot mixing process, the lime clasts develop a characteristically brittle nanoparticulate architecture, creating an easily fractured and reactive calcium source, which, as the team proposed, could provide a critical self-healing functionality. As soon as tiny cracks start to form within the concrete, they can preferentially travel through the high-surface-area lime clasts. This material can then react with water, creating a calcium-saturated solution, which can recrystallize as calcium carbonate and quickly fill the crack, or react with pozzolanic materials to further strengthen the composite material. These reactions take place spontaneously and therefore automatically heal the cracks before they spread. Previous support for this hypothesis was found through the examination of other Roman concrete samples that exhibited calcite-filled cracks.
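The healing step the team proposes is, at bottom, ordinary carbonation chemistry; in rough outline (my notation, not the paper’s), calcium dissolved out of a fractured clast recrystallizes as calcium carbonate in the crack:

```latex
% Water dissolves calcium from the clast, then carbonation fills the crack
\mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2}
\qquad
\mathrm{Ca(OH)_2} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3} + \mathrm{H_2O}
```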

To prove that this was indeed the mechanism responsible for the durability of the Roman concrete, the team produced samples of hot-mixed concrete that incorporated both ancient and modern formulations, deliberately cracked them, and then ran water through the cracks. Sure enough: Within two weeks the cracks had completely healed and the water could no longer flow. An identical chunk of concrete made without quicklime never healed, and the water just kept flowing through the sample. As a result of these successful tests, the team is working to commercialize this modified cement material.

“It’s exciting to think about how these more durable concrete formulations could expand not only the service life of these materials, but also how it could improve the durability of 3D-printed concrete formulations,” says Masic.

Through the extended functional lifespan and the development of lighter-weight concrete forms, he hopes that these efforts could help reduce the environmental impact of cement production, which currently accounts for about 8 percent of global greenhouse gas emissions. Along with other new formulations, such as concrete that can actually absorb carbon dioxide from the air, another current research focus of the Masic lab, these improvements could help to reduce concrete’s global climate impact.

The research team included Janille Maragh at MIT, Paolo Sabatini at DMAT in Italy, Michel Di Tommaso at the Istituto Meccanica dei Materiali in Switzerland, and James Weaver at the Wyss Institute for Biologically Inspired Engineering at Harvard University. The work was carried out with the assistance of the archeological museum of Priverno, Italy.

I remember the excitement over volcanic ash (it’s mentioned in my June 3, 2016 posting titled: “Making better concrete by looking to nature for inspiration” and my February 17, 2021 posting “Nuclear power plants take a cue from Roman concrete”). As for something being ignored as unimportant or dismissed as the result of poor practice when it’s not, that’s one of my favourite kinds of science story.

For the really curious, Jennifer Ouellette’s January 6, 2023 article (Ancient Roman concrete could self-heal thanks to “hot mixing” with quicklime) for Ars Technica provides a little more detail.

Here’s a link to and a citation for the latest paper,

Hot mixing: Mechanistic insights into the durability of ancient Roman concrete by Linda M. Seymour, Janille Maragh, Paolo Sabatini, Michel Di Tommaso, James C. Weaver, and Admir Masic. Science Advances, 6 Jan 2023, Vol. 9, Issue 1. DOI: 10.1126/sciadv.add1602

This paper is open access.

One last note: DMAT is listed as Paolo Sabatini’s home institution. It is a company Sabatini co-founded and leads as CEO (chief executive officer). DMAT has this on its About page, “Our mission is to develop breakthrough innovations in construction materials at a global scale. DMAT is at the helm of concrete’s innovation.”

Graphene goes to the moon

The people behind the European Union’s Graphene Flagship programme (if you need a brief explanation, keep scrolling down to the “What is the Graphene Flagship?” subhead) and the United Arab Emirates have got to be very excited about the announcement made in a November 29, 2022 news item on Nanowerk (Note: Canadians, too, have reason to be excited: on April 3, 2023, Canadian astronaut Jeremy Hansen was named to the crew of NASA’s [US National Aeronautics and Space Administration] Artemis II mission to orbit the moon, as reported in an April 3, 2023 CBC news online article by Nicole Mortillaro),

Graphene Flagship Partners University of Cambridge (UK) and Université Libre de Bruxelles (ULB, Belgium) paired up with the Mohammed bin Rashid Space Centre (MBRSC, United Arab Emirates), and the European Space Agency (ESA) to test graphene on the Moon. This joint effort sees the involvement of many international partners, such as Airbus Defense and Space, Khalifa University, Massachusetts Institute of Technology, Technische Universität Dortmund, University of Oslo, and Tohoku University.

The Rashid rover is planned to be launched on 30 November 2022 [Note: the launch appears to have occurred on December 11, 2022; keep scrolling for more about that] from Cape Canaveral in Florida and will land on a geologically rich and, as yet, only remotely explored area on the Moon’s nearside – the side that always faces the Earth. During one lunar day, equivalent to approximately 14 days on Earth, Rashid will move on the lunar surface investigating interesting geological features.

A November 29, 2022 Graphene Flagship press release (also on EurekAlert), which originated the news item, provides more details,

The Rashid rover wheels will be used for repeated exposure of different materials to the lunar surface. As part of this Material Adhesion and abrasion Detection experiment, graphene-based composites on the rover wheels will be used to understand if they can protect spacecraft against the harsh conditions on the Moon, and especially against regolith (also known as ‘lunar dust’).

Regolith is made of extremely sharp, tiny and sticky grains and, since the Apollo missions, it has been one of the biggest challenges lunar missions have had to overcome. Regolith is responsible for mechanical and electrostatic damage to equipment, and is therefore also hazardous for astronauts. It clogs spacesuits’ joints, obscures visors, erodes spacesuits and protective layers, and is a potential health hazard.  

University of Cambridge researchers from the Cambridge Graphene Centre produced graphene/polyether ether ketone (PEEK) composites. The interaction of these composites with the Moon regolith (soil) will be investigated. The samples will be monitored via an optical camera, which will record footage throughout the mission. ULB researchers will gather information during the mission and suggest adjustments to the path and orientation of the rover. Images obtained will be used to study the effects of the Moon environment and the regolith abrasive stresses on the samples.

This moon mission comes soon after the ESA announcement of the 2022 class of astronauts, including the Graphene Flagship’s own Meganne Christian, a researcher at Graphene Flagship Partner the Institute of Microelectronics and Microsystems (IMM) at the National Research Council of Italy.

“Being able to follow the Moon rover’s progress in real time will enable us to track how the lunar environment impacts various types of graphene-polymer composites, thereby allowing us to infer which of them is most resilient under such conditions. This will enhance our understanding of how graphene-based composites could be used in the construction of future lunar surface vessels,” says Sara Almaeeni, MBRSC science team lead, who designed Rashid’s communication system.

“New materials such as graphene have the potential to be game changers in space exploration. In combination with the resources available on the Moon, advanced materials will enable radiation protection, electronics shielding and mechanical resistance to the harshness of the Moon’s environment. The Rashid rover will be the first opportunity to gather data on the behavior of graphene composites within a lunar environment,” says Carlo Iorio, Graphene Flagship Space Champion, from ULB.

Leading up to the Moon mission, a variety of inks containing graphene and related materials, such as conducting graphene, insulating hexagonal boron nitride and graphene oxide, semiconducting molybdenum disulfide, prepared by the University of Cambridge and ULB, were also tested on the MAterials Science Experiment Rocket 15 (MASER 15) mission, successfully launched on the 23rd of November 2022 from the Esrange Space Center in Sweden. This experiment, named ARLES-2 (Advanced Research on Liquid Evaporation in Space) and supported by the European and UK space agencies (ESA, UKSA), included contributions from Graphene Flagship Partners University of Cambridge (UK), University of Pisa (Italy) and Trinity College Dublin (Ireland), with many international collaborators, including Aix-Marseille University (France), Technische Universität Darmstadt (Germany), York University (Canada), Université de Liège (Belgium), the University of Edinburgh, and Loughborough University.

This experiment will provide new information about the printing of GRM inks in weightless conditions, contributing to the development of new additive manufacturing procedures in space, such as 3D printing. Such procedures are key for space exploration, during which replacement components are often needed and could be manufactured from functional inks.

“Our experiments on graphene and related materials deposition in microgravity pave the way for additive manufacturing in space. The study of the interaction of Moon regolith with graphene composites will address some key challenges brought about by the harsh lunar environment,” says Yarjan Abdul Samad, from the Universities of Cambridge and Khalifa, who prepared the samples and coordinated the interactions with the United Arab Emirates.

“The Graphene Flagship is spearheading the investigation of graphene and related materials (GRMs) for space applications. In November 2022, we had the first member of the Graphene Flagship appointed to the ESA astronaut class. We saw the launch of a sounding rocket to test printing of a variety of GRMs in zero gravity conditions, and the launch of a lunar rover that will test the interaction of graphene-based composites with the Moon surface. Composites, coatings and foams based on GRMs have been at the core of the Graphene Flagship investigations since its beginning. It is thus quite telling that, leading up to the Flagship’s 10th anniversary, these innovative materials are now to be tested on the lunar surface. This is timely, given the ongoing effort to bring astronauts back to the Moon, with the aim of building lunar settlements. When combined with polymers, GRMs can tailor the mechanical, thermal, and electrical properties of their host matrices. These pioneering experiments could pave the way for widespread adoption of GRM-enhanced materials for space exploration,” says Andrea Ferrari, Science and Technology Officer and Chair of the Management Panel of the Graphene Flagship.

Caption: The MASER 15 launch. Credit: John-Charles Dupin

A pioneering graphene work and a first for the Arab World

A December 11, 2022 news item on Alarabiya news (and on CNN) describes the launch, which also marked the Arab world’s first mission to the moon,

The United Arab Emirates’ Rashid Rover – the Arab world’s first mission to the Moon – was launched on Sunday [December 11, 2022], the Mohammed bin Rashid Space Center (MBRSC) announced on its official Twitter account.

The launch came after it was previously postponed for “pre-flight checkouts.”

A SpaceX Falcon 9 rocket carrying the UAE’s Rashid rover successfully took off from Cape Canaveral, Florida.

The Rashid rover – built by Emirati engineers from the UAE’s Mohammed bin Rashid Space Center (MBRSC) – is to be sent to regions of the Moon unexplored by humans.

What is the Graphene Flagship?

In 2013, the Graphene Flagship was chosen as one of two FET (Future and Emerging Technologies) funding projects (the other being the Human Brain Project) each receiving €1 billion to be paid out over 10 years. In effect, it’s a science funding programme specifically focused on research, development, and commercialization of graphene (a two-dimensional [it has length and width but no depth] material made of carbon atoms).

You can find out more about the flagship and about graphene here.

Transforming bacterial cells into living computers

If this were a movie instead of a press release, we’d have some ominous music playing over a scene in a pristine white lab. Instead, we have a November 13, 2022 Technion-Israel Institute of Technology press release (also on EurekAlert) where the writer tries to highlight the achievement while downplaying the sort of research (in synthetic biology) that could have people running for the exits,

Bringing together concepts from electrical engineering and bioengineering tools, Technion and MIT [Massachusetts Institute of Technology] scientists collaborated to produce cells engineered to compute sophisticated functions – “biocomputers” of sorts. Graduate students and researchers from Technion – Israel Institute of Technology Professor Ramez Daniel’s Laboratory for Synthetic Biology & Bioelectronics worked together with Professor Ron Weiss from the Massachusetts Institute of Technology to create genetic “devices” designed to perform computations like artificial neural circuits. Their results were recently published in Nature Communications.

The genetic material was inserted into the bacterial cell in the form of a plasmid: a relatively short DNA molecule that remains separate from the bacteria’s “natural” genome. Plasmids also exist in nature, and serve various functions. The research group designed the plasmid’s genetic sequence to function as a simple computer, or more specifically, a simple artificial neural network. This was done by means of several genes on the plasmid regulating each other’s activation and deactivation according to outside stimuli.

What does it mean that a cell is a circuit? How can a computer be biological?

At its most basic level, a computer consists of 0s and 1s, of switches. Operations are performed on these switches: summing them, picking the maximal or minimal value between them, etc. More advanced operations rely on the basic ones, allowing a computer to play chess or fly a rocket to the moon.

In the electronic computers we know, the 0/1 switches take the form of transistors. But our cells are also computers, of a different sort. There, the presence or absence of a molecule can act as a switch. Genes activate, trigger or suppress other genes, forming, modifying, or removing molecules. Synthetic biology aims (among other goals) to harness these processes, to synthesize the switches and program the genes that would make a bacterial cell perform complex tasks. Cells are naturally equipped to sense chemicals and to produce organic molecules. Being able to “computerize” these processes within the cell could have major implications for biomanufacturing and have multiple medical applications.

The PhD students (now doctors) Luna Rizik and Loai Danial, together with Dr. Mouna Habib, under the guidance of Prof. Ramez Daniel from the Faculty of Biomedical Engineering at the Technion, and in collaboration with Prof. Ron Weiss from the Synthetic Biology Center, MIT, were inspired by how artificial neural networks function. They created synthetic computation circuits by combining existing genetic “parts,” or engineered genes, in novel ways, and implemented concepts from neuromorphic electronics into bacterial cells. The result was the creation of bacterial cells that can be trained using artificial intelligence algorithms.

The group was able to create flexible bacterial cells that can be dynamically reprogrammed to switch between reporting whether at least one of two test chemicals is present, or whether both are (that is, the cells were able to switch between performing the OR and the AND functions). Cells that can change their programming dynamically are capable of performing different operations under different conditions. (Indeed, our cells do this naturally.) Being able to create and control this process paves the way for more complex programming, making the engineered cells suitable for more advanced tasks. Artificial intelligence algorithms allowed the scientists to produce the required genetic modifications to the bacterial cells at significantly reduced time and cost.
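To see why switching between OR and AND amounts to retuning a single threshold, here is a minimal sketch in the style of the neuromorphic electronics the team borrowed from. This is an illustrative analogy only; the published circuits implement the computation with interacting genes, not with anything resembling this code:

```python
# A perceptron-style threshold unit: two chemical inputs, one report signal.
# Lowering or raising the threshold "reprograms" the gate from OR to AND.

def gate(chemical_a: bool, chemical_b: bool, threshold: float) -> bool:
    """Report detection when the summed input signal crosses the threshold."""
    signal = float(chemical_a) + float(chemical_b)
    return signal >= threshold

for a in (False, True):
    for b in (False, True):
        # threshold 1.0 behaves as OR; threshold 2.0 behaves as AND
        print(a, b, "OR:", gate(a, b, 1.0), "AND:", gate(a, b, 2.0))
```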

Going further, the group made use of another natural property of living cells: they are capable of responding to gradients. Using artificial intelligence algorithms, the group succeeded in harnessing this natural ability to make an analog-to-digital converter – a cell capable of reporting whether the concentration of a particular molecule is “low”, “medium”, or “high.” Such a sensor could be used to deliver the correct dosage of medicaments, including cancer immunotherapy and diabetes drugs.
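Conceptually, that analog-to-digital conversion bins a continuous concentration into discrete levels, something like the sketch below (the cutoff values are hypothetical placeholders, not numbers from the study):

```python
# Toy three-level analog-to-digital conversion of a molecule's concentration.

def classify_concentration(c: float, low_cut: float = 0.1, high_cut: float = 1.0) -> str:
    """Map an analog concentration onto 'low', 'medium', or 'high'."""
    if c < low_cut:
        return "low"
    return "medium" if c < high_cut else "high"

print([classify_concentration(c) for c in (0.05, 0.5, 5.0)])
# ['low', 'medium', 'high']
```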

Of the researchers working on this study, Dr. Luna Rizik and Dr. Mouna Habib hail from the Department of Biomedical Engineering, while Dr. Loai Danial is from the Andrew and Erna Viterbi Faculty of Electrical Engineering. It is bringing the two fields together that allowed the group to make the progress they did in the field of synthetic biology.

This work was partially funded by the Neubauer Family Foundation, the Israel Science Foundation (ISF), European Union’s Horizon 2020 Research and Innovation Programme, the Technion’s Lorry I. Lokey interdisciplinary Center for Life Sciences and Engineering, and the [US Department of Defense] Defense Advanced Research Projects Agency [DARPA].

Here’s a link to and a citation for the paper,

Synthetic neuromorphic computing in living cells by Luna Rizik, Loai Danial, Mouna Habib, Ron Weiss & Ramez Daniel. Nature Communications volume 13, Article number: 5602 (2022) Published: 24 September 2022 DOI: https://doi.org/10.1038/s41467-022-33288-8

This paper is open access.

A CRISPR (clustered regularly interspaced short palindromic repeats) anniversary

June 2022 was the 10th anniversary of the publication of a study that paved the way for CRISPR-Cas9 gene editing, and Sophie Fessl’s June 28, 2022 article for The Scientist offers a brief history (Note: Links have been removed),

Ten years ago, Emmanuelle Charpentier and Jennifer Doudna published the study that paved the way for a new kind of genome editing: the suite of technologies now known as CRISPR. Writing in [the journal] Science, they adapted an RNA-mediated bacterial immune defense into a targeted DNA-altering system. “Our study . . . highlights the potential to exploit the system for RNA-programmable genome editing,” they conclude in the abstract of their paper—a potential that, in the intervening years, transformed the life sciences. 

From gene drives to screens, and diagnostics to therapeutics, CRISPR nucleic acids and the Cas enzymes with which they’re frequently paired have revolutionized how scientists tinker with DNA and RNA. … altering the code of life with CRISPR has been marred by ethical concerns. Perhaps the most prominent example was when Chinese scientist He Jiankui created the first gene edited babies using CRISPR/Cas9 genome editing. Doudna condemned Jiankui’s work, for which he was jailed, as “risky and medically unnecessary” and a “shocking reminder of the scientific and ethical challenges raised by this powerful technology.” 

There’s also the fact that legal battles over who gets to claim ownership of the system’s many applications have persisted almost as long as the technology has been around. Both Doudna and Charpentier’s teams from the University of California, Berkeley, and the University of Vienna and a team led by the Broad Institute’s Feng Zhang claim to be the first to have adapted CRISPR-Cas9 for gene editing in complex cells (eukaryotes). Patent offices in different countries have reached varying decisions, but in the US, the latest rulings say that the Broad Institute of MIT [Massachusetts Institute of Technology] and Harvard retains intellectual property of using CRISPR-Cas9 in eukaryotes, while Emmanuelle Charpentier, the University of California, and the University of Vienna maintain their original patent over using CRISPR-Cas9 for editing in vitro and in prokaryotes. 

Still, despite the controversies, the technique continues to be explored academically and commercially for everything from gene therapy to crop improvement. Here’s a look at seven different ways scientists have utilized CRISPR.

Fessl goes on to give a brief overview of CRISPR and gene drives, genetic screens, diagnostics, including COVID-19 tests, gene therapy, therapeutics, crop and livestock improvement, and basic research.

For anyone interested in the ethical issues (with an in-depth look at the Dr. He Jiankui story), I suggest reading either or both of Eben Kirksey’s 2020 book, “The Mutant Project; Inside the Global Race to Genetically Modify Humans,”

An anthropologist visits the frontiers of genetics, medicine, and technology to ask: Whose values are guiding gene editing experiments? And what does this new era of scientific inquiry mean for the future of the human species?

“That rare kind of scholarship that is also a page-turner.”
—Britt Wray, author of Rise of the Necrofauna

At a conference in Hong Kong in November 2018, Dr. He Jiankui announced that he had created the first genetically modified babies—twin girls named Lulu and Nana—sending shockwaves around the world. A year later, a Chinese court sentenced Dr. He to three years in prison for “illegal medical practice.”

As scientists elsewhere start to catch up with China’s vast genetic research program, gene editing is fueling an innovation economy that threatens to widen racial and economic inequality. Fundamental questions about science, health, and social justice are at stake: Who gets access to gene editing technologies? As countries loosen regulations around the globe, from the U.S. to Indonesia, can we shape research agendas to promote an ethical and fair society?

Eben Kirksey takes us on a groundbreaking journey to meet the key scientists, lobbyists, and entrepreneurs who are bringing cutting-edge genetic engineering tools like CRISPR—created by Nobel Prize-winning biochemists Jennifer Doudna and Emmanuelle Charpentier—to your local clinic. He also ventures beyond the scientific echo chamber, talking to disabled scholars, doctors, hackers, chronically-ill patients, and activists who have alternative visions of a genetically modified future for humanity.

and/or Kevin Davies’s 2020 book, “Editing Humanity: The CRISPR Revolution and the New Era of Genome Editing,”

One of the world’s leading experts on genetics unravels one of the most important breakthroughs in modern science and medicine. 

If our genes are, to a great extent, our destiny, then what would happen if mankind could engineer and alter the very essence of our DNA coding? Millions might be spared the devastating effects of hereditary disease or the challenges of disability, from the pain of sickle-cell anemia to the ravages of Huntington’s disease.

But this power to “play God” also raises major ethical questions and poses threats for potential misuse. For decades, these questions have lived exclusively in the realm of science fiction, but as Kevin Davies powerfully reveals in his new book, this is all about to change.

Engrossing and page-turning, Editing Humanity takes readers inside the fascinating world of a new gene editing technology called CRISPR, a high-powered genetic toolkit that enables scientists to not only engineer but to edit the DNA of any organism down to the individual building blocks of the genetic code.

Davies introduces readers to arguably the most profound scientific breakthrough of our time. He tracks the scientists on the front lines of its research to the patients whose powerful stories bring the narrative movingly to human scale.

Though the birth of the “CRISPR babies” in China made international news, there is much more to the story of CRISPR than headlines seemingly ripped from science fiction. In Editing Humanity, Davies sheds light on the implications that this new technology can have on our everyday lives and in the lives of generations to come.

Kevin Davies is the executive editor of The CRISPR Journal and the founding editor of Nature Genetics. He holds an MA in biochemistry from the University of Oxford and a PhD in molecular genetics from the University of London. He is the author of Cracking the Genome, The $1,000 Genome, and co-authored a new edition of DNA: The Story of the Genetic Revolution with Nobel Laureate James D. Watson and Andrew Berry. In 2017, Kevin was selected for a Guggenheim Fellowship in science writing.

I’ve read both books and while some of the same ground is covered, the perspectives diverge somewhat. Both authors offer a more nuanced discussion of the issues than was the case in the original reporting about Dr. He’s work.

Reconfiguring a LEGO-like AI chip with light

MIT engineers have created a reconfigurable AI chip that comprises alternating layers of sensing and processing elements that can communicate with each other. Credit: Figure courtesy of the researchers and edited by MIT News

This image certainly challenges any ideas I have about what Lego looks like. It seems they see things differently at the Massachusetts Institute of Technology (MIT). From a June 13, 2022 MIT news release (also on EurekAlert),

Imagine a more sustainable future, where cellphones, smartwatches, and other wearable devices don’t have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip — like LEGO bricks incorporated into an existing build. Such reconfigurable chipware could keep devices up to date while reducing our electronic waste. 

Now MIT engineers have taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip.

The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs employ conventional wiring to relay signals between layers. Such intricate connections are difficult if not impossible to sever and rewire, making such stackable designs not reconfigurable.

The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.

“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”

The researchers are eager to apply the design to edge computing devices — self-sufficient sensors and other electronics that work independently from any central or distributed resources such as supercomputers or cloud-based computing.

“As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”

The team’s results are published today in Nature Electronics. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hyun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere.

Lighting the way

The team’s design is currently configured to carry out basic image-recognition tasks. It does so via a layering of image sensors, LEDs, and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the team previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on a chip, without the need for external software or an Internet connection.

In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters — in this case, M, I, and T. While a conventional approach would be to relay a sensor’s signals to a processor via physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers, without requiring a physical connection. 

“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”

The team’s optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors constitute an image sensor for receiving data, while the LEDs transmit data to the next layer. As a signal (for instance, an image of a letter) reaches the image sensor, the image’s light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.
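Here is a toy numerical sketch of that sensor-to-synapse hop (my simplification, not the authors’ code): a layer’s LED intensities arrive at the next layer’s photodetectors slightly attenuated, and a weight array, standing in for the trained artificial synapses, scores the pattern against stored templates, with the strongest output “current” winning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5x5 binary "images" flattened to 25-element vectors; one
# stored template per letter stands in for a trained artificial-synapse array.
templates = {letter: rng.integers(0, 2, 25).astype(float) for letter in "MIT"}

def optical_link(pixels: np.ndarray, attenuation: float = 0.9) -> np.ndarray:
    """LED -> photodetector hop: intensities arrive slightly attenuated."""
    return attenuation * pixels

def classify(pixels: np.ndarray) -> str:
    received = optical_link(pixels)
    # Each synapse array's output current grows with the match to its
    # template; the largest response wins, as in the paper's readout.
    currents = {letter: float(received @ w) for letter, w in templates.items()}
    return max(currents, key=currents.get)

print(classify(templates["M"]))  # a clean 'M' pattern is reported as 'M'
```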

Stacking up

The team fabricated a single chip, with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixellated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the larger the chance that the image is indeed the letter that the particular array is trained to recognize.)

The team found that the chip correctly classified clear images of each letter, but it was less able to distinguish between blurry images, for instance between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better “denoising” processor, and found the chip then accurately identified the images.

“We showed stackability, replaceability, and the ability to insert a new function into the chip,” notes MIT postdoc Min-Kyu Song.

The researchers plan to add more sensing and processing capabilities to the chip, and they envision the applications to be boundless.

“We can add layers to a cellphone’s camera so it could recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” offers Choi, who along with Kim previously developed a “smart” skin for monitoring vital signs.

Another idea, he adds, is for modular chips, built into electronics, that consumers can choose to build up with the latest sensor and processor “bricks.”

“We can make a general chip platform, and each layer could be sold separately like a video game,” Jeehwan Kim says. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”

This research was supported, in part, by the Ministry of Trade, Industry, and Energy (MOTIE) from South Korea; the Korea Institute of Science and Technology (KIST); and the Samsung Global Research Outreach Program.

Here’s a link to and a citation for the paper,

Reconfigurable heterogeneous integration using stackable chips with embedded artificial intelligence by Chanyeol Choi, Hyunseok Kim, Ji-Hoon Kang, Min-Kyu Song, Hanwool Yeon, Celesta S. Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Jaeyong Lee, Ikbeom Jang, Subeen Pang, Kanghyun Ryu, Sang-Hoon Bae, Yifan Nie, Hyun S. Kum, Min-Chul Park, Suyoun Lee, Hyung-Jun Kim, Huaqiang Wu, Peng Lin & Jeehwan Kim. Nature Electronics volume 5, pages 386–393 (2022) Issue Date: June 2022 Published: 13 June 2022 DOI: https://doi.org/10.1038/s41928-022-00778-y

This paper is behind a paywall.

400 nm thick glucose fuel cell uses body’s own sugar

This May 12, 2022 news item on Nanowerk reminds me of bioenergy harvesting (using the body’s own processes rather than batteries to power implants),

Glucose is the sugar we absorb from the foods we eat. It is the fuel that powers every cell in our bodies. Could glucose also power tomorrow’s medical implants?

Engineers at MIT [Massachusetts Institute of Technology] and the Technical University of Munich think so. They have designed a new kind of glucose fuel cell that converts glucose directly into electricity. The device is smaller than other proposed glucose fuel cells, measuring just 400 nanometers thick. The sugary power source generates about 43 microwatts per square centimeter of electricity, achieving the highest power density of any glucose fuel cell to date under ambient conditions.

Caption: Silicon chip with 30 individual glucose micro fuel cells, seen as small silver squares inside each gray rectangle. Image credit: Kent Dayton

A May 12, 2022 MIT news release (also on EurekAlert) by Jennifer Chu, which originated the news item, describes the technology in more detail (Note: A link has been removed),

The new device is also resilient, able to withstand temperatures up to 600 degrees Celsius. If incorporated into a medical implant, the fuel cell could remain stable through the high-temperature sterilization process required for all implantable devices.

The heart of the new device is made from ceramic, a material that retains its electrochemical properties even at high temperatures and miniature scales. The researchers envision the new design could be made into ultrathin films or coatings and wrapped around implants to passively power electronics, using the body’s abundant glucose supply.

“Glucose is everywhere in the body, and the idea is to harvest this readily available energy and use it to power implantable devices,” says Philipp Simons, who developed the design as part of his PhD thesis in MIT’s Department of Materials Science and Engineering (DMSE). “In our work we show a new glucose fuel cell electrochemistry.”

“Instead of using a battery, which can take up 90 percent of an implant’s volume, you could make a device with a thin film, and you’d have a power source with no volumetric footprint,” says Jennifer L.M. Rupp, Simons’ thesis supervisor and a DMSE visiting professor, who is also an associate professor of solid-state electrolyte chemistry at Technical University Munich in Germany.

Simons and his colleagues detail their design today in the journal Advanced Materials. Co-authors of the study include Rupp, Steven Schenk, Marco Gysel, and Lorenz Olbrich.

A “hard” separation

The inspiration for the new fuel cell came in 2016, when Rupp, who specializes in ceramics and electrochemical devices, went to take a routine glucose test toward the end of her pregnancy.

“In the doctor’s office, I was a very bored electrochemist, thinking what you could do with sugar and electrochemistry,” Rupp recalls. “Then I realized, it would be good to have a glucose-powered solid state device. And Philipp and I met over coffee and wrote out on a napkin the first drawings.”

The team is not the first to conceive of a glucose fuel cell, which was initially introduced in the 1960s and showed potential for converting glucose’s chemical energy into electrical energy. But glucose fuel cells at the time were based on soft polymers and were quickly eclipsed by lithium-iodide batteries, which would become the standard power source for medical implants, most notably the cardiac pacemaker.

However, batteries have a limit to how small they can be made, as their design requires the physical capacity to store energy.

“Fuel cells directly convert energy rather than storing it in a device, so you don’t need all that volume that’s required to store energy in a battery,” Rupp says.

In recent years, scientists have taken another look at glucose fuel cells as potentially smaller power sources, fueled directly by the body’s abundant glucose.

A glucose fuel cell’s basic design consists of three layers: a top anode, a middle electrolyte, and a bottom cathode. The anode reacts with glucose in bodily fluids, transforming the sugar into gluconic acid. This electrochemical conversion releases a pair of protons and a pair of electrons. The middle electrolyte acts to separate the protons from the electrons, conducting the protons through the fuel cell, where they combine with air to form molecules of water — a harmless byproduct that flows away with the body’s fluid. Meanwhile, the isolated electrons flow to an external circuit, where they can be used to power an electronic device.
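Written as conventional half-reactions, the electrochemistry described above looks like this (my rendering of the press release’s description, not equations taken from the paper):

```latex
% Anode: two-electron oxidation of glucose to gluconic acid
\mathrm{C_6H_{12}O_6} + \mathrm{H_2O} \rightarrow \mathrm{C_6H_{12}O_7} + 2\,\mathrm{H^+} + 2\,e^-
% Cathode: protons conducted through the electrolyte meet oxygen
\tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,e^- \rightarrow \mathrm{H_2O}
```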

The team looked to improve on existing materials and designs by modifying the electrolyte layer, which is often made from polymers. But polymer properties, along with their ability to conduct protons, easily degrade at high temperatures, are difficult to retain when scaled down to the dimension of nanometers, and are hard to sterilize. The researchers wondered if a ceramic — a heat-resistant material which can naturally conduct protons — could be made into an electrolyte for glucose fuel cells.

“When you think of ceramics for such a glucose fuel cell, they have the advantage of long-term stability, small scalability, and silicon chip integration,” Rupp notes. “They’re hard and robust.”

Peak power

The researchers designed a glucose fuel cell with an electrolyte made from ceria, a ceramic material that possesses high ion conductivity, is mechanically robust, and as such, is widely used as an electrolyte in hydrogen fuel cells. It has also been shown to be biocompatible.

“Ceria is actively studied in the cancer research community,” Simons notes. “It’s also similar to zirconia, which is used in tooth implants, and is biocompatible and safe.”

The team sandwiched the electrolyte with an anode and cathode made of platinum, a stable material that readily reacts with glucose. They fabricated 150 individual glucose fuel cells on a chip, each about 400 nanometers thin, and about 300 micrometers wide (about the width of 30 human hairs). They patterned the cells onto silicon wafers, showing that the devices can be paired with a common semiconductor material. They then measured the current produced by each cell as they flowed a solution of glucose over each wafer in a custom-fabricated test station.

They found many cells produced a peak voltage of about 80 millivolts. Given the tiny size of each cell, this output is the highest power density of any existing glucose fuel cell design.

“Excitingly, we are able to draw power and current that’s sufficient to power implantable devices,” Simons says.

“It is the first time that proton conduction in electroceramic materials can be used for glucose-to-power conversion, defining a new type of electrochemistry,” Rupp says. “It extends the material use-cases from hydrogen fuel cells to new, exciting glucose-conversion modes.”

Here’s a link to and a citation for the paper,

A Ceramic-Electrolyte Glucose Fuel Cell for Implantable Electronics by Philipp Simons, Steven A. Schenk, Marco A. Gysel, Lorenz F. Olbrich, Jennifer L. M. Rupp. Advanced Materials. First published: 05 April 2022. DOI: https://doi.org/10.1002/adma.202109075

This paper is open access.

Overview of fusion energy scene

It’s funny how you think you know something and then realize you don’t. I’ve been hearing about cold fusion/fusion energy for years but never really understood what the term meant. So, this post includes an explanation, as well as an overview and a Cold Fusion Rap to ‘wrap’ it all up. (Sometimes I cannot resist a pun.)

Fusion energy explanation (1)

The Massachusetts Institute of Technology (MIT) has a Climate Portal where fusion energy is explained,

Fusion energy is the source of energy at the center of stars, including our own sun. Stars, like most of the universe, are made up of hydrogen, the simplest and most abundant element in the universe, created during the big bang. The center of a star is so hot and so dense that the immense pressure forces hydrogen atoms together. These atoms are forced together so strongly that they create new atoms entirely—helium atoms—and release a staggering amount of energy in the process. This energy is called fusion energy.

More energy than chemical energy

Fusion energy, like fossil fuels, is a form of stored energy. But fusion can create 20 to 100 million times more energy than the chemical reaction of a fossil fuel. Most of the mass of an atom, 99.9 percent, is contained at an atom’s center—inside of its nucleus. The ratio of this matter to the empty space in an atom is almost exactly the same ratio of how much energy you release when you manipulate the nucleus. In contrast, a chemical reaction, such as burning coal, rearranges the atoms through heat, but doesn’t alter the atoms themselves, so we don’t get as much energy.

Making fusion energy

For scientists, making fusion energy means recreating the conditions of stars, starting with plasma. Plasma is the fourth state of matter, after solids, liquids and gases. Ice is an example of a solid. When heated up, it becomes a liquid. Place that liquid in a pot on the stove, and it becomes a gas (steam). If you take that gas and continue to make it hotter, at around 10,000 degrees Fahrenheit (~6,000 Kelvin), it will change from a gas to the next phase of matter: plasma. Ninety-nine percent of the mass in the universe is in the plasma state, since almost the entire mass of the universe is in super hot stars that exist as plasma.

To make fusion energy, scientists must first build a steel chamber and create a vacuum, like in outer space. The next step is to add hydrogen gas. The gas particles are charged to produce an electric current and then surrounded and contained with an electromagnetic force; the hydrogen is now a plasma. This plasma is then heated to about 100 million degrees and fusion energy is released.
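To make the portal’s description concrete, the reaction most fusion programs target (my addition, not part of the MIT text) fuses the two heavy hydrogen isotopes, deuterium and tritium:

```latex
% Deuterium-tritium fusion releases about 17.6 MeV per reaction, carried by
% the helium nucleus (3.5 MeV) and the neutron (14.1 MeV). Per kilogram of
% fuel that is roughly 3 x 10^14 J, versus about 3 x 10^7 J for burning a
% kilogram of coal, an advantage of order ten million, the same ballpark as
% the "20 to 100 million times" figure quoted above.
{}^{2}_{1}\mathrm{H} + {}^{3}_{1}\mathrm{H} \rightarrow {}^{4}_{2}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV})
```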

Fusion energy explanation (2)

A Vancouver-based company, General Fusion, offers an explanation of how they have approached making fusion energy a reality,

How It Works: Plasma Injector Technology at General Fusion from General Fusion on Vimeo.

Following the announcement that a General Fusion demonstration plant would be built in the UK (see the June 17, 2021 General Fusion news release), the company announced an agreement with the UK Atomic Energy Authority (UKAEA) to commercialize the technology. From an October 17, 2022 General Fusion news release,

Today [October 17, 2022], General Fusion and the UKAEA kick off projects to advance the commercialization of magnetized target fusion energy as part of an important collaborative agreement. With these unique projects, General Fusion will benefit from the vast experience of the UKAEA’s team. The results will hone the design of General Fusion’s demonstration machine being built at the Culham Campus, part of the thriving UK fusion cluster. Ultimately, the company expects the projects will support its efforts to provide low-cost and low-carbon energy to the electricity grid.

General Fusion’s approach to fusion maximizes the reapplication of existing industrialized technologies, bypassing the need for expensive superconducting magnets, significant new materials, or high-power lasers. The demonstration machine will create fusion conditions in a power-plant-relevant environment, confirming the performance and economics of the company’s technology.

“The leading-edge fusion researchers at UKAEA have proven experience building, commissioning, and successfully operating large fusion machines,” said Greg Twinney, Chief Executive Officer, General Fusion. “Partnering with UKAEA’s incredible team will fast-track work to advance our technology and achieve our mission of delivering affordable commercial fusion power to the world.”

“Fusion energy is one of the greatest scientific and engineering quests of our time,” said Ian Chapman, UKAEA CEO. “This collaboration will enable General Fusion to benefit from the ground-breaking research being done in the UK and supports our shared aims of making fusion part of the world’s future energy mix for generations to come.”

I last wrote about General Fusion in a November 3, 2021 posting about the company’s move (?) to Sea Island, Richmond,

I first wrote about General Fusion in a December 2, 2011 posting titled: Burnaby-based company (Canada) challenges fossil fuel consumption with nuclear fusion. (For those unfamiliar with the Vancouver area, there’s the city of Vancouver and there’s Vancouver Metro, which includes the city of Vancouver and others in the region. Burnaby is part of Metro Vancouver; General Fusion is moving to Sea Island (near Vancouver Airport), in Richmond, which is also in Metro Vancouver.) Kenneth Chan’s October 20, 2021 article for the Daily Hive gives more detail about General Fusion’s new facilities (Note: A link has been removed),

The new facility will span two buildings at 6020 and 6082 Russ Baker Way, near YVR’s [Vancouver Airport] South Terminal. This includes a larger building previously used for aircraft engine maintenance and repair.

The relocation process could start before the end of 2021, allowing the company to more than quadruple its workforce over the coming years. Currently, it employs about 140 people.

The Sea Island [in Richmond] facility will house its corporate offices, primary fusion technology development division, and many of its engineering laboratories. This new facility provides General Fusion with the ability to build a new demonstration prototype to support the commercialization of its magnetized target fusion technology.

As of the date of this posting, I have not been able to confirm the move. The company’s Contact webpage lists an address in Burnaby, BC for its headquarters.

The overview

Alex Pasternack, in an August 17, 2022 article (The frontrunners in the trillion-dollar race for limitless fusion power) for Fast Company, provides an overview of the international race with a very, very strong emphasis on the US scene (Note: Links have been removed),

With energy prices on the rise, along with demands for energy independence and an urgent need for carbon-free power, plans to walk away from nuclear energy are now being revised in Japan, South Korea, and even Germany. Last month, Europe announced green bonds for nuclear, and the U.S., thanks to the Inflation Reduction Act, will soon devote millions to new nuclear designs, incentives for nuclear production and domestic uranium mining, and, after years of paucity in funding, cash for fusion.

The new investment comes as fusion—long considered a pipe dream—has attracted real money from big venture capital and big companies, who are increasingly betting that abundant, cheap, clean nuclear will be a multi-trillion dollar industry. Last year, investors like Bill Gates and Jeff Bezos injected a record $3.4 billion into firms working on the technology, according to Pitchbook. One fusion firm, Seattle-based Helion, raised a record $500 million from Sam Altman and Peter Thiel. That money has certainly supercharged the nuclear sector: The Fusion Industry Association says that at least 33 different companies were now pursuing nuclear fusion, and predicted that fusion would be connected to the energy grid sometime in the 2030s.

… What’s not a joke is that we have about zero years to stop powering our civilization with earth-warming energy. The challenge with fusion is to achieve net energy gain, where the energy produced by a fusion reaction exceeds the energy used to make it. One milestone came quietly this month, when a team of researchers at the National Ignition Facility at Lawrence Livermore National Lab in California announced that an experiment last year had yielded over 1.3 megajoules (MJ) of energy, setting a new world record for energy yield for a nuclear fusion experiment. The experiment also achieved scientific ignition for the first time in history: after applying enough heat using an arsenal of lasers, the plasma became self-heating. (Researchers have since been trying to replicate the result, so far without success.)

On a growing campus an hour outside of Boston, the MIT spinoff Commonwealth Fusion Systems is building their first machine, SPARC, with a goal of producing power by 2025. “You’ll push a button,” CEO and cofounder Bob Mumgaard told the Khosla Ventures CEO Summit this summer, “and for the first time on earth you will make more power out than in from a fusion plasma. That’s about 200 million degrees—you know, cooling towers will have a bunch of steam go out of them—and you let your finger off the button and it will stop, and you push the button again and it will go.” With an explosion in funding from investors including Khosla, Bill Gates, George Soros, Emerson Collective and Google to name a few—they raised $1.8 billion last year alone—CFS hopes to start operating a prototype in 2025.

Like the three-decade-old ITER project in France, set for operation in 2025, Commonwealth and many other companies will try to reach net energy gain using a machine called a tokamak, a bagel-shaped device filled with super-hot plasma, heated to about 150 million degrees, within which hydrogen atoms can fuse and release energy. To control that hot plasma, you need to build a very powerful magnetic field. Commonwealth’s breakthrough was tape—specifically, a high-temperature-superconducting steel tape coated with a compound called yttrium-barium-copper oxide. When a prototype was first made commercially available in 2009, Dennis Whyte, director of MIT’s Plasma Science and Fusion Center, ordered as much as he could. With Mumgaard and a team of students, his lab used coils of the stuff to build a new kind of superconducting magnet, and a prototype reactor named ARC, after Tony Stark’s energy source. Commonwealth was born in 2015.

Southern California-based TAE Technologies has raised a whopping $1.2 billion since it was founded in 1998, and $250 million in its latest round. The round, announced in July, was led by Chevron’s venture arm, Google, and Sumitomo, a Tokyo-based holding company that aims to deploy fusion power in the Asia-Pacific market. TAE’s approach, which involves creating a fusion reaction at incredibly high heat, has a key advantage. Whereas ITER uses the hydrogen isotopes deuterium and tritium—an extremely rare element that must be specially created from lithium, and whose reaction produces radioactive free neutrons as a byproduct—TAE’s linear reactor is completely non-radioactive, because it relies on hydrogen and boron, two abundant, naturally-occurring elements that react to produce only helium.

General Atomics, of San Diego, California, has the largest tokamak in the U.S. Its powerful magnetic chamber, called the DIII-D National Fusion Facility, or just “D-three-D,” now features a Toroidal Field Reversing Switch, which allows for the redirection of 120,000 amps of the current that power the primary magnetic field. It’s the only tokamak in the world that allows researchers to switch directions of the magnetic fields in minutes rather than hours. Another new upgrade, a traveling-wave antenna, allows physicists to inject high-powered “helicon” radio waves into DIII-D plasmas so fusion reactions occur much more powerfully and efficiently.

“We’ve got new tools for flexibility and new tools to help us figure out how to make that fusion plasma just keep going,” Richard Buttery, director of the project, told the San Diego Union-Tribune in January. The company is also behind eight of the magnet modules at the heart of the ITER facility, including its wild Central Solenoid — the world’s most powerful magnet — a kind of scaled-up version of the California machine.

But like an awful lot in fusion, ITER has been hampered by cost overruns and delays: “first plasma,” once slated for 2025, is no longer expected that year, due in part to global pandemic-related disruptions. Some have complained that the money going to ITER—the latest price tag is $22 billion—has distracted from other, more practical energy projects, and others doubt whether the project can ever produce net energy gain.

Based in Canada, General Fusion is backed by Jeff Bezos and building on technology originally developed by the U.S. Navy and explored by Russian scientists for potential use in weapons. Inside the machine, molten metal is spun to create a cavity, and pumped with pistons that push the metal inward to form a sphere. Hydrogen, heated to super-hot temperatures and held in place by a magnetic field, fills the sphere to create the reaction. Heat transferred to the metal can be turned into steam to drive a turbine and generate electricity. As former CEO Christofer Mowry told Fast Company last year, “to re-create a piece of the sun on Earth, as you can imagine, is very, very challenging.” Like many fusion companies, GF depends on modern supercomputers and advanced modeling and computational techniques to understand the science of plasma physics, as well as modern manufacturing technologies and materials.

“That’s really opened the door not just to being able to make fusion work but to make it work in a practical way,” Mowry said. This has been difficult to make work, but with a demonstration center it announced last year in Culham, England, GF isn’t aiming to generate electricity but to gather the data needed to later build a commercial pilot plant that could—and to generate more interest in fusion.

Magneto-Inertial Fusion Technologies, or MIFTI, of Tustin, Calif., founded by researchers from the University of California, Irvine, is developing a reactor that uses what’s known as a Staged Z-Pinch approach. A Z-Pinch design heats, confines, and compresses plasma using an intense, pulsed electrical current to generate a magnetic field that could reduce instabilities in the plasma, allowing fusion to persist for longer periods of time. But only recently have MIFTI’s scientists been able to overcome the instability problems, the company says, thanks to software made available to them at UC-Irvine by the U.S. Air Force. …

Princeton Fusion Systems of Plainsboro, New Jersey, is a small business focused on developing small, clean fusion reactors for both terrestrial and space applications. A spinoff of Princeton Satellite Systems, which specializes in spacecraft control, the company’s Princeton FRC reactor is built upon 15 years of research at the Princeton Plasma Physics Laboratory, funded primarily by the U.S. DOE and NASA, and is designed to eventually provide between 1 and 10 megawatts of power in off-grid locations and in modular power plants, “from remote industrial applications to emergency power after natural disasters to off-world bases on the moon or Mars.” The concept uses radio-frequency electromagnetic fields to generate and sustain a plasma formation called a Field-Reversed Configuration (FRC) inside a strong magnetic bottle. …

Tokamak Energy, a U.K.-based company named after the popular fusion device, announced in July that its ST-40 tokamak reactor had reached the 100 million Celsius threshold for commercially viable nuclear fusion. The achievement was made possible by a proprietary design built on a spherical, rather than donut, shape. This means that the magnets are closer to the plasma stream, allowing for smaller and cheaper magnets to create even stronger magnetic fields. …

Based in Pasadena, California, Helicity Space is developing a propulsion and power technology based on a specialized magneto inertial fusion concept. The system, a spin on what the Seattle-based fusion company Helion is doing, appears to use twisted compression coils, like a braided rope, to exploit a known phenomenon called magnetic helicity. … According to ZoomInfo and LinkedIn, Helicity has over $4 million in funding and up to 10 employees, all aimed, the company says, at “enabling humanity’s access to the solar system, with a Helicity Drive-powered flight to Mars expected to take two months, without planetary alignment.”
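(For the curious: magnetic helicity is a standard quantity in plasma physics, not something proprietary to Helicity Space. It is a volume integral measuring how twisted and linked the magnetic field lines are,

H = \int_V \mathbf{A} \cdot \mathbf{B} \, dV, \qquad \mathbf{B} = \nabla \times \mathbf{A},

where A is the magnetic vector potential and B is the magnetic field.)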

ITER (International Thermonuclear Experimental Reactor), meaning “the way” or “the path” in Latin and mentioned in Pasternack’s article, dates its history back to about 1978. (You can read more here in the ITER Wikipedia entry.)

For more about the various approaches to fusion energy, Pasternack’s August 17, 2022 article (The frontrunners in the trillion-dollar race for limitless fusion power) provides details. I wish there had been a little more about efforts in Japan, South Korea, and other parts of the world. Pasternack’s singular focus on the US, with a little Canada and UK seemingly thrown into the mix to provide an international flavour, seems a little myopic.

Fusion rap

In an August 30, 2022 Baba Brinkman announcement (received via email) which gave an extensive update of Brinkman’s activities, there was this,

And the other new topic, which was surprisingly fun to explore, is cold fusion also known as “Low Energy Nuclear Reactions” which you may or may not have a strong opinion about, but if you do I imagine you probably think the technology is either bunk or destined to save the world.

That makes for an interesting topic to explore in rap songs! And fortunately last month I had the pleasure of performing for the cream of the LENR crop at the 24th International Conference on Cold Fusion, including rap ups and two new songs about the field, one very celebratory (for the insiders), and one cautiously optimistic (as an outreach tool).

You can watch “Cold Fusion Renaissance” and “You Must LENR” [Low Energy Nuclear Reactions, sometimes Lattice Enabled Nanoscale Reactions, Cold Fusion, or CANR (Chemically Assisted Nuclear Reactions)] for yourself to determine which video is which, and also enjoy this article in Infinite Energy Magazine which chronicles my whole cold fusion rap saga.

Here’s one of the rap videos mentioned in Brinkman’s email,

Enjoy!

Detangling carbon nanotubes (CNTs)

An April 27, 2022 news item on ScienceDaily announces research into a solution to a vexing problem associated with the production of carbon nanotubes (CNTs),

Carbon nanotubes that are prone to tangle like spaghetti can use a little special sauce to realize their full potential.

Rice University scientists have come up with just the sauce, an acid-based solvent that simplifies carbon nanotube processing in a way that’s easier to scale up for industrial applications.

The Rice lab of Matteo Pasquali reported in Science Advances on its discovery of a unique combination of acids that helps separate nanotubes in a solution and turn them into films, fibers or other materials with excellent electrical and mechanical properties.

The study, co-led by graduate alumnus Robert Headrick and graduate student Steven Williams, reports the solvent is compatible with conventional manufacturing processes. That should help it find a place in the production of advanced materials for many applications.

An April 22, 2022 Rice University news release (received via email and also published on April 27, 2022 on EurekAlert), which originated the news item, delves further into how the research has environmental benefits and into its technical aspects (Note: Links have been removed),

“There’s a growing realization that it’s probably not a good idea to increase the mining of copper and aluminum and nickel,” said Pasquali, Rice’s A.J. Hartsook Professor and a professor of chemical and biomolecular engineering, chemistry and materials science and nanoengineering. He is also director of the Rice-based Carbon Hub, which promotes the development of advanced carbon materials to benefit the environment.

“But there is this giant opportunity to use hydrocarbons as our ore,” he said. “In that light, we need to broaden as much as possible the range in which we can use carbon materials, especially where it can displace metals with a product that can be manufactured sustainably from a feedstock like hydrocarbons.” Pasquali noted these manufacturing processes produce clean hydrogen as well.

“Carbon is plentiful, we control the supply chains and we know how to get it out in an environmentally responsible way,” he said.

A better way to process carbon will help. The solvent is based on methanesulfonic (MSA), p-toluenesulfonic (pToS) and oleum acids that, when combined, are less corrosive than those currently used to process nanotubes in a solution. Separating nanotubes (which researchers refer to as dissolving) is a necessary step before they can be extruded through a needle or other device where shear forces help turn them into familiar fibers or sheets.

Oleum and chlorosulfonic acids have long been used to dissolve nanotubes without modifying their structures, but both are highly corrosive. By combining oleum with two weaker acids, the team developed a broadly applicable process that enables new manufacturing of nanotube products.

“The oleum surrounds each individual nanotube and gives it a very localized positive charge,” said Headrick, now a research scientist at Shell. “That charge makes them repel each other.”

After detangling, the milder acids further separate the nanotubes. They found MSA is best for fiber spinning and roll-to-roll film production, while pToS, a solid that melts at 40 degrees Celsius (104 degrees Fahrenheit), is particularly useful for 3D printing applications because it allows nanotube solutions to be processed at a moderate temperature and then solidified by cooling.

The researchers used these stable liquid crystalline solutions to make things in both modern and traditional ways, 3D printing carbon nanotube aerogels and silk screen printing patterns onto a variety of surfaces, including glass. 

The solutions also enabled roll-to-roll production of transparent films that can be used as electrodes. “Honestly, it was a little surprising how well that worked,” Headrick said. “It came out pretty flawless on the very first try.”

The researchers noted oleum still requires careful handling, but once diluted with the other acids, the solution is much less aggressive to other materials. 

“The acids we’re using are so much gentler that you can use them with common plastics,” Headrick said. “That opens the door to a lot of materials processing and printing techniques that are already in place in manufacturing facilities. 

“It’s also really important for integrating carbon nanotubes into other devices, depositing them as one step in a device-manufacturing process,” he said.

They reported the less-corrosive solutions did not give off harmful fumes and were easier to clean up after production. MSA and pToS can also be recycled after processing nanotubes, lowering their environmental impact and energy and processing costs.

Williams said the next step is to fine-tune the solvent for applications, and to determine how factors like chirality and size affect nanotube processing. “It’s really important that we have high-quality, clean, large diameter tubes,” he said.

Co-authors of the paper are alumna Lauren Taylor and graduate students Oliver Dewey and Cedric Ginestra of Rice; graduate student Crystal Owens and professors Gareth McKinley and A. John Hart at the Massachusetts Institute of Technology; alumna Lucy Liberman, graduate student Asia Matatyaho Ya’akobi and Yeshayahu Talmon, a professor emeritus of chemical engineering, at the Technion-Israel Institute of Technology, Haifa, Israel; and Benji Maruyama, autonomous materials lead in the Materials and Manufacturing Directorate, Air Force Research Laboratory.

Here’s a link to and a citation for the paper,

Versatile acid solvents for pristine carbon nanotube assembly by Robert J. Headrick, Steven M. Williams, Crystal E. Owens, Lauren W. Taylor, Oliver S. Dewey, Cedric J. Ginestra, Lucy Liberman, Asia Matatyaho Ya’akobi, Yeshayahu Talmon, Benji Maruyama, Gareth H. McKinley, A. John Hart, Matteo Pasquali. Science Advances • 27 Apr 2022 • Vol 8, Issue 17 • DOI: 10.1126/sciadv.abm3285

This paper is open access.

We have math neurons and singing neurons?

According to the two items I have here, the answer is: yes, we have neurons that are specific to math and to the sound of singing.

Math neurons

A February 14, 2022 news item on ScienceDaily explains how specific the math neurons are,

The brain has neurons that fire specifically during certain mathematical operations. This is shown by a recent study conducted by the Universities of Tübingen and Bonn [both in Germany]. The findings indicate that some of the neurons detected are active exclusively during additions, while others are active during subtractions. They do not care whether the calculation instruction is written down as a word or a symbol. The results have now been published in the journal Current Biology.

Caption: Using ultrafine electrodes implanted in the temporal lobes of epilepsy patients, researchers can visualize the activity of brain regions. © Photo: Christian Burkert/Volkswagen-Stiftung/University of Bonn

A February 14, 2022 University of Bonn press release (also on EurekAlert), which originated the news item, delves further,

Most elementary school children probably already know that three apples plus two apples add up to five apples. However, what happens in the brain during such calculations is still largely unknown. The current study by the Universities of Bonn and Tübingen now sheds light on this issue.

The researchers benefited from a special feature of the Department of Epileptology at the University Hospital Bonn. It specializes in surgical procedures on the brains of people with epilepsy. In some patients, seizures always originate from the same area of the brain. In order to precisely localize this defective area, the doctors implant several electrodes into the patients. The probes can be used to precisely determine the origin of the seizures. In addition, the activity of individual neurons can be measured via the wiring.

Some neurons fire only when summing up

Five women and four men participated in the current study. They had electrodes implanted in the so-called temporal lobe of the brain to record the activity of nerve cells. Meanwhile, the participants had to perform simple arithmetic tasks. “We found that different neurons fired during additions than during subtractions,” explains Prof. Florian Mormann from the Department of Epileptology at the University Hospital Bonn.

It was not the case that some neurons responded only to a “+” sign and others only to a “-” sign: “Even when we replaced the mathematical symbols with words, the effect remained the same,” explains Esther Kutter, who is doing her doctorate in Prof. Mormann’s research group. “For example, when subjects were asked to calculate ‘5 and 3,’ their addition neurons sprang back into action, whereas for ‘7 less 4,’ their subtraction neurons did.”

This shows that the cells discovered actually encode a mathematical instruction for action. The brain activity thus showed with great accuracy what kind of tasks the test subjects were currently calculating: The researchers fed the cells’ activity patterns into a self-learning computer program. At the same time, they told the software whether the subjects were currently calculating a sum or a difference. When the algorithm was confronted with new activity data after this training phase, it was able to accurately identify during which computational operation it had been recorded.
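(To make that description concrete, here is my own toy sketch of this kind of decoding analysis; it is not the Bonn/Tübingen team’s actual pipeline, and the data are simulated:

# Toy sketch: decode "addition vs. subtraction" from per-trial firing rates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50

# Simulated firing rates (rows = trials, columns = neurons); in the study,
# these would come from the implanted electrodes.
X = rng.poisson(lam=5.0, size=(n_trials, n_neurons)).astype(float)
y = rng.integers(0, 2, size=n_trials)  # 0 = addition trial, 1 = subtraction trial
X[y == 1, :5] += 3.0                   # pretend five neurons fire more for subtraction

# Train on some trials, test on held-out trials; accuracy well above the
# 0.5 chance level means the activity patterns carry the operation.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")

Accuracy well above chance on held-out trials is the kind of result the press release is describing.)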

Prof. Andreas Nieder from the University of Tübingen supervised the study together with Prof. Mormann. “We know from experiments with monkeys that neurons specific to certain computational rules also exist in their brains,” he says. “In humans, however, there is hardly any data in this regard.” During their analysis, the two working groups came across an interesting phenomenon: One of the brain regions studied was the so-called parahippocampal cortex. There, too, the researchers found nerve cells that fired specifically during addition or subtraction. However, when summing up, different addition neurons became alternately active during one and the same arithmetic task. Figuratively speaking, it is as if the plus key on the calculator were constantly changing its location. It was the same with subtraction. Researchers also refer to this as “dynamic coding.”

“This study marks an important step towards a better understanding of one of our most important symbolic abilities, namely calculating with numbers,” stresses Mormann. The two teams from Bonn and Tübingen now want to investigate exactly what role the nerve cells found play in this.

Funding:

The study was funded by the German Research Foundation (DFG) and the Volkswagen Foundation.

Here’s a link to and a citation for the paper,

Neuronal codes for arithmetic rule processing in the human brain by Esther F. Kutter, Jan Boström, Christian E. Elger, Andreas Nieder, Florian Mormann. Current Biology, 2022; DOI: 10.1016/j.cub.2022.01.054 Published February 14, 2022

This paper appears to be open access.

Neurons for the sounds of singing

This work comes from the Massachusetts Institute of Technology (MIT), according to a February 22, 2022 news item on ScienceDaily,

For the first time, MIT neuroscientists have identified a population of neurons in the human brain that lights up when we hear singing, but not other types of music.

Pretty nifty, eh? As is the news release headline with its nod to a classic Hollywood musical and song, from a February 22, 2022 MIT news release (also on EurekAlert),

Singing in the brain

These neurons, found in the auditory cortex, appear to respond to the specific combination of voice and music, but not to either regular speech or instrumental music. Exactly what they are doing is unknown and will require more work to uncover, the researchers say.

“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” says Sam Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.

The work builds on a 2015 study in which the same research team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain’s auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.

“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart,” says Norman-Haignere.

Norman-Haignere is the lead author of the study, which appears today in the journal Current Biology. Josh McDermott, an associate professor of brain and cognitive sciences, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, both members of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM), are the senior authors of the study.

Neural recordings

In their 2015 study, the researchers used fMRI to scan the brains of participants as they listened to a collection of 165 sounds, including different types of speech and music, as well as everyday sounds such as finger tapping or a dog barking. For that study, the researchers devised a novel method of analyzing the fMRI data, which allowed them to identify six neural populations with different response patterns, including the music-selective population and another population that responds selectively to speech.

In the new study, the researchers hoped to obtain higher-resolution data using a technique known as electrocorticography (ECoG), which allows electrical activity to be recorded by electrodes placed inside the skull. This offers a much more precise picture of electrical activity in the brain compared to fMRI, which measures blood flow in the brain as a proxy of neuron activity.

“With most of the methods in human cognitive neuroscience, you can’t see the neural representations,” Kanwisher says. “Most of the kind of data we can collect can tell us that here’s a piece of brain that does something, but that’s pretty limited. We want to know what’s represented in there.”

Electrocorticography cannot typically be performed in humans because it is an invasive procedure, but it is often used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures. Patients are monitored over several days so that doctors can determine where their seizures are originating before operating. During that time, if patients agree, they can participate in studies that involve measuring their brain activity while performing certain tasks. For this study, the MIT team was able to gather data from 15 participants over several years.

For those participants, the researchers played the same set of 165 sounds that they used in the earlier fMRI study. The location of each patient’s electrodes was determined by their surgeons, so some did not pick up any responses to auditory input, but many did. Using a novel statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data that were recorded by each electrode.
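(A toy illustration of the general idea, from me rather than the researchers: each electrode’s response to the 165 sounds can be modeled as a weighted mix of a small number of underlying population response profiles, which turns the inference into a matrix factorization problem. The team’s statistical method is their own; the sketch below just shows the flavor using off-the-shelf non-negative matrix factorization:

# Toy sketch: factor an (electrodes x sounds) response matrix into a few
# shared "neural population" profiles plus per-electrode weights.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_electrodes, n_sounds, n_populations = 40, 165, 6

# Simulated ground truth: 6 population profiles mixed into 40 electrodes.
true_profiles = rng.random((n_populations, n_sounds))
true_weights = rng.random((n_electrodes, n_populations))
responses = true_weights @ true_profiles + 0.01 * rng.random((n_electrodes, n_sounds))

model = NMF(n_components=n_populations, init="nndsvda", max_iter=500)
weights_est = model.fit_transform(responses)  # per-electrode population weights
profiles_est = model.components_              # inferred population response profiles
print(profiles_est.shape)                     # (6, 165): one profile per population

A population whose inferred profile is high for sung clips and low for everything else would be the “singing” population described below.)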

“When we applied this method to this data set, this neural response pattern popped out that only responded to singing,” Norman-Haignere says. “This was a finding we really didn’t expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for.”

That song-specific population of neurons had very weak responses to either speech or instrumental music, and therefore is distinct from the music- and speech-selective populations identified in their 2015 study.

Music in the brain

In the second part of their study, the researchers devised a mathematical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study. Because fMRI can cover a much larger portion of the brain, this allowed them to determine more precisely the locations of the neural populations that respond to singing.

“This way of combining ECoG and fMRI is a significant methodological advance,” McDermott says. “A lot of people have been doing ECoG over the past 10 or 15 years, but it’s always been limited by this issue of the sparsity of the recordings. Sam is really the first person who figured out how to combine the improved resolution of the electrode recordings with fMRI data to get better localization of the overall responses.”

The song-specific hotspot that they found is located at the top of the temporal lobe, near regions that are selective for language and music. That location suggests that the song-specific population may be responding to features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing, the researchers say.

The researchers now hope to learn more about what aspects of singing drive the responses of these neurons. They are also working with MIT Professor Rebecca Saxe’s lab to study whether infants have music-selective areas, in hopes of learning more about when and how these brain regions develop.

Here’s a link to and a citation for the paper,

A neural population selective for song in human auditory cortex by Sam V. Norman-Haignere, Jenelle Feather, Dana Boebinger, Peter Brunner, Anthony Ritaccio, Josh H. McDermott, Gerwin Schalk, Nancy Kanwisher. Current Biology, 2022. DOI: 10.1016/j.cub.2022.01.069 Published February 22, 2022.

This paper appears to be open access.

I couldn’t resist,