
Sleep helps artificial neural networks (ANNs) to keep learning without “catastrophic forgetting”

A November 18, 2022 news item on phys.org describes some of the latest work on neuromorphic (brainlike) computing from the University of California at San Diego (UCSD or UC San Diego), Note: Links have been removed,

Depending on age, humans need 7 to 13 hours of sleep per 24 hours. During this time, a lot happens: Heart rate, breathing and metabolism ebb and flow; hormone levels adjust; the body relaxes. Not so much in the brain.

“The brain is very busy when we sleep, repeating what we have learned during the day,” said Maxim Bazhenov, Ph.D., professor of medicine and a sleep researcher at University of California San Diego School of Medicine. “Sleep helps reorganize memories and presents them in the most efficient way.”

In previous published work, Bazhenov and colleagues have reported how sleep builds rational memory, the ability to remember arbitrary or indirect associations between objects, people or events, and protects against forgetting old memories.

Artificial neural networks leverage the architecture of the human brain to improve numerous technologies and systems, from basic science and medicine to finance and social media. In some ways, they have achieved superhuman performance, such as computational speed, but they fail in one key aspect: When artificial neural networks learn sequentially, new information overwrites previous information, a phenomenon called catastrophic forgetting.

“In contrast, the human brain learns continuously and incorporates new data into existing knowledge,” said Bazhenov, “and it typically learns best when new training is interleaved with periods of sleep for memory consolidation.”

Writing in the November 18, 2022 issue of PLOS Computational Biology, senior author Bazhenov and colleagues discuss how biological models may help mitigate the threat of catastrophic forgetting in artificial neural networks, boosting their utility across a spectrum of research interests. 

A November 18, 2022 UC San Diego news release (also on EurekAlert), which originated the news item, adds some technical details,

The scientists used spiking neural networks that artificially mimic natural neural systems: Instead of information being communicated continuously, it is transmitted as discrete events (spikes) at certain time points.

They found that when the spiking networks were trained on a new task, but with occasional off-line periods that mimicked sleep, catastrophic forgetting was mitigated. Like the human brain, said the study authors, “sleep” for the networks allowed them to replay old memories without explicitly using old training data. 

Memories are represented in the human brain by patterns of synaptic weight — the strength or amplitude of a connection between two neurons. 

“When we learn new information,” said Bazhenov, “neurons fire in specific order and this increases synapses between them. During sleep, the spiking patterns learned during our awake state are repeated spontaneously. It’s called reactivation or replay. 

“Synaptic plasticity, the capacity to be altered or molded, is still in place during sleep and it can further enhance synaptic weight patterns that represent the memory, helping to prevent forgetting or to enable transfer of knowledge from old to new tasks.”

When Bazhenov and colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting. 

“It meant that these networks could learn continuously, like humans or animals. Understanding how human brain processes information during sleep can help to augment memory in human subjects. Augmenting sleep rhythms can lead to better memory. 

“In other projects, we use computer models to develop optimal strategies to apply stimulation during sleep, such as auditory tones, that enhance sleep rhythms and improve learning. This may be particularly important when memory is non-optimal, such as when memory declines in aging or in some conditions like Alzheimer’s disease.”
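Although the paper itself is about spiking networks, the interleaved 'train, then sleep' schedule is easy to sketch in ordinary code. Here's a minimal toy version in Python/NumPy, my own illustration rather than the authors' method: supervised training on a new task alternates with offline 'sleep' phases in which the network is driven only by noise and updated with a local Hebbian rule, so previously strengthened connections can be reinforced without any stored training data. Every name and parameter below is invented for the example,

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 5
W = 0.01 * rng.standard_normal((n_out, n_in))  # one layer of synaptic weights

def awake_training_step(x, y, lr=0.1):
    # Supervised delta-rule update on one example from the *new* task.
    global W
    out = np.tanh(W @ x)
    W += lr * np.outer(y - out, x)

def sleep_phase(steps=200, lr=0.01, decay=0.002):
    # Offline phase: no labels, no stored data. The network is driven by
    # noise; strong existing weights make their preferred activity patterns
    # recur ("replay"), and a local Hebbian rule reinforces them.
    global W
    for _ in range(steps):
        x = (rng.random(n_in) < 0.2).astype(float)   # spontaneous activity
        out = np.tanh(W @ x)
        W += lr * np.outer(out, x) - decay * W       # Hebbian growth + decay

# Toy "new task": random binary inputs paired with random +/-1 targets.
new_task = [((rng.random(n_in) < 0.5).astype(float),
             np.where(rng.random(n_out) < 0.5, 1.0, -1.0))
            for _ in range(50)]

# Interleaved schedule: a round of new-task training, then a sleep phase.
for epoch in range(10):
    for x, y in new_task:
        awake_training_step(x, y)
    sleep_phase()
```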

Here’s a link to and a citation for the paper,

Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation by Ryan Golden, Jean Erik Delanois, Pavel Sanda, Maxim Bazhenov. PLOS [Computational Biology] DOI: https://doi.org/10.1371/journal.pcbi.1010628 Published: November 18, 2022

This paper is open access.

New chip for neuromorphic computing runs at a fraction of the energy of today’s systems

An August 17, 2022 news item on Nanowerk announces big (so to speak) claims from a team researching neuromorphic (brainlike) computer chips,

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of artificial intelligence (AI) applications–all at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, and range from smart watches to VR headsets, smart earbuds, smart sensors in factories, and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as the state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that runs computations in memory, it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

…

An August 17, 2022 University of California at San Diego (UCSD) news release (also on EurekAlert), which originated the news item, provides more detail than usually found in a news release,

“The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility,” said Weier Wan, the paper’s first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering. 

The research team, co-led by bioengineers at the University of California San Diego, presents their results in the Aug. 17 [2022] issue of Nature.

Currently, AI computing is both power hungry and computationally expensive. Most AI applications on edge devices involve moving data from the devices to the cloud, where the AI processes and analyzes it. Then the results are moved back to the device. That’s because most edge devices are battery-powered and as a result only have a limited amount of power that can be dedicated to computing. 

By reducing power consumption needed for AI inference at the edge, this NeuRRAM chip could lead to more robust, smarter and accessible edge devices and smarter manufacturing. It could also lead to better data privacy as the transfer of data from devices to the cloud comes with increased security risks. 

On AI chips, moving data from memory to computing units is one major bottleneck. 

“It’s the equivalent of doing an eight-hour commute for a two-hour work day,” Wan said. 

To solve this data transfer issue, researchers used what is known as resistive random-access memory, a type of non-volatile memory that allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan’s advisor at Stanford and a main contributor to this work. Computation with RRAM chips is not necessarily new, but generally it leads to a decrease in the accuracy of the computations performed on the chip and a lack of flexibility in the chip’s architecture. 

“Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago,” Cauwenberghs said.  “What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms.”

A carefully crafted methodology was key to the work with multiple levels of “co-optimization” across the abstraction layers of hardware and software, from the design of the chip to its configuration to run various AI tasks. In addition, the team made sure to account for various constraints that span from memory device physics to circuits and network architecture. 

“This chip now provides us with a platform to address these problems across the stack from devices and circuits to algorithms,” said Siddharth Joshi, an assistant professor of computer science and engineering at the University of Notre Dame, who started working on the project as a Ph.D. student and postdoctoral researcher in Cauwenberghs’ lab at UC San Diego.

Chip performance

Researchers measured the chip’s energy efficiency using a figure of merit known as the energy-delay product, or EDP. EDP combines both the amount of energy consumed for every operation and the amount of time it takes to complete the operation. By this measure, the NeuRRAM chip achieves 1.6 to 2.3 times lower EDP (lower is better) and 7 to 13 times higher computational density than state-of-the-art chips.
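To see how the metric works, here's the arithmetic with invented numbers (these are illustrative, not figures from the paper),

```python
def edp(energy_per_op_pj, delay_per_op_ns):
    # Energy-delay product: multiplying energy by delay penalizes a chip
    # that saves energy only by running slower (or vice versa); lower is better.
    return energy_per_op_pj * delay_per_op_ns

baseline = edp(energy_per_op_pj=2.0, delay_per_op_ns=10.0)   # 20.0 pJ*ns
candidate = edp(energy_per_op_pj=1.2, delay_per_op_ns=8.0)   #  9.6 pJ*ns
print(baseline / candidate)  # ~2.1x lower EDP for the candidate chip
```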

Researchers ran various AI tasks on the chip. It achieved 99% accuracy on a handwritten digit recognition task; 85.7% on an image classification task; and 84.7% on a Google speech command recognition task. The chip also achieved a 70% reduction in image-reconstruction error on an image-recovery task. These results are comparable to existing digital chips that perform computation under the same bit-precision, but with drastic savings in energy.

Researchers point out that one key contribution of the paper is that all the results featured were obtained directly on the hardware. In much previous work on compute-in-memory chips, AI benchmark results were often obtained partially by software simulation.

Next steps include improving architectures and circuits and scaling the design to more advanced technology nodes. Researchers also plan to tackle other applications, such as spiking neural networks.

“We can do better at the device level, improve circuit design to implement additional features and address diverse applications with our dynamic NeuRRAM platform,” said Rajkumar Kubendran, an assistant professor at the University of Pittsburgh, who started work on the project while a Ph.D. student in Cauwenberghs’ research group at UC San Diego.

In addition, Wan is a founding member of a startup that works on productizing the compute-in-memory technology. “As a researcher and an engineer, my ambition is to bring research innovations from labs into practical use,” Wan said.

New architecture 

The key to NeuRRAM’s energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. 
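For readers who like to see the idea in symbols, here's an idealized sketch (my own, ignoring device noise and other non-idealities; the array size and bit width are arbitrary) of the operation such a crossbar array performs in a single cycle: input voltages applied across a grid of programmable conductances yield outputs that are, by Ohm's and Kirchhoff's laws, a matrix-vector product, which the neuron circuits then digitize,

```python
import numpy as np

rng = np.random.default_rng(1)

# An RRAM crossbar stores a weight matrix as conductances. All rows and
# columns participate in a single computing cycle, so one "cycle" below
# is one full matrix-vector product.
G = rng.uniform(0.0, 1.0, size=(64, 128))   # conductances ~ weights
v_in = rng.uniform(-1.0, 1.0, size=128)     # input activations as voltages

analog_out = G @ v_in                       # Kirchhoff current summation

def adc(x, bits=8, full_scale=None):
    # Idealized analog-to-digital conversion of the sensed outputs.
    full_scale = full_scale or np.max(np.abs(x))
    levels = 2 ** (bits - 1) - 1
    return np.round(x / full_scale * levels) / levels * full_scale

digital_out = adc(analog_out)
```

(In real devices conductances are non-negative, so signed weights are typically encoded with pairs of devices; the sketch glosses over that.)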

In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of the RRAM weights. The neuron’s connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure.

To make sure that the accuracy of the AI computations can be preserved across various neural network architectures, researchers developed a set of hardware-algorithm co-optimization techniques. The techniques were verified on various neural networks including convolutional neural networks, long short-term memory, and restricted Boltzmann machines.

As a neuromorphic AI chip, NeuRRAM performs parallel distributed processing across 48 neurosynaptic cores. To simultaneously achieve high versatility and high efficiency, NeuRRAM supports data-parallelism by mapping a layer in the neural network model onto multiple cores for parallel inference on multiple data. Also, NeuRRAM offers model-parallelism by mapping different layers of a model onto different cores and performing inference in a pipelined fashion.
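One schematic way to picture those two mapping strategies (my illustration; apart from the 48-core count, every name below is invented),

```python
CORES = list(range(48))                      # 48 neurosynaptic cores, as in NeuRRAM
layers = ["conv1", "conv2", "fc1", "fc2"]    # hypothetical four-layer model

# Data parallelism: the same layer is replicated on several cores, and each
# replica handles a different input sample at the same time.
data_parallel = {core: ("conv1", f"sample_{i}")
                 for i, core in enumerate(CORES[:8])}

# Model parallelism: different layers go to different cores; samples stream
# through the cores as a pipeline.
stage_of = {core: layer for core, layer in zip(CORES, layers)}

def pipeline_schedule(samples):
    # While core 1 runs layer 2 on sample t, core 0 can already run
    # layer 1 on sample t+1 (assuming one cycle per stage).
    for t, s in enumerate(samples):
        for stage, layer in enumerate(layers):
            print(f"cycle {t + stage}: core {stage} runs {layer} on {s}")

pipeline_schedule(["img0", "img1", "img2"])
```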

An international research team

The work is the result of an international team of researchers. 

The UC San Diego team designed the CMOS circuits that implement the neural functions interfacing with the RRAM arrays to support the synaptic functions in the chip’s architecture, for high efficiency and versatility. Wan, working closely with the entire team, implemented the design; characterized the chip; trained the AI models; and executed the experiments. Wan also developed a software toolchain that maps AI applications onto the chip. 

The RRAM synapse array and its operating conditions were extensively characterized and optimized at Stanford University. 

The RRAM array was fabricated and integrated onto CMOS at Tsinghua University. 

The team at Notre Dame contributed to both the design and architecture of the chip and the subsequent machine learning model design and training.

The research started as part of the National Science Foundation-funded Expeditions in Computing project on Visual Cortex on Silicon at Penn State University, with continued funding support from the Office of Naval Research Science of AI program, the Semiconductor Research Corporation and DARPA [US Defense Advanced Research Projects Agency] JUMP program, and Western Digital Corporation.

Here’s a link to and a citation for the paper,

A compute-in-memory chip based on resistive random-access memory by Weier Wan, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H.-S. Philip Wong & Gert Cauwenberghs. Nature volume 608, pages 504–512 (2022) DOI: https://doi.org/10.1038/s41586-022-04992-8 Published: 17 August 2022 Issue Date: 18 August 2022

This paper is open access.

Antikythera: a new Berggruen Institute program and a 2,000 year old computer

Starting with the new Antikythera program at the Berggruen Institute before moving on to the Antikythera itself, one of my favourite scientific mysteries.

Antikythera program at the Berggruen Institute

An October 5, 2022 Berggruen Institute news release (also received via email) announces a program exploring the impact of planetary-scale computation and invites applications for the program’s first ‘studio’,

Antikythera is convening over 75 philosophers, technologists, designers, and scientists in seminars, design research studios, and global salons to create new models that shift computation toward more viable long-term futures: https://antikythera.xyz/

Applications are now open for researchers to join Antikythera’s fully-funded five month Studio in 2023, launching at the Berggruen Institute in Los Angeles: https://antikythera.xyz/apply/

Today [October 5, 2022] the Berggruen Institute announced that it will incubate Antikythera, an initiative focused on understanding and shaping the impact of computation on philosophy, global society, and planetary systems. Antikythera will engage a wide range of thinkers at the intersections of software, speculative thought, governance, and design to explore computation’s ultimate pitfalls and potentials. Research will range from the significance of machine intelligence and the geopolitics of AI to new economic models and the long-term project of composing a healthy planetary society.

“Against a background of rising geopolitical tensions and an accelerating climate crisis, technology has outpaced our theory. As such, we are less interested in applying philosophy to the topic of computation than in generating new ideas from a direct encounter with it,” said Benjamin Bratton, Professor at the University of California, San Diego, and director of the new program. “The purpose of Antikythera is to reorient the question ‘what is computation for?’ and to model what it may become. That is a project that is not only technological but also philosophical, political, and ecological.”

Antikythera will begin this exploration with its Studio program, applications for which are now open at antikythera.xyz/apply/. The Studio program will take place over five months in spring 2023 and bring together researchers from across the world to work in multidisciplinary teams. These teams will work on speculative design proposals, and join 75+ Affiliate Researchers for workshops, talks, and design sprints that inform thinking and propositions around Antikythera’s core research topics. Affiliate Researchers will include philosophers, technologists, designers, scientists, and other thinkers and practitioners. Applications for the program are due November 11, 2022.

Program project outcomes will include new combinations of theory, cinema, software, and policy. The five initial research themes animating this work are:

Synthetic Intelligence: the longer-term implications of machine intelligence, particularly as seen through the lens of artificial language

Hemispherical Stacks: the multipolar geopolitics of planetary computation

Recursive Simulations: the emergence of simulation as an epistemological technology, from scientific simulation to VR/AR

Synthetic Catallaxy: the ongoing organization of computational economics, pricing, and planning

Planetary Sapience: the evolutionary emergence of natural/artificial intelligence, and its role in composing a viable planetary condition

The program is named after the Antikythera Mechanism, the world’s first known computer, used more than 2,000 years ago to predict the movements of constellations and eclipses decades in advance. As an origin point for computation, it combined calculation, orientation and cosmology, dimensions of practice whose synergies may be crucial in setting our planetary future on a better course than it is on today.

Bratton continues, “The evolution of planetary intelligence has also meant centuries of destruction; its future must be radically different. We must ask, what future would make this past worth it? Taking the question seriously demands a different sort of speculative and practical philosophy and a corresponding sort of computation.”

Bratton is a philosopher of technology and Professor at the University of California, San Diego, and author of many books including The Stack: On Software and Sovereignty (MIT Press). His most recent book is The Revenge of the Real: Politics for a Post-Pandemic World (Verso Books), exploring the implications for political philosophy of COVID-19. Associate directors are Ben Cerveny, technologist, speculative designer, and director of the Amsterdam-based Foundation for Public Code, and Stephanie Sherman, strategist, writer, and director of the MA Narrative Environments program at Central St. Martins, London. The Studio is directed by architect and creative director Nicolay Boyadjiev.

In addition to the Studio, program activities will include a series of invitation-only planning salons inviting philosophers, designers, technologists, strategists, and others to discuss how to best interpret and intervene in the future of planetary-scale computation, and the historic philosophical and geopolitical force that it represents. These salons began in London in October 2022 and will continue in locations across the world including in Berlin; Amsterdam; Los Angeles; San Francisco; New York; Mexico City; Seoul; and Venice.

The announcement of Antikythera at the Berggruen Institute follows the recent spinoff of the Transformations of the Human school, successfully incubated at the Institute from 2017-2021.

“Computational technology covering the planet represents one of the largest and most urgent philosophical opportunities of our time,” said Nicolas Berggruen, Chairman and Co-Founder of the Berggruen Institute. “It is with great pleasure that we invite Antikythera to join our work at the Institute. Together, we can develop new ways of thinking to support planetary flourishing in the years to come.”

Web: Antikythera.xyz
Social: Antikythera_xyz on Twitter, Instagram, and Linkedin.
Email: contact@antikythera.xyz

Applications were opened on October 4, 2022; the deadline is November 11, 2022, followed by interviews. Participants will be confirmed by December 11, 2022. Here are a few more details from the application portal,

Who should apply to the Studio?

Antikythera hopes to bring together a diverse cohort of researchers from different backgrounds, disciplines, perspectives, and levels of experience. The Antikythera research themes engage with global challenges that necessitate harnessing a diversity of thought and expertise. Anyone who is passionate about the research themes of the Antikythera program is strongly encouraged to apply. We accept applications from every discipline and background, from established to emerging researchers. Applicants do not need to meet any specific set of educational or professional requirements.

Is the program free?

Yes, the program is free. You will be supported to cover the cost of housing, living expenses, and all program-related fieldwork travel along with a monthly stipend. Any other associated program costs will also be covered by the program.

Is the program in person and full-time?

Yes, the Studio program requires a full-time commitment (PhD students must also be on leave to participate). There is no part-time participation option. Though we understand this commitment may be challenging logistically for some individuals, we believe it is important for the Studio’s success. We will do our best to enable an environment that is comfortable and safe for participants from all backgrounds. Please do not hesitate to contact us if you may require any accommodations or have questions regarding the full-time, in-person nature of the program.

Do I need a Visa?

The Studio is a traveling program with time spent between the USA, Mexico, and South Korea. Applicable visa requirements set by these countries will apply and will vary depending on your nationality. We are aware that current visa appointment wait times may preclude some individuals who would require a brand new visa from being able to enter the US by January, and we are working to ensure access to the program for all (if not for January 2023, then for future Studio cohorts). We will therefore ask you to identify your country of origin and passport/visa status in the application form so we can work to enable your participation. Anyone who is passionate about the research themes of the Antikythera program is strongly encouraged to apply.

For those who like to put a face to a name, you can find out more about the program and the people behind it on this page.

Antikythera, a 2000 year old computer & 100 year old mystery

As noted in the Berggruen Institute news release, the Antikythera Mechanism is considered the world’s first computer (as far as we know). The image below is one of the best known illustrations of the device as visualized by researchers,

Exploded model of the Cosmos gearing of the Antikythera Mechanism. ©2020 Tony Freeth.

Briefly, the Antikythera mechanism was discovered at the turn of the twentieth century in 1901 by sponge divers off the coast of Greece. Philip Chrysopoulos’s September 21, 2022 article for The Greek Reporter gives more details in an exuberant style (Note: Links have been removed),

… now—more than 120 years later—the astounding machine has been recreated once again, using 3-D imagery, by a brilliant group of researchers from University College London (UCL).

Not only is the recreation a thing of great beauty and amazing genius, but it has also made possible a new understanding of how it worked.

Since only eighty-two fragments of the original mechanism are extant—comprising only one-third of the entire calculator—researchers were left stymied as to its full capabilities.

Until this moment [in 2020 according to the copyright for the image], the front of the mechanism, containing most of the gears, has been a bit of a Holy Grail for marine archeologists and astronomers.

Professor Tony Freeth says in an article published in the periodical Scientific Reports: “Ours is the first model that conforms to all the physical evidence and matches the descriptions in the scientific inscriptions engraved on the mechanism itself.”

“The sun, moon and planets are displayed in an impressive tour de force of ancient Greek brilliance,” Freeth said.

The largest surviving piece of the mechanism, referred to by researchers as “Fragment A,” has bearings, pillars, and a block. Another piece, known as “Fragment D,” has a mysterious disk along with an extraordinarily intricate 63-toothed gear and a plate.

The inscriptions on the back cover of the mechanism, only recently discovered by researchers, include a description of the cosmos, with the planets, shown by beads of various colors, moving on rings set around the inscriptions.

By employing the information gleaned from recent x-rays of the computer and their knowledge of ancient Greek mathematics, the UCL researchers have now shown that they can demonstrate how the mechanism determined the cycles of the planets Venus and Saturn.

Evaggelos Vallianatos, author of many books on the Antikythera Mechanism, wrote at Greek Reporter that it was much more than a mere mechanism. It was a sophisticated, mind-bogglingly complex astronomical computer, he said, “and Greeks made it.”

They employed advanced astronomy, mathematics, metallurgy, and engineering to do so, constructing the astronomical device 2,200 years ago. These scientific facts of the computer’s age and its flawless high-tech nature profoundly disturbed some of the scientists who studied it.

A few Western scientists of the twentieth century were shocked by the Antikythera Mechanism, Vallianatos said. They called it an astrolabe for several decades and refused to call it a computer. The astrolabe, a Greek invention, is a useful instrument for calculating the position of the Sun and other prominent stars. Yet, its technology is rudimentary compared to that of the Antikythera device.

In 2015, Kyriakos Efstathiou, a professor of mechanical engineering at the Aristotle University of Thessaloniki and head of the group which studied the Antikythera Mechanism, said: “All of our research has shown that our ancestors used their deep knowledge of astronomy and technology to construct such mechanisms, and based only on this conclusion, the history of technology should be re-written because it sets its start many centuries back.”

The professor further explained that the Antikythera Mechanism is undoubtedly the first machine of antiquity which can be classified by the scientific term “computer,” because “it is a machine with an entry where we can import data, and this machine can bring and create results based on a scientific mathematical scale.”

In 2016, yet another astounding discovery was made when an inscription on the device was revealed—something like a label or a user’s manual for the device.

It included a discussion of the colors of eclipses, details used at the time in the making of astrological predictions, including the ability to see exact times of eclipses of the moon and the sun, as well as the correct movements of celestial bodies.

Inscribed numbers 76, 19 and 223 show maker “was a Pythagorean”

On one side of the device lies a handle that begins the movement of the whole system. By turning the handle and rotating the gauges in the front and rear of the mechanism, the user could set a date that would reveal the astronomical phenomena that would potentially occur around the Earth.

Physicist Yiannis Bitsakis has said that today the NASA [US National Aeronautics and Space Administration] website can detail all the eclipses of the past and those that are to occur in the future. However, “what we do with computers today, was done with the Antikythera Mechanism about 2000 years ago,” he said.

The stars and night heavens have been important to peoples around the world. (This September 18, 2020 posting highlights millennia old astronomy as practiced by indigenous peoples in North America, Australia, and elsewhere. There’s also this March 17, 2022 article “How did ancient civilizations make sense of the cosmos, and what did they get right?” by Susan Bell of University of Southern California on phys.org.)

I have covered the Antikythera in three previous postings (March 17, 2021, August 3, 2016, and October 2, 2012) with the 2021 posting being the most comprehensive and the one featuring Professor Tony Freeth’s latest breakthrough.

However, 2022 has blessed us with more as this April 11, 2022 article by Jennifer Ouellette for Ars Technica reveals (Note: Links have been removed),

The mysterious Antikythera mechanism—an ancient device believed to have been used for tracking the heavens—has fascinated scientists and the public alike since it was first recovered from a shipwreck over a century ago. Much progress has been made in recent years to reconstruct the surviving fragments and learn more about how the mechanism might have been used. And now, members of a team of Greek researchers believe they have pinpointed the start date for the Antikythera mechanism, according to a preprint posted to the physics arXiv repository. Knowing that “day zero” is critical to ensuring the accuracy of the device.

“Any measuring system, from a thermometer to the Antikythera mechanism, needs a calibration in order to [perform] its calculations correctly,” co-author Aristeidis Voulgaris of the Thessaloniki Directorate of Culture and Tourism in Greece told New Scientist. “Of course it wouldn’t have been perfect—it’s not a digital computer, it’s gears—but it would have been very good at predicting solar and lunar eclipses.”

Last year, an interdisciplinary team at University College London (UCL) led by mechanical engineer Tony Freeth made global headlines with their computational model, revealing a dazzling display of the ancient Greek cosmos. The team is currently building a replica mechanism, moving gears and all, using modern machinery. The display is described in the inscriptions on the mechanism’s back cover, featuring planets moving on concentric rings with marker beads as indicators. X-rays of the front cover accurately represent the cycles of Venus and Saturn—462 and 442 years, respectively. 

The Antikythera mechanism was likely built sometime between 200 BCE and 60 BCE. However, in February 2022, Freeth suggested that the famous Greek mathematician and inventor Archimedes (sometimes referred to as the Leonardo da Vinci of antiquity) may have actually designed the mechanism, even if he didn’t personally build it. (Archimedes died in 212 BCE at the hands of a Roman soldier during the siege of Syracuse.) There are references in the writings of Cicero (106-43 BCE) to a device built by Archimedes for tracking the movement of the Sun, Moon, and five planets; it was a prized possession of the Roman general Marcus Claudius Marcellus. According to Freeth, that description is remarkably similar to the Antikythera mechanism, suggesting it was not a one-of-a-kind device.

Voulgaris and his co-authors based their new analysis on a 223-month cycle called a Saros, represented by a spiral inset on the back of the device. The cycle covers the time it takes for the Sun, Moon, and Earth to return to their same positions and includes associated solar and lunar eclipses. Given our current knowledge about how the device likely functioned, as well as the inscriptions, the team believed the start date would coincide with an annular solar eclipse.

“This is a very specific and unique date [December 22, 178 BCE],” Voulgaris said. “In one day, there occurred too many astronomical events for it to be coincidence. This date was a new moon, the new moon was at apogee, there was a solar eclipse, the Sun entered into the constellation Capricorn, it was the winter solstice.”

Others have made independent calculations and arrived at a different conclusion: the calibration date would more likely fall sometime in the summer of 204 BCE, although Voulgaris countered that this doesn’t explain why the winter solstice is engraved so prominently on the device.

“The eclipse predictions on the [device’s back] contain enough astronomical information to demonstrate conclusively that the 18-year series of lunar and solar eclipse predictions started in 204 BCE,” Alexander Jones of New York University told New Scientist, adding that there have been four independent calculations of this. “The reason such a dating is possible is because the Saros period is not a highly accurate equation of lunar and solar periodicities, so every time you push forward by 223 lunar months… the quality of the prediction degrades.”
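The calendrical arithmetic behind the numbers in this story, 223, 19, and 76, the same numbers inscribed on the device, is easy to check; the synodic month below is the modern mean value,

```python
SYNODIC_MONTH = 29.530589          # mean days from one new moon to the next

saros = 223 * SYNODIC_MONTH        # ~6585.3 days: the eclipse cycle
metonic = 235 * SYNODIC_MONTH      # ~6939.7 days: 19 years (the inscribed "19")
callippic = 4 * metonic - 1        # ~76 years (the inscribed "76")

print(saros / 365.25)              # ~18.03 years (18 years, ~11 days, ~8 hours)
print(metonic / 365.25)            # ~19.00 years
print(callippic / 365.25)          # ~76.00 years

# The ~8-hour remainder means each successive eclipse in a Saros series
# occurs about a third of a day later, shifting where on Earth it is visible.
print((saros % 1) * 24)            # ~7.7 hours
```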

Read Ouellette’s April 11, 2022 article for a pretty accessible description of the work involved in establishing the date. Here’s a link to and a citation for the latest attempt to date the Antikythera,

The Initial Calibration Date of the Antikythera Mechanism after the Saros spiral mechanical Apokatastasis by Aristeidis Voulgaris, Christophoros Mouratidis, Andreas Vossinakis. arXiv:2203.15045 Submitted: March 28, 2022

It’s open access. The calculations are beyond me; otherwise, it’s quite readable.

Getting back to the Berggruen Institute and its Antikythera program/studio, good luck to all the applicants (the Antikythera application portal).

Better recording with flexible backing on a brain-computer interface (BCI)

This work has already been patented, from a March 15, 2022 news item on ScienceDaily,

Engineering researchers have invented an advanced brain-computer interface with a flexible and moldable backing and penetrating microneedles. Adding a flexible backing to this kind of brain-computer interface allows the device to more evenly conform to the brain’s complex curved surface and to more uniformly distribute the microneedles that pierce the cortex. The microneedles, which are 10 times thinner than a human hair, protrude from the flexible backing, penetrate the surface of the brain tissue without piercing surface venules, and record signals from nearby nerve cells evenly across a wide area of the cortex.

This novel brain-computer interface has thus far been tested in rodents. The details were published online on February 25 [2022] in the journal Advanced Functional Materials. This work is led by a team in the lab of electrical engineering professor Shadi Dayeh at the University of California San Diego, together with researchers at Boston University led by biomedical engineering professor Anna Devor.

Caption: Artist rendition of the flexible, conformable, transparent backing of the new brain-computer interface with penetrating microneedles developed by a team led by engineers at the University of California San Diego in the laboratory of electrical engineering professor Shadi Dayeh. The smaller illustration at bottom left shows the current technology in experimental use called Utah Arrays. Credit: Shadi Dayeh / UC San Diego / SayoStudio

A March 14, 2022 University of California at San Diego news release (also on EurekAlert but published March 15, 2022), which originated the news item, delves further into the topic,

This new brain-computer interface is on par with and outperforms the “Utah Array,” which is the existing gold standard for brain-computer interfaces with penetrating microneedles. The Utah Array has been demonstrated to help stroke victims and people with spinal cord injury. People with implanted Utah Arrays are able to use their thoughts to control robotic limbs and other devices in order to restore some everyday activities such as moving objects.

The backing of the new brain-computer interface is flexible, conformable, and reconfigurable, while the Utah Array has a hard and inflexible backing. The flexibility and conformability of the backing of the novel microneedle-array favors closer contact between the brain and the electrodes, which allows for better and more uniform recording of the brain-activity signals. Working with rodents as model species, the researchers have demonstrated stable broadband recordings producing robust signals for the duration of the implant which lasted 196 days. 

In addition, the way the soft-backed brain-computer interfaces are manufactured allows for larger sensing surfaces, which means that a significantly larger area of the brain surface can be monitored simultaneously. In the Advanced Functional Materials paper, the researchers demonstrate that a penetrating microneedle array with 1,024 microneedles successfully recorded signals triggered by precise stimuli from the brains of rats. This represents ten times more microneedles and ten times the area of brain coverage, compared to current technologies.

Thinner and transparent backings

These soft-backed brain-computer interfaces are thinner and lighter than the traditional, glass backings of these kinds of brain-computer interfaces. The researchers note in their Advanced Functional Materials paper that light, flexible backings may reduce irritation of the brain tissue that contacts the arrays of sensors. 

The flexible backings are also transparent. In the new paper, the researchers demonstrate that this transparency can be leveraged to perform fundamental neuroscience research involving animal models that would not be possible otherwise. The team, for example, demonstrated simultaneous electrical recording from arrays of penetrating micro-needles as well as optogenetic photostimulation.

Two-sided lithographic manufacturing

The flexibility, larger microneedle array footprints, reconfigurability and transparency of the backings of the new brain sensors are all thanks to the double-sided lithography approach the researchers used. 

Conceptually, starting from a rigid silicon wafer, the team’s manufacturing process allows them to build microscopic circuits and devices on both sides of the rigid silicon wafer. On one side, a flexible, transparent film is added on top of the silicon wafer. Within this film, a bilayer of titanium and gold traces is embedded so that the traces line up with where the needles will be manufactured on the other side of the silicon wafer. 

Working from the other side, after the flexible film has been added, all the silicon is etched away, except for free-standing, thin, pointed columns of silicon. These pointed columns of silicon are, in fact, the microneedles, and their bases align with the titanium-gold traces within the flexible layer that remains after the silicon has been etched away. These titanium-gold traces are patterned via standard and scalable microfabrication techniques, allowing scalable production with minimal manual labor. The manufacturing process offers the possibility of flexible array design and scalability to tens of thousands of microneedles.  

Toward closed-loop systems

Looking to the future, penetrating microneedle arrays with large spatial coverage will be needed to improve brain-machine interfaces to the point that they can be used in “closed-loop systems” that can help individuals with severely limited mobility. For example, this kind of closed-loop system might offer a person using a robotic hand real-time tactile feedback on the objects the robotic hand is grasping.

Tactile sensors on the robotic hand would sense the hardness, texture, and weight of an object. This information recorded by the sensors would be translated into electrical stimulation patterns which travel through wires outside the body to the brain-computer interface with penetrating microneedles. These electrical signals would provide information directly to the person’s brain about the hardness, texture, and weight of the object. In turn, the person would adjust their grasp strength based on sensed information directly from the robotic arm. 

This is just one example of the kind of closed-loop system that could be possible once penetrating microneedle arrays can be made larger to conform to the brain and coordinate activity across the “command” and “feedback” centers of the brain.
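Purely as a sketch of that information flow (every function, number, and encoding below is hypothetical, not from the paper),

```python
def closed_loop_step(hardness, texture, weight, grip_strength):
    # 1. Tactile sensors on the robotic hand report hardness, texture, weight.
    # 2. Translate those readings into a stimulation pattern for the
    #    penetrating microneedle array (a toy linear encoding).
    stim_pattern = [0.5 * hardness, 0.3 * texture, 0.2 * weight]
    # 3. The user, perceiving the stimulation, adjusts grip strength;
    #    modeled here as a simple proportional controller.
    target = 0.8 * weight + 0.2 * hardness
    new_grip = grip_strength + 0.5 * (target - grip_strength)
    return stim_pattern, new_grip

grip = 0.1
for _ in range(5):
    _, grip = closed_loop_step(hardness=0.6, texture=0.3, weight=0.9,
                               grip_strength=grip)
    print(round(grip, 3))   # grip converges toward the target strength
```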

Previously, the Dayeh laboratory invented and demonstrated the kinds of tactile sensors that would be needed for this kind of application, as highlighted in this video.

Pathway to commercialization

The advanced dual-side lithographic microfabrication processes described in this paper are patented (US 10856764). Dayeh co-founded Precision Neurotek Inc. to translate technologies innovated in his laboratory to advance the state of the art in clinical practice and to advance the fields of neuroscience and neurophysiology.

Here’s a link to and a citation for the paper,

Scalable Thousand Channel Penetrating Microneedle Arrays on Flex for Multimodal and Large Area Coverage Brain-Machine Interfaces by Sang Heon Lee, Martin Thunemann, Keundong Lee, Daniel R. Cleary, Karen J. Tonsfeldt, Hongseok Oh, Farid Azzazy, Youngbin Tchoe, Andrew M. Bourhis, Lorraine Hossain, Yun Goo Ro, Atsunori Tanaka, Kıvılcım Kılıç, Anna Devor, Shadi A. Dayeh. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202112045 First published (online): 25 February 2022

This paper is open access.

Fridge-free COVID-19 vaccines?

COVID-19 vaccines require cold storage conditions (in some cases, extraordinarily cold storage), which pose problems with both storage and distribution.

A September 7, 2021 news item on phys.org describes research that may make vaccine distribution and storage problems a thing of the past (Note: A link has been removed),

Nanoengineers at the University of California San Diego have developed COVID-19 vaccine candidates that can take the heat. Their key ingredients? Viruses from plants or bacteria.

The new fridge-free COVID-19 vaccines are still in the early stage of development. In mice, the vaccine candidates triggered high production of neutralizing antibodies against SARS-CoV-2, the virus that causes COVID-19. If they prove to be safe and effective in people, the vaccines could be a big game changer for global distribution efforts, including those in rural areas or resource-poor communities.

A September 7, 2021 University of California at San Diego (UCSD or UC San Diego) news release (also on EurekAlert), which originated the news item, delves further into the research,

“What’s exciting about our vaccine technology is that it is thermally stable, so it could easily reach places where setting up ultra-low temperature freezers, or having trucks drive around with these freezers, is not going to be possible,” said Nicole Steinmetz, a professor of nanoengineering and the director of the Center for Nano-ImmunoEngineering at the UC San Diego Jacobs School of Engineering.

The vaccines are detailed in a paper published Sept. 7 [2021] in the Journal of the American Chemical Society.

The researchers created two COVID-19 vaccine candidates. One is made from a plant virus, called cowpea mosaic virus. The other is made from a bacterial virus, or bacteriophage, called Q beta.

Both vaccines were made using similar recipes. The researchers used cowpea plants and E. coli bacteria to grow millions of copies of the plant virus and bacteriophage, respectively, in the form of ball-shaped nanoparticles. The researchers harvested these nanoparticles and then attached a small piece of the SARS-CoV-2 spike protein to the surface. The finished products look like an infectious virus so the immune system can recognize them, but they are not infectious in animals and humans. The small piece of the spike protein attached to the surface is what stimulates the body to generate an immune response against the coronavirus.

The researchers note several advantages of using plant viruses and bacteriophages to make their vaccines. For one, they can be easy and inexpensive to produce at large scales. “Growing plants is relatively easy and involves infrastructure that’s not too sophisticated,” said Steinmetz. “And fermentation using bacteria is already an established process in the biopharmaceutical industry.”

Another big advantage is that the plant virus and bacteriophage nanoparticles are extremely stable at high temperatures. As a result, the vaccines can be stored and shipped without needing to be kept cold. They also can be put through fabrication processes that use heat. The team is using such processes to package their vaccines into polymer implants and microneedle patches. These processes involve mixing the vaccine candidates with polymers and melting them together in an oven at temperatures close to 100 degrees Celsius. Being able to directly mix the plant virus and bacteriophage nanoparticles with the polymers from the start makes it easy and straightforward to create vaccine implants and patches. 

The goal is to give people more options for getting a COVID-19 vaccine and making it more accessible. The implants, which are injected underneath the skin and slowly release vaccine over the course of a month, would only need to be administered once. And the microneedle patches, which can be worn on the arm without pain or discomfort, would allow people to self-administer the vaccine.

“Imagine if vaccine patches could be sent to the mailboxes of our most vulnerable people, rather than having them leave their homes and risk exposure,” said Jon Pokorski, a professor of nanoengineering at the UC San Diego Jacobs School of Engineering, whose team developed the technology to make the implants and microneedle patches.

“If clinics could offer a one-dose implant to those who would have a really hard time making it out for their second shot, that would offer protection for more of the population and we could have a better chance at stemming transmission,” added Pokorski, who is also a founding faculty member of the university’s Institute for Materials Discovery and Design.

In tests, the team’s COVID-19 vaccine candidates were administered to mice either via implants, microneedle patches, or as a series of two shots. All three methods produced high levels of neutralizing antibodies in the blood against SARS-CoV-2.

Potential Pan-Coronavirus Vaccine

These same antibodies also neutralized the SARS virus, the researchers found.

It all comes down to the piece of the coronavirus spike protein that is attached to the surface of the nanoparticles. One of these pieces that Steinmetz’s team chose, called an epitope, is almost identical between SARS-CoV-2 and the original SARS virus.

“The fact that neutralization is so profound with an epitope that’s so well conserved among another deadly coronavirus is remarkable,” said co-author Matthew Shin, a nanoengineering Ph.D. student in Steinmetz’s lab. “This gives us hope for a potential pan-coronavirus vaccine that could offer protection against future pandemics.”

Another advantage of this particular epitope is that it is not affected by any of the SARS-CoV-2 mutations that have so far been reported. That’s because this epitope comes from a region of the spike protein that does not directly bind to cells. This is different from the epitopes in the currently administered COVID-19 vaccines, which come from the spike protein’s binding region. This is a region where a lot of the mutations have occurred. And some of these mutations have made the virus more contagious.

Epitopes from a nonbinding region are less likely to undergo these mutations, explained Oscar Ortega-Rivera, a postdoctoral researcher in Steinmetz’s lab and the study’s first author. “Based on our sequence analyses, the epitope that we chose is highly conserved amongst the SARS-CoV-2 variants.”

This means that the new COVID-19 vaccines could potentially be effective against the variants of concern, said Ortega-Rivera, and tests are currently underway to see what effect they have against the Delta variant, for example.
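For the curious, here's the flavour of sequence analysis being referred to, in miniature; the peptide strings below are invented placeholders, not the team's actual epitope,

```python
def percent_identity(a, b):
    # Fraction of aligned positions where the two sequences agree.
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

reference = "ACDEFGHIKLM"                   # placeholder epitope, 11 residues
variants = {
    "variant_1": "ACDEFGHIKLM",             # fully conserved -> 100.0
    "variant_2": "ACDEFGHIRLM",             # one substitution -> ~90.9
}

for name, seq in variants.items():
    print(name, round(percent_identity(reference, seq), 1))
```

A highly conserved epitope is one where this kind of score stays near 100% across all variants of concern.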

Plug and Play Vaccine

Another thing that gets Steinmetz really excited about this vaccine technology is the versatility it offers to make new vaccines. “Even if this technology does not make an impact for COVID-19, it can be quickly adapted for the next threat, the next virus X,” said Steinmetz.

Making these vaccines, she says, is “plug and play”: grow plant virus or bacteriophage nanoparticles from plants or bacteria, respectively, then attach a piece of the target virus, pathogen, or biomarker to the surface.

“We use the same nanoparticles, the same polymers, the same equipment, and the same chemistry to put everything together. The only variable really is the antigen that we stick to the surface,” said Steinmetz.

The resulting vaccines do not need to be kept cold. They can be packaged into implants or microneedle patches. Or, they can be directly administered in the traditional way via shots.

Steinmetz and Pokorski’s labs have used this recipe in previous studies to make vaccine candidates for diseases like HPV and cholesterol. And now they’ve shown that it works for making COVID-19 vaccine candidates as well.

Next Steps

The vaccines still have a long way to go before they make it into clinical trials. Moving forward, the team will test if the vaccines protect against infection from COVID-19, as well as its variants and other deadly coronaviruses, in vivo.

Here’s a link to and a citation for the paper,

Trivalent Subunit Vaccine Candidates for COVID-19 and Their Delivery Devices by Oscar A. Ortega-Rivera, Matthew D. Shin, Angela Chen, Veronique Beiss, Miguel A. Moreno-Gonzalez, Miguel A. Lopez-Ramirez, Maria Reynoso, Hong Wang, Brett L. Hurst, Joseph Wang, Jonathan K. Pokorski, and Nicole F. Steinmetz. J. Am. Chem. Soc. 2021, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/jacs.1c06600 Publication Date:September 7, 2021 © 2021 American Chemical Society

This paper is behind a paywall.

Tiny sponges lure coronavirus away from lung cells

This research approach looks promising as three news releases trumpeting the possibilities indicate. First, there’s the June 17, 2020 American Chemical Society (ACS) news release,

Scientists are working overtime to find an effective treatment for COVID-19, the illness caused by the new coronavirus, SARS-CoV-2. Many of these efforts target a specific part of the virus, such as the spike protein. Now, researchers reporting in Nano Letters have taken a different approach, using nanosponges coated with human cell membranes — the natural targets of the virus — to soak up SARS-CoV-2 and keep it from infecting cells in a petri dish.

To gain entry, SARS-CoV-2 uses its spike protein to bind to two known proteins on human cells, called ACE2 and CD147. Blocking these interactions would keep the virus from infecting cells, so many researchers are trying to identify drugs directed against the spike protein. Anthony Griffiths, Liangfang Zhang and colleagues had a different idea: making a nanoparticle decoy with the virus’ natural targets, including ACE2 and CD147, to lure SARS-CoV-2 away from cells. And to test this idea, they conducted experiments with the actual SARS-CoV-2 virus in a biosafety level 4 lab.

The researchers coated a nanoparticle polymer core with cell membranes from either human lung epithelial cells or macrophages — two cell types infected by SARS-CoV-2. They showed that the nanosponges had ACE2 and CD147, as well as other cell membrane proteins, projecting outward from the polymer core. When administered to mice, the nanosponges did not show any short-term toxicity. Then, the researchers treated cells in a dish with SARS-CoV-2 and the lung epithelial or macrophage nanosponges. Both decoys neutralized SARS-CoV-2 and prevented it from infecting cells to a similar extent. The researchers plan to next test the nanosponges in animals before moving to human clinical trials. In theory, the nanosponge approach would work even if SARS-CoV-2 mutates to resist other therapies, and it could be used against other viruses, as well, the researchers say.

In this illustration, a nanosponge coated with a human cell membrane acts as a decoy to prevent a virus from entering cells. Credit: Adapted from Nano Letters 2020, DOI: 10.1021/acs.nanolett.0c02278

There are two research teams involved, one at Boston University and the other at the University of California at San Diego (UC San Diego or UCSD). The June 18, 2020 Boston University news release (also on EurekAlert) by Kat J. McAlpine adds more details about the research, provides some insights from the researchers, and is a little redundant if you’ve already seen the ACS news release,

Imagine if scientists could stop the coronavirus infection in its tracks simply by diverting its attention away from living lung cells? A new therapeutic countermeasure, announced in a Nano Letters study by researchers from Boston University’s National Emerging Infectious Diseases Laboratories (NEIDL) and the University of California San Diego, appears to do just that in experiments that were carried out at the NEIDL in Boston.

The breakthrough technology could have major implications for fighting the SARS-CoV-2 virus responsible for the global pandemic that’s already claimed nearly 450,000 lives and infected more than 8 million people. But, perhaps even more significantly, it has the potential to be adapted to combat virtually any virus, such as influenza or even Ebola.

“I was skeptical at the beginning because it seemed too good to be true,” says NEIDL microbiologist Anna Honko, one of the co-first authors on the study. “But when I saw the first set of results in the lab, I was just astonished.”

The technology consists of very small, nanosized drops of polymers–essentially, soft biofriendly plastics–covered in fragments of living lung cell and immune cell membranes.

“It looks like a nanoparticle coated in pieces of cell membrane,” Honko says. “The small polymer [droplet] mimics a cell having a membrane around it.”

The SARS-CoV-2 virus seeks out unique signatures of lung cell membranes and latches onto them. When that happens inside the human body, the coronavirus infection takes hold, with the SARS-CoV-2 viruses hijacking lung cells to replicate their own genetic material. But in experiments at the NEIDL, BU researchers observed that polymer droplets laden with pieces of lung cell membrane did a better job of attracting the SARS-CoV-2 virus than living lung cells. [emphasis mine]

By fusing with the SARS-CoV-2 virus better than living cells can, the nanotechnology appears to be an effective countermeasure to coronavirus infection, preventing SARS-CoV-2 from attacking cells.

“Our guess is that it acts like a decoy, it competes with cells for the virus,” says NEIDL microbiologist Anthony Griffiths, co-corresponding author on the study. “They are little bits of plastic, just containing the outer pieces of cells with none of the internal cellular machinery contained inside living cells. Conceptually, it’s such a simple idea. It mops up the virus like a sponge.”

That attribute is why the UC San Diego and BU research team call the technology “nanosponges.” Once SARS-CoV-2 binds with the cell fragments inside a nanosponge droplet–each one a thousand times smaller than the width of a human hair–the coronavirus dies. Although the initial results are based on experiments conducted in cell culture dishes, the researchers believe that inside a human body, the biodegradable nanosponges and the SARS-CoV-2 virus trapped inside them could then be disposed of by the body’s immune system. The immune system routinely breaks down and gets rid of dead cell fragments caused by infection or normal cell life cycles.

There is also another important effect that the nanosponges have in the context of coronavirus infection. Honko says nanosponges containing fragments of immune cells can soak up cellular signals that increase inflammation [emphases mine]. Acute respiratory distress, caused by an inflammatory cascade inside the lungs, is the most deadly aspect of the coronavirus infection, sending patients into the intensive care unit for oxygen or ventilator support to help them breathe.

But the nanosponges, which can attract the inflammatory molecules that send the immune system into dangerous overdrive, can help tamp down that response, Honko says. By using both kinds of nanosponges, some containing lung cell fragments and some containing pieces of immune cells, she says it’s possible to “attack the coronavirus and the [body’s] response” responsible for disease and eventual lung failure.

At the NEIDL, Honko and Griffiths are now planning additional experiments to see how well the nanosponges can prevent coronavirus infection in animal models of the disease. They plan to work closely with the team of engineers at UC San Diego, who first developed the nanosponges more than a decade ago, to tailor the technology for eventual safe and effective use in humans.

“Traditionally, drug developers for infectious diseases dive deep on the details of the pathogen in order to find druggable targets,” said Liangfang Zhang, a UC San Diego nanoengineer and leader of the California-based team, according to a UC San Diego press release. “Our approach is different. We only need to know what the target cells are. And then we aim to protect the targets by creating biomimetic decoys.”

When the novel coronavirus first appeared, the idea of using the nanosponges to combat the infection came to Zhang almost immediately. He reached out to the NEIDL for help. Looking ahead, the BU and UC San Diego collaborators believe the nanosponges can easily be converted into a noninvasive treatment.

“We should be able to drop it right into the nose,” Griffiths says. “In humans, it could be something like a nasal spray.”

Honko agrees: “That would be an easy and safe administration method that should target the appropriate [respiratory] tissues. And if you wanted to treat patients that are already intubated, you could deliver it straight into the lung.”

Griffiths and Honko are especially intrigued by the nanosponges as a new platform for treating all types of viral infections. “The broad spectrum aspect of this is exceptionally appealing,” Griffiths says. The researchers say the nanosponge could be easily adapted to house other types of cell membranes preferred by other viruses, creating many new opportunities to use the technology against other tough-to-treat infections like the flu and even deadly hemorrhagic fevers caused by Ebola, Marburg, or Lassa viruses.

“I’m interested in seeing how far we can push this technology,” Honko says.

The University of California at* San Diego has released a video illustrating how the nanosponges work,

There’s also this June 17, 2020 University of California at San Diego (UC San Diego) news release (also on EurekAlert) by Ioana Patringenaru, which offers extensive new detail along with, if you’ve read one or both of the news releases above, a few redundant bits,

Nanoparticles cloaked in human lung cell membranes and human immune cell membranes can attract and neutralize the SARS-CoV-2 virus in cell culture, causing the virus to lose its ability to hijack host cells and reproduce.

The first data describing this new direction for fighting COVID-19 were published on June 17 in the journal Nano Letters. The “nanosponges” were developed by engineers at the University of California San Diego and tested by researchers at Boston University.

The UC San Diego researchers call their nano-scale particles “nanosponges” because they soak up harmful pathogens and toxins.

In lab experiments, both the lung cell and immune cell types of nanosponges caused the SARS-CoV-2 virus to lose nearly 90% of its “viral infectivity” in a dose-dependent manner. Viral infectivity is a measure of the ability of the virus to enter the host cell and exploit its resources to replicate and produce additional infectious viral particles.

Instead of targeting the virus itself, these nanosponges are designed to protect the healthy cells the virus invades.

“Traditionally, drug developers for infectious diseases dive deep on the details of the pathogen in order to find druggable targets. Our approach is different. We only need to know what the target cells are. And then we aim to protect the targets by creating biomimetic decoys,” said Liangfang Zhang, a nanoengineering professor at the UC San Diego Jacobs School of Engineering.

His lab first created this biomimetic nanosponge platform more than a decade ago and has been developing it for a wide range of applications ever since [emphasis mine]. When the novel coronavirus appeared, the idea of using the nanosponge platform to fight it came to Zhang “almost immediately,” he said.

In addition to the encouraging data on neutralizing the virus in cell culture, the researchers note that nanosponges cloaked with fragments of the outer membranes of macrophages could have an added benefit: soaking up inflammatory cytokine proteins, which are implicated in some of the most dangerous aspects of COVID-19 and are driven by immune response to the infection.

Making and testing COVID-19 nanosponges

Each COVID-19 nanosponge–a thousand times smaller than the width of a human hair–consists of a polymer core coated in cell membranes extracted from either lung epithelial type II cells or macrophage cells. The membranes cover the sponges with all the same protein receptors as the cells they impersonate–and this inherently includes whatever receptors SARS-CoV-2 uses to enter cells in the body.

The researchers prepared several different concentrations of nanosponges in solution to test against the novel coronavirus. To test the ability of the nanosponges to block SARS-CoV-2 infectivity, the UC San Diego researchers turned to a team at Boston University’s National Emerging Infectious Diseases Laboratories (NEIDL) to perform independent tests. In this BSL-4 lab–the highest biosafety level for a research facility–the researchers, led by Anthony Griffiths, associate professor of microbiology at Boston University School of Medicine, tested the ability of various concentrations of each nanosponge type to reduce the infectivity of live SARS-CoV-2 virus–the same strains that are being tested in other COVID-19 therapeutic and vaccine research.

At a concentration of 5 milligrams per milliliter, the lung cell membrane-cloaked sponges inhibited 93% of the viral infectivity of SARS-CoV-2. The macrophage-cloaked sponges inhibited 88% of the viral infectivity of SARS-CoV-2.
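
A “dose-dependent manner” usually means the measurements trace out a standard dose-response curve. Here is a minimal sketch, in Python, of what such a curve looks like using the generic Hill equation; the IC50 and Hill coefficient below are invented values chosen only so the curve lands near the reported ~93% inhibition at 5 mg/mL, not parameters from the Nano Letters paper.

```python
def hill_inhibition(conc_mg_ml, ic50, hill_n, max_inhibition=1.0):
    """Generic Hill-type dose-response curve: fraction of viral
    infectivity inhibited at a given nanosponge concentration (mg/mL)."""
    return max_inhibition * conc_mg_ml**hill_n / (ic50**hill_n + conc_mg_ml**hill_n)

# Invented parameters, tuned only so the curve lands near the reported
# ~93% inhibition at 5 mg/mL; these are NOT values from the paper.
IC50 = 0.6   # mg/mL (assumed)
N = 1.2      # Hill coefficient (assumed)

for c in (0.1, 0.5, 1.0, 2.5, 5.0):
    print(f"{c:4.1f} mg/mL -> {hill_inhibition(c, IC50, N):6.1%} inhibited")
```

The qualitative point survives whatever the true parameters are: inhibition climbs steeply at low concentrations and saturates at high ones, which is why the researchers tested several concentrations rather than a single dose.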

“From the perspective of an immunologist and virologist, the nanosponge platform was immediately appealing as a potential antiviral because of its ability to work against viruses of any kind. This means that as opposed to a drug or antibody that might very specifically block SARS-CoV-2 infection or replication, these cell membrane nanosponges might function in a more holistic manner in treating a broad spectrum of viral infectious diseases. I was optimistically skeptical initially that it would work, and then thrilled once I saw the results and it sunk in what this could mean for therapeutic development as a whole,” said Anna Honko, a co-first author on the paper and a research associate professor of microbiology at Boston University’s National Emerging Infectious Diseases Laboratories (NEIDL).

In the next few months, the UC San Diego researchers and collaborators will evaluate the nanosponges’ efficacy in animal models. The UC San Diego team has already shown short-term safety in the respiratory tracts and lungs of mice. If and when these COVID-19 nanosponges will be tested in humans depends on a variety of factors, but the researchers are moving as fast as possible.

“Another interesting aspect of our approach is that even as SARS-CoV-2 mutates, as long as the virus can still invade the cells we are mimicking, our nanosponge approach should still work. I’m not sure this can be said for some of the vaccines and therapeutics that are currently being developed,” said Zhang.

The researchers also expect these nanosponges would work against any new coronavirus or even other respiratory viruses, including whatever virus might trigger the next respiratory pandemic.

Mimicking lung epithelial cells and immune cells

Since the novel coronavirus often infects lung epithelial cells as the first step in COVID-19 infection, Zhang and his colleagues reasoned that it would make sense to cloak a nanoparticle in fragments of the outer membranes of lung epithelial cells to see if the virus could be tricked into latching onto it instead of a lung cell.

Macrophages, which are white blood cells that play a major role in inflammation, also are very active in the lung during the course of a COVID-19 illness, so Zhang and colleagues created a second sponge cloaked in macrophage membrane.

The research team plans to study whether the macrophage sponges also have the ability to quiet cytokine storms in COVID-19 patients.

“We will see if the macrophage nanosponges can neutralize the excessive amount of these cytokines as well as neutralize the virus,” said Zhang.

Using macrophage cell fragments as cloaks builds on years of work to develop therapies for sepsis using macrophage nanosponges.

In a paper published in 2017 in Proceedings of the National Academy of Sciences, Zhang and a team of researchers at UC San Diego showed that macrophage nanosponges can safely neutralize both endotoxins and pro-inflammatory cytokines in the bloodstream of mice. A San Diego biotechnology company co-founded by Zhang called Cellics Therapeutics is working to translate this macrophage nanosponge work into the clinic.

A potential COVID-19 therapeutic

The COVID-19 nanosponge platform has significant testing ahead of it before scientists know whether it would be a safe and effective therapy against the virus in humans, Zhang cautioned [emphasis mine]. But if the sponges reach the clinical trial stage, there are multiple potential ways of delivering the therapy, including direct delivery into the lung for intubated patients, via an inhaler as for asthma patients, or intravenously, especially to treat the complication of cytokine storm.

A therapeutic dose of nanosponges might flood the lung with a trillion or more tiny nanosponges that could draw the virus away from healthy cells. Once the virus binds with a sponge, “it loses its viability and is not infective anymore, and will be taken up by our own immune cells and digested,” said Zhang.
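
That “trillion or more” figure is plausible from geometry alone. Assuming 100-nanometer-diameter sponges (radius r = 50 nm, so a particle volume of about 5.2 × 10⁻¹⁶ cm³) and a polymer density of roughly 1 g/cm³, both my assumptions rather than numbers from the release, a single 5-milligram dose works out to about ten trillion particles:

```latex
N \approx \frac{m}{\rho \cdot \tfrac{4}{3}\pi r^{3}}
  = \frac{5 \times 10^{-3}\ \text{g}}{1\ \text{g/cm}^{3} \times 5.2 \times 10^{-16}\ \text{cm}^{3}}
  \approx 10^{13}
```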

“I see potential for a preventive treatment, for a therapeutic that could be given early because once the nanosponges get in the lung, they can stay in the lung for some time,” Zhang said. “If a virus comes, it could be blocked if there are nanosponges waiting for it.”

Growing momentum for nanosponges

Zhang’s lab at UC San Diego created the first membrane-cloaked nanoparticles over a decade ago. The first of these nanosponges were cloaked with fragments of red blood cell membranes. These nanosponges are being developed to treat bacterial pneumonia and have undergone all stages of pre-clinical testing by Cellics Therapeutics, the San Diego startup cofounded by Zhang. The company is currently in the process of submitting the investigational new drug (IND) application to the FDA for their lead candidate: red blood cell nanosponges for the treatment of methicillin-resistant staphylococcus aureus (MRSA) pneumonia. The company estimates the first patients in a clinical trial will be dosed next year.

The UC San Diego researchers have also shown that nanosponges can deliver drugs to a wound site; sop up bacterial toxins that trigger sepsis; and intercept HIV before it can infect human T cells.

The basic construction for each of these nanosponges is the same: a biodegradable, FDA-approved polymer core is coated in a specific type of cell membrane, so that it might be disguised as a red blood cell, or an immune T cell or a platelet cell. The cloaking keeps the immune system from spotting and attacking the particles as dangerous invaders.

“I think of the cell membrane fragments as the active ingredients. This is a different way of looking at drug development,” said Zhang. “For COVID-19, I hope other teams come up with safe and effective therapies and vaccines as soon as possible. At the same time, we are working and planning as if the world is counting on us.”

I wish the researchers good luck. For the curious, here’s a link to and a citation for the paper,

Cellular Nanosponges Inhibit SARS-CoV-2 Infectivity by Qiangzhe Zhang, Anna Honko, Jiarong Zhou, Hua Gong, Sierra N. Downs, Jhonatan Henao Vasquez, Ronnie H. Fang, Weiwei Gao, Anthony Griffiths, and Liangfang Zhang. Nano Lett. 2020, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acs.nanolett.0c02278 Publication Date:June 17, 2020 Copyright © 2020 American Chemical Society

This paper appears to be open access.

Here, too, is the Cellics Therapeutics website.

*University of California as San Diego corrected to University of California at San Diego on Dec. 30, 2020.

Bringing a technique from astronomy down to the nanoscale

A January 2, 2020 Columbia University news release on EurekAlert (also on phys.org but published Jan. 3, 2020) describes research that takes the inter-galactic down to the quantum level,

Researchers at Columbia University and University of California, San Diego, have introduced a novel “multi-messenger” approach to quantum physics that signifies a technological leap in how scientists can explore quantum materials.

The findings appear in a recent article published in Nature Materials, led by A. S. McLeod, postdoctoral researcher, Columbia Nano Initiative, with co-authors Dmitri Basov and A. J. Millis at Columbia and R. D. Averitt at UC San Diego.

“We have brought a technique from the inter-galactic scale down to the realm of the ultra-small,” said Basov, Higgins Professor of Physics and Director of the Energy Frontier Research Center at Columbia. “Equipped with multi-modal nanoscience tools, we can now routinely go places no one thought would be possible as recently as five years ago.”

The work was inspired by “multi-messenger” astrophysics, which emerged during the last decade as a revolutionary technique for the study of distant phenomena like black hole mergers. Simultaneous measurements from instruments including infrared, optical, X-ray and gravitational-wave telescopes can, taken together, deliver a physical picture greater than the sum of their individual parts.

The search is on for new materials that can supplement the current reliance on electronic semiconductors. Control over material properties using light can offer improved functionality, speed, flexibility and energy efficiency for next-generation computing platforms.

Experimental papers on quantum materials have typically reported results obtained by using only one type of spectroscopy. The researchers have shown the power of using a combination of measurement techniques to simultaneously examine electrical and optical properties.

The researchers performed their experiment by focusing laser light onto the sharp tip of a needle probe coated with magnetic material. When thin films of metal oxide are subject to a unique strain, ultra-fast light pulses can trigger the material to switch into an unexplored phase of nanometer-scale domains, and the change is reversible.

By scanning the probe over the surface of their thin film sample, the researchers were able to trigger the change locally and simultaneously manipulate and record the electrical, magnetic and optical properties of these light-triggered domains with nanometer-scale precision.
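
To make the “multi-messenger” idea concrete as data rather than hardware, here is a minimal sketch in Python of what co-registered multi-modal scanning produces: every probe position yields several simultaneous channels on one shared coordinate grid, so the modalities can be compared pixel by pixel. The signal models are entirely invented for illustration; this is not the instrument’s acquisition code.

```python
import numpy as np

# Toy 64 x 64 scan grid; each probe position returns three simultaneous
# readouts. All signal models below are invented for illustration.
nx = ny = 64
xs, ys = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))

def measure(px, py):
    """Pretend acquisition at one probe position: co-registered optical,
    magnetic, and electrical values (all synthetic)."""
    in_domain = ((px - 0.5) ** 2 + (py - 0.5) ** 2) < 0.04  # a light-triggered domain
    optical = 0.2 + 0.6 * in_domain + np.random.normal(0, 0.01)
    magnetic = 1.0 * in_domain + np.random.normal(0, 0.02)
    electrical = 0.1 + 0.9 * in_domain + np.random.normal(0, 0.01)
    return optical, magnetic, electrical

channels = {name: np.zeros((ny, nx)) for name in ("optical", "magnetic", "electrical")}
for i in range(ny):
    for j in range(nx):
        values = measure(xs[i, j], ys[i, j])
        for name, v in zip(channels, values):
            channels[name][i, j] = v

# Because all channels share one grid, cross-modal comparisons are trivial:
corr = np.corrcoef(channels["optical"].ravel(), channels["magnetic"].ravel())[0, 1]
print(f"optical-magnetic correlation across the scan: {corr:.2f}")
```

The point of the sketch is the bookkeeping, not the physics: once the channels are acquired simultaneously on the same grid, correlating an optical change with a magnetic or electrical one becomes a one-line operation instead of an exercise in aligning separate experiments.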

The study reveals how unanticipated properties can emerge in long-studied quantum materials at ultra-small scales when scientists tune them by strain.

“It is relatively common to study these nano-phase materials with scanning probes. But this is the first time an optical nano-probe has been combined with simultaneous magnetic nano-imaging, and all at the very low temperatures where quantum materials show their merits,” McLeod said. “Now, investigation of quantum materials by multi-modal nanoscience offers a means to close the loop on programs to engineer them.”

The excitement is palpable.

Caption: The discovery of multi-messenger nanoprobes allows scientists to simultaneously probe multiple properties of quantum materials at nanometer-scale spatial resolutions. Credit: Ella Maru Studio

Here’s a link to and a citation for the paper,

Multi-messenger nanoprobes of hidden magnetism in a strained manganite by A. S. McLeod, Jingdi Zhang, M. Q. Gu, F. Jin, G. Zhang, K. W. Post, X. G. Zhao, A. J. Millis, W. B. Wu, J. M. Rondinelli, R. D. Averitt & D. N. Basov. Nature Materials (2019) doi:10.1038/s41563-019-0533-y Published: 16 December 2019

This paper is behind a paywall.

The latest and greatest in gene drives (for flies)

This is a CRISPR (clustered regularly interspaced short palindromic repeats) story where the researchers are working on flies. If successful, this has much wider implications. From an April 10, 2019 news item on phys.org,

New CRISPR-based gene drives and broader active genetics technologies are revolutionizing the way scientists engineer the transfer of specific traits from one generation to another.

Scientists at the University of California San Diego have now developed a new version of a gene drive that opens the door to the spread of specific, favorable subtle genetic variants, also known as “alleles,” throughout a population.

The new “allelic drive,” described April 9 [2019] in Nature Communications, is equipped with a guide RNA (gRNA) that directs the CRISPR system to cut undesired variants of a gene and replace it with a preferred version of the gene. The new drive extends scientists’ ability to modify populations of organisms with precision editing. Using word processing as an analogy, CRISPR-based gene drives allow scientists to edit sentences of genetic information, while the new allelic drive offers letter-by-letter editing.
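
For readers new to gene drives, the arithmetic behind “spread through a population” is easy to sketch. Under ordinary Mendelian inheritance a heterozygote transmits an edited allele to about half of its offspring, so the allele’s frequency stays put; a CRISPR-based drive converts the wild-type copy in heterozygotes, pushing transmission toward 100%. The toy recursion below is my own illustration of that super-Mendelian logic, not the model from the Nature Communications paper, and it ignores fitness costs and resistance alleles.

```python
def next_freq(p, conversion=0.0):
    """Drive-allele frequency in the next generation under random mating.
    Heterozygotes (frequency 2p(1-p)) transmit the drive with probability
    (1 + conversion) / 2; conversion=0 recovers plain Mendelian inheritance."""
    return p * (1.0 + conversion * (1.0 - p))

p_mendel = p_drive = 0.01  # release the edited allele at 1% frequency
for gen in range(1, 11):
    p_mendel = next_freq(p_mendel, conversion=0.0)
    p_drive = next_freq(p_drive, conversion=0.9)  # 90% conversion (assumed)
    print(f"gen {gen:2d}: Mendelian {p_mendel:.3f}   drive {p_drive:.3f}")
```

With no conversion the allele sits at 1% indefinitely; with 90% conversion it roughly doubles each generation and is nearly fixed within ten. An allelic drive adds the further step of cutting a specific undesired variant and copying in the preferred one, but the spreading arithmetic is the same.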

An April 9, 2019 University of California at San Diego (UCSD) news release (also on EurekAlert) by Mario Aguilera, which originated the news item, delves into this technique’s potential uses while further explaining the work,

In one example of its potential applications, specific genes in agricultural pests that have become resistant to insecticides could be replaced by original natural genetic variants conferring sensitivity to insecticides using allelic drives that selectively swap the identities of a single protein residue (amino acid).

In addition to agricultural applications, disease-carrying insects could be a target for allelic drives.

“If we incorporate such a normalizing gRNA on a gene-drive element, for example, one designed to immunize mosquitoes against malaria, the resulting allelic gene drive will spread through a population. When this dual action drive encounters an insecticide-resistant allele, it will cut and repair it using the wild-type susceptible allele,” said Ethan Bier, the new paper’s senior author. “The result being that nearly all emerging progeny will be sensitive to insecticides as well as refractory to malaria transmission.”

“Forcing these species to return to their natural sensitive state using allelic drives would help break a downward cycle of ever-increasing and environmentally damaging pesticide over-use,” said Annabel Guichard, the paper’s first author.

The researchers describe two versions of the allelic drive, including “copy-cutting,” in which researchers use the CRISPR system to selectively cut the undesired version of a gene, and a more broadly applicable version referred to as “copy-grafting” that promotes transmission of a favored allele next to the site that is selectively protected from gRNA cleavage.

“An unexpected finding from this study is that mistakes created by such allelic drives do not get transmitted to the next generation,” said Guichard. “These mutations instead produce an unusual form of lethality referred to as ‘lethal mosaicism.’ This process helps make allelic drives more efficient by immediately eliminating unwanted mutations created by CRISPR-based drives.”

Although demonstrated in fruit flies, the new technology also has potential for broad application in insects, mammals and plants. According to the researchers, several variations of the allelic drive technology could be developed with combinations of favorable traits in crops that, for example, thrive in poor soil and arid environments to help feed the ever-growing world population.

Beyond environmental applications, allelic drives should enable next-generation engineering of animal models to study human disease as well as answer important questions in basic science. As a member of the Tata Institute for Genetics and Society (TIGS), Bier says allelic drives could be used to aid in environmental conservation efforts to protect vulnerable endemic species or stop the spread of invasive species.

Gene drives and active genetics systems are now being developed for use in mammals. The scientists say allelic drives could accelerate the development of new laboratory strains of animal models of human disease, aiding the search for new cures.

Here’s a link to and a citation for the paper,

Efficient allelic-drive in Drosophila by Annabel Guichard, Tisha Haque, Marketta Bobik, Xiang-Ru S. Xu, Carissa Klanseck, Raja Babu Singh Kushwah, Mateus Berni, Bhagyashree Kaduskar, Valentino M. Gantz & Ethan Bier. Nature Communications, volume 10, Article number: 1640 (2019) DOI: https://doi.org/10.1038/s41467-019-09694-w Published 09 April 2019

This paper is open access.

For anyone new to gene drives, I have a February 8, 2018 posting that highlights a report from the UK on the latest in genetic engineering, which provides a definition for [synthetic] gene drives, and if you scroll down about 75% of the way, you’ll also find excerpts from an article for The Atlantic by Ed Yong on gene drives as proposed for a project in New Zealand.

Biohybrid cyborgs

Cyborgs are usually thought of as people who’ve been enhanced with some sort of technology. In contemporary real life, that technology might be a pacemaker or hip replacement, but in science fiction it’s technology such as artificial retinas (for example) that expands the range of visible light for an enhanced human.

Rarely does the topic of a microscopic life form come up in discussions about cyborgs and yet that’s exactly what an April 3, 2019 Nanowerk Spotlight article by Michael Berger describes in relation to its use in water remediation efforts (Note: links have been removed),

Researchers often use living systems as inspiration for the design and engineering of micro- and nanoscale propulsion systems, actuators, sensors, and robots. …

“Although microrobots have recently proved successful for remediating contaminated water at the laboratory scale, the major challenge in the field is to scale up these applications to actual environmental settings,” Professor Joseph Wang, Chair of Nanoengineering and Director, Center of Wearable Sensors at the University of California San Diego, tells Nanowerk. “In order to do this, we need to overcome the toxicity of their chemical fuels, the short time span of biocompatible magnesium-based micromotors and the small domain operation of externally actuated microrobots.”

In their recent work on self-propelled biohybrid microrobots, Wang and his team were inspired by recent developments of biohybrid cyborgs that integrate self-propelling bacteria with functionalized synthetic nanostructures to transport materials.

“These tiny cyborgs are incredibly efficient for transporting materials, but the limitation that we observed is that they do not provide large-scale fluid mixing,” notes Wang. “We wanted to combine the best properties of both worlds. So, we searched for the best candidate to create a more robust biohybrid for mixing and we decided on using rotifers (Brachionus) as the engine of the cyborg.”

These marine microorganisms, which measure between 100 and 300 micrometers, are amazing creatures: they already possess sensing ability and energetic autonomy, and they provide large-scale fluid mixing capability. They are also very resilient, can survive in very harsh environments, and are even one of the few organisms that have survived via asexual reproduction.

“Taking inspiration from the science fiction concept of a cybernetic organism, or cyborg – where an organism has enhanced abilities due to the integration of some artificial component – we developed a self-propelled biohybrid microrobot, that we named rotibot, employing rotifers as their engine,” says Fernando Soto, first author of a paper on this work (Advanced Functional Materials, “Rotibot: Use of Rotifers as Self-Propelling Biohybrid Microcleaners”).

This is the first demonstration of a biohybrid cyborg used for the removal and degradation of pollutants from solution. The technical breakthrough that allowed the team to achieve this task is a novel fabrication mechanism based on the selective accumulation of functionalized microbeads in the microorganism’s mouth: the rotifer serves not only as a transport vessel for active material or cargo but also as a powerful biological pump, since it creates fluid flows directed towards its mouth.

Nanowerk has made this video demonstrating a rotifer available along with a description,

“The rotibot is a rotifer (a marine microorganism) that has plastic microbeads attached to the mouth, which are functionalized with pollutant-degrading enzymes. This video illustrates a free-swimming rotibot mixing tracer particles in solution.”

Here’s a link to and a citation for the paper,

Rotibot: Use of Rotifers as Self‐Propelling Biohybrid Microcleaners by Fernando Soto, Miguel Angel Lopez‐Ramirez, Itthipon Jeerapan, Berta Esteban‐Fernandez de Avila, Rupesh Kumar Mishra, Xiaolong Lu, Ingrid Chai, Chuanrui Chen, Daniel Kupor. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201900658 First published: 28 March 2019

This paper is behind a paywall.

Berger’s April 3, 2019 Nanowerk spotlight article includes some useful images if you are interested in figuring out how these rotibots function.

Transparent graphene electrode technology and complex brain imaging

Michael Berger has written a May 24, 2018 Nanowerk Spotlight article about some of the latest research on transparent graphene electrode technology and the brain (Note: A link has been removed),

In new work, scientists from the labs of Kuzum [Duygu Kuzum, an Assistant Professor of Electrical and Computer Engineering at the University of California, San Diego {UCSD}] and Anna Devor report a transparent graphene microelectrode neural implant that eliminates light-induced artifacts to enable crosstalk-free integration of 2-photon microscopy, optogenetic stimulation, and cortical recordings in the same in vivo experiment. The new class of transparent brain implant is based on monolayer graphene. It offers a practical pathway to investigate neuronal activity over multiple spatial scales extending from single neurons to large neuronal populations.

Conventional metal-based microelectrodes cannot be used for simultaneous measurements of multiple optical and electrical parameters, which are essential for comprehensive investigation of brain function across spatio-temporal scales. Since they are opaque, they block the field of view of the microscopes and generate optical shadows impeding imaging.

More importantly, they cause light-induced artifacts in electrical recordings, which can significantly interfere with neural signals. The transparent graphene electrode technology presented in this paper addresses these problems and allows seamless and crosstalk-free integration of optical and electrical sensing and manipulation technologies.

In their work, the scientists demonstrate that by careful design of key steps in the fabrication process for transparent graphene electrodes, the light-induced artifact problem can be mitigated and virtually artifact-free local field potential (LFP) recordings can be achieved within operating light intensities.

“Optical transparency of graphene enables seamless integration of imaging, optogenetic stimulation and electrical recording of brain activity in the same experiment with animal models,” Kuzum explains. “Different from conventional implants based on metal electrodes, graphene-based electrodes do not generate any electrical artifacts upon interacting with light used for imaging or optogenetics. That enables crosstalk free integration of three modalities: imaging, stimulation and recording to investigate brain activity over multiple spatial scales extending from single neurons to large populations of neurons in the same experiment.”

The team’s new fabrication process avoids any crack formation in the transfer process, resulting in a 95-100% yield for the electrode arrays. This fabrication quality is important for expanding this technology to high-density large area transparent arrays to monitor brain-scale cortical activity in large animal models or humans.

“Our technology is also well-suited for neurovascular and neurometabolic studies, providing a ‘gold standard’ neuronal correlate for optical measurements of vascular, hemodynamic, and metabolic activity,” Kuzum points out. “It will find application in multiple areas, advancing our understanding of how microscopic neural activity at the cellular scale translates into macroscopic activity of large neuron populations.”

“Combining optical techniques with electrical recordings using graphene electrodes will allow [researchers] to connect the large body of neuroscience knowledge obtained from animal models to human studies mainly relying on electrophysiological recordings of brain-scale activity,” she adds.

Next steps for the team involve employing this technology to investigate coupling and information transfer between different brain regions.

This work is part of the US BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative and there’s more than one team working with transparent graphene electrodes. John Hewitt in an Oct. 21, 2014 posting on ExtremeTech describes two other teams’ work (Note: Links have been removed),

The solution [to the problems with metal electrodes], now emerging from multiple labs throughout the universe, is to build flexible, transparent electrode arrays from graphene. Two studies in the latest issue of Nature Communications, one from the University of Wisconsin-Madison and the other from Penn [University of Pennsylvania], describe how to build these devices.

The University of Wisconsin researchers are either a little bit smarter or just a little bit richer, because they published their work open access. It’s a no-brainer then that we will focus on their methods first, and also in more detail. To make the arrays, these guys first deposited the parylene (polymer) substrate on a silicon wafer, metalized it with gold, and then patterned it with an electron beam to create small contact pads. The magic was to then apply four stacked single-atom-thick graphene layers using a wet transfer technique. These layers were then protected with a silicon dioxide layer, another parylene layer, and finally molded into brain signal recording goodness with reactive ion etching.

The researchers went with four graphene layers because that provided optimal mechanical integrity and conductivity while maintaining sufficient transparency. They tested the device in opto-enhanced mice whose neurons expressed proteins that react to blue light. When they hit the neurons with a laser fired in through the implant, the protein channels opened and fired the cell beneath. The masterstroke that remained was then to successfully record the electrical signals from this firing, sit back, and wait for the Nobel prize office to call.

The Penn State group [Note: Every researcher mentioned in the paper Hewitt linked to is from the University of Pennsylvania] used a similar 16-spot electrode array (pictured above right), and proceeded — we presume — in much the same fashion. Their angle was to perform high-resolution optical imaging, in particular calcium imaging, right out through the transparent electrode arrays, which simultaneously recorded high-temporal-resolution signals. They did this in slices of the hippocampus where they could bring to bear the complex and multifarious hardware needed to perform confocal and two-photon microscopy. These latter techniques provide a boost in spatial resolution by zeroing in over narrow planes inside the specimen, and limiting the background by the requirement of two photons to generate an optical signal. We should mention that there are voltage sensitive dyes available, in addition to standard calcium dyes, which can almost record the fastest single spikes, but electrical recording still reigns supreme for speed.

What a mouse looks like with an optogenetics system plugged in

One concern of both groups in making these kinds of simultaneous electro-optic measurements was the generation of light-induced artifacts in the electrical recordings. This potential complication, called the Becquerel photovoltaic effect, has been known to exist since it was first demonstrated back in 1839. When light hits a conventional metal electrode, a photoelectrochemical (or more simply, a photovoltaic) effect occurs. If present in these recordings, the different signals could be difficult to disambiguate. The Penn researchers reported that they saw no significant artifact, while the Wisconsin researchers saw some small effects with their device. In particular, when compared with platinum electrodes put into the opposite cortical hemisphere, the Wisconsin researchers found that the artifact from graphene was similar to that obtained from platinum electrodes.
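
To see why this matters in practice, here is a toy simulation in Python (all numbers invented) of what a light pulse does to a recording from a photovoltaically active metal electrode versus an idealized artifact-free transparent one. The trouble is that the artifact lands at exactly the same moment as the optically evoked neural response, so the two are superimposed in the trace.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 1000, 2.0                      # 1 kHz sampling, 2 s trace
t = np.arange(0, dur, 1 / fs)

# Synthetic LFP: a slow oscillation plus noise (purely illustrative).
lfp = 50e-6 * np.sin(2 * np.pi * 4 * t) + rng.normal(0, 10e-6, t.size)

def transient(t, onset, amp, tau):
    """Exponentially decaying deflection starting at `onset` (seconds)."""
    out = np.zeros_like(t)
    m = t >= onset
    out[m] = amp * np.exp(-(t[m] - onset) / tau)
    return out

# Light pulses at 0.5 s and 1.5 s evoke a neural response; on a metal
# electrode each pulse also produces a photovoltaic transient.
evoked = sum(transient(t, on, 30e-6, 0.05) for on in (0.5, 1.5))
artifact = sum(transient(t, on, 200e-6, 0.02) for on in (0.5, 1.5))

metal_rec = lfp + evoked + artifact      # artifact rides on top of the response
graphene_rec = lfp + evoked              # idealized artifact-free recording

for name, rec in (("metal", metal_rec), ("graphene", graphene_rec)):
    peak = rec[(t > 0.5) & (t < 0.6)].max()
    print(f"{name:8s} peak after the 0.5 s pulse: {peak * 1e6:7.1f} uV")
```

In the “metal” trace the post-pulse peak is dominated by the artifact rather than the neural signal, which is the disambiguation problem both groups worried about; the transparent-electrode recording leaves only the evoked response.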

Here’s a link to and a citation for the latest research from UCSD,

Deep 2-photon imaging and artifact-free optogenetics through transparent graphene microelectrode arrays by Martin Thunemann, Yichen Lu, Xin Liu, Kıvılcım Kılıç, Michèle Desjardins, Matthieu Vandenberghe, Sanaz Sadegh, Payam A. Saisan, Qun Cheng, Kimberly L. Weldy, Hongming Lyu, Srdjan Djurovic, Ole A. Andreassen, Anders M. Dale, Anna Devor & Duygu Kuzum. Nature Communications, volume 9, Article number: 2035 (2018) doi:10.1038/s41467-018-04457-5 Published: 23 May 2018

This paper is open access.

You can find out more about the US BRAIN initiative here and, if you’re curious, you can find out more about the project at UCSD here. Duygu Kuzum (now at UCSD) was at the University of Pennsylvania in 2014 and participated in the work mentioned in Hewitt’s 2014 posting.