Tag Archives: University of California at San Diego (UCSD)

A hardware (neuromorphic and quantum) proposal for handling increased AI workload

It’s been a while since I’ve featured anything from Purdue University (Indiana, US). From a November 7, 2023 news item on Nanowerk, Note: Links have been removed,

Technology is edging closer and closer to the super-speed world of computing with artificial intelligence. But is the world equipped with the proper hardware to be able to handle the workload of new AI technological breakthroughs?

Key Takeaways
Current AI technologies are strained by the limitations of silicon-based computing hardware, necessitating new solutions.

Research led by Erica Carlson [Purdue University] suggests that neuromorphic [brainlike] architectures, which replicate the brain’s neurons and synapses, could revolutionize computing efficiency and power.

Vanadium oxides have been identified as a promising material for creating artificial neurons and synapses, crucial for neuromorphic computing.

Innovative non-volatile memory, observed in vanadium oxides, could be the key to more energy-efficient and capable AI hardware.

Future research will explore how to optimize the synaptic behavior of neuromorphic materials by controlling their memory properties.

The colored landscape above shows a transition temperature map of VO2 (pink surface) as measured by optical microscopy. This reveals the unique way that this neuromorphic quantum material [emphasis mine] stores memory like a synapse. Image credit: Erica Carlson, Alexandre Zimmers, and Adobe Stock

An October 13, 2023 Purdue University news release (also on EurekAlert but published November 6, 2023) by Cheryl Pierce, which originated the news item, provides more detail about the work, Note: A link has been removed,

“The brain-inspired codes of the AI revolution are largely being run on conventional silicon computer architectures which were not designed for it,” explains Erica Carlson, 150th Anniversary Professor of Physics and Astronomy at Purdue University.

A joint effort between physicists from Purdue University, University of California San Diego (UCSD) and École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris, France, believe they may have discovered a way to rework the hardware…. [sic] By mimicking the synapses of the human brain. They published their findings, “Spatially Distributed Ramp Reversal Memory in VO2” in Advanced Electronic Materials, which is featured on the back cover of the October 2023 edition.

New paradigms in hardware will be necessary to handle the complexity of tomorrow’s computational advances. According to Carlson, lead theoretical scientist of this research, “neuromorphic architectures hold promise for lower energy consumption processors, enhanced computation, fundamentally different computational modes, native learning and enhanced pattern recognition.”

Neuromorphic architecture basically boils down to computer chips mimicking brain behavior. Neurons are cells in the brain that transmit information. Neurons have small gaps at their ends, called synapses, that allow signals to pass from one neuron to the next. In biological brains, these synapses encode memory. This team of scientists concludes that vanadium oxides show tremendous promise for neuromorphic computing because they can be used to make both artificial neurons and synapses.
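For readers who like to tinker, the artificial-neuron side of this is easier to picture with a toy model. Below is a minimal leaky integrate-and-fire neuron with a single synaptic weight; it’s a generic textbook sketch of my own, not anything from the Purdue/UCSD/ESPCI study, and the parameter values are arbitrary.

```python
# Minimal leaky integrate-and-fire neuron with one synaptic weight.
# Generic textbook sketch -- not from the Purdue/UCSD/ESPCI study.

def simulate_lif(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Return the output spike train for a binary input spike train."""
    v = 0.0          # membrane potential
    out = []
    for s in input_spikes:
        v = leak * v + weight * s   # leak, then add weighted input
        if v >= threshold:          # fire and reset
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

# two input spikes in a row are needed to push the neuron over threshold
output = simulate_lif([1, 1, 1, 0, 1, 1, 1, 0])
```

The synapse here is just the `weight` scalar; in a neuromorphic material, the analogous quantity would be stored physically rather than in software.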

“The dissonance between hardware and software is the origin of the enormously high energy cost of training, for example, large language models like ChatGPT,” explains Carlson. “By contrast, neuromorphic architectures hold promise for lower energy consumption by mimicking the basic components of a brain: neurons and synapses. Whereas silicon is good at memory storage, the material does not easily lend itself to neuron-like behavior. Ultimately, to provide efficient, feasible neuromorphic hardware solutions requires research into materials with radically different behavior from silicon – ones that can naturally mimic synapses and neurons. Unfortunately, the competing design needs of artificial synapses and neurons mean that most materials that make good synaptors fail as neuristors, and vice versa. Only a handful of materials, most of them quantum materials, have the demonstrated ability to do both.”

The team relied on a recently discovered type of non-volatile memory which is driven by repeated partial temperature cycling through the insulator-to-metal transition. This memory was discovered in vanadium oxides.

Alexandre Zimmers, lead experimental scientist from Sorbonne University and École Supérieure de Physique et de Chimie Industrielles, Paris, explains, “Only a few quantum materials are good candidates for future neuromorphic devices, i.e., mimicking artificial synapses and neurons. For the first time, in one of them, vanadium dioxide, we can see optically what is changing in the material as it operates as an artificial synapse. We find that memory accumulates throughout the entirety of the sample, opening new opportunities on how and where to control this property.”

“The microscopic videos show that, surprisingly, the repeated advance and retreat of metal and insulator domains causes memory to be accumulated throughout the entirety of the sample, rather than only at the boundaries of domains,” explains Carlson. “The memory appears as shifts in the local temperature at which the material transitions from insulator to metal upon heating, or from metal to insulator upon cooling. We propose that these changes in the local transition temperature accumulate due to the preferential diffusion of point defects into the metallic domains that are interwoven through the insulator as the material is cycled partway through the transition.”
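Carlson’s proposed mechanism — local transition temperatures shifting a little more with each partial cycle — can be caricatured in a few lines of Python. This is strictly a toy of my own devising (uniform shifts, an arbitrary turnaround temperature), not the defect-diffusion model in the paper.

```python
# Toy caricature of "ramp reversal memory": on each partial temperature
# cycle, sites that go metallic (local transition temperature below the
# turnaround temperature) pick up a small upward shift. The uniform
# shift and the numbers are my invention, not the paper's model.

def cycle_memory(local_tc, t_turnaround, shift=0.5, n_cycles=10):
    """Return per-site transition temperatures after n partial cycles."""
    tc = list(local_tc)
    for _ in range(n_cycles):
        tc = [t + shift if t < t_turnaround else t for t in tc]
    return tc

# sites below the 70-degree turnaround accumulate memory; the site at 75
# never switches during a partial cycle and is untouched
after = cycle_memory([60.0, 68.0, 75.0], t_turnaround=70.0)
```

The point of the toy is only that repeated partial cycling leaves a spatial pattern of shifted transition temperatures — which is the “memory” the microscopy makes visible.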

Now that the team has established that vanadium oxides are possible candidates for future neuromorphic devices, they plan to move forward in the next phase of their research.

“Now that we have established a way to see inside this neuromorphic material, we can locally tweak and observe the effects of, for example, ion bombardment on the material’s surface,” explains Zimmers. “This could allow us to guide the electrical current through specific regions in the sample where the memory effect is at its maximum. This has the potential to significantly enhance the synaptic behavior of this neuromorphic material.”

There’s a very interesting 16 mins. 52 secs. video embedded in the October 13, 2023 Purdue University news release. In an interview with Dr. Erica Carlson, who hosts The Quantum Age website and video interviews on its YouTube channel, Alexandre Zimmers takes you from an amusing phenomenon observed by 19th century scientists, through the 20th century, where it became of more interest as the nanoscale phenomenon could be exploited (sonar, scanning tunneling microscopes, singing birthday cards, etc.), to the 21st century, where we are integrating this new information into a quantum* material for neuromorphic hardware.

Here’s a link to and a citation for the paper,

Spatially Distributed Ramp Reversal Memory in VO2 by Sayan Basak, Yuxin Sun, Melissa Alzate Banguero, Pavel Salev, Ivan K. Schuller, Lionel Aigouy, Erica W. Carlson, Alexandre Zimmers. Advanced Electronic Materials Volume 9, Issue 10 October 2023 2300085 DOI: https://doi.org/10.1002/aelm.202300085 First published: 10 July 2023

This paper is open access.

There’s a lot of research into neuromorphic hardware; here’s a sampling of some of my most recent posts on the topic,

There’s more; just use ‘neuromorphic hardware’ as your search term.

*’meta’ changed to ‘quantum’ on January 8, 2024.

They glow under stress: soft, living materials made with algae

Caption: These soft, living materials glow in response to mechanical stress, such as compression, stretching or twisting. Credit: UC San Diego Jacobs School of Engineering

An October 20, 2023 news item on phys.org describes research into bioluminescent materials, Note: A link has been removed,

A team of researchers led by the University of California San Diego has developed soft yet durable materials that glow in response to mechanical stress, such as compression, stretching or twisting. The materials derive their luminescence from single-celled algae known as dinoflagellates.

The work, inspired by the bioluminescent waves observed during red tide events at San Diego’s beaches, was published Oct. 20 [2023] in Science Advances.

An October 23, 2023 University of California at San Diego news release (also on EurekAlert but published October 20, 2023) by Liezel Labios, which originated the news item, delves further into the research,

“An exciting feature of these materials is their inherent simplicity—they need no electronics, no external power source,” said study senior author Shengqiang Cai, a professor of mechanical and aerospace engineering at the UC San Diego Jacobs School of Engineering. “We demonstrate how we can harness the power of nature to directly convert mechanical stimuli into light emission.”

This study was a multi-disciplinary collaboration involving engineers and materials scientists in Cai’s lab, marine biologist Michael Latz at UC San Diego’s Scripps Institution of Oceanography, and physics professor Maziyar Jalaal at University of Amsterdam.

The primary ingredients of the bioluminescent materials are dinoflagellates and a seaweed-based polymer called alginate. These elements were mixed to form a solution, which was then processed with a 3D printer to create a diverse array of shapes, such as grids, spirals, spiderwebs, balls, blocks and pyramid-like structures. The 3D-printed structures were then cured as a final step.

When the materials are subjected to compression, stretching or twisting, the dinoflagellates within them respond by emitting light. This response mimics what happens in the ocean, when dinoflagellates produce flashes of light as part of a predator defense strategy. In tests, the materials glowed when the researchers pressed on them and traced patterns on their surface. The materials were even sensitive enough to glow under the weight of a foam ball rolling on their surface.

The greater the applied stress, the brighter the glow. The researchers were able to quantify this behavior and developed a mathematical model that can predict the intensity of the glow based on the magnitude of the mechanical stress applied.
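The release doesn’t give the team’s actual mathematical model, but the general idea — calibrate a monotonic stress-to-intensity curve from measurements, then use it for prediction — can be sketched with a hypothetical power law fitted in log-log space. The functional form and the data below are invented for illustration only.

```python
# Hypothetical stress-to-glow calibration: fit I = a * s**b to
# (stress, intensity) pairs by least squares in log-log space.
# The power-law form and the data are illustrative, not the paper's model.
import math

def fit_power_law(stresses, intensities):
    """Return (a, b) for I = a * s**b via linear regression on logs."""
    xs = [math.log(s) for s in stresses]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# synthetic "measurements" generated from I = 2 * s**1.5
a, b = fit_power_law([1, 2, 4, 8], [2 * s ** 1.5 for s in [1, 2, 4, 8]])
```

On noiseless synthetic data the fit recovers the generating parameters; with real measurements one would expect scatter and a goodness-of-fit check.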

The researchers also demonstrated techniques to make these materials resilient in various experimental conditions. To reinforce the materials so that they can bear substantial mechanical loads, a second polymer, poly(ethylene glycol) diacrylate, was added to the original blend. Also, coating the materials with a stretchy rubber-like polymer called Ecoflex provided protection in acidic and basic solutions. With this protective layer, the materials could even be stored in seawater for up to five months without losing their form or bioluminescent properties.

Another beneficial feature of these materials is their minimal maintenance requirements. To keep working, the dinoflagellates within the materials need periodic cycles of light and darkness. During the light phase, they photosynthesize to produce food and energy, which are then used in the dark phase to emit light when mechanical stress is applied. This behavior mirrors the natural processes at play when the dinoflagellates cause bioluminescence in the ocean during red tide events. 

“This current work demonstrates a simple method to combine living organisms with non-living components to fabricate novel materials that are self-sustaining and are sensitive to fundamental mechanical stimuli found in nature,” said study first author Chenghai Li, a mechanical and aerospace engineering Ph.D. candidate in Cai’s lab.

The researchers envision that these materials could potentially be used as mechanical sensors to gauge pressure, strain or stress. Other potential applications include soft robotics and biomedical devices that use light signals to perform treatment or controlled drug release.

However, there is much work to be done before these applications can be realized. The researchers are working on further improving and optimizing the materials.

Here’s a link to and a citation for the paper,

Ultrasensitive and robust mechanoluminescent living composites by Chenghai Li, Nico Schramma, Zijun Wang, Nada F. Qari, Maziyar Jalaal, Michael I. Latz, and Shengqiang Cai. Science Advances 20 Oct 2023 Vol 9, Issue 42 DOI: 10.1126/sciadv.adi8643

This paper is open access.

AI for salmon recovery

Hopefully you won’t be subjected to a commercial prior to this 3 mins. 49 secs. video about salmon and how artificial intelligence (AI) could make a difference in their continued survival, and ours,

Video caption: Wild Salmon Center is partnering with First Nations to pilot the Salmon Vision technology. (Credit: Olivia Leigh Nowak/Le Colibri Studio.)

An October 19, 2023 news item on phys.org announces this research, Note: Links have been removed,

Scientists and natural resource managers from Canadian First Nations, governments, academic institutions, and conservation organizations published the first results of a unique salmon population monitoring tool in Frontiers in Marine Science.

This groundbreaking new technology, dubbed “Salmon Vision,” combines artificial intelligence with age-old fishing weir technology. Early assessments show it to be remarkably adept at identifying and counting fish species, potentially enabling real-time salmon population monitoring for fisheries managers.

An October 19, 2023 Wild Salmon Center news release on EurekAlert, which originated the news item, provides more detail about the work,

“In recent years, we’ve seen the promise of underwater video technology to help us literally see salmon return to rivers,” says lead author Dr. Will Atlas, Senior Watershed Scientist with the Portland-based Wild Salmon Center. “That dovetails with what many of our First Nations partners are telling us: that we need to automate fish counting to make informed decisions while salmon are still running.” 

The Salmon Vision pilot study annotates more than 500,000 individual video frames captured at two Indigenous-run fish counting weirs on the Kitwanga and Bear Rivers of B.C.’s Central Coast. 

The first-of-its-kind deep learning computer model, developed in data partnership with the Gitanyow Fisheries Authority and Skeena Fisheries Commission, shows promising accuracy in identifying salmon species. It yielded mean average precision rates of 67.6 percent in tracking 12 different fish species passing through custom fish-counting boxes at the two weirs, with scores surpassing 90 and 80 percent for coho and sockeye salmon: two of the principal fish species targeted by First Nations, commercial, and recreational fishers. 
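For readers unfamiliar with the metric: mean average precision (mAP) is simply the per-class average precision (AP) averaged over all classes, which is how strong coho and sockeye scores can coexist with a modest overall figure. The species list and AP values below are made up for illustration; only the coho (>0.90) and sockeye (>0.80) ranges are hinted at in the news release.

```python
# mAP is the unweighted mean of per-class average precision scores.
# Species and AP values below are invented for illustration; only the
# coho and sockeye ranges come from the news release.

def mean_average_precision(per_class_ap):
    """Unweighted mean of per-class AP scores."""
    return sum(per_class_ap.values()) / len(per_class_ap)

ap = {"coho": 0.92, "sockeye": 0.83, "chinook": 0.55, "steelhead": 0.40}
map_score = mean_average_precision(ap)  # strong classes can mask weak ones
```

The practical upshot: a mid-60s mAP over a dozen species is compatible with the most fished species being counted quite reliably.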

“When we envisioned providing fast grants for projects focused on Indigenous futurism and climate resilience, this is the type of project that we hoped would come our way,” says Dr. Keolu Fox, a professor at the University of California-San Diego, and one of several reviewers in an early crowdfunding round for the development of Salmon Vision. 

Collaborators on the model, funded by the British Columbia Salmon Recovery and Innovation Fund, include researchers and fisheries managers with Simon Fraser University and Douglas College computing sciences, the Pacific Salmon Foundation, Gitanyow Fisheries Authority, and the Skeena Fisheries Commission. Following these exciting early results, the next step is to expand the model with partner First Nations into a half-dozen new watersheds on B.C.’s North and Central Coast.

Real-time data on salmon returns is critical on several fronts. According to Dr. Atlas, many fisheries in British Columbia have been data-poor for decades. That leaves fisheries managers to base harvest numbers on early-season catch data, rather than the true number of salmon returning. Meanwhile, changing weather patterns, stream flows, and ocean conditions are creating more variable salmon returns: uncertainty that compounds the ongoing risks of overfishing already-vulnerable populations.

“Without real-time data on salmon returns, it’s extremely difficult to build climate-smart, responsive fisheries,” says Dr. Atlas. “Salmon Vision data collection and analysis can fill that information gap.” 

It’s a tool that he says will be invaluable to First Nation fisheries managers and other organizations both at the decision-making table—in providing better information to manage conservation risks and fishing opportunities—and in remote rivers across salmon country, where on-the-ground data collection is challenging and costly. 

The Salmon Vision team is implementing automated counting on a trial basis in several rivers around the B.C. North and Central Coasts in 2023. The goal is to provide reliable real-time count data by 2024.

This October 18, 2023 article by Ramona DeNies for the Wild Salmon Center (WSC) is nicely written although it does cover some of the same material seen in the news release, Note: A link has been removed,

Right now, in rivers across British Columbia’s Central Coast, we don’t know how many salmon are actually returning. At least, not until fishing seasons are over.

And yet, fisheries managers still have to make decisions. They have to make forecasts, modeled on data from the past. They have to set harvest targets for commercial and recreational fisheries. And increasingly, they have to make the call on emergency closures, when things start looking grim.

“On the north and central coast of BC, we’ve seen really wildly variable returns of salmon over the last decade,” says Dr. Will Atlas, Wild Salmon Center Senior Watershed Scientist. “With accelerating climate change, every year is unprecedented now. Yet from a fisheries management perspective, we’re still going into most seasons assuming that this year will look like the past.”

One answer, Dr. Atlas says, is “Salmon Vision.” Results from this first-of-its-kind technology—developed by WSC in data partnership with the Gitanyow Fisheries Authority and Skeena Fisheries Commission—were recently published in Frontiers in Marine Science.

There are embedded images in DeNies’ October 18, 2023 article; it’s where I found the video.

Here’s a link to and a citation for the paper,

Wild salmon enumeration and monitoring using deep learning empowered detection and tracking by William I. Atlas, Sami Ma, Yi Ching Chou, Katrina Connors, Daniel Scurfield, Brandon Nam, Xiaoqiang Ma, Mark Cleveland, Janvier Doire, Jonathan W. Moore, Ryan Shea, Jiangchuan Liu. Front. Mar. Sci., 20 September 2023 Volume 10 – 2023 DOI: https://doi.org/10.3389/fmars.2023.1200408

This paper appears to be open access.

Dynamic magnetic fractal networks for neuromorphic (brainlike) computing

Credit: Advanced Materials (2023). DOI: 10.1002/adma.202300416 [cover image]

This is a different approach to neuromorphic (brainlike) computing being described in an August 28, 2023 news item on phys.org, Note: A link has been removed,

The word “fractals” might inspire images of psychedelic colors spiraling into infinity in a computer animation. An invisible, but powerful and useful, version of this phenomenon exists in the realm of dynamic magnetic fractal networks.

Dustin Gilbert, assistant professor in the Department of Materials Science and Engineering [University of Tennessee, US], and colleagues have published new findings in the behavior of these networks—observations that could advance neuromorphic computing capabilities.

Their research is detailed in their article “Skyrmion-Excited Spin-Wave Fractal Networks,” cover story for the August 17, 2023, issue of Advanced Materials.

An August 18, 2023 University of Tennessee news release, which originated the news item, provides more details,

“Most magnetic materials—like in refrigerator magnets—are just comprised of domains where the magnetic spins all orient parallel,” said Gilbert. “Almost 15 years ago, a German research group discovered these special magnets where the spins make loops—like a nanoscale magnetic lasso. These are called skyrmions.”

Named for legendary particle physicist Tony Skyrme, a skyrmion’s magnetic swirl gives it a non-trivial topology. As a result of this topology, the skyrmion has particle-like properties—they are hard to create or destroy, they can move and even bounce off of each other. The skyrmion also has dynamic modes—they can wiggle, shake, stretch, whirl, and breath[e].

As the skyrmions “jump and jive,” they are creating magnetic spin waves with a very narrow wavelength. The interactions of these waves form an unexpected fractal structure.

“Just like a person dancing in a pool of water, they generate waves which ripple outward,” said Gilbert. “Many people dancing make many waves, which normally would seem like a turbulent, chaotic sea. We measured these waves and showed that they have a well-defined structure and collectively form a fractal which changes trillions of times per second.”

Fractals are important and interesting because they are inherently tied to a “chaos effect”—small changes in initial conditions lead to big changes in the fractal network.
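That sensitivity is easy to demonstrate with the classic logistic map — a one-line chaotic system that has nothing to do with skyrmions, but shows the same “chaos effect”: two starting points differing by one part in a billion end up on completely unrelated trajectories.

```python
# The "chaos effect": a tiny change in initial conditions grows until
# the trajectories are unrelated. The logistic map x -> 4x(1-x) is the
# standard one-line example (illustrative; not skyrmion physics).

def logistic_orbit(x0, n, r=4.0):
    """Return the first n+1 points of the logistic-map orbit from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-9, 50)  # perturbed by one part in a billion
divergence = max(abs(x - y) for x, y in zip(a, b))
```

In the skyrmion analogy, “writing” individual skyrmions plays the role of choosing the initial condition, which is what makes the spin-wave output programmable.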

“Where we want to go with this is that if you have a skyrmion lattice and you illuminate it with spin waves, the way the waves make its way through this fractal-generating structure is going to depend very intimately on its construction,” said Gilbert. “So, if you could write individual skyrmions, it can effectively process incoming spin waves into something on the backside—and it’s programmable. It’s a neuromorphic architecture.”

The Advanced Materials cover illustration [image at top of this posting] depicts a visual representation of this process, with the skyrmions floating on top of a turbulent blue sea illustrative of the chaotic structure generated by the spin wave fractal.

“Those waves interfere just like if you throw a handful of pebbles into a pond,” said Gilbert. “You get a choppy, turbulent mess. But it’s not just any simple mess, it’s actually a fractal. We have an experiment now showing that the spin waves generated by skyrmions aren’t just a mess of waves, they have inherent structure of their very own. By, essentially, controlling those stones that we ‘throw in,’ you get very different patterns, and that’s what we’re driving towards.”

The discovery was made in part by neutron scattering experiments at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor and at the National Institute of Standards and Technology (NIST) Center for Neutron Research. Neutrons are magnetic and pass through materials easily, making them ideal probes for studying materials with complex magnetic behavior such as skyrmions and other quantum phenomena.

Gilbert’s co-authors for the new article are Nan Tang, Namila Liyanage, and Liz Quigley, students in his research group; Alex Grutter and Julie Borchers from National Institute of Standards and Technology (NIST), Lisa DeBeer-Schmidt and Mike Fitzsimmons from Oak Ridge National Laboratory; and Eric Fullerton, Sheena Patel, and Sergio Montoya from the University of California, San Diego.

The team’s next step is to build a working model using the skyrmion behavior.

“If we can develop thinking computers, that, of course, is extraordinarily important,” said Gilbert. “So, we will propose to make a miniaturized, spin wave neuromorphic architecture.” He also hopes that the ripples from this UT Knoxville discovery inspire researchers to explore uses for a spiraling range of future applications.

Here’s a link to and a citation for the paper,

Skyrmion-Excited Spin-Wave Fractal Networks by Nan Tang, W. L. N. C. Liyanage, Sergio A. Montoya, Sheena Patel, Lizabeth J. Quigley, Alexander J. Grutter, Michael R. Fitzsimmons, Sunil Sinha, Julie A. Borchers, Eric E. Fullerton, Lisa DeBeer-Schmitt, Dustin A. Gilbert. Advanced Materials Volume 35, Issue 33 August 17, 2023 2300416 DOI: https://doi.org/10.1002/adma.202300416 First published: 04 May 2023

This paper is behind a paywall.

Agricultural pest control with nanoparticles derived from plant viruses

As with many of these ‘nanoparticle solutions’ to a problem, it seems the nanoparticles are the delivery system. A September 21, 2023 news item on ScienceDaily announces the research,

A new form of agricultural pest control could one day take root — one that treats crop infestations deep under the ground in a targeted manner with less pesticide.

Engineers at the University of California San Diego have developed nanoparticles, fashioned from plant viruses, that can deliver pesticide molecules to soil depths that were previously unreachable. This advance could potentially help farmers effectively combat parasitic nematodes that plague the root zones of crops, all while minimizing costs, pesticide use and environmental toxicity.

A September 21, 2023 University of California at San Diego news release (also on EurekAlert) by Liezel Labios, which originated the news item, provides more information about the problems along with a nod to nanomedicine as the inspiration for the proposed solution, Note: Links have been removed,

Controlling infestations caused by root-damaging nematodes has long been a challenge in agriculture. One reason is that the types of pesticides used against nematodes tend to cling to the top layers of soil, making it tough to reach the root level where nematodes wreak havoc. As a result, farmers often resort to applying excessive amounts of pesticide, as well as water to wash pesticides down to the root zone. This can lead to contamination of soil and groundwater.

To find a more sustainable and effective solution, a team led by Nicole Steinmetz, a professor of nanoengineering at the UC San Diego Jacobs School of Engineering and founding director of the Center for Nano-ImmunoEngineering, developed plant virus nanoparticles that can transport pesticide molecules deep into the soil, precisely where they are needed. The work is detailed in a paper published in Nano Letters.

Steinmetz’s team drew inspiration from nanomedicine [emphasis mine], where nanoparticles are being created for targeted drug delivery, and adapted this concept to agriculture. This idea of repurposing and redesigning biological materials for different applications is also a focus area of the UC San Diego Materials Research Science and Engineering Center (MRSEC), of which Steinmetz is a co-lead. 

“We’re developing a precision farming approach where we’re creating nanoparticles for targeted pesticide delivery,” said Steinmetz, who is the study’s senior author. “This technology holds the promise of enhancing treatment effectiveness in the field without the need to increase pesticide dosage.”

The star of this approach is the tobacco mild green mosaic virus, a plant virus that has the ability to move through soil with ease. Researchers modified these virus nanoparticles, rendering them noninfectious to crops by removing their RNA. They then mixed these nanoparticles with pesticide solutions in water and heated them, creating spherical virus-like nanoparticles packed with pesticides through a simple one-pot synthesis.

This one-pot synthesis offers several advantages. First, it is cost-effective, with just a few steps and a straightforward purification process. The result is a more scalable method, paving the way toward a more affordable product for farmers, noted Steinmetz. Second, by simply packaging the pesticide inside the nanoparticles, rather than chemically binding it to the surface, this method preserves the original chemical structure of the pesticide.

“If we had used a traditional synthetic method where we link the pesticide molecules to the nanoparticles, we would have essentially created a new compound, which will need to go through a whole new registration and regulatory approval process,” said study first author Adam Caparco, a postdoctoral researcher in Steinmetz’s lab. “But since we’re just encapsulating the pesticide within the nanoparticles, we’re not changing the active ingredient, so we won’t need to get new approval for it. That could help expedite the translation of this technology to the market.”

Moreover, the tobacco mild green mosaic virus is already approved by the Environmental Protection Agency (EPA) for use as an herbicide to control an invasive plant called the tropical soda apple. This existing approval could further streamline the path from lab to market.

The researchers conducted experiments in the lab to demonstrate the efficacy of their pesticide-packed nanoparticles. The nanoparticles were watered through columns of soil and successfully transported the pesticides to depths of at least 10 centimeters. The solutions were collected from the bottom of the soil columns and were found to contain the pesticide-packed nanoparticles. When the researchers treated nematodes with these solutions, they eliminated at least half of the population in a petri dish.

While the researchers have not yet tested the nanoparticles on nematodes lurking beneath the soil, they note that this study marks a significant step forward.

“Our technology enables pesticides meant to combat nematodes to be used in the soil,” said Caparco. “These pesticides alone cannot penetrate the soil. But with our nanoparticles, they now have soil mobility, can reach the root level, and potentially kill the nematodes.”

Future research will involve testing the nanoparticles on actual infested plants to assess their effectiveness in real-world agricultural scenarios. Steinmetz’s lab will perform these follow-up studies in collaboration with the U.S. Horticultural Research Laboratory. Her team has also established plans for an industry partnership aimed at advancing the nanoparticles into a commercial product.

Here’s a link to and a citation for the paper,

Delivery of Nematicides Using TMGMV-Derived Spherical Nanoparticles by Adam A. Caparco, Ivonne González-Gamboa, Samuel S. Hays, Jonathan K. Pokorski, and Nicole F. Steinmetz. Nano Lett. 2023, 23, 12, 5785–5793 DOI: https://doi.org/10.1021/acs.nanolett.3c01684 Publication Date:June 16, 2023 Copyright © 2023 American Chemical Society

This paper is behind a paywall.

Sleep helps artificial neural networks (ANNs) to keep learning without “catastrophic forgetting”

A November 18, 2022 news item on phys.org describes some of the latest work on neuromorphic (brainlike) computing from the University of California at San Diego (UCSD or UC San Diego), Note: Links have been removed,

Depending on age, humans need 7 to 13 hours of sleep per 24 hours. During this time, a lot happens: Heart rate, breathing and metabolism ebb and flow; hormone levels adjust; the body relaxes. Not so much in the brain.

“The brain is very busy when we sleep, repeating what we have learned during the day,” said Maxim Bazhenov, Ph.D., professor of medicine and a sleep researcher at University of California San Diego School of Medicine. “Sleep helps reorganize memories and presents them in the most efficient way.”

In previous published work, Bazhenov and colleagues have reported how sleep builds rational memory, the ability to remember arbitrary or indirect associations between objects, people or events, and protects against forgetting old memories.

Artificial neural networks leverage the architecture of the human brain to improve numerous technologies and systems, from basic science and medicine to finance and social media. In some ways, they have achieved superhuman performance, such as computational speed, but they fail in one key aspect: When artificial neural networks learn sequentially, new information overwrites previous information, a phenomenon called catastrophic forgetting.

“In contrast, the human brain learns continuously and incorporates new data into existing knowledge,” said Bazhenov, “and it typically learns best when new training is interleaved with periods of sleep for memory consolidation.”

Writing in the November 18, 2022 issue of PLOS Computational Biology, senior author Bazhenov and colleagues discuss how biological models may help mitigate the threat of catastrophic forgetting in artificial neural networks, boosting their utility across a spectrum of research interests. 

A November 18, 2022 UC San Diego news release (also on EurekAlert), which originated the news item, adds some technical details,

The scientists used spiking neural networks that artificially mimic natural neural systems: Instead of information being communicated continuously, it is transmitted as discrete events (spikes) at certain time points.

They found that when the spiking networks were trained on a new task, but with occasional off-line periods that mimicked sleep, catastrophic forgetting was mitigated. Like the human brain, said the study authors, “sleep” for the networks allowed them to replay old memories without explicitly using old training data. 

Memories are represented in the human brain by patterns of synaptic weight — the strength or amplitude of a connection between two neurons. 

“When we learn new information,” said Bazhenov, “neurons fire in specific order and this increases synapses between them. During sleep, the spiking patterns learned during our awake state are repeated spontaneously. It’s called reactivation or replay. 

“Synaptic plasticity, the capacity to be altered or molded, is still in place during sleep and it can further enhance synaptic weight patterns that represent the memory, helping to prevent forgetting or to enable transfer of knowledge from old to new tasks.”

When Bazhenov and colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting. 

“It meant that these networks could learn continuously, like humans or animals. Understanding how human brain processes information during sleep can help to augment memory in human subjects. Augmenting sleep rhythms can lead to better memory. 

“In other projects, we use computer models to develop optimal strategies to apply stimulation during sleep, such as auditory tones, that enhance sleep rhythms and improve learning. This may be particularly important when memory is non-optimal, such as when memory declines in aging or in some conditions like Alzheimer’s disease.”
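For readers who like to tinker, the core effect the researchers describe can be illustrated with a deliberately tiny experiment. To be clear, this is not the paper's spiking model (the actual work uses spiking networks whose "sleep" phases reactivate learned patterns spontaneously, without access to the old training data); it's just a minimal numpy sketch showing how training a simple network on a second task overwrites the first, while interleaving replayed examples preserves it,

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(axis, n=500):
    """Toy task: classify 2D points by the sign of one coordinate."""
    X = rng.uniform(-1, 1, size=(n, 2))
    y = (X[:, axis] > 0).astype(float)
    return X, y

def sgd(w, X, y, lr=0.5, epochs=20):
    """Plain logistic-regression SGD, sample by sample."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-w @ xi))
            w += lr * (yi - p) * xi
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == (y > 0.5)))

XA, yA = make_task(0)   # task A: sign of x0
XB, yB = make_task(1)   # task B: sign of x1

# Sequential training: task A, then task B -- task A gets overwritten
# (this is catastrophic forgetting in miniature).
w_seq = sgd(sgd(np.zeros(2), XA, yA), XB, yB)

# "Sleep"-like interleaving: task B training mixed with replayed
# task A examples, so the old memory keeps being reactivated.
Xmix = np.vstack([XB, XA])
ymix = np.concatenate([yB, yA])
idx = rng.permutation(len(ymix))
w_mix = sgd(sgd(np.zeros(2), XA, yA), Xmix[idx], ymix[idx])

print("task A accuracy after sequential training :", accuracy(w_seq, XA, yA))
print("task A accuracy after interleaved training:", accuracy(w_mix, XA, yA))
```

The sequentially trained weights end up serving only task B, while the interleaved run keeps a weight pattern that still handles task A, which is the flavour of the "joint synaptic weight representation" in the paper's title.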

Here’s a link to and a citation for the paper,

Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation by Ryan Golden, Jean Erik Delanois, Pavel Sanda, Maxim Bazhenov. PLOS [Computational Biology] DOI: https://doi.org/10.1371/journal.pcbi.1010628 Published: November 18, 2022

This paper is open access.

New chip for neuromorphic computing runs at a fraction of the energy of today’s systems

An August 17, 2022 news item on Nanowerk announces big (so to speak) claims from a team researching neuromorphic (brainlike) computer chips,

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of artificial intelligence (AI) applications–all at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, and range from smart watches to VR headsets, smart earbuds, smart sensors in factories and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as the state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that runs computations in memory, it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

…

An August 17, 2022 University of California at San Diego (UCSD) news release (also on EurekAlert), which originated the news item, provides more detail than usually found in a news release,

“The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility,” said Weier Wan, the paper’s first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering. 

The research team, co-led by bioengineers at the University of California San Diego, presents their results in the Aug. 17 [2022] issue of Nature.

Currently, AI computing is both power hungry and computationally expensive. Most AI applications on edge devices involve moving data from the devices to the cloud, where the AI processes and analyzes it. Then the results are moved back to the device. That’s because most edge devices are battery-powered and as a result only have a limited amount of power that can be dedicated to computing. 

By reducing power consumption needed for AI inference at the edge, this NeuRRAM chip could lead to more robust, smarter and accessible edge devices and smarter manufacturing. It could also lead to better data privacy as the transfer of data from devices to the cloud comes with increased security risks. 

On AI chips, moving data from memory to computing units is one major bottleneck. 

“It’s the equivalent of doing an eight-hour commute for a two-hour work day,” Wan said. 

To solve this data transfer issue, researchers used what is known as resistive random-access memory, a type of non-volatile memory that allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan’s advisor at Stanford and a main contributor to this work. Computation with RRAM chips is not necessarily new, but generally it leads to a decrease in the accuracy of the computations performed on the chip and a lack of flexibility in the chip’s architecture. 

“Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago,” Cauwenberghs said.  “What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms.”

A carefully crafted methodology was key to the work with multiple levels of “co-optimization” across the abstraction layers of hardware and software, from the design of the chip to its configuration to run various AI tasks. In addition, the team made sure to account for various constraints that span from memory device physics to circuits and network architecture. 

“This chip now provides us with a platform to address these problems across the stack from devices and circuits to algorithms,” said Siddharth Joshi, an assistant professor of computer science and engineering at the University of Notre Dame, who started working on the project as a Ph.D. student and postdoctoral researcher in Cauwenberghs’ lab at UC San Diego. 

Chip performance

Researchers measured the chip’s energy efficiency using a measure known as energy-delay product, or EDP. EDP combines both the amount of energy consumed for every operation and the amount of time it takes to complete the operation. By this measure, the NeuRRAM chip achieves 1.6 to 2.3 times lower EDP (lower is better) and 7 to 13 times higher computational density than state-of-the-art chips. 
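The energy-delay product is simple arithmetic: multiply the energy an operation consumes by the time it takes, so a chip only scores well if it is both frugal and fast. Here's a quick sketch with made-up numbers (these are not figures from the paper),

```python
def energy_delay_product(energy_joules, delay_seconds):
    """EDP: energy per operation times time per operation (lower is better)."""
    return energy_joules * delay_seconds

# Hypothetical numbers, purely for illustration:
conventional = energy_delay_product(2.0e-12, 1.0e-8)   # 2 pJ/op, 10 ns
in_memory    = energy_delay_product(1.0e-12, 0.9e-8)   # 1 pJ/op, 9 ns

print(f"EDP ratio (conventional / in-memory): {conventional / in_memory:.1f}x")
```

A chip that halves energy while also shaving delay multiplies its advantage, which is why EDP is the preferred single-number comparison here rather than energy alone.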

Researchers ran various AI tasks on the chip. It achieved 99% accuracy on a handwritten digit recognition task; 85.7% on an image classification task; and 84.7% on a Google speech command recognition task. In addition, the chip also achieved a 70% reduction in image-reconstruction error on an image-recovery task. These results are comparable to existing digital chips that perform computation under the same bit-precision, but with drastic savings in energy. 

Researchers point out that one key contribution of the paper is that all the results featured are obtained directly on the hardware. In many previous works of compute-in-memory chips, AI benchmark results were often obtained partially by software simulation. 

Next steps include improving architectures and circuits and scaling the design to more advanced technology nodes. Researchers also plan to tackle other applications, such as spiking neural networks.

“We can do better at the device level, improve circuit design to implement additional features and address diverse applications with our dynamic NeuRRAM platform,” said Rajkumar Kubendran, an assistant professor at the University of Pittsburgh, who started work on the project while a Ph.D. student in Cauwenberghs’ research group at UC San Diego.

In addition, Wan is a founding member of a startup that works on productizing the compute-in-memory technology. “As a researcher and an engineer, my ambition is to bring research innovations from labs into practical use,” Wan said. 

New architecture 

The key to NeuRRAM’s energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. 
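The parallelism described above comes straight from physics: each RRAM cell's conductance acts as a stored weight, and driving all the rows at once makes every column sum its cell currents simultaneously, giving one matrix-vector multiply per read cycle. Here's a sketch of the idealized analog computation (the conductance and voltage values are illustrative, and it ignores the voltage-mode sensing and analog-to-digital conversion that NeuRRAM's neuron circuits add),

```python
import numpy as np

# Idealized RRAM crossbar: each cell stores a weight as a conductance
# G[i, j]; applying input voltages V[i] to the rows makes each column
# accumulate a current I[j] = sum_i G[i, j] * V[i] (Ohm's law plus
# Kirchhoff's current law), i.e. a matrix-vector product in one step.

G = np.array([[1.0e-6, 2.0e-6],   # conductances in siemens (illustrative)
              [3.0e-6, 0.5e-6],
              [0.2e-6, 1.5e-6]])
V = np.array([0.1, 0.2, 0.3])     # row input voltages in volts

I = V @ G                          # column currents = matrix-vector product
print(I)   # every multiply-accumulate happens in parallel, in the memory itself
```

The point of the physical trick is that none of the weights ever move: the data travels to the memory once as voltages and comes back as summed currents, sidestepping the memory-to-processor shuttling that dominates conventional AI chips.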

In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of RRAM weights. The neuron’s connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure. 

To make sure that accuracy of the AI computations can be preserved across various neural network architectures, researchers developed a set of hardware algorithm co-optimization techniques. The techniques were verified on various neural networks including convolutional neural networks, long short-term memory, and restricted Boltzmann machines. 

As a neuromorphic AI chip, NeuRRAM performs parallel distributed processing across 48 neurosynaptic cores. To simultaneously achieve high versatility and high efficiency, NeuRRAM supports data-parallelism by mapping a layer in the neural network model onto multiple cores for parallel inference on multiple data. Also, NeuRRAM offers model-parallelism by mapping different layers of a model onto different cores and performing inference in a pipelined fashion.
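The two mapping strategies are easier to see in a toy assignment. This sketch is emphatically not the NeuRRAM toolchain, just an illustration of the idea: model-parallelism places different layers on different cores and pipelines data through them, while data-parallelism replicates one layer across several cores, each replica taking a slice of the input batch,

```python
# Toy illustration of the two core-mapping strategies (hypothetical
# layer names and core counts, not the actual NeuRRAM mapper).

layers = ["conv1", "conv2", "fc"]
cores = list(range(6))

# Model parallelism: each layer gets its own core; inputs flow through
# the cores as a pipeline, one layer per stage.
model_parallel = {core: layers[core % len(layers)] for core in cores[:3]}

# Data parallelism: one layer is replicated across several cores, each
# replica processing a different slice of the input batch.
batch = list(range(12))
replicas = cores[3:]
data_parallel = {core: batch[i::len(replicas)] for i, core in enumerate(replicas)}

print(model_parallel)   # layer-per-core pipeline
print(data_parallel)    # batch slices per replica core
```

In the real chip both strategies are used at once: a large layer can be spread over several cores while different layers occupy others, which is where the claimed versatility comes from.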

An international research team

The work is the result of an international team of researchers. 

The UC San Diego team designed the CMOS circuits that implement the neural functions interfacing with the RRAM arrays to support the synaptic functions in the chip’s architecture, for high efficiency and versatility. Wan, working closely with the entire team, implemented the design; characterized the chip; trained the AI models; and executed the experiments. Wan also developed a software toolchain that maps AI applications onto the chip. 

The RRAM synapse array and its operating conditions were extensively characterized and optimized at Stanford University. 

The RRAM array was fabricated and integrated onto CMOS at Tsinghua University. 

The Team at Notre Dame contributed to both the design and architecture of the chip and the subsequent machine learning model design and training.

The research started as part of the National Science Foundation funded Expeditions in Computing project on Visual Cortex on Silicon at Penn State University, with continued funding support from the Office of Naval Research Science of AI program, the Semiconductor Research Corporation and DARPA [{US} Defense Advanced Research Projects Agency] JUMP program, and Western Digital Corporation. 

Here’s a link to and a citation for the paper,

A compute-in-memory chip based on resistive random-access memory by Weier Wan, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H.-S. Philip Wong & Gert Cauwenberghs. Nature volume 608, pages 504–512 (2022) DOI: https://doi.org/10.1038/s41586-022-04992-8 Published: 17 August 2022 Issue Date: 18 August 2022

This paper is open access.

Antikythera: a new Berggruen Institute program and a 2,000-year-old computer

Starting with the new Antikythera program at the Berggruen Institute before moving on to the Antikythera itself, one of my favourite scientific mysteries.

Antikythera program at the Berggruen Institute

An October 5, 2022 Berggruen Institute news release (also received via email) announces a program exploring the impact of planetary-scale computation and invites applications for the program’s first ‘studio’,

Antikythera is convening over 75 philosophers, technologists, designers, and scientists in seminars, design research studios, and global salons to create new models that shift computation toward more viable long-term futures: https://antikythera.xyz/

Applications are now open for researchers to join Antikythera’s fully-funded five month Studio in 2023, launching at the Berggruen Institute in Los Angeles: https://antikythera.xyz/apply/

Today [October 5, 2022] the Berggruen Institute announced that it will incubate Antikythera, an initiative focused on understanding and shaping the impact of computation on philosophy, global society, and planetary systems. Antikythera will engage a wide range of thinkers at the intersections of software, speculative thought, governance, and design to explore computation’s ultimate pitfalls and potentials. Research will range from the significance of machine intelligence and the geopolitics of AI to new economic models and the long-term project of composing a healthy planetary society.

“Against a background of rising geopolitical tensions and an accelerating climate crisis, technology has outpaced our theory. As such, we are less interested in applying philosophy to the topic of computation than generating new ideas from a direct encounter with it.” said Benjamin Bratton, Professor at the University of California, San Diego, and director of the new program. “The purpose of Antikythera is to reorient the question “what is computation for?” and to model what it may become. That is a project that is not only technological but also philosophical, political, and ecological.”

Antikythera will begin this exploration with its Studio program, applications for which are now open at antikythera.xyz/apply/. The Studio program will take place over five months in spring 2023 and bring together researchers from across the world to work in multidisciplinary teams. These teams will work on speculative design proposals, and join 75+ Affiliate Researchers for workshops, talks, and design sprints that inform thinking and propositions around Antikythera’s core research topics. Affiliate Researchers will include philosophers, technologists, designers, scientists, and other thinkers and practitioners. Applications for the program are due November 11, 2022.

Program project outcomes will include new combinations of theory, cinema, software, and policy. The five initial research themes animating this work are:

Synthetic Intelligence: the longer-term implications of machine intelligence, particularly as seen through the lens of artificial language

Hemispherical Stacks: the multipolar geopolitics of planetary computation

Recursive Simulations: the emergence of simulation as an epistemological technology, from scientific simulation to VR/AR

Synthetic Catallaxy: the ongoing organization of computational economics, pricing, and planning

Planetary Sapience: the evolutionary emergence of natural/artificial intelligence, and its role in composing a viable planetary condition

The program is named after the Antikythera Mechanism, the world’s first known computer, used more than 2,000 years ago to predict the movements of constellations and eclipses decades in advance. As an origin point for computation, it combined calculation, orientation and cosmology, dimensions of practice whose synergies may be crucial in setting our planetary future on a better course than it is on today.

Bratton continues, “The evolution of planetary intelligence has also meant centuries of destruction; its future must be radically different. We must ask, what future would make this past worth it? Taking the question seriously demands a different sort of speculative and practical philosophy and a corresponding sort of computation.”

Bratton is a philosopher of technology and Professor at the University of California, San Diego, and author of many books including The Stack: On Software and Sovereignty (MIT Press). His most recent book is The Revenge of the Real: Politics for a Post-Pandemic World (Verso Books), exploring the implications for political philosophy of COVID-19. Associate directors are Ben Cerveny, technologist, speculative designer, and director of the Amsterdam-based Foundation for Public Code, and Stephanie Sherman, strategist, writer, and director of the MA Narrative Environments program at Central St. Martins, London. The Studio is directed by architect and creative director Nicolay Boyadjiev.

In addition to the Studio, program activities will include a series of invitation-only planning salons inviting philosophers, designers, technologists, strategists, and others to discuss how to best interpret and intervene in the future of planetary-scale computation, and the historic philosophical and geopolitical force that it represents. These salons began in London in October 2022 and will continue in locations across the world including in Berlin; Amsterdam; Los Angeles; San Francisco; New York; Mexico City; Seoul; and Venice.

The announcement of Antikythera at the Berggruen Institute follows the recent spinoff of the Transformations of the Human school, successfully incubated at the Institute from 2017-2021.

“Computational technology covering the planet represents one of the largest and most urgent philosophical opportunities of our time,” said Nicolas Berggruen, Chairman and Co-Founder of the Berggruen Institute. “It is with great pleasure that we invite Antikythera to join our work at the Institute. Together, we can develop new ways of thinking to support planetary flourishing in the years to come.”

Web: Antikythera.xyz
Social: Antikythera_xyz on Twitter, Instagram, and LinkedIn.
Email: contact@antikythera.xyz

Applications opened on October 4, 2022; the deadline is November 11, 2022, followed by interviews. Participants will be confirmed by December 11, 2022. Here are a few more details from the application portal,

Who should apply to the Studio?

Antikythera hopes to bring together a diverse cohort of researchers from different backgrounds, disciplines, perspectives, and levels of experience. The Antikythera research themes engage with global challenges that necessitate harnessing a diversity of thought and expertise. Anyone who is passionate about the research themes of the Antikythera program is strongly encouraged to apply. We accept applications from every discipline and background, from established to emerging researchers. Applicants do not need to meet any specific set of educational or professional experience.

Is the program free?

Yes, the program is free. You will be supported to cover the cost of housing, living expenses, and all program-related fieldwork travel along with a monthly stipend. Any other associated program costs will also be covered by the program.

Is the program in person and full-time?

Yes, the Studio program requires a full-time commitment (PhD students must also be on leave to participate). There is no part-time participation option. Though we understand this commitment may be challenging logistically for some individuals, we believe it is important for the Studio’s success. We will do our best to enable an environment that is comfortable and safe for participants from all backgrounds. Please do not hesitate to contact us if you may require any accommodations or have questions regarding the full-time, in-person nature of the program.

Do I need a Visa?

The Studio is a traveling program with time spent between the USA, Mexico, and South Korea. Applicable visa requirements set by these countries will apply and will vary depending on your nationality. We are aware that current visa appointment wait times may preclude some individuals who would require a brand new visa from being able to enter the US by January, and we are working to ensure access to the program for all (if not for January 2023, then for future Studio cohorts). We will therefore ask you to identify your country of origin and passport/visa status in the application form so we can work to enable your participation. Anyone who is passionate about the research themes of the Antikythera program is strongly encouraged to apply.

For those who like to put a face to a name, you can find out more about the program and the people behind it on this page.

Antikythera, a 2,000-year-old computer & 100-year-old mystery

As noted in the Berggruen Institute news release, the Antikythera Mechanism is considered the world’s first computer (as far as we know). The image below is one of the best known illustrations of the device as visualized by researchers,

Exploded model of the Cosmos gearing of the Antikythera Mechanism. ©2020 Tony Freeth.

Briefly, the Antikythera mechanism was discovered at the turn of the twentieth century in 1901 by sponge divers off the coast of Greece. Philip Chrysopoulos’s September 21, 2022 article for The Greek Reporter gives more details in an exuberant style (Note: Links have been removed),

… now—more than 120 years later—the astounding machine has been recreated once again, using 3-D imagery, by a brilliant group of researchers from University College London (UCL).

Not only is the recreation a thing of great beauty and amazing genius, but it has also made possible a new understanding of how it worked.

Since only eighty-two fragments of the original mechanism are extant—comprising only one-third of the entire calculator—this left researchers stymied as to its full capabilities.

Until this moment [in 2020 according to the copyright for the image], the front of the mechanism, containing most of the gears, has been a bit of a Holy Grail for marine archeologists and astronomers.

Professor Tony Freeth says in an article published in the periodical Scientific Reports: “Ours is the first model that conforms to all the physical evidence and matches the descriptions in the scientific inscriptions engraved on the mechanism itself.”

“The sun, moon and planets are displayed in an impressive tour de force of ancient Greek brilliance,” Freeth said.

The largest surviving piece of the mechanism, referred to by researchers as “Fragment A,” has bearings, pillars, and a block. Another piece, known as “Fragment D,” has a mysterious disk along with an extraordinarily intricate 63-toothed gear and a plate.

The inscriptions—only recently discovered by researchers—on the back cover of the mechanism describe the cosmos, with the planets shown by beads of various colors that move on rings set around the inscriptions.

By employing the information gleaned from recent x-rays of the computer and their knowledge of ancient Greek mathematics, the UCL researchers have now shown that they can demonstrate how the mechanism determined the cycles of the planets Venus and Saturn.

Evaggelos Vallianatos, author of many books on the Antikythera Mechanism, writing at Greek Reporter, said that it was much more than a mere mechanism. It was a sophisticated, mind-bogglingly complex astronomical computer, he said, “and Greeks made it.”

They employed advanced astronomy, mathematics, metallurgy, and engineering to do so, constructing the astronomical device 2,200 years ago. These scientific facts of the computer’s age and its flawless high-tech nature profoundly disturbed some of the scientists who studied it.

A few Western scientists of the twentieth century were shocked by the Antikythera Mechanism, Vallianatos said. They called it an astrolabe for several decades and refused to call it a computer. The astrolabe, a Greek invention, is a useful instrument for calculating the position of the Sun and other prominent stars. Yet, its technology is rudimentary compared to that of the Antikythera device.

In 2015, Kyriakos Efstathiou, a professor of mechanical engineering at the Aristotle University of Thessaloniki and head of the group which studied the Antikythera Mechanism said: “All of our research has shown that our ancestors used their deep knowledge of astronomy and technology to construct such mechanisms, and based only on this conclusion, the history of technology should be re-written because it sets its start many centuries back.”

The professor further explained that the Antikythera Mechanism is undoubtedly the first machine of antiquity which can be classified by the scientific term “computer,” because “it is a machine with an entry where we can import data, and this machine can bring and create results based on a scientific mathematical scale.”

In 2016, yet another astounding discovery was made when an inscription on the device was revealed—something like a label or a user’s manual for the device.

It included a discussion of the colors of eclipses, details used at the time in the making of astrological predictions, including the ability to see exact times of eclipses of the moon and the sun, as well as the correct movements of celestial bodies.

Inscribed numbers 76, 19 and 223 show maker “was a Pythagorean”

On one side of the device lies a handle that begins the movement of the whole system. By turning the handle and rotating the gauges in the front and rear of the mechanism, the user could set a date that would reveal the astronomical phenomena that would potentially occur around the Earth.
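The inscribed numbers in the subheading above correspond to classical luni-solar cycles: 19 years is the Metonic cycle (very nearly 235 lunar months), 76 years is the Callippic cycle (four Metonic cycles), and 223 months is the Saros eclipse cycle. The arithmetic, using modern mean values, is easy to check,

```python
SYNODIC_MONTH = 29.53059   # mean days between successive new moons
TROPICAL_YEAR = 365.2422   # mean solar year in days

# Metonic cycle: 19 solar years very nearly equal 235 lunar months
metonic_years  = 19 * TROPICAL_YEAR
metonic_months = 235 * SYNODIC_MONTH
print(f"19 years = {metonic_years:.2f} d, 235 months = {metonic_months:.2f} d")

# Callippic cycle: 76 years is exactly 4 Metonic cycles
print(f"76 / 19 = {76 // 19} Metonic cycles")

# Saros: 223 lunar months between near-identical eclipse alignments
saros_days = 223 * SYNODIC_MONTH
print(f"223 months = {saros_days:.2f} d, about {saros_days / TROPICAL_YEAR:.2f} years")
```

The two Metonic figures agree to within a few hours over 19 years, which is exactly the kind of whole-number coincidence that lets fixed gear trains track the Moon and predict eclipses.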

Physicist Yiannis Bitsakis has said that today the NASA [US National Aeronautics and Space Administration] website can detail all the eclipses of the past and those that are to occur in the future. However, “what we do with computers today, was done with the Antikythera Mechanism about 2000 years ago,” he said.

The stars and night heavens have been important to peoples around the world. (This September 18, 2020 posting highlights millennia old astronomy as practiced by indigenous peoples in North America, Australia, and elsewhere. There’s also this March 17, 2022 article “How did ancient civilizations make sense of the cosmos, and what did they get right?” by Susan Bell of University of Southern California on phys.org.)

I have covered the Antikythera in three previous postings (March 17, 2021, August 3, 2016, and October 2, 2012) with the 2021 posting being the most comprehensive and the one featuring Professor Tony Freeth’s latest breakthrough.

However, 2022 has blessed us with more as this April 11, 2022 article by Jennifer Ouellette for Ars Technica reveals (Note: Links have been removed)

The mysterious Antikythera mechanism—an ancient device believed to have been used for tracking the heavens—has fascinated scientists and the public alike since it was first recovered from a shipwreck over a century ago. Much progress has been made in recent years to reconstruct the surviving fragments and learn more about how the mechanism might have been used. And now, members of a team of Greek researchers believe they have pinpointed the start date for the Antikythera mechanism, according to a preprint posted to the physics arXiv repository. Knowing that “day zero” is critical to ensuring the accuracy of the device.

“Any measuring system, from a thermometer to the Antikythera mechanism, needs a calibration in order to [perform] its calculations correctly,” co-author Aristeidis Voulgaris of the Thessaloniki Directorate of Culture and Tourism in Greece told New Scientist. “Of course it wouldn’t have been perfect—it’s not a digital computer, it’s gears—but it would have been very good at predicting solar and lunar eclipses.”

Last year, an interdisciplinary team at University College London (UCL) led by mechanical engineer Tony Freeth made global headlines with their computational model, revealing a dazzling display of the ancient Greek cosmos. The team is currently building a replica mechanism, moving gears and all, using modern machinery. The display is described in the inscriptions on the mechanism’s back cover, featuring planets moving on concentric rings with marker beads as indicators. X-rays of the front cover accurately represent the cycles of Venus and Saturn—462 and 442 years, respectively. 
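The 462- and 442-year figures are period relations: spans in which a planet completes a whole number of synodic cycles (identical Sun-planet alignments as seen from Earth), which is what makes a fixed gear ratio able to track it. Using modern mean synodic periods, the arithmetic checks out,

```python
JULIAN_YEAR = 365.25  # days

# Mean synodic periods in days: time between successive identical
# Sun-planet alignments as seen from Earth.
SYNODIC = {"Venus": 583.92, "Saturn": 378.09}

# Period relations recovered in the UCL reconstruction:
# 462 years for Venus, 442 years for Saturn.
for planet, years in [("Venus", 462), ("Saturn", 442)]:
    cycles = years * JULIAN_YEAR / SYNODIC[planet]
    print(f"{years} years is about {cycles:.2f} {planet} synodic cycles "
          f"(close to the whole number {round(cycles)}, hence a usable gear ratio)")
```

Venus lands within a couple hundredths of 289 whole cycles and Saturn within about a hundredth of 427, so tooth counts encoding those ratios keep the planet pointers accurate for centuries.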

The Antikythera mechanism was likely built sometime between 200 BCE and 60 BCE. However, in February 2022, Freeth suggested that the famous Greek mathematician and inventor Archimedes (sometimes referred to as the Leonardo da Vinci of antiquity) may have actually designed the mechanism, even if he didn’t personally build it. (Archimedes died in 212 BCE at the hands of a Roman soldier during the siege of Syracuse.) There are references in the writings of Cicero (106-43 BCE) to a device built by Archimedes for tracking the movement of the Sun, Moon, and five planets; it was a prized possession of the Roman general Marcus Claudius Marcellus. According to Freeth, that description is remarkably similar to the Antikythera mechanism, suggesting it was not a one-of-a-kind device.

Voulgaris and his co-authors based their new analysis on a 223-month cycle called a Saros, represented by a spiral inset on the back of the device. The cycle covers the time it takes for the Sun, Moon, and Earth to return to their same positions and includes associated solar and lunar eclipses. Given our current knowledge about how the device likely functioned, as well as the inscriptions, the team believed the start date would coincide with an annular solar eclipse.

“This is a very specific and unique date [December 22, 178 BCE],” Voulgaris said. “In one day, there occurred too many astronomical events for it to be coincidence. This date was a new moon, the new moon was at apogee, there was a solar eclipse, the Sun entered into the constellation Capricorn, it was the winter solstice.”

Others have made independent calculations and arrived at a different conclusion: the calibration date would more likely fall sometime in the summer of 204 BCE, although Voulgaris countered that this doesn’t explain why the winter solstice is engraved so prominently on the device.

“The eclipse predictions on the [device’s back] contain enough astronomical information to demonstrate conclusively that the 18-year series of lunar and solar eclipse predictions started in 204 BCE,” Alexander Jones of New York University told New Scientist, adding that there have been four independent calculations of this. “The reason such a dating is possible is because the Saros period is not a highly accurate equation of lunar and solar periodicities, so every time you push forward by 223 lunar months… the quality of the prediction degrades.”
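Jones's point about degrading predictions follows from the arithmetic of the Saros itself: 223 lunar months is not a whole number of days, and the leftover hours accumulate with every cycle,

```python
SYNODIC_MONTH = 29.53059            # mean days per lunation
saros = 223 * SYNODIC_MONTH         # one Saros cycle in days

whole_days = int(saros)             # the whole-day part
extra_hours = (saros - whole_days) * 24

print(f"Saros = {saros:.2f} days = {whole_days} days + {extra_hours:.1f} hours")
# The leftover roughly 8 hours shifts each successive eclipse about a
# third of the way around the Earth in longitude; only after 3 Saros
# cycles (the 54-year Exeligmos, which the mechanism also tracks on a
# subsidiary dial) does an eclipse recur at roughly the same longitude,
# and the small residual error still accumulates cycle after cycle.
print(f"3 Saros cycles accumulate {3 * extra_hours:.1f} hours, close to one full day")
```

That slow accumulation of residual error is why pushing the device's eclipse table forward Saros after Saros gradually blurs the predictions, and why pinning down the original calibration date matters so much.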

Read Ouellette’s April 11, 2022 article for a pretty accessible description of the work involved in establishing the date. Here’s a link to and a citation for the latest attempt to date the Antikythera,

The Initial Calibration Date of the Antikythera Mechanism after the Saros spiral mechanical Apokatastasis by Aristeidis Voulgaris, Christophoros Mouratidis, Andreas Vossinakis. arXiv:2203.15045 [physics] Submitted 28 Mar 2022.

It’s open access. The calculations are beyond me; otherwise, it’s quite readable.

Getting back to the Berggruen Institute and its Antikythera program/studio, good luck to all the applicants (the Antikythera application portal).

Better recording with flexible backing on a brain-computer interface (BCI)

This work has already been patented, from a March 15, 2022 news item on ScienceDaily,

Engineering researchers have invented an advanced brain-computer interface with a flexible and moldable backing and penetrating microneedles. Adding a flexible backing to this kind of brain-computer interface allows the device to more evenly conform to the brain’s complex curved surface and to more uniformly distribute the microneedles that pierce the cortex. The microneedles, which are 10 times thinner than a human hair, protrude from the flexible backing, penetrate the surface of the brain tissue without piercing surface venules, and record signals from nearby nerve cells evenly across a wide area of the cortex.

This novel brain-computer interface has thus far been tested in rodents. The details were published online on February 25 [2022] in the journal Advanced Functional Materials. This work is led by a team in the lab of electrical engineering professor Shadi Dayeh at the University of California San Diego, together with researchers at Boston University led by biomedical engineering professor Anna Devor.

Caption: Artist rendition of the flexible, conformable, transparent backing of the new brain-computer interface with penetrating microneedles developed by a team led by engineers at the University of California San Diego in the laboratory of electrical engineering professor Shadi Dayeh. The smaller illustration at bottom left shows the current technology in experimental use called Utah Arrays. Credit: Shadi Dayeh / UC San Diego / SayoStudio

A March 14, 2022 University of California at San Diego news release (also on EurekAlert but published March 15, 2022), which originated the news item, delves further into the topic,

This new brain-computer interface matches, and in some respects outperforms, the “Utah Array,” the existing gold standard for brain-computer interfaces with penetrating microneedles. The Utah Array has been demonstrated to help stroke victims and people with spinal cord injury. People with implanted Utah Arrays are able to use their thoughts to control robotic limbs and other devices in order to restore some everyday activities such as moving objects.

The backing of the new brain-computer interface is flexible, conformable, and reconfigurable, while the Utah Array has a hard and inflexible backing. The flexibility and conformability of the backing of the novel microneedle array favors closer contact between the brain and the electrodes, which allows for better and more uniform recording of the brain-activity signals. Working with rodents as model species, the researchers demonstrated stable broadband recordings producing robust signals for the duration of the implant, which lasted 196 days.

In addition, the way the soft-backed brain-computer interfaces are manufactured allows for larger sensing surfaces, which means that a significantly larger area of the brain surface can be monitored simultaneously. In the Advanced Functional Materials paper, the researchers demonstrate that a penetrating microneedle array with 1,024 microneedles successfully recorded signals triggered by precise stimuli from the brains of rats. This represents ten times more microneedles and ten times the area of brain coverage, compared to current technologies.

Thinner and transparent backings

These soft-backed brain-computer interfaces are thinner and lighter than the traditional, glass backings of these kinds of brain-computer interfaces. The researchers note in their Advanced Functional Materials paper that light, flexible backings may reduce irritation of the brain tissue that contacts the arrays of sensors. 

The flexible backings are also transparent. In the new paper, the researchers demonstrate that this transparency can be leveraged to perform fundamental neuroscience research involving animal models that would not be possible otherwise. The team, for example, demonstrated simultaneous electrical recording from arrays of penetrating micro-needles as well as optogenetic photostimulation.

Two-sided lithographic manufacturing

The flexibility, larger microneedle array footprints, reconfigurability and transparency of the backings of the new brain sensors are all thanks to the double-sided lithography approach the researchers used. 

Conceptually, starting from a rigid silicon wafer, the team’s manufacturing process allows them to build microscopic circuits and devices on both sides of the rigid silicon wafer. On one side, a flexible, transparent film is added on top of the silicon wafer. Within this film, a bilayer of titanium and gold traces is embedded so that the traces line up with where the needles will be manufactured on the other side of the silicon wafer. 

Working from the other side, after the flexible film has been added, all the silicon is etched away, except for free-standing, thin, pointed columns of silicon. These pointed columns of silicon are, in fact, the microneedles, and their bases align with the titanium-gold traces within the flexible layer that remains after the silicon has been etched away. These titanium-gold traces are patterned via standard and scalable microfabrication techniques, allowing scalable production with minimal manual labor. The manufacturing process offers the possibility of flexible array design and scalability to tens of thousands of microneedles.  

Toward closed-loop systems

Looking to the future, penetrating microneedle arrays with large spatial coverage will be needed to improve brain-machine interfaces to the point that they can be used in “closed-loop systems” that can help individuals with severely limited mobility. For example, this kind of closed-loop system might offer a person using a robotic hand real-time tactile feedback on the objects the robotic hand is grasping.

Tactile sensors on the robotic hand would sense the hardness, texture, and weight of an object. This information recorded by the sensors would be translated into electrical stimulation patterns which travel through wires outside the body to the brain-computer interface with penetrating microneedles. These electrical signals would provide information directly to the person’s brain about the hardness, texture, and weight of the object. In turn, the person would adjust their grasp strength based on sensed information directly from the robotic arm. 

This is just one example of the kind of closed-loop system that could be possible once penetrating microneedle arrays can be made larger to conform to the brain and coordinate activity across the “command” and “feedback” centers of the brain.
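The feedback loop described above (sensor readings encoded as stimulation patterns, which in turn drive a grip adjustment) can be caricatured in a few lines of code. This is purely an illustrative sketch: the function names, the weighting of hardness/texture/weight, and the pulse-rate thresholds are all hypothetical, not the actual encoding from the paper.

```python
# Hypothetical sketch of the closed-loop idea: tactile readings are
# encoded as a stimulation pulse rate, and the perceived rate drives
# a grip-force adjustment. All values here are illustrative only.

def encode_stimulation(hardness, texture, weight):
    """Map tactile sensor readings (each on a 0-1 scale) to a
    stand-in stimulation pulse rate in Hz."""
    return round(20 + 80 * (0.5 * hardness + 0.2 * texture + 0.3 * weight))

def adjust_grip(force, pulse_rate, slip_threshold=40):
    """Tighten the grip when feedback is weak (light contact or slip);
    otherwise ease off slightly. Returns the new force on a 0-1 scale."""
    if pulse_rate < slip_threshold:
        return min(1.0, force + 0.1)
    return max(0.0, force - 0.05)

# One iteration of the loop: a soft, light object produces weak
# feedback, so the grip tightens.
rate = encode_stimulation(hardness=0.1, texture=0.2, weight=0.1)
new_force = adjust_grip(force=0.5, pulse_rate=rate)
```

The real system would replace both functions with biologically calibrated stimulation patterns delivered through the microneedle array, but the control structure (sense, encode, stimulate, adjust) is the same.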

Previously, the Dayeh laboratory invented and demonstrated the kinds of tactile sensors that would be needed for this kind of application, as highlighted in this video.

Pathway to commercialization

The advanced dual-side lithographic microfabrication processes described in this paper are patented (US 10856764). Dayeh co-founded Precision Neurotek Inc. to translate technologies innovated in his laboratory to advance the state of the art in clinical practice and to advance the fields of neuroscience and neurophysiology.

Here’s a link to and a citation for the paper,

Scalable Thousand Channel Penetrating Microneedle Arrays on Flex for Multimodal and Large Area Coverage Brain–Machine Interfaces by Sang Heon Lee, Martin Thunemann, Keundong Lee, Daniel R. Cleary, Karen J. Tonsfeldt, Hongseok Oh, Farid Azzazy, Youngbin Tchoe, Andrew M. Bourhis, Lorraine Hossain, Yun Goo Ro, Atsunori Tanaka, Kıvılcım Kılıç, Anna Devor, Shadi A. Dayeh. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202112045 First published (online): 25 February 2022

This paper is open access.