The original headline for the University of Oxford press release was “Batteries for miniature bio-integrated devices and robotics” but it’s not clear to me what they mean by robotics (soft robots? robotic prostheses? something else?).
University of Oxford researchers have made a significant step towards realising miniature, soft batteries for use in a variety of biomedical applications, including the defibrillation and pacing of heart tissues. The work has been published today [October 25, 2024] in the journal Nature Chemical Engineering.
…
An October 28, 2024 University of Oxford press release (also on EurekAlert, where it was published October 25, 2024), which originated the lightly edited news item, provides more technical detail about this advance. Note: Links have been removed,
The development of tiny smart devices, smaller than a few cubic millimeters, demands equally small power sources. For minimally invasive biomedical devices that interact with biological tissues, these power sources must be fabricated from soft materials. Ideally, these should also have features such as high capacity, biocompatibility and biodegradability, triggerable activation, and the ability to be controlled remotely. To date, there has been no battery that can fulfil these requirements all at once.
To address these requirements, researchers from the University of Oxford’s Department of Chemistry and Department of Pharmacology have developed a miniature, soft lithium-ion battery constructed from biocompatible hydrogel droplets. Surfactant-supported assembly (assembly aided by soap-like molecules), a technique reported by the same group last year in the journal Nature (DOI: 10.1038/s41586-023-06295-y), is used to connect three microscale droplets, each with a volume of 10 nanolitres. Different lithium-ion particles contained in the two end droplets then generate the output energy.
‘Our droplet battery is light-activated, rechargeable, and biodegradable after use. To date, it is the smallest hydrogel lithium-ion battery and has a superior energy density,’ said Dr Yujia Zhang (Department of Chemistry, University of Oxford), the lead researcher for the study and a starting Assistant Professor at the École Polytechnique Fédérale de Lausanne. ‘We used the droplet battery to power the movement of charged molecules between synthetic cells and to control the beating and defibrillation of mouse hearts. By including magnetic particles to control movement, the battery can also function as a mobile energy carrier.’
Proof-of-concept heart treatments were carried out in the laboratory of Professor Ming Lei (Department of Pharmacology), a senior electrophysiologist in cardiac arrhythmias. He said: ‘Cardiac arrhythmia is a leading cause of death worldwide. Our proof-of-concept application in animal models demonstrates an exciting new avenue of wireless and biodegradable devices for the management of arrhythmias.’
Professor Hagan Bayley (Department of Chemistry), the research group leader for the study, said: ‘The tiny soft lithium-ion battery is the most sophisticated in a series of microscale power packs developed by Dr Zhang and points to a fantastic future for biocompatible electronic devices that can operate under physiological conditions.’
The researchers have filed a patent application through Oxford University Innovation. They envisage that the tiny versatile battery, particularly relevant to small-scale robots for bioapplications, will open up new possibilities in various areas including clinical medicine.
Here’s a link to and a citation for the paper,
A microscale soft lithium-ion battery for tissue stimulation by Yujia Zhang, Tianyi Sun, Xingyun Yang, Linna Zhou, Cheryl M. J. Tan, Ming Lei & Hagan Bayley. Nature Chemical Engineering volume 1, pages 691–701 (2024). DOI: https://doi.org/10.1038/s44286-024-00136-z Published online: 25 October 2024. Issue Date: November 2024
This paper is open access.
Now, I want to highlight a few items from the paper’s introduction, Note: Links have been removed,
The miniaturization of electronic devices is a burgeoning area of research1,2,3. Therefore, the development of tiny batteries to power these devices is of critical importance, and techniques such as three-dimensional (3D) printing4,5,6 and micro-origami assembly7 [emphases mine] are beginning to have an impact. For minimally invasive applications in biomedicine, batteries are also preferred to be soft, biocompatible and biodegradable, with additional functionality and responsiveness, such as triggerable activation and remote-controlled mobility8. However, at present, such a multifunctional microscale soft battery is not available. Although hydrogel-based lithium-ion (Li-ion) batteries demonstrate some of these features9,10,11,12, none currently exhibits microscale fabrication of the battery architecture, in terms of self-assembled integration of hydrogel-based cathode, separator and anode at the submillimeter level. Manual assembly of precrosslinked compartments11 or multistep deposition and crosslinking4 is necessary to avoid the mixing of materials from different compartments at the pregel (liquid) state or during the gelation process. This limitation not only makes it difficult to shrink hydrogel-based functional architectures but also hinders the implementation of high-density energy storage.
Toward that end, Zhang et al. have reported a miniaturized ionic power source by depositing lipid-supported networks of nanoliter hydrogel droplets13. The power source mimics the electric eel [emphasis mine] by using internal ion gradients to generate ionic current14, and can induce neuronal modulation. However, the ionic power source has several limitations [emphasis mine] that should be addressed. First, the stored salt gradient produces less power than conventional Li-ion batteries, and the device cannot be fully recharged. Second, activation of the power source relies on temperature-triggered gelation and oil for buffer exchange, which is a demanding requirement. Third, the functionality of the power source is limited to the generation of ionic output, leaving the full versatility of synthetic tissues unexploited15,16,17. Last, but not least, while the power source can modulate the activity of neural microtissues, organ-level stimulation necessitates a higher and more stable output performance in physiological environments18.
Here, we present a miniature, soft, rechargeable Li-ion droplet battery (LiDB) made by depositing self-assembling [emphasis mine], nanoliter, lipid-supported, silk hydrogel droplets. The tiny hydrogel compartmentalization produces a superior energy density. The battery is switched on by ultraviolet (UV) light, which crosslinks the hydrogel and breaks the lipid barrier between droplets. The droplets are soft, biocompatible and biodegradable. The LiDBs can power charged-molecule translocation between synthetic cells, defibrillate mouse hearts with ventricular arrhythmias and pace heart rhythms. Further, the LiDB can be translocated from one site to another magnetically.
This team has integrated a number of cutting-edge (I think you can still call them that) techniques, such as 3D printing and micro-origami, along with inspiration from electric eels (biomimicry) and light-triggered activation. Finally, there’s self-assembly or, as it’s sometimes known, bottom-up engineering, just like nature.
This work still needs to be tested in human clinical trials but taking that into account: Bravo to the researchers!
First, thank you to anyone who’s dropped by to read any of my posts. Second, I didn’t quite catch up on my backlog in what was then the new year (2024) despite my promises. (sigh) I will try to publish my drafts in a more timely fashion but I start this coming year as I did 2024 with a backlog of two to three months. This may be my new normal.
As for now, here’s an overview of FrogHeart’s 2024. The posts that follow are loosely organized under a heading but many of them could fit under other headings as well. After my informal review, there’s some material on foretelling the future as depicted in an exhibition, “Oracles, Omens and Answers,” at the Bodleian Libraries, University of Oxford.
Human enhancement: prosthetics, robotics, and more
Within a year or two of starting this blog I created a tag ‘machine/flesh’ to organize information about a number of converging technologies such as robotics, brain implants, and prosthetics that could alter our concepts of what it means to be human. The larger category of human enhancement functions in much the same way, while also allowing a greater range of topics to be covered.
Here are some of the 2024 human enhancement and/or machine/flesh stories on this blog,
As for anyone who’s curious about hydrogels, there’s this from an October 20, 2016 article by D.C. Demetre for ScienceBeta, Note: A link has been removed,
Hydrogels, materials that can absorb and retain large quantities of water, could revolutionise medicine. Our bodies contain up to 60% water, but hydrogels can hold up to 90%.
It is this similarity to human tissue that has led researchers to examine if these materials could be used to improve the treatment of a range of medical conditions including heart disease and cancer.
These days hydrogels can be found in many everyday products, from disposable nappies and soft contact lenses to plant-water crystals. But the history of hydrogels for medical applications started in the 1960s.
Scientists developed artificial materials with the ambitious goal of using them in permanent contact applications, ones that are implanted in the body permanently.
For anyone who wants a more technical explanation, there’s the Hydrogel entry on Wikipedia.
Science education and citizen science
Where science education is concerned I’m seeing some innovative approaches to teaching science, which can include citizen science. As for citizen science (also known as, participatory science) I’ve been noticing heightened interest at all age levels.
It’s been another year where artificial intelligence (AI) has absorbed a lot of energy from nearly everyone. I’m highlighting the more unusual AI stories I’ve stumbled across,
As you can see, I’ve tucked in two tangentially related stories: one references a neuromorphic computing story (see my Neuromorphic engineering category or search for ‘memristors’ in the blog search engine for more on brain-like computing topics) and the other concerns intellectual property. There are many, many more stories on these topics.
Art/science (or art/sci or sciart)
It’s a bit of a surprise to see how many art/sci stories were published here this year, although some might be better described as art/tech stories.
There may be more 2024 art/sci stories but the list was getting long. In addition to searching for art/sci on the blog search engine, you may want to try data sonification too.
Moving off planet to outer space
This is not a big interest of mine but there were a few stories,
I expect to be delighted, horrified, thrilled, and left shaking my head by science stories in 2025. Year after year the world of science reveals a world of wonder.
More mundanely, I can state with some confidence that my commentary (mentioned in the future-oriented subsection of my 2023 review and 2024 look forward) on Quantum Potential, a 2023 report from the Council of Canadian Academies, will be published early in this new year as I’ve almost finished writing it.
Some questions are hard to answer and always have been. Does my beloved love me back? Should my country go to war? Who stole my goats?
Questions like these have been asked of diviners around the world throughout history – and still are today. From astrology and tarot to reading entrails, divination comes in a wide variety of forms.
Yet they all address the same human needs. They promise to tame uncertainty, help us make decisions or simply satisfy our desire to understand.
Anthropologists and historians like us study divination because it sheds light on the fears and anxieties of particular cultures, many of which are universal. Our new exhibition at Oxford’s Bodleian Library, Oracles, Omens & Answers, explores these issues by showcasing divination techniques from around the world.
…
1. Spider divination
In Cameroon, Mambila spider divination (ŋgam dù) addresses difficult questions to spiders or land crabs that live in holes in the ground.
Asking the spiders a question involves covering their hole with a broken pot and placing a stick, a stone and cards made from leaves around it. The diviner then asks a question in a yes or no format while tapping the enclosure to encourage the spider or crab to emerge. The stick and stone represent yes or no, while the leaf cards, which are specially incised with certain meanings, offer further clarification.
…
2. Palmistry
Reading people’s palms (palmistry) is well known as a fairground amusement, but serious forms of this divination technique exist in many cultures. The practice of reading the hands to gather insights into a person’s character and future was used in many ancient cultures across Asia and Europe.
In some traditions, the shape and depth of the lines on the palm are richest in meaning. In others, the size of the hands and fingers are also considered. In some Indian traditions, special marks and symbols appearing on the palm also provide insights.
Palmistry experienced a huge resurgence in 19th-century England and America, just as the science of fingerprints was being developed. If you could identify someone from their fingerprints, it seemed plausible to read their personality from their hands.
…
3. Bibliomancy
If you want a quick answer to a difficult question, you could try bibliomancy. Historically, this DIY [do-it-yourself] divining technique was performed with whatever important books were on hand.
Throughout Europe, the works of Homer or Virgil were used. In Iran, it was often the Divan of Hafiz, a collection of Persian poetry. In Christian, Muslim and Jewish traditions, holy texts have often been used, though not without controversy.
…
4. Astrology
Astrology exists in almost every culture around the world. As far back as ancient Babylon, astrologers have interpreted the heavens to discover hidden truths and predict the future.
…
5. Calendrical divination
Calendars have long been used to divine the future and establish the best times to perform certain activities. In many countries, almanacs still advise auspicious and inauspicious days for tasks ranging from getting a haircut to starting a new business deal.
In Indonesia, Hindu almanacs called pawukon [calendar] explain how different weeks are ruled by different local deities. The characteristics of the deities mean that some weeks are better than others for activities like marriage ceremonies.
6 December 2024 – 27 April 2025 ST Lee Gallery, Weston Library
The Bodleian Libraries’ new exhibition, Oracles, Omens and Answers, will explore the many ways in which people have sought answers in the face of the unknown across time and cultures. From astrology and palm reading to weather and public health forecasting, the exhibition demonstrates the ubiquity of divination practices, and humanity’s universal desire to tame uncertainty, diagnose present problems, and predict future outcomes.
Through plagues, wars and political turmoil, divination, or the practice of seeking knowledge of the future or the unknown, has remained an integral part of society. Historically, royals and politicians would consult with diviners to guide decision-making and incite action. People have continued to seek comfort and guidance through divination in uncertain times — the COVID-19 pandemic saw a rise in apps enabling users to generate astrological charts or read the Yijing [I Ching], alongside a growth in horoscope and tarot communities on social media such as ‘WitchTok’. Many aspects of our lives are now dictated by algorithmic predictions, from e-health platforms to digital advertising. Scientific forecasters as well as doctors, detectives, and therapists have taken over many of the societal roles once held by diviners. Yet the predictions of today’s experts are not immune to criticism, nor can they answer all our questions.
Curated by Dr Michelle Aroney, whose research focuses on early modern science and religion, and Professor David Zeitlyn, an expert in the anthropology of divination, the exhibition will take a historical-anthropological approach to methods of prophecy, prediction and forecasting, covering a broad range of divination methods, including astrology, tarot, necromancy, and spider divination.
Dating back as far as ancient Mesopotamia, the exhibition will show us that the same kinds of questions have been asked of specialist practitioners from around the world throughout history. What is the best treatment for this illness? Does my loved one love me back? When will this pandemic end? Through materials from the archives of the Bodleian Libraries alongside other collections in Oxford, the exhibition demonstrates just how universally human it is to seek answers to difficult questions.
Highlights of the exhibition include: oracle bones from Shang Dynasty China (ca. 1250-1050 BCE); an Egyptian celestial globe dating to around 1318; a 16th-century armillary sphere from Flanders, once used by astrologers to place the planets in the sky in relation to the Zodiac; a nineteenth-century illuminated Javanese almanac; and the autobiography of astrologer Joan Quigley, who worked with Nancy and Ronald Reagan in the White House for seven years. The casebooks of astrologer-physicians in 16th- and 17th-century England also offer rare insights into the questions asked by clients across the social spectrum, about their health, personal lives, and business ventures, and in some cases the actions taken by them in response.
The exhibition also explores divination which involves the interpretation of patterns or clues in natural things, with the idea that natural bodies contain hidden clues that can be decrypted. Some diviners inspect the entrails of sacrificed animals (known as ‘extispicy’), as evidenced by an ancient Mesopotamian cuneiform tablet describing the observation of patterns in the guts of birds. Others use human bodies, with palm readers interpreting characters and fortunes etched in their clients’ hands. A sketch of Oscar Wilde’s palms – which his palm reader believed indicated “a great love of detail…extraordinary brain power and profound scholarship” – shows the revival of palmistry’s popularity in 19th century Britain.
The exhibition will also feature a case study of spider divination practised by the Mambila people of Cameroon and Nigeria, which is the research specialism of curator Professor David Zeitlyn, himself a Ŋgam dù diviner. This process uses burrowing spiders or land crabs to arrange marked leaf cards into a pattern, which is read by the diviner. The display will demonstrate the methods involved in this process and the way in which its results are interpreted by the card readers. African basket divination has also been observed through anthropological research, where diviners receive answers to their questions in the form of the configurations of thirty plus items after they have been tossed in the basket.
Dr Michelle Aroney and Professor David Zeitlyn, co-curators of the exhibition, say:
Every day we confront the limits of our own knowledge when it comes to the enigmas of the past and present and the uncertainties of the future. Across history and around the world, humans have used various techniques that promise to unveil the concealed, disclosing insights that offer answers to private or shared dilemmas and help to make decisions. Whether a diviner uses spiders or tarot cards, what matters is whether the answers they offer are meaningful and helpful to their clients. What is fun or entertainment for one person is deadly serious for another.
Richard Ovenden, Bodley’s [a nickname? Bodleian Libraries were founded by Sir Thomas Bodley] Librarian, said:
People have tried to find ways of predicting the future for as long as we have had recorded history. This exhibition examines and illustrates how across time and culture, people manage the uncertainty of everyday life in their own way. We hope that through the extraordinary exhibits, and the scholarship that brings them together, visitors to the show will appreciate the long history of people seeking answers to life’s biggest questions, and how people have approached it in their own unique way.
The exhibition will be accompanied by the book Divinations, Oracles & Omens, edited by Michelle Aroney and David Zeitlyn, which will be published by Bodleian Library Publishing on 5 December 2024.
Courtesy: Bodleian Libraries, University of Oxford
I’m not sure why the preceding image is used to illustrate the exhibition webpage but I find it quite interesting. Should you be in Oxford, UK and lucky enough to visit the exhibition, there are a few more details on the Oracles, Omens and Answers event webpage, Note: There are 26 Bodleian Libraries at Oxford and the exhibition is being held in the Weston Library,
EXHIBITION
Oracles, Omens and Answers
6 December 2024 – 27 April 2025
ST Lee Gallery, Weston Library
Free admission, no ticket required
…
Note: This exhibition includes a large continuous projection of spider divination practice, including images of the spiders in action.
Exhibition tours
Oracles, Omens and Answers exhibition tours are available on selected Wednesdays and Saturdays from 1–1.45pm and are open to all.
This August 7, 2024 news item on phys.org explains what it means for a microscope to have a resolution of better than five nanometers, Note: A link has been removed,
What does the inside of a cell really look like? In the past, standard microscopes were limited in how well they could answer this question. Now, researchers from the Universities of Göttingen [Germany] and Oxford [UK], in collaboration with the University Medical Center Göttingen (UMG), have succeeded in developing a microscope with resolutions better than five nanometers (five billionths of a meter). This is roughly equivalent to the width of a hair split into 10,000 strands. Their new method was published in Nature Photonics.
Many structures in cells are so small that standard microscopes can only produce fragmented images. Their resolution only begins at around 200 nanometres. However, human cells for instance contain a kind of scaffold of fine tubes that are only around seven nanometres wide. The synaptic cleft, meaning the distance between two nerve cells or between a nerve cell and a muscle cell, is just 10 to 50 nanometres – too small for conventional microscopes. The new microscope, which researchers at the University of Göttingen have helped to develop, promises much richer information. It benefits from a resolution better than five nanometres, enabling it to capture even the tiniest cell structures. It is difficult to imagine something so tiny, but if we were to compare one nanometre with one metre, it would be the equivalent of comparing the diameter of a hazelnut with the diameter of the Earth.
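The two size comparisons in the preceding paragraph are easy to sanity-check with a few lines of arithmetic; the hair width (~50 micrometres) and hazelnut diameter (~2 cm) are my own assumed ballpark figures, not numbers from the press release:

```python
# A human hair is very roughly 50 micrometres across (assumed figure).
hair_m = 50e-6
strand_m = hair_m / 10_000          # split into 10,000 strands
print(f"one strand: {strand_m * 1e9:.1f} nm")   # ~5 nm, matching the article

# Hazelnut (~2 cm) vs Earth (~12,742 km): is the ratio really ~1 nm : 1 m?
hazelnut_m = 0.02
earth_m = 12_742_000.0
print(f"hazelnut/Earth: {hazelnut_m / earth_m:.2e}")  # ~1.6e-9, close to 1e-9
```

Both analogies hold up to within a factor of two, which is as good as this kind of comparison gets.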
This type of microscope is known as a fluorescence microscope. Its operation relies on “single-molecule localization microscopy”, in which individual fluorescent molecules in a sample are switched on and off and their individual positions are then determined very precisely. The entire structure of the sample can then be modelled from the positions of these molecules. The current process enables resolutions of around 10 to 20 nanometres. Professor Jörg Enderlein’s research group at the University of Göttingen’s Faculty of Physics has now been able to double this resolution again – with the help of a highly sensitive detector and special data analysis. This means that even the tiniest details of protein organization in the connecting area between two nerve cells can be very precisely revealed.
“This newly developed technology is a milestone in the field of high-resolution microscopy. It not only offers resolutions in the single-digit nanometre range, but it is also particularly cost-effective and easy to use compared to other methods,” explains Enderlein. The scientists also developed an open-source software package for data processing in the course of publishing their findings. This means that this type of microscopy will be available to a wide range of specialists in the future.
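The “localization” step described above can be illustrated with a short sketch: estimate the position of one blinking molecule from its diffraction-limited spot. The intensity-weighted centroid below is a simple baseline estimator, not the method the Göttingen group actually uses, and the numbers (a 9×9 pixel spot, 100 nm pixels, a ~1.3 pixel Gaussian blur) are my own illustrative assumptions:

```python
import numpy as np

PIXEL_NM = 100.0  # assumed camera pixel size in nanometres

def localize_centroid(spot: np.ndarray) -> tuple[float, float]:
    """Estimate a molecule's position (in nm) from its diffraction-limited
    spot via an intensity-weighted centroid, a simple baseline for
    single-molecule localization."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    y = (ys * spot).sum() / total
    x = (xs * spot).sum() / total
    return (y * PIXEL_NM, x * PIXEL_NM)

# Synthetic diffraction-limited spot: a Gaussian centred between pixels,
# with a standard deviation of ~1.3 px (roughly a 200 nm-wide blur).
ys, xs = np.indices((9, 9))
true_y, true_x = 4.3, 3.7  # in pixels
spot = np.exp(-((ys - true_y) ** 2 + (xs - true_x) ** 2) / (2 * 1.3 ** 2))

est_y, est_x = localize_centroid(spot)
print(f"true: ({true_y * PIXEL_NM:.0f}, {true_x * PIXEL_NM:.0f}) nm, "
      f"estimate: ({est_y:.0f}, {est_x:.0f}) nm")
```

Even though the blur itself is roughly 200 nm wide, the fitted centre lands within a few nanometres of the true position, which is the trick that lets localization microscopy beat the diffraction limit.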
It seems that physicists are having a moment in the pop culture scene, and they are excited about two television series (Fallout and 3 Body Problem) that aired earlier this year in the US and Canada.
The world ends on Oct. 23, 2077, in a series of radioactive explosions—at least in the world of “Fallout,” a post-apocalyptic video game series that has now been adapted into a blockbuster TV show on Amazon’s Prime Video.
The literal fallout that ensues creates a post-apocalyptic United States that is full of mutated monstrosities, irradiated humans called ghouls and hard scrabble survivors who are caught in the middle of it all. It’s the material of classic Atomic Age sci-fi, the kind of pulp stories “Fallout” draws inspiration from for its retro-futuristic version of America.
But there is more science in this science fiction story than you might think, according to Pran Nath, Matthews distinguished university professor of physics at Northeastern University.
…
“Fallout” depicts a post-apocalyptic world centuries after nuclear war ravaged the United States. Amazon MGM Studios Photo
In the opening moments of “Fallout,” which debuted on April 10 [2024], Los Angeles is hit with a series of nuclear bombs. Although it takes place in a clearly fictional version of La La Land –– the robots and glistening, futuristic skyscrapers in the distance are dead giveaways –– the nuclear explosions themselves are shockingly realistic.
Nath says that when a nuclear device is dropped there are three stages.
“When the nuclear blast occurs, because of the chain reaction, in a very short period of time, a lot of energy and radiation is emitted,” Nath says. “In the first instance, a huge flash occurs, which is the nuclear reaction producing gamma rays. If you are exposed to it, people, for example, in Hiroshima were essentially evaporated, leaving shadows.”
Depending on how far someone is from the blast, even those who are partially protected will have their body rapidly heat up to 50 degrees Celsius, or 122 degrees Fahrenheit, causing severe burns. The scalded skin of the ghouls in “Fallout” is not entirely unheard of (although their centuries-long lifespan stretches things a bit).
The second phase is a shockwave and heat blast –– what Nath calls a “fireball.” The shockwave in the first scene of “Fallout” quickly spreads from the blast, but Nath says it would probably happen even faster and less cinematically, travelling at roughly the speed of sound, around 760 miles per hour.
The shockwave also has a huge amount of pressure, “so huge … that it can collapse concrete buildings.” It’s followed by a “fireball” that would burn every building in the blast area with an intense heatwave.
“The blast area is defined as the area where the shockwaves and the fireball are the most intense,” Nath says. “For Hiroshima, that was between 1 and 2 miles. Basically, everything is destroyed in that blast area.”
The third phase of the nuclear blast is the fallout, which lasts for much longer and has even wider ranging impacts than the blast and shockwave. The nuclear blast creates a mushroom cloud, which can reach as high as 10 miles into the atmosphere. Carried by the wind, the cloud spreads radioactivity far outside the blast area.
“In a nuclear blast, up to 100 different radioactive elements are produced,” Nath says. “These radioactive elements have lifetimes which could be a few seconds, and they could be up to millions of years. … It causes pollution and damage to the body and injuries over a longer period, causing cancer and leukemia, things like this.”
A key part of the world of “Fallout” is the Vaults, massive underground bunkers the size of small towns that the luckiest of people get to retreat into when the world ends. The Vaults are several steps above most real-world fallout shelters, but Nath says that kind of protection would be necessary if you wanted to stay safe from the kind of radiation released by nuclear weapons, particularly gamma rays that can penetrate several feet of concrete.
“If you are further away and you keep inside and behind concrete, then you can avoid both the initial flash of the nuclear blast and also could probably withstand the shockwaves and the heatwave that follows, so the survivability becomes larger,” Nath says.
But what about all the radioactive mutants wandering around the post-apocalyptic wasteland?
It might seem like the colossal, monstrous mutant salamanders and giant cockroaches of “Fallout” are a science fiction fabrication. But there is a real-world basis for this, Nath says.
“There are various kinds of abnormalities that occur [with radiation,]” Nath says. “They can also be genetic. Radiation can create mutations, which are similar to spontaneous mutation, in animals and humans. In Chernobyl, for example, they are discovering animals which are mutated.”
In the Chernobyl Exclusion Zone, the genetics of wild dogs have been radically altered. Scientists hypothesize that the wolves near Chernobyl may have developed to be more resistant to radiation, which could make them “cancer resistant,” or at least less impacted by cancer. And frogs have adapted to have more melanin in their bodies, a form of protection against radiation, turning them black.
“Fallout” takes the horrifying reality of nuclear war and spins a darkly comic sci-fi yarn, but Nath says it’s important to remember how devastating these real-world forces are.
It’s estimated that as many as 146,000 people in Hiroshima and 80,000 people in Nagasaki were killed by the effects of the bombs dropped by the U.S. Today’s nuclear weapons are so much more powerful that there is very little understanding of the impact these weapons could have. Nath says the fallout could even exacerbate global warming.
“Thermonuclear war would be a global problem,” Nath says.
Although “Fallout” is a piece of science fiction, the reality of its world-ending scenario is terrifyingly real, says Pran Nath, Matthews distinguished university professor of physics at Northeastern University. Photo by Adam Glanzman/Northeastern University
Kudos to the photographer!
3 Body Problem (television series)
This one seems to have lit a fire in the breasts of physicists everywhere. I have a number of written pieces and a video about this show, which is based on a book by Liu Cixin. (You can find out more about Liu and his work in his Wikipedia entry.)
“3 Body Problem,” Netflix’s new big-budget adaptation of Liu Cixin’s book series helmed by the creators behind “Game of Thrones,” puts the science in science fiction.
The series focuses on scientists as they attempt to solve a mystery that spans decades, continents and even galaxies. That means “3 Body Problem” throws some pretty complicated quantum mechanics and astrophysics concepts at the audience as it, sometimes literally, tries to bring these ideas down to earth.
However, at the core of the series is the three-body problem, a question that has stumped scientists for centuries.
What exactly is the three-body problem, and why is it still unsolvable? Jonathan Blazek, an assistant professor of physics at Northeastern University, explains that systems with two objects exerting gravitational force on one another, whether they’re particles or stars and planets, are predictable. Scientists have been able to solve this two-body problem and predict the orbits of objects since the days of Isaac Newton. But as soon as a third body enters the mix, the whole system gets thrown into chaos.
“The three-body problem is the statement that if you have three bodies gravitating toward each other under Newton’s law of gravitation, there is no general closed-form solution for that situation,” Blazek says. “Little differences get amplified and can lead to wildly unpredictable behavior in the future.”
In “3 Body Problem,” like in Cixin’s book, this is a reality for aliens that live in a solar system with three suns. Since all three stars are exerting gravitational forces on each other, they end up throwing the solar system into chaos as they fling each other back and forth. For the Trisolarans, the name for these aliens, it means that when a sun is jettisoned far away, their planet freezes, and when a sun is thrown extremely close to their planet, it gets torched. Worse, because of the three-body problem, these movements are completely unpredictable.
For centuries, scientists have pondered the question of how to determine a stable starting point for three gravitational bodies that would result in predictable orbits. There is still no generalizable solution that can be taken out of theory and modeled in reality, although recently scientists have started to find some potentially creative solutions, including with models based on the movements of drunk people.
“If you want to [predict] what the solar system’s going to do, we can put all the planets and as many asteroids as we know into a computer code and basically say we’re going to calculate the force between everything and move everything forward a little bit,” Blazek says. “This works, but to the extent that you’re making some approximations … all of these things will eventually break down and your prediction is going to become inaccurate.”
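Blazek’s “move everything forward a little bit” approach can be sketched in a few lines of code. This is a toy illustration, not a research-grade integrator: the masses, starting positions, units (G = 1), and softening term are all invented for the example. The point is only the chaos he describes: two runs that differ by one part in a hundred million quickly drift apart.

```python
import math

def accelerations(pos, masses, eps=0.1):
    """Pairwise gravitational accelerations (toy units, G = 1), softened
    with eps to avoid division by zero during close encounters."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + eps * eps
            f = masses[j] / (r2 * math.sqrt(r2))
            acc[i][0] += f * dx
            acc[i][1] += f * dy
    return acc

def integrate(pos, vel, masses, dt=2e-4, steps=50_000):
    """Leapfrog (kick-drift-kick) integration; returns final positions."""
    pos = [list(p) for p in pos]
    vel = [list(v) for v in vel]
    acc = accelerations(pos, masses)
    for _ in range(steps):
        for i in range(len(pos)):            # half kick, then drift
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
            pos[i][0] += dt * vel[i][0]
            pos[i][1] += dt * vel[i][1]
        acc = accelerations(pos, masses)
        for i in range(len(pos)):            # second half kick
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
    return pos

# Three equal masses released from rest in an arbitrary triangle:
# they fall together, have close encounters, and scatter chaotically.
masses = [1.0, 1.0, 1.0]
start = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
rest = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
run_a = integrate(start, rest, masses)

# The same system, with one body nudged by a hundred-millionth of a unit.
nudged = [[1e-8, 0.0], [1.0, 0.0], [0.0, 1.0]]
run_b = integrate(nudged, rest, masses)

drift = max(math.dist(a, b) for a, b in zip(run_a, run_b))
print(f"initial difference: 1e-08, final difference: {drift:.2e}")
```

The amplification of that tiny nudge is exactly why long-range predictions "break down," as Blazek puts it: the approximation errors in any real calculation play the same role as the nudge.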
Blazek says the three-body problem has captivated scientific minds because it’s a seemingly simple problem. Most high school physics students learn Newton’s law of gravity and can reasonably calculate and predict the movement of two bodies.
Three-body systems, and systems with even more bodies, show up throughout the universe, so the question is incredibly relevant. Look no further than our solar system.
The relationship between the sun, Earth and our moon is a three-body system. But Blazek says that because the sun’s pull dominates Earth’s motion while Earth’s pull dominates the moon’s, the arrangement effectively behaves as a pair of two-body systems with stable, predictable orbits –– for now.
Blazek says that although our solar system appears stable, there’s no guarantee that it will stay that way in the far future because there are still multi-body systems at play. Small changes like an asteroid hitting one of Jupiter’s moons and altering its orbit ever so slightly could eventually spiral into larger changes.
That doesn’t mean humanity will face a crisis like the one the Trisolarans face in “3 Body Problem.” These changes happen extremely slowly, but Blazek says it’s another reminder of why these concepts are interesting and important to think about in both science and science fiction.
“I don’t think anything is going to happen on the time scale of our week or even probably our species –– we have bigger problems than the instability of orbits in our solar system,” Blazek says. “But, that said, if you think about billions of years, during that period we don’t know that the orbits will stay as they currently are. There’s a good chance there will be some instability that changes how things look in the solar system.”
An April 12, 2024 news item on phys.org covers some of the same ground, Note: A link has been removed.
The science fiction television series 3 Body Problem, the latest from the creators of HBO’s Game of Thrones, has become the most watched show on Netflix since its debut last month. Based on the bestselling book trilogy Remembrance of Earth’s Past by Chinese computer engineer and author Cixin Liu, 3 Body Problem introduces viewers to advanced concepts in physics in service to a suspenseful story involving investigative police work, international intrigue, and the looming threat of an extraterrestrial invasion.
Yet how closely does the story of 3 Body Problem adhere to the science that it’s based on? The very name of the show comes from the three-body problem, a mathematical problem in physics long considered to be unsolvable.
Virginia Tech physicist Djordje Minic says, “The three-body problem is a very famous problem in classical and celestial mechanics, which goes back to Isaac Newton. It involves three celestial bodies interacting via the gravitational force—that is, Newton’s law of gravity. Unlike mathematical predictions of the motions of two-body systems, such as Earth-moon or Earth-sun, the three-body problem does not have an analytic solution.”
“At the end of the 19th century, the great French mathematician Henri Poincaré’s work on the three-body problem gave birth to what is known as chaos theory and the concept of the ‘butterfly effect.'”
Both the novels and the Netflix show contain a visualization of the three-body problem in action: a solar system made up of three suns in erratic orbit around one another. Virginia Tech aerospace engineer and mathematics expert Shane Ross discussed liberties the story takes with the science that informs it.
“There are no known configurations of three massive stars that could maintain an erratic orbit,” Ross said. “There was a big breakthrough about 20 years ago when a figure eight solution of the three-body problem was discovered, in which three equal-sized stars chase each other around on a figure eight-shaped course. In fact, Cixin Liu makes reference to this in his books. Building on that development, other mathematicians found other solutions, but in each case the movement is not chaotic.”
Ross elaborated, “It’s even more unlikely that a fourth body, a planet, would be in orbit around this system of three stars, however erratically — it would either collide with one or be ejected from the system. The situation in the book would therefore be a solution of the ‘four-body problem,’ which I guess didn’t have quite the right ring to use as a title.
“Furthermore, a stable climate is unlikely even on an Earth-like planet. At last count, there are at least a hundred independent factors that are required to create an Earth-like planet that supports life as we know it,” Ross said. “We have been fortunate to have had about 10,000 years of the most stable climate in Earth’s history, which makes us think climate stability is the norm, when in fact, it’s the exception. It’s likely no coincidence that this has corresponded with the rise of advanced human civilization.”
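The figure-eight solution Ross mentions (found numerically by Cristopher Moore and proved to exist by Alain Chenciner and Richard Montgomery around 2000) can actually be reproduced on a laptop. The sketch below, in toy units with G = 1 and three unit masses, uses the commonly published starting values and period, integrates one full period with a fourth-order Runge-Kutta scheme, and checks that the bodies return (nearly) to where they started — the non-chaotic behavior Ross describes.

```python
# Chenciner-Montgomery figure-eight: three equal masses (G = 1) chasing
# each other around a figure-eight curve. Initial conditions and period
# are the standard published numerical approximations.
X0 = [-0.97000436, 0.24308753, 0.97000436, -0.24308753, 0.0, 0.0]
V0 = [0.46620368, 0.43236573, 0.46620368, 0.43236573,
      -0.93240737, -0.86473146]
PERIOD = 6.32591398

def deriv(state):
    """Time derivative of [x1, y1, x2, y2, x3, y3, vx1, ..., vy3]:
    velocities, then pairwise gravitational accelerations."""
    pos, vel = state[:6], state[6:]
    acc = [0.0] * 6
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = pos[2 * j] - pos[2 * i]
            dy = pos[2 * j + 1] - pos[2 * i + 1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[2 * i] += dx / r3
            acc[2 * i + 1] += dy / r3
    return vel + acc

def rk4(state, dt, steps):
    """Classic fourth-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = deriv(state)
        k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
        k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
        k4 = deriv([s + dt * k for s, k in zip(state, k3)])
        state = [s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4)]
    return state

steps = 10_000
end = rk4(X0 + V0, PERIOD / steps, steps)
error = max(abs(e - s) for e, s in zip(end[:6], X0))
print(f"after one period, bodies return to within {error:.1e} of their starts")
```

Unlike the chaotic configurations, this choreography repeats itself indefinitely, which is precisely why it made such a splash when it was discovered.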
About Ross
A professor of Aerospace and Ocean Engineering at Virginia Tech, Shane Ross directs the Ross Dynamics Lab, which specializes in mathematical modeling, simulation, visualization, and experiments involving oceanic and atmospheric patterns, aerodynamic gliding, orbital mechanics, and many other disciplines. He has made fundamental contributions toward finding chaotic solutions to the three-body problem. Read his bio …
About Minic
Djordje Minic teaches physics at Virginia Tech. A specialist in string theory and quantum gravity, he has collaborated on award-winning research related to dark matter and dark energy. His most recent investigation involves the possibility that, in the context of quantum gravity, the geometry of quantum theory might be dynamical, in analogy with the dynamical nature of spacetime geometry in Einstein’s theory of gravity. View his full bio …
For the last ‘3 Body Problem’ essay, there’s this April 5, 2023 article by Tara Bitran and Phillipe Thao for Netflix.com featuring comments from a physicist concerning a number of science questions, Note: Links have been removed,
If you’ve raced through 3 Body Problem, the new series from Game of Thrones creators David Benioff and D.B. Weiss and True Blood writer Alexander Woo, chances are you want to know more about everything from Sophons and nanofibers to what actually constitutes a three-body problem. After all, even the show’s scientists are stumped when they witness their well-known theories unravel at the seams.
But for physicists like 3 Body Problem’s Jin (Jess Hong) and real-life astrophysicist Dr. Becky Smethurst (who researches how supermassive black holes grow at the University of Oxford and explains how scientific phenomena work in viral videos), answering the universe’s questions is a problem they’re delighted to solve. In fact, it’s part of the fun. “I feel like scientists look at the term ‘problem’ more excitedly than anybody else does,” Smethurst tells Tudum. “Every scientist’s dream is to be told that they got it wrong before and here’s some new data that you can now work on that shows you something different where you can learn something new.”
The eight-episode series, based on writer Cixin Liu’s internationally celebrated Remembrance of Earth’s Past trilogy, repeatedly defies human science standards and forces the characters to head back to the drawing board to figure out how to face humanity’s greatest threat. Taking us on a mind-boggling journey that spans continents and timelines, the story begins in ’60s China, when a young woman makes a fateful decision that reverberates across space and time into the present day. With humanity’s future in danger, a group of tight-knit scientists, dubbed the Oxford Five, must work against time to save the world from catastrophic consequences.
Dr. Matt Kenzie, associate professor of physics at University of Cambridge and 3 Body Problem’s science advisor, sits down with Tudum to dive into the science behind the series. So if you can’t stop thinking about stars blinking and chaotic eras, keep reading for all the answers to your burning scientific questions. Education time!
What is a Cherenkov tank?
In Episode 1, the Oxford Five’s former college professor, Dr. Vera Ye (Vedette Lim), walks out onto a platform at the top of a large tank and plunges to her death in a shallow pool of water below. If you were wondering what that huge tank was, it’s called a particle detector (sometimes also known as a Cherenkov tank). It’s used to observe, measure, and identify particles, including, in this case, neutrinos, a common particle that comes largely from the sun. “Part of the reason that they’re kind of interesting is that we don’t really understand much about them, and we suspect that they could be giving us clues to other types of physics in the universe that we don’t yet understand,” Dr. Kenzie told Netflix.
When a neutrino interacts with the water molecules stored inside the tank, it sets off a series of photomultiplier tubes — the little circles that line the tank Vera jumps into. Because Vera’s experiment is shut down and the water is reduced to a shallow level, the fall ends up killing her.
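What those photomultiplier tubes actually pick up is Cherenkov light: a neutrino interaction in the water can knock loose (or create) a charged particle, and if that particle moves faster than light travels in water (light slows to c/n in a medium of refractive index n, about 1.33 for water), it emits a faint cone of blue light. As a back-of-the-envelope sketch — using the standard threshold condition and textbook particle masses, not anything specific to the show’s fictional detector — the minimum energy needed works out like this:

```python
import math

def cherenkov_threshold_mev(rest_mass_mev, n=1.33):
    """Minimum total energy (MeV) for a charged particle to emit
    Cherenkov light in a medium with refractive index n.
    Condition: v/c > 1/n, i.e. gamma > 1 / sqrt(1 - 1/n**2)."""
    gamma = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return gamma * rest_mass_mev

# Textbook rest masses in MeV/c^2.
ELECTRON, MUON = 0.511, 105.66

for name, mass in [("electron", ELECTRON), ("muon", MUON)]:
    e = cherenkov_threshold_mev(mass)
    print(f"{name}: total energy above ~{e:.3g} MeV lights up the tank")
```

An electron needs less than a single MeV to trigger the effect, which is why enormous tanks of very pure water make such sensitive neutrino detectors.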
…
What are nanofibers?
In the show, Auggie’s a trailblazer in nanofiber technology. She runs a company that designs self-assembling synthetic polymer nanofibers and hopes to use her latest innovation to solve world problems, like poverty and disease. But what are nanofibers and how do they work? Dr. Kenzie describes nanofiber technology as “any material with a width of nanometers” — in other words, one millionth of a millimeter in thickness. Nanofibers can be constructed out of graphene (a one-atom thick layer of carbon) and are often very strong. “They can be very flexible,” he adds. “They tend to be very good conductors of both heat and electricity.”
Is nanofiber technology real, and can it actually cut through human flesh?
Nanofiber technology does exist, although Dr. Kenzie says it’s curated and grown in labs under very specific conditions. “One of the difficulties is how you hold them in place — the scaffolding it’s called,” he adds. “You have to design molecules which hold these things whilst you’re trying to build them.”
After the technology is tested on a synthetic diamond cube in Episode 2, we see the real horrors of nanofiber technology when it’s used to slice through human bodies in Episode 5. Although the nanofiber technology that exists today is not mass produced the way Auggie’s is — due to the cost of producing and containing it — Dr. Kenzie says it’s still strong enough to slice through almost anything.
What can nanofiber technology be used for?
According to Dr. Kenzie, the nanofiber technology being developed today can be used in several ways within the manufacturing and construction industries. “If you wanted a machine that could do some precision cutting, then maybe [nanofiber] would be good,” he says. “I know they’re also tested in the safety of the munitions world. If you need to bulletproof a room or bulletproof a vest, they’re incredibly light and they’re incredibly strong.” He also adds that nanofiber technology is viewed as a material of the future, which can be used for water filtration — just as we see Auggie use it in the season finale.
…
The Bitran and Thao piece includes another description of the 3 Body Problem but it’s the first I’ve seen that describes some of the other science.
Also mentioned in one of the excerpts in this posting is The Science and Entertainment Exchange (also known as The Science & Entertainment Exchange or Science & Entertainment Exchange) according to its Wikipedia entry, Note: Links have been removed,
The Science & Entertainment Exchange[1] is a program run and developed by the United States National Academy of Sciences (NAS) to increase public awareness, knowledge, and understanding of science and advanced science technology through its representation in television, film, and other media. It serves as a pro-science movement with the main goal of re-cultivating how science and scientists truly are in order to rid the public of false perceptions on these topics. The Exchange provides entertainment industry professionals with access to credible and knowledgeable scientists and engineers who help to encourage and create effective representations of science and scientists in the media, whether it be on television, in films, plays, etc. The Exchange also helps the science community understand the needs and requirements of the entertainment industry, while making sure science is conveyed in a correct and positive manner to the target audience.
Officially launched in November 2008, the Exchange can be thought of as a partnership between NAS and Hollywood, as it arranges direct consultations between scientists and entertainment professionals who develop science-themed content. This collaboration allows for industry professionals to accurately portray the science that they wish to capture and include in their media production. It also provides scientists and science organizations with the opportunity to communicate effectively with a large audience that may otherwise be hard to reach such as through innovative physics outreach. It also provides a variety of other services, including scheduling briefings, brainstorming sessions, screenings, and salons. The Exchange is based in Los Angeles, California.
…
I hadn’t realized the exchange was physics-specific. Given the success with physics, I’d expect the biology and chemistry communities would be eager to participate or to start exchanges of their own.
Back in 2019, Canada was having a problem with Malaysia and the Philippines over the garbage (this is meant literally) that we were shipping to those countries, which is why an article about Chinese science fiction writer Chen Qiufan and his 2013 novel, The Waste Tide, caught my attention and prompted my May 31, 2019 posting, “Chen Qiufan, garbage, and Chinese science fiction stories.” There’s a very brief mention of Liu Cixin in one of the excerpts.
This May 20, 2024 University of Oxford press release (also on EurekAlert) was under embargo until almost noon on May 20, 2024, which is a bit unusual in my experience (Note: I have more about the 1st summit and the interest in AI safety at the end of this posting),
Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago.
Then, the world’s leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May [2024]) approaches, twenty-five of the world’s leading AI scientists say not enough is actually being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies.
Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”
World’s response not on track in face of potentially rapid AI progress
According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems—outperforming human abilities across many critical domains—will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts.
Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.
World-leading AI experts issue call to action
In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, the late Daniel Kahneman; in total 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, EU, UK, and other AI powers, and include Turing award winners, Nobel laureates, and authors of standard AI textbooks.
This article is the first time that such a large and international group of experts have agreed on priorities for global policy makers regarding the risks from advanced AI systems.
Urgent priorities for AI governance
The authors recommend governments to:
establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
require AI companies to prioritise safety, and to demonstrate their systems cannot cause harm. This includes using “safety cases” (used for other safety-critical technologies such as aviation) which shifts the burden for demonstrating safety to AI developers.
implement mitigation standards commensurate to the risk-levels posed by AI systems. An urgent priority is to set in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.
According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.
AI impacts could be catastrophic
AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in large-scale loss of life, damage to the biosphere, and the marginalization or extinction of humanity.
Stuart Russell OBE [Order of the British Empire], Professor of Computer Science at the University of California at Berkeley and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations—that ‘regulation stifles innovation.’ That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”
…
Notable co-authors:
The world’s most-cited computer scientist (Prof. Hinton), and the most-cited scholar in AI security and privacy (Prof. Dawn Song)
China’s first Turing Award winner (Andrew Yao).
The authors of the standard textbook on artificial intelligence (Prof. Stuart Russell) and machine learning theory (Prof. Shai Shalev-Shwartz)
One of the world’s most influential public intellectuals (Prof. Yuval Noah Harari)
A Nobel Laureate in economics, the world’s most-cited economist (Prof. Daniel Kahneman)
Department-leading AI legal scholars and social scientists (Lan Xue, Qiqi Gao, and Gillian Hadfield).
Some of the world’s most renowned AI researchers from subfields such as reinforcement learning (Pieter Abbeel, Jeff Clune, Anca Dragan), AI security and privacy (Dawn Song), AI vision (Trevor Darrell, Phil Torr, Ya-Qin Zhang), automated machine learning (Frank Hutter), and several researchers in AI safety.
Additional quotes from the authors:
Philip Torr, Professor in AI, University of Oxford:
“I believe if we tread carefully the benefits of AI will outweigh the downsides, but for me one of the biggest immediate risks from AI is that we develop the ability to rapidly process data and control society, by government and industry. We could risk slipping into some Orwellian future with some form of totalitarian state having complete control.”
Dawn Song, Professor in AI at UC Berkeley, most-cited researcher in AI security and privacy:
“Explosive AI advancement is the biggest opportunity and at the same time the biggest risk for mankind. It is important to unite and reorient towards advancing AI responsibly, with dedicated resources and priority to ensure that the development of AI safety and risk mitigation capabilities can keep up with the pace of the development of AI capabilities and avoid any catastrophe.”
Yuval Noah Harari, Professor of history at Hebrew University of Jerusalem, best-selling author of ‘Sapiens’ and ‘Homo Deus’, world leading public intellectual:
“In developing AI, humanity is creating something more powerful than itself, that may escape our control and endanger the survival of our species. Instead of uniting against this shared threat, we humans are fighting among ourselves. Humankind seems hell-bent on self-destruction. We pride ourselves on being the smartest animals on the planet. It seems then that evolution is switching from survival of the fittest, to extinction of the smartest.”
Jeff Clune, Professor in AI at University of British Columbia and one of the leading researchers in reinforcement learning:
“Technologies like spaceflight, nuclear weapons and the Internet moved from science fiction to reality in a matter of years. AI is no different. We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off.”
“The risks we describe are not necessarily long-term risks. AI is progressing extremely rapidly. Even just with current trends, it is difficult to predict how capable it will be in 2-3 years. But what very few realize is that AI is already dramatically speeding up AI development. What happens if there is a breakthrough for how to create a rapidly self-improving AI system? We are now in an era where that could happen any month. Moreover, the odds of that being possible go up each month as AI improves and as the resources we invest in improving AI continue to exponentially increase.”
Gillian Hadfield, CIFAR AI Chair and Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto:
“AI labs need to walk the walk when it comes to safety. But they’re spending far less on safety than they spend on creating more capable AI systems. Spending one-third on ensuring safety and ethical use should be the minimum.”
“This technology is powerful, and we’ve seen it is becoming more powerful, fast. What is powerful is dangerous, unless it is controlled. That is why we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to safety and ethical use, comparable to their funding for AI capabilities.”
Sheila McIlrath, Professor in AI, University of Toronto, Vector Institute:
AI is software. Its reach is global and its governance needs to be as well.
Just as we’ve done with nuclear power, aviation, and with biological and nuclear weaponry, countries must establish agreements that restrict development and use of AI, and that enforce information sharing to monitor compliance. Countries must unite for the greater good of humanity.
Now is the time to act, before AI is integrated into our critical infrastructure. We need to protect and preserve the institutions that serve as the foundation of modern society.
Frank Hutter, Professor in AI at the University of Freiburg, Head of the ELLIS Unit Freiburg, 3x ERC grantee:
To be clear: we need more research on AI, not less. But we need to focus our efforts on making this technology safe. For industry, the right type of regulation will provide economic incentives to shift resources from making the most capable systems yet more powerful to making them safer. For academia, we need more public funding for trustworthy AI and maintain a low barrier to entry for research on less capable open-source AI systems. This is the most important research challenge of our time, and the right mechanism design will focus the community at large to work towards the right breakthroughs.
…
Here’s a link to and a citation for the paper,
Managing extreme AI risks amid rapid progress; Preparation requires technical research and development, as well as adaptive, proactive governance by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Science 20 May 2024 First Release DOI: 10.1126/science.adn0117
Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).
A very software approach?
This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,
In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.
…
The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.
…
The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),
At a glance
The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.
Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.
Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.
The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.
Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.
…
While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.
The European Union, meanwhile, describes its approach this way,
As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.
The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:
*The ban of AI systems posing unacceptable risks will apply six months after the entry into force
*Codes of practice will apply nine months after entry into force
*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force
High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.
…
This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”
… The AI Act is expected to come into effect in late 2025 or early 2026.
I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While my January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” it also includes information about legislative efforts, although my May 1, 2023 posting, titled “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.
A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,
Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.
A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.
Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.
The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.
The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.
“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI.
“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.
“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.
“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”
The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
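As a rough sanity check (a back-of-the-envelope sketch, not part of the report), the two growth figures quoted above can be compared directly: doubling every six months for thirteen years gives 2^26, roughly 67 million, so the report’s 350-million-fold figure implies a doubling time closer to five and a half months.

```python
import math

# Assumption quoted from the report: training compute doubles
# "around every six months" since 2010.
years = 13
doublings = years * 12 / 6        # 26 doublings in 13 years
growth = 2 ** doublings           # ~6.7e7, i.e. a ~67-million-fold increase

# The report's ~350-million-fold figure implies a slightly faster pace:
implied_doublings = math.log2(350e6)                   # ~28.4 doublings
months_per_doubling = years * 12 / implied_doublings   # ~5.5 months

print(f"{growth:.3g}x growth; ~{months_per_doubling:.1f} months per doubling")
```

The two figures are consistent once "around every six months" is read as an approximation rather than an exact doubling time.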
Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.
Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute.
The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”
Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.
For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.
The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.
Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.
“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”
These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.
The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.
They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.
Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”
Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.
The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.
“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks,” according to the organization’s homepage.
*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.
As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.
The 4th gathering was in Montréal, Québec, Canada (as per my August 31, 2021 posting). Unfortunately, this is one of those times where I’m late to the party. The 5th International Conference on Governmental Science Advice (INGSA2024) ran from May 1 – 2, 2024, but there are some satellite events taking place over the next few days.
I’m featuring this somewhat stale news because it offers a more global perspective on science policy and government advisors, from the May 1, 2024 International Network for Government Science Advice (INGSA) news release (PDF and on EurekAlert),
What? 5th International Conference on Governmental Science Advice, INGSA2024, marking the 10th Anniversary of the creation of the International Network for Governmental Science Advice (INGSA) & first meeting held in the global south.
Context: One of the largest independent gatherings of thought- and practice-leaders in governmental science advice, research funding, multi-lateral institutions, academia, science communication and diplomacy is taking place in Kigali, Rwanda. Organised by Prof Rémi Quirion, Chief Scientist of Québec and President of the International Network for Governmental Science Advice (INGSA), speakers from 39 countries[1] from Brazil to Burkina Faso and from Ireland to Indonesia, plus over 300 delegates from 65 countries, will spotlight what is really at stake in the relationship between science, societies and policy-making, during times of crisis and routine.
From the air we breathe, the cars we drive, and the Artificial Intelligence we use, to the medical treatments or the vaccines we take, and the education we provide to children, this relationship, and the decisions it can influence, matter immensely. In our post-Covid, climate-shifted, and digitally-evolving world, the importance of robust knowledge in policy-making is more pronounced than ever. This imperative is accompanied by growing complexities that demand attention. INGSA’s two-day gathering strives to both examine and empower inclusion and diversity as keystones in how we approach all-things Science Advice and Science Diplomacy to meet these local-to-global challenges.
Held previously in Auckland 2014, Brussels 2016, Tokyo 2018 and Montréal 2021, Kigali 2024 organisers have made it a priority to involve more diverse speakers from developing countries and to broaden the thematic scope. Examining the complex interactions between scientists, public policy and diplomatic relations at local, national, regional and international levels, especially in times of crisis, the overarching theme is: “The Transformation Imperative”.
The main conference programme (see link below) will scrutinise everything from case-studies outlining STI funding tips, successes and failures in our advisory systems, plus regional to global initiatives to better connect them, to how digital technologies and A.I. are reshaping the profession itself.
INGSA2024 is also initiating and hosting a range of independent side-events that, in themselves, act as major meeting and rallying points that partners and attending delegates are encouraged to maximise. These include, amongst others, events organised by the Foreign Ministries Science & Technology Advice Network (FMSTAN); the International Public Policy Observatory Roundtable (IPPO); the High-Level Dialogue on the Future of Science Diplomacy (co-organised by the American Association for the Advancement of Science (AAAS), the European Commission, the Geneva Science & Diplomacy Anticipator (GESDA), and The Royal Society); the Organisation of Southern Cooperation (OSC) meeting on ‘Bridging Worlds of Knowledge – Promoting Endogenous Knowledge Development;’ the Science for Africa Foundation, University of Oxford Pandemic Sciences Institute’s meeting on ‘Translating Research Into Policy and Practice’; and the African Institute of Mathematical Sciences (AIMS) ‘World Build Simulation Training on Quantum Technology’ with INGSA and GESDA. INGSA will also host its own internal strategy Global Chapter & Division Meetings.
Prof Rémi Quirion, Conference Co-Chair, Chief Scientist of Québec and President of INGSA, has said that:
“For those of us who believe wholeheartedly in evidence and the integrity of science, recent years have been challenging. Mis- and disinformation can spread like a virus. So positive developments like our gathering here in Rwanda are even more critical. The importance of open science and access to data to better inform scientific integration and the collective action we now need, has never been more pressing. Our shared UN sustainable development goals play out at national and local levels. Cities and municipalities bear the brunt of climate change, but also can drive the solutions. I am excited to see and hear first-hand how the global south is increasingly at the forefront of these efforts, and to help catalyse new ways to support this. I have no doubt that INGSA’s efforts and the Kigali conference, which is co-led with the Rwandan Ministry of Education and the University of Rwanda, will act as a carrier-wave for greater engagement. I hope we will see new global collaborations and actions that will be remembered as having first taken root at INGSA2024”.
Hon. Gaspard Twagirayezu, Minister of Education of Rwanda has lent his support to the INGSA conference, saying:
“We are proud to see the INSGA conference come to Rwanda, as we are at a turning point in our management of longer-term challenges that affect us all. Issues that were considered marginal even five or ten years ago are today rightly seen as central to our social, environmental, and economic wellbeing. We are aware of how rapid scientific advances are generating enormous public interest, but we also must build the capabilities to absorb, generate and critically consider new knowledge and technologies. Overcoming current crises and future challenges requires global coordination in science advice, and INGSA is well positioned to carry out this important work. It makes me particularly proud that INGSA’s Africa Chapter has chosen our capital Kigali as its pan-African base. Rwanda and Africa can benefit greatly from this collaboration.”
Assoc. Prof. Didas Kayihura Muganga, Vice-Chancellor, University of Rwanda, stated:
“What this conference shows is that grass-roots citizens in Rwanda, across Africa and Worldwide can no longer be treated as simple statistics or passive bystanders. Citizens and communities are rightfully demanding greater transparency and accountability especially about science and technology. Ensuring, and demonstrating, that decisions are informed by robust evidence is an important step. But we must also ensure that the evidence is meaningful to our context and our population. Complex problems arise from a multiplicity of factors, so we need greater diversity of perspectives to help address them. This is what is changing before our very eyes. For some it is climate, biodiversity or energy supply that matters most, for others it remains access to basic education and public health. Regardless, all exemplify humanity’s interdependence.”
Daan du Toit, acting Director-General of the Department of Science & Innovation of the Government of South Africa and Programme Committee Member commented:
“INGSA has long helped build and elevate open and ongoing public and policy dialogue about the role of robust evidence in sound policy making. But now, the conversation is deepening to critically consider the scope and breadth of evidence, what evidence, whose evidence and who has access to the evidence? Operating on all continents, INGSA demonstrates the value of a well-networked community of emerging and experienced practitioners and academics working at the interfaces between science, societies and public policy. We were involved in its creation in Auckland in 2014, and have stayed close and applaud the decision to bring this 5th International Biennial Meeting to Africa. Learning from each other, we can help bring a wider variety of robust knowledge more centrally into policy-making. That is why in 2022 we supported a start-up initiative based in Pretoria called the Science Diplomacy Capital for Africa (SDCfA). The energy shown in the set-up of this meeting demonstrates our potential as Africans to do so much more together”.
INGSA-Africa’s Regional Chapter
INGSA2024 is very much ‘coming home’ and represents the first time that this biennial event is being co-hosted by a Regional Chapter. In February 2016, INGSA announced the creation of the INGSA-Africa Regional Chapter, which held its first workshop in Hermanus, South Africa. The Chapter has since made great strides in engaging francophone Africa, organising INGSA’s first French-language workshop in Dakar, Senegal in 2017 and a bilingual meeting as a side-event of the World Science Forum 2022, Cape Town. The Chapter’s decentralised virtual governance structure means that it embraces the continent, but new initiatives, like the Kigali Training Hub, are set to become a pivotal player in the development of evidence-to-policy ecosystems across Africa.
Dr M. Oladoyin Odubanjo, Conference Co-Chair and Chair of INGSA-Africa, outlined that:
“As a public health physician and current Executive Secretary of the Nigerian Academy of Sciences (NAS), responsible for providing scientific advice to policy-makers, I have learnt that science and politics share common features. Both operate at the boundaries of knowledge and uncertainty, but they approach problems differently. Scientists question and challenge our assumptions, constantly searching for empiric evidence to determine the best options. In contrast, politicians are most often guided by the needs or demands of voters and constituencies, and by ideology. Our INGSA-Africa Chapter is working at the nexus of both communities and we encourage everybody to get involved. Hosting this conference in Kigali is like a shot in the arm that can only lead us on to even bigger and brighter things.”
Sir Peter Gluckman, President of the International Science Council, and founding chair of INGSA mentioned: “Good science advice is critical to decision making at any level from local to global. It helps decision makers understand the evidence for or against, and the implications of any choice they make. In that way science advice makes it more likely that decision makers will make better decisions. INGSA as the global capacity building platform has a critical role to play in ensuring the quality of science policy interface.”
Strength in numbers
What makes the 5th edition of this biennial event stand out is perhaps the novel range of speakers from all continents working at the boundary between science, society and policy who are willing to make their voices heard. More information on Parallel Sessions organisers as well as speakers can be found on the website.
About INGSA
Founded in 2014 with regional chapters in Africa, Asia and Latin America and the Caribbean, and key partnerships in Europe and North America, INGSA has quickly established an important reputation as a collaborative platform for policy exchange, capacity building and operational research across diverse global science advisory organisations and national systems. INGSA is a free community of peer support and practice with over 6,000 members globally. Science communicators and members of the media are warmly welcomed to join for free.
Through workshops, conferences and a growing catalogue of tools and guidance, the network aims to enhance the global science-policy interface to improve the potential for evidence-informed policy formation at sub-national, national and transnational levels. INGSA operates as an affiliated body of the International Science Council. INGSA’s secretariat is based at the University of Auckland in New Zealand, while the office of the President is hosted at the Fonds de recherche du Québec in Montréal, which has also launched the Réseau francophone international en conseil scientifique (RFICS), whose mandate is capacity reinforcement in science advice across the Francophonie.
INGSA2024 Sponsors
As always, INGSA organized a highly accessible and inclusive conference by not charging a registration fee. Philanthropic support from many sponsors made the conference possible. Special recognition is made to the Fonds de recherche du Québec, the Rwanda Ministry of Education as well as the University of Rwanda. The full list of donors is available on the INGSA2024 website (link below).
[1] Australia, Belgium, Brazil, Cameroon, Canada, Chile, China, Costa Rica, Cote d’Ivoire, Denmark, Egypt, Ethiopia, Finland, France, Germany, Ghana, India, Ireland, Italy, Jamaica, Japan, Kenya, Lebanon, Malawi, Malaysia, Mauritius, Mexico, New Zealand, Nigeria, Portugal, Rwanda, Saudi Arabia, South Africa, Spain, Sri Lanka, Uganda, UK, USA, Zimbabwe
Satellite sessions are taking place today (May 3, 2024),
High-Level Dialogue on the Future of Science
Bridging Worlds of Knowledge
Translating Research into Policy and Practice
Quantum Technology in Africa
The last session on the list, “Quantum Technology …,” is a science diplomacy role-playing workshop. (It’s of particular interest to me as the Council of Canadian Academies (CCA) released a report, Quantum Potential, in Fall 2023 and about which I’m still hoping to write a commentary.)
Even though the sessions have already taken place, it’s worth taking a look at the conference programme and the satellite events just to get a sense of the global breadth of interest in this work. Here’s the INGSA2024 website.
An October 22, 2023 commentary by Rae Hodge for Salon.com introduces the new work with a beautiful lede/lead and more,
A recently published scientific article proposes a sweeping new law of nature, approaching the matter with dry, clinical efficiency that still reads like poetry.
“A pervasive wonder of the natural world is the evolution of varied systems, including stars, minerals, atmospheres, and life,” the scientists write in the Proceedings of the National Academy of Sciences. “Evolving systems are asymmetrical with respect to time; they display temporal increases in diversity, distribution, and/or patterned behavior,” they continue, mounting their case from the shoulders of Charles Darwin, extending it toward all things living and not.
To join the known physics laws of thermodynamics, electromagnetism and Newton’s laws of motion and gravity, the nine scientists and philosophers behind the paper propose their “law of increasing functional information.”
In short, a complex and evolving system — whether that’s a flock of gold finches or a nebula or the English language — will produce ever more diverse and intricately detailed states and configurations of itself.
And here, any writer should find their breath caught in their throat. Any writer would have to pause and marvel.
It’s a rare thing to hear the voice of science singing toward its twin in the humanities. The scientists seem to be searching in their paper for the right words to describe the way the nested trills of a flautist rise through a vaulted cathedral to coalesce into notes themselves not played by human breath. And how, in the very same way, the oil-slick sheen of a June Bug wing may reveal its unseen spectra only against the brief-blooming dogwood in just the right season of sun.
Both intricate configurations of art and matter arise and fade according to their shared characteristic, long-known by students of the humanities: each have been graced with enough time to attend to the necessary affairs of their most enduring pleasures.
A paper published in the Proceedings of the National Academy of Sciences describes “a missing law of nature,” recognizing for the first time an important norm within the natural world’s workings.
In essence, the new law states that complex natural systems evolve to states of greater patterning, diversity, and complexity. In other words, evolution is not limited to life on Earth, it also occurs in other massively complex systems, from planets and stars to atoms, minerals, and more.
It was authored by a nine-member team: scientists from the Carnegie Institution for Science, the California Institute of Technology (Caltech) and Cornell University, and philosophers from the University of Colorado.
“Macroscopic” laws of nature describe and explain phenomena experienced daily in the natural world. Natural laws related to forces and motion, gravity, electromagnetism, and energy, for example, were described more than 150 years ago.
The new work presents a modern addition — a macroscopic law recognizing evolution as a common feature of the natural world’s complex systems, which are characterised as follows:
*They are formed from many different components, such as atoms, molecules, or cells, that can be arranged and rearranged repeatedly
*They are subject to natural processes that cause countless different arrangements to be formed
*Only a small fraction of all these configurations survives, in a process called “selection for function”
Regardless of whether the system is living or nonliving, when a novel configuration works well and function improves, evolution occurs.
The authors’ “Law of Increasing Functional Information” states that the system will evolve “if many different configurations of the system undergo selection for one or more functions.”
“An important component of this proposed natural law is the idea of ‘selection for function,’” says Carnegie astrobiologist Dr. Michael L. Wong, first author of the study.
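Functional information, the quantity the proposed law is named for, was given a concrete definition by Hazen and colleagues in a 2007 PNAS paper: I(E) = −log2 F(E), where F(E) is the fraction of all possible configurations of a system whose degree of function meets or exceeds E. A minimal sketch of that definition (the scoring data below are invented for illustration):

```python
import math

def functional_information(function_scores, threshold):
    """Functional information I(E) = -log2(F(E)), where F(E) is the
    fraction of configurations whose measured function meets or
    exceeds the threshold E (Hazen et al., PNAS, 2007)."""
    n_total = len(function_scores)
    n_functional = sum(1 for score in function_scores if score >= threshold)
    if n_functional == 0:
        return float("inf")  # no configuration achieves the function
    return -math.log2(n_functional / n_total)

# Toy example: 1,024 configurations, of which 4 meet the threshold.
# F(E) = 4/1024 = 1/256, so I(E) = -log2(1/256) = 8 bits.
scores = [1.0] * 4 + [0.0] * 1020
print(functional_information(scores, threshold=0.5))  # → 8.0
```

The intuition matches the press release: the rarer a functional configuration is among all possible configurations, the more functional information the system carries.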
In the case of biology, Darwin equated function primarily with survival—the ability to live long enough to produce fertile offspring.
The new study expands that perspective, noting that at least three kinds of function occur in nature.
The most basic function is stability – stable arrangements of atoms or molecules are selected to continue. Also chosen to persist are dynamic systems with ongoing supplies of energy.
The third and most interesting function is “novelty”—the tendency of evolving systems to explore new configurations that sometimes lead to startling new behaviors or characteristics.
Life’s evolutionary history is rich with novelties—photosynthesis evolved when single cells learned to harness light energy, multicellular life evolved when cells learned to cooperate, and species evolved thanks to advantageous new behaviors such as swimming, walking, flying, and thinking.
The same sort of evolution happens in the mineral kingdom. The earliest minerals represent particularly stable arrangements of atoms. Those primordial minerals provided foundations for the next generations of minerals, which participated in life’s origins. The evolution of life and minerals are intertwined, as life uses minerals for shells, teeth, and bones.
Indeed, Earth’s minerals, which began with about 20 at the dawn of our Solar System, now number almost 6,000, thanks to ever more complex physical, chemical, and ultimately biological processes over 4.5 billion years.
In the case of stars, the paper notes that just two major elements – hydrogen and helium – formed the first stars shortly after the big bang. Those earliest stars used hydrogen and helium to make about 20 heavier chemical elements. And the next generation of stars built on that diversity to produce almost 100 more elements.
“Charles Darwin eloquently articulated the way plants and animals evolve by natural selection, with many variations and traits of individuals and many different configurations,” says co-author Robert M. Hazen of Carnegie Science, a leader of the research.
“We contend that Darwinian theory is just a very special, very important case within a far larger natural phenomenon. The notion that selection for function drives evolution applies equally to stars, atoms, minerals, and many other conceptually equivalent situations where many configurations are subjected to selective pressure.”
The co-authors themselves represent a unique multi-disciplinary configuration: three philosophers of science, two astrobiologists, a data scientist, a mineralogist, and a theoretical physicist.
Says Dr. Wong: “In this new paper, we consider evolution in the broadest sense—change over time—which subsumes Darwinian evolution based upon the particulars of ‘descent with modification.’”
“The universe generates novel combinations of atoms, molecules, cells, etc. Those combinations that are stable and can go on to engender even more novelty will continue to evolve. This is what makes life the most striking example of evolution, but evolution is everywhere.”
Among many implications, the paper offers:
An understanding of how differing systems possess varying degrees to which they can continue to evolve. “Potential complexity” or “future complexity” have been proposed as metrics of how much more complex an evolving system might become
Insights into how the rate of evolution of some systems can be influenced artificially. The notion of functional information suggests that the rate of evolution in a system might be increased in at least three ways: (1) by increasing the number and/or diversity of interacting agents; (2) by increasing the number of different configurations of the system; and/or (3) by enhancing the selective pressure on the system (for example, in chemical systems, by more frequent cycles of heating/cooling or wetting/drying).
A deeper understanding of generative forces behind the creation and existence of complex phenomena in the universe, and the role of information in describing them
An understanding of life in the context of other complex evolving systems. Life shares certain conceptual equivalencies with other complex evolving systems, but the authors point to a future research direction, asking if there is something distinct about how life processes information on functionality (see also https://royalsocietypublishing.org/doi/10.1098/rsif.2022.0810).
Aiding the search for life elsewhere: if there is a demarcation between life and non-life that has to do with selection for function, can we identify the “rules of life” that allow us to discriminate that biotic dividing line in astrobiological investigations? (See also https://conta.cc/3LwLRYS, “Did Life Exist on Mars? Other Planets? With AI’s Help, We May Know Soon”)
At a time when evolving AI systems are an increasing concern, a predictive law of information that characterizes how both natural and symbolic systems evolve is especially welcome
Laws of nature – motion, gravity, electromagnetism, thermodynamics, and so on – codify the general behavior of various macroscopic natural systems across space and time.
The “law of increasing functional information” published today complements the 2nd law of thermodynamics, which states that the entropy (disorder) of an isolated system increases over time (and heat always flows from hotter to colder objects).
* * * * *
Comments
“This is a superb, bold, broad, and transformational article. … The authors are approaching the fundamental issue of the increase in complexity of the evolving universe. The purpose is a search for a ‘missing law’ that is consistent with the known laws.
“At this stage of the development of these ideas, rather like the early concepts in the mid-19th century of coming to understand ‘energy’ and ‘entropy,’ open broad discussion is now essential.”
Stuart Kauffman, Institute for Systems Biology, Seattle, WA
“The study of Wong et al. is like a breeze of fresh air blowing over the difficult terrain at the trijunction of astrobiology, systems science and evolutionary theory. It follows in the steps of giants such as Erwin Schrödinger, Ilya Prigogine, Freeman Dyson and James Lovelock. In particular, it was Schrödinger who formulated the perennial puzzle: how can complexity increase — and drastically so! — in living systems, while they remain bound by the Second Law of thermodynamics? In the pile of attempts to resolve this conundrum in the course of the last 80 years, Wong et al. offer perhaps the best shot so far.”
“Their central idea, the formulation of the law of increasing functional information, is simple but subtle: a system will manifest an increase in functional information if its various configurations generated in time are selected for one or more functions. This, the authors claim, is the controversial ‘missing law’ of complexity, and they provide a bunch of excellent examples. From my admittedly quite subjective point of view, the most interesting ones pertain to life in radically different habitats like Titan or to evolutionary trajectories characterized by multiple exaptations of traits resulting in a dramatic increase in complexity. Does the correct answer to Schrödinger’s question lie in this direction? Only time will tell, but both my head and my gut are curiously positive on that one. Finally, another great merit of this study is worth pointing out: in this day and age of rabid Counter-Enlightenment on the loose, as well as relentless attacks on the freedom of thought and speech, we certainly need more unabashedly multidisciplinary and multicultural projects like this one.”
Milan Cirkovic, Astronomical Observatory of Belgrade, Serbia; The Future of Humanity Institute, Oxford University [University of Oxford]
The natural laws we recognize today cannot yet account for one astounding characteristic of our universe—the propensity of natural systems to “evolve.” As the authors of this study attest, the tendency to increase in complexity and function through time is not specific to biology, but is a fundamental property observed throughout the universe. Wong and colleagues have distilled a set of principles which provide a foundation for cross-disciplinary discourse on evolving systems. In so doing, their work will facilitate the study of self-organization and emergent complexity in the natural world.
Corday Selden, Department of Marine and Coastal Sciences, Rutgers University
The paper “On the roles of function and selection in evolving systems” provides an innovative, compelling, and sound theoretical framework for the evolution of complex systems, encompassing both living and non-living systems. Pivotal in this new law is functional information, which quantitatively captures the possibilities a system has to perform a function. As some functions are indeed crucial for the survival of a living organism, this theory addresses the core of evolution and is open to quantitative assessment. I believe this contribution has also the merit of speaking to different scientific communities that might find a common ground for open and fruitful discussions on complexity and evolution.
Andrea Roli, Assistant Professor, Università di Bologna
Here’s a link to and a citation for the paper,
On the roles of function and selection in evolving systems by Michael L. Wong, Carol E. Cleland, Daniel Arends Jr., Stuart Bartlett, H. James Cleaves, Heather Demarest, Anirudh Prabhu, Jonathan I. Lunine, and Robert M. Hazen. Proceedings of the National Academy of Sciences (PNAS) 120 (43) e2310223120 DOI: https://doi.org/10.1073/pnas.2310223120 Published: October 16, 2023
A June 5, 2023 news item on Nanowerk announced a paper which reviews the state of the art of optical memristors, Note: Links have been removed,
AI, machine learning, and ChatGPT may be relatively new buzzwords in the public domain, but developing a computer that functions like the human brain and nervous system – both hardware and software combined – has been a decades-long challenge. Engineers at the University of Pittsburgh are today exploring how optical “memristors” may be a key to developing neuromorphic computing.
Resistors with memory, or memristors, have already demonstrated their versatility in electronics, with applications as computational circuit elements in neuromorphic computing and compact memory elements in high-density data storage. Their unique design has paved the way for in-memory computing and captured significant interest from scientists and engineers alike.
A new review article published in Nature Photonics (“Integrated Optical Memristors”) sheds light on the evolution of this technology—and the work that still needs to be done for it to reach its full potential. Led by Nathan Youngblood, assistant professor of electrical and computer engineering at the University of Pittsburgh Swanson School of Engineering, the article explores the potential of optical devices which are analogs of electronic memristors. This new class of device could play a major role in revolutionizing high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence in the optical domain.
“Researchers are truly captivated by optical memristors because of their incredible potential in high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence,” explained Youngblood. “Imagine merging the incredible advantages of optics with local information processing. It’s like opening the door to a whole new realm of technological possibilities that were previously unimaginable.”
The review article presents a comprehensive overview of recent progress in this emerging field of photonic integrated circuits. It explores the current state of the art and highlights the potential applications of optical memristors, which combine the benefits of ultrafast, high-bandwidth optical communication with local information processing. However, scalability emerged as the most pressing issue that future research should address.
“Scaling up in-memory or neuromorphic computing in the optical domain is a huge challenge. Having a technology that is fast, compact, and efficient makes scaling more achievable and would represent a huge step forward,” explained Youngblood.
“One example of the limitations is that if you were to take phase change materials, which currently have the highest storage density for optical memory, and try to implement a relatively simplistic neural network on-chip, it would take a wafer the size of a laptop to fit all the memory cells needed,” he continued. “Size matters for photonics, and we need to find a way to improve the storage density, energy efficiency, and programming speed to do useful computing at useful scales.”
Using Light to Revolutionize Computing
Optical memristors can revolutionize computing and information processing across several applications. They can enable active trimming of photonic integrated circuits (PICs), allowing for on-chip optical systems to be adjusted and reprogrammed as needed without continuously consuming power. They also offer high-speed data storage and retrieval, promising to accelerate processing, reduce energy consumption, and enable parallel processing.
Optical memristors can even be used for artificial synapses and brain-inspired architectures. Dynamic memristors with nonvolatile storage and nonlinear output replicate the long-term plasticity of synapses in the brain and pave the way for spiking integrate-and-fire computing architectures.
Research to scale up and improve optical memristor technology could unlock unprecedented possibilities for high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence.
“We looked at a lot of different technologies. The thing we noticed is that we’re still far away from the target of an ideal optical memristor–something that is compact, efficient, fast, and changes the optical properties in a significant manner,” Youngblood said. “We’re still searching for a material or a device that actually meets all these criteria in a single technology in order for it to drive the field forward.”
“Integrated Optical Memristors” (DOI: 10.1038/s41566-023-01217-w) was published in Nature Photonics and is coauthored by senior author Harish Bhaskaran at the University of Oxford, Wolfram Pernice at Heidelberg University, and Carlos Ríos at the University of Maryland.
Despite including that final paragraph, I’m also providing a link to and a citation for the paper,
Integrated optical memristors by Nathan Youngblood, Carlos A. Ríos Ocampo, Wolfram H. P. Pernice & Harish Bhaskaran. Nature Photonics volume 17, pages 561–572 (2023) DOI: https://doi.org/10.1038/s41566-023-01217-w Published online: 29 May 2023 Issue Date: July 2023
What are the ethics of incorporating human cells into computer chips? That’s the question that Julian Savulescu (Visiting Professor in Biomedical Ethics, University of Melbourne and Uehiro Chair in Practical Ethics, University of Oxford), Christopher Gyngell (Research Fellow in Biomedical Ethics, The University of Melbourne), and Tsutomu Sawai (Associate Professor, Humanities and Social Sciences, Hiroshima University) discuss in a May 24, 2022 essay on The Conversation (Note: A link has been removed),
The year is 2030 and we are at the world’s largest tech conference, CES in Las Vegas. A crowd is gathered to watch a big tech company unveil its new smartphone. The CEO comes to the stage and announces the Nyooro, containing the most powerful processor ever seen in a phone. The Nyooro can perform an astonishing quintillion operations per second, which is a thousand times faster than smartphone models in 2020. It is also ten times more energy-efficient with a battery that lasts for ten days.
A journalist asks: “What technological advance allowed such huge performance gains?” The chief executive replies: “We created a new biological chip using lab-grown human neurons. These biological chips are better than silicon chips because they can change their internal structure, adapting to a user’s usage pattern and leading to huge gains in efficiency.”
Another journalist asks: “Aren’t there ethical concerns about computers that use human brain matter?”
Although the name and scenario are fictional, this is a question we have to confront now. In December 2021, Melbourne-based Cortical Labs grew groups of neurons (brain cells) that were incorporated into a computer chip. The resulting hybrid chip works because both brains and computers share a common language: electricity.
…
The authors explain their comment that brains and computers share the common language of electricity (Note: Links have been removed),
In silicon computers, electrical signals travel along metal wires that link different components together. In brains, neurons communicate with each other using electric signals across synapses (junctions between nerve cells). In Cortical Labs’ Dishbrain system, neurons are grown on silicon chips. These neurons act like the wires in the system, connecting different components. The major advantage of this approach is that the neurons can change their shape, grow, replicate, or die in response to the demands of the system.
Dishbrain could learn to play the arcade game Pong faster than conventional AI systems. The developers of Dishbrain said: “Nothing like this has ever existed before … It is an entirely new mode of being. A fusion of silicon and neuron.”
Cortical Labs believes its hybrid chips could be the key to the kinds of complex reasoning that today’s computers and AI cannot produce. Another start-up making computers from lab-grown neurons, Koniku, believes their technology will revolutionise several industries including agriculture, healthcare, military technology and airport security. Other types of organic computers are also in the early stages of development.
…
Ethics issues arise (Note: Links have been removed),
… this raises questions about donor consent. Do people who provide tissue samples for technology research and development know that it might be used to make neural computers? Do they need to know this for their consent to be valid?
People will no doubt be much more willing to donate skin cells for research than their brain tissue. One of the barriers to brain donation is that the brain is seen as linked to your identity. But in a world where we can grow mini-brains from virtually any cell type, does it make sense to draw this type of distinction?
…
… Consider the scandal regarding Henrietta Lacks, an African-American woman whose cells were used extensively in medical and commercial research without her knowledge and consent.
Henrietta’s cells are still used in applications which generate huge amounts of revenue for pharmaceutical companies (including, recently, to develop COVID vaccines). The Lacks family still has not received any compensation. If a donor’s neurons end up being used in products like the imaginary Nyooro, should they be entitled to some of the profit made from those products?
Another key ethical consideration for neural computers is whether they could develop some form of consciousness and experience pain. Would neural computers be more likely to have experiences than silicon-based ones? …
This May 24, 2022 essay is fascinating and, if you have the time, I encourage you to read it all.
If you’re curious, you can find out about Cortical Labs here, more about Dishbrain in a February 22, 2022 article by Brian Patrick Green for iai (Institute for Art and Ideas) news, and more about Koniku in a May 31, 2018 posting about ‘wetware’ by Alissa Greenberg on Medium.
*HeLa cells are named for Henrietta Lacks who unknowingly donated her immortal cell line to medical research. You can find more about the story on the Oprah Winfrey website, which features an excerpt from the Rebecca Skloot book “The Immortal Life of Henrietta Lacks.” …
I checked; the excerpt is still on the Oprah Winfrey site.