Artificial intelligence made a splash in the 2024 Nobel Prize announcements, as it was a key factor in both the physics and the chemistry prizes.
Where do physics, chemistry, and AI go from here?
I have a few speculative pieces about physics, chemistry, and AI. First off, we have Nello Cristianini’s (Professor of Artificial Intelligence at the University of Bath, England) October 10, 2024 essay for The Conversation, Note: Links have been removed,
The 2024 Nobel Prizes in physics and chemistry have given us a glimpse of the future of science. Artificial intelligence (AI) was central to the discoveries honoured by both awards. You have to wonder what Alfred Nobel, who founded the prizes, would think of it all.
We are certain to see many more Nobel medals handed to researchers who made use of AI tools. As this happens, we may find the scientific methods honoured by the Nobel committee depart from straightforward categories like “physics”, “chemistry” and “physiology or medicine”.
We may also see the scientific backgrounds of recipients retain a looser connection with these categories. This year’s physics prize was awarded to the American John Hopfield, at Princeton University, and British-born Geoffrey Hinton, from the University of Toronto. While Hopfield is a physicist, Hinton studied experimental psychology before gravitating to AI.
The chemistry prize was shared between biochemist David Baker, from the University of Washington, and the computer scientists Demis Hassabis and John Jumper, who are both at Google DeepMind in the UK.
There is a close connection between the AI-based advances honoured in the physics and chemistry categories. Hinton helped develop an approach used by DeepMind to make its breakthrough in predicting the shapes of proteins.
The physics laureates, Hinton in particular, laid the foundations of the powerful field known as machine learning. This is a subset of AI that’s concerned with algorithms, sets of rules for performing specific computational tasks.
Hopfield’s work is not particularly in use today, but the backpropagation algorithm (co-invented by Hinton) has had a tremendous impact on many different sciences and technologies. This is concerned with neural networks, a model of computing that mimics the human brain’s structure and function to process data. Backpropagation allows scientists to “train” enormous neural networks. While the Nobel committee did its best to connect this influential algorithm to physics, it’s fair to say that the link is not a direct one.
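The essay mentions backpropagation without showing what it does. As a rough illustration only (a hypothetical toy example, not the laureates’ code), the sketch below trains a tiny two-layer neural network on the XOR problem, pushing the output error backwards through the chain rule to adjust every weight:

```python
# Minimal backpropagation sketch: a 2-2-1 network with tanh hidden units
# learns XOR. The "backward pass" applies the chain rule to route the
# output error back to each weight. Toy example for illustration only.
import math
import random

random.seed(0)

DATA = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.1 - 0.1, 1.0], 0.0)]

def forward(w1, b1, w2, b2, x):
    # Hidden layer (tanh), then a linear output unit.
    h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = w2[0] * h[0] + w2[1] * h[1] + b2
    return h, y

def mse(w1, b1, w2, b2):
    return sum((forward(w1, b1, w2, b2, x)[1] - t) ** 2 for x, t in DATA) / len(DATA)

def train(epochs=5000, lr=0.1):
    w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [0.0, 0.0]
    w2 = [random.uniform(-1, 1) for _ in range(2)]
    b2 = 0.0
    start = mse(w1, b1, w2, b2)
    for _ in range(epochs):
        for x, t in DATA:
            h, y = forward(w1, b1, w2, b2, x)
            dy = y - t  # gradient of 0.5 * (y - t)^2 with respect to y
            for j in range(2):
                # Chain rule: error flows back through w2[j] and tanh.
                dh = dy * w2[j] * (1 - h[j] ** 2)
                w2[j] -= lr * dy * h[j]
                b1[j] -= lr * dh
                for i in range(2):
                    w1[j][i] -= lr * dh * x[i]
            b2 -= lr * dy
    return start, mse(w1, b1, w2, b2)

start_loss, end_loss = train()
```

The same mechanism, scaled up from four weights to billions and run on specialized hardware, is what lets scientists “train” the enormous neural networks the essay describes.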
…
Every two years, since 1994, scientists have been holding a contest to find the best ways to predict protein structures and shapes from the sequences of their amino acids. The competition is called Critical Assessment of Structure Prediction (CASP).
For the past few contests, CASP winners have used some version of DeepMind’s AlphaFold. There is, therefore, a direct line to be drawn from Hinton’s backpropagation to Google DeepMind’s AlphaFold 2 breakthrough.
…
Attributing credit has always been a controversial aspect of the Nobel prizes. A maximum of three researchers can share a Nobel. But big advances in science are collaborative. Scientific papers may have 10, 20, 30 authors or more. More than one team might contribute to the discoveries honoured by the Nobel committee.
This year we may have further discussions about the attribution of the research on the backpropagation algorithm, which has been claimed by various researchers, as well as about the general attribution of a discovery to a field like physics.
We now have a new dimension to the attribution problem. It’s increasingly unclear whether we will always be able to distinguish between the contributions of human scientists and those of their artificial collaborators – the AI tools that are already helping push forward the boundaries of our knowledge.
…
This November 26, 2024 news item on ScienceDaily, which is a little repetitive, considers interdisciplinarity in relation to the 2024 Nobel prizes,
In 2024, the Nobel Prize in physics was awarded to John Hopfield and Geoffrey Hinton for their foundational work in artificial intelligence (AI), and the Nobel Prize in chemistry went to David Baker, Demis Hassabis, and John Jumper for using AI to solve the protein-folding problem, a 50-year grand challenge problem in science.
A new article, written by researchers at Carnegie Mellon University and Calculation Consulting, examines the convergence of physics, chemistry, and AI, highlighted by recent Nobel Prizes. It traces the historical development of neural networks, emphasizing the role of interdisciplinary research in advancing AI. The authors advocate for nurturing AI-enabled polymaths to bridge the gap between theoretical advancements and practical applications, driving progress toward artificial general intelligence. The article is published in Patterns.
“With AI being recognized in connections to both physics and chemistry, practitioners of machine learning may wonder how these sciences relate to AI and how these awards might influence their work,” explained Ganesh Mani, Professor of Innovation Practice and Director of Collaborative AI at Carnegie Mellon’s Tepper School of Business, who coauthored the article. “As we move forward, it is crucial to recognize the convergence of different approaches in shaping modern AI systems based on generative AI.”
…
A November 25, 2024 Carnegie Mellon University (CMU) news release, which originated the news item, describes the paper,
In their article, the authors explore the historical development of neural networks. By examining the history of AI development, they contend, we can understand more thoroughly the connections among computer science, theoretical chemistry, theoretical physics, and applied mathematics. The historical perspective illuminates how foundational discoveries and inventions across these disciplines have enabled modern machine learning with artificial neural networks.
Then they turn to key breakthroughs and challenges in this field, starting with Hopfield’s work, and go on to explain how engineering has at times preceded scientific understanding, as is the case with the work of Jumper and Hassabis.
The authors conclude with a call to action, suggesting that the rapid progress of AI across diverse sectors presents both unprecedented opportunities and significant challenges. To bridge the gap between hype and tangible development, they say, a new generation of interdisciplinary thinkers must be cultivated.
These “modern-day Leonardo da Vincis,” as the authors call them, will be crucial in developing practical learning theories that can be applied immediately by engineers, propelling the field toward the ambitious goal of artificial general intelligence.
This calls for a paradigm shift in how scientific inquiry and problem solving are approached, say the authors, one that embraces holistic, cross-disciplinary collaboration and learns from nature to understand nature. By breaking down silos between fields and fostering a culture of intellectual curiosity that spans multiple domains, innovative solutions can be identified to complex global challenges like climate change. Through this synthesis of diverse knowledge and perspectives, catalyzed by AI, meaningful progress can be made and the field can realize the full potential of technological aspirations.
“This interdisciplinary approach is not just beneficial but essential for addressing the many complex challenges that lie ahead,” suggests Charles Martin, Principal Consultant at Calculation Consulting, who coauthored the article. “We need to harness the momentum of current advancements while remaining grounded in practical realities.”
The authors acknowledge the contributions of Scott E. Fahlman, Professor Emeritus in Carnegie Mellon’s School of Computer Science.
Here’s a link to and a citation for the paper,
The recent Physics and Chemistry Nobel Prizes, AI, and the convergence of knowledge fields by Charles H. Martin, Ganesh Mani. Patterns, 2024 DOI: 10.1016/j.patter.2024.101099 Published online November 25, 2024 Copyright: © 2024 The Author(s). Published by Elsevier Inc.
This paper is open access under a Creative Commons Attribution license (CC BY 4.0).
A scientific enthusiast: “I was a beta tester for the Nobel prize-winning AlphaFold AI”
From an October 11, 2024 essay by Rivka Isaacson (Professor of Molecular Biophysics, King’s College London) for The Conversation, Note: Links have been removed,
The deep learning machine AlphaFold, which was created by Google’s AI research lab DeepMind, is already transforming our understanding of the molecular biology that underpins health and disease.
One half of the 2024 Nobel prize in chemistry went to David Baker from the University of Washington in the US, with the other half jointly awarded to Demis Hassabis and John M. Jumper, both from London-based Google DeepMind.
If you haven’t heard of AlphaFold, it may be difficult to appreciate how important it is becoming to researchers. But as a beta tester for the software, I got to see first-hand how this technology can reveal the molecular structures of different proteins in minutes. It would take researchers months or even years to unpick these structures in laboratory experiments.
This technology could pave the way for revolutionary new treatments and drugs. But first, it’s important to understand what AlphaFold does.
Proteins are made from a series of molecular “beads”, created from a selection of the human body’s 20 different amino acids. These beads form a long chain that folds up into a mechanical shape that is crucial for the protein’s function.
Their sequence is determined by DNA. And while DNA research means we know the order of the beads that build most proteins, it’s always been a challenge to predict how the chain folds up into each “3D machine”.
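The DNA-to-sequence step the essay refers to can be made concrete. In the standard genetic code, each three-base codon specifies one amino acid (or a stop signal); the sketch below (a hypothetical snippet using only a handful of the 64 codons) translates a short DNA string into a one-letter amino-acid chain:

```python
# Translate a short DNA sequence into a one-letter amino-acid chain.
# Only a few of the 64 codons of the standard genetic code are included,
# purely for illustration.
CODON_TABLE = {
    "ATG": "M",   # methionine (also the usual start codon)
    "GGC": "G",   # glycine
    "AAA": "K",   # lysine
    "TAA": None,  # stop codon
}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3])
        if aa is None:  # stop codon (or codon missing from this toy table)
            break
        protein.append(aa)
    return "".join(protein)

sequence = translate("ATGGGCAAATAA")  # → "MGK"
```

Reading off the sequence like this is the easy part; the hard problem AlphaFold addresses is predicting how that chain of letters folds up into a 3D shape.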
These protein structures underpin all of biology. Scientists study them in the same way you might take a clock apart to understand how it works. Comprehend the parts and put together the whole: it’s the same with the human body.
Proteins are tiny, with a huge number located inside each of our 30 trillion cells. This meant that, for decades, the only way to find out their shape was through laborious experimental methods – studies that could take years.
Throughout my career I, along with many other scientists, have been engaged in such pursuits. Every time we solve a protein structure, we deposit it in a global database called the Protein Data Bank, which is free for anyone to use.
AlphaFold was trained on these structures, the majority of which were found using X-ray crystallography. For this technique, proteins are tested under thousands of different chemical states, with variations in temperature, density and pH. Researchers use a microscope to identify the conditions under which each protein lines up in a particular formation. These are then shot with X-rays to work out the spatial arrangement of all the atoms in that protein.
…
Addictive experience
In March 2024, researchers at DeepMind approached me to beta test AlphaFold3, the latest incarnation of the software, which was close to release at the time.
I’ve never been a gamer but I got a taste of the addictive experience as, once I got access, all I wanted to do was spend hours trying out molecular combinations. As well as lightning speed, this new version introduced the option to include bigger and more varied molecules, including DNA and metals, and the opportunity to modify amino acids to mimic chemical signalling in cells.
…
Understanding the moving parts and dynamics of proteins is the next frontier, now that we can predict static protein shapes with AlphaFold. Proteins come in a huge variety of shapes and sizes. They can be rigid or flexible, or made of neatly structured units connected by bendy loops.
…
…
You can read Isaacson’s entire October 11, 2024 essay on The Conversation or in an October 14, 2024 news item on phys.org.