Tag Archives: Nil Köksal

Singing the science—a study about karaoke and blushing

How did the scientists get the idea to study blushing by having people watch themselves at karaoke? The answer follows the news item and press release. First, here’s the excerpt from the July 17, 2024 news item on ScienceDaily,

A new collaboration between researchers from the Netherlands Institute for Neuroscience, the University of Amsterdam and the University of Chieti [located in Abruzzo, Italy; also known as Università degli Studi “G. d’Annunzio” Chieti – Pescara] explores the neural substrates of blushing in an MRI scanner.

A July 17, 2024 Netherlands Institute for Neuroscience (KNAW) press release (also on Eurekalert), which originated the news item, delves further into the research,

Most of us know what it feels like to blush. The face becomes warm and red, and we experience self-conscious emotions, such as embarrassment, shyness, shame, and pride. It is perhaps no wonder that Charles Darwin referred to it as “the most peculiar and the most human of all expressions”. But why do we blush, and what are the underlying mechanisms of blushing?

To answer this question, Milica Nikolic and Disa Sauter form [sic] the University of Amsterdam collaborated with Simone di Plinio from the University of Chieti under the supervision of Christian Keysers and Valeria Gazzola from the Netherlands Institute for Neuroscience.

“Blushing is a really interesting phenomenon because we still don’t know which cognitive skills are needed for it to occur”, developmental psychologist Nikolic explains. “There’s this idea in psychology that dates back to Darwin, who said that blushing occurs when we think about what other people think of us, which involves relatively complex cognitive skills”.

Blushing in a karaoke setting

The researchers investigated blushing by looking at the activated brain areas in an MRI scanner while measuring the cheek temperature—an indicator of blushing. Their participants were female adolescents, a group known to be particularly sensitive to social judgement. “It is known that blushing increases during this life stage, since adolescents are very sensitive to other people’s opinions and can be afraid of rejection or leaving a wrong impression,” Nikolic explains.

To evoke a blushing response in a controlled experimental setting, participants came to the lab for two separate sessions. During the first session, they were asked to sing purposefully chosen difficult karaoke songs and, in the second session, they watched recordings of their own singing while their brain activity and physiological responses were measured.

Adding salt to the wound, they were also told that an audience would watch their recording with them. Finally, the participants were shown recordings from another participant who had sung at a comparable level, and a professional singer who was disguised as a third participant.

The mechanism behind blushing

As expected, the researchers found that participants blushed more while watching their own recordings in comparison to other participants’  recordings.

The correlating brain areas were somewhat surprising. Blushing was associated with increased activity in the cerebellum, an area best known for its role in movement and coordination. “Lately, there has been a lot of research suggesting its involvement in emotional processing”, Nikolic adds. The researchers also found increased activation in the early visual areas, suggesting that the videos of their own singing captured blushing participants’ attention the most.

Surprisingly, there was no activation in areas that are traditionally known to be involved in the process of understanding the mental state of oneself or others. “Based on this, we concluded that thinking about others’ thoughts may not be necessary for blushing to occur,” Nikolic concludes. “Blushing may be a part of the automatic arousal you feel when you are exposed and there is something that is relevant to the self”.

Universal phenomenon

Nikolic: “The next step would be to look at blushing under different conditions, or perhaps, even explore the phenomenon in younger children, before they have developed the cognitive skills to think about other people’s thoughts.”

“Blushing in itself is very interesting because it’s universal. There are even people who develop a phobia of blushing, for example, people with a social anxiety disorder. When we understand the mechanisms of blushing, we can target the fear of blushing better as well. Aside from that, it’s interesting to know more about blushing in a general sense as well, since it happens very often and is a common part of our everyday lives.”

In answer to my question

Before getting to the paper, there’s a July 23, 2024 article by Sheena Goodyear based on a recent Canadian Broadcasting Corporation (CBC) As It Happens radio segment (if you follow the article link, you’ll find the embedded radio segment),

When researchers set out to learn what happens to our brains when we blush, they faced a conundrum — how could they sufficiently humiliate their study’s subjects as they lay alone in a dark MRI machine?

“We were thinking, well, what can we do to make people feel embarrassed and exposed while they are actually alone? [emphasis mine]” Milica Nikolić, a developmental psychologist at the University of Amsterdam, told As It Happens host Nil Köksal.

“And we knew that singing karaoke is, of course, very embarrassing.” [emphasis mine]

So Nikolić and her colleagues had the participants sing karaoke songs — each carefully selected to ensure maximum mortification — then had them watch video clips of their own performances while getting their brains scanned.

The findings, published in the journal Proceedings of the Royal Society B, shed light on the psychology of blushing, which the authors say is a completely involuntary reaction to feeling exposed. 

Nikolić says she and her colleagues knew from previous research that doing karaoke makes people blush. Nevertheless, they took steps to ensure the experience was as embarrassing as possible.

The study’s participants were all women and girls in Amsterdam, aged 16 to 20 — an age group shown to be more self-conscious about how others perceive them.

The researchers collaborated with music experts to select songs the cohort would be familiar with, and which are challenging to sing — Hello by Adele, Let It Go by Idina Menzel from the movie Frozen, All I Want for Christmas Is You by Mariah Carey, and All The Things She Said by Russian pop duo Tatu.

But before she subjected others to this experiment, Nikolić says she tried it herself — albeit, minus the MRI exam [emphasis mine].

“It was terrible,” she said. “So I knew back then that the task would work.”

Dr. Mary Lamia, a clinical psychologist in Marin County, Calif., and author of The Upside of Shame, says the findings do not surprise her — especially the part about blushing making us pay attention. 

Lamia urges people to lean into embarrassment instead of hiding or lashing out.

“Many people who blush would like to not blush, but some people just blush.  And so they have to learn to accept it and maybe even enjoy it — that they’re wearing their emotions on their face,” she said.

“It’s an honest response, an authentic response.”

The paper

Here’s a link to and a citation for the paper,

The blushing brain: neural substrates of cheek temperature increase in response to self-observation by Milica Nikolić, Simone di Plinio, Disa Sauter, Christian Keysers and Valeria Gazzola. Proceedings of the Royal Society B: Biological Sciences, August 2024, Volume 291, Issue 2027. DOI: https://doi.org/10.1098/rspb.2024.0958 Published online: 17 July 2024. © 2024 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.

This paper is open access.

2023 Nobel prizes (medicine, physics, and chemistry)

For the first time in the 15 years this blog has been around, the Nobel prizes awarded in medicine, physics, and chemistry are all in areas discussed here at one time or another. As usual where people are concerned, some of these scientists had a tortuous journey to this prestigious outcome.

Medicine

Two people (Katalin Karikó and Drew Weissman) were awarded the prize in medicine according to the October 2, 2023 Nobel Prize press release, Note: Links have been removed,

The Nobel Assembly at Karolinska Institutet [Sweden]

has today decided to award

the 2023 Nobel Prize in Physiology or Medicine

jointly to

Katalin Karikó and Drew Weissman

for their discoveries concerning nucleoside base modifications that enabled the development of effective mRNA vaccines against COVID-19

The discoveries by the two Nobel Laureates were critical for developing effective mRNA vaccines against COVID-19 during the pandemic that began in early 2020. Through their groundbreaking findings, which have fundamentally changed our understanding of how mRNA interacts with our immune system, the laureates contributed to the unprecedented rate of vaccine development during one of the greatest threats to human health in modern times.

Vaccines before the pandemic

Vaccination stimulates the formation of an immune response to a particular pathogen. This gives the body a head start in the fight against disease in the event of a later exposure. Vaccines based on killed or weakened viruses have long been available, exemplified by the vaccines against polio, measles, and yellow fever. In 1951, Max Theiler was awarded the Nobel Prize in Physiology or Medicine for developing the yellow fever vaccine.

Thanks to the progress in molecular biology in recent decades, vaccines based on individual viral components, rather than whole viruses, have been developed. Parts of the viral genetic code, usually encoding proteins found on the virus surface, are used to make proteins that stimulate the formation of virus-blocking antibodies. Examples are the vaccines against the hepatitis B virus and human papillomavirus. Alternatively, parts of the viral genetic code can be moved to a harmless carrier virus, a “vector.” This method is used in vaccines against the Ebola virus. When vector vaccines are injected, the selected viral protein is produced in our cells, stimulating an immune response against the targeted virus.

Producing whole virus-, protein- and vector-based vaccines requires large-scale cell culture. This resource-intensive process limits the possibilities for rapid vaccine production in response to outbreaks and pandemics. Therefore, researchers have long attempted to develop vaccine technologies independent of cell culture, but this proved challenging.

Figure 1. Methods for vaccine production before the COVID-19 pandemic. © The Nobel Committee for Physiology or Medicine. Ill. Mattias Karlén

mRNA vaccines: A promising idea

In our cells, genetic information encoded in DNA is transferred to messenger RNA (mRNA), which is used as a template for protein production. During the 1980s, efficient methods for producing mRNA without cell culture were introduced, called in vitro transcription. This decisive step accelerated the development of molecular biology applications in several fields. Ideas of using mRNA technologies for vaccine and therapeutic purposes also took off, but roadblocks lay ahead. In vitro transcribed mRNA was considered unstable and challenging to deliver, requiring the development of sophisticated carrier lipid systems to encapsulate the mRNA. Moreover, in vitro-produced mRNA gave rise to inflammatory reactions. Enthusiasm for developing the mRNA technology for clinical purposes was, therefore, initially limited.

These obstacles did not discourage the Hungarian biochemist Katalin Karikó, who was devoted to developing methods to use mRNA for therapy. During the early 1990s, when she was an assistant professor at the University of Pennsylvania, she remained true to her vision of realizing mRNA as a therapeutic despite encountering difficulties in convincing research funders of the significance of her project. A new colleague of Karikó at her university was the immunologist Drew Weissman. He was interested in dendritic cells, which have important functions in immune surveillance and the activation of vaccine-induced immune responses. Spurred by new ideas, a fruitful collaboration between the two soon began, focusing on how different RNA types interact with the immune system.

The breakthrough

Karikó and Weissman noticed that dendritic cells recognize in vitro transcribed mRNA as a foreign substance, which leads to their activation and the release of inflammatory signaling molecules. They wondered why the in vitro transcribed mRNA was recognized as foreign while mRNA from mammalian cells did not give rise to the same reaction. Karikó and Weissman realized that some critical properties must distinguish the different types of mRNA.

RNA contains four bases, abbreviated A, U, G, and C, corresponding to A, T, G, and C in DNA, the letters of the genetic code. Karikó and Weissman knew that bases in RNA from mammalian cells are frequently chemically modified, while in vitro transcribed mRNA is not. They wondered if the absence of altered bases in the in vitro transcribed RNA could explain the unwanted inflammatory reaction. To investigate this, they produced different variants of mRNA, each with unique chemical alterations in their bases, which they delivered to dendritic cells. The results were striking: The inflammatory response was almost abolished when base modifications were included in the mRNA. This was a paradigm change in our understanding of how cells recognize and respond to different forms of mRNA. Karikó and Weissman immediately understood that their discovery had profound significance for using mRNA as therapy. These seminal results were published in 2005, fifteen years before the COVID-19 pandemic.

Figure 2. mRNA contains four different bases, abbreviated A, U, G, and C. The Nobel Laureates discovered that base-modified mRNA can be used to block activation of inflammatory reactions (secretion of signaling molecules) and increase protein production when mRNA is delivered to cells.  © The Nobel Committee for Physiology or Medicine. Ill. Mattias Karlén

In further studies published in 2008 and 2010, Karikó and Weissman showed that the delivery of mRNA generated with base modifications markedly increased protein production compared to unmodified mRNA. The effect was due to the reduced activation of an enzyme that regulates protein production. Through their discoveries that base modifications both reduced inflammatory responses and increased protein production, Karikó and Weissman had eliminated critical obstacles on the way to clinical applications of mRNA.

mRNA vaccines realized their potential

Interest in mRNA technology began to pick up, and in 2010, several companies were working on developing the method. Vaccines against Zika virus and MERS-CoV were pursued; the latter is closely related to SARS-CoV-2. After the outbreak of the COVID-19 pandemic, two base-modified mRNA vaccines encoding the SARS-CoV-2 surface protein were developed at record speed. Protective effects of around 95% were reported, and both vaccines were approved as early as December 2020.

The impressive flexibility and speed with which mRNA vaccines can be developed pave the way for using the new platform also for vaccines against other infectious diseases. In the future, the technology may also be used to deliver therapeutic proteins and treat some cancer types.

Several other vaccines against SARS-CoV-2, based on different methodologies, were also rapidly introduced, and together, more than 13 billion COVID-19 vaccine doses have been given globally. The vaccines have saved millions of lives and prevented severe disease in many more, allowing societies to open and return to normal conditions. Through their fundamental discoveries of the importance of base modifications in mRNA, this year’s Nobel laureates critically contributed to this transformative development during one of the biggest health crises of our time.

Read more about this year’s prize

Scientific background: Discoveries concerning nucleoside base modifications that enabled the development of effective mRNA vaccines against COVID-19

Katalin Karikó was born in 1955 in Szolnok, Hungary. She received her PhD from Szeged’s University in 1982 and performed postdoctoral research at the Hungarian Academy of Sciences in Szeged until 1985. She then conducted postdoctoral research at Temple University, Philadelphia, and the University of Health Science, Bethesda. In 1989, she was appointed Assistant Professor at the University of Pennsylvania, where she remained until 2013. After that, she became vice president and later senior vice president at BioNTech RNA Pharmaceuticals. Since 2021, she has been a Professor at Szeged University and an Adjunct Professor at Perelman School of Medicine at the University of Pennsylvania.

Drew Weissman was born in 1959 in Lexington, Massachusetts, USA. He received his MD, PhD degrees from Boston University in 1987. He did his clinical training at Beth Israel Deaconess Medical Center at Harvard Medical School and postdoctoral research at the National Institutes of Health. In 1997, Weissman established his research group at the Perelman School of Medicine at the University of Pennsylvania. He is the Roberts Family Professor in Vaccine Research and Director of the Penn Institute for RNA Innovations.

The University of Pennsylvania October 2, 2023 news release is a very interesting announcement (more about why it’s interesting afterwards), Note: Links have been removed,

The University of Pennsylvania messenger RNA pioneers whose years of scientific partnership unlocked understanding of how to modify mRNA to make it an effective therapeutic—enabling a platform used to rapidly develop lifesaving vaccines amid the global COVID-19 pandemic—have been named winners of the 2023 Nobel Prize in Physiology or Medicine. They become the 28th and 29th Nobel laureates affiliated with Penn, and join nine previous Nobel laureates with ties to the University of Pennsylvania who have won the Nobel Prize in Medicine.

Nearly three years after the rollout of mRNA vaccines across the world, Katalin Karikó, PhD, an adjunct professor of Neurosurgery in Penn’s Perelman School of Medicine, and Drew Weissman, MD, PhD, the Roberts Family Professor of Vaccine Research in the Perelman School of Medicine, are recipients of the prize announced this morning by the Nobel Assembly in Solna, Sweden.

After a chance meeting in the late 1990s while photocopying research papers, Karikó and Weissman began investigating mRNA as a potential therapeutic. In 2005, they published a key discovery: mRNA could be altered and delivered effectively into the body to activate the body’s protective immune system. The mRNA-based vaccines elicited a robust immune response, including high levels of antibodies that attack a specific infectious disease that has not previously been encountered. Unlike other vaccines, a live or attenuated virus is not injected or required at any point.

When the COVID-19 pandemic struck, the true value of the pair’s lab work was revealed in the most timely of ways, as companies worked to quickly develop and deploy vaccines to protect people from the virus. Both Pfizer/BioNTech and Moderna utilized Karikó and Weissman’s technology to build their highly effective vaccines to protect against severe illness and death from the virus. In the United States alone, mRNA vaccines make up more than 655 million total doses of SARS-CoV-2 vaccines that have been administered since they became available in December 2020.

Editor’s Note: The Pfizer/BioNTech and Moderna COVID-19 mRNA vaccines both use licensed University of Pennsylvania technology. As a result of these licensing relationships, Penn, Karikó and Weissman have received and may continue to receive significant financial benefits in the future based on the sale of these products. BioNTech provides funding for Weissman’s research into the development of additional infectious disease vaccines.

Science can be brutal

Now for the interesting bit: it’s in my March 5, 2021 posting (mRNA, COVID-19 vaccines, treating genetic diseases before birth, and the scientist who started it all),

Before messenger RNA was a multibillion-dollar idea, it was a scientific backwater. And for the Hungarian-born scientist behind a key mRNA discovery, it was a career dead-end.

Katalin Karikó spent the 1990s collecting rejections. Her work, attempting to harness the power of mRNA to fight disease, was too far-fetched for government grants, corporate funding, and even support from her own colleagues.

“Every night I was working: grant, grant, grant,” Karikó remembered, referring to her efforts to obtain funding. “And it came back always no, no, no.”

By 1995, after six years on the faculty at the University of Pennsylvania, Karikó got demoted. [emphasis mine] She had been on the path to full professorship, but with no money coming in to support her work on mRNA, her bosses saw no point in pressing on.

She was back to the lower rungs of the scientific academy.

“Usually, at that point, people just say goodbye and leave because it’s so horrible,” Karikó said.

There’s no opportune time for demotion, but 1995 had already been uncommonly difficult. Karikó had recently endured a cancer scare, and her husband was stuck in Hungary sorting out a visa issue. Now the work to which she’d devoted countless hours was slipping through her fingers.

In time, those better experiments came together. After a decade of trial and error, Karikó and her longtime collaborator at Penn — Drew Weissman [emphasis mine], an immunologist with a medical degree and Ph.D. from Boston University — discovered a remedy for mRNA’s Achilles’ heel.

You can get the whole story from my March 5, 2021 posting, scroll down to the “mRNA—it’s in the details, plus, the loneliness of pioneer researchers, a demotion, and squabbles” subhead. If you are very curious about mRNA and the rough and tumble of the world of science, there’s my August 20, 2021 posting “Getting erased from the mRNA/COVID-19 story” where Ian MacLachlan is featured as a researcher who got erased and where Karikó credits his work.

‘Rowing Mom Wins Nobel’ (credit: rowing website Row 2K)

Karikó’s daughter, Susan Francia, is a two-time Olympic gold medallist, as the Canadian Broadcasting Corporation’s (CBC) radio programme As It Happens notes in an interview with her. From an October 4, 2023 As It Happens article (with embedded audio programme excerpt) by Sheena Goodyear,

Olympic gold medallist Susan Francia is coming to terms with the fact that she’s no longer the most famous person in her family.

That’s because the retired U.S. rower’s mother, Katalin Karikó, just won a Nobel Prize in Medicine. The biochemist was awarded alongside her colleague, vaccine researcher Drew Weissman, for their groundbreaking work that led to the development of COVID-19 vaccines. 

“Now I’m like, ‘Shoot! All right, I’ve got to work harder,'” Francia said with a laugh during an interview with As It Happens host Nil Köksal. 

But in all seriousness, Francia says she’s immensely proud of her mother’s accomplishments. In fact, it was Karikó’s fierce dedication to science that inspired Francia to win Olympic gold medals in 2008 and 2012.

“Sport is a lot like science in that, you know, you have a passion for something and you just go and you train, attain your goal, whether it be making this discovery that you truly believe in, or for me, it was trying to be the best in the world,” Francia said.

“It’s a grind and, honestly, I love that grind. And my mother did too.”

… one of her [Karikó’s] favourite headlines so far comes from a little blurb on the rowing website Row 2K: “Rowing Mom Wins Nobel.”

Nowadays, scientists are trying to harness the power of mRNA to fight cancer, malaria, influenza and rabies. But when Karikó first began her work, it was a fringe concept. For decades, she toiled in relative obscurity, struggling to secure funding for her research.

“That’s also that same passion that I took into my rowing,” Francia said.

But even as Karikó struggled to make a name for herself, she says her own mother, Zsuzsanna, always believed she would earn a Nobel Prize one day.

Every year, as the Nobel Prize announcement approached, she would tell Karikó she’d be watching for her name. 

“I was laughing [and saying] that, ‘Mom, I am not getting anything,'” she said. 

But her mother, who died a few years ago, ultimately proved correct. 

Congratulations to both Katalin Karikó and Drew Weissman and thank you both for persisting!

Physics

This prize is for physics at the attoscale.

Aaron W. Harrison (Assistant Professor of Chemistry, Austin College, Texas, US) attempts an explanation of an attosecond in his October 3, 2023 essay (in English “What is an attosecond? A physical chemist explains the tiny time scale behind Nobel Prize-winning research” and in French “Nobel de physique : qu’est-ce qu’une attoseconde?”) for The Conversation, Note: Links have been removed,

“Atto” is the scientific notation prefix that represents 10⁻¹⁸, which is a decimal point followed by 17 zeroes and a 1. So a flash of light lasting an attosecond, or 0.000000000000000001 of a second, is an extremely short pulse of light.

In fact, there are approximately as many attoseconds in one second as there are seconds in the age of the universe.

Previously, scientists could study the motion of heavier and slower-moving atomic nuclei with femtosecond (10⁻¹⁵) light pulses. One thousand attoseconds are in 1 femtosecond. But researchers couldn’t see movement on the electron scale until they could generate attosecond light pulses – electrons move too fast for scientists to parse exactly what they are up to at the femtosecond level.

Harrison does a very good job of explaining something that requires a leap of imagination. He also explains why scientists engage in attosecond research. h/t October 4, 2023 news item on phys.org
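For anyone who wants to sanity-check that comparison, here is a minimal back-of-the-envelope calculation in Python (the 13.8-billion-year age of the universe is my own assumed figure; it is not stated in the excerpt):

# Compare attoseconds per second with seconds elapsed since the Big Bang.
attoseconds_per_second = 1e18                   # 1 attosecond = 1e-18 s
age_of_universe_years = 13.8e9                  # assumed age of the universe
seconds_per_year = 365.25 * 24 * 3600
age_of_universe_seconds = age_of_universe_years * seconds_per_year
print(f"attoseconds in one second: {attoseconds_per_second:.1e}")
print(f"seconds since the Big Bang: {age_of_universe_seconds:.1e}")
# Both land around 10^17 to 10^18, so the "approximately as many" comparison holds
# to within a small factor; note also that 1 femtosecond = 1,000 attoseconds.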

Amelle Zaïr (Imperial College London) offers a more technical explanation in her October 4, 2023 essay about the 2023 prize winners for The Conversation. h/t October 4, 2023 news item on phys.org

Main event

Here’s the October 3, 2023 Nobel Prize press release, Note: A link has been removed,

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics 2023 to

Pierre Agostini
The Ohio State University, Columbus, USA

Ferenc Krausz
Max Planck Institute of Quantum Optics, Garching and Ludwig-Maximilians-Universität München, Germany

Anne L’Huillier
Lund University, Sweden

“for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter”

Experiments with light capture the shortest of moments

The three Nobel Laureates in Physics 2023 are being recognised for their experiments, which have given humanity new tools for exploring the world of electrons inside atoms and molecules. Pierre Agostini, Ferenc Krausz and Anne L’Huillier have demonstrated a way to create extremely short pulses of light that can be used to measure the rapid processes in which electrons move or change energy.

Fast-moving events flow into each other when perceived by humans, just like a film that consists of still images is perceived as continual movement. If we want to investigate really brief events, we need special technology. In the world of electrons, changes occur in a few tenths of an attosecond – an attosecond is so short that there are as many in one second as there have been seconds since the birth of the universe.

The laureates’ experiments have produced pulses of light so short that they are measured in attoseconds, thus demonstrating that these pulses can be used to provide images of processes inside atoms and molecules.

In 1987, Anne L’Huillier discovered that many different overtones of light arose when she transmitted infrared laser light through a noble gas. Each overtone is a light wave with a given number of cycles for each cycle in the laser light. They are caused by the laser light interacting with atoms in the gas; it gives some electrons extra energy that is then emitted as light. Anne L’Huillier has continued to explore this phenomenon, laying the ground for subsequent breakthroughs.

In 2001, Pierre Agostini succeeded in producing and investigating a series of consecutive light pulses, in which each pulse lasted just 250 attoseconds. At the same time, Ferenc Krausz was working with another type of experiment, one that made it possible to isolate a single light pulse that lasted 650 attoseconds.

The laureates’ contributions have enabled the investigation of processes that are so rapid they were previously impossible to follow.

“We can now open the door to the world of electrons. Attosecond physics gives us the opportunity to understand mechanisms that are governed by electrons. The next step will be utilising them,” says Eva Olsson, Chair of the Nobel Committee for Physics.

There are potential applications in many different areas. In electronics, for example, it is important to understand and control how electrons behave in a material. Attosecond pulses can also be used to identify different molecules, such as in medical diagnostics.

Read more about this year’s prize

Popular science background: Electrons in pulses of light (pdf)
Scientific background: “For experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter” (pdf)

Pierre Agostini. PhD 1968 from Aix-Marseille University, France. Professor at The Ohio State University, Columbus, USA.

Ferenc Krausz, born 1962 in Mór, Hungary. PhD 1991 from Vienna University of Technology, Austria. Director at Max Planck Institute of Quantum Optics, Garching and Professor at Ludwig-Maximilians-Universität München, Germany.

Anne L’Huillier, born 1958 in Paris, France. PhD 1986 from University Pierre and Marie Curie, Paris, France. Professor at Lund University, Sweden.

A Canadian connection?

An October 3, 2023 CBC online news item from the Associated Press reveals a Canadian connection of sorts,

Three scientists have won the Nobel Prize in physics Tuesday for giving us the first split-second glimpse into the superfast world of spinning electrons, a field that could one day lead to better electronics or disease diagnoses.

The award went to French-Swedish physicist Anne L’Huillier, French scientist Pierre Agostini and Hungarian-born Ferenc Krausz for their work with the tiny part of each atom that races around the centre, and that is fundamental to virtually everything: chemistry, physics, our bodies and our gadgets.

Electrons move around so fast that they have been out of reach of human efforts to isolate them. But by looking at the tiniest fraction of a second possible, scientists now have a “blurry” glimpse of them, and that opens up whole new sciences, experts said.

“The electrons are very fast, and the electrons are really the workforce in everywhere,” Nobel Committee member Mats Larsson said. “Once you can control and understand electrons, you have taken a very big step forward.”

L’Huillier is the fifth woman to receive a Nobel in Physics.

L’Huillier was teaching basic engineering physics to about 100 undergraduates at Lund when she got the call that she had won, but her phone was on silent and she didn’t pick up. She checked it during a break and called the Nobel Committee.

Then she went back to teaching.

Agostini, an emeritus professor at Ohio State University, was in Paris and could not be reached by the Nobel Committee before it announced his win to the world.

Here’s the Canadian connection (from the October 3, 2023 CBC online news item),

Krausz, of the Max Planck Institute of Quantum Optics and Ludwig Maximilian University of Munich, told reporters that he was bewildered.

“I have been trying to figure out since 11 a.m. whether I’m in reality or it’s just a long dream,” the 61-year-old said.

Last year, Krausz and L’Huillier won the prestigious Wolf prize in physics for their work, sharing it with University of Ottawa scientist Paul Corkum [emphasis mine]. Nobel prizes are limited to only three winners and Krausz said it was a shame that it could not include Corkum.

Corkum was key to how the split-second laser flashes could be measured [emphasis mine], which was crucial, Krausz said.

Congratulations to Pierre Agostini, Ferenc Krausz and Anne L’Huillier and a bow to Paul Corkum!

For those who are curious, a ‘Paul Corkum’ search should bring up a few postings on this blog, but I missed this piece of news: a May 4, 2023 University of Ottawa news release about Corkum and the 2022 Wolf Prize, which he shared with Krausz and L’Huillier.

Chemistry

There was a little drama where this prize was concerned: it was announced too early, according to an October 4, 2023 news item on phys.org and, again, in another October 4, 2023 news item on phys.org (from the Oct. 4, 2023 news item by Karl Ritter for the Associated Press),

Oops! Nobel chemistry winners are announced early in a rare slip-up

The most prestigious and secretive prize in science ran headfirst into the digital era Wednesday when Swedish media got an emailed press release revealing the winners of the Nobel Prize in chemistry and the news prematurely went public.

Here’s the fully sanctioned October 4, 2023 Nobel Prize press release, Note: A link has been removed,

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Chemistry 2023 to

Moungi G. Bawendi
Massachusetts Institute of Technology (MIT), Cambridge, MA, USA

Louis E. Brus
Columbia University, New York, NY, USA

Alexei I. Ekimov
Nanocrystals Technology Inc., New York, NY, USA

“for the discovery and synthesis of quantum dots”

They planted an important seed for nanotechnology

The Nobel Prize in Chemistry 2023 rewards the discovery and development of quantum dots, nanoparticles so tiny that their size determines their properties. These smallest components of nanotechnology now spread their light from televisions and LED lamps, and can also guide surgeons when they remove tumour tissue, among many other things.

Everyone who studies chemistry learns that an element’s properties are governed by how many electrons it has. However, when matter shrinks to nano-dimensions quantum phenomena arise; these are governed by the size of the matter. The Nobel Laureates in Chemistry 2023 have succeeded in producing particles so small that their properties are determined by quantum phenomena. The particles, which are called quantum dots, are now of great importance in nanotechnology.

“Quantum dots have many fascinating and unusual properties. Importantly, they have different colours depending on their size,” says Johan Åqvist, Chair of the Nobel Committee for Chemistry.

Physicists had long known that in theory size-dependent quantum effects could arise in nanoparticles, but at that time it was almost impossible to sculpt in nanodimensions. Therefore, few people believed that this knowledge would be put to practical use.

However, in the early 1980s, Alexei Ekimov succeeded in creating size-dependent quantum effects in coloured glass. The colour came from nanoparticles of copper chloride and Ekimov demonstrated that the particle size affected the colour of the glass via quantum effects.

A few years later, Louis Brus was the first scientist in the world to prove size-dependent quantum effects in particles floating freely in a fluid.

In 1993, Moungi Bawendi revolutionised the chemical production of quantum dots, resulting in almost perfect particles. This high quality was necessary for them to be utilised in applications.

Quantum dots now illuminate computer monitors and television screens based on QLED technology. They also add nuance to the light of some LED lamps, and biochemists and doctors use them to map biological tissue.

Quantum dots are thus bringing the greatest benefit to humankind. Researchers believe that in the future they could contribute to flexible electronics, tiny sensors, thinner solar cells and encrypted quantum communication – so we have just started exploring the potential of these tiny particles.

Read more about this year’s prize

Popular science background: They added colour to nanotechnology (pdf)
Scientific background: Quantum dots – seeds of nanoscience (pdf)

Moungi G. Bawendi, born 1961 in Paris, France. PhD 1988 from University of Chicago, IL, USA. Professor at Massachusetts Institute of Technology (MIT), Cambridge, MA, USA.

Louis E. Brus, born 1943 in Cleveland, OH, USA. PhD 1969 from Columbia University, New York, NY, USA. Professor at Columbia University, New York, NY, USA.

Alexei I. Ekimov, born 1945 in the former USSR. PhD 1974 from Ioffe Physical-Technical Institute, Saint Petersburg, Russia. Formerly Chief Scientist at Nanocrystals Technology Inc., New York, NY, USA.


The most recent ‘quantum dot’ (a particular type of nanoparticle) story here is a January 5, 2023 posting, “Can I have a beer with those carbon quantum dots?”

Proving yet again that scientists can have a bumpy trip to a Nobel prize, an October 4, 2023 news item on phys.org describes how one of the winners flunked his first undergraduate chemistry test, Note: Links have been removed,

Talk about bouncing back. MIT professor Moungi Bawendi is a co-winner of this year’s Nobel chemistry prize for helping develop “quantum dots”—nanoparticles that are now found in next generation TV screens and help illuminate tumors within the body.

But as an undergraduate, he flunked his very first chemistry exam, recalling that the experience nearly “destroyed” him.

The 62-year-old of Tunisian and French heritage excelled at science throughout high school, without ever having to break a sweat.

But when he arrived at Harvard University as an undergraduate in the late 1970s, he was in for a rude awakening.

You can find more about the winners and quantum dots in an October 4, 2023 news item on Nanowerk and in Dr. Andrew Maynard’s (Professor of Advanced Technology Transitions, Arizona State University) October 4, 2023 essay for The Conversation (h/t October 4, 2023 news item on phys.org), Note: Links have been removed,

This year’s prize recognizes Moungi Bawendi, Louis Brus and Alexei Ekimov for the discovery and development of quantum dots. For many years, these precisely constructed nanometer-sized particles – just a few hundred thousandths the width of a human hair in diameter – were the darlings of nanotechnology pitches and presentations. As a researcher and adviser on nanotechnology [emphasis mine], I’ve [Dr. Andrew Maynard] even used them myself when talking with developers, policymakers, advocacy groups and others about the promise and perils of the technology.

The origins of nanotechnology predate Bawendi, Brus and Ekimov’s work on quantum dots – the physicist Richard Feynman speculated on what could be possible through nanoscale engineering as early as 1959, and engineers like Erik Drexler were speculating about the possibilities of atomically precise manufacturing in the 1980s. However, this year’s trio of Nobel laureates were part of the earliest wave of modern nanotechnology where researchers began putting breakthroughs in material science to practical use.

Quantum dots brilliantly fluoresce: They absorb one color of light and reemit it nearly instantaneously as another color. A vial of quantum dots, when illuminated with broad spectrum light, shines with a single vivid color. What makes them special, though, is that their color is determined by how large or small they are. Make them small and you get an intense blue. Make them larger, though still nanoscale, and the color shifts to red.

The wavelength of light a quantum dot emits depends on its size. Maysinger, Ji, Hutter, Cooper, CC BY
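To see, roughly, how size sets the colour, here is a minimal sketch using the Brus effective-mass approximation; the material parameters are approximate values for CdSe quantum dots and are my assumptions, not figures from the press release or the essays quoted above:

import math
HBAR = 1.054571817e-34                 # reduced Planck constant, J*s
H, C = 6.62607015e-34, 2.99792458e8    # Planck constant (J*s) and speed of light (m/s)
E_CHARGE = 1.602176634e-19             # elementary charge, C
EPS0 = 8.8541878128e-12                # vacuum permittivity, F/m
M0 = 9.1093837015e-31                  # electron rest mass, kg
E_GAP = 1.74 * E_CHARGE                # assumed bulk CdSe band gap (~1.74 eV)
M_E, M_H = 0.13 * M0, 0.45 * M0        # assumed effective electron and hole masses for CdSe
EPS_R = 10.6                           # assumed relative permittivity of CdSe
def emission_wavelength_nm(radius_nm):
    """Estimate the emission wavelength (nm) of a quantum dot of the given radius (nm)."""
    r = radius_nm * 1e-9
    confinement = (HBAR ** 2 * math.pi ** 2) / (2 * r ** 2) * (1 / M_E + 1 / M_H)
    coulomb = 1.8 * E_CHARGE ** 2 / (4 * math.pi * EPS0 * EPS_R * r)
    energy = E_GAP + confinement - coulomb      # confined exciton energy, J
    return 1e9 * H * C / energy                 # wavelength = hc / E
for radius in (2.0, 3.0, 4.0):
    print(f"radius {radius:.1f} nm -> roughly {emission_wavelength_nm(radius):.0f} nm")
# The estimate runs from roughly 490 nm (blue-green) at a 2 nm radius to roughly 650 nm (red)
# at a 4 nm radius: smaller dots emit bluer light, larger dots redder light.

It is only a textbook approximation, but it reproduces the blue-for-small, red-for-large trend described in the excerpts above.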

There’s also an October 4, 2023 overview article by Tekla S. Perry and Margo Anderson for the IEEE Spectrum about the magazine’s almost twenty-five years of reporting on quantum dots.

Your Guide to the Newest Nobel Prize: Quantum Dots

What you need to know—and what we’ve reported—about this year’s Chemistry award

It’s not a long article and it has a heavy focus on the IEEE’s (Institute of Electrical and Electronics Engineers) reporting on the road quantum dots have taken to becoming commercialized applications.

Congratulations to Moungi Bawendi, Louis Brus, and Alexei Ekimov!

Frog pants?

How would you go about tracking these frogs?

Six images of tiny frogs wearing little plastic trackers attached to wire harnesses on their back legs.
Researchers have fitted tiny trackable radio-pants to three species of South American frogs to test their ability to navigate through the rainforest. (Submitted by Andrius Pašukonis)

You can see how tiny they are when you compare one of the frogs to a leaf visible in one of the images (top left or top right).

The answer to the question, as you may have guessed, is frog pants (or G-strings).

Sheena Goodyear’s June 13, 2023 article for the CBC’s (Canadian Broadcasting Corporation) As It Happens radio show explores the question and the research and includes an embedded 6:20 radio interview with researcher, Andrius Pašukonis,

How do you track a bunch of teeny-weeny frogs across the vast rainforests of South America? By putting teeny-weeny trackers on their teeny-weeny underwear, of course.

Biologist Andrius Pašukonis and his colleagues wanted to study the navigational capabilities of poisonous frogs that are too small for most animal tracking devices.

So he designed a Speedo-like harness that wraps around their back legs and props a tiny radio tracker on their backsides. The research team dubbed the invention “frog pants” — though Pašukonis says that’s “a bit of a misnomer.”

“My French colleagues like to call it a telemetric G-string,” Pašukonis, a senior scientist at Lithuania’s Vilnius University, told As It Happens host Nil Köksal.

“It’s a lot of fine motor skills and a lot of practice in handling tiny frogs and sewing little frog harnesses. But we go find them in the rainforest, and we catch them, and we put the tags on.”

My favourite part is “… sewing little frog harnesses.” Note: The following video features a commercial and then moves on to a 2:22 interview,

More from Goodyear’s June 13, 2023 article, Note: A link has been removed,

Pašukonis was a PhD student at the University of Vienna when he first started experimenting with the frog pants design, and later put it to use while working as a postdoctoral fellow at Stanford University in California.

He and his colleagues used the tracker pants to study the spatial skills of three frog species that range from three to five centimetres in length — diablito poison frogs in Ecuador, and brilliant-thighed poison frogs and dyeing poison frogs in French Guiana. The findings were published late last year in the journal e-Life [sic].

“The only way to study movements of animals is to be able to track them and follow them around, which nobody has managed to do or even tried to do with these tiny, tiny frogs in the rainforest,” he said.

“So that became my goal and challenge, where I spent a good part of my PhD trying different versions of different tags and different attachment methods, trial and error, to finally get to be able to put tags on and track them and study their behaviour.”

The frogs, he admits, didn’t particularly like the pants. But they didn’t seem to mind too much, and the team removed the trackers after four to six days. 

“Like any animal, they might scratch a little bit afterwards … like a dog with a new collar,” he said. “And then they just go on with their business.”

Other scientists have tried to track tiny frogs, from Goodyear’s June 13, 2023 article, Note: Links have been removed,

The design caught the eye of Richard Essner, a biologist at Southern Illinois University Edwardsville who studies animal locomotion, and has a particular interest in little frogs.

“Tracking small frogs with radio telemetry is not an easy thing to do,” Essner, who wasn’t involved in the Stanford research, told CBC in an email. 

About a decade ago, he says his lab attempted to use radio telemetry to track the movement of the threatened Illinois chorus frog using a transmitter attached via an elastic belt around the waist.

“Unfortunately, we had to abandon the study because we found that the transmitter apparatus was interfering with locomotion. If the belt was too tight, it caused abrasion. If it was too loose it slid down around the legs and left the frog immobilized and vulnerable to predation,” he said.

The frog pants, he says, seem to offer a solution to this conundrum. 

Lea Randall, a Calgary Zoo and Wilder Institute ecologist who specializes in amphibians and reptiles, ran into similar obstacles while trying to track northern leopard frogs at a reintroduction site in B.C. 

Like the Stanford researchers, her team experimented with several different designs before landing on one that worked — a belt-like attachment with some “very stylish” smooth glass beads to prevent abrasion. 

“Unfortunately, due to the weight of the radio transmitters at the time we couldn’t study smaller individuals,” she said. 

“We didn’t use leg straps, but I can see the advantages of that to help keep the transmitters in place. The creative thinking and problem solving that goes into developing these kinds of studies always amazes me.”

Finally, frogs may be smarter than we think, from Goodyear’s June 13, 2023 article,

When it comes to animal cognition and behaviour, Pašukonis says frogs are understudied —  and he believes, underestimated — compared to birds and mammals.

The poisonous rainforest frogs, he says, may be only a few centimetres in size, but when they breed, they carry their tadpoles between 200 to 300 metres across the rainforest to find them the perfect puddle to grow in.

Then they turn right around, and make their way home again. 

“How could a little frog — frogs typically are not thought to be very smart — learn to navigate on such a big scale? And how do they find their way around more on a fundamental scientific level?” Pašukonis said.

“We’re uncovering that overall amphibians, for example, might be smarter or have more complicated cognitive abilities than we thought.”

Here’s a link to and a citation for the paper published by Pašukonis and his colleagues,

Contrasting parental roles shape sex differences in poison frog space use but not navigational performance by Andrius Pašukonis, Shirley Jennifer Serrano-Rojas, Marie-Therese Fischer, Matthias-Claudio Loretto, Daniel A Shaykevich, Bibiana Rojas, Max Ringler, Alexandre B Roland, Alejandro Marcillo-Lara, Eva Ringler, Camilo Rodríguez, Luis A Coloma, Lauren A O’Connell. eLife DOI: https://doi.org/10.7554/eLife.80483 Version of Record Published: Nov 15, 2022

This paper appears to be open access.

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s). *What follows the ‘nonhuman authors’ is essentially a survey of the situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s predecessor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society and founder of the ASU Future of Being Human initiative and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.

If you have time, I recommend reading Ornes’s March 6, 2023 article.
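As an aside, the “simple mathematical code to compute the first 10 prime numbers” mentioned in the excerpt is the kind of task that fits in a handful of lines. Here is a minimal sketch in Python, my own illustration rather than the DeepMind engineer’s actual code (which the article does not reproduce), of the sort of snippet someone might paste into a ChatGPT session that has been told it is a Linux terminal:

    # Illustrative only: a trial-division script that prints the first 10 primes,
    # the sort of "simple mathematical code" described in the Quanta excerpt.
    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    primes = []
    candidate = 2
    while len(primes) < 10:
        if is_prime(candidate):
            primes.append(candidate)
        candidate += 1

    print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

The point of the anecdote is not the code itself but that a model built only to predict text produced plausible ‘terminal’ output for it.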

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles; the first, published on May 5, 2023, is “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts,” Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan’s second Fast Company article on AI issues, also published on May 5, 2023, is “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting which highlights Martin Rees’ efforts to found the Centre for Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture, “Digital versus biological intelligence: Reasons for concern about AI,” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in too. From an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

Should you live in Vancouver (Canada) and be attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm. Ticket prices range from $299 to $349 to $499, depending on when you make your purchase. From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023; it was far briefer than the March 2023 letter. From the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.