Tag Archives: Andrew Maynard

2023 Nobel prizes (medicine, physics, and chemistry)

For the first time in the 15 years this blog has been around, the Nobel prizes awarded in medicine, physics, and chemistry are all in areas discussed here at one time or another. As usual where people are concerned, some of these scientists had a tortuous journey to this prestigious outcome.

Medicine

Two people (Katalin Karikó and Drew Weissman) were awarded the prize in medicine according to the October 2, 2023 Nobel Prize press release, Note: Links have been removed,

The Nobel Assembly at Karolinska Institutet [Sweden]

has today decided to award

the 2023 Nobel Prize in Physiology or Medicine

jointly to

Katalin Karikó and Drew Weissman

for their discoveries concerning nucleoside base modifications that enabled the development of effective mRNA vaccines against COVID-19

The discoveries by the two Nobel Laureates were critical for developing effective mRNA vaccines against COVID-19 during the pandemic that began in early 2020. Through their groundbreaking findings, which have fundamentally changed our understanding of how mRNA interacts with our immune system, the laureates contributed to the unprecedented rate of vaccine development during one of the greatest threats to human health in modern times.

Vaccines before the pandemic

Vaccination stimulates the formation of an immune response to a particular pathogen. This gives the body a head start in the fight against disease in the event of a later exposure. Vaccines based on killed or weakened viruses have long been available, exemplified by the vaccines against polio, measles, and yellow fever. In 1951, Max Theiler was awarded the Nobel Prize in Physiology or Medicine for developing the yellow fever vaccine.

Thanks to the progress in molecular biology in recent decades, vaccines based on individual viral components, rather than whole viruses, have been developed. Parts of the viral genetic code, usually encoding proteins found on the virus surface, are used to make proteins that stimulate the formation of virus-blocking antibodies. Examples are the vaccines against the hepatitis B virus and human papillomavirus. Alternatively, parts of the viral genetic code can be moved to a harmless carrier virus, a “vector.” This method is used in vaccines against the Ebola virus. When vector vaccines are injected, the selected viral protein is produced in our cells, stimulating an immune response against the targeted virus.

Producing whole virus-, protein- and vector-based vaccines requires large-scale cell culture. This resource-intensive process limits the possibilities for rapid vaccine production in response to outbreaks and pandemics. Therefore, researchers have long attempted to develop vaccine technologies independent of cell culture, but this proved challenging.

Figure 1. Methods for vaccine production before the COVID-19 pandemic. © The Nobel Committee for Physiology or Medicine. Ill. Mattias Karlén

mRNA vaccines: A promising idea

In our cells, genetic information encoded in DNA is transferred to messenger RNA (mRNA), which is used as a template for protein production. During the 1980s, efficient methods for producing mRNA without cell culture were introduced, called in vitro transcription. This decisive step accelerated the development of molecular biology applications in several fields. Ideas of using mRNA technologies for vaccine and therapeutic purposes also took off, but roadblocks lay ahead. In vitro transcribed mRNA was considered unstable and challenging to deliver, requiring the development of sophisticated carrier lipid systems to encapsulate the mRNA. Moreover, in vitro-produced mRNA gave rise to inflammatory reactions. Enthusiasm for developing the mRNA technology for clinical purposes was, therefore, initially limited.

These obstacles did not discourage the Hungarian biochemist Katalin Karikó, who was devoted to developing methods to use mRNA for therapy. During the early 1990s, when she was an assistant professor at the University of Pennsylvania, she remained true to her vision of realizing mRNA as a therapeutic despite encountering difficulties in convincing research funders of the significance of her project. A new colleague of Karikó at her university was the immunologist Drew Weissman. He was interested in dendritic cells, which have important functions in immune surveillance and the activation of vaccine-induced immune responses. Spurred by new ideas, a fruitful collaboration between the two soon began, focusing on how different RNA types interact with the immune system.

The breakthrough

Karikó and Weissman noticed that dendritic cells recognize in vitro transcribed mRNA as a foreign substance, which leads to their activation and the release of inflammatory signaling molecules. They wondered why the in vitro transcribed mRNA was recognized as foreign while mRNA from mammalian cells did not give rise to the same reaction. Karikó and Weissman realized that some critical properties must distinguish the different types of mRNA.

RNA contains four bases, abbreviated A, U, G, and C, corresponding to A, T, G, and C in DNA, the letters of the genetic code. Karikó and Weissman knew that bases in RNA from mammalian cells are frequently chemically modified, while in vitro transcribed mRNA is not. They wondered if the absence of altered bases in the in vitro transcribed RNA could explain the unwanted inflammatory reaction. To investigate this, they produced different variants of mRNA, each with unique chemical alterations in their bases, which they delivered to dendritic cells. The results were striking: The inflammatory response was almost abolished when base modifications were included in the mRNA. This was a paradigm change in our understanding of how cells recognize and respond to different forms of mRNA. Karikó and Weissman immediately understood that their discovery had profound significance for using mRNA as therapy. These seminal results were published in 2005, fifteen years before the COVID-19 pandemic.

Figure 2. mRNA contains four different bases, abbreviated A, U, G, and C. The Nobel Laureates discovered that base-modified mRNA can be used to block activation of inflammatory reactions (secretion of signaling molecules) and increase protein production when mRNA is delivered to cells.  © The Nobel Committee for Physiology or Medicine. Ill. Mattias Karlén

In further studies published in 2008 and 2010, Karikó and Weissman showed that the delivery of mRNA generated with base modifications markedly increased protein production compared to unmodified mRNA. The effect was due to the reduced activation of an enzyme that regulates protein production. Through their discoveries that base modifications both reduced inflammatory responses and increased protein production, Karikó and Weissman had eliminated critical obstacles on the way to clinical applications of mRNA.

mRNA vaccines realized their potential

Interest in mRNA technology began to pick up, and in 2010, several companies were working on developing the method. Vaccines against Zika virus and MERS-CoV were pursued; the latter is closely related to SARS-CoV-2. After the outbreak of the COVID-19 pandemic, two base-modified mRNA vaccines encoding the SARS-CoV-2 surface protein were developed at record speed. Protective effects of around 95% were reported, and both vaccines were approved as early as December 2020.

The impressive flexibility and speed with which mRNA vaccines can be developed pave the way for using the new platform also for vaccines against other infectious diseases. In the future, the technology may also be used to deliver therapeutic proteins and treat some cancer types.

Several other vaccines against SARS-CoV-2, based on different methodologies, were also rapidly introduced, and together, more than 13 billion COVID-19 vaccine doses have been given globally. The vaccines have saved millions of lives and prevented severe disease in many more, allowing societies to open and return to normal conditions. Through their fundamental discoveries of the importance of base modifications in mRNA, this year’s Nobel laureates critically contributed to this transformative development during one of the biggest health crises of our time.

Read more about this year’s prize

Scientific background: Discoveries concerning nucleoside base modifications that enabled the development of effective mRNA vaccines against COVID-19

Katalin Karikó was born in 1955 in Szolnok, Hungary. She received her PhD from Szeged’s University in 1982 and performed postdoctoral research at the Hungarian Academy of Sciences in Szeged until 1985. She then conducted postdoctoral research at Temple University, Philadelphia, and the University of Health Science, Bethesda. In 1989, she was appointed Assistant Professor at the University of Pennsylvania, where she remained until 2013. After that, she became vice president and later senior vice president at BioNTech RNA Pharmaceuticals. Since 2021, she has been a Professor at Szeged University and an Adjunct Professor at Perelman School of Medicine at the University of Pennsylvania.

Drew Weissman was born in 1959 in Lexington, Massachusetts, USA. He received his MD, PhD degrees from Boston University in 1987. He did his clinical training at Beth Israel Deaconess Medical Center at Harvard Medical School and postdoctoral research at the National Institutes of Health. In 1997, Weissman established his research group at the Perelman School of Medicine at the University of Pennsylvania. He is the Roberts Family Professor in Vaccine Research and Director of the Penn Institute for RNA Innovations.

The University of Pennsylvania October 2, 2023 news release is a very interesting announcement (more about why it’s interesting afterwards), Note: Links have been removed,

The University of Pennsylvania messenger RNA pioneers whose years of scientific partnership unlocked understanding of how to modify mRNA to make it an effective therapeutic—enabling a platform used to rapidly develop lifesaving vaccines amid the global COVID-19 pandemic—have been named winners of the 2023 Nobel Prize in Physiology or Medicine. They become the 28th and 29th Nobel laureates affiliated with Penn, and join nine previous Nobel laureates with ties to the University of Pennsylvania who have won the Nobel Prize in Medicine.

Nearly three years after the rollout of mRNA vaccines across the world, Katalin Karikó, PhD, an adjunct professor of Neurosurgery in Penn’s Perelman School of Medicine, and Drew Weissman, MD, PhD, the Roberts Family Professor of Vaccine Research in the Perelman School of Medicine, are recipients of the prize announced this morning by the Nobel Assembly in Solna, Sweden.

After a chance meeting in the late 1990s while photocopying research papers, Karikó and Weissman began investigating mRNA as a potential therapeutic. In 2005, they published a key discovery: mRNA could be altered and delivered effectively into the body to activate the body’s protective immune system. The mRNA-based vaccines elicited a robust immune response, including high levels of antibodies that attack a specific infectious disease that has not previously been encountered. Unlike other vaccines, a live or attenuated virus is not injected or required at any point.

When the COVID-19 pandemic struck, the true value of the pair’s lab work was revealed in the most timely of ways, as companies worked to quickly develop and deploy vaccines to protect people from the virus. Both Pfizer/BioNTech and Moderna utilized Karikó and Weissman’s technology to build their highly effective vaccines to protect against severe illness and death from the virus. In the United States alone, mRNA vaccines make up more than 655 million total doses of SARS-CoV-2 vaccines that have been administered since they became available in December 2020.

Editor’s Note: The Pfizer/BioNTech and Moderna COVID-19 mRNA vaccines both use licensed University of Pennsylvania technology. As a result of these licensing relationships, Penn, Karikó and Weissman have received and may continue to receive significant financial benefits in the future based on the sale of these products. BioNTech provides funding for Weissman’s research into the development of additional infectious disease vaccines.

Science can be brutal

Now for the interesting bit: it’s in my March 5, 2021 posting (mRNA, COVID-19 vaccines, treating genetic diseases before birth, and the scientist who started it all),

Before messenger RNA was a multibillion-dollar idea, it was a scientific backwater. And for the Hungarian-born scientist behind a key mRNA discovery, it was a career dead-end.

Katalin Karikó spent the 1990s collecting rejections. Her work, attempting to harness the power of mRNA to fight disease, was too far-fetched for government grants, corporate funding, and even support from her own colleagues.

“Every night I was working: grant, grant, grant,” Karikó remembered, referring to her efforts to obtain funding. “And it came back always no, no, no.”

By 1995, after six years on the faculty at the University of Pennsylvania, Karikó got demoted. [emphasis mine] She had been on the path to full professorship, but with no money coming in to support her work on mRNA, her bosses saw no point in pressing on.

She was back to the lower rungs of the scientific academy.

“Usually, at that point, people just say goodbye and leave because it’s so horrible,” Karikó said.

There’s no opportune time for demotion, but 1995 had already been uncommonly difficult. Karikó had recently endured a cancer scare, and her husband was stuck in Hungary sorting out a visa issue. Now the work to which she’d devoted countless hours was slipping through her fingers.

In time, those better experiments came together. After a decade of trial and error, Karikó and her longtime collaborator at Penn — Drew Weissman [emphasis mine], an immunologist with a medical degree and Ph.D. from Boston University — discovered a remedy for mRNA’s Achilles’ heel.

You can get the whole story from my March 5, 2021 posting, scroll down to the “mRNA—it’s in the details, plus, the loneliness of pioneer researchers, a demotion, and squabbles” subhead. If you are very curious about mRNA and the rough and tumble of the world of science, there’s my August 20, 2021 posting “Getting erased from the mRNA/COVID-19 story” where Ian MacLachlan is featured as a researcher who got erased and where Karikó credits his work.

‘Rowing Mom Wins Nobel’ (credit: rowing website Row 2K)

Karikó’s daughter is a two-time gold medal Olympic athlete as the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens, notes in an interview with the daughter (Susan Francia). From an October 4, 2023 As It Happens article (with embedded audio programme excerpt) by Sheena Goodyear,

Olympic gold medallist Susan Francia is coming to terms with the fact that she’s no longer the most famous person in her family.

That’s because the retired U.S. rower’s mother, Katalin Karikó, just won a Nobel Prize in Medicine. The biochemist was awarded alongside her colleague, vaccine researcher Drew Weissman, for their groundbreaking work that led to the development of COVID-19 vaccines. 

“Now I’m like, ‘Shoot! All right, I’ve got to work harder,'” Francia said with a laugh during an interview with As It Happens host Nil Köksal. 

But in all seriousness, Francia says she’s immensely proud of her mother’s accomplishments. In fact, it was Karikó’s fierce dedication to science that inspired Francia to win Olympic gold medals in 2008 and 2012.

“Sport is a lot like science in that, you know, you have a passion for something and you just go and you train, attain your goal, whether it be making this discovery that you truly believe in, or for me, it was trying to be the best in the world,” Francia said.

“It’s a grind and, honestly, I love that grind. And my mother did too.”

… one of her [Karikó’s] favourite headlines so far comes from a little blurb on the rowing website Row 2K: “Rowing Mom Wins Nobel.”

Nowadays, scientists are trying to harness the power of mRNA to fight cancer, malaria, influenza and rabies. But when Karikó first began her work, it was a fringe concept. For decades, she toiled in relative obscurity, struggling to secure funding for her research.

“That’s also that same passion that I took into my rowing,” Francia said.

But even as Karikó struggled to make a name for herself, she says her own mother, Zsuzsanna, always believed she would earn a Nobel Prize one day.

Every year, as the Nobel Prize announcement approached, she would tell Karikó she’d be watching for her name. 

“I was laughing [and saying] that, ‘Mom, I am not getting anything,'” she said. 

But her mother, who died a few years ago, ultimately proved correct. 

Congratulations to both Katalin Karikó and Drew Weissman and thank you both for persisting!

Physics

This prize is for physics at the attoscale.

Aaron W. Harrison (Assistant Professor of Chemistry, Austin College, Texas, US) attempts an explanation of an attosecond in his October 3, 2023 essay (in English “What is an attosecond? A physical chemist explains the tiny time scale behind Nobel Prize-winning research” and in French “Nobel de physique : qu’est-ce qu’une attoseconde?”) for The Conversation, Note: Links have been removed,

“Atto” is the scientific notation prefix that represents 10⁻¹⁸, which is a decimal point followed by 17 zeroes and a 1. So a flash of light lasting an attosecond, or 0.000000000000000001 of a second, is an extremely short pulse of light.

In fact, there are approximately as many attoseconds in one second as there are seconds in the age of the universe.

Previously, scientists could study the motion of heavier and slower-moving atomic nuclei with femtosecond (10⁻¹⁵) light pulses. One thousand attoseconds are in 1 femtosecond. But researchers couldn’t see movement on the electron scale until they could generate attosecond light pulses – electrons move too fast for scientists to parse exactly what they are up to at the femtosecond level.

Harrison does a very good job of explaining something that requires a leap of imagination. He also explains why scientists engage in attosecond research. h/t October 4, 2023 news item on phys.org

Amelle Zaïr (Imperial College London) offers a more technical explanation in her October 4, 2023 essay about the 2023 prize winners for The Conversation. h/t October 4, 2023 news item on phys.org

Main event

Here’s the October 3, 2023 Nobel Prize press release, Note: A link has been removed,

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics 2023 to

Pierre Agostini
The Ohio State University, Columbus, USA

Ferenc Krausz
Max Planck Institute of Quantum Optics, Garching and Ludwig-Maximilians-Universität München, Germany

Anne L’Huillier
Lund University, Sweden

“for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter”

Experiments with light capture the shortest of moments

The three Nobel Laureates in Physics 2023 are being recognised for their experiments, which have given humanity new tools for exploring the world of electrons inside atoms and molecules. Pierre Agostini, Ferenc Krausz and Anne L’Huillier have demonstrated a way to create extremely short pulses of light that can be used to measure the rapid processes in which electrons move or change energy.

Fast-moving events flow into each other when perceived by humans, just like a film that consists of still images is perceived as continual movement. If we want to investigate really brief events, we need special technology. In the world of electrons, changes occur in a few tenths of an attosecond – an attosecond is so short that there are as many in one second as there have been seconds since the birth of the universe.

The laureates’ experiments have produced pulses of light so short that they are measured in attoseconds, thus demonstrating that these pulses can be used to provide images of processes inside atoms and molecules.

In 1987, Anne L’Huillier discovered that many different overtones of light arose when she transmitted infrared laser light through a noble gas. Each overtone is a light wave with a given number of cycles for each cycle in the laser light. They are caused by the laser light interacting with atoms in the gas; it gives some electrons extra energy that is then emitted as light. Anne L’Huillier has continued to explore this phenomenon, laying the ground for subsequent breakthroughs.

In 2001, Pierre Agostini succeeded in producing and investigating a series of consecutive light pulses, in which each pulse lasted just 250 attoseconds. At the same time, Ferenc Krausz was working with another type of experiment, one that made it possible to isolate a single light pulse that lasted 650 attoseconds.

The laureates’ contributions have enabled the investigation of processes that are so rapid they were previously impossible to follow.

“We can now open the door to the world of electrons. Attosecond physics gives us the opportunity to understand mechanisms that are governed by electrons. The next step will be utilising them,” says Eva Olsson, Chair of the Nobel Committee for Physics.

There are potential applications in many different areas. In electronics, for example, it is important to understand and control how electrons behave in a material. Attosecond pulses can also be used to identify different molecules, such as in medical diagnostics.

Read more about this year’s prize

Popular science background: Electrons in pulses of light (pdf)
Scientific background: “For experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter” (pdf)

Pierre Agostini. PhD 1968 from Aix-Marseille University, France. Professor at The Ohio State University, Columbus, USA.

Ferenc Krausz, born 1962 in Mór, Hungary. PhD 1991 from Vienna University of Technology, Austria. Director at Max Planck Institute of Quantum Optics, Garching and Professor at Ludwig-Maximilians-Universität München, Germany.

Anne L’Huillier, born 1958 in Paris, France. PhD 1986 from University Pierre and Marie Curie, Paris, France. Professor at Lund University, Sweden.

A Canadian connection?

An October 3, 2023 CBC online news item from the Associated Press reveals a Canadian connection of sorts,

Three scientists have won the Nobel Prize in physics Tuesday for giving us the first split-second glimpse into the superfast world of spinning electrons, a field that could one day lead to better electronics or disease diagnoses.

The award went to French-Swedish physicist Anne L’Huillier, French scientist Pierre Agostini and Hungarian-born Ferenc Krausz for their work with the tiny part of each atom that races around the centre, and that is fundamental to virtually everything: chemistry, physics, our bodies and our gadgets.

Electrons move around so fast that they have been out of reach of human efforts to isolate them. But by looking at the tiniest fraction of a second possible, scientists now have a “blurry” glimpse of them, and that opens up whole new sciences, experts said.

“The electrons are very fast, and the electrons are really the workforce in everywhere,” Nobel Committee member Mats Larsson said. “Once you can control and understand electrons, you have taken a very big step forward.”

L’Huillier is the fifth woman to receive a Nobel in Physics.

L’Huillier was teaching basic engineering physics to about 100 undergraduates at Lund when she got the call that she had won, but her phone was on silent and she didn’t pick up. She checked it during a break and called the Nobel Committee.

Then she went back to teaching.

Agostini, an emeritus professor at Ohio State University, was in Paris and could not be reached by the Nobel Committee before it announced his win to the world.

Here’s the Canadian connection (from the October 3, 2023 CBC online news item),

Krausz, of the Max Planck Institute of Quantum Optics and Ludwig Maximilian University of Munich, told reporters that he was bewildered.

“I have been trying to figure out since 11 a.m. whether I’m in reality or it’s just a long dream,” the 61-year-old said.

Last year, Krausz and L’Huillier won the prestigious Wolf prize in physics for their work, sharing it with University of Ottawa scientist Paul Corkum [emphasis mine]. Nobel prizes are limited to only three winners and Krausz said it was a shame that it could not include Corkum.

Corkum was key to how the split-second laser flashes could be measured [emphasis mine], which was crucial, Krausz said.

Congratulations to Pierre Agostini, Ferenc Krausz and Anne L’Huillier and a bow to Paul Corkum!

For those who are curious, a ‘Paul Corkum’ search should bring up a few postings on this blog. I missed this piece of news, though: a May 4, 2023 University of Ottawa news release about Corkum and the 2022 Wolf Prize, which he shared with Krausz and L’Huillier.

Chemistry

There was a little drama where this prize was concerned: it was announced too early, according to an October 4, 2023 news item on phys.org and, again, another October 4, 2023 news item on phys.org (from the Oct. 4, 2023 news item by Karl Ritter for the Associated Press),

Oops! Nobel chemistry winners are announced early in a rare slip-up

The most prestigious and secretive prize in science ran headfirst into the digital era Wednesday when Swedish media got an emailed press release revealing the winners of the Nobel Prize in chemistry and the news prematurely went public.

Here’s the fully sanctioned October 4, 2023 Nobel Prize press release, Note: A link has been removed,

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Chemistry 2023 to

Moungi G. Bawendi
Massachusetts Institute of Technology (MIT), Cambridge, MA, USA

Louis E. Brus
Columbia University, New York, NY, USA

Alexei I. Ekimov
Nanocrystals Technology Inc., New York, NY, USA

“for the discovery and synthesis of quantum dots”

They planted an important seed for nanotechnology

The Nobel Prize in Chemistry 2023 rewards the discovery and development of quantum dots, nanoparticles so tiny that their size determines their properties. These smallest components of nanotechnology now spread their light from televisions and LED lamps, and can also guide surgeons when they remove tumour tissue, among many other things.

Everyone who studies chemistry learns that an element’s properties are governed by how many electrons it has. However, when matter shrinks to nano-dimensions quantum phenomena arise; these are governed by the size of the matter. The Nobel Laureates in Chemistry 2023 have succeeded in producing particles so small that their properties are determined by quantum phenomena. The particles, which are called quantum dots, are now of great importance in nanotechnology.

“Quantum dots have many fascinating and unusual properties. Importantly, they have different colours depending on their size,” says Johan Åqvist, Chair of the Nobel Committee for Chemistry.

Physicists had long known that in theory size-dependent quantum effects could arise in nanoparticles, but at that time it was almost impossible to sculpt in nanodimensions. Therefore, few people believed that this knowledge would be put to practical use.

However, in the early 1980s, Alexei Ekimov succeeded in creating size-dependent quantum effects in coloured glass. The colour came from nanoparticles of copper chloride and Ekimov demonstrated that the particle size affected the colour of the glass via quantum effects.

A few years later, Louis Brus was the first scientist in the world to prove size-dependent quantum effects in particles floating freely in a fluid.

In 1993, Moungi Bawendi revolutionised the chemical production of quantum dots, resulting in almost perfect particles. This high quality was necessary for them to be utilised in applications.

Quantum dots now illuminate computer monitors and television screens based on QLED technology. They also add nuance to the light of some LED lamps, and biochemists and doctors use them to map biological tissue.

Quantum dots are thus bringing the greatest benefit to humankind. Researchers believe that in the future they could contribute to flexible electronics, tiny sensors, thinner solar cells and encrypted quantum communication – so we have just started exploring the potential of these tiny particles.

Read more about this year’s prize

Popular science background: They added colour to nanotechnology (pdf)
Scientific background: Quantum dots – seeds of nanoscience (pdf)

Moungi G. Bawendi, born 1961 in Paris, France. PhD 1988 from University of Chicago, IL, USA. Professor at Massachusetts Institute of Technology (MIT), Cambridge, MA, USA.

Louis E. Brus, born 1943 in Cleveland, OH, USA. PhD 1969 from Columbia University, New York, NY, USA. Professor at Columbia University, New York, NY, USA.

Alexei I. Ekimov, born 1945 in the former USSR. PhD 1974 from Ioffe Physical-Technical Institute, Saint Petersburg, Russia. Formerly Chief Scientist at Nanocrystals Technology Inc., New York, NY, USA.


The most recent ‘quantum dot’ (a particular type of nanoparticle) story here is a January 5, 2023 posting, “Can I have a beer with those carbon quantum dots?”

Proving yet again that scientists can have a bumpy trip to a Nobel prize, an October 4, 2023 news item on phys.org describes how one of the winners flunked his first undergraduate chemistry test, Note: Links have been removed,

Talk about bouncing back. MIT professor Moungi Bawendi is a co-winner of this year’s Nobel chemistry prize for helping develop “quantum dots”—nanoparticles that are now found in next generation TV screens and help illuminate tumors within the body.

But as an undergraduate, he flunked his very first chemistry exam, recalling that the experience nearly “destroyed” him.

The 62-year-old of Tunisian and French heritage excelled at science throughout high school, without ever having to break a sweat.

But when he arrived at Harvard University as an undergraduate in the late 1970s, he was in for a rude awakening.

You can find more about the winners and quantum dots in an October 4, 2023 news item on Nanowerk and in Dr. Andrew Maynard’s (Professor of Advanced Technology Transitions, Arizona State University) October 4, 2023 essay for The Conversation (h/t October 4, 2023 news item on phys.org), Note: Links have been removed,

This year’s prize recognizes Moungi Bawendi, Louis Brus and Alexei Ekimov for the discovery and development of quantum dots. For many years, these precisely constructed nanometer-sized particles – just a few hundred thousandths the width of a human hair in diameter – were the darlings of nanotechnology pitches and presentations. As a researcher and adviser on nanotechnology [emphasis mine], I’ve [Dr. Andrew Maynard] even used them myself when talking with developers, policymakers, advocacy groups and others about the promise and perils of the technology.

The origins of nanotechnology predate Bawendi, Brus and Ekimov’s work on quantum dots – the physicist Richard Feynman speculated on what could be possible through nanoscale engineering as early as 1959, and engineers like Erik Drexler were speculating about the possibilities of atomically precise manufacturing in the the 1980s. However, this year’s trio of Nobel laureates were part of the earliest wave of modern nanotechnology where researchers began putting breakthroughs in material science to practical use.

Quantum dots brilliantly fluoresce: They absorb one color of light and reemit it nearly instantaneously as another color. A vial of quantum dots, when illuminated with broad spectrum light, shines with a single vivid color. What makes them special, though, is that their color is determined by how large or small they are. Make them small and you get an intense blue. Make them larger, though still nanoscale, and the color shifts to red.

The wavelength of light a quantum dot emits depends on its size. Maysinger, Ji, Hutter, Cooper, CC BY
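The size-to-color relationship described above can be sketched numerically with the Brus equation, an effective-mass approximation for quantum confinement. This is a minimal illustration only, not the laureates’ actual methods; the CdSe material parameters (bulk band gap, effective masses, dielectric constant) are approximate literature values, and the function name `brus_emission_nm` is my own.

```python
# Sketch of how a quantum dot's emission color depends on its size,
# using the Brus equation (effective-mass approximation).
# CdSe parameters below are approximate literature values, for illustration.
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M0 = 9.1093837015e-31       # electron rest mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
H = 6.62607015e-34          # Planck constant, J*s
C = 2.99792458e8            # speed of light, m/s

def brus_emission_nm(radius_nm, e_gap_bulk_ev=1.74,
                     m_e=0.13, m_h=0.45, eps_r=10.6):
    """Approximate emission wavelength (nm) for a quantum dot of given radius."""
    r = radius_nm * 1e-9
    # Confinement term grows as the dot shrinks (1/r^2): smaller dot, bluer light.
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (1 / (m_e * M0) + 1 / (m_h * M0))
    # Electron-hole Coulomb attraction slightly lowers the gap.
    coulomb = 1.8 * E_CHARGE**2 / (4 * math.pi * eps_r * EPS0 * r)
    e_gap = e_gap_bulk_ev * E_CHARGE + confinement - coulomb
    return H * C / e_gap * 1e9  # convert the gap energy to a wavelength in nm

for radius in (2.0, 2.5, 3.0):
    print(f"radius {radius} nm -> ~{brus_emission_nm(radius):.0f} nm emission")
```

Running it for radii of 2 to 3 nm shows the emission shifting toward longer (redder) wavelengths as the dots grow, which is the effect described above.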

There’s also an October 4, 2023 overview article by Tekla S. Perry and Margo Anderson for the IEEE Spectrum about the magazine’s almost twenty-five years of reporting on quantum dots.

[Image: red, blue and green dots massed in rows, with some dots moving away. Image credit: Brandon Palacio/IEEE Spectrum]

Your Guide to the Newest Nobel Prize: Quantum Dots

What you need to know—and what we’ve reported—about this year’s Chemistry award

It’s not a long article and it focuses heavily on the IEEE’s (Institute of Electrical and Electronics Engineers) reporting on the road quantum dots have taken to become applications and to be commercialized.

Congratulations to Moungi Bawendi, Louis Brus, and Alexei Ekimov!

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that none of this is new. First, the ‘non-human authors’; then, the panic(s). What follows the ‘non-human authors’ section is essentially a survey of the situation/panic.

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s predecessor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society and founder of the ASU Future of Being Human initiative and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.
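The core loop Ornes describes, accepting a string of text and predicting “what comes next, over and over, based purely on statistics,” can be illustrated with a toy bigram model. This is a deliberately tiny sketch, not how ChatGPT works internally; the corpus, seed word, and function names here are invented for illustration.

```python
# Toy illustration of the autoregressive "predict what comes next" loop.
# Real LLMs learn billions of parameters; this sketch just counts which
# word follows which in a tiny made-up corpus.
from collections import Counter, defaultdict
import random

corpus = "the dot is small the dot is bright the dot glows".split()

# Build a bigram "language model": counts of each word's successors.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(seed, n_tokens, rng):
    """Repeatedly sample the next token from the bigram statistics."""
    out = [seed]
    for _ in range(n_tokens):
        options = follows.get(out[-1])
        if not options:  # no known successor: stop generating
            break
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the", 6, random.Random(0)))
```

A real large language model replaces the bigram counts with a neural network over an enormous parameter space, but the sampling loop, predicting one token at a time and feeding it back in, is the same basic idea, which is what makes the emergent behaviors above so surprising.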

If you have time, I recommend reading Ornes’s March 6, 2023 article.

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was  incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles; the first, from May 5, 2023, is “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts,” Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting which highlights Martin Rees’ efforts to found the Centre for Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture “Digital versus biological intelligence: Reasons for concern about AI” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist, Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

If you live in Vancouver (Canada) and are attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 sponsored content piece by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499, depending on when you make your purchase. From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. This was far briefer than the previous March 2023 warning, from the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.

New podcast—Mission: Interplanetary and Event Rap: a one-stop custom rap shop Kickstarter

I received two email notices recently, one from Dr. Andrew Maynard (Arizona State University; ASU) and one from Baba Brinkman (Canadian rapper of science and other topics now based in New York).

Mission: Interplanetary

I found a “Mission: Interplanetary— a podcast on the future of humans as a spacefaring species!” webpage (Link: https://collegeofglobalfutures.asu.edu/blog/2021/03/23/mission-interplanetary-redefining-how-we-talk-about-humans-in-space/) on the Arizona State University College of Global Futures website,

Back in January 2019 I got an email from my good friend and colleague Lance Gharavi with the title “Podcast brainstorming.” Two years on, we’ve just launched the Mission: Interplanetary podcast–and it’s amazing!

It’s been a long journey — especially with a global pandemic thrown in along the way — but on March 23 [2021], the Mission: Interplanetary podcast with Slate and ASU finally launched.

After two years of planning, many discussions, a bunch of dry runs, and lots (and by that I mean lots) of Zoom meetings, we are live!

As the team behind the podcast talked about and developed the ideas underpinning the Mission: Interplanetary, we were interested in exploring new ways of thinking and talking about the future of humanity as a space-faring species as part of Arizona State University’s Interplanetary Initiative. We also wanted to go big with these conversations — really big!

And that is exactly what we’ve done in this partnership with Slate.

The guests we’re hosting, the conversations we have lined up, the issues we grapple with, are all literally out of this world. But don’t just take my word for it — listen to the first episode above with the incredible Lindy Elkins-Tanton talking about NASA’s mission to the asteroid 16 Psyche.

And this is just a taste of what’s to come over the next few weeks as we talk to an amazing lineup of guests.

So if you’re looking for a space podcast with a difference, and one that grapples with big questions around our space-based future, please do subscribe on your favorite podcast platform. And join me and the fabulous former NASA astronaut Cady Coleman as we explore the future of humanity in space.

See you there!

Slate’s webpage (Mission: Interplanetary; Link: https://slate.com/podcasts/mission-interplanetary) offers more details about the co-hosts and the programmes along with embedded podcasts,

Cady Coleman is a former NASA astronaut and Air Force colonel. She flew aboard the International Space Station on a six-month expedition as the lead science and robotics officer. A frequent speaker on space and STEM topics, Coleman is also a musician who’s played from space with the Chieftains and Ian Anderson of Jethro Tull.

Andrew Maynard is a scientist, author, and expert in risk innovation. His books include Films From the Future: The Technology and Morality of Sci-Fi Movies and Future Rising

Latest Episodes

April 27, 2021

Murder in Space

What laws govern us when we leave Earth?

Happy listening. And, I apologize for the awkward links.

Event Rap Kickstarter

Baba Brinkman’s April 27, 2021 email notice has this to say about his latest venture,

Join the Movement, Get Rewards

My new Kickstarter campaign for Event Rap is live as of right now! Anyone who backs the project is helping to launch an exciting new company, actually a new kind of company, the first creator marketplace for rappers. Please take a few minutes to read the campaign description, I put a lot of love into it.

The campaign goal is to raise $26K in 30 days, an average of $2K per artist participating. If we succeed, this platform becomes a new income stream for independent artists during the pandemic and beyond. That’s the vision, and I’m asking for your help to share it and support it.

But instead of why it matters, let’s talk about what you get if you support the campaign!

$10-$50 gets you an advance copy of my new science rap album, Bright Future. I’m extremely proud of this record, which you can preview here, and Bright Future is also a prototype for Event Rap, since all ten of the songs were commissioned by people like you.

$250 – $500 gets you a Custom Rap Video written and produced by one of our artists, and you have twelve artists and infinite topics to choose from. This is an insanely low starting price for an original rap video from a seasoned professional, and it applies only during the Kickstarter. What can the video be about? Anything at all. You choose!

In case it’s helpful, here’s a guide I wrote entitled “How to Brief a Rapper

$750 – $1,500 gets you a live rap performance at your virtual event. This is also an amazingly low price, especially since you can choose to have the artist freestyle interactively with your audience, write and perform a custom rap live, or best of all compose a “Rap Up” summary of the event, written during the event, that the artist will perform as the grand finale.

That’s about as fresh and fun as rap gets.

$3,000 – $5,000 the highest tiers bring the highest quality, a brand new custom-written, recorded, mixed and mastered studio track, or studio track plus full rap music video, with an exclusive beat and lyrics that amplify your message in the impactful, entertaining way that rap does best.

I know this higher price range isn’t for everyone, but check out some of the music videos our artists have made, and maybe you can think of a friend to send this to who has a budget and a worthy cause.

Okay, that’s it!

Those prices are in US dollars.

I gather at least one person has given enough money to request a custom rap on cycling culture in the Netherlands.

The campaign runs for another 26 days. It has amassed over $8,400 CAD towards a goal of $32,008 CAD. (The site doesn’t show me the goal in USD although the pledges/rewards are listed in that currency.)

Health Canada advisory: Face masks that contain graphene may pose health risks

Since COVID-19, we’ve been advised to wear face masks. It seems some of them may not be as safe as we assumed. First, the Health Canada advisory that was issued today, April 2, 2021 and then excerpts from an in-depth posting by Dr. Andrew Maynard (associate dean in the Arizona State University College of Global Futures) about the advisory and the use of graphene in masks.

From the Health Canada Recalls & alerts: Face masks that contain graphene may pose health risks webpage,

Summary

  • Product: Face masks labelled to contain graphene or biomass graphene.
  • Issue: There is a potential that wearers could inhale graphene particles from some masks, which may pose health risks.
  • What to do: Do not use these face masks. Report any health product adverse events or complaints to Health Canada.

Issue

Health Canada is advising Canadians not to use face masks that contain graphene because there is a potential that they could inhale graphene particles, which may pose health risks.

Graphene is a novel nanomaterial (materials made of tiny particles) reported to have antiviral and antibacterial properties. Health Canada conducted a preliminary scientific assessment after being made aware that masks containing graphene have been sold with COVID-19 claims and used by adults and children in schools and daycares. Health Canada believes they may also have been distributed for use in health care settings.

Health Canada’s preliminary assessment of available research identified that inhaled graphene particles had some potential to cause early lung toxicity in animals. However, the potential for people to inhale graphene particles from face masks and the related health risks are not yet known, and may vary based on mask design. The health risk to people of any age is not clear. Variables, such as the amount and duration of exposure, and the type and characteristics of the graphene material used, all affect the potential to inhale particles and the associated health risks. Health Canada has requested data from mask manufacturers to assess the potential health risks related to their masks that contain graphene.

Until the Department completes a thorough scientific assessment and has established the safety and effectiveness of graphene-containing face masks, it is taking the precautionary approach of removing them from the market while continuing to gather and assess information. Health Canada has directed all known distributors, importers and manufacturers to stop selling and to recall the affected products. Additionally, Health Canada has written to provinces and territories advising them to stop distribution and use of masks containing graphene. The Department will continue to take appropriate action to stop the import and sale of graphene face masks.

Products affected

Face masks labelled as containing graphene or biomass graphene.

What you should do

  • Do not use face masks labelled to contain graphene or biomass graphene.
  • Consult your health care provider if you have used graphene face masks and have health concerns, such as new or unexplained shortness of breath, discomfort or difficulty breathing.
  • Report any health product adverse events or complaints regarding graphene face masks to Health Canada.

Dr. Andrew Maynard’s Edge of Innovation series features a March 26, 2021 posting about the use of graphene in masks (Note: Links have been removed),

Face masks should protect you, not place you in greater danger. However, last Friday Radio Canada revealed that residents of Quebec and Ottawa were being advised not to use specific types of graphene-containing masks as they could potentially be harmful.

The offending material in the masks is graphene — a form of carbon that consists of nanoscopically thin flakes of hexagonally-arranged carbon atoms. It’s a material that has a number of potentially beneficial properties, including the ability to kill bacteria and viruses when they’re exposed to it.

Yet despite its many potential uses, the scientific jury is still out when it comes to how safe the material is.

As with all materials, the potential health risks associated with graphene depend on whether it can get into the body, where it goes if it can, what it does when it gets there, and how much of it is needed to cause enough damage to be of concern.

Unfortunately, even though these are pretty basic questions, there aren’t many answers forthcoming when it comes to the substance’s use in face masks.

Early concerns around graphene were sparked by previous research on another form of carbon — carbon nanotubes. It turns out that some forms of these fiber-like materials can cause serious harm if inhaled. And following on from research here, a natural next-question to ask is whether carbon nanotubes’ close cousin graphene comes with similar concerns.

Because graphene lacks many of the physical and chemical aspects of carbon nanotubes that make them harmful (such as being long, thin, and hard for the body to get rid of), the indications are that the material is safer than its nanotube cousins. But safer doesn’t mean safe. And current research indicates that this is not a material that should be used where it could potentially be inhaled, without a good amount of safety testing first.

[downloaded from https://medium.com/edge-of-innovation/how-safe-are-graphene-based-face-masks-b88740547e8c] Original source: Wikimedia

When it comes to inhaling graphene, the current state of the science indicates that if the material can get into the lower parts of the lungs (the respirable or alveolar region) it can lead to an inflammatory response at high enough concentrations.

There is some evidence that adverse responses are relatively short-lived, and that graphene particles can be broken down and disposed of by the lungs’ defenses.

This is good news as it means that there are less likely to be long-term health impacts from inhaling the material.

There’s also evidence that graphene, unlike some forms of thin, straight carbon nanotubes, does not migrate to the outside layers of the lungs where it could potentially do a lot more damage.

Again, this is encouraging as it suggests that graphene is unlikely to lead to serious long-term health impacts like mesothelioma.

However, research also shows that this is not a benign material. Despite being made of carbon — and it’s tempting to think of carbon as being safe, just because we’re familiar with it — there is some evidence that the jagged edges of some graphene particles can harm cells, leading to local damage as the body responds to any damage the material causes.

There are also concerns, although they are less well explored in the literature, that some forms of graphene may be carriers for nanometer-sized metal particles that can be quite destructive in the lungs. This is certainly the case with some carbon nanotubes, as the metallic catalyst particles used to manufacture them become embedded in the material, and contribute to its toxicity.

The long and short of this is that, while there are still plenty of gaps in our knowledge around how much graphene it’s safe to inhale, inhaling small graphene particles probably isn’t a great idea unless there’s been comprehensive testing to show otherwise.

And this brings us to graphene-containing face masks.

….

Here, it’s important to stress that we don’t yet know if graphene particles are being released and, if they are, whether they are being released in sufficient quantities to cause health effects. And there are indications that, if there are health risks, these may be relatively short-term — simply because graphene particles may be effectively degraded by the lungs’ defenses.

At the same time, it seems highly irresponsible to include a material with unknown inhalation risks in a product that is intimately associated with inhalation. Especially when there are a growing number of face masks available that claim to use graphene.

… There are millions of graphene face masks and respirators being sold and used around the world. And while the unfolding news focuses on Quebec and one particular type of face mask, this is casting uncertainty over the safety of any graphene-containing masks that are being sold.

And this uncertainty will persist until manufacturers and regulators provide data indicating that they have tested the products for the release and subsequent inhalation of fine graphene particles, and shown the risks to be negligible.

I strongly recommend reading Dr. Maynard’s March 26, 2021 posting in its entirety; he has updated it twice since first publishing the story.

In short, you may want to hold off on buying a mask with graphene until there’s more data about safety.

A look back at 2020 on this blog and a welcome to 2021

Things past

A year later, I still don’t know what came over me, but I got the idea that I could write a 10-year (2010 – 2019) review of science culture in Canada during the last few days of 2019. Somehow, two and a half months later, I managed to publish my 25,000+ word multi-part series.

Plus,

Sadly, 2020 started on a somber note with this January 13, 2020 posting, In memory of those in the science, engineering, or technology communities returning to or coming to live or study in Canada on Flight PS752.

COVID-19 was mentioned and featured here a number of times throughout the year. I’m highlighting two of those postings. The first is a June 24, 2020 posting titled, Tiny sponges lure coronavirus away from lung cells. It’s a therapeutic approach that is not a vaccine but a way of neutralizing the virus. The idea is that the nanosponge is coated in the material that the virus seeks in a human cell. Once the virus locks onto the sponge, it is unable to seek out cells. If I remember rightly, the sponges along with the virus are disposed of by the body’s usual processes.

The second COVID-19 posting I’m highlighting is my first ever accepted editorial opinion by the Canadian Science Policy Centre (CSPC). I republished the piece here in a May 15, 2020 posting, which included all of my references. However, the magazine version is more attractively displayed in the CSPC Featured Editorial Series Volume 1, Issue 2, May 2020 PDF on pp. 31-2.

Artist Joseph Nechvatal reached out to me earlier this year regarding his viral symphOny (2006-2008), a 1 hour 40 minute collaborative electronic noise music symphony. It was featured in an April 7, 2020 posting which seemed strangely à propos during a pandemic even though the work was focused on viral artificial life. You can access it for free https://archive.org/details/ViralSymphony but the Internet Archive where this is stored is requesting donations.

Also on a vaguely related COVID-19 note, there’s my December 7, 2020 posting titled, Digital aromas? And a potpourri of ‘scents and sensibility’. As any regular readers may know, I have a longstanding interest in scent and fragrances. The COVID-19 part of the posting (it’s not about losing your sense of smell) is in the subsection titled, Smelling like an old book. Apparently some folks are missing the smell of bookstores and Powell’s books have responded to that need with a new fragrance.

For anyone who may have missed it, I wrote an update of the CRISPR twin affair in my July 28, 2020 posting, titled, July 2020 update on Dr. He Jiankui (the CRISPR twins) situation.

Finishing off with 2020, I wrote a commentary (mostly focused on the Canada chapter) about a book titled, Communicating Science: A Global Perspective in my December 10, 2020 posting. The book offers science communication perspectives from 39 different countries.

Things future

I have no doubt there will be delights ahead but, as they are in the realm of discovery, they are at this point unknown.

My future plans include a posting about trust and governance. This has come about since writing my Dec. 29, 2020 posting titled, “Governments need to tell us when and how they’re using AI (artificial intelligence) algorithms to make decisions” and stumbling across a reference to a December 15, 2020 article by Dr. Andrew Maynard titled, Why Trustworthiness Matters in Building Global Futures. Maynard’s focus was on a newly published report titled, Trust & Tech Governance.

I will also be considering the problematic aspects of science communication and my own shortcomings. On the heels of reading more than usually forthright discussions of racism in Canada across multiple media platforms, I was horrified to discover I had featured, without any caveats, work by a man who was deeply problematic with regard to his beliefs about race. He was a eugenicist, as well as a zoologist, naturalist, philosopher, physician, professor, marine biologist, and artist who coined many terms in biology, including ecology, phylum, phylogeny, and Protista; see his Wikipedia entry.

A Dec. 23, 2020 news release on EurekAlert (Scientists at Tel Aviv University develop new gene therapy for deafness) and a December 2020 article by Sarah Zhang for The Atlantic about prenatal testing and who gets born have me wanting to further explore the field of how genetic testing and therapies will affect our concepts of ‘normality’. Fingers crossed I’ll be able to get Dr. Gregor Wolbring to answer a few questions for publication here. (Gregor is a tenured associate professor [in Alberta, Canada] at the University of Calgary’s Cumming School of Medicine and a scholar in the field of ‘ableism’. He is deeply knowledgeable about notions of ability vs disability.)

As 2021 looms, I’m hopeful that I’ll be featuring more art/sci (or sciart) postings, which is my segue to a more hopeful note about what 2021 will bring us,

The Knobbed Russet has a rough exterior, with creamy insides. Photo courtesy of William Mullan.

It’s an apple! This is one of the many images embedded in Annie Ewbank’s January 6, 2020 article about rare and beautiful apples for Atlas Obscura (featured on getpocket.com),

In early 2020, inside a bright Brooklyn gallery that is plastered in photographs of apples, William Mullan is being besieged with questions.

A writer is researching apples for his novel set in post-World War II New York. An employee of a fruit-delivery company, who covetously eyes the round table on which Mullan has artfully arranged apples, asks where to buy his artwork.

But these aren’t your Granny Smith’s apples. A handful of Knobbed Russets slumping on the table resemble rotting masses. Despite their brown, wrinkly folds, they’re ripe, with clean white interiors. Another, the small Roberts Crab, when sliced by Mullan through the middle to show its vermillion flesh, looks less like an apple than a Bing cherry. The entire lineup consists of apples assembled by Mullan, who, by publishing his fruit photographs in a book and on Instagram, is putting the glorious diversity of apples in the limelight.

Do go and enjoy! Happy 2021!

Sunscreens 2020 and the Environmental Working Group (EWG)

There must be some sweet satisfaction or perhaps it’s better described as relief for the Environmental Working Group (EWG) now that sunscreens with metallic (zinc oxide and/or titanium dioxide) nanoparticles are gaining wide acceptance. (More about the history and politics of the EWG and metallic nanoparticles at the end of this posting.)

This acceptance has happened alongside growing concerns about oxybenzone, a sunscreen ingredient that EWG has long warned against. Oxybenzone has been banned from use in Hawaii due to environmental concerns (see my July 6, 2018 posting; scroll down about 40% of the way for specifics about Hawaii). Also, it is one of the common sunscreen ingredients for which the US Food and Drug Administration (FDA) is completing a safety review.

Today, zinc oxide and titanium dioxide metallic nanoparticles are being called minerals, as in, “mineral-based” sunscreens. They are categorized as physical sunscreens as opposed to chemical sunscreens.

I believe the most recent sunscreen posting here was my 2018 update (July 6, 2018 posting) so the topic is overdue for some attention here. From a May 21, 2020 EWG news release (received via email),

As states reopen and Americans leave their homes to venture outside, it’s important for them to remember to protect their skin from the sun’s harmful rays. Today the Environmental Working Group released its 14th annual Guide to Sunscreens.  

This year researchers rated the safety and efficacy of more than 1,300 SPF products – including sunscreens, moisturizers and lip balms – and found that only 25 percent offer adequate protection and do not contain worrisome ingredients such as oxybenzone, a potential hormone-disrupting chemical that is readily absorbed by the body.

Despite a delay in finalizing rules that would make all sunscreens on U.S. store shelves safer, the Food and Drug Administration, the agency that governs sunscreen safety, is completing tests that highlight concerns with common sunscreen ingredients. Last year, the agency published two studies showing that, with just a single application, six commonly used chemical active ingredients, including oxybenzone, are readily absorbed through the skin and could be detected in our bodies at levels that could cause harm.

“It’s quite concerning,” said Nneka Leiba, EWG’s vice president of Healthy Living science. “Those studies don’t prove whether the sunscreens are unsafe, but they do highlight problems with how these products are regulated.”

“EWG has been advocating for the FDA to review these chemical ingredients for 14 years,” Leiba said. “We slather these ingredients on our skin, but these chemicals haven’t been adequately tested. This is just one example of the backward nature of product regulation in the U.S.”

Oxybenzone remains a commonly used active ingredient, found in more than 40 percent of the non-mineral sunscreens in this year’s guide. Oxybenzone is allergenic and a potential endocrine disruptor, and has been detected in human breast milk, amniotic fluid, urine and blood.

According to EWG’s assessment, fewer than half of the products in this year’s guide contain active ingredients that the FDA has proposed are safe and effective.

“Based on the best current science and toxicology data, we continue to recommend sunscreens with the mineral active ingredients zinc oxide and titanium dioxide, because they are the only two ingredients the FDA recognized as safe or effective in their proposed draft rules,” said Carla Burns, an EWG research and database analyst who manages the updates to the sunscreen guide.

Most people select sunscreen products based on their SPF, or sunburn protection factor, and mistakenly assume that bigger numbers offer better protection. According to the FDA, higher SPF values have not been shown to provide additional clinical benefit and may give users a false sense of protection. This may lead to overexposure to UVA rays that increase the risk of long-term skin damage and cancer. The FDA has proposed limiting SPF claims to 60+.
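The diminishing returns behind the FDA’s proposed 60+ cap are easier to see with the commonly cited rule of thumb that an SPF-n product transmits roughly 1/n of the sunburn-causing UVB reaching the skin. A minimal sketch (the formula is that standard approximation, not something from the EWG release, and the function name is my own):

```python
# Rule of thumb: an SPF-n sunscreen transmits about 1/n of the
# sunburn-causing UVB, so the fraction blocked is 1 - 1/n.
def uvb_blocked(spf: float) -> float:
    """Approximate fraction of UVB blocked at a given SPF."""
    return 1 - 1 / spf

for spf in (15, 30, 50, 100):
    print(f"SPF {spf:>3}: ~{uvb_blocked(spf):.1%} blocked")
# SPF  15: ~93.3% blocked
# SPF  30: ~96.7% blocked
# SPF  50: ~98.0% blocked
# SPF 100: ~99.0% blocked
```

Going from SPF 50 to SPF 100 only moves the blocked fraction from about 98% to about 99%, which is why the big numbers can create a false sense of protection.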

EWG continues to hone our recommendations by strengthening the criteria for assessing sunscreens, which are based on the latest findings in the scientific literature and commissioned tests of sunscreen product efficacy. This year EWG made changes to our methodology in order to strengthen our requirement that products provide the highest level of UVA protection.

“Our understanding of the dangers associated with UVA exposure is increasing, and they are of great concern,” said Burns. “Sunburn during early life, especially childhood, is very dangerous and a risk factor for all skin cancers, but especially melanoma. Babies and young children are especially vulnerable to sun damage. Just a few blistering sunburns early in life can double a person’s risk of developing melanoma later in life.”

EWG researchers found 180 sunscreens that meet our criteria for safety and efficacy and would likely meet the proposed FDA standards. Even the biggest brands now provide mineral options for consumers.  

Even for Americans continuing to follow stay-at-home orders, wearing an SPF product may still be important. If you’re sitting by a window, UVA and UVB rays can penetrate the glass.  

It is important to remember that sunscreen is only one part of a sun safety routine. People should also protect their skin by covering up with clothing, hats and sunglasses. And sunscreen must be reapplied at least every two hours to stay effective.

EWG’s Guide to Sunscreens helps consumers find products that get high ratings for providing adequate broad-spectrum protection and that are made with ingredients that pose fewer health concerns.

The new guide also includes lists of:

Here are more quick tips for choosing better sunscreens:

  • Check your products in EWG’s sunscreen database and avoid those with harmful ingredients.
  • Avoid products with oxybenzone. This chemical penetrates the skin, gets into the bloodstream and can affect normal hormone activities.
  • Steer clear of products with SPF higher than 50+. High SPF values do not necessarily provide increased UVA protection and may fool you into thinking you are safe from sun damage.
  • Avoid sprays. These popular products pose inhalation concerns, and they may not provide a thick and uniform coating on the skin.
  • Stay away from retinyl palmitate. Government studies link the use of retinyl palmitate, a form of vitamin A, to the formation of skin tumors and lesions when it is applied to sun-exposed skin.
  • Avoid intense sun exposure during the peak hours of 10 a.m. to 4 p.m.

Shoppers on the go can download EWG’s Healthy Living app to get ratings and safety information on sunscreens and other personal care products. Also be sure to check out EWG’s sunscreen label decoder.

One caveat: these EWG-recommended products might not be found in Canadian stores, or your favourite product may not have been reviewed for inclusion in their database, whether as a product to be sought out or avoided. For example, I use a sunscreen that isn’t listed in the database, although at least a few other of the company’s sunscreen products are. On the plus side, my sunscreen doesn’t include oxybenzone or retinyl palmitate as ingredients.

To sum up the situation with sunscreens containing metallic nanoparticles (minerals), they are considered to be relatively safe but should new research emerge that designation could change. In effect, all we can do is our best with the information at hand.

History and politics of metallic nanoparticles in sunscreens

In 2009 it was a bit of a shock when the EWG released a report recommending the use of sunscreens with metallic nanoparticles in the list of ingredients. From my July 9, 2009 posting,

The EWG (Environmental Working Group) is, according to Maynard [as of 2020: Dr. Andrew Maynard is a scientist and author, Associate Director of Faculty in the ASU {Arizona State University} School for the Future of Innovation in Society, also the director of the ASU Risk Innovation Lab, and leader of the Risk Innovation Nexus], not usually friendly to industry and they had this to say about their own predisposition prior to reviewing the data (from EWG),

When we began our sunscreen investigation at the Environmental Working Group, our researchers thought we would ultimately recommend against micronized and nano-sized zinc oxide and titanium dioxide sunscreens. After all, no one has taken a more expansive and critical look than EWG at the use of nanoparticles in cosmetics and sunscreens, including the lack of definitive safety data and consumer information on these common new ingredients, and few substances more dramatically highlight gaps in our system of public health protections than the raw materials used in the burgeoning field of nanotechnology. But many months and nearly 400 peer-reviewed studies later, we find ourselves drawing a different conclusion, and recommending some sunscreens that may contain nano-sized ingredients.

My understanding is that after this report, the EWG was somewhat ostracized by collegial organizations such as Friends of the Earth (FoE) and the ETC Group, both of which issued reports, published after the EWG report, that were highly critical of ‘nano sunscreens’.

The ETC Group did not continue its anti-nanosunscreen campaign for long (I saw only one report) but FoE (in particular the Australian arm of the organization) more than made up for that withdrawal, and to sad effect. The title of my February 9, 2012 post tells the story: Unintended consequences: Australians not using sunscreens to avoid nanoparticles?

An Australian government survey found that 13% of Australians were not using any sunscreen due to fears about nanoparticles. In a country with the highest incidence of skin cancer in the world, one which had spent untold millions over decades getting people to cover up in the sun, it was devastating news.

FoE immediately withdrew all their anti-nanosunscreen materials in Australia from circulation while firing broadsides at the government. The organization’s focus on sunscreens with metallic nanoparticles has diminished since 2012.

Research

I have difficulty trusting materials from FoE and you can see why here in this July 26, 2011 posting (Misunderstanding the data or a failure to research? Georgia Straight article about nanoparticles). In it, I analyze Alex Roslin’s profoundly problematic article about metallic nanoparticles and other engineered nanoparticles. All of Roslin’s article was based on research and materials produced by FoE which misrepresented some of the research. Roslin would have realized that if he had bothered to do any research for himself.

EWG impressed me mightily with their refusal to set aside or dismiss the research disputing their initial assumption that metallic nanoparticles in sunscreens were hazardous. (BTW, there is one instance where metallic nanoparticles in sunscreens are of concern. My October 13, 2013 posting about anatase and rutile forms of titanium dioxide at the nanoscale features research on that issue.)

EWG’s Wikipedia entry

Whoever and however many are maintaining this page, they don’t like EWG at all,

The accuracy of EWG reports and statements have been criticized, as has its funding by the organic food industry[2][3][4][5] Its warnings have been labeled “alarmist”, “scaremongering” and “misleading”.[6][7][8] Despite the questionable status of its work, EWG has been influential.[9]

This is the third paragraph in the Introduction. At its very best, the information is neutral, otherwise, it’s much like that third paragraph.

Even John D. Rockefeller’s entry is more flattering and he was known as the ‘most hated man in America’ as this show description on the Public Broadcasting Service (PBS) website makes clear,

American Experience

The Rockefellers Chapter One

Clip: Season 13 Episode 1 | 9m 37s

John D. Rockefeller was the world’s first billionaire and the most hated man in America. Watch the epic story of the man who monopolized oil.

Fun in the sun

Have fun in the sun this summer. There’s EWG’s sunscreen database, the tips listed in the news release, and EWG also has a webpage where they describe their methodology for how they assess sunscreens. It gets a little technical (for me anyway) but it should answer any further safety questions you might have after reading this post.

It may require a bit of ingenuity given the concerns over COVID-19 but I’m constantly amazed at the inventiveness with which so many people have met this pandemic. (This June 15, 2020 Canadian Broadcasting Corporation article by Sheena Goodyear features a family that created a machine that won the 2020 Rube Goldberg Bar of Soap Video challenge. The article includes an embedded video of the winning machine in action.)

Reading (2 of 2): Is zinc-infused underwear healthier for women?

The first part of this Reading ‘series’, Reading (1 of 2): an artificial intelligence story in British Columbia (Canada), was mostly about how one type of story, in this case one based on a survey, is presented and placed in one or more media outlets. The desired outcome is more funding by government and more investors (they tucked in an ad for an upcoming artificial intelligence conference in British Columbia).

This story about zinc-infused underwear for women also uses science to prove its case and it, too, is about raising money, in this case via a Kickstarter campaign.

If Huha’s (that’s the company name) claims for ‘zinc-infused mineral undies’ are to be believed, the answer is an unequivocal yes. The reality as per the current research on the topic is not quite as conclusive.

The semiotics (symbolism)

Huha features fruit alongside the pictures of their underwear. You’ll see an orange, papaya, and melon in the Kickstarter campaign images and on the company website. It seems to be one of those attempts at subliminal communication. Fruit is good for you; therefore, our underwear is good for you. In fact, our underwear (just like the fruit) has health benefits.

For a deeper dive into the world of semiotics, there’s the ‘be fruitful and multiply’ stricture which is found in more than one religious or cultural orientation and is hard to dismiss once considered.

There is no reason to add fruit to the images other than to suggest benefits from nature and fertility (or fruitfulness). They’re not selling fruit, and the fruits shown are not particularly high in zinc. If all you’re looking for is colour, why not vegetables or puppies?

The claims

I don’t have time to review all of the claims but I’ll highlight a few. My biggest problem with the claims is that there are no citations or links to studies, i.e., the research. So, something like this becomes hard to assess,

Most women’s underwear are made with chemical-based, synthetic fibers that lead to yeast and UTI [urinary tract infection] infections, odor, and discomfort. They’ve also been proven to disrupt human hormones, have been linked to cancer, pollute the planet aggressively, and stay in landfills far too long.

There’s more than one path to a UTI and/or odor and/or discomfort but I can see where fabrics that don’t breathe can exacerbate or cause problems of that nature. I have a little more difficulty with the list that follows. I’d like to see the research on underpants disrupting human hormones. Is this strictly a problem for women or could men also be affected? (If you should know, please leave a comment.)

As for ‘linked to cancer’, I’m coming to the conclusion that everything is linked to cancer. Offhand, I’ve been told peanuts, charcoal broiled items (I think it’s the char), and my negative thoughts are all linked to cancer.

One of the last claims in the excerpted section, ‘pollute the planet aggressively’, raises this question: when did underpants become ‘aggressive’?

The final claim seems unexceptional. Our detritus is staying too long in our landfills. Of course, the next question is: how much faster do the Huha underpants degrade in a landfill? That question is not addressed in the Kickstarter campaign material.

Talking to someone with more expertise

I contacted Dr. Andrew Maynard, Associate Director at the Arizona State University (ASU) School for the Future of Innovation in Society. He has a PhD in physics and longstanding experience in researching and evaluating emerging technologies (for many years he specialized in nanoparticle analysis and aerosol exposure in occupational settings).

Professor Maynard is a widely recognized expert and public commentator on emerging technologies and their safe and responsible development and use, and has testified before [US] congressional committees on a number of occasions. 

None of this makes him infallible but I trust that he always works with integrity and bases his opinions on the best information at hand. I’ve always found him to be a reliable source of information.

Here’s what he had to say (from an October 25, 2019 email),

I suspect that their claims are pushing things too far – from what I can tell, professionals tend to advise against synthetic underwear because of the potential build up of moisture and bacteria and the lack of breathability, and tend to suggest natural materials – which indicating that natural fibers and good practices should be all most people need. I haven’t seen any evidence for an underwear crisis here, and one concern is that the company is manufacturing a problem which they then claim to solve. That said, I can’t see anything totally egregious in what they are doing. And the zinc presence makes sense in that it prevents bacterial growth/activity within the fabric, thus reducing the chances of odor and infection.

Pharmaceutical grade zinc and research into underwear

I was a little curious about ‘pharmaceutical grade’ zinc as my online searches for a description were unsuccessful. Andrew explained that the term likely means ‘high purity’ zinc suitable for use in medications rather than the zinc found in roofing panels.

After the reference to ‘pharmaceutical grade’ zinc there’s a reference to ‘smartcel sensitive Zinc’. Here’s more from the smartcel sensitive webpage,

smartcel™ sensitive is skin friendly thanks to zinc oxide’s soothing and anti-inflammatory capabilities. This is especially useful for people with sensitive skin or skin conditions such as eczema or neurodermitis. Since zinc is a component of skin building enzymes, it operates directly on the skin. An active exchange between the fiber and the skin occurs when the garment is worn.

Zinc oxide also acts as a shield against harmful UVA and UVB radiation [it’s used in sunscreens], which can damage our skin cells. Depending on the percentage of smartcel™ sensitive used in any garment, it can provide up to 50 SPF.

Further to this, zinc oxide possesses strong antibacterial properties, especially against odour causing bacteria, which helps to make garments stay fresh longer. *

I couldn’t see how zinc helps the pH balance in anyone’s vagina, as claimed in the Kickstarter campaign (smartcel, on its ‘sensitive’ webpage, doesn’t make that claim), but I found an answer in an April 4, 2017 Q&A (question and answer) interview by Jocelyn Cavallo for Medium,

What women need to know about their vaginal pH

Q & A with Dr. Joanna Ellington

A woman’s vagina is a pretty amazing body part. Not only can it be a source of pleasure but it also can help create and bring new life into the world. On top of all that, it has the extraordinary ability to keep itself clean by secreting natural fluids and maintaining a healthy pH to encourage the growth of good bacteria and discourage harmful bacteria from moving in. Despite being so important, many women are never taught the vital role that pH plays in their vaginal health or how to keep it in balance.

We recently interviewed renowned Reproductive Physiologist and inventor of IsoFresh Balancing Vaginal Gel, Dr. Joanna Ellington, to give us the low down on what every woman needs to know about their vaginal pH and how to maintain a healthy level.

What is pH?

Dr. Ellington: pH is a scale of acidity and alkalinity. The measurements range from 0 to 14: a pH lower than 7 is acidic and a pH higher than 7 is considered alkaline.

What is the “perfect” pH level for a woman’s vagina?

Dr. E.: For most women of a reproductive age vaginal pH should be 4.5 or less. For post-menopausal women this can go up to about 5. The vagina will naturally be at a high pH right after sex, during your period, after you have a baby or during ovulation (your fertile time).
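Dr. Ellington’s thresholds can be summed up in a small sketch (the numbers are the ones she gives above; the function name and structure are my own, for illustration only):

```python
def vaginal_ph_status(ph: float, post_menopausal: bool = False) -> str:
    """Classify a vaginal pH reading against the ranges Dr. Ellington cites:
    4.5 or less for women of reproductive age, up to about 5 post-menopause."""
    if not 0 <= ph <= 14:
        raise ValueError("pH must be on the 0-14 scale")
    limit = 5.0 if post_menopausal else 4.5
    if ph <= limit:
        return "within the cited healthy range"
    return "above the cited healthy range"

print(vaginal_ph_status(4.2))                        # within the cited healthy range
print(vaginal_ph_status(4.8))                        # above the cited healthy range
print(vaginal_ph_status(4.8, post_menopausal=True))  # within the cited healthy range
```

Note her caveat that pH is naturally higher right after sex, during menstruation, after childbirth, or during ovulation, so a single reading isn’t the whole story.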

Are there diet and environmental factors that affect a women’s vaginal pH level?

Dr. E.: Yes, iron, zinc and manganese have been found to be critical for lactobacillus (healthy bacteria) to function. Many women don’t eat well and should supplement these, especially if they are vegetarian. Additionally, many vegetarians have low estrogen because they do not eat the animal fats that help make our sex steroids. Without estrogen, vaginal pH and bacterial imbalance can occur. It is important that women on these diets ensure good fat intake from other sources, and have estrogen and testosterone and iron levels checked each year.

Do clothing and underwear affect vaginal pH?

Dr. E.: Yes, tight clothing and thong underwear [emphasis mine] have been shown in studies to decrease populations of healthy vaginal bacteria and cause pH changes in the vagina. Even if you wear these sometimes, it is important for your vaginal ecosystem that loose clothing or skirts be worn some too.

Yes, Dr. Ellington has the IsoFresh Balancing Vaginal Gel, and whether that’s a good product should be researched, but all of the information in the excerpt accords with what I’ve heard over the years and fits in nicely with what Andrew said: zinc in underwear could be useful for its antimicrobial properties. Also, note the reference to ‘thong underwear’ as a possible source of difficulty and note that Huha is offering thong and very high cut underwear.

Of course, your underwear may already have zinc in it as this research suggests (thank you, Andrew, for the reference),

Exposure of women to trace elements through the skin by direct contact with underwear clothing by Thao Nguyen & Mahmoud A. Saleh. Journal of Environmental Science and Health, Part A Toxic/Hazardous Substances and Environmental Engineering Volume 52, 2017 – Issue 1 Pages 1-6 DOI: https://doi.org/10.1080/10934529.2016.1221212 Published online: 09 Sep 2016

This paper is behind a paywall but I have access through a membership in the Canadian Academy of Independent Scholars. So, here’s the part I found interesting,

… The main chemical pollutants present in textiles are dyes containing carcinogenic amines, metals, pentachlorophenol, chlorine bleaching, halogen carriers, free formaldehyde, biocides, fire retardants and softeners.[1] Metals are also found in textile products and clothing are used for many purposes: Co [cobalt], Cu [copper], Cr [chromium] and Pb [lead] are used as metal complex dyes, Cr as pigments mordant, Sn [tin] as catalyst in synthetic fabrics and as synergists of flame retardants, Ag [silver] as antimicrobials and Ti [titanium] and Zn [zinc] as water repellents and odor preventive agents.[2–5] When present in textile materials, the toxic elements mentioned above represent not only a major environmental problem in the textile industry but also they may impose potential danger to human health by absorption through the skin.[6,7] [emphasis mine] Chronic exposure to low levels of toxic elements has been associated with a number of adverse human health effects.[8–11] Also exposure to high concentration of elements which are considered as essential for humans such as Cu, Co, Fe [iron], Mn [manganese] or Zn among others, can also be harmful.[12] [emphasis mine] Co, Cr, Cu and Ni [nickel] are skin sensitizers,[13,14] which may lead to contact dermatitis, also Cr can lead to liver damage, pulmonary congestion and cancer.[15] [emphasis mine] The purpose of the present study was to determine the concentrations of a number of elements in various skin-contact clothes. For risk estimations, the determination of the extractable amounts of heavy metals is of importance, since they reflect their possible impact on human health. [p. 2 PDF]

So, there’s the link to cancer. Maybe.

Are zinc-infused undies a good idea?

It could go either way. (For specifics about the conclusions reached in the study, scroll down to the Ooops! subheading.) I like the idea of using a sustainable Eucalyptus-based material (Tencel) for the underwear as I have heard that cotton isn’t sustainably cultivated. As for claims regarding the product’s environmental friendliness, it’s based on wood, specifically, cellulose, which Canadian researchers have been experimenting with at the nanoscale* and they certainly have been touting nanocellulose as environmentally friendly. Tencel’s sustainability page lists a number of environmental certifications from the European Union, Belgium, and the US.

*Somewhere in the Kickstarter campaign material, there’s a reference to nanofibrils and I’m guessing those nanofibrils are Tencel’s wood fibers at the nanoscale. As well, I’m guessing that smartcel’s fabric contains zinc oxide nanoparticles.

Whether or not you need more zinc is something you need to determine for yourself. Finding out if the pH balance in your vagina is within a healthy range might be a good way to start. It would also be nice to know how much zinc is in the underwear and whether it’s being used for its antimicrobial properties and/or as a source of one of the minerals necessary for your health.

How the Kickstarter campaign is going

At the time of this posting, they’ve reached a little over $24,000 with six days left. The goal was $10,000. Sadly, there are no questions in the FAQ (frequently asked questions).

Reading tips

It’s exhausting trying to track down authenticity. In this case, there were health and environmental claims but I do have a few suggestions.

  1. Look at the imagery critically and try to ignore the hyperbole.
  2. How specific are the claims? e.g., How much zinc is there in the underpants?
  3. Who are their experts and how trustworthy are the agencies/companies mentioned?
  4. If research is cited, are the publishers reputable and is the journal reputable?
  5. Does it make sense given your own experience?
  6. What are the consequences if you make a mistake?

Overblown claims and vague intimations of disease are not usually good signs. Conversely, someone with great credentials may not be trustworthy, which is why I usually try to find more than one source for confirmation. The person behind this campaign and the Huha company is Alexa Suter. She’s based in Vancouver, Canada and seems to have spent most of her time as a writer and social media and video producer with a few forays into sales and real estate. I wonder if she’s modeling herself and her current lifestyle entrepreneurial effort on Gwyneth Paltrow and her lifestyle company, Goop.

Huha underwear may fulfill its claims or it may be just another pair of underwear or it may be unhealthy. As for the environmentally friendly claims, let’s hope that’s the case. On a personal level, I’m more hopeful about that.

Regardless, the underwear is not cheap. The smallest pledge that will get your underwear (a three-pack) is $65 CAD.

Ooops! ETA: November 8, 2019:

I forgot to include the conclusion the researchers arrived at and some details on how they arrived at those conclusions. First, they tested 120 pairs of underpants in all sorts of colours and made in different parts of the world.

Second, some underpants showed excessive levels of metals. Cotton was the most likely material to show excess although nylon and polyester can also be problematic. To put this into proportion and with reference to zinc, “Zn exceeded the limit in 4% of the tested samples and was found mostly in samples manufactured in China.” [p. 6 PDF] Finally, dark colours tested for higher levels of metals than light colours.

While it doesn’t mention underpants as such, there’s a November 8, 2019 article ‘Five things everyone with a vagina should know‘ by Paula McGrath for BBC news online. McGrath’s health expert is Dr. Jen Gunter, a physician whose specialties are obstetrics, gynaecology, and pain.

I found it at the movies: a commentary on/review of “Films from the Future”

Kudos to anyone who recognized the reference to Pauline Kael (she changed film criticism forever) and her book “I Lost it at the Movies.” Of course, her book title was a bit of sexual innuendo, quite risqué for an important film critic in 1965 but appropriate for a period (the 1960s) associated with a sexual revolution. (There’s more about the 1960’s sexual revolution in the US along with mention of a prior sexual revolution in the 1920s in this Wikipedia entry.)

The title for this commentary is based on an anecdote from Dr. Andrew Maynard’s (director of the Arizona State University [ASU] Risk Innovation Lab) popular science and technology book, “Films from the Future: The Technology and Morality of Sci-Fi Movies.”

The ‘title-inspiring’ anecdote concerns Maynard’s first viewing of ‘2001: A Space Odyssey’, when, as a rather “bratty” 16-year-old who preferred to read science fiction, he discovered new ways of seeing and imagining the world. Maynard isn’t explicit about when he became a ‘techno nerd’ or how movies gave him an experience books couldn’t, but presumably at 16 he was already gearing up for a career in the sciences. That ‘movie’ revelation, received in front of a black and white television on January 1, 1982, eventually led him to write “Films from the Future.” (He has a PhD in physics, which he is now applying to the field of risk innovation. For a more detailed description of Dr. Maynard and his work, there’s his ASU profile webpage and, of course, the introduction to his book.)

The book is quite timely. I don’t know how many people have noticed, but science and scientific innovation are being covered more frequently in the media than they have been in many years. Science fairs and festivals are being founded on what seems to be a daily basis, and you can now find science in art galleries. (Not to mention the movies and television, where science topics are covered in comic book adaptations, in comedy, and in standard science fiction style.) Much of this activity is centered on what’s called ‘emerging technologies’. These technologies are why people argue for what’s known as ‘blue sky’ or ‘basic’ or ‘fundamental’ science, for without that science there would be no emerging technology.

Films from the Future

Isn’t reading the Table of Contents (ToC) the best way to approach a book? (From Films from the Future; Note: The formatting has been altered),

Table of Contents
Chapter One
In the Beginning 14
Beginnings 14
Welcome to the Future 16
The Power of Convergence 18
Socially Responsible Innovation 21
A Common Point of Focus 25
Spoiler Alert 26
Chapter Two
Jurassic Park: The Rise of Resurrection Biology 27
When Dinosaurs Ruled the World 27
De-Extinction 31
Could We, Should We? 36
The Butterfly Effect 39
Visions of Power 43
Chapter Three
Never Let Me Go: A Cautionary Tale of Human Cloning 46
Sins of Futures Past 46
Cloning 51
Genuinely Human? 56
Too Valuable to Fail? 62
Chapter Four
Minority Report: Predicting Criminal Intent 64
Criminal Intent 64
The “Science” of Predicting Bad Behavior 69
Criminal Brain Scans 74
Machine Learning-Based Precognition 77
Big Brother, Meet Big Data 79
Chapter Five
Limitless: Pharmaceutically-enhanced Intelligence 86
A Pill for Everything 86
The Seduction of Self-Enhancement 89
Nootropics 91
If You Could, Would You? 97
Privileged Technology 101
Our Obsession with Intelligence 105
Chapter Six
Elysium: Social Inequity in an Age of Technological Extremes 110
The Poor Shall Inherit the Earth 110
Bioprinting Our Future Bodies 115
The Disposable Workforce 119
Living in an Automated Future 124
Chapter Seven
Ghost in the Shell: Being Human in an Augmented Future 129
Through a Glass Darkly 129
Body Hacking 135
More than “Human”? 137
Plugged In, Hacked Out 142
Your Corporate Body 147
Chapter Eight
Ex Machina: AI and the Art of Manipulation 154
Plato’s Cave 154
The Lure of Permissionless Innovation 160
Technologies of Hubris 164
Superintelligence 169
Defining Artificial Intelligence 172
Artificial Manipulation 175
Chapter Nine
Transcendence: Welcome to the Singularity 180
Visions of the Future 180
Technological Convergence 184
Enter the Neo-Luddites 190
Techno-Terrorism 194
Exponential Extrapolation 200
Make-Believe in the Age of the Singularity 203
Chapter Ten
The Man in the White Suit: Living in a Material World 208
There’s Plenty of Room at the Bottom 208
Mastering the Material World 213
Myopically Benevolent Science 220
Never Underestimate the Status Quo 224
It’s Good to Talk 227
Chapter Eleven
Inferno: Immoral Logic in an Age of Genetic Manipulation 231
Decoding Make-Believe 231
Weaponizing the Genome 234
Immoral Logic? 238
The Honest Broker 242
Dictating the Future 248
Chapter Twelve
The Day After Tomorrow: Riding the Wave of Climate Change 251
Our Changing Climate 251
Fragile States 255
A Planetary “Microbiome” 258
The Rise of the Anthropocene 260
Building Resiliency 262
Geoengineering the Future 266
Chapter Thirteen
Contact: Living by More than Science Alone 272
An Awful Waste of Space 272
More than Science Alone 277
Occam’s Razor 280
What If We’re Not Alone? 283
Chapter Fourteen
Looking to the Future 288
Acknowledgments 293

The ToC gives the reader a pretty good clue as to where the author is going with his book, and Maynard explains how he chose his movies in his introductory chapter (from Films from the Future),

“There are some quite wonderful science fiction movies that didn’t make the cut because they didn’t fit the overarching narrative (Blade Runner and its sequel Blade Runner 2049, for instance, and the first of the Matrix trilogy). There are also movies that bombed with the critics, but were included because they ably fill a gap in the bigger story around emerging and converging technologies. Ultimately, the movies that made the cut were chosen because, together, they create an overarching narrative around emerging trends in biotechnologies, cybertechnologies, and materials-based technologies, and they illuminate a broader landscape around our evolving relationship with science and technology. And, to be honest, they are all movies that I get a kick out of watching.” (p. 17)

Jurassic Park (Chapter Two)

Dinosaurs do not interest me—they never have. Despite my profound indifference I did see the movie, Jurassic Park, when it was first released (someone talked me into going). And, I am still profoundly indifferent. Thankfully, Dr. Maynard finds meaning and a connection to current trends in biotechnology,

Jurassic Park is unabashedly a movie about dinosaurs. But it’s also a movie about greed, ambition, genetic engineering, and human folly—all rich pickings for thinking about the future, and what could possibly go wrong. (p. 28)

What really stands out with Jurassic Park, over twenty-five years later, is how it reveals a very human side of science and technology. This comes out in questions around when we should tinker with technology and when we should leave well enough alone. But there is also a narrative here that appears time and time again with the movies in this book, and that is how we get our heads around the sometimes oversized roles mega-entrepreneurs play in dictating how new tech is used, and possibly abused. These are all issues that are just as relevant now as they were in 1993, and are front and center of ensuring that the technology-enabled future we’re building is one where we want to live, and not one where we’re constantly fighting for our lives. (pp. 30-1)

He also describes a connection to current trends in biotechnology,

De-Extinction

In a far corner of Siberia, two Russians—Sergey Zimov and his son Nikita—are attempting to recreate the Ice Age. More precisely, their vision is to reconstruct the landscape and ecosystem of northern Siberia in the Pleistocene, a period in Earth’s history that stretches from around two and a half million years ago to eleven thousand years ago. This was a time when the environment was much colder than now, with huge glaciers and ice sheets flowing over much of the Earth’s northern hemisphere. It was also a time when humans coexisted with animals that are long extinct, including saber-tooth cats, giant ground sloths, and woolly mammoths.

The Zimovs’ ambitions are an extreme example of “Pleistocene rewilding,” a movement to reintroduce relatively recently extinct large animals, or their close modern-day equivalents, to regions where they were once common. In the case of the Zimovs, the father-and-son team believe that, by reconstructing the Pleistocene ecosystem in the Siberian steppes and elsewhere, they can slow down the impacts of climate change on these regions. These areas are dominated by permafrost, ground that never thaws through the year. Permafrost ecosystems have developed and survived over millennia, but a warming global climate (a theme we’ll come back to in chapter twelve and the movie The Day After Tomorrow) threatens to catastrophically disrupt them, and as this happens, the impacts on biodiversity could be devastating. But what gets climate scientists even more worried is potentially massive releases of trapped methane as the permafrost disappears.

Methane is a powerful greenhouse gas—some eighty times more effective at exacerbating global warming than carbon dioxide—and large-scale releases from warming permafrost could trigger catastrophic changes in climate. As a result, finding ways to keep it in the ground is important. And here the Zimovs came up with a rather unusual idea: maintaining the stability of the environment by reintroducing long-extinct species that could help prevent its destruction, even in a warmer world. It’s a wild idea, but one that has some merit.8 As a proof of concept, though, the Zimovs needed somewhere to start. And so they set out to create a park for de-extinct Siberian animals: Pleistocene Park.9

Pleistocene Park is by no stretch of the imagination a modern-day Jurassic Park. The dinosaurs in Hammond’s park date back to the Mesozoic period, from around 250 million years ago to sixty-five million years ago. By comparison, the Pleistocene is relatively modern history, ending a mere eleven and a half thousand years ago. And the vision behind Pleistocene Park is not thrills, spills, and profit, but the serious use of science and technology to stabilize an increasingly unstable environment. Yet there is one thread that ties them together, and that’s using genetic engineering to reintroduce extinct species. In this case, the species in question is warm-blooded and furry: the woolly mammoth.

The idea of de-extinction, or bringing back species from extinction (it’s even called “resurrection biology” in some circles), has been around for a while. It’s a controversial idea, and it raises a lot of tough ethical questions. But proponents of de-extinction argue that we’re losing species and ecosystems at such a rate that we can’t afford not to explore technological interventions to help stem the flow.

Early approaches to bringing species back from the dead have involved selective breeding. The idea was simple—if you have modern ancestors of a recently extinct species, selectively breeding specimens that have a higher genetic similarity to their forebears can potentially help reconstruct their genome in living animals. This approach is being used in attempts to bring back the aurochs, an ancestor of modern cattle.10 But it’s slow, and it depends on the fragmented genome of the extinct species still surviving in its modern-day equivalents.

An alternative to selective breeding is cloning. This involves finding a viable cell, or cell nucleus, in an extinct but well-preserved animal and growing a new living clone from it. It’s definitely a more appealing route for impatient resurrection biologists, but it does mean getting your hands on intact cells from long-dead animals and devising ways to “resurrect” these, which is no mean feat. Cloning has potential when it comes to recently extinct species whose cells have been well preserved—for instance, where the whole animal has become frozen in ice. But it’s still a slow and extremely limited option.

Which is where advances in genetic engineering come in.

The technological premise of Jurassic Park is that scientists can reconstruct the genome of long-dead animals from preserved DNA fragments. It’s a compelling idea, if you think of DNA as a massively long and complex instruction set that tells a group of biological molecules how to build an animal. In principle, if we could reconstruct the genome of an extinct species, we would have the basic instruction set—the biological software—to reconstruct individual members of it.

The bad news is that DNA-reconstruction-based de-extinction is far more complex than this. First you need intact fragments of DNA, which is not easy, as DNA degrades easily (and is pretty much impossible to obtain, as far as we know, for dinosaurs). Then you need to be able to stitch all of your fragments together, which is akin to completing a billion-piece jigsaw puzzle without knowing what the final picture looks like. This is a Herculean task, although with breakthroughs in data manipulation and machine learning, scientists are getting better at it. But even when you have your reconstructed genome, you need the biological “wetware”—all the stuff that’s needed to create, incubate, and nurture a new living thing, like eggs, nutrients, a safe space to grow and mature, and so on. Within all this complexity, it turns out that getting your DNA sequence right is just the beginning of translating that genetic code into a living, breathing entity. But in some cases, it might be possible.

In 2013, Sergey Zimov was introduced to the geneticist George Church at a conference on de-extinction. Church is an accomplished scientist in the field of DNA analysis and reconstruction, and a thought leader in the field of synthetic biology (which we’ll come back to in chapter nine). It was a match made in resurrection biology heaven. Zimov wanted to populate his Pleistocene Park with mammoths, and Church thought he could see a way of achieving this.

What resulted was an ambitious project to de-extinct the woolly mammoth. Church and others who are working on this have faced plenty of hurdles. But the technology has been advancing so fast that, as of 2017, scientists were predicting they would be able to reproduce the woolly mammoth within the next two years.

One of those hurdles was the lack of solid DNA sequences to work from. Frustratingly, although there are many instances of well preserved woolly mammoths, their DNA rarely survives being frozen for tens of thousands of years. To overcome this, Church and others have taken a different tack: Take a modern, living relative of the mammoth, and engineer into it traits that would allow it to live on the Siberian tundra, just like its woolly ancestors.

Church’s team’s starting point has been the Asian elephant. This is their source of base DNA for their “woolly mammoth 2.0”—their starting source code, if you like. So far, they’ve identified fifty plus gene sequences they think they can play with to give their modern-day woolly mammoth the traits it would need to thrive in Pleistocene Park, including a coat of hair, smaller ears, and a constitution adapted to cold.

The next hurdle they face is how to translate the code embedded in their new woolly mammoth genome into a living, breathing animal. The most obvious route would be to impregnate a female Asian elephant with a fertilized egg containing the new code. But Asian elephants are endangered, and no one’s likely to allow such cutting edge experimentation on the precious few that are still around, so scientists are working on an artificial womb for their reinvented woolly mammoth. They’re making progress with mice and hope to crack the motherless mammoth challenge relatively soon.

It’s perhaps a stretch to call this creative approach to recreating a species (or “reanimation” as Church refers to it) “de-extinction,” as what is being formed is a new species. … (pp. 31-4)

This selection illustrates what Maynard does so very well throughout the book where he uses each film as a launching pad for a clear, readable description of relevant bits of science so you understand why the premise was likely, unlikely, or pure fantasy while linking it to contemporary practices, efforts, and issues. In the context of Jurassic Park, Maynard goes on to raise some fascinating questions such as: Should we revive animals rendered extinct (due to obsolescence or inability to adapt to new conditions) when we could develop new animals?

General thoughts

‘Films from the Future’ offers readable (to non-scientific types) science, lively writing, and the occasional ‘memoirish’ anecdote. As well, Dr. Maynard raises the curtain on aspects of the scientific enterprise that most of us do not get to see. For example, the meeting between Sergey Zimov and George Church and how it led to new ‘de-extinction’ work. He also describes the problems that the scientists encountered and are encountering. This is in direct contrast to how scientific work is usually presented in the news media: as one glorious breakthrough after the next.

Maynard does discuss the issues of social inequality and power and ownership. For example, who owns your transplant or data? Puzzlingly, he doesn’t touch on the current environment where scientists in the US and elsewhere are encouraged/pressured to start up companies commercializing their work.

Nor is there any mention of how universities are participating in this grand business experiment often called ‘innovation’. (My March 15, 2017 posting describes an outcome for the CRISPR [gene editing system] patent fight taking place between Harvard University’s & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley, and my Sept. 11, 2018 posting about an art/science exhibit in Vancouver [Canada] provides an update for round 2 of the Broad Institute vs. UC Berkeley patent fight [scroll down about 65% of the way].) *To read about how my ‘cultural blindness’ shows up here, scroll down to the single asterisk at the end.*

There’s a foray through machine learning and big data as applied to predictive policing in Maynard’s ‘Minority Report’ chapter (my November 23, 2017 posting describes Vancouver’s predictive policing initiative [no psychics involved], the first such in Canada). There’s no mention of surveillance technology, which, if I recall properly, was part of the movie’s future environment, practiced both by the state and by corporations. (Mia Armstrong’s November 15, 2018 article for Slate on Chinese surveillance being exported to Venezuela provides interesting insight.)

The gaps are interesting and various. This of course points to a problem all science writers have when attempting an overview of science. (Carl Zimmer’s latest, ‘She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity’, a doorstopping 574 pages, also has some gaps despite his focus on heredity.)

Maynard has worked hard to give a comprehensive overview in a remarkably compact 279 pages while developing his theme about science and the human element. In other words, science is not monolithic; it’s created by human beings and subject to all the flaws and benefits that humanity’s efforts are always subject to—scientists are people too.

The readership for ‘Films from the Future’ spans from the mildly interested science reader to someone like me who’s been writing/blogging about these topics (more or less) for about 10 years. I learned a lot reading this book.

Next time, and I’m hopeful there’ll be a next time, Maynard might want to describe the parameters he’s set for his book in more detail than is possible in his chapter headings. He could have mentioned that he’s not a cinéaste, so his descriptions of the movies are very much focused on the story as conveyed through words. He doesn’t mention colour palettes, camera angles, or, even, cultural lenses.

Take, for example, his chapter on ‘Ghost in the Shell’. Focused on the Japanese animation film and not the live action Hollywood version, he talks about human enhancement and cyborgs. The Japanese have a different take on robots, inanimate objects, and, I assume, cyborgs than is found in Canada or the US or Great Britain, for that matter (according to a colleague of mine, an Englishwoman who lived in Japan for ten or more years). There’s also the chapter on the Ealing comedy, The Man in The White Suit, an English film from the 1950s. That too has a cultural (as well as historical) flavour but, since Maynard is from England, he may take that cultural flavour for granted. ‘Never Let Me Go’ in Chapter Three was also a UK production, albeit far more recent than the Ealing comedy, and it’s interesting to consider how a UK production about cloning might differ from a US or Chinese or … production on the topic. I am hearkening back to Maynard’s anecdote about movies giving him new ways of seeing and imagining the world.

There’s a simple corrective: a couple of sentences in Maynard’s introductory chapter cautioning that an in-depth exploration of ‘cultural lenses’ was not possible without expanding the book to an unreadable size, followed by a sentence in each of the two chapters noting that there are cultural differences.

One area where I had a significant problem was with regard to being “programmed” and having “instinctual” behaviour,

As a species, we are embarrassingly programmed to see “different” as “threatening,” and to take instinctive action against it. It’s a trait that’s exploited in many science fiction novels and movies, including those in this book. If we want to see the rise of increasingly augmented individuals, we need to be prepared for some social strife. (p. 136)

These concepts are much debated in the social sciences, and there are arguments for and against ‘instincts regarding strangers and their possible differences’. I gather Dr. Maynard hews to the ‘instinct to defend/attack’ school of thought.

One final quandary: there was no sex, and I was expecting it in the Ex Machina chapter, especially now that sexbots are about to take over the world (I exaggerate). Certainly, if you’re talking about “social strife,” then sexbots would seem to be a fruitful line of inquiry, especially when there’s talk of how they could benefit families (my August 29, 2018 posting). Again, there could have been a sentence explaining why Maynard focused almost exclusively in this chapter on the discussions about artificial intelligence and superintelligence.

Taken in the context of the book, these are trifling issues and shouldn’t stop you from reading Films from the Future. What Maynard has accomplished here is impressive and I hope it’s just the beginning.

Final note

Bravo Andrew! (Note: We’ve been ‘internet acquaintances/friends’ since the first year I started blogging. When I’m referring to him in his professional capacity, he’s Dr. Maynard and when it’s not strictly in his professional capacity, it’s Andrew. For this commentary/review I wanted to emphasize his professional status.)

If you need to see a few more samples of Andrew’s writing, there’s a Nov. 15, 2018 essay on The Conversation, Sci-fi movies are the secret weapon that could help Silicon Valley grow up and a Nov. 21, 2018 article on slate.com, The True Cost of Stain-Resistant Pants; The 1951 British comedy The Man in the White Suit anticipated our fears about nanotechnology. Enjoy.

****Added at 1700 hours on Nov. 22, 2018: You can purchase Films from the Future here.

*Nov. 23, 2018: I should have been more specific and said ‘academic scientists’. In Canada, the great percentage of scientists are academic. It’s to the point where the OECD (Organization for Economic Cooperation and Development) has noted that amongst industrialized countries, Canada has very few industrial scientists in comparison to the others.

FrogHeart’s good-bye to 2017 and hello to 2018

This is going to be relatively short and sweet(ish). Starting with the 2017 review:

Nano blogosphere and the Canadian blogosphere

From my perspective there’s been a change taking place in the nano blogosphere over the last few years. There are fewer blogs, along with fewer postings from those who still blog. Interestingly, some blogs are becoming more generalized. At the same time, Foresight Institute’s Nanodot blog (like FrogHeart) has expanded its range of topics to include artificial intelligence and other subjects. Andrew Maynard’s 2020 Science blog now exists in an archived form but, before its demise, it, too, had started to include other topics, notably risk in its many forms as opposed to risk and nanomaterials. Dexter Johnson’s blog, Nanoclast (on the IEEE [Institute of Electrical and Electronics Engineers] website), maintains its 3x weekly postings. Tim Harper, who often wrote about nanotechnology on his Cientifica blog, appears to have found a more freewheeling approach that is dominated by his Twitter feed, although he also seems (I can’t confirm that the latest posts were written in 2017) to blog here on timharper.net.

The Canadian science blogosphere seems to be getting quieter, if Science Borealis (a blog aggregator) is a measure. My overall impression is that the bloggers have been a bit quieter this year, with fewer postings on the feed, or perhaps that’s due to some technical issues (sometimes FrogHeart posts do not get onto the feed). On the promising side, Science Borealis teamed with the Science Writers and Communicators of Canada Association to run a contest, “2017 People’s Choice Awards: Canada’s Favourite Science Online!” There were two categories (Favourite Science Blog and Favourite Science Site) and you can find a list of the finalists with links to the winners here.

Big congratulations for the winners: Canada’s Favourite Blog 2017: Body of Evidence (Dec. 6, 2017 article by Alina Fisher for Science Borealis) and Let’s Talk Science won Canada’s Favourite Science Online 2017 category as per this announcement.

However, I can’t help wondering: where were ASAP Science, Acapella Science, Quirks & Quarks, IFLS (I f***ing love science), and others on the list of finalists? I would have thought any of these would have a lock on a position as a finalist. These are Canadian online science purveyors and they are hugely popular, which should mean they’d have no problem getting nominated and getting votes. I can’t find the criteria for nominations (or any hint there will be a 2018 contest), so I imagine their absence from the 2017 finalists list will remain a mystery to me.

Looking forward to 2018, I think that the nano blogosphere will continue with its transformation into a more general science/technology-oriented community. To some extent, I believe this reflects the fact that nanotechnology is being absorbed into the larger science/technology effort as foundational (something wiser folks than me predicted some years ago).

As for Science Borealis and the Canadian science online effort, I’m going to interpret the quieter feeds as a sign of a maturing community. After all, there are always ups and downs in terms of enthusiasm and participation and as I noted earlier the launch of an online contest is promising as is the collaboration with Science Writers and Communicators of Canada.

Canadian science policy

It was a big year.

Canada’s Chief Science Advisor

Canada’s first chief science advisor in many years, Dr. Mona Nemer, stepped into her position sometime in Fall 2017; the official announcement was made on Sept. 26, 2017. I covered the event in my Sept. 26, 2017 posting, which includes a few more details than found in the official announcement.

You’ll also find in that Sept. 26, 2017 posting a brief discourse on the Naylor report (also known as the Review of Fundamental Science) and some speculation on why, to my knowledge, there has been no action taken as a consequence.  The Naylor report was released April 10, 2017 and was covered here in a three-part review, published on June 8, 2017,

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

I have found another commentary (much briefer than mine) by Paul Dufour on the Canadian Science Policy Centre website (November 9, 2017).

Subnational and regional science funding

This began in 2016 with a workshop mentioned in my November 10, 2016 posting, ‘Council of Canadian Academies and science policy for Alberta.’ By the time the report was published, the endeavour had been transformed into: Science Policy: Considerations for Subnational Governments (report here and my June 22, 2017 commentary here).

I don’t know what will come of this but I imagine scientists will be supportive, as it means more money and they are always looking for more money. Still, the new government in British Columbia has only one ‘science entity’, and I’m not sure it’s still operational, but it was called the Premier’s Technology Council. To my knowledge, there is no ministry or other agency that is focused primarily or partially on science.

Meanwhile, a couple of representatives from the health sciences (neither of whom were involved in the production of the report) seem quite enthused about the prospects for provincial money in their October 27, 2017 opinion piece for the Canadian Science Policy Centre (Bev Holmes, Interim CEO, Michael Smith Foundation for Health Research, British Columbia, and Patrick Odnokon, CEO, Saskatchewan Health Research Foundation).

Artificial intelligence and Canadians

An event which I find more interesting with time was the announcement of the Pan-Canadian Artificial Intelligence Strategy in the 2017 Canadian federal budget. Since then, there has been a veritable gold rush mentality with regard to artificial intelligence in Canada. One announcement after the next about various corporations opening new offices in Toronto or Montréal has been made in the months since.

What has really piqued my interest recently is a report being written for Canada’s Treasury Board by Michael Karlin (you can learn more from his Twitter feed, although you may need to scroll down past some of his more personal tweets, e.g., cassoulet in the Dec. 29, 2017 tweets). As for Karlin’s report, which is a work in progress, you can find out more about the report and Karlin in a December 12, 2017 article by Rob Hunt for the Algorithmic Media Observatory (sponsored by the Social Sciences and Humanities Research Council of Canada [SSHRC], the Centre for the Study of Democratic Citizenship, and the Fonds de recherche du Québec: Société et culture).

You can ring in 2018 by reading and making comments, which could influence the final version, on Karlin’s “Responsible Artificial Intelligence in the Government of Canada” part of the government’s Digital Disruption White Paper Series.

As for other 2018 news, the Council of Canadian Academies is expected to publish “The State of Science and Technology and Industrial Research and Development in Canada” at some point soon (we hope). This report follows and incorporates two previous ‘states’, The State of Science and Technology in Canada, 2012 (the first of these was a 2006 report) and the 2013 version of The State of Industrial R&D in Canada. There is already some preliminary data for this latest ‘state of’  (you can find a link and commentary in my December 15, 2016 posting).

FrogHeart then (2017) and soon (2018)

On looking back, I see that the year started out at quite a clip as I was attempting to hit the 5000th blog posting mark, which I did on March 3, 2017. I have since cut back from the high of 3 postings/day to approximately 1 posting/day. It makes things more manageable, allowing me to focus on other matters.

By the way, you may note that the ‘Donate’ button has disappeared from my sidebar. I thank everyone who donated from the bottom of my heart. The money was more than currency; it also symbolized encouragement. On the sad side, I moved from one hosting service to a new one (Sibername) late in December 2016 and have been experiencing serious bandwidth issues, which result in FrogHeart’s disappearance from the web for days at a time. I am trying to resolve the issues and hope that such actions as removing the ‘Donate’ button will help.

I wish my readers all the best for 2018 as we explore nanotechnology and other emerging technologies!

(I apologize for any and all errors. I usually take a little more time to write this end-of-year and coming-year piece, but due to bandwidth issues I was unable to access my draft and give it at least one review. And at this point, I’m too tired to try spotting errors. If you see any, please do let me know.)

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although the news item/news release never really explains how. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
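The training loop described in the release (compare actual outputs to expected ones, correct the predictive error, repeat) can be sketched in a few lines of code. This is a minimal illustrative example, not the systems discussed in the article; the data, layer sizes, and learning rate are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs (stand-ins for crude shape/colour/line features) and
# the expected outputs the network should learn to reproduce.
X = rng.normal(size=(100, 3))
y = (X @ np.array([0.5, -1.0, 2.0]) > 0).astype(float).reshape(-1, 1)

# Two layers: each successive layer re-represents the inputs more abstractly.
W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)           # first layer of abstraction
    return h, sigmoid(h @ W2)     # network's actual output

_, out = forward(X, W1, W2)
initial_error = np.mean((out - y) ** 2)

# Repetition and optimization: compare actual vs. expected outputs and
# correct the predictive error by nudging the weights downhill.
for _ in range(2000):
    h, out = forward(X, W1, W2)
    d_out = (out - y) * out * (1 - out) / len(X)   # error signal at the output
    d_h = (d_out @ W2.T) * (1 - h ** 2)            # error pushed back one layer
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

_, out = forward(X, W1, W2)
final_error = np.mean((out - y) ** 2)              # lower than initial_error
```

The point of the sketch is only the feedback loop the release describes; the DNNs used to generate art (DeepDream and kin) have many more layers and very different architectures.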

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN creations is a combined product of technological automation on the one hand, and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claims to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5. DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.