Tag Archives: Sheena Goodyear

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. Easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s).

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT (in the form of its precursor, GPT-2) first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society and founder of the ASU Future of Being Human initiative and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.

If you have time, I recommend reading Ornes’s March 6, 2023 article.
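As an aside, for readers wondering what “predict what comes next, over and over, based purely on statistics” means in practice, here is a minimal, hypothetical Python sketch of that autoregressive loop. It is not the code behind ChatGPT or any real model; the generate function, the predict_next_token_probabilities stand-in, and the toy_model are my own illustrative names, and the statistical heavy lifting that a real LLM’s billions of parameters do is reduced here to a toy function.

```python
import random

def generate(prompt_tokens, predict_next_token_probabilities,
             max_new_tokens=50, end_token="<end>"):
    """Minimal sketch of autoregressive generation: repeatedly predict the
    next token from the text so far, append it, and feed the longer text
    back in."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # The model's only job: given the tokens so far, return a probability
        # for every candidate next token (here, a dict of token -> probability).
        probabilities = predict_next_token_probabilities(tokens)
        # Sample one token according to those probabilities.
        next_token = random.choices(
            list(probabilities.keys()),
            weights=list(probabilities.values()),
        )[0]
        if next_token == end_token:
            break
        tokens.append(next_token)
    return tokens

# Usage with a toy stand-in "model" that always predicts the same distribution:
toy_model = lambda tokens: {"fish": 0.6, "nemo": 0.3, "<end>": 0.1}
print(generate(["finding"], toy_model, max_new_tokens=5))
```

The point of the sketch is that the surprising behaviours described in Ornes’s article (emoji riddles, imitating a Linux terminal) come from scaling up whatever supplies those next-token probabilities, not from changing this simple loop.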

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles. The first, “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts,” was published on May 5, 2023, Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting which highlights Martin Rees’ efforts to found the Centre for Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture “Digital versus biological intelligence: Reasons for concern about AI” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress to suggest that AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors: Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (the radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

Should you live in Vancouver (Canada) and be attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023; it was far briefer than the previous March 2023 warning. From the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

July 2020 update on Dr. He Jiankui (the CRISPR twins) situation

This was going to be written for January 2020 but sometimes things happen (e.g., a two-part overview of science culture in Canada from 2010-19 morphed into five parts with an addendum and, then, a pandemic). By now (July 28, 2020), Dr. He’s sentencing to three years in jail announced by the Chinese government in January 2020 is old news.

Regardless, it seems a neat and tidy ending to an international scientific scandal concerned with germline editing, which resulted in at least one set of twins, Lulu and Nana. He claimed to have introduced a variant of their CCR5 gene (the “Delta 32” variation), which does occur naturally; scientists have noted that people with this mutation seem to be resistant to HIV and smallpox.

For those not familiar with the events surrounding the announcement, here’s a brief recap. News of the world’s first gene-edited twins’ birth was announced in November 2018, just days before an international meeting of experts who had agreed in 2015 on a moratorium on exactly that kind of work. The scientist making the announcement about the twins was scheduled for at least one presentation at the meeting, which was to be held in Hong Kong. He did give his presentation but left the meeting shortly afterwards as shock was beginning to abate and fierce criticism was rising. My November 28, 2018 posting (First CRISPR gene-edited babies? Ethics and the science story) offers a timeline of sorts and my initial response.

I subsequently followed up with two more posts as the story continued to develop. My May 17, 2019 posting (Genes, intelligence, Chinese CRISPR (clustered regularly interspaced short palindromic repeats) babies, and other children) featured news that Dr. He’s gene-editing may have resulted in the twins having improved cognitive skills. Then, more news broke. The title for my June 20, 2019 posting (Greater mortality for the CRISPR twins Lulu and Nana?) is self-explanatory.

I have roughly organized my sources for this posting into two narratives, which I’m contrasting with each other. First, there is the one found in the mainstream (English-language) media, ‘The Popular Narrative’. Second, there is a story in which Dr. He is viewed more sympathetically and as part of a larger community where there isn’t nearly as much consensus over what should or shouldn’t be done as ‘the popular narrative’ insists.

The popular narrative: Dr. He was a rogue scientist

A December 30, 2019 article for Fast Company by Kristin Toussaint lays out the latest facts (Note: A link has been removed),

… Now, a court in China has sentenced He to three years in prison, according to Xinhua, China’s state-run press agency, for “illegal medical practices.”

The court in China’s southern city of Shenzhen says that He’s team, which included colleagues Zhang Renli and Qin Jinzhou from two medical institutes in Guangdong Province, falsified ethical approval documents and violated China’s “regulations and ethical principles” with their gene-editing work. Zhang was sentenced to two years in jail, and Qin to 18 months with a two-year reprieve, according to Xinhua.

Ian Sample’s December 31, 2019 article for the Guardian offers more detail (Note: Links have been removed),

The court in Shenzhen found He guilty of “illegal medical practices” and in addition to the prison sentence fined him 3m yuan (£327,360), according to the state news agency, Xinhua. Two others on He’s research team received lesser fines and sentences.

“The three accused did not have the proper certification to practise medicine, and in seeking fame and wealth, deliberately violated national regulations in scientific research and medical treatment,” the court said, according to Xinhua. “They’ve crossed the bottom line of ethics in scientific research and medical ethics.”

[…] the court found He had forged documents from an ethics review panel that were used to recruit couples for the research. The couples that enrolled had a man with HIV and a woman without and were offered IVF in return for taking part.

Zhang Renli, who worked with He, was sentenced to two years in prison and fined 1m yuan. Colleague Qin Jinzhou received an 18-month sentence, but with a two-year reprieve, and a 500,000 yuan fine.

He’s experiments, which were carried out on seven embryos in late 2018, sent shockwaves through the medical and scientific world. The work was swiftly condemned for deceiving vulnerable patients and using a risky, untested procedure with no medical justification. Earlier this month, MIT Technology Review released excerpts from an early manuscript of He’s work. It casts serious doubts on his claims to have made the children immune to HIV.

Even as the scientific community turned against He, the scientist defended his work and said he was proud of having created Lulu and Nana. A third child has since been born as a result of the experiments.

Robin Lovell-Badge at the Francis Crick Institute in London said it was “far too premature” for anyone to pursue genome editing on embryos that are intended to lead to pregnancies. “At this stage we do not know if the methods will ever be sufficiently safe and efficient, although the relevant science is progressing rapidly, and new methods can look promising. It is also important to have standards established, including detailed regulatory pathways, and appropriate means of governance.”

A December 30, 2019 article, by Carolyn Y. Johnson for the Washington Post, covers much the same ground although it does go on to suggest that there might be some blame to spread around (Note: Links have been removed),

The Chinese researcher who stunned and alarmed the international scientific community with the announcement that he had created the world’s first gene-edited babies has been sentenced to three years in prison by a court in China.

He Jiankui sparked a bioethical crisis last year when he claimed to have edited the DNA of human embryos, resulting in the birth of twins called Lulu and Nana as well as a possible third pregnancy. The gene editing, which was aimed at making the children immune to HIV, was excoriated by many scientists as a reckless experiment on human subjects that violated basic ethical principles.

The judicial proceedings were not public, and outside experts said it is hard to know what to make of the punishment without the release of the full investigative report or extensive knowledge of Chinese law and the conditions under which He will be incarcerated.

Jennifer Doudna, a biochemist at the University of California at Berkeley who co-invented CRISPR, the gene editing technology that He utilized, has been outspoken in condemning the experiments and has repeatedly said CRISPR is not ready to be used for reproductive purposes.

R. Alta Charo, a fellow at Stanford’s Center for Advanced Study in the Behavioral Sciences, was among a small group of experts who had dinner with He the night before he unveiled his controversial research in Hong Kong in November 2018.

“He Jiankui is an example of somebody who fundamentally didn’t understand, or didn’t want to recognize, what have become international norms around responsible research,” Charo said. “My impression is he allowed his personal ambition to completely cloud rational thinking and judgment.”

Scientists have been testing an array of powerful biotechnology tools to fix genetic diseases in adults. There is tremendous excitement about the possibility of fixing genes that cause serious disease, and the first U.S. patients were treated with CRISPR this year.

But scientists have long drawn a clear moral line between curing genetic diseases in adults and editing and implanting human embryos, which raises the specter of “designer babies.” Those changes and any unanticipated ones could be inherited by future generations — in essence altering the human species.

“The fact that the individual at the center of the story has been punished for his role in it should not distract us from examining what supporting roles were played by others, particularly in the international scientific community and also the environment that shaped and encouraged him to push the limits,” said Benjamin Hurlbut [emphasis mine], associate professor in the School of Life Sciences at Arizona State University.

Stanford University cleared its scientists, including He’s former postdoctoral adviser, Stephen Quake, finding that Quake and others did not participate in the research and had expressed “serious concerns to Dr. He about his work.” A Rice University spokesman said an investigation continues into bioengineering professor Michael Deem, He’s former academic adviser. Deem was listed as a co-author on a paper called “Birth of Twins After Genome Editing for HIV Resistance,” submitted to scientific journals, according to MIT Technology Review.

It’s interesting that it’s only the Chinese scientists who are seen to be punished, symbolically at least. Meanwhile, Stanford clears its scientists of any wrongdoing and Rice University continues to investigate.

Watch for the Hurlbut name (son, Benjamin and father, William) to come up again in the ‘complex narrative’ section.

Criticism of the ‘twins’ CRISPR editing’ research

Antonio Regalado’s December 3, 2019 article for the MIT (Massachusetts Institute of Technology) Technology Review features comments from various experts on an unpublished draft of Dr. He Jiankui’s research,

Earlier this year a source sent us a copy of an unpublished manuscript describing the creation of the first gene-edited babies, born last year in China. Today, we are making excerpts of that manuscript public for the first time.

Titled “Birth of Twins After Genome Editing for HIV Resistance,” and 4,699 words long, the still unpublished paper was authored by He Jiankui, the Chinese biophysicist who created the edited twin girls. A second manuscript we also received discusses laboratory research on human and animal embryos.

The metadata in the files we were sent indicate that the two draft papers were edited by He in late November 2018 and appear to be what he initially submitted for publication. Other versions, including a combined manuscript, may also exist. After consideration by at least two prestigious journals, Nature and JAMA, his research remains unpublished.

The text of the twins paper is replete with expansive claims of a medical breakthrough that can “control the HIV epidemic.” It claims “success”—a word used more than once—in using a “novel therapy” to render the girls resistant to HIV. Yet surprisingly, it makes little attempt to prove that the twins really are resistant to the virus. And the text largely ignores data elsewhere in the paper suggesting that the editing went wrong.

We shared the unpublished manuscripts with four experts—a legal scholar, an IVF doctor, an embryologist, and a gene-editing specialist—and asked them for their reactions. Their views were damning. Among them: key claims that He and his team made are not supported by the data; the babies’ parents may have been under pressure to agree to join the experiment; the supposed medical benefits are dubious at best; and the researchers moved forward with creating living human beings before they fully understood the effects of the edits they had made.

1. Why aren’t the doctors among the paper’s authors?

The manuscript begins with a list of the authors—10 of them, mostly from He Jiankui’s lab at the Southern University of Science and Technology, but also including Hua Bai, director of an AIDS support network, who helped recruit couples, and Michael Deem, an American biophysicist whose role is under review by Rice University. (His attorney previously said Deem never agreed to submit the manuscript and sought to remove his name from it.)

It’s a small number of people for such a significant project, and one reason is that some names are missing—notably, the fertility doctors who treated the patients and the obstetrician who delivered the babies. Concealing them may be an attempt to obscure the identities of the patients. However, it also leaves unclear whether or not these doctors understood they were helping to create the first gene-edited babies.

To some, the question of whether the manuscript is trustworthy arises immediately.

Hank Greely, professor of law, Stanford University: We have no, or almost no, independent evidence for anything reported in this paper. Although I believe that the babies probably were DNA-edited and were born, there’s very little evidence for that. Given the circumstances of this case, I am not willing to grant He Jiankui the usual presumption of honesty. 

That last article by Regalado is the purest example I have of how fierce the criticism is and how almost all of it is focused on Dr. He and his Chinese colleagues.

A complex, measured narrative: multiple players in the game

The most sympathetic and, in many ways, the most comprehensive article is an August 1, 2019 piece by Jon Cohen for Science magazine (Note: Links have been removed),

On 10 June 2017, a sunny and hot Saturday in Shenzhen, China, two couples came to the Southern University of Science and Technology (SUSTech) to discuss whether they would participate in a medical experiment that no researcher had ever dared to conduct. The Chinese couples, who were having fertility problems, gathered around a conference table to meet with He Jiankui, a SUSTech biophysicist. Then 33, He (pronounced “HEH”) had a growing reputation in China as a scientist-entrepreneur but was little known outside the country. “We want to tell you some serious things that might be scary,” said He, who was trim from years of playing soccer and wore a gray collared shirt, his cuffs casually unbuttoned.

He simply meant the standard in vitro fertilization (IVF) procedures. But as the discussion progressed, He and his postdoc walked the couples through informed consent forms [emphasis mine] that described what many ethicists and scientists view as a far more frightening proposition. Seventeen months later, the experiment triggered an international controversy, and the worldwide scientific community rejected him. The scandal cost him his university position and the leadership of a biotech company he founded. Commentaries labeled He, who also goes by the nickname JK, a “rogue,” “China’s Frankenstein,” and “stupendously immoral.” [emphases mine]

But that day in the conference room, He’s reputation remained untarnished. As the couples listened and flipped through the forms, occasionally asking questions, two witnesses—one American, the other Chinese—observed [emphasis mine]. Another lab member shot video, which Science has seen [emphasis mine], of part of the 50-minute meeting. He had recruited those couples because the husbands were living with HIV infections kept under control by antiviral drugs. The IVF procedure would use a reliable process called sperm washing to remove the virus before insemination, so father-to-child transmission was not a concern. Rather, He sought couples who had endured HIV-related stigma and discrimination and wanted to spare their children that fate by dramatically reducing their risk of ever becoming infected. [emphasis mine]

He, who for much of his brief career had specialized in sequencing DNA, offered a potential solution: CRISPR, the genome-editing tool that was revolutionizing biology, could alter a gene in IVF embryos to cripple production of an immune cell surface protein, CCR5, that HIV uses to establish an infection. “This technique may be able to produce an IVF baby naturally immunized against AIDS,” one consent form read.[emphasis mine]

The couples’ children could also pass the protective mutation to future generations. The prospect of this irrevocable genetic change is why, since the advent of CRISPR as a genome editor 5 years earlier, the editing of human embryos, eggs, or sperm has been hotly debated. The core issue is whether such germline editing would cross an ethical red line because it could ultimately alter our species. Regulations, some with squishy language, arguably prohibited it in many countries, China included.

Yet opposition was not unanimous. A few months before He met the couples, a committee convened by the U.S. National Academies of Sciences, Engineering, and Medicine (NASEM) concluded in a well-publicized report that human trials of germline editing “might be permitted” if strict criteria were met. The group of scientists, lawyers, bioethicists, and patient advocates spelled out a regulatory framework but cautioned that “these criteria are necessarily vague” because various societies, caregivers, and patients would view them differently. The committee notably did not call for an international ban, arguing instead for governmental regulation as each country deemed appropriate and “voluntary self-regulation pursuant to professional guidelines.”

[…] He hid his plans and deceived his colleagues and superiors, as many people have asserted? A preliminary investigation in China stated that He had forged documents, “dodged supervision,” and misrepresented blood tests—even though no proof of those charges was released [emphasis mine], no outsiders were part of the inquiry, and He has not publicly admitted to any wrongdoing. (CRISPR scientists in China say the He fallout has affected their research.) Many scientists outside China also portrayed He as a rogue actor. “I think there has been a failure of self-regulation by the scientific community because of a lack of transparency,” virologist David Baltimore, a Nobel Prize–winning researcher at the California Institute of Technology (Caltech) in Pasadena and co-chair of the Hong Kong summit, thundered at He after the biophysicist’s only public talk on the experiment.

Because the Chinese government has revealed little and He is not talking, key questions about his actions are hard to answer. Many of his colleagues and confidants also ignored Science‘s requests for interviews. But Ryan Ferrell, a public relations specialist He hired, has cataloged five dozen people who were not part of the study but knew or suspected what He was doing before it became public. Ferrell calls it He’s circle of trust. [emphasis mine]

That circle included leading scientists—among them a Nobel laureate—in China and the United States, business executives, an entrepreneur connected to venture capitalists, authors of the NASEM report, a controversial U.S. IVF specialist [John Zhang] who discussed opening a gene-editing clinic with He [emphasis mine], and at least one Chinese politician. “He had an awful lot of company to be called a ‘rogue,’” says geneticist George Church [emphases mine], a CRISPR pioneer at Harvard University who was not in the circle of trust and is one of the few scientists to defend at least some aspects of He’s experiment.

Some people sharply criticized He when he brought them into the circle; others appear to have welcomed his plans or did nothing. Several went out of their way to distance themselves from He after the furor erupted. For example, the two onlookers in that informed consent meeting were Michael Deem, He’s Ph.D. adviser at Rice University in Houston, Texas, and Yu Jun, a member of the Chinese Academy of Sciences (CAS) and co-founder of the Beijing Genomics Institute, the famed DNA sequencing company in Shenzhen. Deem remains under investigation by Rice for his role in the experiment and would not speak with Science. In a carefully worded statement, Deem’s lawyers later said he “did not meet the parents of the reported CCR5-edited children, or anyone else whose embryos were edited.” But earlier, Deem cooperated with the Associated Press (AP) for its exclusive story revealing the birth of the babies, which reported that Deem was “present in China when potential participants gave their consent and that he ‘absolutely’ thinks they were able to understand the risks. [emphasis mine]”

Yu, who works at CAS’s Beijing Institute of Genomics, acknowledges attending the informed consent meeting with Deem, but he told Science he did not know that He planned to implant gene-edited embryos. “Deem and I were chatting about something else,” says Yu, who has sequenced the genomes of humans, rice, silkworms, and date palms. “What was happening in the room was not my business, and that’s my personality: If it’s not my business, I pay very little attention.”

Some people who know He and have spoken to Science contend it is time for a more open discussion of how the biophysicist formed his circle of confidants and how the larger circle of trust—the one between the scientific community and the public—broke down. Bioethicist William Hurlbut at Stanford University [emphasis mine] in Palo Alto, California, who knew He wanted to conduct the embryo-editing experiment and tried to dissuade him, says that He was “thrown under the bus” by many people who once supported him. “Everyone ran for the exits, in both the U.S. and China. I think everybody would do better if they would just openly admit what they knew and what they did, and then collectively say, ‘Well, people weren’t clear what to do. We should all admit this is an unfamiliar terrain.’”

Steve Lombardi, a former CEO of Helicos, reacted far more charitably. Lombardi, who runs a consulting business in Bridgewater, Connecticut, says Quake introduced him to He to help find investors for Direct Genomics. “He’s your classic, incredibly bright, naïve entrepreneur—I run into them all the time,” Lombardi says. “He had the right instincts for what to do in China and just didn’t know how to do it. So I put him in front of as many people as I could.” Lombardi says He told him about his embryo-editing ambitions in August 2017, asking whether Lombardi could find investors for a new company that focused on “genetic medical tourism” and was based in China or, because of a potentially friendlier regulatory climate, Thailand. “I kept saying to him, ‘You know, you’ve got to deal with the ethics of this and be really sure that you know what you’re doing.’”

In April 2018, He asked Ferrell to handle his media full time. Ferrell was a good fit—he had an undergraduate degree in neuroscience, had spent a year in Beijing studying Chinese, and had helped another company using a pre-CRISPR genome editor. Now that a woman in the trial was pregnant, Ferrell says, He’s “understanding of the gravity of what he had done increased.” Ferrell had misgivings about the experiment, but he quit HDMZ and that August moved to Shenzhen. With the pregnancy already underway, Ferrell reasoned, “It was going to be the biggest science story of that week or longer, no matter what I did.”

MIT Technology Review had broken a story early that morning China time, saying human embryos were being edited and implanted, after reporter Antonio Regalado discovered descriptions of the project that He had posted online, without Ferrell’s knowledge, in an official Chinese clinical trial registry. Now, He gave AP the green light to post a detailed account, which revealed that twin girls—whom He, to protect their identities, named Lulu and Nana—had been born. Ferrell and He also posted five unfinished YouTube videos explaining and justifying the unprecedented experiment.

“He was fearful that he’d be unable to communicate to the press and the onslaught in a way that would be in any way manageable for him,” Ferrell says. One video tried to forestall eugenics accusations, with He rejecting goals such as enhancing intelligence, changing skin color, and increasing sports performance as “not love.” Still, the group knew it had lost control of the news. [emphasis mine]

… On 7 March 2017, 5 weeks after the California gathering, He submitted a medical ethics approval application to the Shenzhen HarMoniCare Women and Children’s Hospital that outlined the planned CCR5 edit of human embryos. The babies, it claimed, would be resistant to HIV as well as to smallpox and cholera. (The natural CCR5 mutation may have been selected for because it helps carriers survive smallpox and plague, some studies suggest—but they don’t mention cholera.) “This is going to be a great science and medicine achievement ever since the IVF technology which was awarded the Nobel Prize in 2010, and will also bring hope to numerous genetic disease patients,” the application says. Seven people on the ethics committee, chaired by Lin Zhitong—a one-time Direct Genomics director and a HarMoniCare administrator—signed the application, indicating they approved it.

[…] John Zhang, […] [emphasis mine] earned his medical degree in China and a Ph.D. in reproductive biology at the University of Cambridge in the United Kingdom. Zhang had made international headlines himself in September 2016, when New Scientist revealed that he had created the world’s first “three-parent baby” by using mitochondrial DNA from a donor egg to revitalize the egg of a woman with infertility and then inseminating the resulting egg. “This technology holds great hope for ladies with advanced maternal age to have their own children with their own eggs,” Zhang explains in the center’s promotional video, which alternates between Chinese and English. It does not mention that Zhang did the IVF experiment in Mexico because it is not now allowed in the United States. [emphasis mine]

When Science contacted Zhang, the physician initially said he barely knew He: [emphases mine] “I know him just like many people know him, in an academic meeting.”

After his talk [November 2018 at Hong Kong meeting], He immediately drove back to Shenzhen, and his circle of trust began to disintegrate. He has not spoken publicly since. “I don’t think he can recover himself through PR,” says Ferrell, who no longer works for He but recently started to do part-time work for He’s wife. “He has to do other service to the world.”

Calls for a moratorium on human germline editing have increased, although at the end of the Hong Kong summit, the organizing committee declined in its consensus to call for a ban. China has stiffened its regulations on work with human embryos, and Chinese bioethicists in a Nature editorial about the incident urged the country to confront “the eugenic thinking that has persisted among a small proportion of Chinese scholars.”

Church, who has many CRISPR collaborations in China, finds it inconceivable that He’s work surprised the Chinese government. China has “the best surveillance system in the world,” he says. “I conclude that they were totally aware of what he was doing at every step of the way, especially because he wasn’t particularly secretive about it.”

Benjamin Hurlbut, William’s son and a historian of biomedicine at Arizona State University in Tempe, says leaders in the scientific community should take a hard look at their actions, too. [emphases mine] He thinks the 2017 NASEM report helped give rise to He by following a well-established approach to guiding science: appointing an elite group to decide how scientists should be regulated. Benjamin Hurlbut, whose book Experiments in Democracy explores the governance of embryo research and bioethics, questions why small, scientist-led groups—à la the totemic Asilomar conference held in 1975 to discuss the future of recombinant DNA research—are seen as the best way to shape thinking about new technologies. Hurlbut has called for a “global observatory for gene editing” to convene meetings with diverse perspectives.

The prevailing notion that the scientific community simply “failed to see the rogue among the responsible,” Hurlbut says, is a convenient narrative for those scientific leaders and inhibits their ability to learn from such failures. [emphases mine] “It puts them on the right side of history,” he says. They failed to paint a bright enough red line, Hurlbut contends. “They are not on the right side of history because they contributed to this.”

If you have the time, I strongly recommend reading Cohen’s piece in its entirety. You’ll find links to the reports and more articles with in-depth reporting on this topic.

A little kindness and no regrets

William Hurlbut was interviewed in a Canadian Broadcasting Corporation (CBC) As It Happens radio programme segment on December 30, 2019. This is an excerpt from the story transcript written by Sheena Goodyear (Note: A link has been removed),

Dr. William Hurlbut, a physician and professor of neurobiology at Stanford University, says he tried to warn He to slow down before it was too late. Here is part of his conversation with As It Happens guest host Helen Mann.

What was your reaction to the news that Dr. He had been sentenced to three years in prison?

My first reaction was one of sadness because I know Dr. He — who we call J.K., that’s his nickname.

I spent quite a few hours talking with him, and I’m just sad that this worked out this way. It didn’t work out well for him or for his country or for the world, in some sense.

Except the one good thing is it’s alerted us, it’s awakened the world, to the seriousness of the issues that are coming down toward us with biotechnology, especially in genetics.

How does he feel about [how] not just the Chinese government, but the world generally, responded to his experiment?

He was surprised, personally. But I had actually warned him that he was proceeding too fast, and I didn’t know he had implanted embryos.

We had several conversations before this was disclosed, and I warned him to go more slowly and to keep in conversation with the rest of the international scientific community, and more broadly the international perspectives on social and ethical matters.

He was doing that to some extent, but not deeply enough and not transparently enough.

It sounds like you were very thoughtful in the conversations you had with him and the advice you gave him. And I guess you operated with what you had. But do you have any regrets yourself?

I don’t have any regrets about the way I conducted myself. I regret that this happened this way for J.K., who is a very bright person, and a very nice person, a humble person.

He grew up in a poor urban farming village. He told me that at one point he wanted to ask out a certain girl that he thought was really pretty … but he was embarrassed to do so because her family owned the restaurant. And so you see how humble his origins were.

By the way, he did end up asking her out and he ended up marrying her, which is a happy story, except now they’re separated for years of crucial time, and they have little children. 

I know this is a bigger story than just J.K. and his family. But there’s a personal story to it too.

What happens to He Jiankui? … Is his research career over?

It’s hard to imagine that a nation like China would not give him some useful role in their society. A very intelligent and very well-educated young man. 

But on the other hand, he will be forever a sign of a very crucial and difficult moment for the human species. He’s not going to outlive that.

It’s going to be interesting. I hope I get a chance to have good conversations with him again and hear his internal ruminations and perspectives on it all.

This (“I don’t have any regrets about the way I conducted myself”) is where Hurlbut lost me. He could have said that, on reflection, he and others might have done better and that we may need to rethink how scientists are trained and how we talk about science, genetics, and emerging technology. Interestingly, it’s his son who comes closer to what I’m suggesting (this excerpt, also quoted earlier in this posting, is from a December 30, 2019 article by Carolyn Y. Johnson for the Washington Post),

“The fact that the individual at the center of the story has been punished for his role in it should not distract us from examining what supporting roles were played by others, particularly in the international scientific community and also the environment that shaped and encouraged him to push the limits,” said Benjamin Hurlbut [emphasis mine], associate professor in the School of Life Sciences at Arizona State University.

The man who CRISPRs himself approves

Josiah Zayner publicly injected himself with CRISPR in a demonstration (see my January 25, 2018 posting for details about Zayner, his demonstration, and his plans). As you might expect, his take on the He affair is quite individual. From a January 2, 2020 article for STAT, Zayner presents the case for Dr. He’s work (Note: Links have been removed),

When I saw the news that He Jiankui and colleagues had been sentenced to three years in prison for the first human embryo gene editing and implantation experiments, all I could think was, “How will we look back at what they had done in 100 years?”

When the scientist described his research and revealed the births of gene edited twin girls at the [Second] International Summit on Human Genome Editing in Hong Kong in late November 2018, I stayed up into the early hours of the morning in Oakland, Calif., watching it. Afterward, I couldn’t sleep for a few days and couldn’t stop thinking about his achievement.

This was the first time a viable human embryo was edited and allowed to live past 14 days, much less the first time such an embryo was implanted and the baby brought to term.

The majority of scientists were outraged at the ethics of what had taken place, despite having very little information on what had actually occurred.

To me, no matter how abhorrent one views [sic] the research, it represents a substantial step forward in human embryo editing. Now there is a clear path forward that anyone can follow when before it had been only a dream.

As long as the children He Jiankui engineered haven’t been harmed by the experiment, he is just a scientist who forged some documents to convince medical doctors to implant gene-edited embryos. The 4-minute mile of human genetic engineering has been broken. It will happen again.

The academic establishment and federal funding regulations have made it easy to control the number of heretical scientists. We rarely if ever hear of individuals pushing the ethical and legal boundaries of science.

The rise of the biohacker is changing that.

A biohacker is a scientist who exists outside academia or an institution. By this definition, He Jiankui is a biohacker. I’m also part of this community, and helped build an organization to support it.

Such individuals have much more freedom than “traditional” scientists because scientific regulation in the U.S. is very much institutionally enforced by the universities, research organizations, or grant-giving agencies. But if you are your own institution and don’t require federal grants, who can police you? If you don’t tell anyone what you are doing, there is no way to stop you — especially since there is no government agency actively trying to stop people from editing embryos.

… When a human embryo being edited and implanted is no longer interesting enough for a news story, will we still view He Jiankui as a villain?

I don’t think we will. But even if we do, He Jiankui will be remembered and talked about more than any scientist of our day. Although that may seriously aggravate many scientists and bioethicists, I think he deserves that honor.

Josiah Zayner is CEO of The ODIN, a company that teaches people how to do genetic engineering in their homes.

You can find The ODIN here.

Final comments

There can’t be any question that this was inevitable. One needs only to take a brief stroll through the history of science to know that scientists are going to push boundaries or, as in this case, press past an ill-defined grey zone.

The only scientists who are being publicly punished for hubris are Dr. He Jiankui and his two colleagues in China. Dr. Michael Deem is still working for Rice University as far as I can determine. Here’s how the Wikipedia entry for the He Jiankui Affair describes the investigation (Note: Links have been removed),

Michael W. Deem, an American bioengineering professor at Rice University and He’s doctoral advisor, was involved in the research, and was present when people involved in He’s study gave consent.[24] He was the only non-Chinese out of 10 authors listed in the manuscript submitted to Nature.[30] Deem came under investigation by Rice University after news of the work was made public.[58] As of 31 December 2019, the university had not released a decision.[59] [emphasis mine]

Meanwhile, the scientists at Stanford have been cleared. While there are comments about the Chinese government not being transparent, it seems to me that US universities are just as opaque.

What seems to be missing from all this discussion and opprobrium is any acknowledgement that the CRISPR technology itself is problematic. My September 20, 2019 post features research into off-target results from CRISPR gene-editing and, prior to that, there was this July 17, 2018 posting (The CRISPR [clustered regularly interspaced short palindromic repeats]-CAS9 gene-editing technique may cause new genetic damage kerfuffle).

I’d like to see more discussion and, in line with Benjamin Hurlbut’s thinking, more than a small group of experts talking to each other as part of the process, especially here in Canada, in light of efforts to remove our ban on germline editing (see my April 26, 2019 posting for more about those efforts).

Sunscreens 2020 and the Environmental Working Group (EWG)

There must be some sweet satisfaction, or perhaps it’s better described as relief, for the Environmental Working Group (EWG) now that sunscreens with metallic (zinc oxide and/or titanium dioxide) nanoparticles are gaining wide acceptance. (More about the history and politics of the EWG and metallic nanoparticles at the end of this posting.)

This acceptance has happened alongside growing concerns about oxybenzone, a sunscreen ingredient that EWG has long warned against. Oxybenzone has been banned from use in Hawaii due to environmental concerns (see my July 6, 2018 posting; scroll down about 40% of the way for specifics about Hawaii). Also, it is one of the common sunscreen ingredients for which the US Food and Drug Administration (FDA) is completing a safety review.

Today, zinc oxide and titanium dioxide metallic nanoparticles are being called minerals, as in, “mineral-based” sunscreens. They are categorized as physical sunscreens as opposed to chemical sunscreens.

I believe the most recent sunscreen posting here was my 2018 update (July 6, 2018 posting), so the topic is overdue for some attention. From a May 21, 2020 EWG news release (received via email),

As states reopen and Americans leave their homes to venture outside, it’s important for them to remember to protect their skin from the sun’s harmful rays. Today the Environmental Working Group released its 14th annual Guide to Sunscreens.  

This year researchers rated the safety and efficacy of more than 1,300 SPF products – including sunscreens, moisturizers and lip balms – and found that only 25 percent offer adequate protection and do not contain worrisome ingredients such as oxybenzone, a potential hormone-disrupting chemical that is readily absorbed by the body.

Despite a delay in finalizing rules that would make all sunscreens on U.S. store shelves safer, the Food and Drug Administration, the agency that governs sunscreen safety, is completing tests that highlight concerns with common sunscreen ingredients. Last year, the agency published two studies showing that, with just a single application, six commonly used chemical active ingredients, including oxybenzone, are readily absorbed through the skin and could be detected in our bodies at levels that could cause harm.

“It’s quite concerning,” said Nneka Leiba, EWG’s vice president of Healthy Living science. “Those studies don’t prove whether the sunscreens are unsafe, but they do highlight problems with how these products are regulated.”

“EWG has been advocating for the FDA to review these chemical ingredients for 14 years,” Leiba said. “We slather these ingredients on our skin, but these chemicals haven’t been adequately tested. This is just one example of the backward nature of product regulation in the U.S.”

Oxybenzone remains a commonly used active ingredient, found in more than 40 percent of the non-mineral sunscreens in this year’s guide. Oxybenzone is allergenic and a potential endocrine disruptor, and has been detected in human breast milk, amniotic fluid, urine and blood.

According to EWG’s assessment, fewer than half of the products in this year’s guide contain active ingredients that the FDA has proposed are safe and effective.

“Based on the best current science and toxicology data, we continue to recommend sunscreens with the mineral active ingredients zinc oxide and titanium dioxide, because they are the only two ingredients the FDA recognized as safe or effective in their proposed draft rules,” said Carla Burns, an EWG research and database analyst who manages the updates to the sunscreen guide.

Most people select sunscreen products based on their SPF, or sunburn protection factor, and mistakenly assume that bigger numbers offer better protection. According to the FDA, higher SPF values have not been shown to provide additional clinical benefit and may give users a false sense of protection. This may lead to overexposure to UVA rays that increase the risk of long-term skin damage and cancer. The FDA has proposed limiting SPF claims to 60+.

EWG continues to hone our recommendations by strengthening the criteria for assessing sunscreens, which are based on the latest findings in the scientific literature and commissioned tests of sunscreen product efficacy. This year EWG made changes to our methodology in order to strengthen our requirement that products provide the highest level of UVA protection.

“Our understanding of the dangers associated with UVA exposure is increasing, and they are of great concern,” said Burns. “Sunburn during early life, especially childhood, is very dangerous and a risk factor for all skin cancers, but especially melanoma. Babies and young children are especially vulnerable to sun damage. Just a few blistering sunburns early in life can double a person’s risk of developing melanoma later in life.”

EWG researchers found 180 sunscreens that meet our criteria for safety and efficacy and would likely meet the proposed FDA standards. Even the biggest brands now provide mineral options for consumers.  

Even for Americans continuing to follow stay-at-home orders, wearing an SPF product may still be important. If you’re sitting by a window, UVA and UVB rays can penetrate the glass.  

It is important to remember that sunscreen is only one part of a sun safety routine. People should also protect their skin by covering up with clothing, hats and sunglasses. And sunscreen must be reapplied at least every two hours to stay effective.

EWG’s Guide to Sunscreens helps consumers find products that get high ratings for providing adequate broad-spectrum protection and that are made with ingredients that pose fewer health concerns.

The new guide also includes lists of:

Here are more quick tips for choosing better sunscreens:

  • Check your products in EWG’s sunscreen database and avoid those with harmful ingredients.
  • Avoid products with oxybenzone. This chemical penetrates the skin, gets into the bloodstream and can affect normal hormone activities.
  • Steer clear of products with SPF higher than 50+. High SPF values do not necessarily provide increased UVA protection and may fool you into thinking you are safe from sun damage.
  • Avoid sprays. These popular products pose inhalation concerns, and they may not provide a thick and uniform coating on the skin.
  • Stay away from retinyl palmitate. Government studies link the use of retinyl palmitate, a form of vitamin A, to the formation of skin tumors and lesions when it is applied to sun-exposed skin.
  • Avoid intense sun exposure during the peak hours of 10 a.m. to 4 p.m.

Shoppers on the go can download EWG’s Healthy Living app to get ratings and safety information on sunscreens and other personal care products. Also be sure to check out EWG’s sunscreen label decoder.

One caveat: these EWG-recommended products might not be found in Canadian stores, and your favourite product may not have been reviewed for inclusion in their database, whether as a product to seek out or one to avoid. For example, I use a sunscreen that isn’t listed in the database, although at least a few of the company’s other sunscreen products are. On the plus side, my sunscreen doesn’t include oxybenzone or retinyl palmitate as ingredients.

To sum up the situation with sunscreens containing metallic nanoparticles (minerals): they are considered relatively safe but, should new research emerge, that designation could change. In effect, all we can do is our best with the information at hand. (For a quick sense of why SPF values above 50 offer diminishing returns, see the sketch below.)
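For anyone who wants a feel for the SPF numbers, here is a minimal sketch in plain Python. It has nothing to do with EWG’s actual methodology; it assumes only the commonly cited approximation that an ideally applied SPF-N product lets through roughly 1/N of the sunburn-causing UVB, an assumption that overstates real-world protection because most of us under-apply sunscreen,

# Back-of-the-envelope only: assumes an ideally applied SPF-N product
# transmits roughly 1/N of sunburn-causing UVB. Real-world protection
# is lower because people typically apply far less sunscreen than the
# amount used in SPF testing.

for spf in (15, 30, 50, 60, 100):
    blocked = 1 - 1 / spf
    print(f"SPF {spf:>3}: blocks ~{blocked:.1%} of UVB")

# SPF  15: blocks ~93.3% of UVB
# SPF  30: blocks ~96.7% of UVB
# SPF  50: blocks ~98.0% of UVB
# SPF  60: blocks ~98.3% of UVB
# SPF 100: blocks ~99.0% of UVB

The jump from SPF 30 to SPF 100 buys only a couple of percentage points of extra UVB blockage, which is one reason the FDA’s proposed 60+ cap and EWG’s advice to skip anything labelled higher than 50+ are less drastic than they might sound. None of this says anything about UVA protection, which is the guide’s other big concern.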

History and politics of metallic nanoparticles in sunscreens

In 2009, it was a bit of a shock when the EWG released a report recommending sunscreens that listed metallic nanoparticles among their ingredients. From my July 9, 2009 posting,

The EWG (Environmental Working Group) is, according to Maynard [as of 2020: Dr. Andrew Maynard is a scientist and author, Associate Director of Faculty in the ASU {Arizona State University} School for the Future of Innovation in Society, also the director of the ASU Risk Innovation Lab, and leader of the Risk Innovation Nexus], not usually friendly to industry, and they had this to say about their own predisposition prior to reviewing the data (from EWG),

When we began our sunscreen investigation at the Environmental Working Group, our researchers thought we would ultimately recommend against micronized and nano-sized zinc oxide and titanium dioxide sunscreens. After all, no one has taken a more expansive and critical look than EWG at the use of nanoparticles in cosmetics and sunscreens, including the lack of definitive safety data and consumer information on these common new ingredients, and few substances more dramatically highlight gaps in our system of public health protections than the raw materials used in the burgeoning field of nanotechnology. But many months and nearly 400 peer-reviewed studies later, we find ourselves drawing a different conclusion, and recommending some sunscreens that may contain nano-sized ingredients.

My understanding is that after this report, the EWG was somewhat ostracized by collegial organizations such as Friends of the Earth (FoE) and the ETC Group, both of which issued reports, published after the EWG’s, that were highly critical of ‘nano sunscreens’.

The ETC Group did not continue its anti-nanosunscreen campaign for long (I saw only one report), but FoE (in particular, the Australian arm of the organization) more than made up for that withdrawal, to sad effect. The title of my February 9, 2012 post says it all: Unintended consequences: Australians not using sunscreens to avoid nanoparticles?

An Australian government survey found that 13% of Australians were not using any sunscreen due to fears about nanoparticles. In a country with the highest incidence of skin cancer in the world, one that had spent untold millions over decades getting people to cover up in the sun, this was devastating news.

FoE immediately withdrew all of its anti-nanosunscreen materials in Australia from circulation while firing broadsides at the government. The organization’s focus on sunscreens with metallic nanoparticles has diminished since 2012.

Research

I have difficulty trusting materials from FoE, and you can see why in my July 26, 2011 posting (Misunderstanding the data or a failure to research? Georgia Straight article about nanoparticles). In it, I analyze Alex Roslin’s profoundly problematic article about metallic nanoparticles and other engineered nanoparticles. All of Roslin’s article was based on research and materials produced by FoE, which misrepresented some of the research; Roslin would have realized that had he bothered to do any research of his own.

EWG impressed me mightily with their refusal to set aside or dismiss the research disputing their initial assumption that metallic nanoparticles in sunscreens were hazardous. (BTW, there is one instance where metallic nanoparticles in sunscreens are of concern. My October 13, 2013 posting about anatase and rutile forms of titanium dioxide at the nanoscale features research on that issue.)

EWG’s Wikipedia entry

Whoever is maintaining this page (and however many of them there are), they don’t like the EWG at all,

The accuracy of EWG reports and statements have been criticized, as has its funding by the organic food industry.[2][3][4][5] Its warnings have been labeled “alarmist”, “scaremongering” and “misleading”.[6][7][8] Despite the questionable status of its work, EWG has been influential.[9]

This is the third paragraph of the entry’s introduction. At its very best, the information is neutral; otherwise, it reads much like that third paragraph.

Even John D. Rockefeller’s entry is more flattering, and he was known as the ‘most hated man in America,’ as this show description on the Public Broadcasting Service (PBS) website makes clear,

American Experience

The Rockefellers Chapter One

Clip: Season 13 Episode 1 | 9m 37s

John D. Rockefeller was the world’s first billionaire and the most hated man in America. Watch the epic story of the man who monopolized oil.

Fun in the sun

Have fun in the sun this summer. There’s EWG’s sunscreen database, there are the tips listed in the news release, and EWG also has a webpage describing the methodology it uses to assess sunscreens. It gets a little technical (for me, anyway) but it should answer any further safety questions you might have after reading this post.

It may require a bit of ingenuity given the concerns over COVID-19 but I’m constantly amazed at the inventiveness with which so many people have met this pandemic. (This June 15, 2020 Canadian Broadcasting Corporation article by Sheena Goodyear features a family that created a machine that won the 2020 Rube Goldberg Bar of Soap Video challenge. The article includes an embedded video of the winning machine in action.)

Touchy robots and prosthetics

I have briefly speculated about the importance of touch elsewhere (see my July 19, 2019 posting regarding BlocKit and blockchain; scroll down about 50% of the way), but the two news bits that follow put a different spin on the importance of touch.

Exceptional sense of touch

Robots need a sense of touch to perform their tasks, and a July 18, 2019 National University of Singapore press release (also on EurekAlert) announces work on an improved sense of touch,

Robots and prosthetic devices may soon have a sense of touch equivalent to, or better than, the human skin with the Asynchronous Coded Electronic Skin (ACES), an artificial nervous system developed by a team of researchers at the National University of Singapore (NUS).

The new electronic skin system achieved ultra-high responsiveness and robustness to damage, and can be paired with any kind of sensor skin layers to function effectively as an electronic skin.

The innovation, achieved by Assistant Professor Benjamin Tee and his team from the Department of Materials Science and Engineering at the NUS Faculty of Engineering, was first reported in prestigious scientific journal Science Robotics on 18 July 2019.

Faster than the human sensory nervous system

“Humans use our sense of touch to accomplish almost every daily task, such as picking up a cup of coffee or making a handshake. Without it, we will even lose our sense of balance when walking. Similarly, robots need to have a sense of touch in order to interact better with humans, but robots today still cannot feel objects very well,” explained Asst Prof Tee, who has been working on electronic skin technologies for over a decade in hope of giving robots and prosthetic devices a better sense of touch.

Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing a sensor system that could potentially perform better. While the ACES electronic nervous system detects signals like the human sensory nervous system, it is made up of a network of sensors connected via a single electrical conductor, unlike the nerve bundles in the human skin. It is also unlike existing electronic skins, which have interlinked wiring systems that can make them sensitive to damage and difficult to scale up.

Elaborating on the inspiration, Asst Prof Tee, who also holds appointments in the NUS Department of Electrical and Computer Engineering, NUS Institute for Health Innovation & Technology (iHealthTech), N.1 Institute for Health and the Hybrid Integrated Flexible Electronic Systems (HiFES) programme, said, “The human sensory nervous system is extremely efficient, and it works all the time to the extent that we often take it for granted. It is also very robust to damage. Our sense of touch, for example, does not get affected when we suffer a cut. If we can mimic how our biological system works and make it even better, we can bring about tremendous advancements in the field of robotics where electronic skins are predominantly applied.”

ACES can detect touches more than 1,000 times faster than the human sensory nervous system. For example, it is capable of differentiating physical contacts between different sensors in less than 60 nanoseconds – the fastest ever achieved for an electronic skin technology – even with large numbers of sensors. ACES-enabled skin can also accurately identify the shape, texture and hardness of objects within 10 milliseconds, ten times faster than the blinking of an eye. This is enabled by the high fidelity and capture speed of the ACES system.

The ACES platform can also be designed to achieve high robustness to physical damage, an important property for electronic skins because they come into frequent physical contact with the environment. Unlike the current system used to interconnect sensors in existing electronic skins, all the sensors in ACES can be connected to a common electrical conductor with each sensor operating independently. This allows ACES-enabled electronic skins to continue functioning as long as there is one connection between the sensor and the conductor, making them less vulnerable to damage.

Smart electronic skins for robots and prosthetics

ACES’ simple wiring system and remarkable responsiveness even with increasing numbers of sensors are key characteristics that will facilitate the scale-up of intelligent electronic skins for Artificial Intelligence (AI) applications in robots, prosthetic devices and other human machine interfaces.

“Scalability is a critical consideration as big pieces of high performing electronic skins are required to cover the relatively large surface areas of robots and prosthetic devices,” explained Asst Prof Tee. “ACES can be easily paired with any kind of sensor skin layers, for example, those designed to sense temperatures and humidity, to create high performance ACES-enabled electronic skin with an exceptional sense of touch that can be used for a wide range of purposes,” he added.

For instance, pairing ACES with the transparent, self-healing and water-resistant sensor skin layer also recently developed by Asst Prof Tee’s team, creates an electronic skin that can self-repair, like the human skin. This type of electronic skin can be used to develop more realistic prosthetic limbs that will help disabled individuals restore their sense of touch.

Other potential applications include developing more intelligent robots that can perform disaster recovery tasks or take over mundane operations such as packing of items in warehouses. The NUS team is therefore looking to further apply the ACES platform on advanced robots and prosthetic devices in the next phase of their research.

For those who like videos, the researchers have prepared this,

Here’s a link to and a citation for the paper,

A neuro-inspired artificial peripheral nervous system for scalable electronic skins by Wang Wei Lee, Yu Jun Tan, Haicheng Yao, Si Li, Hian Hian See, Matthew Hon, Kian Ann Ng, Betty Xiong, John S. Ho and Benjamin C. K. Tee. Science Robotics Vol. 4, Issue 32, eaax2198, 31 July 2019 DOI: 10.1126/scirobotics.aax2198 Published online first: 17 Jul 2019

This paper is behind a paywall.
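If it helps to picture the single-conductor, asynchronous idea in the NUS press release, here is a toy sketch in Python. To be clear, this is not the actual ACES scheme (the real system encodes each sensor’s events as its own electrical pulse signature and decodes them from the shared line); the sensor names and numbers below are invented. It only illustrates the general architecture the release describes: any number of sensors firing independent, time-stamped events onto one shared channel, with no per-sensor wiring to break,

# Toy illustration only, not the ACES protocol: many touch sensors share
# one "conductor," each reporting events asynchronously, so the receiver
# never polls and a damaged sensor simply stops emitting while the rest
# of the skin keeps working.

import heapq
import random

class SharedConductor:
    """Collects asynchronous events from any number of sensors on one line."""
    def __init__(self):
        self.events = []  # heap of (time_s, sensor_id, pressure)

    def emit(self, time_s, sensor_id, pressure):
        heapq.heappush(self.events, (time_s, sensor_id, pressure))

def simulate_touches(conductor, sensor_ids, n_events=5, seed=1):
    rng = random.Random(seed)
    for _ in range(n_events):
        conductor.emit(
            time_s=rng.uniform(0, 0.010),        # events within a 10 ms window
            sensor_id=rng.choice(sensor_ids),    # any sensor may fire at any time
            pressure=round(rng.uniform(0.1, 1.0), 2),
        )

bus = SharedConductor()
working_sensors = ["thumb_tip", "index_tip", "palm_03"]  # made-up sensor names
simulate_touches(bus, working_sensors)

# "Decoding" is just reading events off the single line in time order.
while bus.events:
    t, sensor, pressure = heapq.heappop(bus.events)
    print(f"{t * 1000:6.3f} ms  {sensor:<10}  pressure={pressure}")

Adding more sensors just means more events on the same line, which is the scalability point Asst Prof Tee makes, and removing a sensor from the list changes nothing for the others, which is the robustness point.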

Picking up a grape and holding his wife’s hand

This story comes from Canadian Broadcasting Corporation (CBC) Radio, with a six-minute audio segment embedded in the text, from a July 25, 2019 CBC Radio ‘As It Happens’ article by Sheena Goodyear,

The West Valley City, Utah, real estate agent [Keven Walgamott] lost his left hand in an electrical accident 17 years ago. Since then, he’s tried out a few different prosthetic limbs, but always found them too clunky and uncomfortable.

Then he decided to work with the University of Utah in 2016 to test out new prosthetic technology that mimics the sensation of human touch, allowing Walgamott to perform delicate tasks with precision — including shaking his wife’s hand. 

“I extended my left hand, she came and extended hers, and we were able to feel each other with the left hand for the first time in 13 years, and it was just a marvellous and wonderful experience,” Walgamott told As It Happens guest host Megan Williams. 

Walgamott, one of seven participants in the University of Utah study, was able to use an advanced prosthetic hand called the LUKE Arm to pick up an egg without cracking it, pluck a single grape from a bunch, hammer a nail, take a ring on and off his finger, fit a pillowcase over a pillow and more. 

While performing the tasks, Walgamott was able to actually feel the items he was holding and correctly gauge the amount of pressure he needed to exert — mimicking a process the human brain does automatically.

“I was able to feel something in each of my fingers,” he said. “What I feel, I guess the easiest way to explain it, is little electrical shocks.”

Those shocks — which he describes as a kind of a tingling sensation — intensify as he tightens his grip.

“Different variations of the intensity of the electricity as I move my fingers around and as I touch things,” he said. 

To make that [sense of touch] happen, the researchers implanted electrodes into the nerves on Walgamott’s forearm, allowing his brain to communicate with his prosthetic through a computer outside his body. That means he can move the hand just by thinking about it.

But those signals also work in reverse.

The team attached sensors to the hand of a LUKE Arm. Those sensors detect touch and positioning, and send that information to the electrodes so it can be interpreted by the brain.

For Walgamott, performing a series of menial tasks as a team of scientists recorded his progress was “fun to do.”

“I’d forgotten how well two hands work,” he said. “That was pretty cool.”

But it was also a huge relief from the phantom limb pain he has experienced since the accident, which he describes as a “burning sensation” in the place where his hand used to be.

A July 24, 2019 University of Utah news release (also on EurekAlert) provides more detail about the research,

Keven Walgamott had a good “feeling” about picking up the egg without crushing it.

What seems simple for nearly everyone else can be more of a Herculean task for Walgamott, who lost his left hand and part of his arm in an electrical accident 17 years ago. But he was testing out the prototype of a high-tech prosthetic arm with fingers that not only can move, they can move with his thoughts. And thanks to a biomedical engineering team at the University of Utah, he “felt” the egg well enough so his brain could tell the prosthetic hand not to squeeze too hard.

That’s because the team, led by U biomedical engineering associate professor Gregory Clark, has developed a way for the “LUKE Arm” (so named after the robotic hand that Luke Skywalker got in “The Empire Strikes Back”) to mimic the way a human hand feels objects by sending the appropriate signals to the brain. Their findings were published in a new paper co-authored by U biomedical engineering doctoral student Jacob George, former doctoral student David Kluger, Clark and other colleagues in the latest edition of the journal Science Robotics. A copy of the paper may be obtained by emailing robopak@aaas.org.

“We changed the way we are sending that information to the brain so that it matches the human body. And by matching the human body, we were able to see improved benefits,” George says. “We’re making more biologically realistic signals.”

That means an amputee wearing the prosthetic arm can sense the touch of something soft or hard, understand better how to pick it up and perform delicate tasks that would otherwise be impossible with a standard prosthetic with metal hooks or claws for hands.

“It almost put me to tears,” Walgamott says about using the LUKE Arm for the first time during clinical tests in 2017. “It was really amazing. I never thought I would be able to feel in that hand again.”

Walgamott, a real estate agent from West Valley City, Utah, and one of seven test subjects at the U, was able to pluck grapes without crushing them, pick up an egg without cracking it and hold his wife’s hand with a sensation in the fingers similar to that of an able-bodied person.

“One of the first things he wanted to do was put on his wedding ring. That’s hard to do with one hand,” says Clark. “It was very moving.”

Those things are accomplished through a complex series of mathematical calculations and modeling.

The LUKE Arm

The LUKE Arm has been in development for some 15 years. The arm itself is made of mostly metal motors and parts with a clear silicon “skin” over the hand. It is powered by an external battery and wired to a computer. It was developed by DEKA Research & Development Corp., a New Hampshire-based company founded by Segway inventor Dean Kamen.

Meanwhile, the U’s team has been developing a system that allows the prosthetic arm to tap into the wearer’s nerves, which are like biological wires that send signals to the arm to move. It does that thanks to an invention by U biomedical engineering Emeritus Distinguished Professor Richard A. Normann called the Utah Slanted Electrode Array. The array is a bundle of 100 microelectrodes and wires that are implanted into the amputee’s nerves in the forearm and connected to a computer outside the body. The array interprets the signals from the still-remaining arm nerves, and the computer translates them to digital signals that tell the arm to move.

But it also works the other way. To perform tasks such as picking up objects requires more than just the brain telling the hand to move. The prosthetic hand must also learn how to “feel” the object in order to know how much pressure to exert because you can’t figure that out just by looking at it.

First, the prosthetic arm has sensors in its hand that send signals to the nerves via the array to mimic the feeling the hand gets upon grabbing something. But equally important is how those signals are sent. It involves understanding how your brain deals with transitions in information when it first touches something. Upon first contact of an object, a burst of impulses runs up the nerves to the brain and then tapers off. Recreating this was a big step.

“Just providing sensation is a big deal, but the way you send that information is also critically important, and if you make it more biologically realistic, the brain will understand it better and the performance of this sensation will also be better,” says Clark.

To achieve that, Clark’s team used mathematical calculations along with recorded impulses from a primate’s arm to create an approximate model of how humans receive these different signal patterns. That model was then implemented into the LUKE Arm system.

Future research

In addition to creating a prototype of the LUKE Arm with a sense of touch, the overall team is already developing a version that is completely portable and does not need to be wired to a computer outside the body. Instead, everything would be connected wirelessly, giving the wearer complete freedom.

Clark says the Utah Slanted Electrode Array is also capable of sending signals to the brain for more than just the sense of touch, such as pain and temperature, though the paper primarily addresses touch. And while their work currently has only involved amputees who lost their extremities below the elbow, where the muscles to move the hand are located, Clark says their research could also be applied to those who lost their arms above the elbow.

Clark hopes that in 2020 or 2021, three test subjects will be able to take the arm home to use, pending federal regulatory approval.

The research involves a number of institutions including the U’s Department of Neurosurgery, Department of Physical Medicine and Rehabilitation and Department of Orthopedics, the University of Chicago’s Department of Organismal Biology and Anatomy, the Cleveland Clinic’s Department of Biomedical Engineering and Utah neurotechnology companies Ripple Neuro LLC and Blackrock Microsystems. The project is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.

“This is an incredible interdisciplinary effort,” says Clark. “We could not have done this without the substantial efforts of everybody on that team.”

Here’s a link to and a citation for the paper,

Biomimetic sensory feedback through peripheral nerve stimulation improves dexterous use of a bionic hand by J. A. George, D. T. Kluger, T. S. Davis, S. M. Wendelken, E. V. Okorokova, Q. He, C. C. Duncan, D. T. Hutchinson, Z. C. Thumser, D. T. Beckler, P. D. Marasco, S. J. Bensmaia and G. A. Clark. Science Robotics Vol. 4, Issue 32, eaax2352 31 July 2019 DOI: 10.1126/scirobotics.aax2352 Published online first: 24 Jul 2019

This paper is definitely behind a paywall.
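The ‘burst of impulses that then tapers off’ described in the Utah news release can be pictured with a minimal sketch. This is not the team’s model, which was built from impulses recorded in a primate’s arm; the function and numbers below are invented for illustration. It simply shows the shape being described: a sustained stimulation rate that tracks pressure, plus an onset component that spikes at first contact and then decays,

# Illustration only, not the University of Utah encoding model: stimulation
# tracks the pressure signal, plus an onset "burst" driven by how quickly
# pressure rises, so first contact produces a spike that tapers off.

dt = 0.01  # seconds per sample

def biomimetic_rate(pressures, sustained_gain=50.0, onset_gain=400.0, decay=0.8):
    """Map a pressure trace (0..1) to a stimulation rate in pulses per second."""
    rates, prev, burst = [], 0.0, 0.0
    for p in pressures:
        burst = decay * burst + onset_gain * max(p - prev, 0.0)  # spikes on contact, then decays
        rates.append(sustained_gain * p + burst)
        prev = p
    return rates

trace = [0.0] * 10 + [0.6] * 30      # grip an object at t = 0.10 s and hold it
rates = biomimetic_rate(trace)
for i in (9, 10, 12, 20, 35):
    print(f"t={i * dt:4.2f} s  pressure={trace[i]:.1f}  rate={rates[i]:6.1f} pulses/s")

Run as written, the rate jumps at the moment of contact and settles back toward the sustained level while the grip is held, the ‘burst then taper’ pattern the researchers say makes the feedback feel more natural to the brain.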

The University of Utah researchers have produced a video highlighting their work,