
Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. Easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s). *What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s predecessor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society, founder of the ASU Future of Being Human initiative, and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.
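To make the quoted description a little more concrete, here is a deliberately tiny sketch of “predict what comes next, over and over, based purely on statistics.” It is my own toy example in plain Python (standard library only), not anything from Ornes’s article or from OpenAI; a real large language model learns billions of parameters rather than counting word pairs in a twelve-word sentence, but the loop of looking at what came before, picking a likely continuation, and repeating is the same basic shape.

```python
# A toy illustration (my own, not from Ornes's article or OpenAI) of
# "predict what comes next, over and over, based purely on statistics":
# count which word follows which in a tiny corpus, then repeatedly pick
# the most frequent continuation. Real LLMs learn billions of parameters;
# this shows only the shape of the loop, not the scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Gather bigram statistics: for each word, count the words that follow it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start, length=8):
    """Repeatedly predict the most frequent next word, appending as we go."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no statistics for this word; stop
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # prints: the cat sat on the cat sat on the
```

Even at this toy scale you can see why the output turns repetitive and why nothing in the procedure checks whether a continuation is true; the surprise Ornes reports is that the same next-word machinery, scaled up enormously, starts handling tasks nobody designed it for.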

If you have time, I recommend reading Ornes’s March 6, 2023 article.

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was  incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.
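For readers who want a more concrete sense of what “learning … by analyzing data” means mechanically, here is a minimal sketch (my own illustrative toy in Python, not Hinton’s work or Google’s code) of a single artificial neuron nudging its weights whenever it misclassifies an example. A modern deep network stacks enormous numbers of such units and uses more sophisticated update rules, but the adjust-the-weights-when-wrong loop is the kernel of the idea.

```python
# A minimal, assumed-for-illustration sketch of "learning by analyzing data":
# a single artificial neuron learns to classify points as above (1) or
# below (0) the line y = x by adjusting its weights on every mistake.
examples = [((0.2, 0.9), 1), ((0.4, 0.1), 0), ((0.7, 0.95), 1),
            ((0.8, 0.3), 0), ((0.1, 0.5), 1), ((0.9, 0.6), 0)]

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(point):
    score = w[0] * point[0] + w[1] * point[1] + b
    return 1 if score > 0 else 0

# Sweep over the data several times, correcting the weights on each error.
for _ in range(20):
    for point, label in examples:
        error = label - predict(point)   # -1, 0, or +1
        w[0] += lr * error * point[0]
        w[1] += lr * error * point[1]
        b += lr * error

print(predict((0.3, 0.8)), predict((0.8, 0.2)))  # expect: 1 0
```

The 2012 breakthrough mentioned above came from scaling this basic idea up into many-layered networks trained on very large amounts of data.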

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles; the first, “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts,” was published on May 5, 2023, Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?”; scroll down to the ‘Consciousness and ethical AI’ subhead.

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for the Study of Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture, “Digital versus biological intelligence: Reasons for concern about AI,” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer Susan Baxter’s May 15, 2023 blog posting, “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

Should you live in Vancouver (Canada) and be attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499, depending on when you make your purchase. From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. This was far briefer than the previous March 2023 warning, from the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.

Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?

A couple of Australian academics have written a comment for the journal Nature, which bears the intriguing subtitle: “The patent system assumes that inventors are human. Inventions devised by machines require their own intellectual property law and an international treaty.” (For the curious, I’ve linked to a few of my previous posts touching on intellectual property [IP], specifically the patent’s fraternal twin, copyright at the end of this piece.)

Before linking to the comment, here’s the May 27, 2022 University of New South Wales (UNSW) press release (also on EurekAlert but published May 30, 2022) which provides an overview of their thinking on the subject, Note: Links have been removed,

It’s not surprising these days to see new inventions that either incorporate or have benefitted from artificial intelligence (AI) in some way, but what about inventions dreamt up by AI – do we award a patent to a machine?

This is the quandary facing lawmakers around the world with a live test case in the works that its supporters say is the first true example of an AI system named as the sole inventor.

In commentary published in the journal Nature, two leading academics from UNSW Sydney examine the implications of patents being awarded to an AI entity.

Intellectual Property (IP) law specialist Associate Professor Alexandra George and AI expert, Laureate Fellow and Scientia Professor Toby Walsh argue that patent law as it stands is inadequate to deal with such cases and requires legislators to amend laws around IP and patents – laws that have been operating under the same assumptions for hundreds of years.

The case in question revolves around a machine called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) created by Dr Stephen Thaler, who is president and chief executive of US-based AI firm Imagination Engines. Dr Thaler has named DABUS as the inventor of two products – a food container with a fractal surface that helps with insulation and stacking, and a flashing light for attracting attention in emergencies.

For a short time in Australia, DABUS looked like it might be recognised as the inventor because, in late July 2021, a trial judge accepted Dr Thaler’s appeal against IP Australia’s rejection of the patent application five months earlier. But after the Commissioner of Patents appealed the decision to the Full Court of the Federal Court of Australia, the five-judge panel upheld the appeal, agreeing with the Commissioner that an AI system couldn’t be named the inventor.

A/Prof. George says the attempt to have DABUS awarded a patent for the two inventions instantly creates challenges for existing laws, which have only ever considered humans or entities comprised of humans as inventors and patent-holders.

“Even if we do accept that an AI system is the true inventor, the first big problem is ownership. How do you work out who the owner is? An owner needs to be a legal person, and an AI is not recognised as a legal person,” she says.

Ownership is crucial to IP law. Without it there would be little incentive for others to invest in the new inventions to make them a reality.

“Another problem with ownership when it comes to AI-conceived inventions, is even if you could transfer ownership from the AI inventor to a person: is it the original software writer of the AI? Is it a person who has bought the AI and trained it for their own purposes? Or is it the people whose copyrighted material has been fed into the AI to give it all that information?” asks A/Prof. George.

For obvious reasons

Prof. Walsh says what makes AI systems so different to humans is their capacity to learn and store so much more information than an expert ever could. One of the requirements of inventions and patents is that the product or idea is novel, not obvious and is useful.

“There are certain assumptions built into the law that an invention should not be obvious to a knowledgeable person in the field,” Prof. Walsh says.

“Well, what might be obvious to an AI won’t be obvious to a human because AI might have ingested all the human knowledge on this topic, way more than a human could, so the nature of what is obvious changes.”

Prof. Walsh says this isn’t the first time that AI has been instrumental in coming up with new inventions. In the area of drug development, a new antibiotic was created in 2019 – Halicin – that used deep learning to find a chemical compound that was effective against drug-resistant strains of bacteria.

“Halicin was originally meant to treat diabetes, but its effectiveness as an antibiotic was only discovered by AI that was directed to examine a vast catalogue of drugs that could be repurposed as antibiotics. So there’s a mixture of human and machine coming into this discovery.”

Prof. Walsh says in the case of DABUS, it’s not entirely clear whether the system is truly responsible for the inventions.

“There’s lots of involvement of Dr Thaler in these inventions, first in setting up the problem, then guiding the search for the solution to the problem, and then interpreting the result,” Prof. Walsh says.

“But it’s certainly the case that without the system, you wouldn’t have come up with the inventions.”

Change the laws

Either way, both authors argue that governing bodies around the world will need to modernise the legal structures that determine whether or not AI systems can be awarded IP protection. They recommend the introduction of a new ‘sui generis’ form of IP law – which they’ve dubbed ‘AI-IP’ – that would be specifically tailored to the circumstances of AI-generated inventiveness. This, they argue, would be more effective than trying to retrofit and shoehorn AI-inventiveness into existing patent laws.

Looking forward, after examining the legal questions around AI and patent law, the authors are currently working on answering the technical question of how AI is going to be inventing in the future.

Dr Thaler has sought ‘special leave to appeal’ the case concerning DABUS to the High Court of Australia. It remains to be seen whether the High Court will agree to hear it. Meanwhile, the case continues to be fought in multiple other jurisdictions around the world.

Here’s a link to and a citation for the paper,

Artificial intelligence is breaking patent law by Alexandra George & Toby Walsh. Nature (Comment). 2022;605:616-618. doi:10.1038/d41586-022-01391-x

This paper appears to be open access.

The Journey

DABUS has been granted a patent in one jurisdiction, from an August 8, 2021 article on brandedequity.com,

The patent application listing DABUS as the inventor was filed in patent offices around the world, including the US, Europe, Australia, and South Africa. But only South Africa granted the patent (Australia followed suit a few days later after a court judgment gave the go-ahead [and rejected it several months later]).

Natural person?

This September 27, 2021 article by Miguel Bibe for Inventa covers some of the same ground, adding some discussion of the ‘natural person’ problem,

The patent is for “a food container based on fractal geometry”, and was accepted by the CIPC [Companies and Intellectual Property Commission] on June 24, 2021. The notice of issuance was published in the July 2021 “Patent Journal”.  

South Africa does not have a substantive patent examination system and, instead, requires applicants to merely complete a filing for their inventions. This means that South Africa patent laws do not provide a definition for “inventor” and the office only proceeds with a formal examination in order to confirm if the paperwork was filled correctly.

… according to a press release issued by the University of Surrey: “While patent law in many jurisdictions is very specific in how it defines an inventor, the DABUS team is arguing that the status quo is not fit for purpose in the Fourth Industrial Revolution.”

On the other hand, this may not be considered as a victory for the DABUS team since several doubts and questions remain as to who should be considered the inventor of the patent. Current IP laws in many jurisdictions follow the traditional term of “inventor” as being a “natural person”, and there is no legal precedent in the world for inventions created by a machine.

August 2022 update

Mike Masnick in an August 15, 2022 posting on Techdirt provides the latest information on Stephen Thaler’s efforts to have patents and copyrights awarded to his AI entity, DABUS,

Stephen Thaler is a man on a mission. It’s not a very good mission, but it’s a mission. He created something called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) and claims that it’s creating things, for which he has tried to file for patents and copyrights around the globe, with his mission being to have DABUS named as the inventor or author. This is dumb for many reasons. The purpose of copyright and patents are to incentivize the creation of these things, by providing to the inventor or author a limited time monopoly, allowing them to, in theory, use that monopoly to make some money, thereby making the entire inventing/authoring process worthwhile. An AI doesn’t need such an incentive. And this is why patents and copyright only are given to persons and not animals or AI.

… Thaler’s somewhat quixotic quest continues to fail. The EU Patent Office rejected his application. The Australian patent office similarly rejected his request. In that case, a court sided with Thaler after he sued the Australian patent office, and said that his AI could be named as an inventor, but thankfully an appeals court set aside that ruling a few months ago. In the US, Thaler/DABUS keeps on losing as well. Last fall, he lost in court as he tried to overturn the USPTO ruling, and then earlier this year, the US Copyright Office also rejected his copyright attempt (something it has done a few times before). In June, he sued the Copyright Office over this, which seems like a long shot.

And now, he’s also lost his appeal of the ruling in the patent case. CAFC, the Court of Appeals for the Federal Circuit — the appeals court that handles all patent appeals — has rejected Thaler’s request just like basically every other patent and copyright office, and nearly all courts.

If you have the time, the August 15, 2022 posting is an interesting read.

Consciousness and ethical AI

Just to make things more fraught, an engineer at Google has claimed that one of their AI chatbots has consciousness. From a June 16, 2022 article (in Canada’s National Post [previewed on epaper]) by Patrick McGee,

Google has ignited a social media firestorm on the nature of consciousness after placing on paid leave an engineer who went public with his belief that the tech group’s chatbot has become “sentient.”

Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, did not receive much attention when he wrote a Medium post saying he “may be fired soon for doing AI ethics work.”

But a Saturday [June 11, 2022] profile in the Washington Post characterized Lemoine as “the Google engineer who thinks the company’s AI has come to life.”

This is not the first time that Google has run into a problem with ethics and AI. Famously, Timnit Gebru, who co-led (with Margaret Mitchell) Google’s ethics and AI unit, departed in 2020. Gebru said (and maintains to this day) she was fired; Google said she had resigned and never did make a final statement, although after an investigation Gebru did receive an apology. You *can* read more about Gebru and the issues she brought to light in her Wikipedia entry. Coincidentally (or not), Margaret Mitchell was terminated/fired in February 2021 from Google after criticizing the company for Gebru’s ‘firing’. See a February 19, 2021 article by Megan Rose Dickey for TechCrunch for details about Margaret Mitchell’s termination, which the company has admitted was a firing.

Getting back to intellectual property and AI.

What about copyright?

The earliest material I have here about the ‘creative’ arts and artificial intelligence, “Writing and AI or is a robot writing this blog?” posted July 16, 2014, makes no mention of copyright. More recently, there’s “Beer and wine reviews, the American Chemical Society’s (ACS) AI editors, and the Turing Test” posted May 20, 2022. The type of writing featured is not literary or what is typically considered creative writing.

On the more creative front, there’s “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)” posted on December 3, 2021. The literary/creative portion of the post can be found under the ‘AI and creativity’ subhead approximately 30% of the way down and where I mention Douglas Coupland. Again, there’s no mention of copyright.

It’s with the visual arts that copyright gets mentioned. The first one I can find here is “Robot artists—should they get copyright protection” posted on July 10, 2017.

Fun fact: Andres Guadamuz, who was mentioned in my posting, took to his own blog where he gave my blog a shout out while implying that I wasn’t thoughtful. The gist of his August 8, 2017 posting was that he was misunderstood by many people, which led to the title for his post, “Should academics try to engage the public?” Thankfully, he soldiers on trying to educate us with his TechnoLlama blog.

Lastly, there’s this August 16, 2019 posting “AI (artificial intelligence) artist got a show at a New York City art gallery” where you can scroll down to the ‘What about intellectual property?’ subhead about 80% of the way.

You look like a thing …

I am recommending a book for anyone who’d like to learn a little more about how artificial intelligence (AI) works: “You look like a thing and I love you; How Artificial Intelligence Works and Why It’s Making the World a Weirder Place” by Janelle Shane (2019).

It does not require an understanding of programming/coding/algorithms/etc.; Shane makes the subject as accessible as possible and gives you insight into why the term ‘artificial stupidity’ is more applicable than you might think. You can find Shane’s website here and you can find her 10 minute TED talk here.

*’can’ added to sentence on May 12, 2023.

Getting to be more literate than humans

Lucinda McKnight, lecturer at Deakin University, Australia, has a February 9, 2021 essay about literacy in the coming age of artificial intelligence (AI) for The Conversation (Note 1: You can also find this essay as a February 10, 2021 news item on phys.org; Note 2: Links have been removed),

Students across Australia have started the new school year using pencils, pens and keyboards to learn to write.

In workplaces, machines are also learning to write, so effectively that within a few years they may write better than humans.

Sometimes they already do, as apps like Grammarly demonstrate. Certainly, much everyday writing humans now do may soon be done by machines with artificial intelligence (AI).

The predictive text commonly used by phone and email software is a form of AI writing that countless humans use every day.

According to an industry research organisation Gartner, AI and related technology will automate production of 30% of all content found on the internet by 2022.

Some prose, poetry, reports, newsletters, opinion articles, reviews, slogans and scripts are already being written by artificial intelligence.

Literacy increasingly means and includes interacting with and critically evaluating AI.

This means our children should no longer be taught just formulaic writing. [emphasis mine] Instead, writing education should encompass skills that go beyond the capacities of artificial intelligence.

McKnight’s focus is on how Australian education should approach the coming AI writer ‘supremacy’, from her February 9, 2021 essay (Note: Links have been removed),

In 2019, the New Yorker magazine did an experiment to see if IT company OpenAI’s natural language generator GPT-2 could write an entire article in the magazine’s distinctive style. This attempt had limited success, with the generator making many errors.

But by 2020, GPT-3, the new version of the machine, trained on even more data, wrote an article for The Guardian newspaper with the headline “A robot wrote this entire article. Are you scared yet, human?”

This latest much improved generator has implications for the future of journalism, as the Elon Musk-funded OpenAI invests ever more in research and development.

AI writing is said to have voice but no soul. Human writers, as the New Yorker’s John Seabrook says, give “color, personality and emotion to writing by bending the rules”. Students, therefore, need to learn the rules and be encouraged to break them.

Creativity and co-creativity (with machines) should be fostered. Machines are trained on a finite amount of data, to predict and replicate, not to innovate in meaningful and deliberate ways.

AI cannot yet plan and does not have a purpose. Students need to hone skills in purposeful writing that achieves their communication goals.

AI is not yet as complex as the human brain. Humans detect humor and satire. They know words can have multiple and subtle meanings. Humans are capable of perception and insight; they can make advanced evaluative judgements about good and bad writing.

There are calls for humans to become expert in sophisticated forms of writing and in editing writing created by robots as vital future skills.

… OpenAI’s managers originally refused to release GPT-3, ostensibly because they were concerned about the generator being used to create fake material, such as reviews of products or election-related commentary.

AI writing bots have no conscience and may need to be eliminated by humans, as with Microsoft’s racist Twitter prototype, Tay.

Critical, compassionate and nuanced assessment of what AI produces, management and monitoring of content, and decision-making and empathy with readers are all part of the “writing” roles of a democratic future.

It’s an interesting line of thought and McKnight’s ideas about writing education could be applicable beyond Australia, assuming you accept her basic premise.

I have a few other postings here about AI and writing:

Writing and AI or is a robot writing this blog? a July 16, 2014 posting

AI (artificial intelligence) text generator, too dangerous to release? a February 18, 2019 posting

Automated science writing? a September 16, 2019 posting

It seems I have a lot of questions* about the automation of any kind of writing.

*’question’ changed to ‘questions’ on November 25, 2021.

Science communication: perspectives from 39 countries

Bravo to the team behind “Communicating Science: A Global Perspective” published in September 2020 by the Australian National University Press!

Two of the editors, Toss Gascoigne (Visiting fellow, Centre for the Public Awareness of Science, Australian National University) and Joan Leach (Professor, Australian National University), have written a November 8, 2020 essay featuring their book for The Conversation,

It’s a challenging time to be a science communicator. The current pandemic, climate crisis, and concerns over new technologies from artificial intelligence to genetic modification by CRISPR demand public accountability, clear discussion and the ability to disagree in public.

Since the Second World War, there have been many efforts to negotiate a social contract between science and civil society. In the West, part of that negotiation has emphasised the distribution of scientific knowledge. But how is the relationship between science and society formulated around the globe?

We collected stories from 39 countries together into a book. …

The term “science communication” is not universal. For 50 years, what is called “science communication” in Australia has had different names in other countries: “science popularisation”, “public understanding”, “vulgarisation”, “public understanding of science”, and the cultivation of a “scientific temper”.

Colombia uses the term “the social appropriation of science and technology”. This definition underscores that scientific knowledge is transformed through social interaction.

Each definition delivers insights into how science and society are positioned. Is science imagined as part of society? Is science held in high esteem? Does association with social issues lessen or strengthen the perception of science?

Governments play a variety of roles in the stories we collected. The 1970s German government stood back, perhaps recalling the unsavoury relationship between Nazi propaganda and science. Private foundations filled the gap by funding ambitious programs to train science journalists. In the United States, the absence of a strong central agency encouraged diversity in a field described variously as “vibrant”, “jostling” or “cacophonous”.

Russia saw a state-driven focus on science through the communist years, to modernise and industrialise. In 1990 the Knowledge Society’s weekly science newspaper Argumenty i Fakty had the highest weekly circulation of any newspaper in the world: 33.5 million copies. But the collapse of the Soviet Union showed how fragile these scientific views were, as people turned to mysticism.

Eighteen countries contributing to the book have a recent colonial history, and many are from the Global South. They saw the end of colonial rule as an opportunity to embrace science. …

Science in these countries focused mainly on health, the environment and agriculture. Nigeria’s polio vaccine campaign was almost derailed in 2003 when two influential groups, the Supreme Council for Shari’ah in Nigeria and the Kaduna State Council of Imams and Ulamas, declared the vaccine contained anti-fertility substances and was part of a Western conspiracy to sterilise children. Only after five Muslim leaders witnessed a successful vaccine program in Egypt was it recognised as being compatible with the Qur’an.

If you have time, I recommend reading the entire essay, which can be found here as a November 8, 2020 essay on The Conversation or as a Nov. 9, 2020 news item on phys.org.

I found more information about the book on the Australian National University Press’s Communicating Science: A Global Perspective webpage,

This collection charts the emergence of modern science communication across the world. This is the first volume to map investment around the globe in science centres, university courses and research, publications and conferences as well as tell the national stories of science communication.

Communicating Science describes the pathways followed by 39 different countries. All continents and many cultures are represented. For some countries, this is the first time that their science communication story has been told. [emphasis mine]

Here’s a link to and a citation for the book,

Communicating Science; A Global Perspective. Edited by Toss Gascoigne, Bernard Schiele, Joan Leach, Michelle Riedlinger, Bruce V. Lewenstein, Luisa Massarani, Peter Broks. DOI: http://doi.org/10.22459/CS.2020. ISBN (print): 9781760463656. ISBN (online): 9781760463663. Imprint [publisher]: ANU Press. Publication date: September 2020.

The paper copy is $150 and I assume those are Australian dollars. There are free online and e-versions but they do ask you to: Please read Conditions of use before downloading the formats.

A commentary on the Canadian chapter, mostly

Before launching into the commentary, here’s a bit about words.

Terminology

Terminology, whether it’s within one language or across two or more languages, is almost always an issue and science communication is no exception as is noted in the Introduction (Subsection 4, page 11),

In the course of compiling the chapters, we found that the term ‘science communication’ has many definitions and not all researchers or practitioners agree on its goals and boundaries. It has been variously described as an objective, goals, a process, a result and an outcome. This confusion over a definition is reflected in the terminology used internationally for the field. From the second half of the 20th century, what we have chosen to call ‘science communication’ for this book has flown under different headings: ‘science popularisation, ‘public understanding’, ‘vulgarisation’, ‘social appropriation of science and technology’, ‘public understanding of science’ and ‘scientific temper’ for example. In all, the chapters mention 24 separate terms for the expression ‘science communication’ that we chose. We have taken note of that variety.

Very few of the chapters, which are organized by country name, attempt to establish a definition. The chapter on Canada, written by Michelle Riedlinger, Alexandre Schiele and Germana Barata, is one of the many not offering any definition for ‘science communication’. It does, however, offer a few other terms used as synonyms or closely allied concepts (also without definitions). They include ‘science or scientific culture’, which (according to a Nov. 13, 2020 email from Toss Gascoigne in response to my question about science culture being a term unique to Canada) has French roots and is used in France and Canada.

Scope

The scope for both the book and the chapter on Canada is substantive and everyone involved is to be lauded for their efforts. Here’s how the book is described on the publisher’s ‘Communicating Science; A Global Perspective’ webpage (Note: more about the emphases in the ‘I love you; we need to talk’ subsection below),

This collection charts the emergence of modern science communication across the world. This is the first volume to map investment around the globe in science centres, university courses and research, publications and conferences as well as tell the national stories of science communication. [emphases mine]

The authors of the Canada chapter managed to squeeze a lot of Canadian science communication history into 21 pp. of text.

Quite an accomplishment. I am particularly admiring as, earlier this year, I decided to produce a 10-year overview (2010-19) of science culture in Canada, got carried away, and proceeded to write a 25,000-word, multipart series.

Given the November 8, 2020 essay and its storytelling style, I wasn’t expecting the largely historical review I found in both the Canada and France chapters. I advise reading the Introduction to the book first as that will set expectations more accurately.

I love you; we need to talk

I learned a lot about the history of science communication in Canada. It’s the first time I’ve seen a document that pulls together so much material ranging from 19th century efforts to relatively contemporaneous efforts, i.e., 2018 or thereabouts.

There’s something quite exciting about recognizing the deep roots that science communication has in Canada.

I just wish the authors hadn’t taken ‘the two cultures’ (French and English) route. By doing so, they managed to write a history that ignores a lot of other influences including that of Canada’s Indigenous peoples and their impact on Canadian science, science culture, and, increasingly, science communication. (Confession, I too missed the impact from Indigenous peoples in my series.)

Plus, ‘two cultures’ seems a dated (1970s?) view of Canadian society and, by extension, its science culture and communication.

This was not the only element that seemed out of date. The authors mentioned Canada’s National Science and Technology Week without noting that the effort was rebranded in 2016 as ‘Science Odyssey’ (plus, its dates moved from Oct. to May of each year).

No surprise, the professional and institutional nature of science communication was heavily emphasized. So, it was delightful to find a section (2.10 on page 11) titled, “Citizen involvement in science communication.” Perhaps, they were constrained for space as they didn’t include the astronomy community, which I believe is amongst our oldest citizen science groups with roots that can be traced back to the 19th century (1868).

There are some other omissions (unless noted otherwise, I managed to include something on the topic in my series):

  • the Canadian Arctic and/or The North (I tried but did not succeed)
  • art/science (also known as sciart) communities
  • the maker and do-it-yourself (DIY) communities
  • open science, specifically, the open science initiative at McGill University’s Neuro (Montreal Neurological Institute-Hospital) (can’t remember but I probably missed this too)
  • the immigrant communities and their impact (especially obvious in light of the January 2020 downing of Flight PS752 from Iran to Ukraine; many of the passengers were Canadians and/or students coming to study, and a stunning percentage of those people were in science and/or technology) (I didn’t do as good a job as I should have)
  • women or gender issues (I missed it too)
  • BIPOC representation (yes, I missed it)
  • LGBTQ+ representation (yes, me too)
  • social sciences (yes, me too)
  • etc.

The bits I emphasized in the publisher’s description of the book “science centres, university courses and research, publications and conferences as well as tell the national stories of science communication” set up tension between a ‘national story of science communication’ and a ‘national story of institutionalized and/or academic science communication’.

Clearly, the authors had an almost impossible task and by including citizen science and social media and some independent actors they made an attempt to recognize the totality. Still, I wish they had managed even a sentence or two mentioning some of these other communities of interest and/or noting the omissions.

Here’s more about the difficulties I think the authors encountered.

It’s all about central Canada

As noted with other problems, this one happened to me too (in my 2010-19 Canadian science culture overview). It’s as if the provinces of Ontario and Québec exert a gravitational pull on every aspect of our nationhood, including our science and science communication. Almost everything tracks back to those provinces.

The authors mention most of the provinces in their chapter, although none of the three Northern territories; that’s evidence they made an attempt at broad coverage. What confounds me is the 7 pp. of 21 pp. of text dedicated to Québec alone, in addition to the Québec mentions in the other 14 pp. If there was a problem with word count, couldn’t they have shaved off a paragraph or two to include some or all of the omissions I noted earlier? Or added a paragraph or two to the chapter?

Framing and authors

By framing the discussion about Canada within the ‘two culture’ paradigm, the authors made things difficult for themselves. Take a look at the title and first sentence for the chapter,

CANADA
One country, two cultures: Two routes to science communication

This chapter provides an account of modern science communication in Canada, including historical factors influencing its development, and the development of the distinct Province of Quebec. …

The title and discussion frame the article so tightly that anything outside the frame is an outlier, i.e., they ‘baked’ in the bias. It’s very similar to the problem in scientific research where you have to be careful about your research question because asking the wrong question or framing it poorly will result in problematic research.

Authors

It’s not unusual for family members to work in the same field and even work together (Marie and Pierre Curie spring to mind). I believe the failure to acknowledge (I checked the introduction, the acknowledgements, and the Canada chapter) the relationship of one of the authors of the Canada chapter (Alexandre Schiele, son) to one of the book’s editors (Bernard Schiele, father) was an oversight. (Both also have some sort of affiliation with the Université du Québec à Montréal [UQAM].)

Anyway, I hope subsequent editions of the book will include an acknowledgement. These days, transparency is important, eh?

Having gotten that out of the way, I was curious about the ‘Canada’ authors and found this on p. 204,

Contributors

Dr Michelle Riedlinger is an associate professor at the University of the Fraser Valley, British Columbia, Canada, and secretary of the PCST Network [Public Communication of Science and Technology Network] and her career spans the practical and theoretical sides of science communication.

Dr Alexandre Schiele holds a PhD in communication science (Sorbonne) and another in political science (University of Quebec). He is working on a project ‘Mapping the New Science Communication Landscape in Canada’.

Dr Germana Barata is a science communication researcher at the Laboratory of Advanced Studies in Journalism (Labjor) at the State University of Campinas, Brazil, and a member of the Scientific Committee of the PCST Network.

Outsiders often provide perceptive and thoughtful commentary. I did not find any discernible trace of that perspective in the chapter despite all three authors having extensive experience in other countries.

Riedlinger is more strongly associated with Australia than Canada (source: Riedlinger’s biography on the Public Communication of Science and Technology Network). As of July 2020, she is a senior lecturer at Australia’s Queensland University of Technology (QUT).

Interestingly, she is also a Board member of the Science Writers and Communicators of Canada (SWCC) (source: her QUT biography). I’ll get back to this membership later.

Barata is (or was?) a research associate at Simon Fraser University’s Canada Scholar Communications Lab (ScholCommLab) (source: Barata’s SFU biography) in addition to her work in Brazil.

Those two would seem to cover the southern hemisphere. The third gives us the northern hemisphere.

A. Schiele (source: his CV on ResearchGate) is (or was?) a researcher at the UQAM (Université du Québec à Montréal) East Asia Observatory and is (or was?) at (source: profile on Academia.edu) The Hebrew University of Jerusalem’s Louis Frieberg Center for East Asian Studies.

After looking at their biographies and CVs, the Canada book chapter is even more disappointing. Yes, the authors were constrained by the book’s raison d’être and the way they framed their chapter but, perhaps, there’s something more to the story?

The future of science communication and the ‘elephant in the room’

At the conclusion of the Canada chapter (pp. 194-6), there’s this,

4. The future for modern science communication in Canada

Recent surveys of Canadian science communicators identified through Twitter and Instagram show that, compared to traditional science communication professionals, social media communicators are younger, paid less (or not at all) for their science communication activities, and have been communicating for fewer years than other kinds of science communicators (Riedlinger, Barata and Schiele [A], 2019). They are more likely to have a science background (rather than communication, journalism or education background) and are less likely to be members of professional associations. These communicators tend to be based in Ontario, Quebec and British Columbia, and communicate with each other through their own informal networks. Canadian social media science communicators are primarily located in the provinces identified by Schiele [B] and Landry (2012) as the most prolific regions for science communication in Canada, where Canada’s most prestigious and traditional universities are located, and where the bulk of Canada’s population is concentrated. While some science journalists and communicators in Canada mourn the perceived loss of control over science communication as a loss of quality and accuracy, others welcome digital technology for the public engagement potential it offers. For example, Canadian science Instagram communicator Samantha Yammine [emphasis mine] was recently criticised in a Science magazine op-ed piece for trivialising scientific endeavours on social media (Wright, 2018). However, supporters of Yammine argued that she was successfully responding to the Instagram medium in her communication (see, for example, Lougheed, 2018 [emphasis mine]; Marks, 2018). Science has subsequently published an article by Yammine and other social media communicators on the benefits of social media for science communication (Yammine, Liu, Jarreau and Coe, 2018). Social media platforms are allowing space for sociopolitically motivated communicators in Canada to work productively. The impact of these social media science communication efforts is difficult to assess; yet open science for consensus building and support for science in society efforts are needed in Canada now more than ever.

Canada has seen increased investments in science as described by the Naylor Report and the Global Young Academy, but science communication and outreach efforts are still needed to support science culture nationally (Boon, 2017a) [emphasis mine]. Funding for activities happens at the federal level through agency funding; however, Canadian scientists, science communicators and science policymakers have criticised some recent initiatives for being primarily aimed at youth rather than adults, supporting mainly traditional and established organisations rather than innovative science communication initiatives, and having limited connection with the current and broader community of science communicators in Canada. While some science communicators are actively advocating for greater institutional support for a wider range of science communication initiatives (see Boon, 2017b) [emphasis mine], governments and scientific communities have been slow to respond.

Austerity continues to dominate public policy in Quebec, and science culture has ceased to be a priority. The Society for the Promotion of Science and Technology dissolved in 2010 and State-sponsored PCST in Quebec has come to an end. PCST actors and networks in Quebec persevere although they face difficulties in achieving an online presence in a global, yet overwhelmingly Anglophone, social media environment. However, the European Union program Horizon 2020 may very well encourage a new period of renewed government interest in science communication.

As a preface to the next subsection, I want to note that the relationships and networks I’m describing are not problematic or evil or sinister in and of themselves. We all work with friends and acquaintances and, even, family when we can. If not, we find other ways to establish affiliations such as professional and informal networks.

The advantages include confidence in the work quality, knowing deadlines will be met and that you’ll be treated fairly and acknowledged, getting a fast start, etc. There are many advantages and one of the biggest disadvantages (in my opinion) is ‘group think’, i.e., the tendency for a group to unconsciously reinforce each other’s biases.

Weirdly, outsiders such as myself have a similar problem. While people within networks tend to get reinforcing feedback, ‘group think’, outsiders don’t get much, if any. Without feedback you’re at the mercy of your search techniques and you tend to reinforce your own biases and shortsightedness (you’re inside your own echo chamber). In the end research needs to take those shortcomings, biases, and beliefs into account.

Networks and research can be a trap

All three authors are in one fashion or another closely associated with the PCST Network. Two (Riedlinger and Barata) are board or executive members of the PCST Network and one (A. Schiele) has a familial relationship with a book editor (B. Schiele) who is himself an executive member of the PCST Network. (Stay tuned, there’s one more network of relationships coming up.)

Barata, Riedlinger, and A. Schiele were the research team for the ‘Mapping the New Science Communication Landscape in Canada’ project as you can see here. (Note: Oops! There’s a typo in the project title on the webpage, which, unexpectedly, is hosted by Brazil’s Laboratory of Advanced Studies in Journalism [Labjor] where Barata is a researcher.)

My points about ‘Mapping …’ and the Canada book chapter,

  1. The Canada book chapter’s ‘The impact of new and emerging technology …’ has roots that can be traced back to the ‘Mapping’ project, which focused on social media (specifically, Instagram and Twitter).
  2. The ‘Mapping’ project is heavily dependent on one network (not PCST).
  3. The Canada chapter is listed as one of the ‘Mapping’ project’s publications. (Source: Project’s Publications page).
  4. The ‘Impact’ subsection sets the tone for a big chunk of the final subsection, ‘The future …’ both heavily dependent on the ‘Mapping’ project.
  5. The ‘Mapping’ project has a few problems, which I describe in the following.

In the end, two sections of the Canada chapter are heavily dependent on one research project that the authors themselves conducted.

Rather than using an authoritative style, perhaps the authors could have included a sentence indicating that more research is needed before making definitive statements about Canadian science communication and its use of new and emerging technologies and about its future.

The second network and other issues

Counterintuitively, I’m starting with the acknowledgements in the materials produced by the three authors for their ‘Mapping’ project and then examining the Canada chapter’s ‘Impact of new and emerging technologies …’ subsection before getting back to the Canada chapter’s final subsection, ‘The future …’.

The authors’ 2019 paper is interesting. You can access it, “The landscape of science communication in contemporary Canada: A focus on anglophone actors and networks,” here on Academia.edu, and you can access the authors’ 2018 paper, “Using social media metrics to identify science communicators in Canada,” prepared for the 2018 Science & You conference in Beijing, China, here on ResearchGate. Both appear to be open access. That is wonderful and much appreciated.

The 2019 and 2018 papers’ Acknowledgements have something interesting (excerpt from 2019 paper),

This study was supported by the Social Sciences and Humanities Research Council of Canada through Grant (892-2017-2019) to Juan Pablo Alperin [there’s a bit more info. about the grant on Alperin’s CV in the Grants subsection] and Michelle Riedlinger. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We would like to thank the Science Writers and Communicators of Canada (SWCC) for their partnership in this project. [emphasis mine] In particular, we are grateful for the continued support and assistance of Shelley McIvor, Janice Benthin and Tim Lougheed [emphasis mine] from SWCC, and Stéphanie Thibault from l’Association des communicateurs scientifiques du Québec (ACS).

It seems the partnership with SWCC very heavily influenced the text found in the Canada chapter’s subsection ‘The impact of new and emerging technologies on science communication’ (p. 187),

2.12. The impact of new and emerging technologies on science communication

Coupled with government ambivalence towards science communication over the last decade, Canada has experienced the impact of new and emerging technologies and changing economic conditions. These changes have reshaped the mainstream media landscape in many parts of the world, including Canada, and the effects have been exacerbated by neoliberal agendas. The changes and their impacts on Canadian journalism were captured in the Canadian survey report The Shattered Mirror (2017). The survey found that Canadians prefer to be informed through the media but on their own timelines and with little or no cost to themselves.

Canada’s science media have responded to new media in many ways. For example, in 2005, CBC’s Quirks and Quarks became the first major CBC radio show to be made available as a free podcast. Canada’s very active blogging community has been developing from the early 2000s, and recent digital initiatives are helping redefine what independent science communication looks like. These initiatives include Science Borealis, launched in 2013 [emphasis mine] (Science Borealis, 2018), Hakai Magazine [emphasis mine] launched in 2015 (Hakai Magazine, n.d.), and The Conversation Canada launched in 2017 (The Conversation Canada, 2018). Twitter, Instagram and YouTube are also supporting a growing number of science communicators engaging a diverse range of publics in digital spaces. …

[assume my emphasis for this paragraph; I didn’t have the heart to make any readers struggle through that much bolding] In 2016, the Canadian Science Writers Association changed its name to the Science Writers and Communicators of Canada Association (SWCC) to reflect the new diversity of its membership as well as the declining number of full-time journalists in mass media organisations. SWCC now describes itself as a national alliance of professional science communicators in all media, to reflect the blurring boundaries between journalism, science communication and public relations activities (SWCC, 2017). In 2017, SWCC launched the People’s Choice Awards for Canada’s favourite science site and Canada’s favourite blog to reflect the inclusion of new media.

Given that so much of the relatively brief text in this three-paragraph subsection is devoted to SWCC, and the examples of new media science practitioners (Science Borealis, Hakai Magazine, and Samantha Yammine) are either associated with or members of SWCC, it might have been a good idea to make the relationship between the organization and the three authors a little more transparent.

We’re all in this together: PCST, SWCC, Science Borealis, Hakai Magazine, etc.

Here’s a brief recapitulation of the relationships so far: Riedlinger and Barata, both co-authors of the Canada chapter, are executive/board/committee members of the Public Communication of Science and Technology (PCST) network. As well, Bernard Schiele, one of the co-editors of the book, is also a committee member of PCST (source: PCST webpage) and, as noted earlier, he’s related to the third co-author of the Canada chapter, Alexandre Schiele.

Plus, Riedlinger is one of the book’s editors.

Interestingly, four of the seven editors for the book are members of the PCST network.

More connections:

  • Remember Riedlinger is also a board member of the Science Writers and Communicators of Canada (SWCC)?
  • One of the founding members* of Science Borealis (a Canadian science blog aggregator), Sarah Boon is the managing editor for Science Borealis (source: Boon’s LinkedIn profile) and also a member of the SWCC (source: About me webpage on Watershed Notes). *Full disclosure: I too am a co-founding member of Science Borealis.*
    • Boon’s works and works from other SWCC members (e.g., Tim Lougheed) are cited in the conclusion for the Canada chapter.
  • Hakai Magazine and Science Borealis both cited as “… recent digital initiatives … helping redefine what independent science communication looks like.”
    • Hakai’s founding and current editor-in-chief is Jude Isabella, a past board member of the *SWCC’s predecessor organization Canadian Science Writers Association (source: Dec. 11, 2020 communication from Ms. Isabella)*

In short, there are many interlaced relationships.

The looking glass and a lack of self-criticism

Reviewing this work put some shortcomings of and biases in my own work into high relief. It’s one of the eternal problems, blindness, whether it’s a consequence of ‘group think’ or a failure to get out of your own personal bubble. Canadian science communication/culture is a big topic and it’s easy to get trapped in your own bubble or your group’s bubble.

As far as I can tell from reading the conference paper (2018) and the paper published in Cultures of Science (2019), there is no indication in the text that the researchers critiqued their own methodology.

Specifically, most of the respondents to their survey were from one of two professional science communication organizations (SWCC and ACS [Association des communicateurs scientifiques du Québec]). As for the folks the authors found on Twitter and Instagram, those people had to self-identify as science communicators or use scicomm, commsci, vulgarisation and sciart as hashtags. If you didn’t use one of those hashtags, you weren’t seen. Also, ‘sciart’ can be called ‘artsci’, so why wasn’t that hashtag also used?
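To make the sampling concern concrete, here is a toy, entirely hypothetical Python sketch (the accounts and posts are invented; only the four hashtags come from the authors’ papers). It simply shows that a filter keyed to a fixed set of hashtags never ‘sees’ communicators who tag their work differently, or not at all.

```python
# Toy illustration of hashtag-based sampling (hypothetical data, not the authors' method or dataset).
TRACKED_TAGS = {"scicomm", "commsci", "vulgarisation", "sciart"}  # the four tags named in the papers

posts = [  # invented example accounts
    {"author": "@astro_volunteer", "tags": {"astronomy", "citizenscience"}},  # no tracked tag
    {"author": "@artsci_collective", "tags": {"artsci"}},                     # uses 'artsci', not 'sciart'
    {"author": "@lab_blogger", "tags": {"scicomm"}},                          # the only one sampled
]

sampled = [post["author"] for post in posts if post["tags"] & TRACKED_TAGS]
print(sampled)  # ['@lab_blogger'] -- the other two communicators are invisible to this kind of survey
```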

In short, the research seems to have a rather narrow dataset, which is not a problem in and of itself, as long as it’s noted in your paper. Unfortunately, the authors didn’t note it, and that problem/weakness followed the researchers into the book.

Remember the subsection ‘2.12. The impact of new and emerging technologies on science communication’? As noted, it was heavily influenced by the co-authors’ own research, and in this book those words attain great authority as they are writing about Canada’s science communication and ‘The future for modern science communication in Canada’.

Getting back briefly to connections or, in this case, a lack of them. There seems to have been one ‘outside’ editor/reviewer (source: Acknowledgements) for the book, Ranjan Chaudhuri, Associate Professor at the National Institute of Industrial Engineering Mumbai (source: Chaudhuri’s LinkedIn profile). He’s the only person amongst the authors and the editors for whom I could find no connection to PCST.

(Book editors who weren’t previously mentioned: Joan Leach and Bruce V. Lewenstein were both invited speakers at the 2016 PCST Talk in Istanbul, Turkey; Peter Broks presented in 2004 at the PCST conference in Barcelona, Spain, and his work was presented at a 2018 PCST conference in Dunedin, New Zealand.)

Chaudhuri doesn’t seem to have any connection and the other three seem to have, at best, a weak connection to PCST. That leaves four ‘outsiders’ to critically review and edit chapters from 39 countries. It’s an impossible job.

So, what is the future of science communication in Canada?

In the end, I have love for and two big problems with the Canada chapter.

What were they thinking?

Maybe someone could help me understand why the final paragraph of the Canada chapter is about Québec, the PCST, and the European Union’s Horizon 2020 science funding initiative.

Ending the chapter with the focus, largely, on one province, **an international organization (PCST) incorporated in Australia**, and a European science funding initiative that sunsets in 2020 to be replaced by Horizon Europe 2021-27 confounds me.

Please, someone out there, please help me. How do these impact or set the future for science communication in Canada?

Aside: the authors never mention Québec’s Agence Science-Presse. It’s an independent media outlet founded in 1978 and devoted, as you can see from the name, entirely to science. It seems like an odd omission.

Now, I have another question.

What about other realities, artificial intelligence, and more?

Why didn’t the authors mention virtual reality (VR)/augmented reality (AR)/mixed reality (MR)/cross reality (XR) and others? What about artificial intelligence (AI) and automated writing, i.e., will we need writers and communicators? (For anyone not familiar with the move to automate more of the writing process, see my July 16, 2014 posting “Writing and AI or is a robot writing this blog?” written when Associated Press (AP) made a deal with Automated Insights, and my Sept. 16, 2019 posting “Automated science writing?” about some work at the Massachusetts Institute of Technology [MIT].)

It’s not exactly new but what impact are games of the virtual and real life types having?

All of these technologies and others on the horizon are certain to have an effect on the future of science communication in Canada.

Confession: I too missed these new and emerging technologies when pointing to the future in my own series. (sigh) Blindness affects all of us.

The future

I wish the authors had applied a little more imagination to the ‘future’ because I think it has major possibilities grounded in both new and emerging technologies and in hopes for greater inclusiveness (Indigenous communities, citizen scientists, elders, artists, and more) in the Canadian science communication effort. As for the possible impact these groups and technologies will have on institutionalized and noninstitutionalized science communication, I would dearly like to have seen mention of the possibility if not outright speculation.

The end

There is a lot to admire in the Canada chapter. Given the amount of history they were covering, the authors were admirably succinct and disciplined. There’s a lot to be learned in this chapter.

As for the flaws, as noted many times, I am subject to many of the same ones. I have often longed for a critical reader who can see what I can’t. In some ways, it’s the same problem academics face.

Thank you to the authors and the editors for an unexpected treat. Examining their work made it possible for me to cast a jaundiced eye on some of my own, becoming my own critical reader. Again, thank you to the authors and editors of this book. I just hope this critique proves useful to someone else too.

Links

For anyone who is curious, here’s a link to the authors’ interactive map of the new landscape (Twitter and Instagram) of science communication in Canada. BTW, I was charmed by it, and it looks like they’re still adding to the map.

My multipart series,

Part 1 covers science communication, science media (mainstream and others such as blogging) and arts as exemplified by music and dance: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (1 of 5).

Part 2 covers art/science (or art/sci or sciart) efforts, science festivals both national and local, international art and technology conferences held in Canada, and various bar/pub/café events: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (2 of 5).

Part 3 covers comedy, do-it-yourself (DIY) biology, chief science advisor, science policy, mathematicians, and more: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (3 of 5).

Part 4 covers citizen science, birds, climate change, indigenous knowledge (science), and the IISD Experimental Lakes Area: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (4 of 5).

Part 5: includes science podcasting, eco art, a Saskatchewan lab with an artist-in-residence, the Order of Canada and children’s science literature, animation and mathematics, publishing science, *French language science media,* and more: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (5 of 5).

Plus,

An addendum: where I make some corrections and include a reference to some ‘biopoetry’: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (an addendum).

There you have it, science communication in Canada, more or less, as a book chapter and as a multipart series warts and all.

*Original: “a past board member of the SWCC” (source: homepage of Isabella’s eponymous website) changed on Dec. 11, 2020 to “past board member of SWCC’s predecessor organization Canadian Science Writers Association (source: Dec. 11, 2020 communication from Ms. Isabella)”

**Original:”an Australian organization (PCST)” changed on Dec. 11, 2020 to “an international organization (PCST) incorporated in Australia”

Automated science writing?

It seems that automated science writing is not ready—yet. Still, an April 18, 2019 news item on ScienceDaily suggests that progress is being made,

The work of a science writer, including this one, includes reading journal papers filled with specialized technical terminology, and figuring out how to explain their contents in language that readers without a scientific background can understand.

Now, a team of scientists at MIT [Massachusetts Institute of Technology] and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.

An April 17, 2019 MIT news release, which originated the news item, delves into the research and its implications,

Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists [emphasis mine] scan a large number of papers to get a preliminary sense of what they’re about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition.

The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a principal scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

From AI for physics to natural language

The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.

“We have been doing various kinds of work in AI for a few years now,” Soljačić says. “We use AI to help with our research, basically to do physics better. And as we got to be  more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

This approach could be useful in a variety of specific kinds of tasks, he says, but not all. “We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm.”

Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and “learns” what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.

But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what’s needed for real natural-language processing, the researchers say.

The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).

Essentially, the system represents each word in the text by a vector in multidimensional space — a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.

“RUM helps neural networks to do two things very well,” Nakov says. “It helps them to remember better, and it enables them to recall information more accurately.”

After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, “we realized one of the places where we thought this approach could be useful would be natural language processing,” says Soljačić,  recalling a conversation with Tatalović, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was at the time exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

The proof is in the reading

As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.

The LSTM system yielded this highly repetitive and fairly technical summary: “Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.

Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.

Already, the RUM-based system has been expanded so it can “read” through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings — the paper that this news story is attempting to summarize.

Here is the new neural network’s summary: Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.

It may not be elegant prose, but it does at least hit the key points of information.

Çağlar Gülçehre, a research scientist at the British AI company Deepmind Technologies, who was not involved in this work, says this research tackles an important problem in neural networks, having to do with relating pieces of information that are widely separated in time or space. “This problem has been a very fundamental issue in AI due to the necessity to do reasoning over long time-delays in sequence-prediction tasks,” he says. “Although I do not think this paper completely solves this problem, it shows promising results on the long-term dependency tasks such as question-answering, text summarization, and associative recall.”

Gülçehre adds, “Since the experiments conducted and model proposed in this paper are released as open-source on Github, as a result many researchers will be interested in trying it on their own tasks. … To be more specific, potentially the approach proposed in this paper can have very high impact on the fields of natural language processing and reinforcement learning, where the long-term dependencies are very crucial.”

The research received support from the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had help from the Science Daily website, whose articles were used in training some of the AI models in this research.

As usual, this ‘automated writing system’ is framed as a ‘helper’, not a usurper of anyone’s job. However, its potential for changing the nature of the work is there. About five years ago, I featured another ‘automated writing’ story in a July 16, 2014 posting titled ‘Writing and AI or is a robot writing this blog?’ You may have been reading ‘automated’ news stories for years. At the time, the focus was on sports and business.
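For anyone curious about the ‘swinging vector’ idea described in the MIT news release above, here is a small, hypothetical Python sketch. To be clear, this is not the researchers’ RUM implementation (their open-source code is the GitHub release mentioned in the article); the function name, the fixed rotation angle, and the random stand-in ‘word vectors’ are my own illustrative choices. The sketch only shows that a rotation-based update nudges the memory vector’s direction with each new token while leaving its length unchanged, which is, roughly speaking, why rotational updates are thought to help with remembering over long sequences.

```python
import numpy as np

def rotate_toward(memory, token_vec, angle=0.1):
    """Toy rotational memory update (illustrative only, not the paper's RUM).

    Rotates the unit-length memory vector by a small, fixed angle within the
    plane spanned by the current memory and the incoming token vector, so each
    new token 'swings' the memory in its direction.
    """
    u = memory / np.linalg.norm(memory)
    v = token_vec - np.dot(token_vec, u) * u      # component of the token orthogonal to the memory
    v_norm = np.linalg.norm(v)
    if v_norm < 1e-12:                            # token is (anti)parallel to memory: nothing to rotate toward
        return u
    v = v / v_norm
    return np.cos(angle) * u + np.sin(angle) * v  # rotated vector, still unit length

# Usage: fold a sequence of stand-in 'word vectors' into the memory.
rng = np.random.default_rng(0)
dim = 8
memory = rng.normal(size=dim)
memory /= np.linalg.norm(memory)
for _ in range(5):
    token = rng.normal(size=dim)                  # pretend this is the next word's embedding
    memory = rotate_toward(memory, token)
print(np.linalg.norm(memory))                     # ~1.0: rotation preserves the memory's length
```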

Getting back to 2019 and science writing, here’s a link to and a citation for the paper,

Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications by Rumen Dangovski, Li Jing, Preslav Nakov, Mićo Tatalović and Marin Soljačić. Transactions of the Association for Computational Linguistics, Volume 07, 2019, pp. 121-138. DOI: https://doi.org/10.1162/tacl_a_00258. Posted online 2019.

© 2019 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.

This paper is open access.

The future of work during the age of robots and artificial intelligence

2014 was quite the year for discussions about robots/artificial intelligence (AI) taking over the world of work. There was my July 16, 2014 post titled, Writing and AI or is a robot writing this blog?, where I discussed the implications of algorithms which write news stories (business and sports, so far) in the wake of a deal that Associated Press signed with a company called Automated Insights. A few weeks later, the Pew Research Center released a report titled, AI, Robotics, and the Future of Jobs, which was widely covered. As well, sometime during the year, renowned physicist Stephen Hawking expressed serious concerns about artificial intelligence and our ability to control it.

It seems that 2015 is going to be another banner year for this discussion. Before launching into the latest on this topic, here’s a sampling of the Pew research and the response to it. From an Aug. 6, 2014 Pew summary about AI, Robotics, and the Future of Jobs by Aaron Smith and Janna Anderson,

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade.

We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts who have been identified by researching those who are widely quoted as technology builders and analysts and those who have made insightful predictions to our previous queries about the future of the Internet. …

I wouldn’t have expected Jeff Bercovici’s Aug. 6, 2014 article for Forbes to be quite so hesitant about the possibilities of our robotic and artificially intelligent future,

As part of a major ongoing project looking at the future of the internet, the Pew Research Internet Project canvassed some 1,896 technologists, futurists and other experts about how they see advances in robotics and artificial intelligence affecting the human workforce in 2025.

The results were not especially reassuring. Nearly half of the respondents (48%) predicted that robots and AI will displace more jobs than they create over the coming decade. While that left a slim majority believing the impact of technology on employment will be neutral or positive, that’s not necessarily grounds for comfort: Many experts told Pew they expect the jobs created by the rise of the machines will be lower paying and less secure than the ones displaced, widening the gap between rich and poor, while others said they simply don’t think the major effects of robots and AI, for better or worse, will be in evidence yet by 2025.

Chris Gayomali’s Aug. 6, 2014 article for Fast Company poses an interesting question about how this brave new future will be financed,

A new study by Pew Internet Research takes a hard look at how innovations in robotics and artificial intelligence will impact the future of work. To reach their conclusions, Pew researchers invited 12,000 experts (academics, researchers, technologists, and the like) to answer two basic questions:

Will networked, automated, artificial intelligence (AI) applications and robotic devices have displaced more jobs than they have created by 2025?
To what degree will AI and robotics be parts of the ordinary landscape of the general population by 2025?

Close to 1,900 experts responded. About half (48%) of the people queried envision a future in which machines have displaced both blue- and white-collar jobs. It won’t be so dissimilar from the fundamental shift we saw in manufacturing, in which fewer (human) bosses oversaw automated assembly lines.

Meanwhile, the other 52% of experts surveyed speculate that while many of the jobs will be “substantially taken over by robots,” humans won’t be displaced outright. Rather, many people will be funneled into new job categories that don’t quite exist yet. …

Some worry that over the next 10 years, we’ll see a large number of middle class jobs disappear, widening the economic gap between the rich and the poor. The shift could be dramatic. As artificial intelligence becomes less artificial, they argue, the worry is that jobs that earn a decent living wage (say, customer service representatives, for example) will no longer be available, putting lots and lots of people out of work, possibly without the requisite skill set to forge new careers for themselves.

How do we avoid this? One revealing thread suggested by experts argues that the responsibility will fall on businesses to protect their employees. “There is a relentless march on the part of commercial interests (businesses) to increase productivity so if the technical advances are reliable and have a positive ROI [return on investment],” writes survey respondent Glenn Edens, a director of research in networking, security, and distributed systems at PARC, which is owned by Xerox. “Ultimately we need a broad and large base of employed population, otherwise there will be no one to pay for all of this new world.” [emphasis mine]

Alex Hern’s Aug. 6, 2014 article for the Guardian reviews the report and comments on the current educational system’s ability to prepare students for the future,

Almost all of the respondents are united on one thing: the displacement of work by robots and AI is going to continue, and accelerate, over the coming decade. Where they split is in the societal response to that displacement.

The optimists predict that the economic boom that would result from vastly reduced costs to businesses would lead to the creation of new jobs in huge numbers, and a newfound premium being placed on the value of work that requires “uniquely human capabilities”. …

But the pessimists worry that the benefits of the labor replacement will accrue to those already wealthy enough to own the automatons, be that in the form of patents for algorithmic workers or the physical form of robots.

The ranks of the unemployed could swell, as people are laid off from work they are qualified in without the ability to retrain for careers where their humanity is a positive. And since this will happen in every economic sector simultaneously, civil unrest could be the result.

One thing many experts agreed on was the need for education to prepare for a post-automation world. “Only the best-educated humans will compete with machines,” said internet sociologist Howard Rheingold.

“And education systems in the US and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorise what is told them, preparing them for life in a 20th century factory.”

Then, Will Oremus’ Aug. 6, 2014 article for Slate suggests we are already experiencing displacement,

… the current jobless recovery, along with a longer-term trend toward income and wealth inequality, has some thinkers wondering whether the latest wave of automation is different from those that preceded it.

Massachusetts Institute of Technology researchers Andrew McAfee and Erik Brynjolfsson, among others, see a “great decoupling” of productivity from wages since about 2000 as technology outpaces human workers’ education and skills. Workers, in other words, are losing the race between education and technology. This may be exacerbating a longer-term trend in which capital has gained the upper hand on labor since the 1970s.

The results of the survey were fascinating. Almost exactly half of the respondents (48 percent) predicted that intelligent software will disrupt more jobs than it can replace. The other half predicted the opposite.

The lack of expert consensus on such a crucial and seemingly straightforward question is startling. It’s even more so given that history and the leading economic models point so clearly to one side of the question: the side that reckons society will adjust, new jobs will emerge, and technology will eventually leave the economy stronger.

More recently, Manish Singh has written, in a Jan. 31, 2015 (?) article for Beta News, about some of his concerns as a writer who could be displaced (Note: A link has been removed),

Robots are after my job. They’re after yours as well, but let us deal with my problem first. Associated Press, an American multinational nonprofit news agency, revealed on Friday [Jan. 30, 2015] that it published 3,000 articles in the last three months of 2014. The company could previously only publish 300 stories. It didn’t hire more journalists, neither did its existing headcount start writing more, but the actual reason behind this exponential growth is technology. All those stories were written by an algorithm.

The articles produced by the algorithm were accurate, and you won’t be able to separate them from stories written by humans. Good lord, all the stories were written in accordance with the AP Style Guide, something not all journalists follow (but arguably, should).

There has been a growth in the number of such software. Narrative Science, a Chicago-based company offers an automated narrative generator powered by artificial intelligence. The company’s co-founder and CTO, Kristian Hammond, said last year that he believes that by 2030, 90 percent of news could be written by computers. Forbes, a reputable news outlet, has used Narrative’s software. Some news outlets use it to write email newsletters and similar things.
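For anyone curious about what “written by an algorithm” means in practice, here is a minimal, purely illustrative sketch of template-based story generation. To be clear, this is not Automated Insights’ or Narrative Science’s actual software; the company name, the numbers, and the function name are all hypothetical. The point is simply that structured data (an earnings feed, a box score) can be slotted into pre-written, style-guide-compliant sentences.

```python
# A toy illustration of template-based story generation -- not Automated
# Insights' or Narrative Science's actual software. Structured data goes in,
# a short, readable news item comes out.

def earnings_story(company, quarter, eps, eps_expected, revenue_millions):
    """Turn a few structured facts into a short earnings recap."""
    beat_or_miss = "beat" if eps > eps_expected else "fell short of"
    return (
        f"{company} reported {quarter} earnings of ${eps:.2f} per share, "
        f"which {beat_or_miss} analyst expectations of ${eps_expected:.2f}. "
        f"Revenue for the quarter came in at ${revenue_millions:,.0f} million."
    )

if __name__ == "__main__":
    # Hypothetical numbers for a fictional company.
    print(earnings_story("Acme Corp.", "fourth-quarter",
                         eps=1.12, eps_expected=1.05, revenue_millions=4321))
```

Real systems add far more variation, error checking, and editorial oversight, but the underlying mechanics are roughly this mundane, which is partly why the volume can scale from 300 stories to 3,000 so easily.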

Singh also sounds a note of concern for other jobs by including this video (approximately 16 mins.) in his piece,

This video (Humans Need Not Apply) provides an excellent overview of the situation, although it seems C. G. P. Grey, the person who produced and posted the video on YouTube, holds a more pessimistic view of the future than some other futurists. C. G. P. Grey has a website here and is profiled here on Wikipedia.

One final bit: there’s a robot art critic, which some are suggesting is superior to human art critics, described in Thomas Gorton’s Jan. 16, 2015 (?) article ‘This robot reviews art better than most critics’ for Dazed Digital (Note: Links have been removed),

… the Novice Art Blogger, a Tumblr page set up by Matthew Plummer Fernandez. The British-Colombian artist programmed a bot with deep learning algorithms to analyse art; so instead of an overarticulate critic rambling about praxis, you get a review that gets down to the nitty-gritty about what exactly you see in front of you.

The results are charmingly honest: think a round robin of Google Translate text uninhibited by PR fluff, personal favouritism or the whims of a bad mood. We asked Novice Art Blogger to review our most recent Winter 2014 cover with Kendall Jenner. …

Beyond Kendall Jenner, it’s worth reading Gorton’s article for the interview with Plummer Fernandez.
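As a side note for readers who want to poke at the idea themselves, something in the spirit of the Novice Art Blogger can be approximated today with an off-the-shelf image-captioning model. The sketch below is not Plummer Fernandez’s actual bot (his was built earlier with different deep learning tools); the model name and image path are simply placeholders, and it assumes the Hugging Face transformers library with a PyTorch backend and Pillow installed.

```python
# A rough stand-in for a 'novice art critic': caption an image of an artwork
# with a publicly available model and phrase the caption as a naive review.
# This is not the Novice Art Blogger's actual code.

from transformers import pipeline

# BLIP is one publicly available captioning model; any image-to-text model
# supported by the pipeline would do.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def naive_review(image_path: str) -> str:
    """Describe an artwork the way a 'novice' would: literally."""
    caption = captioner(image_path)[0]["generated_text"]
    return f"At first glance, this appears to be {caption}."

if __name__ == "__main__":
    # 'artwork.jpg' is a placeholder path to any local image file.
    print(naive_review("artwork.jpg"))
```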