
Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO

While there’s a great deal of attention and hyperbole attached to artificial intelligence (AI) these days, it seems that neurotechnology may be quietly gaining some much-needed attention of its own. (For those who are interested, at the end of this posting, there’ll be a bit more information to round out what you’re seeing in the UNESCO material.)

Now, here’s news of an upcoming UNESCO (United Nations Educational, Scientific, and Cultural Organization) meeting on neurotechnology, from a June 6, 2023 UNESCO press release (also received via email), Note: Links have been removed,

The Member States of the Executive Board of UNESCO
have approved the proposal of the Director General to hold a global
dialogue to develop an ethical framework for the growing and largely
unregulated Neurotechnology sector, which may threaten human rights and
fundamental freedoms. A first international conference will be held at
UNESCO Headquarters on 13 July 2023.

“Neurotechnology could help solve many health issues, but it could
also access and manipulate people’s brains, and produce information
about our identities, and our emotions. It could threaten our rights to
human dignity, freedom of thought and privacy. There is an urgent need
to establish a common ethical framework at the international level, as
UNESCO has done for artificial intelligence,” said UNESCO
Director-General Audrey Azoulay.

UNESCO’s international conference, taking place on 13 July [2023], will start
exploring the immense potential of neurotechnology to solve neurological
problems and mental disorders, while identifying the actions needed to
address the threats it poses to human rights and fundamental freedoms.
The dialogue will involve senior officials, policymakers, civil society
organizations, academics and representatives of the private sector from
all regions of the world.

Lay the foundations for a global ethical framework

The dialogue will also be informed by a report by UNESCO’s
International Bioethics Committee (IBC) on the “Ethical Issues of
Neurotechnology”, and a UNESCO study providing first-time evidence on
the neurotechnology landscape, innovations, key actors worldwide and
major trends.

The ultimate goal of the dialogue is to advance a better understanding
of the ethical issues related to the governance of neurotechnology,
informing the development of the ethical framework to be approved by 193
member states of UNESCO – similar to the way in which UNESCO
established the global ethical frameworks on the human genome (1997),
human genetic data (2003) and artificial intelligence (2021).

UNESCO’s global standard on the Ethics of Artificial Intelligence has
been particularly effective and timely, given the latest developments
related to Generative AI, the pervasiveness of AI technologies and the
risks they pose to people, democracies, and jobs. The convergence of
neural data and artificial intelligence poses particular challenges, as
already recognized in UNESCO’s AI standard.

Neurotech could reduce the burden of disease…

Neurotechnology covers any kind of device or procedure which is designed
to “access, monitor, investigate, assess, manipulate, and/or emulate
the structure and function of neural systems”. [1] Neurotechnological
devices range from “wearables”, to non-invasive brain computer
interfaces such as robotic limbs, to brain implants currently being
developed [2] with the goal of treating disabilities such as paralysis.

One in eight people worldwide live with a mental or neurological
disorder, triggering care-related costs that account for up to a third
of total health expenses in developed countries. These burdens are
growing in low- and middle-income countries too. Globally these expenses
are expected to grow – the number of people aged over 60 is projected
to double by 2050 to 2.1 billion (WHO 2022). Neurotechnology has the
vast potential to reduce the number of deaths and disabilities caused by
neurological disorders, such as Epilepsy, Alzheimer’s, Parkinson’s
and Stroke.

… but also threaten Human Rights

Without ethical guardrails, these technologies can pose serious risks, as
brain information can be accessed and manipulated, threatening
fundamental rights and fundamental freedoms, which are central to the
notion of human identity, freedom of thought, privacy, and memory. In
its report published in 2021 [3], UNESCO’s IBC documents these risks
and proposes concrete actions to address them.

Neural data – which capture the individual’s reactions and basic
emotions – is in high demand in consumer markets. Unlike the data
gathered on us by social media platforms, most neural data is generated
unconsciously, therefore we cannot give our consent for its use. If
sensitive data is extracted, and then falls into the wrong hands, the
individual may suffer harmful consequences.

Brain-Computer-Interfaces (BCIs) implanted at a time during which a
child or teenager is still undergoing neurodevelopment may disrupt the
‘normal’ maturation of the brain. It may be able to transform young
minds, shaping their future identity with long-lasting, perhaps
permanent, effects.

Memory modification techniques (MMT) may enable scientists to alter the
content of a memory, reconstructing past events. For now, MMT relies on
the use of drugs, but in the future it may be possible to insert chips
into the brain. While this could be beneficial in the case of
traumatised people, such practices can also distort an individual’s
sense of personal identity.

Risk of exacerbating global inequalities and generating new ones

Currently 50% of Neurotech Companies are in the US, and 35% in Europe
and the UK. Because neurotechnology could usher in a new generation of
‘super-humans’, this would further widen the education, skills, wealth
and opportunities’ gap within and between countries, giving those with
the most advanced technology an unfair advantage.

UNESCO’s Ethics of neurotechnology webpage can be found here. As for the July 13, 2023 dialogue/conference, here are some of the details from UNESCO’s International Conference on the Ethics of Neurotechnology webpage,

UNESCO will organize an International Conference on the Ethics of Neurotechnology on the theme “Building a framework to protect and promote human rights and fundamental freedoms” at UNESCO Headquarters in Paris, on 13 July 2023, from 9:00 [CET; Central European Time] in Room I.

The Conference will explore the immense potential of neurotechnology and address the ethical challenges it poses to human rights and fundamental freedoms. It will bring together policymakers and experts, representatives of civil society and UN organizations, academia, media, and private sector companies, to prepare a solid foundation for an ethical framework on the governance of neurotechnology.

UNESCO International Conference on Ethics of Neurotechnology: Building a framework to protect and promote human rights and fundamental freedoms
13 July 2023, 9:30 am – 6:30 pm [CET; Central European Time]
Location: UNESCO Headquarters, Paris, France
Room: Room I
Type: Cat II – Intergovernmental meeting, other than international conference of States
Arrangement type: Hybrid
Language(s): French, Spanish, English, Arabic
Contact: Rajarajeswari Pajany


A high-level session with ministers and policymakers focusing on policy actions and international cooperation will be featured in the Conference. Renowned experts will also be invited to discuss technological advancements in neurotechnology, ethical challenges, and human rights implications. Two fireside chats will be organized to enrich the discussions, focusing on the private sector, public awareness-raising, and public engagement. The Conference will also feature a new study from UNESCO’s Social and Human Sciences Sector shedding light on innovations in neurotechnology, key actors worldwide and key areas of development.

As one of the most promising technologies of our time, neurotechnology is providing new treatments and improving preventative and therapeutic options for millions of individuals suffering from neurological and mental illness. Neurotechnology is also transforming other aspects of our lives, from student learning and cognition to virtual and augmented reality systems and entertainment. While we celebrate these unprecedented opportunities, we must be vigilant against new challenges arising from the rapid and unregulated development and deployment of this innovative technology, including among others the risks to mental integrity, human dignity, personal identity, autonomy, fairness and equity, and mental privacy. 

UNESCO has been at the forefront of promoting an ethical approach to neurotechnology. UNESCO’s International Bioethics Committee (IBC) has examined the benefits and drawbacks from an ethical perspective in a report published in December 2021. The Organization has also led UN-wide efforts on this topic, collaborating with other agencies and academic institutions to organize expert roundtables, raise public awareness and produce publications. With a global mandate on bioethics and ethics of science and technology, UNESCO has been asked by the IBC, its expert advisory body, to consider developing a global standard on this topic.

A July 13, 2023 agenda and a little Canadian content

I have a link to the ‘provisional programme’ for “Towards an Ethical Framework in the Protection and Promotion of Human Rights and Fundamental Freedoms,” the July 13, 2023 UNESCO International Conference on Ethics of Neurotechnology. Keeping in mind that this could (and likely will) change,

13 July 2023, Room I,
UNESCO HQ Paris, France,

9:00–9:15 Welcoming Remarks (TBC)
• António Guterres, Secretary-General of the United Nations
• Audrey Azoulay, Director-General of UNESCO

9:15–10:00 Keynote Addresses (TBC)
• Gabriel Boric, President of Chile
• Narendra Modi, Prime Minister of India
• Pedro Sánchez Pérez-Castejón, Prime Minister of Spain
• Volker Türk, UN High Commissioner for Human Rights
• Amandeep Singh Gill, UN Secretary-General’s Envoy on Technology

10:15–11:00 Scene-Setting Address

11:00–13:00 High-Level Session: Regulations and policy actions

14:30–15:30 Expert Session: Technological advancement and opportunities

15:45–16:30 Fireside Chat: Launch of the UNESCO publication “Unveiling the neurotechnology landscape: scientific advancements, innovations and major trends”

16:30–17:30 Expert Session: Ethical challenges and human rights implications

17:30–18:15 Fireside Chat: “Why neurotechnology matters for all”

18:15–18:30 Closing Remarks

While I haven’t included the speakers’ names (for the most part), I do want to note some Canadian participation in the person of Dr. Judy Illes from the University of British Columbia. She’s a Professor of Neurology, Distinguished University Scholar in Neuroethics, Director of Neuroethics Canada, and President of the International Brain Initiative (IBI).

Illes is in the “Expert Session: Ethical challenges and human rights implications.”

If you have time, do look at the provisional programme just to get a sense of the range of speakers and their involvement in an astonishing array of organizations. E.g., there’s the IBI (in Judy Illes’s bio), which at this point is largely (and surprisingly) supported by (from About Us) “Fonds de recherche du Québec, and the Institute of Neuroscience, Mental Health and Addiction of the Canadian Institutes of Health Research. Operational support for the IBI is also provided by the Japan Brain/MINDS Beyond and WorldView Studios.”

More food for thought

Based on the publicly available agendas, neither the UNESCO July 2023 meeting, which tilts, understandably, toward social justice issues vis-à-vis neurotechnology, nor the Canadian Science Policy Centre (CSPC) May 2023 meeting (see my May 12, 2023 posting: Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023) seems to mention practical matters such as an implant company going out of business. Still, it’s possible the topic will be raised at the UNESCO conference. Unfortunately, the May 2023 CSPC panel has not been posted online.

(See my April 5, 2022 posting “Going blind when your neural implant company flirts with bankruptcy [long read].” Even skimming it will give you pause.) The 2019 OECD Recommendation on Responsible Innovation in Neurotechnology doesn’t cover/mention the issue of business bankruptcy either.

Taking a look at business practices seems particularly urgent given this news from a May 25, 2023 article by Rachael Levy, Marisa Taylor, and Akriti Sharma for Reuters, Note: A link has been removed,

Elon Musk’s Neuralink received U.S. Food and Drug Administration (FDA) clearance for its first-in-human clinical trial, a critical milestone for the brain-implant startup as it faces U.S. probes over its handling of animal experiments.

The FDA approval “represents an important first step that will one day allow our technology to help many people,” Neuralink said in a tweet on Thursday, without disclosing details of the planned study. It added it is not recruiting for the trial yet and said more details would be available soon.

The FDA acknowledged in a statement that the agency cleared Neuralink to use its brain implant and surgical robot for trials on patients but declined to provide more details.

Neuralink and Musk did not respond to Reuters requests for comment.

The critical milestone comes as Neuralink faces federal scrutiny [emphasis mine] following Reuters reports about the company’s animal experiments.

Neuralink employees told Reuters last year that the company was rushing and botching surgeries on monkeys, pigs and sheep, resulting in more animal deaths [emphasis mine] than necessary, as Musk pressured staff to receive FDA approval. The animal experiments produced data intended to support the company’s application for human trials, the sources said.

If you have time, it’s well worth reading the article in its entirety. Neuralink is being investigated for a number of alleged violations.

Slightly more detail has been added by a May 26, 2023 Associated Press (AP) article on the Canadian Broadcasting Corporation’s news online website,

Elon Musk’s brain implant company, Neuralink, says it’s gotten permission from U.S. regulators to begin testing its device in people.

The company made the announcement on Twitter Thursday evening but has provided no details about a potential study, which was not listed on the U.S. government database of clinical trials.

Officials with the Food and Drug Administration (FDA) wouldn’t confirm or deny whether it had granted the approval, but press officer Carly Kempler said in an email that the agency “acknowledges and understands” that Musk’s company made the announcement. [emphases mine]

The AP article offers additional context on the international race to develop brain-computer interfaces.

Update: It seems the FDA gave its approval later on May 26, 2023. (See the May 26, 2023 updated Reuters article by Rachael Levy, Marisa Taylor, and Akriti Sharma and/or David Tuffley’s (lecturer at Griffith University) May 29, 2023 essay on The Conversation.)

For anyone who’s curious about previous efforts to examine the ethics and social implications of implants, prosthetics (Note: Increasingly, prosthetics include a neural component), and the brain, I have a couple of older posts: “Prosthetics and the human brain,” a March 8, 2013 posting, and “The ultimate DIY: ‘How to build a robotic man’ on BBC 4,” a January 30, 2013 posting.

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s). *What follows the ‘non-human authors’ section is essentially a survey of the situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s precursor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.
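Since the policies described above all hinge on authors documenting their use of AI tools, it may help to see what such a disclosure could look like in practice. The wording below is my own hypothetical example, not language prescribed by JAMA, Nature, or any other publisher; check the specific journal’s instructions, since requirements differ on where (Methods vs. Acknowledgments) and how the disclosure should appear:

“ChatGPT (GPT-4, OpenAI) was used to improve the grammar and readability of a draft of this manuscript. The authors reviewed and edited all AI-assisted text and take full responsibility for the content of the published article.”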

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society, founder of the ASU Future of Being Human initiative, and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard) in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.

If you have time, I recommend reading Ornes’s March 6, 2023 article.
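For anyone who finds the article’s description of LLMs abstract (machines that “accept a string of text as input and predict what comes next, over and over”), here’s a minimal sketch of that next-token loop in Python. The choice of the openly available GPT-2 model and the Hugging Face transformers library is mine, for illustration only; production chatbots layer sampling strategies, instruction tuning, and safety filters on top of vastly larger models.

# A bare-bones next-token-prediction loop, the core mechanism the article describes.
# Assumes: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The movie is"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):  # generate 20 tokens, one at a time
    with torch.no_grad():
        logits = model(ids).logits       # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()     # greedily take the single most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))          # the prompt plus the model's continuation

Fittingly, a small model decoded greedily like this tends to produce exactly the kind of repetitive, surreal output the article quotes (“The movie is a movie about a man who is a man …”). The “zero-shot” and “few-shot” abilities mentioned in the excerpt are this same loop run on a cleverer prompt: few-shot prompting just means the input string already contains a handful of worked examples before the actual question.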

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10-minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles. The first, from May 5, 2023, is “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts,” Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture, “Digital versus biological intelligence: Reasons for concern about AI,” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project, and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to nonhuman authors, where this started: Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

Should you live in Vancouver (Canada) and be attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and the likes of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context; his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023; it was far briefer than the previous March 2023 warning. From the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.

Need to improve oversight on chimeric human-animal research

It seems chimeras are of more interest these days. In all likelihood that has something to do with the fellow who received a transplant of a pig’s heart in January 2022 (he died in March 2022).

For those who aren’t familiar with the term, a chimera is an entity with two different DNA (deoxyribonucleic acid) identities. In short, if you get a DNA sample from the heart, it’s different from a DNA sample obtained from a cheek swab. This contrasts with a hybrid such as a mule (donkey/horse) whose DNA samples show a consistent identity throughout its body.

A December 12, 2022 The Hastings Center news release (also on EurekAlert) announces a special report,

A new report on the ethics of crossing species boundaries by inserting human cells into nonhuman animals – research surrounded by debate – makes recommendations clarifying the ethical issues and calling for improved oversight of this work.

The report, “Creating Chimeric Animals — Seeking Clarity On Ethics and Oversight,” was developed by an interdisciplinary team, with funding from the National Institutes of Health. Principal investigators are Josephine Johnston and Karen Maschke, research scholars at The Hastings Center, and Insoo Hyun, director of the Center for Life Sciences and Public Learning at the Museum of Science in Boston, formerly of Case Western Reserve University.

Advances in human stem cell science and gene editing enable scientists to insert human cells more extensively and precisely into nonhuman animals, creating “chimeric” animals, embryos, and other organisms that contain a mix of human and nonhuman cells.

Many people hope that this research will yield enormous benefits, including better models of human disease, inexpensive sources of human eggs and embryos for research, and sources of tissues and organs suitable for transplantation into humans. 

But there are ethical concerns about this type of research, which raise questions such as whether the moral status of nonhuman animals is altered by the insertion of human stem cells, whether these studies should be subject to additional prohibitions or oversight, and whether this kind of research should be done at all.

The report found that:

Animal welfare is a primary ethical issue and should be a focus of ethical and policy analysis as well as the governance and oversight of chimeric research.

Chimeric studies raise the possibility of unique or novel harms resulting from the insertion and development of human stem cells in nonhuman animals, particularly when those cells develop in the brain or central nervous system.

Oversight and governance of chimeric research are siloed, and public communication is minimal. Public communication should be improved, communication between the different committees involved in oversight at each institution should be enhanced, and a national mechanism created for those involved in oversight of these studies. 

Scientists, journalists, bioethicists, and others writing about chimeric research should use precise and accessible language that clarifies rather than obscures the ethical issues at stake. The terms “chimera,” which in Greek mythology refers to a fire-breathing monster, and “humanization” are examples of ethically laden or overly broad language to be avoided.

The Research Team

The Hastings Center

• Josephine Johnston
• Karen J. Maschke
• Carolyn P. Neuhaus
• Margaret M. Matthews
• Isabel Bolo

Case Western Reserve University
• Insoo Hyun (now at Museum of Science, Boston)
• Patricia Marshall
• Kaitlynn P. Craig

The Work Group

• Kara Drolet, Oregon Health & Science University
• Henry T. Greely, Stanford University
• Lori R. Hill, MD Anderson Cancer Center
• Amy Hinterberger, King’s College London
• Elisa A. Hurley, Public Responsibility in Medicine and Research
• Robert Kesterson, University of Alabama at Birmingham
• Jonathan Kimmelman, McGill University
• Nancy M. P. King, Wake Forest University School of Medicine
• Geoffrey Lomax, California Institute for Regenerative Medicine
• Melissa J. Lopes, Harvard University Embryonic Stem Cell Research Oversight Committee
• P. Pearl O’Rourke, Harvard Medical School
• Brendan Parent, NYU Grossman School of Medicine
• Steven Peckman, University of California, Los Angeles
• Monika Piotrowska, State University of New York at Albany
• May Schwarz, The Salk Institute for Biological Studies
• Jeff Sebo, New York University
• Chris Stodgell, University of Rochester
• Robert Streiffer, University of Wisconsin-Madison
• Lorenz Studer, Memorial Sloan Kettering Cancer Center
• Amy Wilkerson, The Rockefeller University

Here’s a link to and a citation for the report,

Creating Chimeric Animals: Seeking Clarity on Ethics and Oversight, edited by Karen J. Maschke, Margaret M. Matthews, Kaitlynn P. Craig, Carolyn P. Neuhaus, Insoo Hyun, and Josephine Johnston. The Hastings Center Report, Volume 52, Issue S2 (Special Report), November-December 2022. First published: December 9, 2022.

This report is open access.

Public can now vote for 2023 Morgridge (Institute for Research) Ethics Cartooning Competition

A February 21, 2023 Morgridge Institute for Research news release on EurekAlert announced open voting in their ethics cartooning competition,

Eighteen cartoons have been selected as finalists in the 2023 Ethics Cartooning Competition, an annual contest sponsored by the Morgridge Institute for Research. 

Participants from the University of Wisconsin-Madison and affiliated biomedical centers or institutes submitted their work; a panel of judges then selected the final cartoons for display to the public, who are invited to vote and help determine the 2023 winners.

This year’s cartoons depict a variety of research ethics topics, such as the ethics of scientific publishing, research funding and environments, questionable research practices, drug pricing, the ethics of experimenting on animals, social impacts of scientific research, and scientists as responsible members of society.

The Morgridge Ethics Cartooning Competition, developed by Morgridge Bioethics Scholar in Residence Pilar Ossorio, encourages scientists to shed light on timely or recurring issues that arise in scientific research.

“Ethical issues are all around us,” says Ossorio. “An event like the competition encourages people to identify some of those issues, perhaps talk about them with friends and colleagues, and think about how to communicate about those issues with a broader community of people.”

Public voting is open until March 10, 2023: https://morgridge.org/story/ethics-cartooning-contest-vote-2023/

Some of the cartoons feature biting commentary,

https://morgridge.org/wp-content/uploads/2023-R.png

The one above hit home as I commented on a local (Vancouver, Canada) billionaire’s (Chip Wilson of Lululemon) announcement that he was spending $100M on research to treat a rare disease (facio-scapulo-humeral muscular dystrophy [FSHD]) he has. (See my April 5, 2022 posting, scroll down about 80% of the way to the subhead, Money makes the world go around.)

And this too caught my eye,

https://morgridge.org/wp-content/uploads/2023-G.png

It reminds me that I’ve been meaning to do a piece on science and racism for the last few years. Maybe this year, eh?

In the meantime, go vote; there are another 16 to choose from and you have until March 10, 2023: https://morgridge.org/story/ethics-cartooning-contest-vote-2023/

Implantable living pharmacy

I stumbled across a very interesting US Defense Advanced Research Projects Agency (DARPA) project (from an August 30, 2021 posting on Northwestern University’s Rivnay Lab [a laboratory for organic bioelectronics] blog),

Our lab has received a cooperative agreement with DARPA to develop a wireless, fully implantable ‘living pharmacy’ device that could help regulate human sleep patterns. The project is through DARPA’s BTO (biotechnology office)’s Advanced Acclimation and Protection Tool for Environmental Readiness (ADAPTER) program, meant to address physical challenges of travel, such as jetlag and fatigue.

The device, called NTRAIN (Normalizing Timing of Rhythms Across Internal Networks of Circadian Clocks), would control the body’s circadian clock, reducing the time it takes for a person to recover from disrupted sleep/wake cycles by as much as half the usual time.

The project spans five institutions: Northwestern, Rice University, Carnegie Mellon, University of Minnesota, and Blackrock Neurotech.

Prior to the Aug. 30, 2021 posting, Amanda Morris wrote a May 13, 2021 article for Northwestern NOW (university magazine), which provides more details about the project, Note: A link has been removed,

The first phase of the highly interdisciplinary program will focus on developing the implant. The second phase, contingent on the first, will validate the device. If that milestone is met, then researchers will test the device in human trials, as part of the third phase. The full funding corresponds to $33 million over four-and-a-half years. 

Nicknamed the “living pharmacy,” the device could be a powerful tool for military personnel, who frequently travel across multiple time zones, and shift workers including first responders, who vacillate between overnight and daytime shifts.

Combining synthetic biology with bioelectronics, the team will engineer cells to produce the same peptides that the body makes to regulate sleep cycles, precisely adjusting timing and dose with bioelectronic controls. When the engineered cells are exposed to light, they will generate precisely dosed peptide therapies. 

“This control system allows us to deliver a peptide of interest on demand, directly into the bloodstream,” said Northwestern’s Jonathan Rivnay, principal investigator of the project. “No need to carry drugs, no need to inject therapeutics and — depending on how long we can make the device last — no need to refill the device. It’s like an implantable pharmacy on a chip that never runs out.” 

Beyond controlling circadian rhythms, the researchers believe this technology could be modified to release other types of therapies with precise timing and dosing for potentially treating pain and disease. The DARPA program also will help researchers better understand sleep/wake cycles, in general.

“The experiments carried out in these studies will enable new insights into how internal circadian organization is maintained,” said Turek [Fred W. Turek], who co-leads the sleep team with Vitaterna [Martha Hotz Vitaterna]. “These insights will lead to new therapeutic approaches for sleep disorders as well as many other physiological and mental disorders, including those associated with aging where there is often a spontaneous breakdown in temporal organization.” 

For those who like to dig even deeper, Dieynaba Young’s June 17, 2021 article for Smithsonian Magazine (GetPocket.com link to article) provides greater context and greater satisfaction, Note: Links have been removed,

In 1926, Fritz Kahn completed Man as Industrial Palace, the preeminent lithograph in his five-volume publication The Life of Man. The illustration shows a human body bustling with tiny factory workers. They cheerily operate a brain filled with switchboards, circuits and manometers. Below their feet, an ingenious network of pipes, chutes and conveyer belts make up the blood circulatory system. The image epitomizes a central motif in Kahn’s oeuvre: the parallel between human physiology and manufacturing, or the human body as a marvel of engineering.

An apparatus in the embryonic stage of development at the time of this writing in June of 2021—the so-called “implantable living pharmacy”—could have easily originated in Kahn’s fervid imagination. The concept is being developed by the Defense Advanced Research Projects Agency (DARPA) in conjunction with several universities, notably Northwestern and Rice. Researchers envision a miniaturized factory, tucked inside a microchip, that will manufacture pharmaceuticals from inside the body. The drugs will then be delivered to precise targets at the command of a mobile application. …

The implantable living pharmacy, which is still in the “proof of concept” stage of development, is actually envisioned as two separate devices—a microchip implant and an armband. The implant will contain a layer of living synthetic cells, along with a sensor that measures temperature, a short-range wireless transmitter and a photo detector. The cells are sourced from a human donor and reengineered to perform specific functions. They’ll be mass produced in the lab, and slathered onto a layer of tiny LED lights.

The microchip will be set with a unique identification number and encryption key, then implanted under the skin in an outpatient procedure. The chip will be controlled by a battery-powered hub attached to an armband. That hub will receive signals transmitted from a mobile app.

If a soldier wishes to reset their internal clock, they’ll simply grab their phone, log onto the app and enter their upcoming itinerary—say, a flight departing at 5:30 a.m. from Arlington, Virginia, and arriving 16 hours later at Fort Buckner in Okinawa, Japan. Using short-range wireless communications, the hub will receive the signal and activate the LED lights inside the chip. The lights will shine on the synthetic cells, stimulating them to generate two compounds that are naturally produced in the body. The compounds will be released directly into the bloodstream, heading towards targeted locations, such as a tiny, centrally-located structure in the brain called the suprachiasmatic nucleus (SCN) that serves as master pacemaker of the circadian rhythm. Whatever the target location, the flow of biomolecules will alter the natural clock. When the soldier arrives in Okinawa, their body will be perfectly in tune with local time.

The synthetic cells will be kept isolated from the host’s immune system by a membrane constructed of novel biomaterials, allowing only nutrients and oxygen in and only the compounds out. Should anything go wrong, they would swallow a pill that would kill the cells inside the chip only, leaving the rest of their body unaffected.

If you have the time, I recommend reading Young’s June 17, 2021 Smithsonian Magazine article (GetPocket.com link to article) in its entirety. Young goes on to discuss hacking, malware, ethical/societal issues, and more.
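Out of curiosity, I tried sketching what the hub-side scheduling logic in Young’s description might look like. To be clear, this is my own minimal sketch and not anything from the NTRAIN project (whose software isn’t public); the names (Itinerary, phase_shift_hours, pulse_schedule) and the three-hours-per-day shift limit are all my assumptions,

from dataclasses import dataclass
from datetime import datetime
from zoneinfo import ZoneInfo

@dataclass
class Itinerary:
    departure: datetime  # timezone-aware departure time
    arrival: datetime    # timezone-aware arrival time

def phase_shift_hours(itin: Itinerary) -> float:
    """Hours the internal clock must shift to match the destination."""
    origin = itin.departure.utcoffset().total_seconds() / 3600
    destination = itin.arrival.utcoffset().total_seconds() / 3600
    return destination - origin

def pulse_schedule(itin: Itinerary, max_shift_per_day: float = 3.0):
    """Spread the required shift over several days of timed LED pulses.

    max_shift_per_day is a guess; the article only claims the device
    could halve recovery time, not a specific re-entrainment rate.
    Each (day, hours) entry is what the hub would translate into a
    light exposure for the engineered cells.
    """
    total = phase_shift_hours(itin)
    days = max(1, int(abs(total) // max_shift_per_day) + 1)
    return [(day, total / days) for day in range(days)]

# The article's scenario: depart Arlington, Virginia at 5:30 a.m.,
# arrive 16 hours later at Fort Buckner in Okinawa, Japan.
trip = Itinerary(
    departure=datetime(2021, 6, 17, 5, 30, tzinfo=ZoneInfo("America/New_York")),
    arrival=datetime(2021, 6, 18, 10, 30, tzinfo=ZoneInfo("Asia/Tokyo")),
)
for day, shift in pulse_schedule(trip):
    print(f"Day {day}: shift internal clock by {shift:+.1f} h")

Run as written, the sketch computes a 13-hour eastward shift and spreads it over five days of pulses; the real system would presumably do something far more sophisticated with the suprachiasmatic nucleus’s response curves.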

There is an animation of Kahn’s original poster in a June 23, 2011 posting on openculture.com (also found on Vimeo: Der Mensch als Industriepalast [Man as Industrial Palace]).

Credits: Idea & Animation: Henning M. Lederer / led-r-r.net; Sound-Design: David Indge; and original poster art: Fritz Kahn.

FrogHeart’s 2022 comes to an end as 2023 comes into view

I look forward to 2023 and hope it will be as stimulating as 2022 proved to be. Here’s an overview of the year that was on this blog:

Sounds of science

It seems 2022 was the year that science discovered the importance of sound and the possibilities of data sonification. Neither is new, but this year seemed to signal a surge of interest, or maybe I just happened to stumble onto more of the stories than usual.

This is not an exhaustive list; you can check out my ‘Music’ category here for more. I have tried to include audio files with the postings, but it all depends on how accessible the researchers have made them.

Aliens on earth: machinic biology and/or biological machinery?

When I first started following stories in 2008 (?) about technology or machinery being integrated with the human body, it was mostly about assistive technologies such as neuroprosthetics. You’ll find most of this year’s material in the ‘Human Enhancement’ category or you can search the tag ‘machine/flesh’.

However, the line between biology and machine became a bit more blurry for me this year. You can see what’s happening in the titles listed below (you may recognize the xenobot story; there was an earlier version of xenobots featured here in 2021):

This was the story that shook me,

Are the aliens going to come from outer space or are we becoming the aliens?

Brains (biological and otherwise), AI, & our latest age of anxiety

As we integrate machines into our bodies, including our brains, there are new issues to consider:

  • Going blind when your neural implant company flirts with bankruptcy (long read) April 5, 2022 posting
  • US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs) September 21, 2022 posting

I hope the US National Academies issues a report on their “Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop” for 2023.

Meanwhile the race to create brainlike computers continues and I have a number of posts which can be found under the category of ‘neuromorphic engineering’ or you can use these search terms ‘brainlike computing’ and ‘memristors’.

On the artificial intelligence (AI) side of things, I finally broke down and added an ‘artificial intelligence (AI)’ category to this blog sometime between May and August 2021. Previously, I had used the ‘robots’ category as a catchall. There are other stories but these ones feature public engagement and policy (btw, it’s a Canadian Science Policy Centre event), respectively,

  • “The “We are AI” series gives citizens a primer on AI” March 23, 2022 posting
  • “Age of AI and Big Data – Impact on Justice, Human Rights and Privacy Zoom event on September 28, 2022 at 12 – 1:30 pm EDT” September 16, 2022 posting

These stories feature problems, which aren’t new but seem to be getting more attention,

While there have been issues over AI, the arts, and creativity previously, this year they sprang into high relief. The list starts with my two-part review of the Vancouver Art Gallery’s AI show; I share most of my concerns in part two. The third post covers intellectual property issues (mostly visual arts but literary arts get a nod too). The fourth post upends the discussion,

  • “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects” July 28, 2022 posting
  • “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations” July 28, 2022 posting
  • “AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK” October 24, 2022 posting
  • “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” August 30, 2022 posting

Interestingly, most of the concerns seem to be coming from the visual and literary arts communities; I haven’t come across major concerns from the music community. (The curious can check out Vancouver’s Metacreation Lab for Artificial Intelligence [located on a Simon Fraser University campus]. I haven’t seen any cautionary or warning essays there; it’s run by an AI and creativity enthusiast [professor Philippe Pasquier]. The dominant but not sole focus is art, i.e., music and AI.)

There is a ‘new kid on the block’ which has been attracting a lot of attention this month. If you’re curious about the latest and greatest AI anxiety,

  • Peter Csathy’s December 21, 2022 Yahoo News article (originally published in The WRAP) makes this proclamation in the headline “Chat GPT Proves That AI Could Be a Major Threat to Hollywood Creatives – and Not Just Below the Line | PRO Insight”
  • Mouhamad Rachini’s December 15, 2022 article for the Canadian Broadcasting Corporation’s (CBC) online news offers a more general overview of the ‘new kid’ along with an embedded CBC Radio file which runs approximately 19 mins. 30 secs. It’s titled “ChatGPT a ‘landmark event’ for AI, but what does it mean for the future of human labour and disinformation?” The chatbot’s developer, OpenAI, has been mentioned here many times, including the previously listed July 28, 2022 posting (part two of the VAG review) and the October 24, 2022 posting.

Opposite world (quantum physics in Canada)

Quantum computing made more of an impact here (my blog) than usual. It started in 2021 with the announcement of a National Quantum Strategy in the Canadian federal government budget for that year and gained some momentum in 2022:

  • “Quantum Mechanics & Gravity conference (August 15 – 19, 2022) launches Vancouver (Canada)-based Quantum Gravity Institute and more” July 26, 2022 posting Note: This turned into one of my ‘in depth’ pieces where I comment on the ‘Canadian quantum scene’ and highlight the appointment of an expert panel for the Council of Canadian Academies’ report on Quantum Technologies.
  • “Bank of Canada and Multiverse Computing model complex networks & cryptocurrencies with quantum computing” July 25, 2022 posting
  • “Canada, quantum technology, and a public relations campaign?” December 29, 2022 posting

This one was a bit of a puzzle with regard to placement in this end-of-year review; it’s quantum but it’s also about brainlike computing,

It’s getting hot in here

Fusion energy made some news this year.

There’s a Vancouver-area company, General Fusion, highlighted in both postings, and the October posting includes an embedded video of Canadian-born rapper Baba Brinkman’s “You Must LENR” [Low Energy Nuclear Reactions, sometimes Lattice Enabled Nanoscale Reactions, Cold Fusion, or CANR (Chemically Assisted Nuclear Reactions)].

BTW, fusion energy can generate temperatures up to 150 million degrees Celsius.

Ukraine, science, war, and unintended consequences

Here’s what you might expect,

These are the unintended consequences (from Rachel Kyte’s, Dean of the Fletcher School, Tufts University, December 26, 2022 essay on The Conversation [h/t December 27, 2022 news item on phys.org]), Note: Links have been removed,

Russian President Vladimir Putin’s war on Ukraine has reverberated through Europe and spread to other countries that have long been dependent on the region for natural gas. But while oil-producing countries and gas lobbyists are arguing for more drilling, global energy investments reflect a quickening transition to cleaner energy. [emphasis mine]

Call it the Putin effect – Russia’s war is speeding up the global shift away from fossil fuels.

In December [2022?], the International Energy Agency [IEA] published two important reports that point to the future of renewable energy.

First, the IEA revised its projection of renewable energy growth upward by 30%. It now expects the world to install as much solar and wind power in the next five years as it installed in the past 50 years.

The second report showed that energy use is becoming more efficient globally, with efficiency increasing by about 2% per year. As energy analyst Kingsmill Bond at the energy research group RMI noted, the two reports together suggest that fossil fuel demand may have peaked. While some low-income countries have been eager for deals to tap their fossil fuel resources, the IEA warns that new fossil fuel production risks becoming stranded, or uneconomic, in the next 20 years.

Kyte’s essay is not all ‘sweetness and light’ but it does provide a little optimism.

Kudos, nanotechnology, culture (pop & otherwise), fun, and a farewell in 2022

This one was a surprise for me,

Sometimes I like to know where the money comes from and I was delighted to learn of the Ărramăt Project funded through the federal government’s New Frontiers in Research Fund (NFRF). Here’s more about the Ărramăt Project from the February 14, 2022 posting,

“The Ărramăt Project is about respecting the inherent dignity and interconnectedness of peoples and Mother Earth, life and livelihood, identity and expression, biodiversity and sustainability, and stewardship and well-being. Arramăt is a word from the Tamasheq language spoken by the Tuareg people of the Sahel and Sahara regions which reflects this holistic worldview.” (Mariam Wallet Aboubakrine)

Over 150 Indigenous organizations, universities, and other partners will work together to highlight the complex problems of biodiversity loss and its implications for health and well-being. The project Team will take a broad approach and be inclusive of many different worldviews and methods for research (i.e., intersectionality, interdisciplinary, transdisciplinary). Activities will occur in 70 different kinds of ecosystems that are also spiritually, culturally, and economically important to Indigenous Peoples.

The project is led by Indigenous scholars and activists …

Kudos to the federal government and all those involved in the Salmon science camps, the Ărramăt Project, and other NFRF projects.

There are many other nanotechnology posts here but this appeals to my need for something lighter at this point,

  • “Say goodbye to crunchy (ice crystal-laden) in ice cream thanks to cellulose nanocrystals (CNC)” August 22, 2022 posting

The following posts tend to be culture-related, high and/or low but always with a science/nanotechnology edge,

Sadly, it looks like 2022 is the last year that Ada Lovelace Day is to be celebrated.

… this year’s Ada Lovelace Day is the final such event due to lack of financial backing. Suw Charman-Anderson told the BBC [British Broadcasting Corporation] the reason it was now coming to an end was:

You can read more about it here:

In the rearview mirror

Here are a few things that didn’t fit under the previous heads but stood out for me this year. Science podcasts, which were a big feature in 2021, also proliferated in 2022. I think they might have peaked; now (in 2023) we’ll see what survives.

Nanotechnology, the main subject on this blog, continues to be investigated and increasingly integrated into products. You can search the ‘nanotechnology’ category here for posts of interest, something I just tried. It surprises even me (I should know better) how broadly nanotechnology is researched and applied.

If you want a nice tidy list, Hamish Johnston in a December 29, 2022 posting on the Physics World Materials blog has this “Materials and nanotechnology: our favourite research in 2022,” Note: Links have been removed,

“Inherited nanobionics” makes its debut

The integration of nanomaterials with living organisms is a hot topic, which is why this research on “inherited nanobionics” is on our list. Ardemis Boghossian at EPFL [École polytechnique fédérale de Lausanne] in Switzerland and colleagues have shown that certain bacteria will take up single-walled carbon nanotubes (SWCNTs). What is more, when the bacteria cells split, the SWCNTs are distributed amongst the daughter cells. The team also found that bacteria containing SWCNTs produce significantly more electricity when illuminated with light than do bacteria without nanotubes. As a result, the technique could be used to grow living solar cells, which, as well as generating clean energy, also have a negative carbon footprint when it comes to manufacturing.

Getting back to Canada, I’m finding Saskatchewan featured more prominently here. They do a good job of promoting their science, especially the folks at the Canadian Light Source (CLS), Canada’s synchrotron, in Saskatoon. Canadian live science outreach events seem to be coming back (slowly). Cautious organizers (who have a few dollars to spare) are also enthusiastic about hybrid events which combine online and live outreach.

After what seems like a long pause, I’m stumbling across more international news, e.g. “Nigeria and its nanotechnology research” published December 19, 2022 and “China and nanotechnology” published September 6, 2022. I think there’s also an Iran piece here somewhere.

With that …

Making resolutions in the dark

Hopefully this year I will catch up with the Council of Canadian Academies (CCA) output and finally review a few of their 2021 reports such as Leaps and Boundaries, a report on artificial intelligence applied to science inquiry, and, perhaps, Powering Discovery, a report on research funding and the Natural Sciences and Engineering Research Council of Canada.

Given what appears to be a renewed campaign to have germline editing (gene editing which affects all of your descendants) approved in Canada, I might even reach back to a late 2020 CCA report, Research to Reality, on somatic gene and engineered cell therapies. It’s not the same as germline editing but gene editing exists on a continuum.

For anyone who wants to see the CCA reports for themselves they can be found here (both in progress and completed).

I’m also going to be paying more attention to how public relations and special interests influence what science is covered and how it’s covered. In doing this 2022 roundup, I noticed that I featured an overview of fusion energy not long before the breakthrough. Indirect influence on this blog?

My post was precipitated by an article by Alex Pasternack in Fast Company. I’m wondering what precipitated Pasternack’s interest in fusion energy, since his self-description on the Huffington Post website states this: “… focus on the intersections of science, technology, media, politics, and culture. My writing about those and other topics—transportation, design, media, architecture, environment, psychology, art, music … .”

He might simply have received a press release that stimulated his imagination and/or been approached by a communications specialist or publicist with an idea. There’s a reason why there are so many public relations/media relations jobs and agencies.

Que sera, sera (Whatever will be, will be)

I can confidently predict that 2023 has some surprises in store. I can also confidently predict that the European Union’s big research projects (1B euros each in funding for the Graphene Flagship and Human Brain Project over a ten-year period) will sunset in 2023, ten years after they were first announced in 2013. Unless the powers that be extend the funding past 2023.

I expect the Canadian quantum community to provide more fodder for me in the form of a 2023 report on Quantum Technologies from the Council of Canadian Academies, if nothing else.

I’ve already featured these 2023 science events but just in case you missed them,

  • 2023 Preview: Bill Nye the Science Guy’s live show and Marvel Avengers S.T.A.T.I.O.N. (Scientific Training And Tactical Intelligence Operative Network) coming to Vancouver (Canada) November 24, 2022 posting
  • September 2023: Auckland, Aotearoa New Zealand set to welcome women in STEM (science, technology, engineering, and mathematics) November 15, 2022 posting

Getting back to this blog, it may not seem like a new year during the first few weeks of 2023 as I have quite the stockpile of draft posts. At this point I have drafts that are dated from June 2022 and expect to be burning through them so as not to fall further behind but will be interspersing them, occasionally, with more current posts.

Most importantly: a big thank you to everyone who drops by and reads (and sometimes even comments) on my posts!!! It’s very much appreciated and, on that note, I wish you all the best for 2023.

Kempner Institute for the Study of Natural and Artificial Intelligence launched at Harvard University and University of Manchester pushes the boundaries of smart robotics and AI

Before getting to the two news items, it might be a good idea to note that ‘artificial intelligence (AI)’ and ‘robot’ are not synonyms although they are often used that way, even by people who should know better. (sigh … I do it too)

A robot may or may not be animated with artificial intelligence while artificial intelligence algorithms may be installed on a variety of devices such as a phone or a computer or a thermostat or a … .

It’s something to bear in mind when reading about the two new institutions being launched. Now, on to Harvard University.

Kempner Institute for the Study of Natural and Artificial Intelligence

A September 23, 2022 Chan Zuckerberg Initiative (CZI) news release (also on EurekAlert) announces a symposium to launch a new institute close to Mark Zuckerberg’s heart,

On Thursday [September 22, 2022], leadership from the Chan Zuckerberg Initiative (CZI) and Harvard University celebrated the launch of the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University with a symposium on Harvard’s campus. Speakers included CZI Head of Science Stephen Quake, President of Harvard University Lawrence Bacow, Provost of Harvard University Alan Garber, and Kempner Institute co-directors Bernardo Sabatini and Sham Kakade. The event also included remarks and panels from industry leaders in science, technology, and artificial intelligence, including Bill Gates, Eric Schmidt, Andy Jassy, Daniel Huttenlocher, Sam Altman, Joelle Pineau, Sangeeta Bhatia, and Yann LeCun, among many others.

The Kempner Institute will seek to better understand the basis of intelligence in natural and artificial systems. Its bold premise is that the two fields are intimately interconnected; the next generation of AI will require the same principles that our brains use for fast, flexible natural reasoning, and understanding how our brains compute and reason requires theories developed for AI. The Kempner Institute will study AI systems, including artificial neural networks, to develop both principled theories [emphasis mine] and a practical understanding of how these systems operate and learn. It will also focus on research topics such as learning and memory, perception and sensation, brain function, and metaplasticity. The Institute will recruit and train future generations of researchers from undergraduates and graduate students to post-docs and faculty — actively recruiting from underrepresented groups at every stage of the pipeline — to study intelligence from biological, cognitive, engineering, and computational perspectives.

CZI Co-Founder and Co-CEO Mark Zuckerberg [chairman and chief executive officer of Meta/Facebook] said: “The Kempner Institute will be a one-of-a-kind institute for studying intelligence and hopefully one that helps us discover what intelligent systems really are, how they work, how they break and how to repair them. There’s a lot of exciting implications because once you understand how something is supposed to work and how to repair it once it breaks, you can apply that to the broader mission the Chan Zuckerberg Initiative has to empower scientists to help cure, prevent or manage all diseases.”

CZI Co-Founder and Co-CEO Priscilla Chan said: “Just attending this school meant the world to me. But to stand on this stage and to be able to give something back is truly a dream come true … All of this progress starts with building one fundamental thing: a Kempner community that’s diverse, multi-disciplinary and multi-generational, because incredible ideas can come from anyone. If you bring together people from all different disciplines to look at a problem and give them permission to articulate their perspective, you might start seeing insights or solutions in a whole different light. And those new perspectives lead to new insights and discoveries and generate new questions that can lead an entire field to blossom. So often, that momentum is what breaks the dam and tears down old orthodoxies, unleashing new floods of new ideas that allow us to progress together as a society.”

CZI Head of Science Stephen Quake said: “It’s an honor to partner with Harvard in building this extraordinary new resource for students and science. This is a once-in-a-generation moment for life sciences and medicine. We are living in such an extraordinary and exciting time for science. Many breakthrough discoveries are going to happen not only broadly but right here on this campus and at this institute.”

CZI’s 10-year vision is to advance research and develop technologies to observe, measure, and analyze any biological process within the human body — across spatial scales and in real time. CZI’s goal is to accelerate scientific progress by funding scientific research to advance entire fields; working closely with scientists and engineers at partner institutions like the Chan Zuckerberg Biohub and Chan Zuckerberg Institute for Advanced Biological Imaging to do the research that can’t be done in conventional environments; and building and democratizing next-generation software and hardware tools to drive biological insights and generate more accurate and biologically important sources of data.

President of Harvard University Lawrence Bacow said: “Here we are with this incredible opportunity that Priscilla Chan and Mark Zuckerberg have given us to imagine taking what we know about the brain, neuroscience and how to model intelligence and putting them together in ways that can inform both, and can truly advance our understanding of intelligence from multiple perspectives.”

Kempner Institute Co-Director and Gordon McKay Professor of Computer Science and of Statistics at the Harvard John A. Paulson School of Engineering and Applied Sciences Sham Kakade said: “Now we begin assembling a world-leading research and educational program at Harvard that collectively tries to understand the fundamental mechanisms of intelligence and seeks to apply these new technologies for the benefit of humanity … We hope to create a vibrant environment for all of us to engage in broader research questions … We want to train the next generation of leaders because those leaders will go on to do the next set of great things.”

Kempner Institute Co-Director and the Alice and Rodman W. Moorhead III Professor of Neurobiology at Harvard Medical School Bernardo Sabatini said: “We’re blending research, education and computation to nurture, raise up and enable any scientist who is interested in unraveling the mysteries of the brain. This field is a nascent and interdisciplinary one, so we’re going to have to teach neuroscience to computational biologists, who are going to have to teach machine learning to cognitive scientists and math to biologists. We’re going to do whatever is necessary to help each individual thrive and push the field forward … Success means we develop mathematical theories that explain how our brains compute and learn, and these theories should be specific enough to be testable and useful enough to start to explain diseases like schizophrenia, dyslexia or autism.”

About the Chan Zuckerberg Initiative

The Chan Zuckerberg Initiative was founded in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education, to addressing the needs of our communities. Through collaboration, providing resources and building technology, our mission is to help build a more inclusive, just and healthy future for everyone. For more information, please visit chanzuckerberg.com.

Principled theories, eh. I don’t see a single mention of ethicists or anyone in the social sciences or the humanities or the arts. How are scientists and engineers who have no training in or education in or, even, an introduction to ethics or social impacts or psychology going to manage this?

Mark Zuckerberg’s approach to these issues was something along the lines of “it’s easier to ask for forgiveness than to ask for permission.” I understand there have been changes, but it took far too long to recognize the damage, let alone attempt to address it.

If you want to gain a little more insight into the Kempner Institute, there’s a December 7, 2021 article by Alvin Powell announcing the institute for the Harvard Gazette,

The institute will be funded by a $500 million gift from Priscilla Chan and Mark Zuckerberg, which was announced Tuesday [December 7, 2021] by the Chan Zuckerberg Initiative. The gift will support 10 new faculty appointments, significant new computing infrastructure, and resources to allow students to flow between labs in pursuit of ideas and knowledge. The institute’s name honors Zuckerberg’s mother, Karen Kempner Zuckerberg, and her parents — Zuckerberg’s grandparents — Sidney and Gertrude Kempner. Chan and Zuckerberg have given generously to Harvard in the past, supporting students, faculty, and researchers in a range of areas, including around public service, literacy, and cures.

“The Kempner Institute at Harvard represents a remarkable opportunity to bring together approaches and expertise in biological and cognitive science with machine learning, statistics, and computer science to make real progress in understanding how the human brain works to improve how we address disease, create new therapies, and advance our understanding of the human body and the world more broadly,” said President Larry Bacow.

Q&A

Bernardo Sabatini and Sham Kakade [Institute co-directors]

GAZETTE: Tell me about the new institute. What is its main reason for being?

SABATINI: The institute is designed to take from two fields and bring them together, hopefully to create something that’s essentially new, though it’s been tried in a couple of places. Imagine that you have over here cognitive scientists and neurobiologists who study the human brain, including the basic biological mechanisms of intelligence and decision-making. And then over there, you have people from computer science, from mathematics and statistics, who study artificial intelligence systems. Those groups don’t talk to each other very much.

We want to recruit from both populations to fill in the middle and to create a new population, through education, through graduate programs, through funding programs — to grow from academic infancy — those equally versed in neuroscience and in AI systems, who can be leaders for the next generation.

Over the millions of years that vertebrates have been evolving, the human brain has developed specializations that are fundamental for learning and intelligence. We need to know what those are to understand their benefits and to ask whether they can make AI systems better. At the same time, as people who study AI and machine learning (ML) develop mathematical theories as to how those systems work and can say that a network of the following structure with the following properties learns by calculating the following function, then we can take those theories and ask, “Is that actually how the human brain works?”

KAKADE: There’s a question of why now? In the technological space, the advancements are remarkable even to me, as a researcher who knows how these things are being made. I think there’s a long way to go, but many of us feel that this is the right time to study intelligence more broadly. You might also ask: Why is this mission unique and why is this institute different from what’s being done in academia and in industry? Academia is good at putting out ideas. Industry is good at turning ideas into reality. We’re in a bit of a sweet spot. We have the scale to study approaches at a very different level: It’s not going to be just individual labs pursuing their own ideas. We may not be as big as the biggest companies, but we can work on the types of problems that they work on, such as having the compute resources to work on large language models. Industry has exciting research, but the spectrum of ideas produced is very different, because they have different objectives.

For the die-hards, there’s a September 23, 2022 article by Clea Simon in Harvard Gazette, which updates the 2021 story,

Next, Manchester, England.

Manchester Centre for Robotics and AI

Robotots take a break at a lab at The University of Manchester – picture courtesy of Marketing Manchester [downloaded from https://www.manchester.ac.uk/discover/news/manchester-ai-summit-aims-to-attract-experts-in-advanced-engineering-and-robotics/]

A November 22, 2022 University of Manchester press release (also on EurekAlert) announces both a meeting and a new centre, Note: Links to the Centre have been retained; all others have been removed,

How humans and super smart robots will live and work together in the future will be among the key issues being scrutinised by experts at a new centre of excellence for AI and autonomous machines based at The University of Manchester.

The Manchester Centre for Robotics and AI will be a new specialist multi-disciplinary centre to explore developments in smart robotics through the lens of artificial intelligence (AI) and autonomous machinery.

The University of Manchester has built a modern reputation of excellence in AI and robotics, partly based on the legacy of pioneering thought leadership begun in this field in Manchester by legendary codebreaker Alan Turing.

Manchester’s new multi-disciplinary centre is home to world-leading research from across the academic disciplines – and this group will hold its first conference on Wednesday, Nov 23, at the University’s new engineering and materials facilities.

A highlight will be a joint talk by robotics expert Dr Andy Weightman and theologian Dr Scott Midson, which is expected to put a spotlight on ‘posthumanism’, a future world where humans won’t be the only highly intelligent decision-makers.

Dr Weightman, who researches home-based rehabilitation robotics for people with neurological impairment, and Dr Midson, who researches theological and philosophical critiques of posthumanism, will discuss how interdisciplinary research can help with the special challenges of rehabilitation robotics – and, ultimately, what it means to be human “in the face of the promises and challenges of human enhancement through robotic and autonomous machines”.

Other topics that the centre will have a focus on will include applications of robotics in extreme environments.

For the past decade, a specialist Manchester team led by Professor Barry Lennox has designed robots to work safely in nuclear decommissioning sites in the UK. A ground-breaking robot called Lyra that has been developed by Professor Lennox’s team – and recently deployed at the Dounreay site in Scotland, the “world’s deepest nuclear clean up site” – has been listed in Time Magazine’s Top 200 innovations of 2022.

Angelo Cangelosi, Professor of Machine Learning and Robotics at Manchester, said the University offers a world-leading position in the field of autonomous systems – a technology that will be an integral part of our future world. 

Professor Cangelosi, co-Director of Manchester’s Centre for Robotics and AI, said: “We are delighted to host our inaugural conference which will provide a special showcase for our diverse academic expertise to design robotics for a variety of real world applications.

“Our research and innovation team are at the interface between robotics, autonomy and AI – and their knowledge is drawn from across the University’s disciplines, including biological and medical sciences – as well the humanities and even theology. [emphases mine]

“This rich diversity offers Manchester a distinctive approach to designing robots and autonomous systems for real world applications, especially when combined with our novel use of AI-based knowledge.”

Delegates will have a chance to observe a series of robots and autonomous machines being demoed at the new conference.

The University of Manchester’s Centre for Robotics and AI will aim to: 

  • design control systems with a focus on bio-inspired solutions to mechatronics, eg the use of biomimetic sensors, actuators and robot platforms; 
  • develop new software engineering and AI methodologies for verification in autonomous systems, with the aim to design trustworthy autonomous systems; 
  • research human-robot interaction, with a pioneering focus on the use of brain-inspired approaches [emphasis mine] to robot control, learning and interaction; and 
  • research the ethics and human-centred robotics issues, for the understanding of the impact of the use of robots and autonomous systems with individuals and society. 

In some ways, the Kempner Institute and the Manchester Centre for Robotics and AI have very similar interests, especially where the brain is concerned. What fascinates me is the Manchester Centre’s inclusion of theologian Dr Scott Midson and the discussion (at the meeting) of ‘posthumanism’. The difference is between actual engagement at the symposium (the centre) and mere mention in a news release (the institute).

I wish the best for both institutions.

AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK

Who is an artist? What is an artist? Can everyone be an artist? These are the kinds of questions you can expect with the rise of artificially intelligent artists/collaborators. Of course, these same questions have been asked many times before the rise of AI (artificial intelligence) agents/programs in the field of visual art. Each time the questions are raised is an opportunity to examine our beliefs from a different perspective. And, not to be forgotten, there are questions about money.

The shock

First, the ‘art’,

The winning work. Colorado State Fair 2022. Screengrab from Discord [downloaded from https://www.artnews.com/art-news/news/colorado-state-fair-ai-generated-artwork-controversy-1234638022/]

Shanti Escalante-De Mattei’s September 1, 2022 article for ArtNews.com provides an overview of the latest AI art controversy (Note: A link has been removed),

The debate around AI art went viral once again when a man won first place at the Colorado State Fair’s art competition in the digital category with a work he made using text-to-image AI generator Midjourney.

Twitter user and digital artist Genel Jumalon tweeted out a screenshot from a Discord channel in which user Sincarnate, aka game designer Jason Allen, celebrated his win at the fair. Jumalon wrote, “Someone entered an art competition with an AI-generated piece and won the first prize. Yeah that’s pretty fucking shitty.”

The comments on the post range from despair and anger as artists, both digital and traditional, worry that their livelihoods might be at stake after years of believing that creative work would be safe from AI-driven automation. [emphasis mine]

Rachel Metz’s September 3, 2022 article for CNN provides more details about how the work was generated (Note: Links have been removed),

Jason M. Allen was almost too nervous to enter his first art competition. Now, his award-winning image is sparking controversy about whether art can be generated by a computer, and what, exactly, it means to be an artist.

In August [2022], Allen, a game designer who lives in Pueblo West, Colorado, won first place in the emerging artist division’s “digital arts/digitally-manipulated photography” category at the Colorado State Fair Fine Arts Competition. His winning image, titled “Théâtre D’opéra Spatial” (French for “Space Opera Theater”), was made with Midjourney — an artificial intelligence system that can produce detailed images when fed written prompts. A $300 prize accompanied his win.

Allen’s winning image looks like a bright, surreal cross between a Renaissance and steampunk painting. It’s one of three such images he entered in the competition. In total, 11 people entered 18 pieces of art in the same category in the emerging artist division.

The definition for the category in which Allen competed states that digital art refers to works that use “digital technology as part of the creative or presentation process.” Allen stated that Midjourney was used to create his image when he entered the contest.

The newness of these tools, how they’re used to produce images, and, in some cases, the gatekeeping for access to some of the most powerful ones has led to debates about whether they can truly make art or assist humans in making art.

This came into sharp focus for Allen not long after his win. Allen had posted excitedly about his win on Midjourney’s Discord server on August 25 [2022], along with pictures of his three entries; it went viral on Twitter days later, with many artists angered by Allen’s win because of his use of AI to create the image, as a story by Vice’s Motherboard reported earlier this week.

“This sucks for the exact same reason we don’t let robots participate in the Olympics,” one Twitter user wrote.

“This is the literal definition of ‘pressed a few buttons to make a digital art piece’,” another Tweeted. “AI artwork is the ‘banana taped to the wall’ of the digital world now.”

Yet while Allen didn’t use a paintbrush to create his winning piece, there was plenty of work involved, he said.

“It’s not like you’re just smashing words together and winning competitions,” he said.

You can feed a phrase like “an oil painting of an angry strawberry” to Midjourney and receive several images from the AI system within seconds, but Allen’s process wasn’t that simple. To get the final three images he entered in the competition, he said, took more than 80 hours.

First, he said, he played around with phrasing that led Midjourney to generate images of women in frilly dresses and space helmets — he was trying to mash up Victorian-style costuming with space themes, he said. Over time, with many slight tweaks to his written prompt (such as to adjust lighting and color harmony), he created 900 iterations of what led to his final three images. He cleaned up those three images in Photoshop, such as by giving one of the female figures in his winning image a head with wavy, dark hair after Midjourney had rendered her headless. Then he ran the images through another software program called Gigapixel AI that can improve resolution and had the images printed on canvas at a local print shop.
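
Midjourney itself is driven through Discord and offers no public scripting interface, but the open-source Stable Diffusion system (discussed further below) makes it possible to sketch the same iterate-on-a-prompt workflow in a few lines. What follows is only an illustrative sketch using Hugging Face’s diffusers library; the prompt wording is invented for the example.

import torch
from diffusers import StableDiffusionPipeline

# Load an open-source text-to-image model (requires a CUDA-capable GPU).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A base prompt plus small variations, echoing Allen's description of
# tweaking lighting and color harmony across hundreds of iterations.
base = "women in frilly Victorian dresses wearing space helmets, opera stage"
tweaks = ["dramatic rim lighting", "warm color harmony", "soft volumetric haze"]

for i, tweak in enumerate(tweaks):
    image = pipe(f"{base}, {tweak}").images[0]
    image.save(f"iteration_{i:03d}.png")  # review, refine, repeat

The upscaling step (which Allen handled with Gigapixel AI) and the manual cleanup in Photoshop would still happen outside a loop like this.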

Ars Technica has run a number of articles on the subject of art and AI; Benj Edwards, in an August 31, 2022 article, seems to have been one of the first to comment on Jason Allen’s win (Note 1: Links have been removed; Note 2: Look at how Edwards identifies Jason Allen as an artist),

A synthetic media artist named Jason Allen entered AI-generated artwork into the Colorado State Fair fine arts competition and announced last week that he won first place in the Digital Arts/Digitally Manipulated Photography category, Vice reported Wednesday [August 31, 2022?] based on a viral tweet.

Allen’s victory prompted lively discussions on Twitter, Reddit, and the Midjourney Discord server about the nature of art and what it means to be an artist. Some commenters think human artistry is doomed thanks to AI and that all artists are destined to be replaced by machines. Others think art will evolve and adapt with new technologies that come along, citing synthesizers in music. It’s a hot debate that Wired covered in July [2022].

It’s worth noting that the invention of the camera in the 1800s prompted similar criticism related to the medium of photography, since the camera seemingly did all the work compared to an artist that labored to craft an artwork by hand with a brush or pencil. Some feared that painters would forever become obsolete with the advent of color photography. In some applications, photography replaced more laborious illustration methods (such as engraving), but human fine art painters are still around today.

Benj Edwards in a September 12, 2022 article for Ars Technica examines how some art communities are responding (Note: Links have been removed),

Confronted with an overwhelming amount of artificial-intelligence-generated artwork flooding in, some online art communities have taken dramatic steps to ban or curb its presence on their sites, including Newgrounds, Inkblot Art, and Fur Affinity, according to Andy Baio of Waxy.org.

Baio, who has been following AI art ethics closely on his blog, first noticed the bans and reported about them on Friday [Sept. 9, 2022?]. …

The arrival of widely available image synthesis models such as Midjourney and Stable Diffusion has provoked an intense online battle between artists who view AI-assisted artwork as a form of theft (more on that below) and artists who enthusiastically embrace the new creative tools.

… a quickly evolving debate about how art communities (and art professionals) can adapt to software that can potentially produce unlimited works of beautiful art at a rate that no human working without the tools could match.

A few weeks ago, some artists began discovering their artwork in the Stable Diffusion data set, and they weren’t happy about it. Charlie Warzel wrote a detailed report about these reactions for The Atlantic last week [September 7, 2022]. With battle lines being drawn firmly in the sand and new AI creativity tools coming out steadily, this debate will likely continue for some time to come.

Filthy lucre becomes more prominent in the conversation

Lizzie O’Leary in a September 12, 2022 article for Fast Company presents a transcript of an interview (from the TBD podcast) she conducted with Drew Harwell (a tech reporter covering A.I. for the Washington Post) about the Jason Allen win,

I’m struck by how quickly these art A.I.s are advancing. DALL-E was released in January of last year and there were some pretty basic images. And then, a year later, DALL-E 2 is using complex, faster methods. Midjourney, the one Jason Allen used, has a feature that allows you to upscale and downscale images. Where is this sudden supply and demand for A.I. art coming from?

You could look back to five years ago when they had these text-to-image generators and the output would be really crude. You could sort of see what the A.I. was trying to get at, but we’ve only really been able to cross that photorealistic uncanny valley in the last year or so. And I think the things that have contributed to that are, one, better data. You’re seeing people invest a lot of money and brainpower and resources into adding more stuff into bigger data sets. We have whole groups that are taking every image they can get on the internet. Billions, billions of images from Pinterest and Amazon and Facebook. You have bigger data sets, so the A.I. is learning more. You also have better computing power, and those are the two ingredients to any good piece of A.I. So now you have A.I. that is not only trained to understand the world a little bit better, but it can now really quickly spit out a very finely detailed generated image.

Is there any way to know, when you look at a piece of A.I. art, what images it referenced to create what it’s doing? Or is it just so vast that you can’t kind of unspool it backward?

When you’re doing an image that’s totally generated out of nowhere, it’s taking bits of information from billions of images. It’s creating it in a much more sophisticated way so that it’s really hard to unspool.
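
Harwell is right that a generated image can’t easily be unspooled back into its sources, but the related question of whether a given artwork appears in, or closely resembles, images in a data set is tractable: the searches artists ran against the Stable Diffusion training data rely on embedding similarity. Here is a rough sketch of that kind of comparison using OpenAI’s CLIP model via the transformers library; the file names are placeholders.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(path: str) -> torch.Tensor:
    # Map an image into CLIP's embedding space, normalized to unit length.
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        v = model.get_image_features(**inputs)
    return v / v.norm()

# Cosine similarity between a generated image and a candidate source image;
# values near 1.0 suggest strong visual/semantic resemblance.
score = (embed("generated.png") @ embed("candidate_artwork.png").T).item()
print(f"similarity: {score:.3f}")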

Art generated by A.I. isn’t just a gee-whiz phenomenon, something that wins prizes, or even a fascinating subject for debate—it has valuable commercial uses, too. Some that are a little frightening if you’re, say, a graphic designer.

You’re already starting to see some of these images illustrating news articles, being used as logos for companies, being used in the form of stock art for small businesses and websites. Anything where somebody would’ve gone and paid an illustrator or graphic designer or artist to make something, they can now go to this A.I. and create something in a few seconds that is maybe not perfect, maybe would be beaten by a human in a head-to-head, but is good enough. From a commercial perspective, that’s scary, because we have an industry of people whose whole job is to create images, now running up against A.I.

And the A.I., again, in the last five years, the A.I. has gotten better and better. It’s still not perfect. I don’t think it’ll ever be perfect, whatever that looks like. It processes information in a different, maybe more literal, way than a human. I think human artists will still sort of have the upper hand in being able to imagine things a little more outside of the box. And yet, if you’re just looking for three people in a classroom or a pretty simple logo, you’re going to go to A.I. and you’re going to take potentially a job away from a freelancer whom you would’ve given it to 10 years ago.

I can see a use case here in marketing, in advertising. The A.I. doesn’t need health insurance, it doesn’t need paid vacation days, and I really do wonder about this idea that the A.I. could replace the jobs of visual artists. Do you think that is a legitimate fear, or is that overwrought at this moment?

I think it is a legitimate fear. When something can mirror your skill set, not 100 percent of the way, but enough of the way that it could replace you, that’s an issue. Do these A.I. creators have any kind of moral responsibility to not create it because it could put people out of jobs? I think that’s a debate, but I don’t think they see it that way. They see it like they’re just creating the new generation of digital camera, the new generation of Photoshop. But I think it is worth worrying about because even compared with cameras and Photoshop, the A.I. is a little bit more of the full package and it is so accessible and so hard to match in terms. It’s really going to be up to human artists to find some way to differentiate themselves from the A.I.

This is making me wonder about the humans underneath the data sets that the A.I. is trained on. The criticism is, of course, that these businesses are making money off thousands of artists’ work without their consent or knowledge and it undermines their work. Some people looked at the Stable Diffusion and they didn’t have access to its whole data set, but they found that Thomas Kinkade, the landscape painter, was the most referenced artist in the data set. Is the A.I. just piggybacking? And if it’s not Thomas Kinkade, if it’s someone who’s alive, are they piggybacking on that person’s work without that person getting paid?

Here’s a bit more on the topic of money and art in a September 19, 2022 article by John Herrman for New York Magazine. First, he starts with the literary arts, Note: Links have been removed,

Artificial-intelligence experts are excited about the progress of the past few years. You can tell! They’ve been telling reporters things like “Everything’s in bloom,” “Billions of lives will be affected,” and “I know a person when I talk to it — it doesn’t matter whether they have a brain made of meat in their head.”

We don’t have to take their word for it, though. Recently, AI-powered tools have been making themselves known directly to the public, flooding our social feeds with bizarre and shocking and often very funny machine-generated content. OpenAI’s GPT-3 took simple text prompts — to write a news article about AI or to imagine a rose ceremony from The Bachelor in Middle English — and produced convincing results.

Deepfakes graduated from a looming threat to something an enterprising teenager can put together for a TikTok, and chatbots are occasionally sending their creators into crisis.

More widespread, and probably most evocative of a creative artificial intelligence, is the new crop of image-creation tools, including DALL-E, Imagen, Craiyon, and Midjourney, which all do versions of the same thing. You ask them to render something. Then, with models trained on vast sets of images gathered from around the web and elsewhere, they try — “Bart Simpson in the style of Soviet statuary”; “goldendoodle megafauna in the streets of Chelsea”; “a spaghetti dinner in hell”; “a logo for a carpet-cleaning company, blue and red, round”; “the meaning of life.”

This flood of machine-generated media has already altered the discourse around AI for the better, probably, though it couldn’t have been much worse. In contrast with the glib intra-VC debate about avoiding human enslavement by a future superintelligence, discussions about image-generation technology have been driven by users and artists and focus on labor, intellectual property, AI bias, and the ethics of artistic borrowing and reproduction [emphasis mine]. Early controversies have cut to the chase: Is the guy who entered generated art into a fine-art contest in Colorado (and won!) an asshole? Artists and designers who already feel underappreciated or exploited in their industries — from concept artists in gaming and film and TV to freelance logo designers — are understandably concerned about automation. Some art communities and marketplaces have banned AI-generated images entirely.

Requests are effectively thrown into “a giant swirling whirlpool” of “10,000 graphics cards,” Holz [David Holz, Midjourney founder] said, after which users gradually watch them take shape, gaining sharpness but also changing form as Midjourney refines its work.

This hints at an externality beyond the worlds of art and design. “Almost all the money goes to paying for those machines,” Holz said. New users are given a small number of free image generations before they’re cut off and asked to pay; each request initiates a massive computational task, which means using a lot of electricity.

High compute costs [emphasis mine] — which are largely energy costs — are why other services have been cautious about adding new users. …

Another Midjourney user, Gila von Meissner, is a graphic designer and children’s-book author-illustrator from “the boondocks in north Germany.” Her agent is currently shopping around a book that combines generated images with her own art and characters. Like Pluckebaum [Brian Pluckebaum who works in automotive-semiconductor marketing and designs board games], she brought up the balance of power with publishers. “Picture books pay peanuts,” she said. “Most illustrators struggle financially.” Why not make the work easier and faster? “It’s my character, my edits on the AI backgrounds, my voice, and my story.” A process that took months now takes a week, she said. “Does that make it less original?”

User MoeHong, a graphic designer and typographer for the state of California, has been using Midjourney to make what he called generic illustrations (“backgrounds, people at work, kids at school, etc.”) for government websites, pamphlets, and literature: “I get some of the benefits of using custom art — not that we have a budget for commissions! — without the paying-an-artist part.” He said he has mostly replaced stock art, but he’s not entirely comfortable with the situation. “I have a number of friends who are commercial illustrators, and I’ve been very careful not to show them what I’ve made,” he said. He’s convinced that tools like this could eventually put people in his trade out of work. “But I’m already in my 50s,” he said, “and I hope I’ll be gone by the time that happens.”

Fan club

The last article I’m featuring here is a September 15, 2021 piece by Agnieszka Cichocka for DailyArt, which provides good, brief descriptions of algorithms, generative creative networks, machine learning, artificial neural networks, and more. She is an enthusiast (Note: Links have been removed),

I keep wondering if Leonardo da Vinci, who, in my opinion, was the most forward thinking artist of all time, would have ever imagined that art would one day be created by AI. He worked on numerous ideas and was constantly experimenting, and, although some were failures, he persistently tried new products, helping to move our world forward. Without such people, progress would not be possible. 

Machine Learning

As humans, we learn by acquiring knowledge through observations, senses, experiences, etc. This is similar to computers. Machine learning is a process in which a computer system learns how to perform a task better in two ways—either through exposure to environments that provide punishments and rewards (reinforcement learning) or by training with specific data sets (the system learns automatically and improves from previous experiences). Both methods help the systems improve their accuracy. Machines then use patterns and attempt to make an accurate analysis of things they have not seen before. To give an example, let’s say we feed the computer with thousands of photos of a dog. Consequently, it can learn what a dog looks like based on those. Later, even when faced with a picture it has never seen before, it can tell that the photo shows a dog.

If you want to see some creative machine learning experiments in art, check out ML x ART. This is a website with hundreds of artworks created using AI tools.
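
For readers who want to see what Cichocka’s dog example looks like in code, here is a minimal supervised-learning sketch in PyTorch. It assumes a hypothetical photos/ folder whose dog/ and not_dog/ subfolders hold labeled images, and fine-tunes a small pretrained network on them.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Labeled examples: photos/dog/*.jpg and photos/not_dog/*.jpg (hypothetical paths).
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("photos/", transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from a pretrained network and replace its final layer: dog vs. not-dog.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):  # a few passes over the photos
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# The trained model can now score a photo it has never seen before.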

Some thoughts

As the saying goes, “a picture is worth a thousand words” and now, it seems, pictures will be made from words, or so the example of Jason M. Allen feeding prompts to the AI system Midjourney suggests.

I suspect (as others have suggested) that, in the end, artists who use AI systems will be absorbed into the art world in much the same way as photographers, performance artists, conceptual artists, and video artists have been absorbed. There will be some displacement and discomfort as the questions I opened this posting with (Who is an artist? What is an artist? Can everyone be an artist?) are passionately discussed and considered. Underlying many of these questions is the issue of money.

The impact on people’s livelihoods is cheering or concerning depending on how the AI system is being used. Herrman’s September 19, 2022 article highlights two examples that focus on graphic designers: Gila von Meissner, the illustrator and designer who uses an AI system to illustrate her children’s books with her own art in a faster, more cost-effective way, and MoeHong, the graphic designer for the state of California who uses an AI system to make ‘customized generic art’ for which the state government doesn’t have to pay.

So far, the focus has been on Midjourney and other AI agents that have been created by developers for use by visual artists and writers. What happens when the visual artist or the writer is the developer? A September 12, 2022 article by Brandon Scott Roye for Cool Hunting approaches the question (Note: Links have been removed),

Mario Klingemann and Sasha Stiles on Semi-Autonomous AI Artists

An artist and engineer at the forefront of generating AI artwork, Mario Klingemann and first-generation Kalmyk-American poet, artist and researcher Sasha Stiles both approach AI from a more human, personal angle. Creators of semi-autonomous systems, both Klingemann and Stiles are the minds behind Botto and Technelegy, respectively. They are both artists in their own right, but their creations are too. Within web3, the identity of the “artist” who creates with visuals and the “writer” who creates with words is enjoying a foundational shift and expansion. Many have fashioned themselves a new title as “engineer.”

Based on their primary identities as an artist and poet, Klingemann and Stiles face the conundrum of becoming engineers who design the tools, rather than artists responsible for the final piece. They now have the ability to remove themselves from influencing inputs and outputs.

If you have time, I suggest reading Roye’s September 12, 2022 article as it provides some very interesting ideas although I don’t necessarily agree with them, e.g., “They now have the ability to remove themselves from influencing inputs and outputs.” Anyone who’s following the ethics discussion around AI knows that biases are built into the algorithms whether we like it or not. As for artists and writers calling themselves ‘engineers’, they may get a little resistance from the engineering community.

As users of open source software, Klingemann and Stiles should not have to worry too much about intellectual property. However, it seems copyright for the actual works and patents for the software could raise some interesting issues especially since money is involved.

In a March 10, 2022 article by Shraddha Nair for Stir World, Klingemann claims to have made over $1M from auctions of Botto’s artworks. It’s not clear to me where Botto obtains its library of images for future use (which may signal a potential problem); Stiles’ Technelegy creates poems from prompts using its library of her poems. (For the curious, I have an August 30, 2022 post “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” which explores some of the issues around patents.)

Who gets the patent and/or the copyright? Assuming you and I are employing machine learning to train our AI agents separately, could there be an argument that, if my version of the AI is different from yours and proves more popular with other content creators/artists, I should own or share the patent to the software and the rights to whatever the software produces?

Getting back to Herrman’s comment about high compute costs and energy, we seem to have an insatiable appetite for energy, which carries a high cost not only financially but also environmentally.

Botto exhibition

Here’s more about Klingemann’s artist exhibition by Botto (from an October 6, 2022 announcement received via email),

Mario Klingemann is a pioneering figurehead in the field of AI art,
working deep in the field of Machine Learning. Klingemann developed
Botto, which is governed by a community of 5,000 people, around an idea
of creating an autonomous entity that is able to be creative and
Inspired by Goethe’s artificial man in Faust, Botto is a genderless AI
entity that is guided by an international community and art historical
trends. Botto creates 350 art pieces per week that are presented to its
community. Members of the community give feedback on these art fragments
by voting, expressing their individual preferences on what is
aesthetically pleasing to them. Then collectively the votes are used as
feedback for Botto’s generative algorithm, dictating what direction
Botto should take in its next series of art pieces.

The creative capacity of its algorithm is far beyond the capacities of
an individual to combine and find relationships within all the
information available to the AI. Botto faces similar issues as a human
artist, and it is programmed to self-reflect and ask, “I’ve created
this type of work before. What can I show them that’s different this
week?”

Once a week, Botto auctions the art fragment with the most votes on
SuperRare. All proceeds from the auction go back to the community. The
AI artist auctioned its first three pieces, Asymmetrical Liberation,
Scene Precede, and Trickery Contagion for more than $900,000,
the most successful AI artist premiere. Today, Botto has produced
upwards of 22 artworks and current sales have generated over $2 million
in total
[emphasis mine].
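
Botto’s actual pipeline is proprietary, so the following is only a toy sketch, under my own assumptions, of the feedback loop the announcement describes: fragments that attract more community votes become more likely to seed the next round of generation. All names and numbers here are invented.

import random

# Running vote totals for prompt fragments (invented examples).
scores = {"baroque machinery": 1.0, "liquid light": 1.0, "crowd of masks": 1.0}

def weekly_round(votes: dict) -> list:
    # Fold this week's community votes into the running scores.
    for fragment, count in votes.items():
        scores[fragment] = scores.get(fragment, 1.0) + count
    # Sample seeds for the next batch, weighted by accumulated votes,
    # so popular directions are explored more often but not exclusively.
    seeds = random.choices(list(scores), weights=list(scores.values()), k=5)
    return [f"{seed}, variation {n}" for n, seed in enumerate(seeds, start=1)]

print(weekly_round({"liquid light": 42, "crowd of masks": 7}))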

From March 2022, when Botto had made $1M, to October 2022, when it had made over $2M: Botto seems to be a very financially successful artist.

Botto: A Whole Year of Co-Creation

This exhibition (October 26 – 30, 2022) is being held in London, England at this location:

The Department Store, Brixton 248 Ferndale Road London SW9 8FR United Kingdom

Enjoy!

US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs)

If you’ve been longing for an opportunity to discover more and to engage in discussion about brain-machine interfaces (BMIs) and their legal, technical, and ethical issues, an opportunity is just a day away. From a September 20, 2022 (US) National Academies of Sciences, Engineering, and Medicine (NAS/NASEM or National Academies) notice (received via email),

Sept. 22-23 [2022] Workshop Explores Technical, Legal, Ethical Issues Raised by Brain-Machine Interfaces [official title: Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop]

Technological developments and advances in understanding of the human brain have led to the development of new Brain-Machine Interface technologies. These include technologies that “read” the brain to record brain activity and decode its meaning, and those that “write” to the brain to manipulate activity in specific brain regions. Right now, most of these interface technologies are medical devices placed inside the brain or other parts of the nervous system – for example, devices that use deep brain stimulation to modulate the tremors of Parkinson’s disease.

But tech companies are developing mass-market wearable devices that focus on understanding emotional states or intended movements, such as devices used to detect fatigue, boost alertness, or enable thoughts to control gaming and other digital-mechanical systems. Such applications raise ethical and legal issues, including risks that thoughts or mood might be accessed or manipulated by companies, governments, or others; risks to privacy; and risks related to a widening of social inequalities.

A virtual workshop [emphasis mine] hosted by the National Academies of Sciences, Engineering, and Medicine on Sept. 22-23 [2022] will explore the present and future of these technologies and the ethical, legal, and regulatory issues they raise.

The workshop will run from 12:15 p.m. to 4:25 p.m. ET on Sept. 22 and from noon to 4:30 p.m. ET on Sept. 23. View agenda and register.

For those who might want a peek at the agenda before downloading it, I have listed the titles for the sessions (from my downloaded agenda, Note: I’ve reformatted the information; there are no breaks, discussion periods, or Q&As included),

Sept. 22, 2022 Draft Agenda

12:30 pm ET Brain-Machine and Related Neural Interface Technologies: The State and Limitations of the Technology

2:30 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Movement

Sept. 23, 2022 Draft Agenda

12:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Mood and Affect

2:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Thought, Communication, and Memory

4:00 pm ET Concluding Thoughts from Workshop Planning Committee

Regarding terminology, there’s brain-machine interface (BMI), which I think is a more generic term that includes: brain-computer interface (BCI), neural interface and/or neural implant. There are other terms as well, including this one in the title of my September 17, 2020 posting, “Turning brain-controlled wireless electronic prostheses [emphasis mine] into reality plus some ethical points.” I have a more recent April 5, 2022 posting, which is a very deep dive, “Going blind when your neural implant company flirts with bankruptcy (long read).” As you can see, various social issues associated with these devices have been of interest to me.

I’m not sure quite what to make of the session titles. There doesn’t seem to be all that much emphasis on ethical and legal issues but perhaps that’s the role the various speakers will play.

Age of AI and Big Data – Impact on Justice, Human Rights and Privacy Zoom event on September 28, 2022 at 12 – 1:30 pm EDT

The Canadian Science Policy Centre (CSPC) in a September 15, 2022 announcement (received via email) announced an event (Age of AI and Big Data – Impact on Justice, Human Rights and Privacy) centered on some of the latest government doings on artificial intelligence and privacy (Bill C-27),

In an increasingly connected world, we share a large amount of our data in our daily lives without our knowledge while browsing online, traveling, shopping, etc. More and more companies are collecting our data and using it to create algorithms or AI. The use of our data against us is becoming more and more common. The algorithms used may often be discriminatory against racial minorities and marginalized people.

As technology moves at a high pace, we have started to incorporate many of these technologies into our daily lives without understanding their consequences. These technologies have enormous impacts on our very own identity and collectively on civil society and democracy.

Recently, the Canadian government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27 [which includes three acts in total] in Parliament, regulating the use of AI in our society. In this panel, we will discuss how AI and big data are affecting us, their impact on society, and how the new regulations affect us.

Date: Sep 28 Time: 12:00 pm – 1:30 pm EDT Event Category: Virtual Session

Register Here

For some reason, there was no information about the moderator and panelists, other than their names, titles, and affiliations. Here’s a bit more:

Moderator: Yuan Stevens (from her eponymous website’s About page), Note: Links have been removed,

Yuan (“You-anne”) Stevens (she/they) is a legal and policy expert focused on sociotechnical security and human rights.

She works towards a world where powerful actors—and the systems they build—are held accountable to the public, especially when it comes to marginalized communities. 

She brings years of international experience to her role at the Leadership Lab at Toronto Metropolitan University [formerly Ryerson University], having examined the impacts of technology on vulnerable populations in Canada, the US and Germany. 

Committed to publicly accessible legal and technical knowledge, Yuan has written for popular media outlets such as the Toronto Star and Ottawa Citizen and has been quoted in news stories by the New York Times, the CBC and the Globe & Mail.

Yuan is a research fellow at the Centre for Law, Technology and Society at the University of Ottawa and a research affiliate at Data & Society Research Institute. She previously worked at Harvard University’s Berkman Klein Center for Internet & Society during her studies in law at McGill University.

She has been conducting research on artificial intelligence since 2017 and is currently exploring sociotechnical security as an LL.M candidate at University of Ottawa’s Faculty of Law working under Florian Martin-Bariteau.

Panelist: Brenda McPhail (from her Centre for International Governance Innovation profile page),

Brenda McPhail is the director of the Canadian Civil Liberties Association’s Privacy, Surveillance and Technology Project. Her recent work includes guiding the Canadian Civil Liberties Association’s interventions in key court cases that raise privacy issues, most recently at the Supreme Court of Canada in R v. Marakah and R v. Jones, which focused on privacy rights in sent text messages; research into surveillance of dissent, government information sharing, digital surveillance capabilities and privacy in relation to emergent technologies; and developing resources and presentations to drive public awareness about the importance of privacy as a social good.

Panelist: Nidhi Hegde (from her University of Alberta profile page),

My research has spanned many areas such as resource allocation in networking, smart grids, social information networks, machine learning. Broadly, my interest lies in gaining a fundamental understanding of a given system and the design of robust algorithms.

More recently my research focus has been in privacy in machine learning. I’m interested in understanding how robust machine learning methods are to perturbation, and privacy and fairness constraints, with the goal of designing practical algorithms that achieve privacy and fairness.

Bio

Before joining the University of Alberta, I spent many years in industry research labs. Most recently, I was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where my team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, I spent many years in research labs in Europe working on a variety of interesting and impactful problems. I was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where I led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. I also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, and privacy in recommendations.

Panelist: Benjamin Faveri (from his LinkedIn page),

About

Benjamin Faveri is a Research and Policy Analyst at the Responsible AI Institute (RAII) [headquartered in Austin, Texas]. Currently, he is developing their Responsible AI Certification Program and leading it through Canada’s national accreditation process. Over the last several years, he has worked on numerous certification program-related research projects such as fishery economics and certification programs, police body-worn camera policy certification, and emerging AI certifications and assurance systems. Before his work at RAII, Benjamin completed a Master of Public Policy and Administration at Carleton University, where he was a Canada Graduate Scholar, Ontario Graduate Scholar, Social Innovation Fellow, and Visiting Scholar at UC Davis School of Law. He holds undergraduate degrees in criminology and psychology, finishing both with first class standing. Outside of work, Benjamin reads about how and why certification and private governance have been applied across various industries.

Panelist: Ori Freiman (from his eponymous website’s About page)

I research at the forefront of technological innovation. This website documents some of my academic activities.

My formal background is in Analytic Philosophy, Library and Information Science, and Science & Technology Studies. Until September 22′ [September 2022], I was a Post-Doctoral Fellow at the Ethics of AI Lab, at the University of Toronto’s Centre for Ethics. Before joining the Centre, I submitted my dissertation, about trust in technology, to The Graduate Program in Science, Technology and Society at Bar-Ilan University.

I have also found a number of overviews and bits of commentary about the Canadian federal government’s proposed Bill C-27, which I think of as an omnibus bill as it includes three proposed Acts.

The lawyers are excited, but I’m starting with the Responsible AI Institute’s (RAII) response first, as one of the panelists (Benjamin Faveri) works for them and it offers a view from a closely neighbouring country. From a June 22, 2022 RAII news release, Note: Links have been removed,

Business Implications of Canada’s Draft AI and Data Act

On June 16 [2022], the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), as part of the broader Digital Charter Implementation Act 2022 (Bill C-27). Shortly thereafter, it also launched the second phase of the Pan-Canadian Artificial Intelligence Strategy.

Both RAII’s Certification Program, which is currently under review by the Standards Council of Canada, and the proposed AIDA legislation adopt the same approach of gauging an AI system’s risk level in context; identifying, assessing, and mitigating risks both pre-deployment and on an ongoing basis; and pursuing objectives such as safety, fairness, consumer protection, and plain-language notification and explanation.

Businesses should monitor the progress of Bill C-27 and align their AI governance processes, policies, and controls to its requirements. Businesses participating in RAII’s Certification Program will already be aware of requirements, such as internal Algorithmic Impact Assessments to gauge risk level and Responsible AI Management Plans for each AI system, which include system documentation, mitigation measures, monitoring requirements, and internal approvals.

The AIDA draft is focused on the impact of any “high-impact system”. Companies would need to assess whether their AI systems are high-impact; identify, assess, and mitigate potential harms and biases flowing from high-impact systems; and “publish on a publicly available website a plain-language description of the system” if making a high-impact system available for use. The government elaborated in a press briefing that it will describe in future regulations the classes of AI systems that may have high impact.
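
The draft doesn’t prescribe a format for any of this, and the high-impact classes are left to future regulations, so the following is purely a hypothetical sketch of how a business might structure an internal assessment record to track those obligations; every field name and value is invented.

from dataclasses import dataclass, field

@dataclass
class AISystemAssessment:
    system_name: str
    intended_use: str
    high_impact: bool                     # per classes to be set in future regulations
    identified_harms: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    plain_language_description: str = ""  # to be published if the system is high-impact

record = AISystemAssessment(
    system_name="resume-screener-v2",
    intended_use="rank incoming job applications",
    high_impact=True,
    identified_harms=["potential bias against protected groups"],
    mitigations=["quarterly bias audit", "human review of all rejections"],
    plain_language_description="An automated tool that ranks job applications.",
)
print(record.plain_language_description)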

The AIDA draft also outlines clear criminal penalties for entities which, in their AI efforts, possess or use unlawfully obtained personal information or knowingly make available for use an AI system that causes serious harm or defrauds the public and causes substantial economic loss to an individual.

If enacted, AIDA would establish the Office of the AI and Data Commissioner, to support Canada’s Minister of Innovation, Science and Economic Development, with powers to monitor company compliance with the AIDA, to order independent audits of companies’ AI activities, and to register compliance orders with courts. The Commissioner would also help the Minister ensure that standards for AI systems are aligned with international standards.

Apart from being aligned with the approach and requirements of Canada’s proposed AIDA legislation, RAII is also playing a key role in the Standards Council of Canada’s AI accreditation pilot. The second phase of the Pan-Canadian Artificial Intelligence Strategy includes funding for the Standards Council of Canada to “advance the development and adoption of standards and a conformity assessment program related to AI.”

The AIDA’s introduction shows that while Canada is serious about governing AI systems, its approach to AI governance is flexible and designed to evolve as the landscape changes.

Charles Mandel’s June 16, 2022 article for Betakit (Canadian Startup News and Tech Innovation) provides an overview of the government’s overall approach to data privacy, AI, and more,

The federal Liberal government has taken another crack at legislating privacy with the introduction of Bill C-27 in the House of Commons.

Among the bill’s highlights are new protections for minors as well as Canada’s first law regulating the development and deployment of high-impact AI systems.

“It [Bill C-27] will address broader concerns that have been expressed since the tabling of a previous proposal, which did not become law,” a government official told a media technical briefing on the proposed legislation.

François-Philippe Champagne, the Minister of Innovation, Science and Industry, together with David Lametti, the Minister of Justice and Attorney General of Canada, introduced the Digital Charter Implementation Act, 2022. The ministers said Bill C-27 will significantly strengthen Canada’s private sector privacy law, create new rules for the responsible development and use of artificial intelligence (AI), and continue to put in place Canada’s Digital Charter.

The Digital Charter Implementation Act includes three proposed acts: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA), all of which have implications for Canadian businesses.

Bill C-27 follows an attempt by the Liberals to introduce Bill C-11 in 2020. The latter was the federal government’s attempt to reform privacy laws in Canada, but it failed to gain passage in Parliament after the then-federal privacy commissioner criticized the bill.

The proposed Artificial Intelligence and Data Act is meant to protect Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias.

For businesses developing or implementing AI this means that the act will outline criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

…

An AI and data commissioner will support the minister of innovation, science, and industry in ensuring companies comply with the act. The commissioner will be responsible for monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate.

The commissioner would also be expected to outline clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

Canada already collaborates on AI standards to some extent with a number of countries. Canada, France, and 13 other countries launched an international AI partnership to guide policy development and “responsible adoption” in 2020.

The federal government also has the Pan-Canadian Artificial Intelligence Strategy for which it committed an additional $443.8 million over 10 years in Budget 2021. Ahead of the 2022 budget, Trudeau [Canadian Prime Minister Justin Trudeau] had laid out an extensive list of priorities for the innovation sector, including tasking Champagne with launching or expanding national strategy on AI, among other things.

Within the AI community, companies and groups have been looking at AI ethics for some time. Scotiabank donated $750,000 in funding to the University of Ottawa in 2020 to launch a new initiative to identify solutions to issues related to ethical AI and technology development. And Richard Zemel, co-founder of the Vector Institute [formed as part of the Pan-Canadian Artificial Intelligence Strategy], joined Integrate.AI as an advisor in 2018 to help the startup explore privacy and fairness in AI.

When it comes to the Consumer Privacy Protection Act, the Liberals said the proposed act responds to feedback received on the proposed legislation, and is meant to ensure that the privacy of Canadians will be protected, and that businesses can benefit from clear rules as technology continues to evolve.

“A reformed privacy law will establish special status for the information of minors so that they receive heightened protection under the new law,” a federal government spokesperson told the technical briefing.

…

The act is meant to provide greater controls over Canadians’ personal information, including how it is handled by organizations as well as giving Canadians the freedom to move their information from one organization to another in a secure manner.

The act puts the onus on organizations to develop and maintain a privacy management program that includes the policies, practices and procedures put in place to fulfill obligations under the act. That includes the protection of personal information, how requests for information and complaints are received and dealt with, and the development of materials to explain an organization’s policies and procedures.

The bill also ensures that Canadians can request that their information be deleted from organizations.

The bill provides the privacy commissioner of Canada with broad powers, including the ability to order a company to stop collecting data or using personal information. The commissioner will be able to levy significant fines for non-compliant organizations—with fines of up to five percent of global revenue or $25 million, whichever is greater, for the most serious offences.
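
To make that formula concrete: the maximum fine is the greater of five percent of global revenue or $25 million, so the $25-million floor governs until global revenue passes $500 million, after which the percentage takes over. A quick sketch:

def max_fine(global_revenue: float) -> float:
    # Greater of 5% of global revenue or $25M, for the most serious offences.
    return max(0.05 * global_revenue, 25_000_000)

print(max_fine(100_000_000))    # $100M revenue -> 25,000,000.0 (floor applies)
print(max_fine(1_000_000_000))  # $1B revenue  -> 50,000,000.0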

The proposed Personal Information and Data Protection Tribunal Act will create a new tribunal to enforce the Consumer Privacy Protection Act.

Although the Liberal government said it engaged with stakeholders for Bill C-27, the Council of Canadian Innovators (CCI) expressed reservations about the process. Nick Schiavo, CCI’s director of federal affairs, said it had concerns over the last version of privacy legislation, and had hoped to present those concerns when the bill was studied at committee, but the previous bill died before that could happen.

Now the lawyers. Simon Hodgett, Kuljit Bhogal, and Sam Ip have written a June 27, 2022 overview, which highlights the key features from the perspective of Osler, a leading business law firm practising internationally from offices across Canada and in New York.

Maya Medeiros and Jesse Beatson authored a June 23, 2022 article for Norton Rose Fulbright, a global law firm, which notes a few ‘weak’ spots in the proposed legislation,

… While the AIDA is directed to “high-impact” systems and prohibits “material harm,” these and other key terms are not yet defined. Further, the quantum of administrative penalties will be fixed only upon the issuance of regulations. 

Moreover, the AIDA sets out publication requirements but it is unclear if there will be a public register of high-impact AI systems and what level of technical detail about the AI systems will be available to the public. More clarity should come through Bill C-27’s second and third readings in the House of Commons, and subsequent regulations if the bill passes.

The AIDA may have extraterritorial application if components of global AI systems are used, developed, designed or managed in Canada. The European Union recently introduced its Artificial Intelligence Act, which also has some extraterritorial application. Other countries will likely follow. Multi-national companies should develop a coordinated global compliance program.

I have two podcasts from Michael Geist, a lawyer and Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa.

  • June 26, 2022: The Law Bytes Podcast, Episode 132: Ryan Black on the Government’s Latest Attempt at Privacy Law Reform “The privacy reform bill that is really three bills in one: a reform of PIPEDA, a bill to create a new privacy tribunal, and an artificial intelligence regulation bill. What’s in the bill from a privacy perspective and what’s changed? Is this bill any likelier to become law than an earlier bill that failed to even advance to committee hearings? To help sort through the privacy aspects of Bill C-27, Ryan Black, a Vancouver-based partner with the law firm DLA Piper (Canada) …” (about 45 mins.)
  • August 15, 2022: The Law Bytes Podcast, Episode 139: Florian Martin-Bariteau on the Artificial Intelligence and Data Act “Critics argue that regulations are long overdue, but have expressed concern about how much of the substance is left for regulations that are still to be developed. Florian Martin-Bariteau is a friend and colleague at the University of Ottawa, where he holds the University Research Chair in Technology and Society and serves as director of the Centre for Law, Technology and Society. He is currently a fellow at the Harvard’s Berkman Klein Center for Internet and Society …” (about 38 mins.)