Category Archives: ethics

Incorporating human cells into computer chips

What are the ethics of incorporating human cells into computer chips? That’s the question that Julian Savulescu (Visiting Professor in Biomedical Ethics, University of Melbourne and Uehiro Chair in Practical Ethics, University of Oxford), Christopher Gyngell (Research Fellow in Biomedical Ethics, The University of Melbourne), and Tsutomu Sawai (Associate Professor, Humanities and Social Sciences, Hiroshima University) discuss in a May 24, 2022 essay on The Conversation (Note: A link has been removed),

The year is 2030 and we are at the world’s largest tech conference, CES in Las Vegas. A crowd is gathered to watch a big tech company unveil its new smartphone. The CEO comes to the stage and announces the Nyooro, containing the most powerful processor ever seen in a phone. The Nyooro can perform an astonishing quintillion operations per second, which is a thousand times faster than smartphone models in 2020. It is also ten times more energy-efficient with a battery that lasts for ten days.

A journalist asks: “What technological advance allowed such huge performance gains?” The chief executive replies: “We created a new biological chip using lab-grown human neurons. These biological chips are better than silicon chips because they can change their internal structure, adapting to a user’s usage pattern and leading to huge gains in efficiency.”

Another journalist asks: “Aren’t there ethical concerns about computers that use human brain matter?”

Although the name and scenario are fictional, this is a question we have to confront now. In December 2021, Melbourne-based Cortical Labs grew groups of neurons (brain cells) that were incorporated into a computer chip. The resulting hybrid chip works because both computers and neurons share a common language: electricity.

The authors explain their comment that computers and neurons share the common language of electricity (Note: Links have been removed),

In silicon computers, electrical signals travel along metal wires that link different components together. In brains, neurons communicate with each other using electric signals across synapses (junctions between nerve cells). In Cortical Labs’ Dishbrain system, neurons are grown on silicon chips. These neurons act like the wires in the system, connecting different components. The major advantage of this approach is that the neurons can change their shape, grow, replicate, or die in response to the demands of the system.

Dishbrain could learn to play the arcade game Pong faster than conventional AI systems. The developers of Dishbrain said: “Nothing like this has ever existed before … It is an entirely new mode of being. A fusion of silicon and neuron.”

Cortical Labs believes its hybrid chips could be the key to the kinds of complex reasoning that today’s computers and AI cannot produce. Another start-up making computers from lab-grown neurons, Koniku, believes their technology will revolutionise several industries including agriculture, healthcare, military technology and airport security. Other types of organic computers are also in the early stages of development.
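The closed-loop idea (read the neurons’ activity, act in the game, then stimulate the neurons with feedback) is easier to grasp with a sketch. To be clear, what follows is a toy illustration of my own and not Cortical Labs’ software; every function name and number in it is an assumption.

```python
import random

# Toy sketch of a closed-loop "neurons play Pong" setup.
# All names and numbers are illustrative assumptions, not
# Cortical Labs' actual code or parameters.

def read_firing_rates(num_electrodes=8):
    """Stand-in for reading spike counts from a multielectrode array."""
    return [random.random() for _ in range(num_electrodes)]

def decode_paddle_move(rates):
    """Map activity in two electrode groups to a paddle command."""
    half = len(rates) // 2
    return "up" if sum(rates[:half]) >= sum(rates[half:]) else "down"

def feedback(hit_ball):
    """Predictable stimulation after a hit, noisy stimulation after a miss."""
    if hit_ball:
        return {"pattern": "regular", "frequency_hz": 100}
    return {"pattern": "random", "frequency_hz": random.choice([10, 40, 75])}

for step in range(5):  # a few turns of the sense-act-feedback loop
    rates = read_firing_rates()
    move = decode_paddle_move(rates)
    hit = random.random() > 0.5  # stand-in for the game engine's outcome
    print(step, move, feedback(hit))
```

The published Dishbrain work reportedly leaned on exactly this kind of asymmetry, with predictable stimulation acting as ‘reward’ and unpredictable stimulation as ‘penalty’.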

Ethics issues arise (Note: Links have been removed),

… this raises questions about donor consent. Do people who provide tissue samples for technology research and development know that it might be used to make neural computers? Do they need to know this for their consent to be valid?

People will no doubt be much more willing to donate skin cells for research than their brain tissue. One of the barriers to brain donation is that the brain is seen as linked to your identity. But in a world where we can grow mini-brains from virtually any cell type, does it make sense to draw this type of distinction?

… Consider the scandal regarding Henrietta Lacks, an African-American woman whose cells were used extensively in medical and commercial research without her knowledge and consent.

Henrietta’s cells are still used in applications which generate huge amounts of revenue for pharmaceutical companies (including, recently, to develop COVID vaccines). The Lacks family still has not received any compensation. If a donor’s neurons end up being used in products like the imaginary Nyooro, should they be entitled to some of the profit made from those products?

Another key ethical consideration for neural computers is whether they could develop some form of consciousness and experience pain. Would neural computers be more likely to have experiences than silicon-based ones? …

This May 24, 2022 essay is fascinating and, if you have the time, I encourage you to read it all.

If you’re curious, you can find out about Cortical Labs here, more about Dishbrain in a February 22, 2022 article by Brian Patrick Green for iai (Institute for Art and Ideas) news, and more about Koniku in a May 31, 2018 posting about ‘wetware’ by Alissa Greenberg on Medium.

As for Henrietta Lacks, there’s this from my May 13, 2016 posting,

*HeLa cells are named for Henrietta Lacks who unknowingly donated her immortal cell line to medical research. You can find more about the story on the Oprah Winfrey website, which features an excerpt from the Rebecca Skloot book “The Immortal Life of Henrietta Lacks.” …

I checked; the excerpt is still on the Oprah Winfrey site.

h/t May 24, 2022 Nanowerk Spotlight article

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations

Dear friend,

I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)

Ethics, the natural world, social justice, eeek, and AI

Dorothy Woodend in her March 10, 2022 review for The Tyee suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.

Ultimately, the focus is always on humans; Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.

My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t, in more ways than one. The de Young Museum in San Francisco also held an AI and art show, “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021). From the exhibitions page,

In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]

Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]

As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)

Social justice

While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.

In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.

Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]

From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,

Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …

The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.

Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”

Eeek

As you go through the ‘imitation game’, you will find a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,

Project Description

Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.

There’s no warning that you’re being tracked, and you can see they’ve used facial recognition software to track your movements through the show. The pod’s signage claims the data is deleted once you’ve left.

‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.

For the curious, there’s a description of the other VAG ‘imitation game’ installations provided by CDM students on the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition’ webpage.

In recovery from an existential crisis (meditations)

There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence and its use in and impact on creative visual culture.

I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.

It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of works, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.

It’s worth going more than once to the show as there is so much to experience.

Why did they do that?

Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell if the curators intended the experience to be disorienting, but it verges on chaos, especially when the exhibition is crowded.

I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.

One of the differences between those shows and “The Imitation Game …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.

By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories; all of them associated with science/technology. This makes for a different kind of show so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.

AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.

Where were Ai-Da and Dall-E-2 and the others?

Oh friend, I was hoping for a robot. Those Roomba paintbots didn’t do much for me. All they did was lie there on the floor.

To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.

Ai-Da was at the Glastonbury Festival in the UK from June 23 to 26, 2022. Here’s Ai-Da and her Billie Eilish (one of the Glastonbury 2022 headliners) portrait. [downloaded from https://www.ai-darobot.com/exhibition]

Ai-Da was first featured here in a December 17, 2021 posting about her performance of poetry she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.

Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.

She has her own website.

If not Ai-Da, what about Dall-E-2? Aaron Hertzmann’s June 20, 2022 commentary, “Give this AI a few words of description and it produces a stunning image – but is it art?” investigates for Salon (Note: Links have been removed),

DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),

“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”

There are other AI artists. In my August 16, 2019 posting, I had this,

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.
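For the curious, the adversarial mechanism behind a GAN can be caricatured in a few dozen lines: a generator learns to produce convincing fakes while a discriminator learns to catch them, and each improves against the other. The sketch below is my own toy example in PyTorch (not the code behind Obvious or AICAN; all sizes and settings are assumptions), playing the game over simple numbers rather than images.

```python
import torch
from torch import nn

# Minimal GAN sketch: a generator and discriminator contesting toy 1-D data.
# Every hyperparameter here is an illustrative assumption.

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))               # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()) # discriminator
loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data: samples near 2.0
    fake = G(torch.randn(64, 8))            # generator's attempts

    # The discriminator learns to label real samples 1 and fakes 0.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # The generator learns to make the discriminator say 1 for its fakes.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(fake.mean().item())  # drifts toward 2.0 as the generator improves
```

Scale the same tug-of-war up to millions of parameters and image-shaped outputs and you have the machinery behind works like “Portrait of Edmond de Belamy.”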

As might be expected, not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),

Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.

As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.

They have not, in actuality, revealed one secret or solved a single mystery.

What they have done is generate feel-good stories about AI.

Take the reports about the Modigliani and Picasso paintings.

These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.

In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.

The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
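Since that passage is the closest this discussion comes to the mechanics, my friend, here is a rough sketch of the optimization at the heart of neural style transfer. It is a toy of my own devising: real systems extract features with a large pretrained network (VGG is the classic choice), whereas I substitute a small fixed random projection so the example stays self-contained; every name and weight below is an assumption.

```python
import torch

# Sketch of style-transfer optimization: adjust an image so its features
# match one source's content and another source's style statistics.

torch.manual_seed(0)
extract = torch.nn.Linear(256, 64)   # stand-in for a pretrained feature extractor
for p in extract.parameters():
    p.requires_grad_(False)          # the extractor stays fixed; only the image changes

def gram(features):
    """Gram matrix: feature co-occurrence statistics that encode 'style'."""
    f = features.reshape(8, 8)       # pretend: 8 channels x 8 positions
    return f @ f.T

content_img = torch.rand(256)        # e.g. an old X-ray of a canvas
style_img = torch.rand(256)          # e.g. a painting in the target style
result = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([result], lr=0.05)

for step in range(200):
    content_loss = (extract(result) - extract(content_img)).pow(2).mean()
    style_loss = (gram(extract(result)) - gram(extract(style_img))).pow(2).mean()
    loss = content_loss + 0.1 * style_loss   # the weight trades content against style
    opt.zero_grad(); loss.backward(); opt.step()
```

The ‘promise’ Drimmer describes lives in that weighted sum: the output is whatever minimizes the loss, not a recovered painting.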

As you can ‘see’, my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.

Visual culture: seeing into the future

The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, not mentioned in the write-up, are represented, while theatre and other performing arts are neither mentioned nor represented. That’s not a surprise.

In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.

Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.

Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.

Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.

Learning about robots, automatons, artificial intelligence, and more

I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.

It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly, and beefing up its website with background information about its current shows would be a good place to start.

Robots, automata, and artificial intelligence

Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. I have a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,

The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:

The Al-Jazari automatons

The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.

As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
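In modern terms, the drum is a lookup from rotation position to notes, and the pegs are the program. A few lines of Python (my own whimsical reconstruction; the slots and note names are invented) make the point:

```python
# Toy model of Al-Jazari's peg drum: pegs at drum positions trip levers that sound notes.

def play(drum, steps=8):
    for position in range(steps):              # the water wheel turns the drum one step
        struck = [note for (slot, note) in drum if slot == position]
        print(position, struck or "(rest)")    # pegs at this slot strike their levers

melody_one = [(0, "D"), (2, "F"), (4, "A"), (6, "D")]
melody_two = [(0, "C"), (1, "C"), (4, "G"), (5, "E")]  # a different set of pegs

play(melody_one)
play(melody_two)  # swapping pegs = reprogramming, hence 'programmable machine'
```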

If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC News Radio news item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot’ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.

AI is often used interchangeably with ‘robot’ but they aren’t the same; not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.

*BBVA (Banco Bilbao Vizcaya Argentaria) is a Spanish multinational financial services company which runs OpenMind, a non-profit project (see its About us page) that disseminates information on robotics and so much more.*

You can’t always get what you want

My friend,

I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.

Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”

And, from later in my posting,

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

That last quote brings me back to my comment about theatre and performing arts not being part of the show. Of course, the curators couldn’t do it all but a website with my hoped-for background and additional information could have helped to solve the problem.

The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

US-centric

My friend,

I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show where there was significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)

The Americans, of course, are very important developers in the field of AI but they are not alone, and it would have been nice to have seen something from Asia and/or Africa and/or one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black Communities; for some clarity, you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)

As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.

I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that deep learning was pioneered in large part at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),

Geoffrey Everest Hinton CC FRS FRSC (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.

Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning. They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning”, and have continued to give public talks together.

Some of Hinton’s work was started in the US but, since 1987, he has pursued his interests at the University of Toronto. His faith in neural networks wasn’t vindicated until 2012. Katrina Onstad’s February 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.

Then there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about visual arts and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?

You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US sci-fi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)

Of course, there are the CDM student projects but the projects seem less like an exploration of visual culture than an exploration of technology and industry requirements, from the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition’ webpage, Note: A link has been removed,

In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].

Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?

Playing well with others

It’s always a mystery to me why the Vancouver cultural scene seems to be composed of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.

For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.

There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme: in 2017, the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramón y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.

In fact, where were the science and technology communities for this show?

On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.

At this year’s conference, they have at least two sessions that indicate interests similar to the VAG’s. First, there’s Immersive Visualization for Research, Science and Art, which includes AI and machine learning along with other related topics. There’s also Frontiers Talk: Art in the Age of AI: Can Computers Create Art?

This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.

Last time SIGGRAPH was here, the organizers seemed interested in outreach and they offered some free events.

In the end

It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.

On July 27, 2022, the VAG held a virtual event with an artist,

Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.

Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,

… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.

Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Illiac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.

It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.

Do go. Do enjoy, my friend.

Coming soon: Responsible AI at the 35th Canadian Conference on Artificial Intelligence (AI) from 30 May to 3 June, 2022

35 years? How have I not stumbled on this conference before? Anyway, I’m glad to have the news (even if I’m late to the party), from the 35th Canadian Conference on Artificial Intelligence homepage,

The 35th Canadian Conference on Artificial Intelligence will take place virtually in Toronto, Ontario, from 30 May to 3 June, 2022. All presentations and posters will be online, with in-person social events to be scheduled in Toronto for those who are able to attend in-person. Viewing rooms and isolated presentation facilities will be available for all visitors to the University of Toronto during the event.

The event is collocated with the Computer and Robot Vision conferences. These events (AI·CRV 2022) will bring together hundreds of leaders in research, industry, and government, as well as Canada’s most accomplished students. They showcase Canada’s ingenuity, innovation and leadership in intelligent systems and advanced information and communications technology. A single registration lets you attend any session in the two conferences, which are scheduled in parallel tracks.

The conference proceedings are published on PubPub, an open-source, privacy-respecting, and open access online platform. They are submitted to be indexed and abstracted in leading indexing services such as DBLP, ACM, Google Scholar.

You can view last year’s [2021] proceedings here: https://caiac.pubpub.org/ai2021.

The 2021 proceedings appear to be open access.

I can’t tell if ‘Responsible AI’ has been included as a specific topic in previous conferences but 2022 is definitely hosting a couple of sessions based on that theme, from the Responsible AI activities webpage,

Keynote speaker: Julia Stoyanovich

New York University

“Building Data Equity Systems”

Equity as a social concept — treating people differently depending on their endowments and needs to provide equality of outcome rather than equality of treatment — lends a unifying vision for ongoing work to operationalize ethical considerations across technology, law, and society.  In my talk I will present a vision for designing, developing, deploying, and overseeing data-intensive systems that consider equity as an essential objective.  I will discuss ongoing technical work, and will place this work into the broader context of policy, education, and public outreach.

Biography: Julia Stoyanovich is an Institute Associate Professor of Computer Science & Engineering at the Tandon School of Engineering, Associate Professor of Data Science at the Center for Data Science, and Director of the Center for Responsible AI at New York University (NYU).  Her research focuses on responsible data management and analysis: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data science lifecycle.  She established the “Data, Responsibly” consortium and served on the New York City Automated Decision Systems Task Force, by appointment from Mayor de Blasio.  Julia developed and has been teaching courses on Responsible Data Science at NYU, and is a co-creator of an award-winning comic book series on this topic.  In addition to data ethics, Julia works on the management and analysis of preference and voting data, and on querying large evolving graphs. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst.  She is a recipient of an NSF CAREER award and a Senior Member of the ACM.

Panel on ethical implications of AI

Panelists

Luke Stark, Faculty of Information and Media Studies, Western University

Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, ON. His work interrogating the historical, social, and ethical impacts of computing and AI technologies has appeared in journals including The Information Society, Social Studies of Science, and New Media & Society, and in popular venues like Slate, The Globe and Mail, and The Boston Globe. Luke was previously a Postdoctoral Researcher in AI ethics at Microsoft Research, and a Postdoctoral Fellow in Sociology at Dartmouth College; he holds a PhD from the Department of Media, Culture, and Communication at New York University, and a BA and MA from the University of Toronto.

Nidhi Hegde, Associate Professor in Computer Science and Amii [Alberta Machine Intelligence Institute] Fellow at the University of Alberta

Nidhi is a Fellow and Canada CIFAR [Canadian Institute for Advanced Research] AI Chair at Amii and an Associate Professor in the Department of Computing Science at the University of Alberta. Before joining UAlberta, she spent many years in industry research labs. Most recently, she was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where her team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, she spent many years in research labs in Europe working on a variety of interesting and impactful problems. She was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where she led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. She also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, privacy, and recommendations. Nidhi is an associate editor of the IEEE/ACM Transactions on Networking, and an editor of the Elsevier Performance Evaluation Journal.

Karina Vold, Assistant Professor, Institute for the History and Philosophy of Science and Technology, University of Toronto

Dr. Karina Vold is an Assistant Professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. She is also a Faculty Affiliate at the U of T Schwartz Reisman Institute for Technology and Society, a Faculty Associate at the U of T Centre for Ethics, and an Associate Fellow at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. Vold specialises in Philosophy of Cognitive Science and Philosophy of Artificial Intelligence, and her recent research has focused on human autonomy, cognitive enhancement, extended cognition, and the risks and ethics of AI.

Elissa Strome, Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR

Elissa is Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR, working with research leaders across the country to implement Canada’s national research strategy in AI.  Elissa completed her PhD in Neuroscience from the University of British Columbia in 2006. Following a post-doc at Lund University, in Sweden, she decided to pursue a career in research strategy, policy and leadership. In 2008, she joined the University of Toronto’s Office of the Vice-President, Research and Innovation and was Director of Strategic Initiatives from 2011 to 2015. In that role, she led a small team dedicated to advancing the University’s strategic research priorities, including international institutional research partnerships, the institutional strategy for prestigious national and international research awards, and the establishment of the SOSCIP [Southern Ontario Smart Computing Innovation Platform] research consortium in 2012. From 2015 to 2017, Elissa was Executive Director of SOSCIP, leading the 17-member industry-academic consortium through a major period of growth and expansion, and establishing SOSCIP as Ontario’s leading platform for collaborative research and development in data science and advanced computing.

Tutorial on AI and the Law

Prof. Maura R. Grossman, University of Waterloo, and

Hon. Paul W. Grimm, United States District Court for the District of Maryland

AI applications are becoming more and more ubiquitous in almost every field of endeavor, and the same is true as to the legal industry. This panel, consisting of an experienced lawyer and computer scientist, and a U.S. federal trial court judge, will discuss how AI is currently being used in the legal profession, what adoption has been like since the introduction of AI to law in about 2009, what legal and ethical issues AI applications have raised in the legal system, and how a sitting trial court judge approaches AI evidence, in particular, the determination of whether to admit that AI evidence or not, when they are a non-expert.

How is AI being used in the legal industry today?

What has the legal industry’s reaction been to legal AI applications?

What are some of the biggest legal and ethical issues implicated by legal and other AI applications?

How does a sitting trial court judge evaluate AI evidence when making a determination of whether to admit that AI evidence or not?

What considerations go into the trial judge’s decision?

What happens if the judge is not an expert in AI?  Do they recuse?

You may recognize the name, Julia Stoyanovich, as she was mentioned here in my March 23, 2022 posting titled, The “We are AI” series gives citizens a primer on AI, a series of peer-to-peer workshops aimed at introducing the basics of AI to the public. There’s also a comic book series associated with it and all of the materials are available for free. It’s all there in the posting.

Getting back to the Responsible AI activities webpage, there’s one more activity and this seems a little less focused on experts,

Virtual Meet and Greet on Responsible AI across Canada

Given the many activities that are fortunately happening around the responsible and ethical aspects of AI here in Canada, we are organizing an event in conjunction with Canadian AI 2022 this year to become familiar with what everyone is doing and what activities they are engaged in.

It would be wonderful to have a unified community here in Canada around responsible AI so we can support each other and find ways to more effectively collaborate and synergize. We are aiming for a casual, discussion-oriented event rather than talks or formal presentations.

The meet and greet will be hosted by Ebrahim Bagheri, Eleni Stroulia and Graham Taylor. If you are interested in participating, please email Ebrahim Bagheri (bagheri@ryerson.ca).

Thank you to the co-chairs for getting the word out about the Responsible AI topic at the conference,

Responsible AI Co-chairs

Ebrahim Bagheri
Professor
Electrical, Computer, and Biomedical Engineering, Ryerson University
Website

Eleni Stroulia
Professor, Department of Computing Science
Acting Vice Dean, Faculty of Science
Director, AI4Society Signature Area
University of Alberta
Website

The organization which hosts these conferences has an almost palindromic abbreviation: CAIAC, for Canadian Artificial Intelligence Association (CAIA) or Association Intelligence Artificiel Canadien (AIAC). Yes, you do have to read it in both English and French, and the C at one end or the other gets knocked off depending on which language you’re using, which is why it’s only almost a palindrome.

The CAIAC is almost 50 years old (under various previous names) and has its website here.

*April 22, 2022 at 1400 hours PT removed ‘the’ from this section of the headline: “… from 30 May to 3 June, 2022.” and removed period from the end.

Going blind when your neural implant company flirts with bankruptcy (long read)

This story got me to thinking about what happens when any kind of implant company (pacemaker, deep brain stimulator, etc.) goes bankrupt or is acquired by another company with a different business model.

As I worked on this piece, more issues were raised and the scope expanded to include prosthetics along with implants, while the focus narrowed to ‘neuro’, as in neural implants and neuroprosthetics. At the same time, I found salient examples for this posting in other medical advances such as gene editing.

In sum, all references to implants and prosthetics are to neural devices and some issues are illustrated with salient examples from other medical advances (specifically, gene editing).

Definitions (for those who find them useful)

The US Food and Drug Administration defines implants and prosthetics,

Medical implants are devices or tissues that are placed inside or on the surface of the body. Many implants are prosthetics, intended to replace missing body parts. Other implants deliver medication, monitor body functions, or provide support to organs and tissues.

As for what constitutes a neural implant/neuroprosthetic, there’s this from Emily Waltz’s January 20, 2020 article (How Do Neural Implants Work? Neural implants are used for deep brain stimulation, vagus nerve stimulation, and mind-controlled prostheses) for the Institute of Electrical and Electronics Engineers (IEEE) Spectrum magazine,

A neural implant, then, is a device—typically an electrode of some kind—that’s inserted into the body, comes into contact with tissues that contain neurons, and interacts with those neurons in some way.

Now, let’s start with the recent near bankruptcy of a retinal implant company.

The company goes bust (more or less)

From a February 25, 2022 Science Friday (a National Public Radio program) posting/audio file, Note: Links have been removed,

Barbara Campbell was walking through a New York City subway station during rush hour when her world abruptly went dark. For four years, Campbell had been using a high-tech implant in her left eye that gave her a crude kind of bionic vision, partially compensating for the genetic disease that had rendered her completely blind in her 30s. “I remember exactly where I was: I was switching from the 6 train to the F train,” Campbell tells IEEE Spectrum. “I was about to go down the stairs, and all of a sudden I heard a little ‘beep, beep, beep’ sound.”

It wasn’t her phone battery running out. It was her Argus II retinal implant system powering down. The patches of light and dark that she’d been able to see with the implant’s help vanished.

Terry Byland is the only person to have received this kind of implant in both eyes. He got the first-generation Argus I implant, made by the company Second Sight Medical Products, in his right eye in 2004, and the subsequent Argus II implant in his left 11 years later. He helped the company test the technology, spoke to the press movingly about his experiences, and even met Stevie Wonder at a conference. “[I] went from being just a person that was doing the testing to being a spokesman,” he remembers.

Yet in 2020, Byland had to find out secondhand that the company had abandoned the technology and was on the verge of going bankrupt. While his two-implant system is still working, he doesn’t know how long that will be the case. “As long as nothing goes wrong, I’m fine,” he says. “But if something does go wrong with it, well, I’m screwed. Because there’s no way of getting it fixed.”

Science Friday and the IEEE [Institute of Electrical and Electronics Engineers] Spectrum magazine collaborated to produce this story. You’ll find the audio files and the transcript of interviews with the authors and one of the implant patients in this February 25, 2022 Science Friday (a National Public Radio program) posting.

Here’s more from the February 15, 2022 IEEE Spectrum article by Eliza Strickland and Mark Harris,

Ross Doerr, another Second Sight patient, doesn’t mince words: “It is fantastic technology and a lousy company,” he says. He received an implant in one eye in 2019 and remembers seeing the shining lights of Christmas trees that holiday season. He was thrilled to learn in early 2020 that he was eligible for software upgrades that could further improve his vision. Yet in the early months of the COVID-19 pandemic, he heard troubling rumors about the company and called his Second Sight vision-rehab therapist. “She said, ‘Well, funny you should call. We all just got laid off,’ ” he remembers. “She said, ‘By the way, you’re not getting your upgrades.’ ”

These three patients, and more than 350 other blind people around the world with Second Sight’s implants in their eyes, find themselves in a world in which the technology that transformed their lives is just another obsolete gadget. One technical hiccup, one broken wire, and they lose their artificial vision, possibly forever. To add injury to insult: A defunct Argus system in the eye could cause medical complications or interfere with procedures such as MRI scans, and it could be painful or expensive to remove.

The writers included some information about what happened to the business, from the February 15, 2022 IEEE Spectrum article, Note: Links have been removed,

After Second Sight discontinued its retinal implant in 2019 and nearly went out of business in 2020, a public offering in June 2021 raised US $57.5 million at $5 per share. The company promised to focus on its ongoing clinical trial of a brain implant, called Orion, that also provides artificial vision. But its stock price plunged to around $1.50, and in February 2022, just before this article was published, the company announced a proposed merger with an early-stage biopharmaceutical company called Nano Precision Medical (NPM). None of Second Sight’s executives will be on the leadership team of the new company, which will focus on developing NPM’s novel implant for drug delivery.

The company’s current leadership declined to be interviewed for this article but did provide an emailed statement prior to the merger announcement. It said, in part: “We are a recognized global leader in neuromodulation devices for blindness and are committed to developing new technologies to treat the broadest population of sight-impaired individuals.”

It’s unclear what Second Sight’s proposed merger means for Argus patients. The day after the merger was announced, Adam Mendelsohn, CEO of Nano Precision Medical, told Spectrum that he doesn’t yet know what contractual obligations the combined company will have to Argus and Orion patients. But, he says, NPM will try to do what’s “right from an ethical perspective.” The past, he added in an email, is “simply not relevant to the new future.”

There may be some alternatives, from the February 15, 2022 IEEE Spectrum article (Note: Links have been removed),

Second Sight may have given up on its retinal implant, but other companies still see a need—and a market—for bionic vision without brain surgery. Paris-based Pixium Vision is conducting European and U.S. feasibility trials to see if its Prima system can help patients with age-related macular degeneration, a much more common condition than retinitis pigmentosa.

Daniel Palanker, a professor of ophthalmology at Stanford University who licensed his technology to Pixium, says the Prima implant is smaller, simpler, and cheaper than the Argus II. But he argues that Prima’s superior image resolution has the potential to make Pixium Vision a success. “If you provide excellent vision, there will be lots of patients,” he tells Spectrum. “If you provide crappy vision, there will be very few.”

Some clinicians involved in the Argus II work are trying to salvage what they can from the technology. Gislin Dagnelie, an associate professor of ophthalmology at Johns Hopkins University School of Medicine, has set up a network of clinicians who are still working with Argus II patients. The researchers are experimenting with a thermal camera to help users see faces, a stereo camera to filter out the background, and AI-powered object recognition. These upgrades are unlikely to result in commercial hardware today but could help future vision prostheses.

The writers have carefully balanced this piece so it is not an outright condemnation of the companies (Second Sight and Nano Precision), from the February 15, 2022 IEEE Spectrum article,

Failure is an inevitable part of innovation. The Argus II was an innovative technology, and progress made by Second Sight may pave the way for other companies that are developing bionic vision systems. But for people considering such an implant in the future, the cautionary tale of Argus patients left in the lurch may make a tough decision even tougher. Should they take a chance on a novel technology? If they do get an implant and find that it helps them navigate the world, should they allow themselves to depend upon it?

Abandoning the Argus II technology—and the people who use it—might have made short-term financial sense for Second Sight, but it’s a decision that could come back to bite the merged company if it does decide to commercialize a brain implant, believes Doerr.

For anyone curious about retinal implant technology (specifically the Argus II), I have a description in a June 30, 2015 posting.

Speculations and hopes for neuroprosthetics

The field of neuroprosthetics is very active. Dr Arthur Saniotis and Prof Maciej Henneberg speculate about the possibilities of a neuroprosthetic that may one day merge with neurons in a February 21, 2022 Nanowerk Spotlight article,

For over a generation several types of medical neuroprosthetics have been developed, which have improved the lives of thousands of individuals. For instance, cochlear implants have restored functional hearing in individuals with severe hearing impairment.

Further advances in motor neuroprosthetics are attempting to restore motor functions in tetraplegic, limb loss and brain stem stroke paralysis subjects.

Currently, scientists are working on various kinds of brain/machine interfaces [BMI] in order to restore movement and partial sensory function. One such device is the ‘Ipsihand’ that enables movement of a paralyzed hand. The device works by detecting the recipient’s intention in the form of electrical signals, thereby triggering hand movement.

Another recent development is the 12-month BMI gait neurorehabilitation program that uses a visual-tactile feedback system in combination with a physical exoskeleton and EEG-operated AI actuators while walking. This program has been tried on eight patients with reported improvements in lower limb movement and somatic sensation.

Surgically placed electrode implants have also reduced tremor symptoms in individuals with Parkinson’s disease.

Although neuroprosthetics have provided various benefits, they do have their problems. Firstly, electrode implants to the brain are prone to degradation, necessitating new implants after a few years. Secondly, as in any kind of surgery, implanted electrodes can cause post-operative infection and glial scarring. Furthermore, one study showed that the neurobiological efficacy of an implant is dependent on the speed of its insertion.

But what if humans designed a neuroprosthetic that could bypass the medical glitches of invasive neuroprosthetics? Instead of connecting devices to neural networks, this neuroprosthetic would directly merge with neurons – a novel step. Such a neuroprosthetic could radically optimize treatments for neurodegenerative disorders and brain injuries, and possibly cognitive enhancement [emphasis mine].

A team of three international scientists has recently designed a nanobased neuroprosthetic, which was published in Frontiers in Neuroscience (“Integration of Nanobots Into Neural Circuits As a Future Therapy for Treating Neurodegenerative Disorders“). [open access paper published in 2018]

An interesting feature of their nanobot neuroprosthetic is that it has been inspired from nature by way of endomyccorhizae – a type of plant/fungus symbiosis, which is over four hundred million years old. During endomyccorhizae, fungi use numerous threadlike projections called mycelium that penetrate plant roots, forming colossal underground networks with nearby root systems. During this process fungi take up vital nutrients while protecting plant roots from infections – a win-win relationship. Consequently, the nano-neuroprosthetic has been named ‘endomyccorhizae ligand interface’, or ‘ELI’ for short.

The Spotlight article goes on to describe how these nanobots might function. As for the possibility of cognitive enhancement, I wonder if that might come to be described as a form of ‘artificial intelligence’.

(Dr Arthur Saniotis and Prof Maciej Henneberg are both from the Department of Anthropology, Ludwik Hirszfeld Institute of Immunology and Experimental Therapy, Polish Academy of Sciences; and Biological Anthropology and Comparative Anatomy Research Unit, Adelaide Medical School, University of Adelaide. Abdul-Rahman Sawalma who’s listed as an author on the 2018 paper is from the Palestinian Neuroscience Initiative, Al-Quds University, Beit Hanina, Palestine.)

Saniotis and Henneberg’s Spotlight article presents an optimistic view of neuroprosthetics. It seems telling that they cite cochlear implants as a success story when the technology is viewed by many as ethically fraught (see the Cochlear implant Wikipedia entry; scroll down to ‘Criticism and controversy’).

Ethics and your implants

This is from an April 6, 2015 article by Luc Henry on technologist.eu,

Technologist: What are the potential consequences of accepting the “augmented human” in society?

Gregor Wolbring: There are many that we might not even envision now. But let me focus on failure and obsolescence [emphasis mine], two issues that are rarely discussed. What happens when the mechanism fails in the middle of an action? Failure has hazardous consequences, but obsolescence has psychological ones. … The constant surgical intervention needed to update the hardware may not be feasible. A person might feel obsolete if she cohabits with others using a newer version.

T. Are researchers working on prosthetics sometimes disconnected from reality?

G. W. Students engaged in the development of prosthetics have to learn how to think in societal terms and develop a broader perspective. Our education system provides them with a fascination for clever solutions to technological challenges but not with tools aiming at understanding the consequences, such as whether their product might increase or decrease social justice.

Wolbring is a professor at the University of Calgary’s Cumming School of Medicine (profile page) who writes on social issues to do with human enhancement/augmentation. As well,

Some of his areas of engagement are: ability studies including governance of ability expectations, disability studies, governance of emerging and existing sciences and technologies (e.g. nanoscale science and technology, molecular manufacturing, aging, longevity and immortality, cognitive sciences, neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors), impact of science and technology on marginalized populations, especially people with disabilities, the governance of bodily enhancement, sustainability issues, EcoHealth, resilience, ethics issues, health policy issues, human rights and sport.

He also maintains his own website here.

Not just startups

I’d classify Second Sight as a tech startup, and startups have a high rate of failure, something that may not have been clear to the patients who received the implants. Clinical trials can present problems too, as this excerpt from my September 17, 2020 posting notes,

This October 31, 2017 article by Emily Underwood for Science was revelatory,

“In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.”

Symbiosis can be another consequence, as mentioned in my September 17, 2020 posting,

From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence. [emphasis mine]

It’s complicated

For a lot of people these devices are or could be life-changing. At the same time, there are a number of different issues related to implants/prosthetics; the following is not an exhaustive list. As Wolbring notes, issues that we can’t begin to imagine now are likely to emerge as these medical advances become more ubiquitous.

Ability/disability?

Assistive technologies are almost always portrayed as helpful. For example, a cochlear implant gives people without hearing the ability to hear. The assumption is that this is always a good thing—unless you’re a deaf person who wants to define the problem a little differently. Who gets to decide what is good and ‘normal’ and what is desirable?

While the cochlear implant is the most extreme example I can think of, there are variations of these questions throughout the ‘disability’ communities.

Also, as Wolbring notes in his Technologist.eu interview, the education system tends to favour technological solutions that don’t take social issues into account. Wolbring cites social justice issues when he mentions failure and obsolescence.

Technical failures and obsolescence

The story excerpted earlier in this posting opened with a striking example of a technical failure at an awkward moment: a blind woman who depends on her retinal implant loses her artificial sight as she maneuvers through a New York City subway station.

Aside from being an awful way to find out that the company supplying and supporting your implant is in serious financial trouble and can’t offer assistance or repairs, the failure offers a preview of what could happen as implants and prosthetics become more commonly used.

Keeping up/fomo (fear of missing out)/obsolescence

It used to be called ‘keeping up with the Joneses’: the practice of comparing yourself and your worldly goods to someone else’s and then trying to equal or better what they have. Usually, people want to have more and better than the mythical Joneses.

These days, the phenomenon (which has been expanded to include social networking) is better known as ‘fomo’ or fear of missing out (see the Fear of missing out Wikipedia entry).

Whatever you want to call it, humanity’s competitive nature can be seen where technology is concerned. When I worked in technology companies, I noticed that hardware and software were sometimes purchased for features that were effectively useless to us. But not upgrading to a newer version was unthinkable.

Call it fomo or ‘keeping up with the Joneses’; either way, it’s a powerful force, and when people (and even companies) miss out or can’t keep up, it can lead to a sense of inferiority, in the same way that having an obsolete implant or prosthetic could.

Social consequences

Could there be a neural implant/neuroprosthetic divide? There is already a digital divide (from its Wikipedia entry),

The digital divide is a gap between those who have access to new technology and those who do not … people without access to the Internet and other ICTs [information and communication technologies] are at a socio-economic disadvantage because they are unable or less able to find and apply for jobs, shop and sell online, participate democratically, or research and learn.

After reading Wolbring’s comments, it’s not hard to imagine a neural implant/neuroprosthetic divide with its attendant psychological and social consequences.

What kind of human am I?

There are other issues, as noted in my September 17, 2020 posting. I’ve already mentioned ‘Patient 6’, the woman who developed a symbiotic relationship with her brain/computer interface. This is how the relationship ended,

… He [Frederic Gilbert, ethicist] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

Above human

The possibility that implants will not merely restore or endow someone with ‘standard’ sight, hearing, or motion but will augment or improve on nature was broached in my May 2, 2013 posting, More than human—a bionic ear that extends hearing beyond the usual frequencies, one of many posts in the ‘Human Enhancement’ category on this blog.

More recently, Hugh Herr, an Associate Professor at the Massachusetts Institute of Technology (MIT), leader of the Biomechatronics research group at MIT’s Media Lab, a double amputee, and a prosthetics enthusiast, starred in the February 23, 2022 broadcast of ‘Augmented’ on the Public Broadcasting Service (PBS) science programme, Nova.

I found ‘Augmented’ a little off-putting, as it gave every indication of being an advertisement for Herr’s work in the form of a hero’s journey; I was not able to watch more than 10 minutes. This preview gives you a pretty good idea of what it was like, although the part in ‘Augmented’ where he says he’d like to be a cyborg hasn’t been included,

At a guess, there were a few talking heads (taking up 10%–20% of the running time) who provided some cautionary words to counterbalance the enthusiasm in the rest of the programme. It’s a standard approach designed to give the impression that both sides of a question are being recognized. The cautionary material is usually inserted past the halfway mark, while leaving several minutes at the end for returning to the more optimistic material.

In a February 2, 2010 posting I have excerpts from an article featuring quotes from Herr that I still find startling,

Written by Paul Hochman for Fast Company, Bionic Legs, iLimbs, and Other Super-Human Prostheses [ETA March 23, 2022: an updated version of the article is now on Genius.com] delves further into the world where people may be willing to trade a healthy limb for a prosthetic. From the article,

There are many advantages to having your leg amputated.

Pedicure costs drop 50% overnight. A pair of socks lasts twice as long. But Hugh Herr, the director of the Biomechatronics Group at the MIT Media Lab, goes a step further. “It’s actually unfair,” Herr says about amputees’ advantages over the able-bodied. “As tech advancements in prosthetics come along, amputees can exploit those improvements. They can get upgrades. A person with a natural body can’t.”

Herr is not the only one who favours prosthetics (also from the Hochman article),

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.”

But Bailey is most surprised by his own reaction. “When I’m wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human. [emphasis mine] It’s a very powerful thing.”

My September 17, 2020 posting touches on more ethical and social issues including some of those surrounding consumer neurotechnologies or brain-computer interfaces (BCI). Unfortunately, I don’t have space for these issues here.

Money makes the world go around

Money and business practices have been indirectly referenced (for the most part) up to now in this posting. The February 15, 2022 IEEE Spectrum article and Hochman’s article, Bionic Legs, iLimbs, and Other Super-Human Prostheses, cover two aspects of the money angle.

In the IEEE Spectrum article, a tech startup, Second Sight, ran into financial trouble and agreed to be acquired by a company that has no plans to develop Second Sight’s core technology. The people implanted with the Argus II technology have been stranded, as were ‘Patient 6’ and others participating in the clinical trial described in the July 24, 2019 article by Liam Drew for Nature Outlook: The brain, mentioned earlier in this posting.

I don’t know anything about the business bankruptcy mentioned in the Drew article but one of the business problems described in the IEEE Spectrum article suggests that Second Sight was founded before answering a basic question, “What is the market size for this product?”

On 18 July 2019, Second Sight sent Argus patients a letter saying it would be phasing out the retinal implant technology to clear the way for the development of its next-generation brain implant for blindness, Orion, which had begun a clinical trial with six patients the previous year. …

“The leadership at the time didn’t believe they could make [the Argus retinal implant] part of the business profitable,” Greenberg [Robert Greenberg, Second Sight co-founder] says. “I understood the decision, because I think the size of the market turned out to be smaller than we had thought.”

….

The question of whether a medical procedure or medicine can be profitable (or should that be ‘sufficiently profitable’?) was referenced in my April 26, 2019 posting in the context of gene editing and personalized medicine.

Edward Abrahams, president of the Personalized Medicine Coalition (US-based), advocates for personalized medicine while noting, in passing, market forces as represented by Goldman Sachs in his May 23, 2018 piece for statnews.com (Note: A link has been removed),

Goldman Sachs, for example, issued a report titled “The Genome Revolution.” It argues that while “genome medicine” offers “tremendous value for patients and society,” curing patients may not be “a sustainable business model.” [emphasis mine] The analysis underlines that the health system is not set up to reap the benefits of new scientific discoveries and technologies. Just as we are on the precipice of an era in which gene therapies, gene-editing, and immunotherapies promise to address the root causes of disease, Goldman Sachs says that these therapies have a “very different outlook with regard to recurring revenue versus chronic therapies.”

The ‘Glybera’ story in my July 4, 2019 posting (scroll down about 40% of the way) highlights the issue with “recurring revenue versus chronic therapies,”

Kelly Crowe, in a November 17, 2018 article for CBC (Canadian Broadcasting Corporation) News, writes about Glybera,

It is one of this country’s great scientific achievements.

The first drug ever approved that can fix a faulty gene.

It’s called Glybera, and it can treat a painful and potentially deadly genetic disorder with a single dose — a genuine made-in-Canada medical breakthrough.

But most Canadians have never heard of it.

Here’s my summary (from the July 4, 2019 posting),

It cost $1M for a single treatment and that single treatment is good for at least 10 years.

Pharmaceutical companies make their money from repeated use of their medicaments and Glybera required only one treatment, so the company priced it according to how much it would have gotten for repeated use: $100,000 per year over a 10-year period, which accounts for the $1M price tag. The company was not able to persuade governments and/or individuals to pay the cost.

In the end, 31 people got the treatment; most of them received it for free through clinical trials.

For rich people only?

Megan Devlin’s March 8, 2022 article for the Daily Hive announces a major investment in medical research (Note: A link has been removed),

Vancouver [Canada] billionaire Chip Wilson revealed Tuesday [March 8, 2022] that he has a rare genetic condition that causes his muscles to waste away, and announced he’s spending $100 million on research to find a cure.

His condition is called facio-scapulo-humeral muscular dystrophy, or FSHD for short. It progresses rapidly in some people and more slowly in others, but is characterized by progressive muscle weakness starting in the face, the neck, and shoulders, and later the lower body.

“I’m out for survival of my own life,” Wilson said.

“I also have the resources to do something about this which affects so many people in the world.”

Wilson hopes the $100 million will produce a cure or muscle-regenerating treatment by 2027.

“This could be one of the biggest discoveries of all time, for humankind,” Wilson said. “Most people lose muscle, they fall, and they die. If we can keep muscle as we age this can be a longevity drug like we’ve never seen before.”

According to rarediseases.org, FSHD affects between four and 10 people out of every 100,000 [emphasis mine]. Right now, therapies are limited to exercise and pain management. There is no way to stall or reverse the disease’s course.

Wilson is best known for founding athleisure clothing company Lululemon. He also owns the most expensive home in British Columbia, a $73 million mansion in Vancouver’s Kitsilano neighbourhood.

Let’s see what the numbers add up to,

4 – 10 people out of 100,000

40 – 100 people out of 1M

1,200 – 3,000 people out of 30M (let’s say this is Canada’s population)

12,000 – 30,000 people out of 300M (let’s say this is the US’s population)

44,600 – 111,500 out of 1.115B (let’s say this is China’s population)

The rough total comes to 57,800 to 144,500 people across three countries with a combined population of 1.445B. Given how business currently operates, it seems unlikely that any company will want to offer Wilson’s hoped-for medical therapy, although he and possibly others may benefit from a clinical trial.
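
For anyone who wants to check or extend the arithmetic, here is a minimal Python sketch. The population figures are the same rough assumptions used above (not official statistics), and the prevalence range is the rarediseases.org figure quoted earlier,

# Rough estimate of FSHD patient counts from a prevalence range.
# Population figures are this post's rough assumptions, not official statistics.
PREVALENCE_LOW = 4 / 100_000    # rarediseases.org: 4 to 10 per 100,000
PREVALENCE_HIGH = 10 / 100_000

populations = {
    "Canada (assumed)": 30_000_000,
    "US (assumed)": 300_000_000,
    "China (assumed)": 1_115_000_000,
}

total_low = total_high = 0
for country, population in populations.items():
    low = population * PREVALENCE_LOW
    high = population * PREVALENCE_HIGH
    total_low += low
    total_high += high
    print(f"{country}: {low:,.0f} - {high:,.0f} people")

print(f"Combined: {total_low:,.0f} - {total_high:,.0f} out of {sum(populations.values()):,}")
# Prints: Combined: 57,800 - 144,500 out of 1,445,000,000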

Should profit or wealth be considerations?

The stories about the patients with the implants and the patients who need Glybera are heartbreaking and point to a question not often asked when medical therapies and medications are developed. Is the profit model the best choice and, if so, how much profit?

I have no answer to that question but I wish it were asked by medical researchers and policy makers.

As for wealthy people dictating the direction for medical research, I don’t have answers there either. I hope the research will yield applications and/or valuable information for more than Wilson’s disease.

It’s his money after all

Wilson calls his new venture SolveFSHD. It doesn’t seem to be affiliated with any university or biomedical science organization, and it’s not clear how the money will be awarded (no programmes, no application procedure, no panel of experts). There are three people on the team: Eva R. Chin, scientist and executive director; Chip Wilson, SolveFSHD founder/funder and FSHD patient; and Neil Camarta, engineer, executive (fossil fuels and clean energy), and FSHD patient. There’s also a Twitter feed (presumably for the latest updates): https://twitter.com/SOLVEFSHD.

Perhaps unrelated but intriguing is news about a proposed new building in Kenneth Chan’s March 31, 2022 article for the Daily Hive,

Low Tide Properties, the real estate arm of Lululemon founder Chip Wilson [emphasis mine], has submitted a new development permit application to build a 148-ft-tall, eight-storey, mixed-use commercial building in the False Creek Flats of Vancouver.

The proposal, designed by local architectural firm Musson Cattell Mackey Partnership, calls for 236,000 sq ft of total floor area, including 105,000 sq ft of general office space, 102,000 sq ft of laboratory space [emphasis mine], and 5,000 sq ft of ground-level retail space. An outdoor amenity space for building workers will be provided on the rooftop.

[next door] The 2001-built, five-storey building at 1618 Station Street immediately to the west of the development site is also owned by Low Tide Properties [emphasis mine]. The Ferguson, the name of the existing building, contains about 79,000 sq ft of total floor area, including 47,000 sq ft of laboratory space and 32,000 sq ft of general office space. Biotechnology company Stemcell Technologies [STEMCELL Technologies] is the anchor tenant [emphasis mine].

I wonder if this proposed new building will house SolveFSHD and perhaps other FSHD-focused enterprises. The proximity of STEMCELL Technologies could be quite convenient. In any event, $100M will buy a lot (pun intended).

The end

Issues I’ve described here in the context of neural implants/neuroprosthetics and cutting-edge medical advances are standard problems, not specific to these technologies/treatments:

  • What happens when the technology fails (hopefully not at a critical moment)?
  • What happens when your supplier goes out of business or discontinues the products you purchase from them?
  • How much does it cost?
  • Who can afford the treatment/product? Will it only be for rich people?
  • Will this technology/procedure/etc. exacerbate or create new social tensions between social classes, cultural groups, religious groups, races, etc.?

Of course, having your neural implant fail suddenly in the middle of a New York City subway station seems a substantively different experience from having your car break down on the road.

There are, of course, the issues we can’t yet envision (as Wolbring notes), and there are issues such as symbiotic relationships with our implants and/or feeling that you are “above human.” Whether symbiosis and ‘implant/prosthetic superiority’ will affect more than a small number of people or become major issues is still to be determined.

There’s a lot to be optimistic about where new medical research and advances are concerned but I would like to see more thoughtful coverage in the media (e.g., news programmes and documentaries like ‘Augmented’) and more thoughtful comments from medical researchers.

Of course, the biggest issue I’ve raised here is about the current business models for health care products, where profit is valued over people’s health and well-being. It’s a big question and I don’t see any definitive answers, but the question put me in mind of this quote (from a September 22, 2020 obituary for US Supreme Court Justice Ruth Bader Ginsburg by Irene Monroe for Curve),

Ginsburg’s advocacy for justice was unwavering and showed it, especially with each oral dissent. In another oral dissent, Ginsburg quoted a familiar Martin Luther King Jr. line, adding her coda: “The arc of the universe is long, but it bends toward justice,” but only “if there is a steadfast commitment to see the task through to completion.” …

Martin Luther King Jr. popularized and paraphrased the quote (from a January 18, 2018 article by Mychal Denzel Smith for Huffington Post),

His use of the quote is best understood by considering his source material. “The arc of the moral universe is long, but it bends toward justice” is King’s clever paraphrasing of a portion of a sermon delivered in 1853 by the abolitionist minister Theodore Parker. Born in Lexington, Massachusetts, in 1810, Parker studied at Harvard Divinity School and eventually became an influential transcendentalist and minister in the Unitarian church. In that sermon, Parker said: “I do not pretend to understand the moral universe. The arc is a long one. My eye reaches but little ways. I cannot calculate the curve and complete the figure by experience of sight. I can divine it by conscience. And from what I see I am sure it bends toward justice.”

I choose to keep faith that people will get the healthcare products they need; meanwhile, all of us need to keep working at making access more fair.

UNESCO’s first global recommendations on the ethics of artificial intelligence (AI) announced

This makes a nice accompaniment to my commentary (December 3, 2021 posting) on the Nature of Things programme (telecast by the Canadian Broadcasting Corporation), The Machine That Feels.

Here’s UNESCO’s (United Nations Educational, Scientific and Cultural Organization) November 25, 2021 press release making the announcement (also received via email),

UNESCO member states adopt the first ever global agreement [recommendation] on the Ethics of Artificial Intelligence

Paris, 25 Nov [2021] – Audrey Azoulay, Director-General of UNESCO, presented Thursday the first ever global standard on the ethics of artificial intelligence, adopted by the member states of UNESCO at the General Conference.

This historic text defines the common values and principles which will guide the construction of the necessary legal infrastructure to ensure the healthy development of AI.

AI is pervasive, and enables many of our daily routines – booking flights, steering driverless cars, and personalising our morning news feeds. AI also supports the decision-making of governments and the private sector.

AI technologies are delivering remarkable results in highly specialized fields such as cancer screening and building inclusive environments for people with disabilities. They also help combat global problems like climate change and world hunger, and help reduce poverty by optimizing economic aid.

But the technology is also bringing new unprecedented challenges. We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable AI technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues.

In 2018, Audrey Azoulay, Director-General of UNESCO, launched an ambitious project: to give the world an ethical framework for the use of artificial intelligence. Three years later, thanks to the mobilization of hundreds of experts from around the world and intense international negotiations, UNESCO’s 193 member states have just officially adopted this ethical framework.

“The world needs rules for artificial intelligence to benefit humanity. The Recommendation on the ethics of AI is a major answer. It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its 193 Member States in its implementation and ask them to report regularly on their progress and practices,” said Audrey Azoulay, UNESCO Director-General.

The content of the recommendation

The Recommendation [emphasis mine] aims to realize the advantages AI brings to society and reduce the risks it entails. It ensures that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy.

*Protecting data

The Recommendation calls for action beyond what tech firms and governments are doing to guarantee individuals more protection by ensuring transparency, agency and control over their personal data. It states that individuals should all be able to access or even erase records of their personal data. It also includes actions to improve data protection and an individual’s knowledge of, and right to control, their own data. It also increases the ability of regulatory bodies around the world to enforce this.

*Banning social scoring and mass surveillance

The Recommendation explicitly bans the use of AI systems for social scoring and mass surveillance. These types of technologies are very invasive, they infringe on human rights and fundamental freedoms, and they are used in a broad way. The Recommendation stresses that when developing regulatory frameworks, Member States should consider that ultimate responsibility and accountability must always lie with humans and that AI technologies should not be given legal personality themselves.

*Helping to monitor and evaluate

The Recommendation also sets the ground for tools that will assist in its implementation. Ethical Impact Assessment is intended to help countries and companies developing and deploying AI systems to assess the impact of those systems on individuals, on society and on the environment. Readiness Assessment Methodology helps Member States to assess how ready they are in terms of legal and technical infrastructure. This tool will assist in enhancing the institutional capacity of countries and recommend appropriate measures to be taken in order to ensure that ethics are implemented in practice. In addition, the Recommendation encourages Member States to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee auditing and continuous monitoring efforts.

*Protecting the environment

The Recommendation emphasises that AI actors should favour data-, energy- and resource-efficient AI methods that will help ensure that AI becomes a more prominent tool in the fight against climate change and in tackling environmental issues. The Recommendation asks governments to assess the direct and indirect environmental impact throughout the AI system life cycle. This includes its carbon footprint, energy consumption and the environmental impact of raw material extraction for supporting the manufacturing of AI technologies. It also aims at reducing the environmental impact of AI systems and data infrastructures. It incentivizes governments to invest in green tech, and if there are disproportionate negative impacts of AI systems on the environment, the Recommendation instructs that they should not be used.

“Decisions impacting millions of people should be fair, transparent and contestable. These new technologies must help us address the major challenges in our world today, such as increased inequalities and the environmental crisis, and not deepen them,” said Gabriela Ramos, UNESCO’s Assistant Director-General for Social and Human Sciences.

Emerging technologies such as AI have proven their immense capacity to deliver for good. However, their negative impacts, which are exacerbating an already divided and unequal world, should be controlled. AI developments should abide by the rule of law, avoiding harm, and ensuring that when harm happens, accountability and redressal mechanisms are at hand for those affected.

If I read this properly (and it took me a little while), this is an agreement on the nature of the recommendations themselves and not an agreement to uphold them.

You can find more background information about the process for developing the framework outlined in the press release on the Recommendation on the ethics of artificial intelligence webpage. I was curious as to the composition of the Ad Hoc Expert Group (AHEG) for the Recommendation; it had varied representation from every continent. (FYI, the US and Mexico represented North America.)

East/West collaboration on scholarship and imagination about humanity’s long-term future—six new fellows at Berggruen Research Center at Peking University

According to a January 4, 2022 Berggruen Institute announcement (also received via email), they have appointed a new crop of fellows for their research center at Peking University,

The Berggruen Institute has announced six scientists and philosophers to serve as Fellows at the Berggruen Research Center at Peking University in Beijing, China. These eminent scholars will work together across disciplines to explore how the great transformations of our time may shift human experience and self-understanding in the decades and centuries to come.

The new Fellows are Chenjian Li, University Chair Professor at Peking University; Xianglong Zhang, professor of philosophy at Peking University; Xiaoli Liu, professor of philosophy at Renmin University of China; Jianqiao Ge, lecturer at the Academy for Advanced Interdisciplinary Studies (AAIS) at Peking University; Xiaoping Chen, Director of the Robotics Laboratory at the University of Science and Technology of China; and Haidan Chen, associate professor of medical ethics and law at the School of Health Humanities at Peking University.

“Amid the pandemic, climate change, and the rest of the severe challenges of today, our Fellows are surmounting linguistic and cultural barriers to imagine positive futures for all people,” said Bing Song, Director of the China Center and Vice President of the Berggruen Institute. “Dialogue and shared understanding are crucial if we are to understand what today’s breakthroughs in science and technology really mean for the human community and the planet we all share.”

The Fellows will investigate deep questions raised by new understandings and capabilities in science and technology, exploring their implications for philosophy and other areas of study. Chenjian Li is considering the philosophical and ethical questions raised by gene editing technology. Meanwhile, Haidan Chen is exploring the social implications of brain/computer interface technologies in China, while Xiaoli Liu is studying philosophical issues arising from the intersections among psychology, neuroscience, artificial intelligence, and art.

Jianqiao Ge’s project considers the impact of artificial intelligence on the human brain, given the relative recency of its evolution into current form. Xianglong Zhang’s work explores the interplay between literary culture and the development of technology. Finally, Xiaoping Chen is developing a new concept for describing innovation that draws from Daoist, Confucianist, and ancient Greek philosophical traditions.

Fellows at the China Center meet monthly with the Institute’s Los Angeles-based Fellows. These fora provide an opportunity for all Fellows to share and discuss their work. Through this cross-cultural dialogue, the Institute is helping to ensure a continued high-level exchange of ideas among China, the United States, and the rest of the world about some of the deepest and most fundamental questions humanity faces today.

“Changes in our capability and understanding of the physical world affect all of humanity, and questions about their implications must be pondered at a cross-cultural level,” said Bing. “Through multidisciplinary dialogue that crosses the gulf between East and West, our Fellows are pioneering new thought about what it means to be human.”

Haidan Chen is associate professor of medical ethics and law at the School of Health Humanities at Peking University. She was a visiting postgraduate researcher at the Institute for the Study of Science Technology and Innovation (ISSTI), the University of Edinburgh; a visiting scholar at the Brocher Foundation, Switzerland; and a Fulbright visiting scholar at the Center for Biomedical Ethics, Stanford University. Her research interests embrace the ethical, legal, and social implications (ELSI) of genetics and genomics, and the governance of emerging technologies, in particular stem cells, biobanks, precision medicine, and brain science. Her publications appear in Social Science & Medicine, Bioethics, and other journals.

Xiaoping Chen is the director of the Robotics Laboratory at the University of Science and Technology of China. He also currently serves as the director of the Robot Technical Standard Innovation Base, an executive member of the Global AI Council, Chair of the Chinese RoboCup Committee, and a member of the International RoboCup Federation’s Board of Trustees. He has received the USTC’s Distinguished Research Presidential Award and won Best Paper at IEEE ROBIO 2016. His projects have won the IJCAI’s Best Autonomous Robot and Best General-Purpose Robot awards as well as twelve world championships at RoboCup. He proposed an intelligent technology pathway for robots based on Open Knowledge and the Rong-Cha principle, which has been implemented and tested in long-term research on the KeJia and JiaJia intelligent robot systems.

Jianqiao Ge is a lecturer at the Academy for Advanced Interdisciplinary Studies (AAIS) at Peking University. Before that, she was a postdoctoral fellow at the University of Chicago and the Principal Investigator / Co-Investigator of more than 10 research grants supported by the Ministry of Science and Technology of China, the National Natural Science Foundation of China, and the Beijing Municipal Science & Technology Commission. She has published more than 20 peer-reviewed articles in leading academic journals such as PNAS and the Journal of Neuroscience, and has been awarded two national patents. In 2008, by scanning the human brain with functional MRI, Ge and her collaborator were among the first to confirm that the human brain engages distinct neurocognitive strategies to comprehend human intelligence and artificial intelligence. Ge received her Ph.D. in psychology, a B.S. in physics, a double B.S. in mathematics and applied mathematics, and a double B.S. in economics from Peking University.

Chenjian Li is the University Chair Professor of Peking University. He also serves on the China Advisory Board of Eli Lilly and Company, the China Advisory Board of Cornell University, and the Rhodes Scholar Selection Committee. He is an alumnus of Peking University’s Biology Department, Peking Union Medical College, and Purdue University. He was formerly Vice Provost of Peking University, Executive Dean of Yuanpei College, and Associate Dean of the School of Life Sciences at Peking University. Prior to his return to China, he was an associate professor at Weill Medical College of Cornell University and the Aidekman Endowed Chair of Neurology at Mount Sinai School of Medicine. Dr. Li’s academic research focuses on the molecular and cellular mechanisms of neurological diseases, cancer drug development, and gene-editing and its philosophical and ethical considerations. Li also writes as a public intellectual on science and humanity, and his Chinese translation of Richard Feynman’s book What Do You Care What Other People Think? received the 2001 National Publisher’s Book Award.

Xiaoli Liu is professor of philosophy at Renmin University. She is also Director of the Chinese Society of Philosophy of Science. Her primary research interests are philosophy of mathematics, philosophy of science, and philosophy of cognitive science. Her main works are “Life of Reason: A Study of Gödel’s Thought,” “Challenges of Cognitive Science to Contemporary Philosophy,” and “Philosophical Issues in the Frontiers of Cognitive Science.” She edited “Symphony of Mind and Machine” and the book series “Mind and Cognition.” In 2003, she co-founded the “Mind and Machine workshop” with interdisciplinary scholars; it has held 18 consecutive annual meetings. Liu received her Ph.D. from Peking University and was a senior visiting scholar at Harvard University.

Xianglong Zhang is a professor of philosophy at Peking University. His research areas include Confucian philosophy, phenomenology, and Western and Eastern comparative philosophy. His major works (in Chinese except where noted) include: Heidegger’s Thought and Chinese Tao of Heaven; Biography of Heidegger; From Phenomenology to Confucius; The Exposition and Comments of Contemporary Western Philosophy; The Exposition and Comments of Classic Western Philosophy; Thinking to Take Refuge: The Chinese Ancient Philosophies in the Globalization; Lectures on the History of Confucian Philosophy (four volumes); German Philosophy, German Culture and Chinese Philosophical Thinking; and Home and Filial Piety: From the View between the Chinese and the Western.

About the Berggruen China Center
Breakthroughs in artificial intelligence and life science have led to the fourth scientific and technological revolution. The Berggruen China Center is a hub for East-West research and dialogue dedicated to the cross-cultural and interdisciplinary study of the transformations affecting humanity. Intellectual themes for research programs are focused on frontier sciences, technologies, and philosophy, as well as issues involving digital governance and globalization.

About the Berggruen Institute:
The Berggruen Institute’s mission is to develop foundational ideas and shape political, economic, and social institutions for the 21st century. Providing critical analysis using an outwardly expansive and purposeful network, we bring together some of the best minds and most authoritative voices from across cultural and political boundaries to explore fundamental questions of our time. Our objective is enduring impact on the progress and direction of societies around the world. To date, projects inaugurated at the Berggruen Institute have helped develop a youth jobs plan for Europe, fostered a more open and constructive dialogue between Chinese leadership and the West, strengthened the ballot initiative process in California, and launched Noema, a new publication that brings thought leaders from around the world together to share ideas. In addition, the Berggruen Prize, a $1 million award, is conferred annually by an independent jury to a thinker whose ideas are shaping human self-understanding to advance humankind.

You can find out more about the Berggruen China Center here and you can access a list along with biographies of all the Berggruen Institute fellows here.

Getting ready

I look forward to hearing about the projects from these thinkers.

Gene editing and ethics

I may have to reread some books in anticipation of Chenjian Li’s philosophical work and ethical considerations of gene editing technology. I wonder if there’ll be any reference to the He Jiankui affair.

(Briefly, for those who may not be familiar with the situation: He claimed to be the first to gene-edit babies. In November 2018, news about the twins, Lulu and Nana, was a sensation, and He was roundly criticized for his work. I have not seen any information about how many babies were gene edited for He’s research; there could be as many as six. My July 28, 2020 posting provided an update. I haven’t stumbled across anything substantive since then.)

There are two books I recommend should you be interested in gene editing, as told through the lens of the He Jiankui affair. If you can, read both as that will give you a more complete picture.

In no particular order: Kevin Davies’ 2020 book, “Editing Humanity: The CRISPR Revolution and the New Era of Genome Editing,” provides an extensive and accessible look at the science, the politics of scientific research, and some of the pressures on scientists of all countries; it is an excellent introduction from an insider. Here’s more from Davies’ biographical sketch,

Kevin Davies is the executive editor of The CRISPR Journal and the founding editor of Nature Genetics. He holds an MA in biochemistry from the University of Oxford and a PhD in molecular genetics from the University of London. He is the author of Cracking the Genome and The $1,000 Genome, and co-authored a new edition of DNA: The Story of the Genetic Revolution with Nobel Laureate James D. Watson and Andrew Berry. …

The other book is “The Mutant Project; Inside the Global Race to Genetically Modify Humans” (2020) by Eben Kirksey, an anthropologist who has an undergraduate degree in one of the sciences. He too provides scientific grounding, but his focus is on the cultural and personal underpinnings of the He Jiankui affair, on the culture of scientific research irrespective of where it’s practiced, and on the culture associated with the DIY (do-it-yourself) biology community. Here’s more from Kirksey’s biographical sketch,

EBEN KIRKSEY is an American anthropologist and Member of the Institute for Advanced Study in Princeton, New Jersey. He has been published in Wired, The Atlantic, The Guardian and The Sunday Times. He is sought out as an expert on science in society by the Associated Press, The Wall Street Journal, The New York Times, Democracy Now, Time and the BBC, among other media outlets. He speaks widely at the world’s leading academic institutions including Oxford, Yale, Columbia, UCLA, and the International Summit of Human Genome Editing, plus music festivals, art exhibits, and community events. Professor Kirksey holds a long-term position at Deakin University in Melbourne, Australia.

Brain/computer interfaces (BCI)

I’m happy to see that Haidan Chen will be exploring the social implications of brain/computer interface technologies in China. I haven’t seen much being done here in Canada, but my December 23, 2021 posting, Your cyborg future (brain-computer interface) is closer than you think, highlights work being done at Imperial College London (ICL),

“For some of these patients, these devices become such an integrated part of themselves that they refuse to have them removed at the end of the clinical trial,” said Rylie Green, one of the authors. “It has become increasingly evident that neurotechnologies have the potential to profoundly shape our own human experience and sense of self.”

You might also find my September 17, 2020 posting has some useful information. Check under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead for another story about attachment to one’s brain implant and also the “Finally” subhead for more reading suggestions.

Artificial intelligence (AI), art, and the brain

I’ve lumped together three of the thinkers, Xiaoli Liu, Jianqiao Ge and Xianglong Zhang, as there is some overlap (in my mind, if nowhere else),

  • Liu’s work on philosophical issues as seen in the intersections of psychology, neuroscience, artificial intelligence, and art
  • Ge’s work on the evolution of the brain and the impact that artificial intelligence may have on it
  • Zhang’s work on the relationship between literary culture and the development of technology

A December 3, 2021 posting, True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read), is both a review of a recent episode of the Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, and a dive into a number of issues as can be seen under subheads such as “AI and Creativity,” “Kazuo Ishiguro?” and “Evolution.”

You may also want to check out my December 27, 2021 posting, Ai-Da (robot artist) writes and performs poem honouring Dante’s 700th anniversary, for an eye-opening experience. If nothing else, just watch the embedded video.

This suggestion relates most closely to Ge’s and Zhang’s work. If you haven’t already come across it, there’s Walter J. Ong’s 1982 book, “Orality and Literacy: The Technologizing of the Word.” From the introductory page of the 2002 edition (PDF),

This classic work explores the vast differences between oral and literate cultures and offers a brilliantly lucid account of the intellectual, literary and social effects of writing, print and electronic technology. In the course of his study, Walter J. Ong offers fascinating insights into oral genres across the globe and through time and examines the rise of abstract philosophical and scientific thinking. He considers the impact of orality-literacy studies not only on literary criticism and theory but on our very understanding of what it is to be a human being, conscious of self and other.

In 2013, a 30th anniversary edition of the book was released and is still in print.

Philosophical traditions

I’m very excited to learn more about Xiaoping Chen’s work describing innovation that draws from Daoist, Confucianist, and ancient Greek philosophical traditions.

Should any of my readers have suggestions for introductory readings on these philosophical traditions, please do use the Comments option for this blog. In fact, if you have suggestions for other readings on these topics, I would be very happy to learn of them.

Congratulations to the six Fellows at the Berggruen Research Center at Peking University in Beijing, China. I look forward to reading articles about your work in the Berggruen Institute’s Noema magazine and, possibly, attending your online events.

SFU’s Philippe Pasquier speaks at “The rise of Creative AI and its ethics” online event on Tuesday, January 11, 2022 at 6 am PST

Simon Fraser University’s (SFU) Metacreation Lab for Creative AI (artificial intelligence) in Vancouver, Canada, has just sent me (via email) a January 2022 newsletter, which you can find here. There are two items I found of special interest.

Max Planck Centre for Humans and Machines Seminars

From the January 2022 newsletter,

Max Planck Institute Seminar – The rise of Creative AI & its ethics
January 11, 2022 at 15:00 pm [sic] CET | 6:00 am PST

Next Monday [sic], Philippe Pasquier, director of the Metacreation Lab, will be providing a seminar titled “The rise of Creative AI & its ethics” [Tuesday, January 11, 2022] at the Max Planck Institute’s Centre for Humans and Machine [sic].

The Centre for Humans and Machines invites interested attendees to our public seminars, which feature scientists from our institute and experts from all over the world. Their seminars usually take 1 hour and provide an opportunity to meet the speaker afterwards.

The seminar is openly accessible to the public via Webex Access, and will be a great opportunity to connect with colleagues and friends of the Lab on European and East Coast time. For more information and the link, head to the Centre for Humans and Machines’ Seminars page linked below.

Max Planck Institute – Upcoming Events

The Centre’s seminar description offers an abstract for the talk and a profile of Philippe Pasquier,

Creative AI is the subfield of artificial intelligence concerned with the partial or complete automation of creative tasks. In turn, creative tasks are those for which the notion of optimality is ill-defined. Unlike car driving, chess moves, Jeopardy answers or literal translations, creative tasks are more subjective in nature. Creative AI approaches have been proposed and evaluated in virtually every creative domain: design, visual art, music, poetry, cooking, … These algorithms most often perform at human-competitive or superhuman levels for their precise task. Two main uses of these algorithms have emerged, with implications for workflows reminiscent of the industrial revolution:

– Augmentation (a.k.a. computer-assisted creativity or co-creativity): a human operator interacts with the algorithm, often in the context of already existing creative software.

– Automation (computational creativity): the creative task is performed entirely by the algorithms without human intervention in the generation process.

Both usages will have deep implications for education and work in creative fields. Away from the fear of strong – sentient – AI taking over the world: What are the implications of these ongoing developments for students, educators and professionals? How will Creative AI transform the way we create, as well as what we create?

Philippe Pasquier is a professor at Simon Fraser University’s School for Interactive Arts and Technology, where he has directed the Metacreation Lab for Creative AI since 2008. Philippe leads a research-creation program centred around generative systems for creative tasks. As such, he is a scientist specialized in artificial intelligence, a multidisciplinary media artist, an educator, and a community builder. His contributions span theoretical research on generative systems, computational creativity, multi-agent systems, machine learning, affective computing, and evaluation methodologies. This work is applied in the creative software industry as well as through artistic practice in computer music, interactive and generative art.
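Since the abstract contrasts augmentation and automation of creative tasks, a toy example may help ground the idea of a “generative system.” What follows is my own deliberately tiny sketch, not Metacreation Lab code: a first-order Markov chain that learns note-to-note transitions from a short melody and then improvises a new one. The note names and training melody are invented for illustration.

```python
# A toy generative system (illustration only, not Metacreation Lab code):
# a first-order Markov chain over note names. It learns which note tends
# to follow which in a training melody, then random-walks those
# transitions to "improvise" a new melody.
import random
from collections import defaultdict

def train(melody):
    """Record, for each note, the notes observed to follow it."""
    transitions = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, length=12):
    """Random-walk the transition table to produce a new melody."""
    note, output = start, [start]
    for _ in range(length - 1):
        choices = transitions[note]
        note = random.choice(choices) if choices else start  # restart on dead ends
        output.append(note)
    return output

# An invented training melody.
melody = ["C", "D", "E", "C", "C", "D", "E", "C", "E", "F", "G", "E", "F", "G"]
print(" ".join(generate(train(melody), start="C")))
```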

Interpreting soundscapes

Folks at the Metacreation Lab have made available an interactive search engine for sounds, from the January 2022 newsletter,

Audio Metaphor is an interactive search engine that transforms users’ queries into soundscapes that interpret them. Using state-of-the-art algorithms for sound retrieval, segmentation, and background and foreground classification, AuMe offers a way to explore the vast open source library of sounds available on the freesound.org online community through natural language and its semantic, symbolic, and metaphorical expressions.

We’re excited to see Audio Metaphor included among many other innovative projects on Freesound Labs, a directory of projects, hacks, apps, research and other initiatives that use content from Freesound or use the Freesound API. Take a minute to check out the variety of projects applying creative coding, machine learning, and many other techniques towards the exploration of sound and music creation, generative music, and soundscape composition in diverse forms and interfaces.

Explore AuMe and other FreeSound Labs projects    

The Audio Metaphor (AuMe) webpage on the Metacreation Lab website has a few more details about the search engine,

Audio Metaphor (AuMe) is a research project aimed at designing new methodologies and tools for sound design and composition practices in film, games, and sound art. Through this project, we have identified the processes involved in working with audio recordings in creative environments, addressing these in our research by implementing computational systems that can assist human operations.

We have successfully developed Audio Metaphor for the retrieval of audio file recommendations from natural language texts, and even used phrases generated automatically from Twitter to sonify the current state of Web 2.0. Another significant achievement of the project has been in the segmentation and classification of environmental audio with composition-specific categories, which were then applied in a generative system approach. This allows users to generate sound design simply by entering textual prompts.

As we direct Audio Metaphor further toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation. The project will continue to be instrumental in the design and implementation of new tools for sound designers and artists.

See more information on the website audiometaphor.ca.

As for Freesound Labs, you can find them here.
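For readers who want to experiment themselves, the Freesound API that AuMe builds on is publicly documented. Here’s a minimal sketch of my own (not AuMe’s code) that sends a natural-language query to Freesound’s text-search endpoint; it assumes you’ve registered for a free Freesound API key, and “YOUR_API_KEY” is a placeholder.

```python
# A minimal sketch of querying the Freesound text-search API (my own
# illustration, not AuMe's code). Requires a free API key from
# freesound.org; "YOUR_API_KEY" below is a placeholder.
import requests

def search_sounds(query, api_key, page_size=5):
    """Return (name, url) pairs for sounds matching a text query."""
    response = requests.get(
        "https://freesound.org/apiv2/search/text/",
        params={
            "query": query,
            "fields": "name,url",    # keep the response payload small
            "page_size": page_size,
            "token": api_key,
        },
        timeout=10,
    )
    response.raise_for_status()
    return [(hit["name"], hit["url"]) for hit in response.json()["results"]]

if __name__ == "__main__":
    for name, url in search_sounds("rain on a tin roof", "YOUR_API_KEY"):
        print(name, "->", url)
```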

Your cyborg future (brain-computer interface) is closer than you think

Researchers at the Imperial College London (ICL) are warning that brain-computer interfaces (BCIs) may pose a number of quandaries. (At the end of this post, I have a little look into some of the BCI ethical issues previously explored on this blog.)

Here’s more from a July 20, 2021 American Institute of Physics (AIP) news release (also on EurekAlert),

Surpassing the biological limitations of the brain and using one’s mind to interact with and control external electronic devices may sound like the distant cyborg future, but it could come sooner than we think.

Researchers from Imperial College London conducted a review of modern commercial brain-computer interface (BCI) devices, and they discuss the primary technological limitations and humanitarian concerns of these devices in APL Bioengineering, from AIP Publishing.

The most promising method to achieve real-world BCI applications is through electroencephalography (EEG), a method of monitoring the brain noninvasively through its electrical activity. EEG-based BCIs, or eBCIs, will require a number of technological advances prior to widespread use, but more importantly, they will raise a variety of social, ethical, and legal concerns.

Though it is difficult to understand exactly what a user experiences when operating an external device with an eBCI, a few things are certain. For one, eBCIs can communicate both ways. This allows a person to control electronics, which is particularly useful for medical patients that need help controlling wheelchairs, for example, but also potentially changes the way the brain functions.

“For some of these patients, these devices become such an integrated part of themselves that they refuse to have them removed at the end of the clinical trial,” said Rylie Green, one of the authors. “It has become increasingly evident that neurotechnologies have the potential to profoundly shape our own human experience and sense of self.”

Aside from these potentially bleak mental and physiological side effects, intellectual property concerns are also an issue and may allow private companies that develop eBCI technologies to own users’ neural data.

“This is particularly worrisome, since neural data is often considered to be the most intimate and private information that could be associated with any given user,” said Roberto Portillo-Lara, another author. “This is mainly because, apart from its diagnostic value, EEG data could be used to infer emotional and cognitive states, which would provide unparalleled insight into user intentions, preferences, and emotions.”

As the availability of these platforms increases past medical treatment, disparities in access to these technologies may exacerbate existing social inequalities. For example, eBCIs can be used for cognitive enhancement and cause extreme imbalances in academic or professional successes and educational advancements.

“This bleak panorama brings forth an interesting dilemma about the role of policymakers in BCI commercialization,” Green said. “Should regulatory bodies intervene to prevent misuse and unequal access to neurotech? Should society follow instead the path taken by previous innovations, such as the internet or the smartphone, which originally targeted niche markets but are now commercialized on a global scale?”

She calls on global policymakers, neuroscientists, manufacturers, and potential users of these technologies to begin having these conversations early and collaborate to produce answers to these difficult moral questions.

“Despite the potential risks, the ability to integrate the sophistication of the human mind with the capabilities of modern technology constitutes an unprecedented scientific achievement, which is beginning to challenge our own preconceptions of what it is to be human,” [emphasis mine] Green said.

Caption: A schematic demonstrates the steps required for eBCI operation. EEG sensors acquire electrical signals from the brain, which are processed and outputted to control external devices. Credit: Portillo-Lara et al.

Here’s a link to and a citation for the paper,

Mind the gap: State-of-the-art technologies and applications for EEG-based brain-computer interfaces by Roberto Portillo-Lara, Bogachan Tahirbegi, Christopher A.R. Chapman, Josef A. Goding, and Rylie A. Green. APL Bioengineering, Volume 5, Issue 3, 031507 (2021). DOI: https://doi.org/10.1063/5.0047237 Published Online: 20 July 2021

This paper appears to be open access.
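As an aside for the technically curious: a common first step in EEG-based BCI pipelines is estimating how much power a signal carries in a given frequency band. The sketch below is purely my own illustration (the paper above is a review, not a codebase); it computes 8–12 Hz alpha-band power from a synthetic signal using Welch’s method, and the 250 Hz sampling rate is an assumption.

```python
# Illustrative only (not from the paper): estimating alpha-band (8-12 Hz)
# power from an EEG-like signal with Welch's method. The signal here is
# synthetic: a 10 Hz sine wave plus noise, sampled at an assumed 250 Hz.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low, high):
    """Spectral power of `signal` between `low` and `high` Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)  # two-second windows
    band = (freqs >= low) & (freqs <= high)
    return np.trapz(psd[band], freqs[band])  # integrate the PSD over the band

fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)               # ten seconds of synthetic data
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(f"alpha-band power: {band_power(eeg, fs, 8, 12):.4f}")
```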

Back on September 17, 2020, I published a post about a brain implant that included some material I’d dug up on ethics and brain-computer interfaces; one of the stories struck me most. Here’s the excerpt (which can be found under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead): … From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

This is from another part of the September 17, 2020 posting,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

It wasn’t my first thought when the topic of ethics and BCIs came up, but Gilbert’s research highlights a pressing question: what happens if the company that made your implant and monitors it goes bankrupt?

If you have the time, do take a look at the entire entry under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead of the September 17, 2020 posting or read the July 24, 2019 article by Liam Drew.

Should you have a problem finding the July 20, 2021 American Institute of Physics news release at either of the two links I have previously supplied, there’s a July 20, 2021 copy at SciTechDaily.com.

Follow-up to the Charles M. Lieber affair and US government efforts to prosecute nanotech scientists

Rebecca Trager in a March 5, 2021 news article for Chemistry World highlights support for Charles M. Lieber (Harvard professor and chair of the chemistry department) from his colleagues (Note: Links have been removed),

More than a year after the chair of Harvard University’s chemistry department was arrested for allegedly hiding his receipt of millions of dollars in research funding from China from his university and the US government, dozens of prominent researchers – including many Nobel Prize winners – are coming to Charles Lieber’s defence. They are calling the US Department of Justice (DOJ) case against him ‘unjust’ and urging the agency to drop it.

Following his January 2020 arrest, Lieber was placed on ‘indefinite’ paid administrative leave. The nanoscience pioneer was indicted in June [2020] on charges of making false statements to federal authorities regarding his participation in China’s Thousand Talents plan – the country’s programme to attract, recruit and cultivate high-level scientific talent from abroad. Lieber faces up to five years in prison and a fine of $250,000 (£179,000) if convicted.

A 1 March [2021] open letter, drafted and coordinated by Harvard chemist Stuart Schreiber, co-founder of the Broad Institute, and professor emeritus Elias Corey, winner of the 1990 chemistry Nobel prize, says Lieber became the target of a ‘tragically misguided government campaign’. The letter refers to Lieber as ‘one of the great scientists of his generation’ and warns such government actions are discouraging US scientists from collaborating with peers in other countries, particularly China. The open letter also notes that Lieber is fighting to salvage his reputation while suffering from incurable lymphoma.

Trager goes on to contrast Lieber’s treatment by Harvard with another embattled colleague’s treatment by his home institution (Note: Links have been removed),

Harvard’s treatment of Lieber stands in contrast to how the Massachusetts Institute of Technology (MIT) handled the more recent case of nanotechnologist Gang Chen, who was arrested in January [2021] for failing to report his ties to the Chinese government. MIT agreed to cover his legal fees, and more than 100 faculty members signed a letter to their university’s president that picked apart the DOJ’s allegations against Chen.

I have more details about the case against Lieber (as it was presented at the time) in a January 28, 2020 posting.

As for Professor Chen, I found this MIT statement dated January 14, 2021 (the date of his arrest) and this January 14, 2021 statement from the United States Attorney’s Office for the District of Massachusetts.

US Army researchers’ vision for artificial intelligence and ethics

The US Army peeks into a near future where humans and some forms of artificial intelligence (AI) work together in battle and elsewhere. From a February 3, 2021 U.S. Army Research Laboratory news release (also on EurekAlert but published on February 16, 2021),

The Army of the future will involve humans and autonomous machines working together to accomplish the mission. According to Army researchers, this vision will only succeed if artificial intelligence is perceived to be ethical.

Researchers based at the U.S. Army Combat Capabilities Development Command (now known as DEVCOM) Army Research Laboratory, Northeastern University, and the University of Southern California expanded existing research to cover moral dilemmas and decision making that have not been pursued elsewhere.

This research, featured in Frontiers in Robotics and AI, tackles the fundamental challenge of developing ethical artificial intelligence, which, according to the researchers, is still mostly understudied.

“Autonomous machines, such as automated vehicles and robots, are poised to become pervasive in the Army,” said DEVCOM ARL researcher Dr. Celso de Melo, who is located at the laboratory’s ARL West regional site in Playa Vista, California. “These machines will inevitably face moral dilemmas where they must make decisions that could very well injure humans.”

For example, de Melo said, imagine that an automated vehicle is driving in a tunnel and suddenly five pedestrians cross the street; the vehicle must decide whether to continue moving forward injuring the pedestrians or swerve towards the wall risking the driver.

What should the automated vehicle do in this situation?

Prior work has framed these dilemmas in starkly simple terms, presenting decisions as life and death, de Melo said, neglecting the influence of the risk of injury to the involved parties on the outcome.

“By expanding the study of moral dilemmas to consider the risk profile of the situation, we significantly expanded the space of acceptable solutions for these dilemmas,” de Melo said. “In so doing, we contributed to the development of autonomous technology that abides by acceptable moral norms and thus is more likely to be adopted in practice and accepted by the general public.”

The researchers focused on this gap and presented experimental evidence that, in a moral dilemma with automated vehicles, the likelihood of making the utilitarian choice – which minimizes the overall injury risk to humans and, in this case, saves the pedestrians – was moderated by the perceived risk of injury to pedestrians and drivers.

In their study, participants were found to be more likely to make the utilitarian choice as risk to the driver decreased and risk to the pedestrians increased. Interestingly, however, most were willing to risk the driver (i.e., self-sacrifice), even if the risk to the pedestrians was lower than to the driver.

As a second contribution, the researchers also demonstrated that participants’ moral decisions were influenced by what other decision makers do – for instance, participants were less likely to make the utilitarian choice, if others often chose the non-utilitarian choice.

“This research advances the state-of-the-art in the study of moral dilemmas involving autonomous machines by shedding light on the role of risk on moral choices,” de Melo said. “Further, both of these mechanisms introduce opportunities to develop AI that will be perceived to make decisions that meet moral standards, as well as introduce an opportunity to use technology to shape human behavior and promote a more moral society.”

For the Army, this research is particularly relevant to Army modernization, de Melo said.

“As these vehicles become increasingly autonomous and operate in complex and dynamic environments, they are bound to face situations where injury to humans is unavoidable,” de Melo said. “This research informs how to navigate these moral dilemmas and make decisions that will be perceived as optimal given the circumstances; for example, minimizing overall risk to human life.”

Moving into the future, researchers will study this type of risk-benefit analysis in Army moral dilemmas and articulate the corresponding practical implications for the development of AI systems.

“When deployed at scale, the decisions made by AI systems can be very consequential, in particular for situations involving risk to human life,” de Melo said. “It is critical that AI is able to make decisions that reflect society’s ethical standards to facilitate adoption by the Army and acceptance by the general public. This research contributes to realizing this vision by clarifying some of the key factors shaping these standards. This research is personally important because AI is expected to have considerable impact to the Army of the future; however, what kind of impact will be defined by the values reflected in that AI.”
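To make the risk-based framing concrete, here is a toy illustration of my own, not the authors’ model: each action assigns a probability of injury to each group of people involved, and the “utilitarian” choice is the action that minimizes the total expected number of injuries. All of the numbers are invented.

```python
# A toy version of risk-sensitive moral choice (my illustration, not the
# study's model): each action carries injury probabilities for groups of
# people; the utilitarian action minimizes total expected injuries.
def expected_injuries(action):
    """Sum over groups of (injury probability x number of people at risk)."""
    return sum(prob * count for prob, count in action["risks"])

actions = [
    {"name": "continue ahead", "risks": [(0.7, 5)]},  # five pedestrians, 70% risk each
    {"name": "swerve to wall", "risks": [(0.4, 1)]},  # one driver, 40% risk
]

for action in actions:
    print(f"{action['name']}: expected injuries = {expected_injuries(action):.2f}")

utilitarian = min(actions, key=expected_injuries)
print("utilitarian choice:", utilitarian["name"])
```

Note how the risk profile, not just the body count, drives the outcome: lowering the pedestrians’ injury probability enough would flip the “utilitarian” choice, which is exactly the kind of moderation the study reports in human judgments.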

The last time I had an item on a similar topic from the US Army Research Laboratory (ARL) it was in a March 26, 2018 posting; scroll down to the subhead, US Army (about 50% of the way down),

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

This latest work also revolves around the issue of trust according to the last sentence in the 2021 study paper (link and citation to follow),

… Overall, these questions emphasize the importance of the kind of experimental work presented here, as it has the potential to shed light on people’s preferences about moral behavior in machines, inform the design of autonomous machines people are likely to trust and adopt, and, perhaps, even introduce an opportunity to promote a more moral society. [emphases mine]

From trust to adoption to a more moral society—that’s an interesting progression. For another, more optimistic view of how robots and AI can have positive impacts, there’s my March 29, 2021 posting, Little Lost Robot and humane visions of our technological future.

Here’s a link to and a citation for the paper,

Risk of Injury in Moral Dilemmas With Autonomous Vehicles by Celso M. de Melo, Stacy Marsella, and Jonathan Gratch. Front. Robot. AI [Frontiers in Robotics and AI], 20 January 2021. DOI: https://doi.org/10.3389/frobt.2020.572529

This paper is in an open access journal.