Category Archives: ethics

Implantable living pharmacy

I stumbled across a very interesting US Defense Advanced Research Projects Agency (DARPA) project (from an August 30, 2021 posting on Northwestern University’s Rivnay Lab [a laboratory for organic bioelectronics] blog),

Our lab has received a cooperative agreement with DARPA to develop a wireless, fully implantable ‘living pharmacy’ device that could help regulate human sleep patterns. The project is through the Advanced Acclimation and Protection Tool for Environmental Readiness (ADAPTER) program of DARPA’s Biological Technologies Office (BTO), meant to address physical challenges of travel, such as jetlag and fatigue.

The device, called NTRAIN (Normalizing Timing of Rhythms Across Internal Networks of Circadian Clocks), would control the body’s circadian clock, reducing the time it takes for a person to recover from disrupted sleep/wake cycles by as much as half.

The project spans five institutions: Northwestern, Rice University, Carnegie Mellon University, the University of Minnesota, and Blackrock Neurotech.

Prior to the Aug. 30, 2021 posting, Amanda Morris wrote a May 13, 2021 article for Northwestern NOW (university magazine), which provides more details about the project, Note: A link has been removed,

The first phase of the highly interdisciplinary program will focus on developing the implant. The second phase, contingent on the first, will validate the device. If that milestone is met, then researchers will test the device in human trials, as part of the third phase. The full funding corresponds to $33 million over four-and-a-half years. 

Nicknamed the “living pharmacy,” the device could be a powerful tool for military personnel, who frequently travel across multiple time zones, and shift workers including first responders, who vacillate between overnight and daytime shifts.

Combining synthetic biology with bioelectronics, the team will engineer cells to produce the same peptides that the body makes to regulate sleep cycles, precisely adjusting timing and dose with bioelectronic controls. When the engineered cells are exposed to light, they will generate precisely dosed peptide therapies. 

“This control system allows us to deliver a peptide of interest on demand, directly into the bloodstream,” said Northwestern’s Jonathan Rivnay, principal investigator of the project. “No need to carry drugs, no need to inject therapeutics and — depending on how long we can make the device last — no need to refill the device. It’s like an implantable pharmacy on a chip that never runs out.” 

Beyond controlling circadian rhythms, the researchers believe this technology could be modified to release other types of therapies with precise timing and dosing for potentially treating pain and disease. The DARPA program also will help researchers better understand sleep/wake cycles, in general.

“The experiments carried out in these studies will enable new insights into how internal circadian organization is maintained,” said Turek [Fred W. Turek], who co-leads the sleep team with Vitaterna [Martha Hotz Vitaterna]. “These insights will lead to new therapeutic approaches for sleep disorders as well as many other physiological and mental disorders, including those associated with aging where there is often a spontaneous breakdown in temporal organization.” 

For those who like to dig even deeper, Dieynaba Young’s June 17, 2021 article for Smithsonian Magazine (GetPocket.com link to article) provides greater context and greater satisfaction, Note: Links have been removed,

In 1926, Fritz Kahn completed Man as Industrial Palace, the preeminent lithograph in his five-volume publication The Life of Man. The illustration shows a human body bustling with tiny factory workers. They cheerily operate a brain filled with switchboards, circuits and manometers. Below their feet, an ingenious network of pipes, chutes and conveyer belts make up the blood circulatory system. The image epitomizes a central motif in Kahn’s oeuvre: the parallel between human physiology and manufacturing, or the human body as a marvel of engineering.

An apparatus in the embryonic stage of development at the time of this writing in June of 2021—the so-called “implantable living pharmacy”—could have easily originated in Kahn’s fervid imagination. The concept is being developed by the Defense Advanced Research Projects Agency (DARPA) in conjunction with several universities, notably Northwestern and Rice. Researchers envision a miniaturized factory, tucked inside a microchip, that will manufacture pharmaceuticals from inside the body. The drugs will then be delivered to precise targets at the command of a mobile application. …

The implantable living pharmacy, which is still in the “proof of concept” stage of development, is actually envisioned as two separate devices—a microchip implant and an armband. The implant will contain a layer of living synthetic cells, along with a sensor that measures temperature, a short-range wireless transmitter and a photo detector. The cells are sourced from a human donor and reengineered to perform specific functions. They’ll be mass produced in the lab, and slathered onto a layer of tiny LED lights.

The microchip will be set with a unique identification number and encryption key, then implanted under the skin in an outpatient procedure. The chip will be controlled by a battery-powered hub attached to an armband. That hub will receive signals transmitted from a mobile app.

If a soldier wishes to reset their internal clock, they’ll simply grab their phone, log onto the app and enter their upcoming itinerary—say, a flight departing at 5:30 a.m. from Arlington, Virginia, and arriving 16 hours later at Fort Buckner in Okinawa, Japan. Using short-range wireless communications, the hub will receive the signal and activate the LED lights inside the chip. The lights will shine on the synthetic cells, stimulating them to generate two compounds that are naturally produced in the body. The compounds will be released directly into the bloodstream, heading towards targeted locations, such as a tiny, centrally-located structure in the brain called the suprachiasmatic nucleus (SCN) that serves as master pacemaker of the circadian rhythm. Whatever the target location, the flow of biomolecules will alter the natural clock. When the soldier arrives in Okinawa, their body will be perfectly in tune with local time.

The synthetic cells will be kept isolated from the host’s immune system by a membrane constructed of novel biomaterials, allowing only nutrients and oxygen in and only the compounds out. Should anything go wrong, the host would swallow a pill that would kill the cells inside the chip only, leaving the rest of their body unaffected.
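
Stripped to its logic, Young’s description is a small command chain: phone app to armband hub to implanted LEDs to engineered cells. Here is a minimal, purely speculative Python sketch of that chain; the real NTRAIN design is unpublished, so every class, field, and number below is invented for illustration.

from dataclasses import dataclass

@dataclass
class DosingCommand:
    chip_id: str        # the implant's unique identification number
    led_minutes: int    # how long to light the synthetic cells
    target: str         # e.g., the suprachiasmatic nucleus (SCN)

class ArmbandHub:
    """Battery-powered hub on the armband; relays app commands to the chip."""
    def send(self, cmd: DosingCommand) -> None:
        # In the concept, this hop is an encrypted short-range wireless link.
        print(f"hub -> chip {cmd.chip_id}: LEDs on {cmd.led_minutes} min, "
              f"peptides aimed at {cmd.target}")

def plan_shift(departure_utc_offset: int, arrival_utc_offset: int) -> DosingCommand:
    """Turn an itinerary into a dosing command (toy logic only)."""
    shift_hours = abs(arrival_utc_offset - departure_utc_offset)
    # Invented rule of thumb: five minutes of LED exposure per hour of shift.
    return DosingCommand(chip_id="NTRAIN-0001",
                         led_minutes=shift_hours * 5,
                         target="suprachiasmatic nucleus (SCN)")

# Arlington, Virginia (UTC-5) to Okinawa (UTC+9): a 14-hour clock shift.
ArmbandHub().send(plan_shift(-5, +9))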

If you have the time, I recommend reading Young’s June 17, 2021 Smithsonian Magazine article (GetPocket.com link to article) in its entirety. Young goes on to discuss hacking, malware, ethical/societal issues, and more.

There is an animation of Kahn’s original poster in a June 23, 2011 posting on openculture.com (also found on Vimeo; Der Mensch als Industriepalast [Man as Industrial Palace]).

Credits: Idea & Animation: Henning M. Lederer / led-r-r.net; Sound-Design: David Indge; and original poster art: Fritz Kahn.

FrogHeart’s 2022 comes to an end as 2023 comes into view

I look forward to 2023 and hope it will be as stimulating as 2022 proved to be. Here’s an overview of the year that was on this blog:

Sounds of science

It seems 2022 was the year that science discovered the importance of sound and the possibilities of data sonification. Neither is new, but this year seemed to signal a surge of interest; or maybe I just happened to stumble onto more of the stories than usual.

This is not an exhaustive list; you can check out my ‘Music’ category for more. I have tried to include audio files with the postings, but it all depends on how accessible the researchers have made them.

Aliens on earth: machinic biology and/or biological machinery?

When I first started following stories in 2008 (?) about technology or machinery being integrated with the human body, it was mostly about assistive technologies such as neuroprosthetics. You’ll find most of this year’s material in the ‘Human Enhancement’ category or you can search the tag ‘machine/flesh’.

However, the line between biology and machine became a bit more blurry for me this year. You can see what’s happening in the titles listed below (you may recognize the xenobot story; there was an earlier version of xenobots featured here in 2021):

This was the story that shook me,

Are the aliens going to come from outer space or are we becoming the aliens?

Brains (biological and otherwise), AI, & our latest age of anxiety

As we integrate machines into our bodies, including our brains, there are new issues to consider:

  • Going blind when your neural implant company flirts with bankruptcy (long read) April 5, 2022 posting
  • US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs) September 21, 2022 posting

I hope the US National Academies issues a report on their “Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop” in 2023.

Meanwhile, the race to create brainlike computers continues and I have a number of posts which can be found under the category of ‘neuromorphic engineering’, or you can use the search terms ‘brainlike computing’ and ‘memristors’.

On the artificial intelligence (AI) side of things, I finally broke down and added an ‘artificial intelligence (AI)’ category to this blog sometime between May and August 2021. Previously, I had used the ‘robots’ category as a catchall. There are other stories but these two feature public engagement and policy, respectively (btw, the second is a Canadian Science Policy Centre event),

  • “The “We are AI” series gives citizens a primer on AI” March 23, 2022 posting
  • “Age of AI and Big Data – Impact on Justice, Human Rights and Privacy Zoom event on September 28, 2022 at 12 – 1:30 pm EDT” September 16, 2022 posting

These stories feature problems, which aren’t new but seem to be getting more attention,

While there have been issues over AI, the arts, and creativity previously, this year they sprang into high relief. The list starts with my two-part review of the Vancouver Art Gallery’s AI show; I share most of my concerns in part two. The third post covers intellectual property issues (mostly visual arts but literary arts get a nod too). The fourth post upends the discussion,

  • “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects” July 28, 2022 posting
  • “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations” July 28, 2022 posting
  • “AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK” October 24, 2022 posting
  • Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms? August 30, 2022 posting

Interestingly, most of the concerns seem to be coming from the visual and literary arts communities; I haven’t come across major concerns from the music community. (The curious can check out Vancouver’s Metacreation Lab for Artificial Intelligence [located on a Simon Fraser University campus]. I haven’t seen any cautionary or warning essays there; it’s run by an AI and creativity enthusiast [professor Philippe Pasquier]. The dominant but not sole focus is art, i.e., music and AI.)

There is a ‘new kid on the block’ which has been attracting a lot of attention this month. If you’re curious about the latest and greatest AI anxiety,

  • Peter Csathy’s December 21, 2022 Yahoo News article (originally published in The WRAP) makes this proclamation in the headline “Chat GPT Proves That AI Could Be a Major Threat to Hollywood Creatives – and Not Just Below the Line | PRO Insight”
  • Mouhamad Rachini’s December 15, 2022 article for the Canadian Broadcasting Corporation’s (CBC) online news offers a more generalized overview of the ‘new kid’ along with an embedded CBC Radio file which runs approximately 19 mins. 30 secs. It’s titled “ChatGPT a ‘landmark event’ for AI, but what does it mean for the future of human labour and disinformation?” The chatbot’s developer, OpenAI, has been mentioned here many times including the previously listed July 28, 2022 posting (part two of the VAG review) and the October 24, 2022 posting.

Opposite world (quantum physics in Canada)

Quantum computing made more of an impact here (my blog) than usual. It started in 2021 with the announcement of a National Quantum Strategy in the Canadian federal government budget for that year and gained some momentum in 2022:

  • “Quantum Mechanics & Gravity conference (August 15 – 19, 2022) launches Vancouver (Canada)-based Quantum Gravity Institute and more” July 26, 2022 posting Note: This turned into one of my ‘in depth’ pieces where I comment on the ‘Canadian quantum scene’ and highlight the appointment of an expert panel for the Council of Canadian Academies’ report on Quantum Technologies.
  • “Bank of Canada and Multiverse Computing model complex networks & cryptocurrencies with quantum computing” July 25, 2022 posting
  • “Canada, quantum technology, and a public relations campaign?” December 29, 2022 posting

This one was a bit of a puzzle with regard to placement in this end-of-year review; it’s quantum but it’s also about brainlike computing.

It’s getting hot in here

Fusion energy made some news this year.

There’s a Vancouver area company, General Fusion, highlighted in both postings and the October posting includes an embedded video of Canadian-born rapper Baba Brinkman’s “You Must LENR” [Low Energy Nuclear Reactions, sometimes called Lattice Enabled Nanoscale Reactions, Cold Fusion, or CANR (Chemically Assisted Nuclear Reactions)].

BTW, fusion energy can generate temperatures up to 150 million degrees Celsius.

Ukraine, science, war, and unintended consequences

Here’s what you might expect,

These are the unintended consequences (from a December 26, 2022 essay on The Conversation by Rachel Kyte, Dean of the Fletcher School, Tufts University [h/t December 27, 2022 news item on phys.org]), Note: Links have been removed,

Russian President Vladimir Putin’s war on Ukraine has reverberated through Europe and spread to other countries that have long been dependent on the region for natural gas. But while oil-producing countries and gas lobbyists are arguing for more drilling, global energy investments reflect a quickening transition to cleaner energy. [emphasis mine]

Call it the Putin effect – Russia’s war is speeding up the global shift away from fossil fuels.

In December [2022?], the International Energy Agency [IEA] published two important reports that point to the future of renewable energy.

First, the IEA revised its projection of renewable energy growth upward by 30%. It now expects the world to install as much solar and wind power in the next five years as it installed in the past 50 years.

The second report showed that energy use is becoming more efficient globally, with efficiency increasing by about 2% per year. As energy analyst Kingsmill Bond at the energy research group RMI noted, the two reports together suggest that fossil fuel demand may have peaked. While some low-income countries have been eager for deals to tap their fossil fuel resources, the IEA warns that new fossil fuel production risks becoming stranded, or uneconomic, in the next 20 years.

Kyte’s essay is not all ‘sweetness and light’ but it does provide a little optimism.

Kudos, nanotechnology, culture (pop & otherwise), fun, and a farewell in 2022

This one was a surprise for me,

Sometimes I like to know where the money comes from and I was delighted to learn of the Ărramăt Project funded through the federal government’s New Frontiers in Research Fund (NFRF). Here’s more about the Ărramăt Project from the February 14, 2022 posting,

“The Ărramăt Project is about respecting the inherent dignity and interconnectedness of peoples and Mother Earth, life and livelihood, identity and expression, biodiversity and sustainability, and stewardship and well-being. Arramăt is a word from the Tamasheq language spoken by the Tuareg people of the Sahel and Sahara regions which reflects this holistic worldview.” (Mariam Wallet Aboubakrine)

Over 150 Indigenous organizations, universities, and other partners will work together to highlight the complex problems of biodiversity loss and its implications for health and well-being. The project Team will take a broad approach and be inclusive of many different worldviews and methods for research (i.e., intersectionality, interdisciplinary, transdisciplinary). Activities will occur in 70 different kinds of ecosystems that are also spiritually, culturally, and economically important to Indigenous Peoples.

The project is led by Indigenous scholars and activists …

Kudos to the federal government and all those involved in the Salmon science camps, the Ărramăt Project, and other NFRF projects.

There are many other nanotechnology posts here but this one appeals to my need for something lighter at this point,

  • “Say goodbye to crunchy (ice crystal-laden) in ice cream thanks to cellulose nanocrystals (CNC)” August 22, 2022 posting

The following posts tend to be culture-related, high and/or low but always with a science/nanotechnology edge,

Sadly, it looks like 2022 is the last year that Ada Lovelace Day is to be celebrated.

… this year’s Ada Lovelace Day is the final such event due to lack of financial backing. Suw Charman-Anderson told the BBC [British Broadcasting Corporation] the reason it was now coming to an end was:

You can read more about it here:

In the rearview mirror

A few things that didn’t fit under the previous headings but stood out for me this year. Science podcasts, which were a big feature in 2021, also proliferated in 2022. I think they might have peaked and now (in 2023) we’ll see what survives.

Nanotechnology, the main subject on this blog, continues to be investigated and increasingly integrated into products. You can search the ‘nanotechnology’ category here for posts of interest, something I just tried. It surprises even me (I should know better) how broadly nanotechnology is researched and applied.

If you want a nice tidy list, Hamish Johnston in a December 29, 2022 posting on the Physics World Materials blog has this one, “Materials and nanotechnology: our favourite research in 2022,” Note: Links have been removed,

“Inherited nanobionics” makes its debut

The integration of nanomaterials with living organisms is a hot topic, which is why this research on “inherited nanobionics” is on our list. Ardemis Boghossian at EPFL [École polytechnique fédérale de Lausanne] in Switzerland and colleagues have shown that certain bacteria will take up single-walled carbon nanotubes (SWCNTs). What is more, when the bacteria cells split, the SWCNTs are distributed amongst the daughter cells. The team also found that bacteria containing SWCNTs produce significantly more electricity when illuminated with light than do bacteria without nanotubes. As a result, the technique could be used to grow living solar cells which, as well as generating clean energy, also have a negative carbon footprint when it comes to manufacturing.

Getting back to Canada, I’m finding Saskatchewan featured more prominently here. They do a good job of promoting their science, especially the folks at the Canadian Light Source (CLS), Canada’s synchrotron, in Saskatoon. Canadian live science outreach events seem to be coming back (slowly). Cautious organizers (who have a few dollars to spare) are also enthusiastic about hybrid events which combine online and live outreach.

After what seems like a long pause, I’m stumbling across more international news, e.g. “Nigeria and its nanotechnology research” published December 19, 2022 and “China and nanotechnology” published September 6, 2022. I think there’s also an Iran piece here somewhere.

With that …

Making resolutions in the dark

Hopefully this year I will catch up with the Council of Canadian Academies (CCA) output and finally review a few of their 2021 reports, such as Leaps and Boundaries, a report on artificial intelligence applied to science inquiry, and, perhaps, Powering Discovery, a report on research funding and the Natural Sciences and Engineering Research Council of Canada.

Given what appears to be a renewed campaign to have germline editing (gene editing which affects all of your descendants) approved in Canada, I might even reach back to a late 2020 CCA report, Research to Reality: somatic gene and engineered cell therapies. It’s not the same as germline editing but gene editing exists on a continuum.

For anyone who wants to see the CCA reports for themselves, they can be found here (both in progress and completed).

I’m also going to be paying more attention to how public relations and special interests influence what science is covered and how it’s covered. In doing this 2022 roundup, I noticed that I featured an overview of fusion energy not long before the breakthrough. Indirect influence on this blog?

My post was precipitated by an article by Alex Pasternack in Fast Company. I’m wondering what precipitated Pasternack’s interest in fusion energy since his self-description on the Huffington Post website states this: “… focus on the intersections of science, technology, media, politics, and culture. My writing about those and other topics—transportation, design, media, architecture, environment, psychology, art, music … .”

He might simply have received a press release that stimulated his imagination and/or been approached by a communications specialist or publicist with an idea. There’s a reason why there are so many public relations/media relations jobs and agencies.

Que sera, sera (Whatever will be, will be)

I can confidently predict that 2023 has some surprises in store. I can also confidently predict that the European Union’s big research projects (1 billion euros each in funding for the Graphene Flagship and Human Brain Project over a ten-year period) will sunset in 2023, ten years after they were first announced in 2013, unless the powers that be extend the funding past 2023.

I expect the Canadian quantum community to provide more fodder for me in the form of a 2023 report on Quantum Technologies from the Council of Canadian Academies, if nothing else.

I’ve already featured these 2023 science events but just in case you missed them,

  • 2023 Preview: Bill Nye the Science Guy’s live show and Marvel Avengers S.T.A.T.I.O.N. (Scientific Training And Tactical Intelligence Operative Network) coming to Vancouver (Canada) November 24, 2022 posting
  • September 2023: Auckland, Aotearoa New Zealand set to welcome women in STEM (science, technology, engineering, and mathematics) November 15, 2022 posting

Getting back to this blog, it may not seem like a new year during the first few weeks of 2023 as I have quite the stockpile of draft posts. At this point I have drafts that are dated from June 2022 and expect to be burning through them so as not to fall further behind but will be interspersing them, occasionally, with more current posts.

Most importantly: a big thank you to everyone who drops by and reads (and sometimes even comments) on my posts!!! It’s very much appreciated and on that note: I wish you all the best for 2023.

Kempner Institute for the Study of Natural and Artificial Intelligence launched at Harvard University and University of Manchester pushes the boundaries of smart robotics and AI

Before getting to the two news items, it might be a good idea to note that ‘artificial intelligence (AI)’ and ‘robot’ are not synonyms although they are often used that way, even by people who should know better. (sigh … I do it too)

A robot may or may not be animated with artificial intelligence while artificial intelligence algorithms may be installed on a variety of devices such as a phone or a computer or a thermostat or a … .

It’s something to bear in mind when reading about the two new institutions being launched. Now, on to Harvard University.

Kempner Institute for the Study of Natural and Artificial Intelligence

A September 23, 2022 Chan Zuckerberg Initiative (CZI) news release (also on EurekAlert) announces a symposium to launch a new institute close to Mark Zuckerberg’s heart,

On Thursday [September 22, 2022], leadership from the Chan Zuckerberg Initiative (CZI) and Harvard University celebrated the launch of the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University with a symposium on Harvard’s campus. Speakers included CZI Head of Science Stephen Quake, President of Harvard University Lawrence Bacow, Provost of Harvard University Alan Garber, and Kempner Institute co-directors Bernardo Sabatini and Sham Kakade. The event also included remarks and panels from industry leaders in science, technology, and artificial intelligence, including Bill Gates, Eric Schmidt, Andy Jassy, Daniel Huttenlocher, Sam Altman, Joelle Pineau, Sangeeta Bhatia, and Yann LeCun, among many others.

The Kempner Institute will seek to better understand the basis of intelligence in natural and artificial systems. Its bold premise is that the two fields are intimately interconnected; the next generation of AI will require the same principles that our brains use for fast, flexible natural reasoning, and understanding how our brains compute and reason requires theories developed for AI. The Kempner Institute will study AI systems, including artificial neural networks, to develop both principled theories [emphasis mine] and a practical understanding of how these systems operate and learn. It will also focus on research topics such as learning and memory, perception and sensation, brain function, and metaplasticity. The Institute will recruit and train future generations of researchers from undergraduates and graduate students to post-docs and faculty — actively recruiting from underrepresented groups at every stage of the pipeline — to study intelligence from biological, cognitive, engineering, and computational perspectives.

CZI Co-Founder and Co-CEO Mark Zuckerberg [chairman and chief executive officer of Meta/Facebook] said: “The Kempner Institute will be a one-of-a-kind institute for studying intelligence and hopefully one that helps us discover what intelligent systems really are, how they work, how they break and how to repair them. There’s a lot of exciting implications because once you understand how something is supposed to work and how to repair it once it breaks, you can apply that to the broader mission the Chan Zuckerberg Initiative has to empower scientists to help cure, prevent or manage all diseases.”

CZI Co-Founder and Co-CEO Priscilla Chan said: “Just attending this school meant the world to me. But to stand on this stage and to be able to give something back is truly a dream come true … All of this progress starts with building one fundamental thing: a Kempner community that’s diverse, multi-disciplinary and multi-generational, because incredible ideas can come from anyone. If you bring together people from all different disciplines to look at a problem and give them permission to articulate their perspective, you might start seeing insights or solutions in a whole different light. And those new perspectives lead to new insights and discoveries and generate new questions that can lead an entire field to blossom. So often, that momentum is what breaks the dam and tears down old orthodoxies, unleashing new floods of new ideas that allow us to progress together as a society.”

CZI Head of Science Stephen Quake said: “It’s an honor to partner with Harvard in building this extraordinary new resource for students and science. This is a once-in-a-generation moment for life sciences and medicine. We are living in such an extraordinary and exciting time for science. Many breakthrough discoveries are going to happen not only broadly but right here on this campus and at this institute.”

CZI’s 10-year vision is to advance research and develop technologies to observe, measure, and analyze any biological process within the human body — across spatial scales and in real time. CZI’s goal is to accelerate scientific progress by funding scientific research to advance entire fields; working closely with scientists and engineers at partner institutions like the Chan Zuckerberg Biohub and Chan Zuckerberg Institute for Advanced Biological Imaging to do the research that can’t be done in conventional environments; and building and democratizing next-generation software and hardware tools to drive biological insights and generate more accurate and biologically important sources of data.

President of Harvard University Lawrence Bacow said: “Here we are with this incredible opportunity that Priscilla Chan and Mark Zuckerberg have given us to imagine taking what we know about the brain, neuroscience and how to model intelligence and putting them together in ways that can inform both, and can truly advance our understanding of intelligence from multiple perspectives.”

Kempner Institute Co-Director and Gordon McKay Professor of Computer Science and of Statistics at the Harvard John A. Paulson School of Engineering and Applied Sciences Sham Kakade said: “Now we begin assembling a world-leading research and educational program at Harvard that collectively tries to understand the fundamental mechanisms of intelligence and seeks to apply these new technologies for the benefit of humanity … We hope to create a vibrant environment for all of us to engage in broader research questions … We want to train the next generation of leaders because those leaders will go on to do the next set of great things.”

Kempner Institute Co-Director and the Alice and Rodman W. Moorhead III Professor of Neurobiology at Harvard Medical School Bernardo Sabatini said: “We’re blending research, education and computation to nurture, raise up and enable any scientist who is interested in unraveling the mysteries of the brain. This field is a nascent and interdisciplinary one, so we’re going to have to teach neuroscience to computational biologists, who are going to have to teach machine learning to cognitive scientists and math to biologists. We’re going to do whatever is necessary to help each individual thrive and push the field forward … Success means we develop mathematical theories that explain how our brains compute and learn, and these theories should be specific enough to be testable and useful enough to start to explain diseases like schizophrenia, dyslexia or autism.”

About the Chan Zuckerberg Initiative

The Chan Zuckerberg Initiative was founded in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education, to addressing the needs of our communities. Through collaboration, providing resources and building technology, our mission is to help build a more inclusive, just and healthy future for everyone. For more information, please visit chanzuckerberg.com.

Principled theories, eh. I don’t see a single mention of ethicists or anyone in the social sciences or the humanities or the arts. How are scientists and engineers who have no training in, education in, or even an introduction to ethics, social impacts, or psychology going to manage this?

Mark Zuckerberg’s approach to these issues was something along the lines of “it’s easier to ask for forgiveness than to ask for permission.” I understand there have been changes but it took far too long to recognize the damage, let alone attempt to address it.

If you want to gain a little more insight into the Kempner Institute, there’s a December 7, 2021 article by Alvin Powell announcing the institute for the Harvard Gazette,

The institute will be funded by a $500 million gift from Priscilla Chan and Mark Zuckerberg, which was announced Tuesday [December 7, 2021] by the Chan Zuckerberg Initiative. The gift will support 10 new faculty appointments, significant new computing infrastructure, and resources to allow students to flow between labs in pursuit of ideas and knowledge. The institute’s name honors Zuckerberg’s mother, Karen Kempner Zuckerberg, and her parents — Zuckerberg’s grandparents — Sidney and Gertrude Kempner. Chan and Zuckerberg have given generously to Harvard in the past, supporting students, faculty, and researchers in a range of areas, including around public service, literacy, and cures.

“The Kempner Institute at Harvard represents a remarkable opportunity to bring together approaches and expertise in biological and cognitive science with machine learning, statistics, and computer science to make real progress in understanding how the human brain works to improve how we address disease, create new therapies, and advance our understanding of the human body and the world more broadly,” said President Larry Bacow.

Q&A

Bernardo Sabatini and Sham Kakade [Institute co-directors]

GAZETTE: Tell me about the new institute. What is its main reason for being?

SABATINI: The institute is designed to take from two fields and bring them together, hopefully to create something that’s essentially new, though it’s been tried in a couple of places. Imagine that you have over here cognitive scientists and neurobiologists who study the human brain, including the basic biological mechanisms of intelligence and decision-making. And then over there, you have people from computer science, from mathematics and statistics, who study artificial intelligence systems. Those groups don’t talk to each other very much.

We want to recruit from both populations to fill in the middle and to create a new population, through education, through graduate programs, through funding programs — to grow from academic infancy — those equally versed in neuroscience and in AI systems, who can be leaders for the next generation.

Over the millions of years that vertebrates have been evolving, the human brain has developed specializations that are fundamental for learning and intelligence. We need to know what those are to understand their benefits and to ask whether they can make AI systems better. At the same time, as people who study AI and machine learning (ML) develop mathematical theories as to how those systems work and can say that a network of the following structure with the following properties learns by calculating the following function, then we can take those theories and ask, “Is that actually how the human brain works?”

KAKADE: There’s a question of why now? In the technological space, the advancements are remarkable even to me, as a researcher who knows how these things are being made. I think there’s a long way to go, but many of us feel that this is the right time to study intelligence more broadly. You might also ask: Why is this mission unique and why is this institute different from what’s being done in academia and in industry? Academia is good at putting out ideas. Industry is good at turning ideas into reality. We’re in a bit of a sweet spot. We have the scale to study approaches at a very different level: It’s not going to be just individual labs pursuing their own ideas. We may not be as big as the biggest companies, but we can work on the types of problems that they work on, such as having the compute resources to work on large language models. Industry has exciting research, but the spectrum of ideas produced is very different, because they have different objectives.

For the die-hards, there’s a September 23, 2022 article by Clea Simon in the Harvard Gazette, which updates the 2021 story,

Next, Manchester, England.

Manchester Centre for Robotics and AI

Robotots take a break at a lab at The University of Manchester – picture courtesy of Marketing Manchester [downloaded from https://www.manchester.ac.uk/discover/news/manchester-ai-summit-aims-to-attract-experts-in-advanced-engineering-and-robotics/]

A November 22, 2022 University of Manchester press release (also on EurekAlert) announces both a meeting and a new centre, Note: Links to the Centre have been retained; all others have been removed,

How humans and super smart robots will live and work together in the future will be among the key issues being scrutinised by experts at a new centre of excellence for AI and autonomous machines based at The University of Manchester.

The Manchester Centre for Robotics and AI will be a new specialist multi-disciplinary centre to explore developments in smart robotics through the lens of artificial intelligence (AI) and autonomous machinery.

The University of Manchester has built a modern reputation of excellence in AI and robotics, partly based on the legacy of pioneering thought leadership begun in this field in Manchester by legendary codebreaker Alan Turing.

Manchester’s new multi-disciplinary centre is home to world-leading research from across the academic disciplines – and this group will hold its first conference on Wednesday, Nov 23, at the University’s new engineering and materials facilities.

A highlight will be a joint talk by robotics expert Dr Andy Weightman and theologian Dr Scott Midson which is expected to put a spotlight on ‘posthumanism’, a future world where humans won’t be the only highly intelligent decision-makers.

Dr Weightman, who researches home-based rehabilitation robotics for people with neurological impairment, and Dr Midson, who researches theological and philosophical critiques of posthumanism, will discuss how interdisciplinary research can help with the special challenges of rehabilitation robotics – and, ultimately, what it means to be human “in the face of the promises and challenges of human enhancement through robotic and autonomous machines”.

Other topics that the centre will have a focus on will include applications of robotics in extreme environments.

For the past decade, a specialist Manchester team led by Professor Barry Lennox has designed robots to work safely in nuclear decommissioning sites in the UK. A ground-breaking robot called Lyra that has been developed by Professor Lennox’s team – and recently deployed at the Dounreay site in Scotland, the “world’s deepest nuclear clean up site” – has been listed in Time Magazine’s Top 200 innovations of 2022.

Angelo Cangelosi, Professor of Machine Learning and Robotics at Manchester, said the University offers a world-leading position in the field of autonomous systems – a technology that will be an integral part of our future world. 

Professor Cangelosi, co-Director of Manchester’s Centre for Robotics and AI, said: “We are delighted to host our inaugural conference which will provide a special showcase for our diverse academic expertise to design robotics for a variety of real world applications.

“Our research and innovation team are at the interface between robotics, autonomy and AI – and their knowledge is drawn from across the University’s disciplines, including biological and medical sciences – as well as the humanities and even theology. [emphases mine]

“This rich diversity offers Manchester a distinctive approach to designing robots and autonomous systems for real world applications, especially when combined with our novel use of AI-based knowledge.”

Delegates will have a chance to observe a series of robots and autonomous machines being demoed at the new conference.

The University of Manchester’s Centre for Robotics and AI will aim to: 

  • design control systems with a focus on bio-inspired solutions to mechatronics, eg the use of biomimetic sensors, actuators and robot platforms; 
  • develop new software engineering and AI methodologies for verification in autonomous systems, with the aim to design trustworthy autonomous systems; 
  • research human-robot interaction, with a pioneering focus on the use of brain-inspired approaches [emphasis mine] to robot control, learning and interaction; and 
  • research the ethics and human-centred robotics issues, for the understanding of the impact of the use of robots and autonomous systems with individuals and society. 

In some ways, the Kempner Institute and the Manchester Centre for Robotics and AI have very similar interests, especially where the brain is concerned. What fascinates me is the Manchester Centre’s inclusion of theologian Dr Scott Midson and the discussion (at the meeting) of ‘posthumanism’. The difference is between actual engagement at the symposium (the centre) and mere mention in a news release (the institute).

I wish the best for both institutions.

AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK

Who is an artist? What is an artist? Can everyone be an artist? These are the kinds of questions you can expect with the rise of artificially intelligent artists/collaborators. Of course, these same questions have been asked many times before the rise of AI (artificial intelligence) agents/programs in the field of visual art. Each time the questions are raised is an opportunity to examine our beliefs from a different perspective. And, not to be forgotten, there are questions about money.

The shock

First, the ‘art’,

The winning work. Colorado State Fair 2022. Screengrab from Discord [downloaded from https://www.artnews.com/art-news/news/colorado-state-fair-ai-generated-artwork-controversy-1234638022/]

Shanti Escalante-De Mattei’s September 1, 2022 article for ArtNews.com provides an overview of the latest AI art controversy (Note: A link has been removed),

The debate around AI art went viral once again when a man won first place at the Colorado State Fair’s art competition in the digital category with a work he made using text-to-image AI generator Midjourney.

Twitter user and digital artist Genel Jumalon tweeted out a screenshot from a Discord channel in which user Sincarnate, aka game designer Jason Allen, celebrated his win at the fair. Jumalon wrote, “Someone entered an art competition with an AI-generated piece and won the first prize. Yeah that’s pretty fucking shitty.”

The comments on the post range from despair and anger as artists, both digital and traditional, worry that their livelihoods might be at stake after years of believing that creative work would be safe from AI-driven automation. [emphasis mine]

Rachel Metz’s September 3, 2022 article for CNN provides more details about how the work was generated (Note: Links have been removed),

Jason M. Allen was almost too nervous to enter his first art competition. Now, his award-winning image is sparking controversy about whether art can be generated by a computer, and what, exactly, it means to be an artist.

In August [2022], Allen, a game designer who lives in Pueblo West, Colorado, won first place in the emerging artist division’s “digital arts/digitally-manipulated photography” category at the Colorado State Fair Fine Arts Competition. His winning image, titled “Théâtre D’opéra Spatial” (French for “Space Opera Theater”), was made with Midjourney — an artificial intelligence system that can produce detailed images when fed written prompts. A $300 prize accompanied his win.

Allen’s winning image looks like a bright, surreal cross between a Renaissance and steampunk painting. It’s one of three such images he entered in the competition. In total, 11 people entered 18 pieces of art in the same category in the emerging artist division.

The definition for the category in which Allen competed states that digital art refers to works that use “digital technology as part of the creative or presentation process.” Allen stated that Midjourney was used to create his image when he entered the contest.

The newness of these tools, how they’re used to produce images, and, in some cases, the gatekeeping for access to some of the most powerful ones has led to debates about whether they can truly make art or assist humans in making art.

This came into sharp focus for Allen not long after his win. Allen had posted excitedly about his win on Midjourney’s Discord server on August 25 [2022], along with pictures of his three entries; it went viral on Twitter days later, with many artists angered by Allen’s win because of his use of AI to create the image, as a story by Vice’s Motherboard reported earlier this week.

“This sucks for the exact same reason we don’t let robots participate in the Olympics,” one Twitter user wrote.

“This is the literal definition of ‘pressed a few buttons to make a digital art piece’,” another Tweeted. “AI artwork is the ‘banana taped to the wall’ of the digital world now.”

Yet while Allen didn’t use a paintbrush to create his winning piece, there was plenty of work involved, he said.

“It’s not like you’re just smashing words together and winning competitions,” he said.

You can feed a phrase like “an oil painting of an angry strawberry” to Midjourney and receive several images from the AI system within seconds, but Allen’s process wasn’t that simple. To get the final three images he entered in the competition, he said, took more than 80 hours.

First, he said, he played around with phrasing that led Midjourney to generate images of women in frilly dresses and space helmets — he was trying to mash up Victorian-style costuming with space themes, he said. Over time, with many slight tweaks to his written prompt (such as to adjust lighting and color harmony), he created 900 iterations of what led to his final three images. He cleaned up those three images in Photoshop, such as by giving one of the female figures in his winning image a head with wavy, dark hair after Midjourney had rendered her headless. Then he ran the images through another software program called Gigapixel AI that can improve resolution and had the images printed on canvas at a local print shop.
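
For what it’s worth, the workflow Metz describes is essentially a prompt-refinement loop followed by manual curation and upscaling. Here is a rough Python sketch of that loop; Midjourney was driven through Discord rather than a public API at the time, so generate_image below is a hypothetical stand-in and the prompt wording is paraphrased from the article.

def generate_image(prompt: str) -> bytes:
    # Hypothetical stand-in for a text-to-image service; a real backend
    # would return rendered image bytes for the prompt.
    return f"<image for: {prompt}>".encode()

base_prompt = "Victorian women in frilly dresses wearing space helmets, opera hall"
tweaks = ["dramatic warm lighting", "muted colour harmony", "wide theatrical shot"]

# Iterate: small wording changes, many generations (Allen reports ~900).
candidates = []
for tweak in tweaks:
    prompt = f"{base_prompt}, {tweak}"
    candidates.append((prompt, generate_image(prompt)))

for prompt, image in candidates:
    print(prompt, "->", len(image), "bytes")

# The remaining steps in the article are manual: pick the best few images,
# retouch them in Photoshop, upscale with a separate tool, then print.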

Ars Technica has run a number of articles on the subject of art and AI; Benj Edwards, in an August 31, 2022 article, seems to have been one of the first to comment on Jason Allen’s win (Note 1: Links have been removed; Note 2: Look at how Edwards identifies Jason Allen as an artist),

A synthetic media artist named Jason Allen entered AI-generated artwork into the Colorado State Fair fine arts competition and announced last week that he won first place in the Digital Arts/Digitally Manipulated Photography category, Vice reported Wednesday [August 31, 2022?] based on a viral tweet.

Allen’s victory prompted lively discussions on Twitter, Reddit, and the Midjourney Discord server about the nature of art and what it means to be an artist. Some commenters think human artistry is doomed thanks to AI and that all artists are destined to be replaced by machines. Others think art will evolve and adapt with new technologies that come along, citing synthesizers in music. It’s a hot debate that Wired covered in July [2022].

It’s worth noting that the invention of the camera in the 1800s prompted similar criticism related to the medium of photography, since the camera seemingly did all the work compared to an artist that labored to craft an artwork by hand with a brush or pencil. Some feared that painters would forever become obsolete with the advent of color photography. In some applications, photography replaced more laborious illustration methods (such as engraving), but human fine art painters are still around today.

Benj Edwards in a September 12, 2022 article for Ars Technica examines how some art communities are responding (Note: Links have been removed),

Confronted with an overwhelming amount of artificial-intelligence-generated artwork flooding in, some online art communities have taken dramatic steps to ban or curb its presence on their sites, including Newgrounds, Inkblot Art, and Fur Affinity, according to Andy Baio of Waxy.org.

Baio, who has been following AI art ethics closely on his blog, first noticed the bans and reported about them on Friday [Sept. 9, 2022?]. …

The arrival of widely available image synthesis models such as Midjourney and Stable Diffusion has provoked an intense online battle between artists who view AI-assisted artwork as a form of theft (more on that below) and artists who enthusiastically embrace the new creative tools.

… a quickly evolving debate about how art communities (and art professionals) can adapt to software that can potentially produce unlimited works of beautiful art at a rate that no human working without the tools could match.

A few weeks ago, some artists began discovering their artwork in the Stable Diffusion data set, and they weren’t happy about it. Charlie Warzel wrote a detailed report about these reactions for The Atlantic last week [September 7, 2022]. With battle lines being drawn firmly in the sand and new AI creativity tools coming out steadily, this debate will likely continue for some time to come.

Filthy lucre becomes more prominent in the conversation

Lizzie O’Leary in a September 12, 2022 article for Fast Company presents a transcript of an interview (from the TBD podcast) she conducted with Drew Harwell (tech reporter covering A.I. for the Washington Post) about the ‘Jason Allen’ win,

I’m struck by how quickly these art A.I.s are advancing. DALL-E was released in January of last year and there were some pretty basic images. And then, a year later, DALL-E 2 is using complex, faster methods. Midjourney, the one Jason Allen used, has a feature that allows you to upscale and downscale images. Where is this sudden supply and demand for A.I. art coming from?

You could look back to five years ago when they had these text-to-image generators and the output would be really crude. You could sort of see what the A.I. was trying to get at, but we’ve only really been able to cross that photorealistic uncanny valley in the last year or so. And I think the things that have contributed to that are, one, better data. You’re seeing people invest a lot of money and brainpower and resources into adding more stuff into bigger data sets. We have whole groups that are taking every image they can get on the internet. Billions, billions of images from Pinterest and Amazon and Facebook. You have bigger data sets, so the A.I. is learning more. You also have better computing power, and those are the two ingredients to any good piece of A.I. So now you have A.I. that is not only trained to understand the world a little bit better, but it can now really quickly spit out a very finely detailed generated image.

Is there any way to know, when you look at a piece of A.I. art, what images it referenced to create what it’s doing? Or is it just so vast that you can’t kind of unspool it backward?

When you’re doing an image that’s totally generated out of nowhere, it’s taking bits of information from billions of images. It’s creating it in a much more sophisticated way so that it’s really hard to unspool.

Art generated by A.I. isn’t just a gee-whiz phenomenon, something that wins prizes, or even a fascinating subject for debate—it has valuable commercial uses, too. Some that are a little frightening if you’re, say, a graphic designer.

You’re already starting to see some of these images illustrating news articles, being used as logos for companies, being used in the form of stock art for small businesses and websites. Anything where somebody would’ve gone and paid an illustrator or graphic designer or artist to make something, they can now go to this A.I. and create something in a few seconds that is maybe not perfect, maybe would be beaten by a human in a head-to-head, but is good enough. From a commercial perspective, that’s scary, because we have an industry of people whose whole job is to create images, now running up against A.I.

And the A.I., again, in the last five years, the A.I. has gotten better and better. It’s still not perfect. I don’t think it’ll ever be perfect, whatever that looks like. It processes information in a different, maybe more literal, way than a human. I think human artists will still sort of have the upper hand in being able to imagine things a little more outside of the box. And yet, if you’re just looking for three people in a classroom or a pretty simple logo, you’re going to go to A.I. and you’re going to take potentially a job away from a freelancer whom you would’ve given it to 10 years ago.

I can see a use case here in marketing, in advertising. The A.I. doesn’t need health insurance, it doesn’t need paid vacation days, and I really do wonder about this idea that the A.I. could replace the jobs of visual artists. Do you think that is a legitimate fear, or is that overwrought at this moment?

I think it is a legitimate fear. When something can mirror your skill set, not 100 percent of the way, but enough of the way that it could replace you, that’s an issue. Do these A.I. creators have any kind of moral responsibility to not create it because it could put people out of jobs? I think that’s a debate, but I don’t think they see it that way. They see it like they’re just creating the new generation of digital camera, the new generation of Photoshop. But I think it is worth worrying about because even compared with cameras and Photoshop, the A.I. is a little bit more of the full package and it is so accessible and so hard to match in terms. It’s really going to be up to human artists to find some way to differentiate themselves from the A.I.

This is making me wonder about the humans underneath the data sets that the A.I. is trained on. The criticism is, of course, that these businesses are making money off thousands of artists’ work without their consent or knowledge and it undermines their work. Some people looked at the Stable Diffusion and they didn’t have access to its whole data set, but they found that Thomas Kinkade, the landscape painter, was the most referenced artist in the data set. Is the A.I. just piggybacking? And if it’s not Thomas Kinkade, if it’s someone who’s alive, are they piggybacking on that person’s work without that person getting paid?

Here’s a bit more on the topic of money and art in a September 19, 2022 article by John Herrman for New York Magazine. First, he starts with the literary arts, Note: Links have been removed,

Artificial-intelligence experts are excited about the progress of the past few years. You can tell! They’ve been telling reporters things like “Everything’s in bloom,” “Billions of lives will be affected,” and “I know a person when I talk to it — it doesn’t matter whether they have a brain made of meat in their head.”

We don’t have to take their word for it, though. Recently, AI-powered tools have been making themselves known directly to the public, flooding our social feeds with bizarre and shocking and often very funny machine-generated content. OpenAI’s GPT-3 took simple text prompts — to write a news article about AI or to imagine a rose ceremony from The Bachelor in Middle English — and produced convincing results.

Deepfakes graduated from a looming threat to something an enterprising teenager can put together for a TikTok, and chatbots are occasionally sending their creators into crisis.

More widespread, and probably most evocative of a creative artificial intelligence, is the new crop of image-creation tools, including DALL-E, Imagen, Craiyon, and Midjourney, which all do versions of the same thing. You ask them to render something. Then, with models trained on vast sets of images gathered from around the web and elsewhere, they try — “Bart Simpson in the style of Soviet statuary”; “goldendoodle megafauna in the streets of Chelsea”; “a spaghetti dinner in hell”; “a logo for a carpet-cleaning company, blue and red, round”; “the meaning of life.”

This flood of machine-generated media has already altered the discourse around AI for the better, probably, though it couldn’t have been much worse. In contrast with the glib intra-VC debate about avoiding human enslavement by a future superintelligence, discussions about image-generation technology have been driven by users and artists and focus on labor, intellectual property, AI bias, and the ethics of artistic borrowing and reproduction [emphasis mine]. Early controversies have cut to the chase: Is the guy who entered generated art into a fine-art contest in Colorado (and won!) an asshole? Artists and designers who already feel underappreciated or exploited in their industries — from concept artists in gaming and film and TV to freelance logo designers — are understandably concerned about automation. Some art communities and marketplaces have banned AI-generated images entirely.

Requests are effectively thrown into “a giant swirling whirlpool” of “10,000 graphics cards,” Holz [David Holz, Midjourney founder] said, after which users gradually watch them take shape, gaining sharpness but also changing form as Midjourney refines its work.

This hints at an externality beyond the worlds of art and design. “Almost all the money goes to paying for those machines,” Holz said. New users are given a small number of free image generations before they’re cut off and asked to pay; each request initiates a massive computational task, which means using a lot of electricity.

High compute costs [emphasis mine] — which are largely energy costs — are why other services have been cautious about adding new users. …

Another Midjourney user, Gila von Meissner, is a graphic designer and children’s-book author-illustrator from “the boondocks in north Germany.” Her agent is currently shopping around a book that combines generated images with her own art and characters. Like Pluckebaum [Brian Pluckebaum who works in automotive-semiconductor marketing and designs board games], she brought up the balance of power with publishers. “Picture books pay peanuts,” she said. “Most illustrators struggle financially.” Why not make the work easier and faster? “It’s my character, my edits on the AI backgrounds, my voice, and my story.” A process that took months now takes a week, she said. “Does that make it less original?”

User MoeHong, a graphic designer and typographer for the state of California, has been using Midjourney to make what he called generic illustrations (“backgrounds, people at work, kids at school, etc.”) for government websites, pamphlets, and literature: “I get some of the benefits of using custom art — not that we have a budget for commissions! — without the paying-an-artist part.” He said he has mostly replaced stock art, but he’s not entirely comfortable with the situation. “I have a number of friends who are commercial illustrators, and I’ve been very careful not to show them what I’ve made,” he said. He’s convinced that tools like this could eventually put people in his trade out of work. “But I’m already in my 50s,” he said, “and I hope I’ll be gone by the time that happens.”

Fan club

The last article I’m featuring here is a September 15, 2021 piece by Agnieszka Cichocka for DailyArt, which provides good, brief descriptions of algorithms, generative creative networks, machine learning, artificial neural networks, and more. She is an enthusiast (Note: Links have been removed),

I keep wondering if Leonardo da Vinci, who, in my opinion, was the most forward thinking artist of all time, would have ever imagined that art would one day be created by AI. He worked on numerous ideas and was constantly experimenting, and, although some were failures, he persistently tried new products, helping to move our world forward. Without such people, progress would not be possible. 

Machine Learning

As humans, we learn by acquiring knowledge through observations, senses, experiences, etc. This is similar to computers. Machine learning is a process in which a computer system learns how to perform a task better in two ways—either through exposure to environments that provide punishments and rewards (reinforcement learning) or by training with specific data sets (the system learns automatically and improves from previous experiences). Both methods help the systems improve their accuracy. Machines then use patterns and attempt to make an accurate analysis of things they have not seen before. To give an example, let’s say we feed the computer with thousands of photos of a dog. Consequently, it can learn what a dog looks like based on those. Later, even when faced with a picture it has never seen before, it can tell that the photo shows a dog.

If you want to see some creative machine learning experiments in art, check out ML x ART. This is a website with hundreds of artworks created using AI tools.
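
Cichocka’s dog-photo example maps onto just a few lines of code. Here’s a minimal, hypothetical sketch of that kind of supervised learning in Python (using the scikit-learn library); the ‘photos’ are stand-in feature vectors rather than real images, and every number below is invented for illustration.

```python
# A toy version of Cichocka's example: train on labelled "dog photos,"
# then classify a picture the system has never seen before.
# The features are invented stand-ins; a real system would extract
# them from pixels first.
from sklearn.linear_model import LogisticRegression

X_train = [
    [0.9, 0.8],  # dog
    [0.8, 0.9],  # dog
    [0.2, 0.1],  # not a dog
    [0.1, 0.3],  # not a dog
]
y_train = ["dog", "dog", "not dog", "not dog"]

model = LogisticRegression()
model.fit(X_train, y_train)  # "learns automatically from previous experiences"

# Faced with a picture it has never seen before:
print(model.predict([[0.85, 0.75]]))  # -> ['dog']
```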

Some thoughts

As the saying goes, “a picture is worth a thousand words” and, now, it seems that pictures will be made from words, or so the example of Jason M. Allen feeding prompts to the AI system Midjourney suggests.

I suspect (as others have suggested) that, in the end, artists who use AI systems will be absorbed into the art world in much the same way that photographers, performance artists, conceptual artists, and video artists have been absorbed. There will be some displacements and discomfort as the questions I opened this posting with (Who is an artist? What is an artist? Can everyone be an artist?) are passionately discussed and considered. Underlying many of these questions is the issue of money.

The impact on people’s livelihoods is cheering or concerning depending on how the AI system is being used. Herrman’s September 19, 2022 article highlights two examples that focus on graphic designers: Gila von Meissner, the illustrator and designer, who uses an AI system, along with her own art and characters, to illustrate her children’s books in a faster, more cost-effective way; and MoeHong, a graphic designer for the state of California, who uses an AI system to make ‘customized generic art’ for which the state government doesn’t have to pay.

So far, the focus has been on Midjourney and other AI agents that have been created by developers for use by visual artists and writers. What happens when the visual artist or the writer is the developer? A September 12, 2022 article by Brandon Scott Roye for Cool Hunting approaches the question (Note: Links have been removed),

Mario Klingemann and Sasha Stiles on Semi-Autonomous AI Artists

An artist and engineer at the forefront of generating AI artwork, Mario Klingemann and first-generation Kalmyk-American poet, artist and researcher Sasha Stiles both approach AI from a more human, personal angle. Creators of semi-autonomous systems, both Klingemann and Stiles are the minds behind Botto and Technelegy, respectively. They are both artists in their own right, but their creations are too. Within web3, the identity of the “artist” who creates with visuals and the “writer” who creates with words is enjoying a foundational shift and expansion. Many have fashioned themselves a new title as “engineer.”

Based on their primary identities as an artist and poet, Klingemann and Stiles face the conundrum of becoming engineers who design the tools, rather than artists responsible for the final piece. They now have the ability to remove themselves from influencing inputs and outputs.

If you have time, I suggest reading Roye’s September 12, 2022 article as it provides some very interesting ideas although I don’t necessarily agree with them, e.g., “They now have the ability to remove themselves from influencing inputs and outputs.” Anyone who’s following the ethics discussion around AI knows that biases are built into the algorithms whether we like it or not. As for artists and writers calling themselves ‘engineers’, they may get a little resistance from the engineering community.

As users of open source software, Klingemann and Stiles should not have to worry too much about intellectual property. However, it seems copyright for the actual works and patents for the software could raise some interesting issues especially since money is involved.

In a March 10, 2022 article by Shraddha Nair for Stir World, Klingemann claims to have made over $1M from auctions of Botto’s artworks. It’s not clear to me where Botto obtains its library of images for future use (which may signal a potential problem); Stiles’ Technelegy creates poems from prompts using its library of her poems. (For the curious, I have an August 30, 2022 post “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” which explores some of the issues around patents.)

Who gets the patent and/or the copyright? Assuming you and I are employing machine learning to train our AI agents separately, could there be an argument that, if my version of the AI differs from yours and proves more popular with other content creators/artists, I should own or share the patent to the software and the rights to whatever the software produces?

Getting back to Herrman’s comment about high compute costs and energy, we seem to have an insatiable appetite for energy and that is not only a high cost financially but also environmentally.

Botto exhibition

Here’s more about an exhibition of work by Klingemann’s AI artist, Botto (from an October 6, 2022 announcement received via email),

Mario Klingemann is a pioneering figurehead in the field of AI art, working deep in the field of Machine Learning. Klingemann developed Botto, which is governed by a community of 5,000 people, around an idea of creating an autonomous entity that is able to be creative and co-creative. Inspired by Goethe’s artificial man in Faust, Botto is a genderless AI entity that is guided by an international community and art historical trends. Botto creates 350 art pieces per week that are presented to its community. Members of the community give feedback on these art fragments by voting, expressing their individual preferences on what is aesthetically pleasing to them. Then collectively the votes are used as feedback for Botto’s generative algorithm, dictating what direction Botto should take in its next series of art pieces.

The creative capacity of its algorithm is far beyond the capacities of an individual to combine and find relationships within all the information available to the AI. Botto faces similar issues as a human artist, and it is programmed to self-reflect and ask, “I’ve created this type of work before. What can I show them that’s different this week?”

Once a week, Botto auctions the art fragment with the most votes on SuperRare. All proceeds from the auction go back to the community. The AI artist auctioned its first three pieces, Asymmetrical Liberation, Scene Precede, and Trickery Contagion, for more than $900,000, the most successful AI artist premiere. Today, Botto has produced upwards of 22 artworks and current sales have generated over $2 million in total [emphasis mine].
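
The voting loop described in that announcement (350 fragments a week, community votes, votes fed back to the generative algorithm) is, at its core, a tally-and-feedback step. Botto’s actual pipeline isn’t public, so the following Python sketch is purely hypothetical; every name in it is invented.

```python
# Hypothetical sketch of a Botto-style weekly voting round.
# This is NOT Botto's actual code; all names are invented.
from collections import Counter

def weekly_round(fragments, votes):
    """fragments: list of fragment IDs; votes: one fragment ID per vote cast."""
    counts = Counter(votes)
    # The most-voted fragment goes to auction ...
    auction_piece = max(fragments, key=lambda f: counts[f])
    # ... and the normalized vote shares become a feedback signal that
    # could steer the generator's next series (e.g., by re-weighting
    # whatever prompts or latent directions produced popular fragments).
    feedback = {f: counts[f] / max(len(votes), 1) for f in fragments}
    return auction_piece, feedback

fragments = [f"fragment_{i}" for i in range(350)]  # 350 pieces per week
votes = ["fragment_7", "fragment_7", "fragment_12", "fragment_7"]
piece, feedback = weekly_round(fragments, votes)
print(piece)  # -> fragment_7
```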

From March 2022, when Botto had made $1M, to October 2022, when it had made over $2M: it seems Botto is a very financially successful artist.

Botto: A Whole Year of Co-Creation

This exhibition (October 26 – 30, 2022) is being held in London, England at this location:

The Department Store, Brixton 248 Ferndale Road London SW9 8FR United Kingdom

Enjoy!

US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs)

If you’ve been longing for an opportunity to discover more and to engage in discussion about brain-machine interfaces (BMIs) and their legal, technical, and ethical issues, an opportunity is just a day away. From a September 20, 2022 (US) National Academies of Sciences, Engineering, and Medicine (NAS/NASEM or National Academies) notice (received via email),

Sept. 22-23 [2022] Workshop Explores Technical, Legal, Ethical Issues Raised by Brain-Machine Interfaces [official title: Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop]

Technological developments and advances in understanding of the human brain have led to the development of new Brain-Machine Interface technologies. These include technologies that “read” the brain to record brain activity and decode its meaning, and those that “write” to the brain to manipulate activity in specific brain regions. Right now, most of these interface technologies are medical devices placed inside the brain or other parts of the nervous system – for example, devices that use deep brain stimulation to modulate the tremors of Parkinson’s disease.

But tech companies are developing mass-market wearable devices that focus on understanding emotional states or intended movements, such as devices used to detect fatigue, boost alertness, or enable thoughts to control gaming and other digital-mechanical systems. Such applications raise ethical and legal issues, including risks that thoughts or mood might be accessed or manipulated by companies, governments, or others; risks to privacy; and risks related to a widening of social inequalities.

A virtual workshop [emphasis mine] hosted by the National Academies of Sciences, Engineering, and Medicine on Sept. 22-23 [2022] will explore the present and future of these technologies and the ethical, legal, and regulatory issues they raise.

The workshop will run from 12:15 p.m. to 4:25 p.m. ET on Sept. 22 and from noon to 4:30 p.m. ET on Sept. 23. View agenda and register.

For those who might want a peek at the agenda before downloading it, I have listed the titles for the sessions (from my downloaded Agenda, Note: I’ve reformatted the information; there are no breaks, discussion periods, or Q&As included),

Sept. 22, 2022 Draft Agenda

12:30 pm ET Brain-Machine and Related Neural Interface Technologies: The State and Limitations of the Technology

2:30 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Movement

Sept. 23, 2022 Draft Agenda

12:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Mood and Affect

2:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Thought, Communication, and Memory

4:00 pm ET Concluding Thoughts from Workshop Planning Committee

Regarding terminology, there’s brain-machine interface (BMI), which I think is a more generic term that includes: brain-computer interface (BCI), neural interface and/or neural implant. There are other terms as well, including this one in the title of my September 17, 2020 posting, “Turning brain-controlled wireless electronic prostheses [emphasis mine] into reality plus some ethical points.” I have a more recent April 5, 2022 posting, which is a very deep dive, “Going blind when your neural implant company flirts with bankruptcy (long read).” As you can see, various social issues associated with these devices have been of interest to me.

I’m not sure quite what to make of the session titles. There doesn’t seem to be all that much emphasis on ethical and legal issues but perhaps that’s the role the various speakers will play.

Age of AI and Big Data – Impact on Justice, Human Rights and Privacy Zoom event on September 28, 2022 at 12 – 1:30 pm EDT

In a September 15, 2022 announcement (received via email), the Canadian Science Policy Centre (CSPC) described an event (Age of AI and Big Data – Impact on Justice, Human Rights and Privacy) centered on some of the latest government doings on artificial intelligence and privacy (Bill C-27),

In an increasingly connected world, we share a large amount of our data in our daily lives without our knowledge while browsing online, traveling, shopping, etc. More and more companies are collecting our data and using it to create algorithms or AI. The use of our data against us is becoming more and more common. The algorithms used may often be discriminatory against racial minorities and marginalized people.

As technology moves at a high pace, we have started to incorporate many of these technologies into our daily lives without understanding its consequences. These technologies have enormous impacts on our very own identity and collectively on civil society and democracy. 

Recently, the Canadian Government introduced the Artificial Intelligence and Data Act (AIDA) and Bill C-27 [which includes three acts in total] in parliament regulating the use of AI in our society. In this panel, we will discuss how our AI and Big data is affecting us and its impact on society, and how the new regulations affect us. 

Date: Sep 28 Time: 12:00 pm – 1:30 pm EDT Event Category: Virtual Session

Register Here

For some reason, there was no information about the moderator and panelists, other than their names, titles, and affiliations. Here’s a bit more:

Moderator: Yuan Stevens (from her eponymous website’s About page), Note: Links have been removed,

Yuan (“You-anne”) Stevens (she/they) is a legal and policy expert focused on sociotechnical security and human rights.

She works towards a world where powerful actors—and the systems they build—are held accountable to the public, especially when it comes to marginalized communities. 

She brings years of international experience to her role at the Leadership Lab at Toronto Metropolitan University [formerly Ryerson University], having examined the impacts of technology on vulnerable populations in Canada, the US and Germany. 

Committed to publicly accessible legal and technical knowledge, Yuan has written for popular media outlets such as the Toronto Star and Ottawa Citizen and has been quoted in news stories by the New York Times, the CBC and the Globe & Mail.

Yuan is a research fellow at the Centre for Law, Technology and Society at the University of Ottawa and a research affiliate at Data & Society Research Institute. She previously worked at Harvard University’s Berkman Klein Center for Internet & Society during her studies in law at McGill University.

She has been conducting research on artificial intelligence since 2017 and is currently exploring sociotechnical security as an LL.M candidate at University of Ottawa’s Faculty of Law working under Florian Martin-Bariteau.

Panelist: Brenda McPhail (from her Centre for International Governance Innovation profile page),

Brenda McPhail is the director of the Canadian Civil Liberties Association’s Privacy, Surveillance and Technology Project. Her recent work includes guiding the Canadian Civil Liberties Association’s interventions in key court cases that raise privacy issues, most recently at the Supreme Court of Canada in R v. Marakah and R v. Jones, which focused on privacy rights in sent text messages; research into surveillance of dissent, government information sharing, digital surveillance capabilities and privacy in relation to emergent technologies; and developing resources and presentations to drive public awareness about the importance of privacy as a social good.

Panelist: Nidhi Hegde (from her University of Alberta profile page),

My research has spanned many areas such as resource allocation in networking, smart grids, social information networks, machine learning. Broadly, my interest lies in gaining a fundamental understanding of a given system and the design of robust algorithms.

More recently my research focus has been in privacy in machine learning. I’m interested in understanding how robust machine learning methods are to perturbation, and privacy and fairness constraints, with the goal of designing practical algorithms that achieve privacy and fairness.

Bio

Before joining the University of Alberta, I spent many years in industry research labs. Most recently, I was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where my team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, I spent many years in research labs in Europe working on a variety of interesting and impactful problems. I was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where I led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. I also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, and privacy in recommendations.

Panelist: Benjamin Faveri (from his LinkedIn page),

About

Benjamin Faveri is a Research and Policy Analyst at the Responsible AI Institute (RAII) [headquartered in Austin, Texas]. Currently, he is developing their Responsible AI Certification Program and leading it through Canada’s national accreditation process. Over the last several years, he has worked on numerous certification program-related research projects such as fishery economics and certification programs, police body-worn camera policy certification, and emerging AI certifications and assurance systems. Before his work at RAII, Benjamin completed a Master of Public Policy and Administration at Carleton University, where he was a Canada Graduate Scholar, Ontario Graduate Scholar, Social Innovation Fellow, and Visiting Scholar at UC Davis School of Law. He holds undergraduate degrees in criminology and psychology, finishing both with first class standing. Outside of work, Benjamin reads about how and why certification and private governance have been applied across various industries.

Panelist: Ori Freiman (from his eponymous website’s About page)

I research at the forefront of technological innovation. This website documents some of my academic activities.

My formal background is in Analytic Philosophy, Library and Information Science, and Science & Technology Studies. Until September 22′ [September 2022], I was a Post-Doctoral Fellow at the Ethics of AI Lab, at the University of Toronto’s Centre for Ethics. Before joining the Centre, I submitted my dissertation, about trust in technology, to The Graduate Program in Science, Technology and Society at Bar-Ilan University.

I have also found a number of overviews and bits of commentary about the Canadian federal government’s proposed Bill C-27, which I think of as an omnibus bill as it includes three proposed Acts.

The lawyers are excited but I’m starting with the Responsible AI Institute’s (RAII) response, as one of the panelists (Benjamin Faveri) works for them and it offers a view from a closely neighbouring country. From a June 22, 2022 RAII news release, Note: Links have been removed,

Business Implications of Canada’s Draft AI and Data Act

On June 16 [2022], the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), as part of the broader Digital Charter Implementation Act 2022 (Bill C-27). Shortly thereafter, it also launched the second phase of the Pan-Canadian Artificial Intelligence Strategy.

Both RAII’s Certification Program, which is currently under review by the Standards Council of Canada, and the proposed AIDA legislation adopt the same approach of gauging an AI system’s risk level in context; identifying, assessing, and mitigating risks both pre-deployment and on an ongoing basis; and pursuing objectives such as safety, fairness, consumer protection, and plain-language notification and explanation.

Businesses should monitor the progress of Bill C-27 and align their AI governance processes, policies, and controls to its requirements. Businesses participating in RAII’s Certification Program will already be aware of requirements, such as internal Algorithmic Impact Assessments to gauge risk level and Responsible AI Management Plans for each AI system, which include system documentation, mitigation measures, monitoring requirements, and internal approvals.

The AIDA draft is focused on the impact of any “high-impact system”. Companies would need to assess whether their AI systems are high-impact; identify, assess, and mitigate potential harms and biases flowing from high-impact systems; and “publish on a publicly available website a plain-language description of the system” if making a high-impact system available for use. The government elaborated in a press briefing that it will describe in future regulations the classes of AI systems that may have high impact.

The AIDA draft also outlines clear criminal penalties for entities which, in their AI efforts, possess or use unlawfully obtained personal information or knowingly make available for use an AI system that causes serious harm or defrauds the public and causes substantial economic loss to an individual.

If enacted, AIDA would establish the Office of the AI and Data Commissioner, to support Canada’s Minister of Innovation, Science and Economic Development, with powers to monitor company compliance with the AIDA, to order independent audits of companies’ AI activities, and to register compliance orders with courts. The Commissioner would also help the Minister ensure that standards for AI systems are aligned with international standards.

Apart from being aligned with the approach and requirements of Canada’s proposed AIDA legislation, RAII is also playing a key role in the Standards Council of Canada’s AI accreditation pilot. The second phase of the Pan-Canadian Artificial Intelligence Strategy includes funding for the Standards Council of Canada to “advance the development and adoption of standards and a conformity assessment program related to AI.”

The AIDA’s introduction shows that while Canada is serious about governing AI systems, its approach to AI governance is flexible and designed to evolve as the landscape changes.

Charles Mandel’s June 16, 2022 article for Betakit (Canadian Startup News and Tech Innovation) provides an overview of the government’s overall approach to data privacy, AI, and more,

The federal Liberal government has taken another crack at legislating privacy with the introduction of Bill C-27 in the House of Commons.

Among the bill’s highlights are new protections for minors as well as Canada’s first law regulating the development and deployment of high-impact AI systems.

“It [Bill C-27] will address broader concerns that have been expressed since the tabling of a previous proposal, which did not become law,” a government official told a media technical briefing on the proposed legislation.

François-Philippe Champagne, the Minister of Innovation, Science and Industry, together with David Lametti, the Minister of Justice and Attorney General of Canada, introduced the Digital Charter Implementation Act, 2022. The ministers said Bill C-27 will significantly strengthen Canada’s private sector privacy law, create new rules for the responsible development and use of artificial intelligence (AI), and continue to put in place Canada’s Digital Charter.

The Digital Charter Implementation Act includes three proposed acts: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA)- all of which have implications for Canadian businesses.

Bill C-27 follows an attempt by the Liberals to introduce Bill C-11 in 2020. The latter was the federal government’s attempt to reform privacy laws in Canada, but it failed to gain passage in Parliament after the then-federal privacy commissioner criticized the bill.

The proposed Artificial Intelligence and Data Act is meant to protect Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias.

For businesses developing or implementing AI this means that the act will outline criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

…

An AI and data commissioner will support the minister of innovation, science, and industry in ensuring companies comply with the act. The commissioner will be responsible for monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate.

The commissioner would also be expected to outline clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

Canada already collaborates on AI standards to some extent with a number of countries. Canada, France, and 13 other countries launched an international AI partnership to guide policy development and “responsible adoption” in 2020.

The federal government also has the Pan-Canadian Artificial Intelligence Strategy for which it committed an additional $443.8 million over 10 years in Budget 2021. Ahead of the 2022 budget, Trudeau [Canadian Prime Minister Justin Trudeau] had laid out an extensive list of priorities for the innovation sector, including tasking Champagne with launching or expanding national strategy on AI, among other things.

Within the AI community, companies and groups have been looking at AI ethics for some time. Scotiabank donated $750,000 in funding to the University of Ottawa in 2020 to launch a new initiative to identify solutions to issues related to ethical AI and technology development. And Richard Zemel, co-founder of the Vector Institute [formed as part of the Pan-Canadian Artificial Intelligence Strategy], joined Integrate.AI as an advisor in 2018 to help the startup explore privacy and fairness in AI.

When it comes to the Consumer Privacy Protection Act, the Liberals said the proposed act responds to feedback received on the proposed legislation, and is meant to ensure that the privacy of Canadians will be protected, and that businesses can benefit from clear rules as technology continues to evolve.

“A reformed privacy law will establish special status for the information of minors so that they receive heightened protection under the new law,” a federal government spokesperson told the technical briefing.

…

The act is meant to provide greater controls over Canadians’ personal information, including how it is handled by organizations as well as giving Canadians the freedom to move their information from one organization to another in a secure manner.

The act puts the onus on organizations to develop and maintain a privacy management program that includes the policies, practices and procedures put in place to fulfill obligations under the act. That includes the protection of personal information, how requests for information and complaints are received and dealt with, and the development of materials to explain an organization’s policies and procedures.

The bill also ensures that Canadians can request that their information be deleted from organizations.

The bill provides the privacy commissioner of Canada with broad powers, including the ability to order a company to stop collecting data or using personal information. The commissioner will be able to levy significant fines for non-compliant organizations—with fines of up to five percent of global revenue or $25 million, whichever is greater, for the most serious offences.

The proposed Personal Information and Data Protection Tribunal Act will create a new tribunal to enforce the Consumer Privacy Protection Act.

Although the Liberal government said it engaged with stakeholders for Bill C-27, the Council of Canadian Innovators (CCI) expressed reservations about the process. Nick Schiavo, CCI’s director of federal affairs, said it had concerns over the last version of privacy legislation, and had hoped to present those concerns when the bill was studied at committee, but the previous bill died before that could happen.
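
As an aside, the fine cap quoted above (“up to five percent of global revenue or $25 million, whichever is greater”) is easy to misread. A couple of lines of Python make the arithmetic concrete; the revenue figures are hypothetical.

```python
# Maximum fine under the proposed Consumer Privacy Protection Act:
# 5% of global revenue or $25 million, whichever is greater.
def max_fine(global_revenue):
    return max(0.05 * global_revenue, 25_000_000)

print(max_fine(100_000_000))    # $100M revenue -> 25000000 (the floor applies)
print(max_fine(1_000_000_000))  # $1B revenue   -> 50000000.0
```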

Now the lawyers. Simon Hodgett, Kuljit Bhogal, and Sam Ip have written a June 27, 2022 overview, which highlights the key features from the perspective of Osler, a leading business law firm practising internationally from offices across Canada and in New York.

Maya Medeiros and Jesse Beatson authored a June 23, 2022 article for Norton Rose Fulbright, a global law firm, which notes a few ‘weak’ spots in the proposed legislation,

… While the AIDA is directed to “high-impact” systems and prohibits “material harm,” these and other key terms are not yet defined. Further, the quantum of administrative penalties will be fixed only upon the issuance of regulations. 

Moreover, the AIDA sets out publication requirements but it is unclear if there will be a public register of high-impact AI systems and what level of technical detail about the AI systems will be available to the public. More clarity should come through Bill C-27’s second and third readings in the House of Commons, and subsequent regulations if the bill passes.

The AIDA may have extraterritorial application if components of global AI systems are used, developed, designed or managed in Canada. The European Union recently introduced its Artificial Intelligence Act, which also has some extraterritorial application. Other countries will likely follow. Multi-national companies should develop a coordinated global compliance program.

I have two podcasts from Michael Geist, a lawyer and Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa.

  • June 26, 2022: The Law Bytes Podcast, Episode 132: Ryan Black on the Government’s Latest Attempt at Privacy Law Reform “The privacy reform bill that is really three bills in one: a reform of PIPEDA, a bill to create a new privacy tribunal, and an artificial intelligence regulation bill. What’s in the bill from a privacy perspective and what’s changed? Is this bill any likelier to become law than an earlier bill that failed to even advance to committee hearings? To help sort through the privacy aspects of Bill C-27, Ryan Black, a Vancouver-based partner with the law firm DLA Piper (Canada) …” (about 45 mins.)
  • August 15, 2022: The Law Bytes Podcast, Episode 139: Florian Martin-Bariteau on the Artificial Intelligence and Data Act “Critics argue that regulations are long overdue, but have expressed concern about how much of the substance is left for regulations that are still to be developed. Florian Martin-Bariteau is a friend and colleague at the University of Ottawa, where he holds the University Research Chair in Technology and Society and serves as director of the Centre for Law, Technology and Society. He is currently a fellow at the Harvard’s Berkman Klein Center for Internet and Society …” (about 38 mins.)

Incorporating human cells into computer chips

What are the ethics of incorporating human cells into computer chips? That’s the question that Julian Savulescu (Visiting Professor in biomedical Ethics, University of Melbourne and Uehiro Chair in Practical Ethics, University of Oxford), Christopher Gyngell (Research Fellow in Biomedical Ethics, The University of Melbourne), and Tsutomu Sawai (Associate Professor, Humanities and Social Sciences, Hiroshima University) discuss in a May 24, 2022 essay on The Conversation (Note: A link has been removed),

The year is 2030 and we are at the world’s largest tech conference, CES in Las Vegas. A crowd is gathered to watch a big tech company unveil its new smartphone. The CEO comes to the stage and announces the Nyooro, containing the most powerful processor ever seen in a phone. The Nyooro can perform an astonishing quintillion operations per second, which is a thousand times faster than smartphone models in 2020. It is also ten times more energy-efficient with a battery that lasts for ten days.

A journalist asks: “What technological advance allowed such huge performance gains?” The chief executive replies: “We created a new biological chip using lab-grown human neurons. These biological chips are better than silicon chips because they can change their internal structure, adapting to a user’s usage pattern and leading to huge gains in efficiency.”

Another journalist asks: “Aren’t there ethical concerns about computers that use human brain matter?”

Although the name and scenario are fictional, this is a question we have to confront now. In December 2021, Melbourne-based Cortical Labs grew groups of neurons (brain cells) that were incorporated into a computer chip. The resulting hybrid chip works because both brains and silicon chips share a common language: electricity.

The authors explain their comment that brains and silicon chips share the common language of electricity (Note: Links have been removed),

In silicon computers, electrical signals travel along metal wires that link different components together. In brains, neurons communicate with each other using electric signals across synapses (junctions between nerve cells). In Cortical Labs’ Dishbrain system, neurons are grown on silicon chips. These neurons act like the wires in the system, connecting different components. The major advantage of this approach is that the neurons can change their shape, grow, replicate, or die in response to the demands of the system.

Dishbrain could learn to play the arcade game Pong faster than conventional AI systems. The developers of Dishbrain said: “Nothing like this has ever existed before … It is an entirely new mode of being. A fusion of silicon and neuron.”

Cortical Labs believes its hybrid chips could be the key to the kinds of complex reasoning that today’s computers and AI cannot produce. Another start-up making computers from lab-grown neurons, Koniku, believes their technology will revolutionise several industries including agriculture, healthcare, military technology and airport security. Other types of organic computers are also in the early stages of development.

Ethics issues arise (Note: Links have been removed),

… this raises questions about donor consent. Do people who provide tissue samples for technology research and development know that it might be used to make neural computers? Do they need to know this for their consent to be valid?

People will no doubt be much more willing to donate skin cells for research than their brain tissue. One of the barriers to brain donation is that the brain is seen as linked to your identity. But in a world where we can grow mini-brains from virtually any cell type, does it make sense to draw this type of distinction?

… Consider the scandal regarding Henrietta Lacks, an African-American woman whose cells were used extensively in medical and commercial research without her knowledge and consent.

Henrietta’s cells are still used in applications which generate huge amounts of revenue for pharmaceutical companies (including, recently, to develop COVID vaccines). The Lacks family still has not received any compensation. If a donor’s neurons end up being used in products like the imaginary Nyooro, should they be entitled to some of the profit made from those products?

Another key ethical consideration for neural computers is whether they could develop some form of consciousness and experience pain. Would neural computers be more likely to have experiences than silicon-based ones? …

This May 24, 2022 essay is fascinating and, if you have the time, I encourage you to read it all.

If you’re curious, you can find out about Cortical Labs here, more about Dishbrain in a February 22, 2022 article by Brian Patrick Green for iai (Institute for Art and Ideas) news, and more about Koniku in a May 31, 2018 posting about ‘wetware’ by Alissa Greenberg on Medium.

As for Henrietta Lacks, there’s this from my May 13, 2016 posting,

*HeLa cells are named for Henrietta Lacks who unknowingly donated her immortal cell line to medical research. You can find more about the story on the Oprah Winfrey website, which features an excerpt from the Rebecca Skloot book “The Immortal Life of Henrietta Lacks.” …

I checked; the excerpt is still on the Oprah Winfrey site.

h/t May 24, 2022 Nanowerk Spotlight article

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations

Dear friend,

I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)

Ethics, the natural world, social justice, eeek, and AI

Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.

Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.

My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t, in more ways than one. The de Young Museum in San Francisco also held an AI and art show, “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021). From the exhibitions page,

In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]

Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]

As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)

Social justice

While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.

In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.

Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]

From the de Young Museum’s April 23, 2020 article by Janna Keegan about Stephanie Dinkins’ “Conversations with Bina48” (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,

Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …

The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.

Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”

Eeek

You will find, as you go through the ‘imitation game’, a pod with a screen showing your movements through the rooms in realtime. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,

Project Description

Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.

There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.

‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.

For the curious, there’s a description of the other VAG ‘imitation game’ installations provided by CDM students on the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage.

In recovery from an existential crisis (meditations)

There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence, and its use in and impact on creative visual culture.

I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.

It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.

It’s worth going more than once to the show as there is so much to experience.

Why did they do that?

Dear friend, I’ve already commented on the poor flow through the show, and it’s hard to tell if the curators intended the experience to be disorienting, but it verges on chaos, especially when the exhibition is crowded.

I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.

One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.

By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories; all of them associated with science/technology. This makes for a different kind of show so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.

AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc., which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.

Where were Ai-Da and Dall-E-2 and the others?

Oh friend, I was hoping for a robot. Those Roomba paintbots didn’t do much for me. All they did was lie there on the floor.

To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.

Ai-Da was at the Glastonbury Festival in the UK from June 23-26, 2022. Here’s Ai-Da and her portrait of Billie Eilish (one of the Glastonbury 2022 headliners). [downloaded from https://www.ai-darobot.com/exhibition]

Ai-Da was first featured here in a December 17, 2021 posting about performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.

Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.

She has her own website.

If not Ai-Da, what about Dall-E-2? Aaron Hertzmann’s June 20, 2022 commentary, “Give this AI a few words of description and it produces a stunning image – but is it art?” investigates for Salon (Note: Links have been removed),

DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),

“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”

There are other AI artists; in my August 16, 2019 posting, I had this,

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.
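
For readers wondering what a GAN actually does: two networks are trained against each other, a generator that turns random noise into images and a discriminator that tries to tell those fakes from real training images, so each one’s failures become the other’s training signal. Here’s a heavily simplified, hypothetical PyTorch sketch of that adversarial loop, with tiny made-up “images” as flat vectors; it is emphatically not the code behind Obvious or AICAN.

```python
# Schematic GAN training loop (PyTorch). Toy sizes throughout;
# not the actual code behind Obvious, AICAN, or any real art GAN.
import torch
import torch.nn as nn

latent_dim, img_dim, batch = 16, 64, 32  # tiny "images" as flat vectors

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(256, img_dim) * 2 - 1  # stand-in for an art dataset

for step in range(1000):
    real = real_images[torch.randint(0, 256, (batch,))]
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: label real images 1, generated images 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```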

As might be expected not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),

Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.

As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.

They have not, in actuality, revealed one secret or solved a single mystery.

What they have done is generate feel-good stories about AI.

Take the reports about the Modigliani and Picasso paintings.

These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.

In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.

The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
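
Drimmer’s gloss is accurate: neural style transfer (after Gatys et al.) treats “style” as the correlations between a network’s feature maps, captured in so-called Gram matrices, and then optimizes a new image to reproduce those correlations. Here’s a minimal, hypothetical PyTorch sketch of the core style-loss calculation only; the toy tensors stand in for CNN feature maps.

```python
# The heart of neural style transfer: "style" as feature-map correlations.
# Minimal sketch; a full system would pull these activations from a
# pretrained CNN (e.g., VGG) and also include a content loss.
import torch

def gram_matrix(features):
    """features: (channels, height, width) activations from one layer."""
    c, h, w = features.shape
    flat = features.view(c, h * w)
    return (flat @ flat.t()) / (c * h * w)  # channel-to-channel correlations

def style_loss(generated_feats, style_feats):
    """Mean squared difference between Gram matrices, layer by layer."""
    return sum(
        torch.mean((gram_matrix(g) - gram_matrix(s)) ** 2)
        for g, s in zip(generated_feats, style_feats)
    )

# Toy activations standing in for two images' feature maps:
gen = [torch.rand(64, 32, 32), torch.rand(128, 16, 16)]
sty = [torch.rand(64, 32, 32), torch.rand(128, 16, 16)]
print(style_loss(gen, sty))  # minimized by gradient descent on the image
```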

As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.

Visual culture: seeing into the future

The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, though not mentioned in the write-up, are represented, while theatre and other performing arts are neither mentioned nor represented. That’s not a surprise.

In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.

Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.

Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.

Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.

Learning about robots, automatons, artificial intelligence, and more

I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, led by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.

It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly, and beefing up its website with background information about its current shows would be a good place to start.

Robots, automata, and artificial intelligence

Prior to 1921, what we now call robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe the purely mechanical representations of humans made over 100 years ago, whereas the word ‘robot’ can refer to something either humanlike or purely machine, e.g., a mechanical arm that performs the same function over and over. I have a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,

The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:

The Al-Jazari automatons

The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.

As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
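In modern terms, the pegged drum Barral describes is a step sequencer: the peg layout is the ‘program’, and swapping pegs re-programs the melody. Here’s a minimal simulation of that idea (my own sketch, not anything from Barral’s essay; the peg positions and instrument names are invented for illustration):

```python
# One "programme": positions on a 16-step drum where each peg row fires a note.
peg_pattern = {
    "drum": [0, 4, 8, 12],    # pegs that strike the drum
    "flute": [2, 6, 10, 14],  # pegs that open the flute valve
}

def play(pattern: dict[str, list[int]], steps: int = 16) -> None:
    # Rotate the drum one step at a time and report which notes fire.
    for position in range(steps):
        notes = [name for name, pegs in pattern.items() if position in pegs]
        print(position, notes or "-")

play(peg_pattern)  # exchange peg_pattern for another dict to "re-programme" it
```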

If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC radio news item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘, for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.

AI is often used interchangeably with ‘robot’ but they aren’t the same; not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or may ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, the concept of artificial intelligence is slippery.
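To make ‘AI at its simplest’ concrete, here’s a toy illustration (mine, nothing to do with the show): the same message-flagging job done by a hand-coded rule and by a rule ‘learned’ from examples. Both are just algorithms living invisibly in a CPU; neither needs a body.

```python
def rule_based_flag(message: str) -> bool:
    # A fixed, hand-coded rule: no learning involved.
    return "free money" in message.lower()

def learn_keywords(examples: list[tuple[str, bool]]) -> set[str]:
    # A crude "learner": keep words that only ever appear in flagged messages.
    spam_words: set[str] = set()
    ham_words: set[str] = set()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words - ham_words

examples = [("free money now", True), ("lunch at noon", False)]
learned = learn_keywords(examples)
print(rule_based_flag("FREE MONEY inside!"))                 # True
print(bool(learned & set("claim your free money".split())))  # True
```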

*BBVA is Banco Bilbao Vizcaya Argentaria, a Spanish multinational financial services company, which runs the non-profit project OpenMind (About us page) to disseminate information on robotics and much more.*

You can’t always get what you want

My friend,

I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and the Biennale, I know where I’d be.

Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’); from my May 24, 2022 posting,

I have more about ticket prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”

And, from later in my posting,

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

That last quote brings me back to my comment about theatre and the performing arts not being part of the show. Of course, the curators couldn’t do it all, but a website with my hoped-for background and additional information could have helped to solve the problem.

The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

US-centric

My friend,

I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show with significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)

The Americans, of course, are very important developers in the field of AI but they are not alone, and it would have been nice to have seen something from Asia and/or Africa and/or one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide,” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black communities; for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)

As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.

I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),

Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.

Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning”,[25][26] and have continued to give public talks together.[27][28]

Some of Hinton’s work was started in the US but, since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s February 29, 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.

Then, there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about the visual and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?

You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and an instructor at the Emily Carr University of Art + Design [ECU]) but it’s based on the iconic US sci-fi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential Canadian success story, i.e., leaving Canada and only getting noticed here after success elsewhere.)

Of course, there are the CDM student projects but the projects seem less like an exploration of visual culture than an exploration of technology and industry requirements, from the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage, Note: A link has been removed,

In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].

Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?

Playing well with others

It’s always a mystery to me why the Vancouver cultural scene seems to be composed of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.

For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.

There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme and that was in 2017 when the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramón y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Department of Psychology discussing Cajal’s work.

In fact, where were the science and technology communities for this show?

On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH stands for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.

At this year’s conference, there are at least two sessions that indicate interests similar to the VAG’s. First, there’s Immersive Visualization for Research, Science and Art, which includes AI and machine learning along with other related topics. There’s also Frontiers Talk: Art in the Age of AI: Can Computers Create Art?

This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.

Last time SIGGRAPH was here the organizers seemed interested in outreach and they offered some free events.

In the end

It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.

On July 27, 2022, the VAG held a virtual event with an artist,

Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.

Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,

… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.

Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight the Illiac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in large part dependent on a computer-generated musical process.

It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.

Do go. Do enjoy, my friend.

Coming soon: Responsible AI at the 35th Canadian Conference on Artificial Intelligence (AI) from 30 May to 3 June, 2022

35 years? How have I not stumbled on this conference before? Anyway, I’m glad to have the news (even if I’m late to the party), from the 35th Canadian Conference on Artificial Intelligence homepage,

The 35th Canadian Conference on Artificial Intelligence will take place virtually in Toronto, Ontario, from 30 May to 3 June, 2022. All presentations and posters will be online, with in-person social events to be scheduled in Toronto for those who are able to attend in-person. Viewing rooms and isolated presentation facilities will be available for all visitors to the University of Toronto during the event.

The event is collocated with the Computer and Robot Vision conferences. These events (AI·CRV 2022) will bring together hundreds of leaders in research, industry, and government, as well as Canada’s most accomplished students. They showcase Canada’s ingenuity, innovation and leadership in intelligent systems and advanced information and communications technology. A single registration lets you attend any session in the two conferences, which are scheduled in parallel tracks.

The conference proceedings are published on PubPub, an open-source, privacy-respecting, and open access online platform. They are submitted to be indexed and abstracted in leading indexing services such as DBLP, ACM, Google Scholar.

You can view last year’s [2021] proceedings here: https://caiac.pubpub.org/ai2021.

The 2021 proceedings appear to be open access.

I can’t tell if ‘Responsible AI’ has been included as a specific topic in previous conferences but 2022 is definitely hosting a couple of sessions based on that theme, from the Responsible AI activities webpage,

Keynote speaker: Julia Stoyanovich

New York University

“Building Data Equity Systems”

Equity as a social concept — treating people differently depending on their endowments and needs to provide equality of outcome rather than equality of treatment — lends a unifying vision for ongoing work to operationalize ethical considerations across technology, law, and society.  In my talk I will present a vision for designing, developing, deploying, and overseeing data-intensive systems that consider equity as an essential objective.  I will discuss ongoing technical work, and will place this work into the broader context of policy, education, and public outreach.

Biography: Julia Stoyanovich is an Institute Associate Professor of Computer Science & Engineering at the Tandon School of Engineering, Associate Professor of Data Science at the Center for Data Science, and Director of the Center for Responsible AI at New York University (NYU).  Her research focuses on responsible data management and analysis: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data science lifecycle.  She established the “Data, Responsibly” consortium and served on the New York City Automated Decision Systems Task Force, by appointment from Mayor de Blasio.  Julia developed and has been teaching courses on Responsible Data Science at NYU, and is a co-creator of an award-winning comic book series on this topic.  In addition to data ethics, Julia works on the management and analysis of preference and voting data, and on querying large evolving graphs. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst.  She is a recipient of an NSF CAREER award and a Senior Member of the ACM.

Panel on ethical implications of AI

Panelists

Luke Stark, Faculty of Information and Media Studies, Western University

Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, ON. His work interrogating the historical, social, and ethical impacts of computing and AI technologies has appeared in journals including The Information Society, Social Studies of Science, and New Media & Society, and in popular venues like Slate, The Globe and Mail, and The Boston Globe. Luke was previously a Postdoctoral Researcher in AI ethics at Microsoft Research, and a Postdoctoral Fellow in Sociology at Dartmouth College; he holds a PhD from the Department of Media, Culture, and Communication at New York University, and a BA and MA from the University of Toronto.

Nidhi Hegde, Associate Professor in Computer Science and Amii [Alberta Machine Intelligence Institute] Fellow at the University of Alberta

Nidhi is a Fellow and Canada CIFAR [Canadian Institute for Advanced Research] AI Chair at Amii and an Associate Professor in the Department of Computing Science at the University of Alberta. Before joining UAlberta, she spent many years in industry research labs. Most recently, she was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where her team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, she spent many years in research labs in Europe working on a variety of interesting and impactful problems. She was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where she led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. She also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, privacy, and recommendations. Nidhi is an associate editor of the IEEE/ACM Transactions on Networking, and an editor of the Elsevier Performance Evaluation Journal.

Karina Vold, Assistant Professor, Institute for the History and Philosophy of Science and Technology, University of Toronto

Dr. Karina Vold is an Assistant Professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. She is also a Faculty Affiliate at the U of T Schwartz Reisman Institute for Technology and Society, a Faculty Associate at the U of T Centre for Ethics, and an Associate Fellow at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. Vold specialises in Philosophy of Cognitive Science and Philosophy of Artificial Intelligence, and her recent research has focused on human autonomy, cognitive enhancement, extended cognition, and the risks and ethics of AI.

Elissa Strome, Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR

Elissa is Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR, working with research leaders across the country to implement Canada’s national research strategy in AI.  Elissa completed her PhD in Neuroscience from the University of British Columbia in 2006. Following a post-doc at Lund University, in Sweden, she decided to pursue a career in research strategy, policy and leadership. In 2008, she joined the University of Toronto’s Office of the Vice-President, Research and Innovation and was Director of Strategic Initiatives from 2011 to 2015. In that role, she led a small team dedicated to advancing the University’s strategic research priorities, including international institutional research partnerships, the institutional strategy for prestigious national and international research awards, and the establishment of the SOSCIP [Southern Ontario Smart Computing Innovation Platform] research consortium in 2012. From 2015 to 2017, Elissa was Executive Director of SOSCIP, leading the 17-member industry-academic consortium through a major period of growth and expansion, and establishing SOSCIP as Ontario’s leading platform for collaborative research and development in data science and advanced computing.

Tutorial on AI and the Law

Prof. Maura R. Grossman, University of Waterloo, and

Hon. Paul W. Grimm, United States District Court for the District of Maryland

AI applications are becoming more and more ubiquitous in almost every field of endeavor, and the same is true as to the legal industry. This panel, consisting of an experienced lawyer and computer scientist, and a U.S. federal trial court judge, will discuss how AI is currently being used in the legal profession, what adoption has been like since the introduction of AI to law in about 2009, what legal and ethical issues AI applications have raised in the legal system, and how a sitting trial court judge approaches AI evidence, in particular, the determination of whether to admit that AI evidence or not, when they are a non-expert.

How is AI being used in the legal industry today?

What has the legal industry’s reaction been to legal AI applications?

What are some of the biggest legal and ethical issues implicated by legal and other AI applications?

How does a sitting trial court judge evaluate AI evidence when making a determination of whether to admit that AI evidence or not?

What considerations go into the trial judge’s decision?

What happens if the judge is not an expert in AI?  Do they recuse?

You may recognize the name, Julia Stoyanovich, as she was mentioned here in my March 23, 2022 posting titled, The “We are AI” series gives citizens a primer on AI, a series of peer-to-peer workshops aimed at introducing the basics of AI to the public. There’s also a comic book series associated with it and all of the materials are available for free. It’s all there in the posting.

Getting back to the Responsible AI activities webpage, there’s one more activity and this one seems a little less focused on experts,

Virtual Meet and Greet on Responsible AI across Canada

Given the many activities that are fortunately happening around the responsible and ethical aspects of AI here in Canada, we are organizing an event in conjunction with Canadian AI 2022 this year to become familiar with what everyone is doing and what activities they are engaged in.

It would be wonderful to have a unified community here in Canada around responsible AI so we can support each other and find ways to more effectively collaborate and synergize. We are aiming for a casual, discussion-oriented event rather than talks or formal presentations.

The meet and greet will be hosted by Ebrahim Bagheri, Eleni Stroulia and Graham Taylor. If you are interested in participating, please email Ebrahim Bagheri (bagheri@ryerson.ca).

Thank you to the co-chairs for getting the word out about the Responsible AI topic at the conference,

Responsible AI Co-chairs

Ebrahim Bagheri
Professor
Electrical, Computer, and Biomedical Engineering, Ryerson University
Website

Eleni Stroulia
Professor, Department of Computing Science
Acting Vice Dean, Faculty of Science
Director, AI4Society Signature Area
University of Alberta
Website

The organization which hosts these conferences has an almost palindromic abbreviation: CAIAC, for Canadian Artificial Intelligence Association (CAIA) or Association Intelligence Artificiel Canadien (AIAC). Yes, you do have to read it in both English and French, and the C at one end or the other gets knocked off depending on which language you’re using, which is why it’s only ‘almost’.

The CAIAC is almost 50 years old (under various previous names) and has its website here.

*April 22, 2022 at 1400 hours PT removed ‘the’ from this section of the headline: “… from 30 May to 3 June, 2022.” and removed period from the end.

Going blind when your neural implant company flirts with bankruptcy (long read)

This story got me to thinking about what happens when any kind of implant company (pacemaker, deep brain stimulator, etc.) goes bankrupt or is acquired by another company with a different business model.

As I worked on this piece, more issues were raised and the scope expanded to include prosthetics along with implants, while the focus narrowed to ‘neuro’, as in neural implants and neuroprosthetics. At the same time, I found salient examples for this posting in other medical advances such as gene editing.

In sum, all references to implants and prosthetics are to neural devices and some issues are illustrated with salient examples from other medical advances (specifically, gene editing).

Definitions (for those who find them useful)

The US Food and Drug Administration defines implants and prosthetics,

Medical implants are devices or tissues that are placed inside or on the surface of the body. Many implants are prosthetics, intended to replace missing body parts. Other implants deliver medication, monitor body functions, or provide support to organs and tissues.

As for what constitutes a neural implant/neuroprosthetic, there’s this from Emily Waltz’s January 20, 2020 article (How Do Neural Implants Work? Neural implants are used for deep brain stimulation, vagus nerve stimulation, and mind-controlled prostheses) for the Institute of Electrical and Electronics Engineers (IEEE) Spectrum magazine,

A neural implant, then, is a device—typically an electrode of some kind—that’s inserted into the body, comes into contact with tissues that contain neurons, and interacts with those neurons in some way.

Now, let’s start with the recent near bankruptcy of a retinal implant company.

The company goes bust (more or less)

From a February 25, 2022 Science Friday (a National Public Radio program) posting/audio file, Note: Links have been removed,

Barbara Campbell was walking through a New York City subway station during rush hour when her world abruptly went dark. For four years, Campbell had been using a high-tech implant in her left eye that gave her a crude kind of bionic vision, partially compensating for the genetic disease that had rendered her completely blind in her 30s. “I remember exactly where I was: I was switching from the 6 train to the F train,” Campbell tells IEEE Spectrum. “I was about to go down the stairs, and all of a sudden I heard a little ‘beep, beep, beep’ sound.’”

It wasn’t her phone battery running out. It was her Argus II retinal implant system powering down. The patches of light and dark that she’d been able to see with the implant’s help vanished.

Terry Byland is the only person to have received this kind of implant in both eyes. He got the first-generation Argus I implant, made by the company Second Sight Medical Products, in his right eye in 2004, and the subsequent Argus II implant in his left 11 years later. He helped the company test the technology, spoke to the press movingly about his experiences, and even met Stevie Wonder at a conference. “[I] went from being just a person that was doing the testing to being a spokesman,” he remembers.

Yet in 2020, Byland had to find out secondhand that the company had abandoned the technology and was on the verge of going bankrupt. While his two-implant system is still working, he doesn’t know how long that will be the case. “As long as nothing goes wrong, I’m fine,” he says. “But if something does go wrong with it, well, I’m screwed. Because there’s no way of getting it fixed.”

Science Friday and the IEEE [Institute of Electrical and Electronics Engineers] Spectrum magazine collaborated to produce this story. You’ll find the audio files and the transcript of interviews with the authors and one of the implant patients in this February 25, 2022 Science Friday (a National Public Radio program) posting.

Here’s more from the February 15, 2022 IEEE Spectrum article by Eliza Strickland and Mark Harris,

Ross Doerr, another Second Sight patient, doesn’t mince words: “It is fantastic technology and a lousy company,” he says. He received an implant in one eye in 2019 and remembers seeing the shining lights of Christmas trees that holiday season. He was thrilled to learn in early 2020 that he was eligible for software upgrades that could further improve his vision. Yet in the early months of the COVID-19 pandemic, he heard troubling rumors about the company and called his Second Sight vision-rehab therapist. “She said, ‘Well, funny you should call. We all just got laid off,’ ” he remembers. She said, ‘By the way, you’re not getting your upgrades.’ ”

These three patients, and more than 350 other blind people around the world with Second Sight’s implants in their eyes, find themselves in a world in which the technology that transformed their lives is just another obsolete gadget. One technical hiccup, one broken wire, and they lose their artificial vision, possibly forever. To add injury to insult: A defunct Argus system in the eye could cause medical complications or interfere with procedures such as MRI scans, and it could be painful or expensive to remove.

The writers included some information about what happened to the business, from the February 15, 2022 IEEE Spectrum article, Note: Links have been removed,

After Second Sight discontinued its retinal implant in 2019 and nearly went out of business in 2020, a public offering in June 2021 raised US $57.5 million at $5 per share. The company promised to focus on its ongoing clinical trial of a brain implant, called Orion, that also provides artificial vision. But its stock price plunged to around $1.50, and in February 2022, just before this article was published, the company announced a proposed merger with an early-stage biopharmaceutical company called Nano Precision Medical (NPM). None of Second Sight’s executives will be on the leadership team of the new company, which will focus on developing NPM’s novel implant for drug delivery.

The company’s current leadership declined to be interviewed for this article but did provide an emailed statement prior to the merger announcement. It said, in part: “We are a recognized global leader in neuromodulation devices for blindness and are committed to developing new technologies to treat the broadest population of sight-impaired individuals.”

It’s unclear what Second Sight’s proposed merger means for Argus patients. The day after the merger was announced, Adam Mendelsohn, CEO of Nano Precision Medical, told Spectrum that he doesn’t yet know what contractual obligations the combined company will have to Argus and Orion patients. But, he says, NPM will try to do what’s “right from an ethical perspective.” The past, he added in an email, is “simply not relevant to the new future.”

There may be some alternatives, from the February 15, 2022 IEEE Spectrum article (Note: Links have been removed),

Second Sight may have given up on its retinal implant, but other companies still see a need—and a market—for bionic vision without brain surgery. Paris-based Pixium Vision is conducting European and U.S. feasibility trials to see if its Prima system can help patients with age-related macular degeneration, a much more common condition than retinitis pigmentosa.

Daniel Palanker, a professor of ophthalmology at Stanford University who licensed his technology to Pixium, says the Prima implant is smaller, simpler, and cheaper than the Argus II. But he argues that Prima’s superior image resolution has the potential to make Pixium Vision a success. “If you provide excellent vision, there will be lots of patients,” he tells Spectrum. “If you provide crappy vision, there will be very few.”

Some clinicians involved in the Argus II work are trying to salvage what they can from the technology. Gislin Dagnelie, an associate professor of ophthalmology at Johns Hopkins University School of Medicine, has set up a network of clinicians who are still working with Argus II patients. The researchers are experimenting with a thermal camera to help users see faces, a stereo camera to filter out the background, and AI-powered object recognition. These upgrades are unlikely to result in commercial hardware today but could help future vision prostheses.

The writers have carefully balanced this piece so it is not an outright condemnation of the companies (Second Sight and Nano Precision), from the February 15, 2022 IEEE Spectrum article,

Failure is an inevitable part of innovation. The Argus II was an innovative technology, and progress made by Second Sight may pave the way for other companies that are developing bionic vision systems. But for people considering such an implant in the future, the cautionary tale of Argus patients left in the lurch may make a tough decision even tougher. Should they take a chance on a novel technology? If they do get an implant and find that it helps them navigate the world, should they allow themselves to depend upon it?

Abandoning the Argus II technology—and the people who use it—might have made short-term financial sense for Second Sight, but it’s a decision that could come back to bite the merged company if it does decide to commercialize a brain implant, believes Doerr.

For anyone curious about retinal implant technology (specifically the Argus II), I have a description in a June 30, 2015 posting.

Speculations and hopes for neuroprosthetics

The field of neuroprosthetics is very active. In a February 21, 2022 Nanowerk Spotlight article, Dr Arthur Saniotis and Prof Maciej Henneberg speculate about the possibilities of a neuroprosthetic that may one day merge with neurons,

For over a generation several types of medical neuroprosthetics have been developed, which have improved the lives of thousands of individuals. For instance, cochlear implants have restored functional hearing in individuals with severe hearing impairment.

Further advances in motor neuroprosthetics are attempting to restore motor functions in tetraplegic, limb loss and brain stem stroke paralysis subjects.

Currently, scientists are working on various kinds of brain/machine interfaces [BMI] in order to restore movement and partial sensory function. One such device is the ‘Ipsihand’ that enables movement of a paralyzed hand. The device works by detecting the recipient’s intention in the form of electrical signals, thereby triggering hand movement.

Another recent development is the 12 month BMI gait neurohabilitation program that uses a visual-tactile feedback system in combination with a physical exoskeleton and EEG operated AI actuators while walking. This program has been tried on eight patients with reported improvements in lower limb movement and somatic sensation.

Surgically placed electrode implants have also reduced tremor symptoms in individuals with Parkinson’s disease.

Although neuroprosthetics have provided various benefits they do have their problems. Firstly, electrode implants to the brain are prone to degradation, necessitating new implants after a few years. Secondly, as in any kind of surgery, implanted electrodes can cause post-operative infection and glial scarring. Furthermore, one study showed that the neurobiological efficacy of an implant is dependent on the rate of speed of its insertion.

But what if humans designed a neuroprosthetic, which could bypass the medical glitches of invasive neuroprosthetics? However, instead of connecting devices to neural networks, this neuroprosthetic would directly merge with neurons – a novel step. Such a neuroprosthetic could radically optimize treatments for neurodegenerative disorders and brain injuries, and possibly cognitive enhancement [emphasis mine].

A team of three international scientists has recently designed a nanobased neuroprosthetic, which was published in Frontiers in Neuroscience (“Integration of Nanobots Into Neural Circuits As a Future Therapy for Treating Neurodegenerative Disorders“). [open access paper published in 2018]

An interesting feature of their nanobot neuroprosthetic is that it has been inspired from nature by way of endomyccorhizae – a type of plant/fungus symbiosis, which is over four hundred million years old. During endomyccorhizae, fungi use numerous threadlike projections called mycelium that penetrate plant roots, forming colossal underground networks with nearby root systems. During this process fungi take up vital nutrients while protecting plant roots from infections – a win-win relationship. Consequently, the nano-neuroprosthetic has been named ‘endomyccorhizae ligand interface’, or ‘ELI’ for short.

The Spotlight article goes on to describe how these nanobots might function. As for the possibility of cognitive enhancement, I wonder if that might come to be described as a form of ‘artificial intelligence’.
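As an aside, the control loop behind a device like the ‘Ipsihand’ mentioned in the excerpt (detect an electrical signature of intention, then trigger hand movement) can be sketched in a few lines. This is a deliberately crude toy of my own: the simulated signal, smoothing window, and threshold are all invented, and real devices use far more sophisticated decoding.

```python
import random

THRESHOLD = 0.7   # invented value; real devices are calibrated per user
WINDOW = 5        # number of samples to smooth over

def read_signal() -> float:
    # Stand-in for a normalized electrode reading (0..1).
    return random.random()

def control_loop(n_samples: int = 100) -> None:
    window: list[float] = []
    for _ in range(n_samples):
        window.append(read_signal())
        window = window[-WINDOW:]
        if len(window) == WINDOW and sum(window) / WINDOW > THRESHOLD:
            print("intention detected -> trigger hand actuator")
            window.clear()  # reset after triggering

control_loop()
```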

(Dr Arthur Saniotis and Prof Maciej Henneberg are both from the Department of Anthropology, Ludwik Hirszfeld Institute of Immunology and Experimental Therapy, Polish Academy of Sciences; and Biological Anthropology and Comparative Anatomy Research Unit, Adelaide Medical School, University of Adelaide. Abdul-Rahman Sawalma who’s listed as an author on the 2018 paper is from the Palestinian Neuroscience Initiative, Al-Quds University, Beit Hanina, Palestine.)

Saniotis and Henneberg’s Spotlight article presents an optimistic view of neuroprosthetics. It seems telling that they cite cochlear implants as a success story when it is viewed by many as ethically fraught (see the Cochlear implant Wikipedia entry; scroll down to ‘Criticism and controversy’).

Ethics and your implants

This is from an April 6, 2015 article by Luc Henry on technologist.eu,

Technologist: What are the potential consequences of accepting the “augmented human” in society?

Gregor Wolbring: There are many that we might not even envision now. But let me focus on failure and obsolescence [emphasis mine], two issues that are rarely discussed. What happens when the mechanism fails in the middle of an action? Failure has hazardous consequences, but obsolescence has psychological ones. …. The constant surgical intervention needed to update the hardware may not be feasible. A person might feel obsolete if she cohabits with others using a newer version.

T. Are researchers working on prosthetics sometimes disconnected from reality?

G. W. Students engaged in the development of prosthetics have to learn how to think in societal terms and develop a broader perspective. Our education system provides them with a fascination for clever solutions to technological challenges but not with tools aiming at understanding the consequences, such as whether their product might increase or decrease social justice.

Wolbring is a professor at the University of Calgary’s Cumming School of Medicine (profile page) who writes on social issues to do with human enhancement/augmentation. As well,

Some of his areas of engagement are: ability studies including governance of ability expectations, disability studies, governance of emerging and existing sciences and technologies (e.g. nanoscale science and technology, molecular manufacturing, aging, longevity and immortality, cognitive sciences, neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors), impact of science and technology on marginalized populations, especially people with disabilities, the governance of bodily enhancement, sustainability issues, EcoHealth, resilience, ethics issues, health policy issues, human rights and sport.

He also maintains his own website here.

Not just startups

I’d classify Second Sight as a tech startup and startups have a high rate of failure, which may not have been clear to the patients who had the implants. Clinical trials can present problems too, as this excerpt from my September 17, 2020 posting notes,

This October 31, 2017 article by Emily Underwood for Science was revelatory,

“In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.”

Symbiosis can be another consequence, as mentioned in my September 17, 2020 posting,

From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence. [emphasis mine]

It’s complicated

For a lot of people these devices are or could be life-changing. At the same time, there are a number of different issues related to implants/prosthetics; the following is not an exhaustive list. As Wolbring notes, issues that we can’t begin to imagine now are likely to emerge as these medical advances become more ubiquitous.

Ability/disability?

Assistive technologies are almost always portrayed as helpful. For example, a cochlear implant gives people without hearing the ability to hear. The assumption is that this is always a good thing—unless you’re a deaf person who wants to define the problem a little differently. Who gets to decide what is good and ‘normal’ and what is desirable?

While the cochlear implant is the most extreme example I can think of, there are variations of these questions throughout the ‘disability’ communities.

Also, as Wolbring notes in his interview with the Technologist.eu, the education system tends to favour technological solutions which don’t take social issues into account. Wolbring cites social justice issues when he mentions failure and obsolescence.

Technical failures and obsolescence

The story, excerpted earlier in this posting, opened with a striking example of a technical failure at an awkward moment; a blind woman depending on her retinal implant loses all sight as she maneuvers through a subway station in New York City.

Aside from being an awful way to find out the company supplying and supporting your implant is in serious financial trouble and can’t offer assistance or repair, the failure offers a preview of what could happen as implants and prosthetics become more commonly used.

Keeping up/fomo (fear of missing out)/obsolescence

It used to be called ‘keeping up with the Joneses’: the practice of comparing yourself and your worldly goods to someone else’s and then trying to equal what they have, or do better. Usually, people want to have more and better than the mythical Joneses.

These days, the phenomenon (which has been expanded to include social networking) is better known as ‘fomo’ or fear of missing out (see the Fear of missing out Wikipedia entry).

Whatever you want to call it, humanity’s competitive nature can be seen where technology is concerned. When I worked in technology companies, I noticed that hardware and software were sometimes purchased for features that were effectively useless to us. But, not upgrading to a newer version was unthinkable.

Call it fomo or ‘keeping up with the Joneses’; either way, it’s a powerful force, and when people (and even companies) miss out or can’t keep up, it can lead to a sense of inferiority in the same way that having an obsolete implant or prosthetic could.

Social consequences

Could there be a neural implant/neuroprosthetic divide? There is already a digital divide (from its Wikipedia entry),

The digital divide is a gap between those who have access to new technology and those who do not … people without access to the Internet and other ICTs [information and communication technologies] are at a socio-economic disadvantage because they are unable or less able to find and apply for jobs, shop and sell online, participate democratically, or research and learn.

After reading Wolbring’s comments, it’s not hard to imagine a neural implant/neuroprosthetic divide with its attendant psychological and social consequences.

What kind of human am I?

There are other issues as noted in my September 17, 2020 posting. I’ve already mentioned ‘patient 6’, the woman who developed a symbiotic relationship with her brain/computer interface. This is how the relationship ended,

… He [Frederic Gilbert, ethicist] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

Above human

The possibility that implants will not merely restore or endow someone with ‘standard’ sight or hearing or motion or … but will augment or improve on nature was broached in this May 2, 2013 posting, More than human—a bionic ear that extends hearing beyond the usual frequencies and is one of many in the ‘Human Enhancement’ category on this blog.

More recently, Hugh Herr, an Associate Professor at the Massachusetts Institute of Technology (MIT), leader of the Biomechatronics research group at MIT’s Media Lab, a double amputee, and prosthetic enthusiast, starred in the recent (February 23, 2022) broadcast of ‘Augmented‘ on the Public Broadcasting Service (PBS) science programme, Nova.

I found ‘Augmented’ a little off-putting as it gave every indication of being an advertisement for Herr’s work in the form of a hero’s journey. I was not able to watch more than 10 minutes. This preview gives you a pretty good idea of what it was like, although the part in ‘Augmented’ where he says he’d like to be a cyborg hasn’t been included,

At a guess, there were a few talking heads (taking up 10%-20% of the running time) who provided some cautionary words to counterbalance the enthusiasm in the rest of the programme. It’s a standard approach designed to give the impression that both sides of a question are being recognized. The cautionary material is usually inserted past the halfway mark while leaving several minutes at the end for returning to the more optimistic material.

In a February 2, 2010 posting I have excerpts from an article featuring quotes from Herr that I still find startling,

Written by Paul Hochman for Fast Company, Bionic Legs, iLimbs, and Other Super-Human Prostheses [ETA March 23, 2022: an updated version of the article is now on Genius.com] delves further into the world where people may be willing to trade a healthy limb for a prosthetic. From the article,

There are many advantages to having your leg amputated.

Pedicure costs drop 50% overnight. A pair of socks lasts twice as long. But Hugh Herr, the director of the Biomechatronics Group at the MIT Media Lab, goes a step further. “It’s actually unfair,” Herr says about amputees’ advantages over the able-bodied. “As tech advancements in prosthetics come along, amputees can exploit those improvements. They can get upgrades. A person with a natural body can’t.”

Herr is not the only one who favours prosthetics (also from the Hochman article),

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.”

But Bailey is most surprised by his own reaction. “When I’m wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human.[emphasis mine] It’s a very powerful thing.”

My September 17, 2020 posting touches on more ethical and social issues including some of those surrounding consumer neurotechnologies or brain-computer interfaces (BCI). Unfortunately, I don’t have space for these issues here.

As for Paul Hochman’s article, Bionic Legs, iLimbs, and Other Super-Human Prostheses, now on Genius.com, it has been updated.

Money makes the world go around

Money and business practices have been indirectly referenced (for the most part) up to now in this posting. The February 15, 2022 IEEE Spectrum article and Hochman’s article, Bionic Legs, iLimbs, and Other Super-Human Prostheses, cover two aspects of the money angle.

In the IEEE Spectrum article, a tech start-up company, Second Sight, ran into financial trouble and was acquired by a company that has no plans to develop Second Sight’s core technology. The people implanted with the Argus II technology have been stranded, as were ‘patient 6’ and others participating in the clinical trial described in the July 24, 2019 article by Liam Drew for Nature Outlook: The brain, mentioned earlier in this posting.

I don’t know anything about the business bankruptcy mentioned in the Drew article but one of the business problems described in the IEEE Spectrum article suggests that Second Sight was founded before answering a basic question, “What is the market size for this product?”

On 18 July 2019, Second Sight sent Argus patients a letter saying it would be phasing out the retinal implant technology to clear the way for the development of its next-generation brain implant for blindness, Orion, which had begun a clinical trial with six patients the previous year. …

“The leadership at the time didn’t believe they could make [the Argus retinal implant] part of the business profitable,” Greenberg [Robert Greenberg, Second Sight co-founder] says. “I understood the decision, because I think the size of the market turned out to be smaller than we had thought.”

….

The question of whether a medical procedure or medicine can be profitable (or, should the question be, sufficiently profitable?) was referenced in my April 26, 2019 posting in the context of gene editing and personalized medicine.

Edward Abrahams, president of the Personalized Medicine Coalition (US-based), advocates for personalized medicine while noting in passing, market forces as represented by Goldman Sachs in his May 23, 2018 piece for statnews.com (Note: A link has been removed),

Goldman Sachs, for example, issued a report titled “The Genome Revolution.” It argues that while “genome medicine” offers “tremendous value for patients and society,” curing patients may not be “a sustainable business model.” [emphasis mine] The analysis underlines that the health system is not set up to reap the benefits of new scientific discoveries and technologies. Just as we are on the precipice of an era in which gene therapies, gene-editing, and immunotherapies promise to address the root causes of disease, Goldman Sachs says that these therapies have a “very different outlook with regard to recurring revenue versus chronic therapies.”

The ‘Glybera’ story in my July 4, 2019 posting (scroll down about 40% of the way) highlights the issue with “recurring revenue versus chronic therapies,”

Kelly Crowe in a November 17, 2018 article for the CBC (Canadian Broadcasting Corporation) news writes about Glybera,

It is one of this country’s great scientific achievements.

The first drug ever approved that can fix a faulty gene.

It’s called Glybera, and it can treat a painful and potentially deadly genetic disorder with a single dose — a genuine made-in-Canada medical breakthrough.

But most Canadians have never heard of it.

Here’s my summary (from the July 4, 2019 posting),

It cost $1M for a single treatment and that single treatment is good for at least 10 years.

Pharmaceutical companies make their money from repeated use of their medicaments and Glybera required only one treatment, so the company priced it according to how much it would have gotten for repeated use: $100,000 per year over a 10-year period. The company was not able to persuade governments and/or individuals to pay the cost.

In the end, 31 people got the treatment; most of them received it for free through clinical trials.

For rich people only?

Megan Devlin’s March 8, 2022 article for the Daily Hive announces a major investment in medical research (Note: A link has been removed),

Vancouver [Canada] billionaire Chip Wilson revealed Tuesday [March 8, 2022] that he has a rare genetic condition that causes his muscles to waste away, and announced he’s spending $100 million on research to find a cure.

His condition is called facio-scapulo-humeral muscular dystrophy, or FSHD for short. It progresses rapidly in some people and more slowly in others, but is characterized by progressive muscle weakness starting in the face, the neck, shoulders, and later the lower body.

“I’m out for survival of my own life,” Wilson said.

“I also have the resources to do something about this which affects so many people in the world.”

Wilson hopes the $100 million will produce a cure or muscle-regenerating treatment by 2027.

“This could be one of the biggest discoveries of all time, for humankind,” Wilson said. “Most people lose muscle, they fall, and they die. If we can keep muscle as we age this can be a longevity drug like we’ve never seen before.”

According to rarediseases.org, FSHD affects between four and 10 people out of every 100,000 [emphasis mine]. Right now, therapies are limited to exercise and pain management. There is no way to stall or reverse the disease’s course.

Wilson is best known for founding athleisure clothing company Lululemon. He also owns the most expensive home in British Columbia, a $73 million mansion in Vancouver’s Kitsilano neighbourhood.

Let’s see what the numbers add up to,

4 – 10 people out of 100,000

40 – 100 people out of 1M

1,200 – 3,000 people out of 30M (let’s say this is Canada’s population)

12,000 – 30,000 people out of 300M (let’s say this is the US’s population)

56,000 – 140,000 out of 1.4B (let’s say this is China’s population)

The rough total comes to 69,200 to 173,000 people across three countries with a combined population of 1.73B. Given how business currently operates, it seems unlikely that any company will want to offer Wilson’s hoped-for medical therapy, although he and possibly others may benefit from a clinical trial.
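For anyone who wants to check or rerun these back-of-the-envelope numbers, here’s a minimal Python sketch. The prevalence range comes from rarediseases.org; the population figures are the rounded assumptions used above, not census data, so swap in your own if you like.

```python
# Back-of-the-envelope estimate of the potential FSHD patient pool.
# Prevalence: 4-10 per 100,000 (rarediseases.org). Populations are the
# rounded assumptions from the text above, not official census figures.
RATE_LOW = 4 / 100_000
RATE_HIGH = 10 / 100_000

populations = {
    "Canada": 30_000_000,
    "United States": 300_000_000,
    "China": 1_400_000_000,
}

total_low = total_high = 0
for country, pop in populations.items():
    low, high = pop * RATE_LOW, pop * RATE_HIGH
    total_low += low
    total_high += high
    print(f"{country}: {low:,.0f} - {high:,.0f} potential patients")

print(f"Combined: {total_low:,.0f} - {total_high:,.0f} "
      f"out of {sum(populations.values()):,} people")
```

Running this prints 1,200 – 3,000 for Canada, 12,000 – 30,000 for the US, 56,000 – 140,000 for China, and a combined 69,200 – 173,000 out of 1.73B people.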

Should profit or wealth be considerations?

The stories about the patients with the implants and the patients who need Glybera are heartbreaking and point to a question not often asked when medical therapies and medications are developed. Is the profit model the best choice and, if so, how much profit?

I have no answer to that question, but I wish it were asked by medical researchers and policy makers.

As for wealthy people dictating the direction of medical research, I don’t have answers there either. I hope the research will yield applications and/or valuable information for conditions beyond Wilson’s own disease.

It’s his money after all

Wilson calls his new venture SolveFSHD. It doesn’t seem to be affiliated with any university or biomedical science organization, and it’s not clear how the money will be awarded (no programmes, no application procedure, no panel of experts). There are three people on the team: Eva R. Chin, scientist and executive director; Chip Wilson, SolveFSHD founder/funder and FSHD patient; and Neil Camarta, engineer, executive (fossil fuels and clean energy), and FSHD patient. There’s also a Twitter feed (presumably for the latest updates): https://twitter.com/SOLVEFSHD.

Perhaps unrelated but intriguing is news about a proposed new building in Kenneth Chan’s March 31, 2022 article for the Daily Hive,

Low Tide Properties, the real estate arm of Lululemon founder Chip Wilson [emphasis mine], has submitted a new development permit application to build a 148-ft-tall, eight-storey, mixed-use commercial building in the False Creek Flats of Vancouver.

The proposal, designed by local architectural firm Musson Cattell Mackey Partnership, calls for 236,000 sq ft of total floor area, including 105,000 sq ft of general office space, 102,000 sq ft of laboratory space [emphasis mine], and 5,000 sq ft of ground-level retail space. An outdoor amenity space for building workers will be provided on the rooftop.

[next door] The 2001-built, five-storey building at 1618 Station Street immediately to the west of the development site is also owned by Low Tide Properties [emphasis mine]. The Ferguson, the name of the existing building, contains about 79,000 sq ft of total floor area, including 47,000 sq ft of laboratory space and 32,000 sq ft of general office space. Biotechnology company Stemcell Technologies [STEMCELL Technologies] is the anchor tenant [emphasis mine].

I wonder if this proposed new building will house SolveFSHD and perhaps other FSHD-focused enterprises. The proximity of STEMCELL Technologies could be quite convenient. In any event, $100M will buy a lot (pun intended).

The end

Issues I’ve described here in the context of neural implants/neuroprosthetics and cutting-edge medical advances are standard problems, not specific to these technologies/treatments:

  • What happens when the technology fails (hopefully not at a critical moment)?
  • What happens when your supplier goes out of business or discontinues the products you purchase from them?
  • How much does it cost?
  • Who can afford the treatment/product? Will it only be for rich people?
  • Will this technology/procedure/etc. exacerbate or create new social tensions between social classes, cultural groups, religious groups, races, etc.?

Of course, having your neural implant fail suddenly in the middle of a New York City subway station seems a substantively different experience from having your car break down on the road.

There are, of course, issues we can’t yet envision (as Wolbring notes), and there are issues such as symbiotic relationships with our implants and/or feeling that you are “above human.” Whether symbiosis and ‘implant/prosthetic superiority’ will affect more than a small number of people or become major issues is still to be determined.

There’s a lot to be optimistic about where new medical research and advances are concerned but I would like to see more thoughtful coverage in the media (e.g., news programmes and documentaries like ‘Augmented’) and more thoughtful comments from medical researchers.

Of course, the biggest issue I’ve raised here concerns the current business models for health care products, where profit is valued over people’s health and well-being. It’s a big question and I don’t see any definitive answers, but the question put me in mind of this quote (from a September 22, 2020 obituary for US Supreme Court Justice Ruth Bader Ginsburg by Irene Monroe for Curve),

Ginsburg’s advocacy for justice was unwavering and showed it, especially with each oral dissent. In another oral dissent, Ginsburg quoted a familiar Martin Luther King Jr. line, adding her coda: “‘The arc of the universe is long, but it bends toward justice,’” but only “if there is a steadfast commitment to see the task through to completion.” …

Martin Luther King Jr. popularized and paraphrased the quote (from a January 18, 2018 article by Mychal Denzel Smith for Huffington Post),

His use of the quote is best understood by considering his source material. “The arc of the moral universe is long, but it bends toward justice” is King’s clever paraphrasing of a portion of a sermon delivered in 1853 by the abolitionist minister Theodore Parker. Born in Lexington, Massachusetts, in 1810, Parker studied at Harvard Divinity School and eventually became an influential transcendentalist and minister in the Unitarian church. In that sermon, Parker said: “I do not pretend to understand the moral universe. The arc is a long one. My eye reaches but little ways. I cannot calculate the curve and complete the figure by experience of sight. I can divine it by conscience. And from what I see I am sure it bends toward justice.”

I choose to keep faith that people will get the healthcare products they need, and that all of us will keep working at making access more fair.