Category Archives: ethics

Guinea pigging and a walk down memory lane for Remembrance Day 2024

While this isn’t one of my usual areas of interest, there is a personal element for me (more about that at the end). Some people earn their living as subjects for drug tests; it’s called guinea pigging. (There’s more here in a July 1, 2015 posting; see the first three paragraphs after the information about cross-posting and the circumstances under which I wrote the article.)

Earlier this fall (2024), the Canadian Broadcasting Corporation (CBC) released a documentary, Bodies for Rent, focusing on two guinea piggers. Here’s more from a September 25, 2024 CBC online article about their documentary,

Before a drug becomes available on the market, it must undergo rigorous testing and multiple levels of clinical trials to ensure its functionality and safety. Every year, thousands of people in Canada and the U.S. take part in these trials, and may receive financial compensation for doing so. 

A new documentary highlights how some volunteers are attempting to earn a living by putting their bodies on the line. Bodies for Rent follows two men who spend their days searching for eligible clinical studies, and shows the lengths they’ll go to in order to complete a trial and get paid.  

A way to make a ‘living’

Participating in a trial for a medical drug still under development involves reporting any side effects. It’s a potentially dangerous “job,” but for many volunteers, the rewards outweigh the risks. 

“I think I’ve done more than 40 studies,” says 55-year-old “Franco,” who conceals his real identity with makeup in the documentary. “I was struggling to pay my rent. And I saw an ad at the subway in Toronto, and they said, ‘Would you like to make up to $1,200 over a weekend?'”

“I usually make [$30,000] to 40,000 a year. Before, I was making, like, $18,000 working at a factory.”

Raighne, an artist living in Minneapolis, was raised by a single mother and grew up on welfare. “I’ve done about 20 or 30 drug trials,” he says in the film. “And nothing makes money like clinical studies.”

Trying to get out of debt and manage an unstable business, Raighne sometimes spends days or weeks away from home while participating in a study. “I had a friend describe it as, like, ‘drug jail,'” he says. “Because you’re trapped for a set amount of time. You’re under observation.”

From testing on prisoners to testing on the poor

Before the 1970s, most Phase I clinical trials — which look at a drug’s safety, determine the safe dosage range and see if there are any side effects — were conducted on prisoners. This allowed researchers to control and monitor every aspect of participants’ lives. 

“These studies did the most unimaginably horrible things you can think of to prisoners there,” says Carl Elliott, a University of Minnesota bioethicist featured in Bodies for Rent and the author of The Occasional Human Sacrifice: Medical Experimentation and the Price of Saying No [emphasis mine]. 

“For example, they injected inmates with herpes. They injected them with asbestos. They even tested chemical warfare agents on them.”

Public outcry and new reforms eventually made research in prisons much more difficult. “The question was, ‘Well, who do we do Phase I trials on now?’ We can’t do them on prisoners anymore,” says Elliott. 

“The answer is poor people.”

‘A financial incentive to lie’

When testing in prisons stopped and financial incentives were introduced, students and people impacted by poverty became more common test subjects. However, the promise of money at the completion of a trial has added complications. 

“When I started doing studies, I used to be very honest,” says Franco. “I [would] tell all the side effects that I was going through.” 

But after reporting severe migraines during one study, Franco says he was forced to leave — with less than 20 per cent of the promised payout. He says he was also blocked from doing further studies with that company. 

“I [was] being penalized for being honest. So, after that, I kind of learned my lesson and I decided to tone down the side effects,” he says. 

Once in a study, the risks persist. Franco says that after participating for nearly two months in a study worth around $20,000 to him, he received a call from the clinic saying he had inflammation in his pancreas. The study manager told him he was being removed from the study, and later, the clinic advised him to go to an emergency room immediately. 

“I hope it’s not permanent. If it’s permanent, then I’m gonna be upset,” Franco says to the camera in the documentary. “I was supposed to get around $20,000. If I don’t get the full amount because I am getting side effects, I think that it’s unfair.”

In the end, Franco was paid $9,000. 

The September 25, 2024 CBC online article also includes an embedded video about testing on prisoners. “Bodies for Rent” can be viewed on CBC Gem. (You do have to create an account in order to view the documentary or anything else on CBC Gem.)

A walk down memory lane for Remembrance Day 2024

When my father was in basic training for the Canadian army and preparing to fight in World War II, he participated in some kind of experiment. The details are fuzzy, as he didn’t talk about it much, but he did insist that some of his medical problems (specifically, the problems he had with his skin) were directly due to his experience as a guinea pig and that he should be compensated by the Canadian government. If memory serves, he felt the army had misled him into participating in the experiment.

Papa was 15 1/2 when he lied his way into the army. Not too long after, the army, realizing its mistake, kept him back from the front (in some training camp in the Prairies), which is when he became a medical test subject for a time. When he reached the age of 18, the Canadian army shipped him overseas.

When he finally did try to speak up about his experience as a guinea pig, it was the late 1960s, and he didn’t pursue the matter for long, being of the opinion that no one would pay much attention. He wasn’t wrong.

It wasn’t until details about the infamous Tuskegee Syphilis Study were revealed that there was serious discussion about informed consent in the United States (around 1972). I don’t know when it became a serious discussion in Canada. Even then, some of the research from the 1970s is stomach-churning, as I found on stumbling across a study from that period. The researchers were conducting an experiment with a drug they knew was not going to work and that had bad side effects, as was noted in the abstract. The testing took place on patients in a hospital ward.

There is still a long way to go, as evidenced by the “Bodies for Rent” documentary and Elliott’s 2024 book “The Occasional Human Sacrifice: Medical Experimentation and the Price of Saying No.” I hope there are changes to how drug testing is done as a consequence of added awareness, but it’s a long, hard road to change.

For my father on Remembrance Day 2024: you were right; what they did to you was wrong. And still, you went and fought. Thank you.

Submit abstracts by Jan. 31 for 2025 Governance of Emerging Technologies & Science (GETS) Conference at Arizona State U

This call for abstracts from Arizona State University (ASU) for the Twelfth Annual Governance of Emerging Technologies and Science (GETS) Conference was received via email,

GETS 2025: Call for abstracts

Save the date for the Twelfth Annual Governance of Emerging Technologies and Science Conference, taking place May 19 and 20, 2025 at the Sandra Day O’Connor College of Law at Arizona State University in Phoenix, AZ. The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including:

National security
Nanotechnology
Quantum computing
Autonomous vehicles
3D printing
Robotics
Synthetic biology
Gene editing
Artificial intelligence
Biotechnology

Genomics
Internet of things (IoT)
Autonomous weapon systems
Personalized medicine
Neuroscience
Digital health
Human enhancement
Telemedicine
Virtual reality
Blockchain

Call for abstracts: The co-sponsors invite submission of abstracts for proposed presentations. Submitters of abstracts need not provide a written paper, although provision will be made for posting and possible post-conference publication of papers for those who are interested.

  • Abstracts are invited for any aspect or topic relating to the governance of emerging technologies, including any of the technologies listed above
  • Abstracts should not exceed 500 words and must contain your name and email address
  • Abstracts must be submitted by Friday, January 31, 2025, to be considered

Submit your abstract

For more information, contact Eric Hitchcock.

Good luck!

AI and Canadian science diplomacy & more stories from the October 2024 Council of Canadian Academies (CCA) newsletter

The October 2024 issue of The Advance (Council of Canadian Academies [CCA] newsletter) arrived in my emailbox on October 15, 2024, with some interesting tidbits about artificial intelligence. Note: For anyone who wants to see the entire newsletter for themselves, you can sign up here or, in French, vous pouvez vous abonner ici,

Artificial Intelligence and Canada’s Science Diplomacy Future

For nearly two decades, Canada has been a global leader in artificial intelligence (AI) research, contributing a significant percentage of the world’s top-cited scientific publications on the subject. In that time, the number of countries participating in international collaborations has grown significantly, supporting new partnerships and accounting for as much as one quarter of all published research articles.

“Opportunities for partnerships are growing rapidly alongside the increasing complexity of new scientific discoveries and emerging industry sectors,” wrote the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships earlier this year, singling out Canada’s AI expertise. “At the same time, discussions of sovereignty and national interests abut the movement toward open science and transdisciplinary approaches.”

On Friday, November 22 [2024], the CCA will host “Strategy and Influence: AI and Canada’s Science Diplomacy Future” as part of the Canadian Science Policy Centre (CSPC) annual conference. The panel discussion will draw on case studies related to AI research collaboration to explore the ways in which such partnerships inform science diplomacy. Panellists include:

  • Monica Gattinger, chair of the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships and director of the Institute for Science, Society and Policy at the University of Ottawa (picture omitted)
  • David Barnes, head of the British High Commission Science, Climate, and Energy Team
  • Constanza Conti, Professor of Numerical Analysis at the University of Florence and Scientific Attaché at the Italian Embassy in Ottawa
  • Jean-François Doulet, Attaché for Science and Higher Education at the Embassy of France in Canada
  • Konstantinos Kapsouropoulos, Digital and Research Counsellor at the Delegation of the European Union to Canada

For details on CSPC 2024, click here. [Here’s the theme and a few more details about the conference: Empowering Society: The Transformative Value of Science, Knowledge, and Innovation; The 16th annual Canadian Science Policy Conference (CSPC) will be held in person from November 20th to 22nd, 2024] For a user guide to Navigating Collaborative Futures, from the CCA’s Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships, click here.

I have checked out the panel’s session page,

448: Strategy and Influence: AI and Canada’s Science Diplomacy Future

Friday, November 22 [2024]
1:00 pm – 2:30 pm EST

Science and International Affairs and Security

About

Organized By: Council of Canadian Academies (CCA)

Artificial intelligence has already begun to transform Canada’s economy and society, and the broader advantages of international collaboration in AI research have the potential to make an even greater impact. With three national AI institutes and a Pan-Canadian AI Strategy, Canada’s AI ecosystem is thriving and positions the country to build stronger international partnerships in this area, and to develop more meaningful international collaborations in other areas of innovation. This panel will convene science attachés to share perspectives on science diplomacy and partnerships, drawing on case studies related to AI research collaboration.

The newsletter also provides links to additional readings on various topics; here are the AI items,

In Ottawa, Prime Minister Justin Trudeau and President Emmanuel Macron of France renewed their commitment “to strengthening economic exchanges between Canadian and French AI ecosystems.” They also revealed that Canada would be named Country of the Year at Viva Technology’s annual conference, to be held next June in Paris.

A “slower, but more capable” version of OpenAI’s ChatGPT is impressing scientists with the strength of its responses to prompts, according to Nature. The new version, referred to as “o1,” outperformed a previous ChatGPT model on a standardized test involving chemistry, physics, and biology questions, and “beat PhD-level scholars on the hardest series of questions.” [Note: As of October 16, 2024, the Nature news article of October 1, 2024 appears to be open access. It’s unclear how long this will continue to be the case.]

In memoriam: Abhishek Gupta, the founder and principal researcher of the Montreal AI Ethics Institute and a member of the CCA Expert Panel on Artificial Intelligence for Science and Engineering, died on September 30 [2024]. His colleagues shared the news in a memorial post, writing, “It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work.”

I clicked the link to read the Trudeau/Macron announcement and found this September 26, 2024 Innovation, Science and Economic Development Canada news release,

Meeting in Ottawa on September 26, 2024, Justin Trudeau, the Prime Minister of Canada, and Emmanuel Macron, the President of the French Republic, issued a call to action to promote the development of a responsible approach to artificial intelligence (AI).

Our two countries will increase the coordination of our actions, as Canada will assume the Presidency of the G7 in 2025 and France will host the AI Action Summit on February 10 and 11, 2025.

Our two countries are working on the development and use of safe, secure and trustworthy AI as part of a risk-aware, human-centred and innovation-friendly approach. This cooperation is based on shared values. We believe that the development and use of AI need to be beneficial for individuals and the planet, for example by increasing human capabilities and developing creativity, ensuring the inclusion of under-represented people, reducing economic, social, gender and other inequalities, protecting information integrity and protecting natural environments, which in turn will promote inclusive growth, well-being, sustainable development and environmental sustainability.

We are committed to promoting the development and use of AI systems that respect the rule of law, human rights, democratic values and human-centred values. Respecting these values means developing and using AI systems that are transparent and explainable, robust, safe and secure, and whose stakeholders are held accountable for respecting these principles, in line with the Recommendation of the OECD Council on Artificial Intelligence, the Hiroshima AI Process, the G20 AI Principles and the International Partnership for Information and Democracy.

Based on these values and principles, Canada and France are working on high-quality scientific cooperation. In April 2023, we formalized the creation of a joint committee for science, technology and innovation. This committee has identified emerging technologies, including AI, as one of the priority areas for cooperation between our two countries. In this context, a call for AI research projects was announced last July, scheduled for the end of 2024 and funded, on the French side, by the French National Research Agency, and, on the Canadian side, by a consortium made up of Canada’s three granting councils (the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada and the Canadian Institutes of Health Research) and IVADO [Institut de valorisation des données], the AI research, training and transfer consortium.

We will also collaborate on the evaluation and safety of AI models. We have announced key AI safety initiatives, including the AI Safety Institute of Canada [emphasis mine; not to be confused with Artificial Intelligence Governance & Safety Canada (AIGS)], which will be launched soon, and France’s National Centre for AI evaluation. We expect these two agencies will work to improve knowledge and understanding of technical and socio-technical aspects related to the safety and evaluation of advanced AI systems.

Canada and France are committed to strengthening economic exchanges between Canadian and French AI ecosystems, whether by organizing delegations, like the one organized by Scale AI with 60 Canadian companies at the latest Viva Technology conference in Paris, or showcasing France at the ALL IN event in Montréal on September 11 and 12, 2024, through cooperation between companies, for example, through large companies’ adoption of services provided by small companies or through the financial support that investment funds provide to companies on both sides of the Atlantic. Our two countries will continue their cooperation at the upcoming Viva Technology conference in Paris, where Canada will be the Country of the Year.

We want to strengthen our cooperation in terms of developing AI capabilities. We specifically want to promote access to AI’s compute capabilities in order to support national and international technological advances in research and business, notably in emerging markets and developing countries, while committing to strengthening their efforts to make the necessary improvements to the energy efficiency of these infrastructures. We are also committed to sharing their experience in initiatives to develop AI skills and training in order to accelerate workforce deployment.

Canada and France cooperate on the international stage to ensure the alignment and convergence of AI regulatory frameworks, given the economic potential and the global social consequences of this technological revolution. Under our successive G7 presidencies in 2018 and 2019, we worked to launch the Global Partnership on Artificial Intelligence (GPAI), which now has 29 members from all over the world, and whose first two centres of expertise were opened in Montréal and Paris. We support the creation of the new integrated partnership, which brings together OECD and GPAI member countries, and welcomes new members, including emerging and developing economies. We hope that the implementation of this new model will make it easier to participate in joint research projects that are of public interest, reduce the global digital divide and support constructive debate between the various partners on standards and the interoperability of their AI-related regulations.

We will continue our cooperation at the AI Action Summit in France on February 10 and 11, 2025, where we will strive to find solutions to meet our common objectives, such as the fight against disinformation or the reduction of the environmental impact of AI. With the objective of actively and tangibly promoting the use of the French language in the creation, production, distribution and dissemination of AI, taking into account its richness and diversity, and in compliance with copyright, we will attempt to identify solutions that are in line with the five themes of the summit: AI that serves the public interest, the future of work, innovation and culture, trust in AI and global AI governance.

Canada has accepted to co-chair the working group on global AI governance in order to continue the work already carried out by the GPAI, the OECD, the United Nations and its various bodies, the G7 and the G20. We would like to highlight and advance debates on the cultural challenges of AI in order to accelerate the joint development of relevant responses to the challenges faced. We would also like to develop the change management policies needed to support all of the affected cultural sectors. We will continue these discussions together during our successive G7 presidencies in 2025 and 2026.

This is some very interesting news, and it reminded me of this October 10, 2024 posting, “October 29, 2024 Woodrow Wilson Center event: 2024 Canada-US Legal Symposium | Artificial Intelligence Regulation, Governance, and Liability.” (I also included an update on the current state of Canadian legislation and artificial intelligence in that posting.)

I checked out the In memoriam notice for Abhishek Gupta and found this, Note: Links have been removed except the link to Abhishek Gupta’s memorial page hosting tributes, stories, and more. The link is in the highlighted paragraph,

Honoring the Life and Legacy of a Leader in AI Ethics

In accordance with his family’s wishes, it is with profound sadness that we announce the passing of Abhishek Gupta, Founder and Principal Researcher of the Montreal AI Ethics Institute (MAIEI), Director for Responsible AI at the Boston Consulting Group (BCG), and a pioneering voice in the field of AI ethics. Abhishek passed away peacefully in his sleep on September 30, 2024 in India, surrounded by his loving family. He is survived by his father, Ashok Kumar Gupta; his mother, Asha Gupta; and his younger brother, Abhijay Gupta.


Note: Details of a memorial service will be announced in the coming weeks. For those who wish to share stories, personal anecdotes, and photos of Abhishek, please visit www.forevermissed.com/abhishekgupta — your contributions will be greatly appreciated by his family and loved ones.

Born on December 20, 1992, in India, Abhishek’s intellectual curiosity and drive to understand technology led him on a remarkable journey. After excelling at Delhi Public School, Abhishek attended McGill University in Montreal, where he earned a Bachelor of Science in Computer Science (BSc’15). Following his graduation, Abhishek worked as a software engineer at Ericsson. He later joined Microsoft as a machine learning engineer, where he also served on the CSE Responsible AI Board. It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work. 

The Beginnings: Building a Global AI Ethics Community

Abhishek’s vision for MAIEI was rooted in community building. He began hosting in-person AI Ethics Meetups in Montreal throughout 2017. These gatherings were unique—participants completed assigned readings in advance, split into small groups for discussion, and then reconvened to share insights. This approach fostered deep, structured conversations and made AI ethics accessible to everyone, regardless of their background. The conversations and insights from these meetups became the foundation of MAIEI, which was launched in May 2018.

When the pandemic hit, Abhishek adapted the meetup format to an online setting, enabling MAIEI to expand worldwide. It was his idea to bring these conversations to a global stage, using virtual platforms to ensure voices from all corners of the world could join in. He passionately stood up for the “little guy,” making sure that those whose voices might be overlooked or unheard in traditional forums had a platform. Under his stewardship, MAIEI emerged as a globally recognized leader in fostering public discussions on the ethical implications of artificial intelligence. Through MAIEI, Abhishek fulfilled his mission of democratizing AI ethics literacy, empowering individuals from all backgrounds to engage with the future of technology.

I offer my sympathies to his family, friends, and communities for their profound loss.

Biobots (also known as biohybrid robots) occupy a third state between life and death?

I got a bit of a jolt from this September 12, 2024 essay by Peter A Noble, affiliate professor of microbiology at the University of Washington, and Alex Pozhitkov, senior technical lead of bioinformatics, Irell & Manella Graduate School of Biological Sciences at City of Hope, for The Conversation (h/t Sept. 12, 2024 item on phys.org), Note: Links have been removed,

Life and death are traditionally viewed as opposites. But the emergence of new multicellular life-forms from the cells of a dead organism introduces a “third state” that lies beyond the traditional boundaries of life and death.

Usually, scientists consider death to be the irreversible halt of functioning of an organism as a whole. However, practices such as organ donation highlight how organs, tissues and cells can continue to function even after an organism’s demise. This resilience raises the question: What mechanisms allow certain cells to keep working after an organism has died?

We are researchers who investigate what happens within organisms after they die. In our recently published review, we describe how certain cells – when provided with nutrients, oxygen, bioelectricity or biochemical cues – have the capacity to transform into multicellular organisms with new functions after death.

Life, death and emergence of something new

The third state challenges how scientists typically understand cell behavior. While caterpillars metamorphosing into butterflies, or tadpoles evolving into frogs, may be familiar developmental transformations, there are few instances where organisms change in ways that are not predetermined. Tumors, organoids and cell lines that can indefinitely divide in a petri dish, like HeLa cells [cervical cancer cells taken from Henrietta Lacks without her knowledge], are not considered part of the third state because they do not develop new functions.

However, researchers found that skin cells extracted from deceased frog embryos were able to adapt to the new conditions of a petri dish in a lab, spontaneously reorganizing into multicellular organisms called xenobots [emphasis mine]. These organisms exhibited behaviors that extend far beyond their original biological roles. Specifically, these xenobots use their cilia – small, hair-like structures – to navigate and move through their surroundings, whereas in a living frog embryo, cilia are typically used to move mucus.

Xenobots are also able to perform kinematic self-replication, meaning they can physically replicate their structure and function without growing. This differs from more common replication processes that involve growth within or on the organism’s body.

Researchers have also found that solitary human lung cells can self-assemble into miniature multicellular organisms that can move around. These anthrobots [emphasis mine] behave and are structured in new ways. They are not only able to navigate their surroundings but also repair both themselves and injured neuron cells placed nearby.

Taken together, these findings demonstrate the inherent plasticity of cellular systems and challenge the idea that cells and organisms can evolve only in predetermined ways. The third state suggests that organismal death may play a significant role in how life transforms over time.

I had not realized that xenobots are derived from dead frog embryos, something I missed when mentioning or featuring them in previous stories, the latest being a September 13, 2024 posting, which also mentions anthrobots. Previous stories were published in a June 21, 2021 posting about xenobots 2.0 and their ability to move and a June 8, 2022 posting about their ability to reproduce. Thank you to the authors for relieving me of some of my ignorance.

For some reason I was expecting mention, brief or otherwise, of ethical or social implications, but the authors offered this instead, from their September 12, 2024 essay, Note: Links have been removed,

Implications for biology and medicine

The third state not only offers new insights into the adaptability of cells. It also offers prospects for new treatments.

For example, anthrobots could be sourced from an individual’s living tissue to deliver drugs without triggering an unwanted immune response. Engineered anthrobots injected into the body could potentially dissolve arterial plaque in atherosclerosis patients and remove excess mucus in cystic fibrosis patients.

Importantly, these multicellular organisms have a finite life span, naturally degrading after four to six weeks. This “kill switch” prevents the growth of potentially invasive cells.

A better understanding of how some cells continue to function and metamorphose into multicellular entities some time after an organism’s demise holds promise for advancing personalized and preventive medicine.

I look forward to hearing about the third state and about any ethical or social issues that may arise from it.

Bio-hybrid robotics (living robots) needs public debate and regulation

A July 23, 2024 University of Southampton (UK) press release (also on EurekAlert but published July 22, 2024) describes the emerging science/technology of bio-hybrid robotics and a recent study about the ethical issues raised, Note 1: bio-hybrid may also be written as biohybrid; Note 2: Links have been removed,

Development of ‘living robots’ needs regulation and public debate

Researchers are calling for regulation to guide the responsible and ethical development of bio-hybrid robotics – a ground-breaking science which fuses artificial components with living tissue and cells.

In a paper published in Proceedings of the National Academy of Sciences [PNAS] a multidisciplinary team from the University of Southampton and universities in the US and Spain set out the unique ethical issues this technology presents and the need for proper governance.

Combining living materials and organisms with synthetic robotic components might sound like something out of science fiction, but this emerging field is advancing rapidly. Bio-hybrid robots using living muscles can crawl, swim, grip, pump, and sense their surroundings. Sensors made from sensory cells or insect antennae have improved chemical sensing. Living neurons have even been used to control mobile robots.

Dr Rafael Mestre from the University of Southampton, who specialises in emergent technologies and is co-lead author of the paper, said: “The challenges in overseeing bio-hybrid robotics are not dissimilar to those encountered in the regulation of biomedical devices, stem cells and other disruptive technologies. But unlike purely mechanical or digital technologies, bio-hybrid robots blend biological and synthetic components in unprecedented ways. This presents unique possible benefits but also potential dangers.”

Research publications relating to bio-hybrid robotics have increased continuously over the last decade. But the authors found that of the more than 1,500 publications on the subject at the time, only five considered its ethical implications in depth.

The paper’s authors identified three areas where bio-hybrid robotics present unique ethical issues: Interactivity – how bio-robots interact with humans and the environment, Integrability – how and whether humans might assimilate bio-robots (such as bio-robotic organs or limbs), and Moral status.

In a series of thought experiments, they describe how a bio-robot for cleaning our oceans could disrupt the food chain, how a bio-hybrid robotic arm might exacerbate inequalities [emphasis mine], and how increasingly sophisticated bio-hybrid assistants could raise questions about sentience and moral value.

“Bio-hybrid robots create unique ethical dilemmas,” says Aníbal M. Astobiza, an ethicist from the University of the Basque Country in Spain and co-lead author of the paper. “The living tissue used in their fabrication, potential for sentience, distinct environmental impact, unusual moral status, and capacity for biological evolution or adaptation create unique ethical dilemmas that extend beyond those of wholly artificial or biological technologies.”

The paper is the first from the Biohybrid Futures project led by Dr Rafael Mestre, in collaboration with the Rebooting Democracy project. Biohybrid Futures is setting out to develop a framework for the responsible research, application, and governance of bio-hybrid robotics.

The paper proposes several requirements for such a framework, including risk assessments, consideration of social implications, and increasing public awareness and understanding.

Dr Matt Ryan, a political scientist from the University of Southampton and a co-author on the paper, said: “If debates around embryonic stem cells, human cloning or artificial intelligence have taught us something, it is that humans rarely agree on the correct resolution of the moral dilemmas of emergent technologies.

“Compared to related technologies such as embryonic stem cells or artificial intelligence, bio-hybrid robotics has developed relatively unattended by the media, the public and policymakers, but it is no less significant. We want the public to be included in this conversation to ensure a democratic approach to the development and ethical evaluation of this technology.”

In addition to the need for a governance framework, the authors set out actions that the research community can take now to guide their research.

“Taking these steps should not be seen as prescriptive in any way, but as an opportunity to share responsibility, taking a heavy weight away from the researcher’s shoulders,” says Dr Victoria Webster-Wood, a biomechanical engineer from Carnegie Mellon University in the US and co-author on the paper.

“Research in bio-hybrid robotics has evolved in various directions. We need to align our efforts to fully unlock its potential.”

Here’s a link to and a citation for the paper,

Ethics and responsibility in biohybrid robotics research by Rafael Mestre, Aníbal M. Astobiza, Victoria A. Webster-Wood, Matt Ryan, and M. Taher A. Saif. PNAS 121 (31) e2310458121 July 23, 2024 DOI: https://doi.org/10.1073/pnas.2310458121

This paper is open access.

Cyborg or biohybrid robot?

Earlier, I highlighted “… how a bio-hybrid robotic arm might exacerbate inequalities …” because it suggests cyborgs, which are not mentioned in the press release or in the paper. This seems like an odd omission but, over the years, terminology does change, although it’s not clear that’s the situation here.

I have two ‘definitions’; the first is from an October 21, 2019 article by Javier Yanes for OpenMind BBVA, Note: More about BBVA later,

The fusion between living organisms and artificial devices has become familiar to us through the concept of the cyborg (cybernetic organism). This approach consists of restoring or improving the capacities of the organic being, usually a human being, by means of technological devices. On the other hand, biohybrid robots are in some ways the opposite idea: using living tissues or cells to provide the machine with functions that would be difficult to achieve otherwise. The idea is that if soft robots seek to achieve this through synthetic materials, why not do so directly with living materials?

In contrast, there’s this from “Biohybrid robots: recent progress, challenges, and perspectives,” Note 1: Full citation for paper follows excerpt; Note 2: Links have been removed,

2.3. Cyborgs

Another approach to building biohybrid robots is the artificial enhancement of animals or using an entire animal body as a scaffold to manipulate robotically. The locomotion of these augmented animals can then be externally controlled, spanning three modes of locomotion: walking/running, flying, and swimming. Notably, these capabilities have been demonstrated in jellyfish (figure 4(A)) [139, 140], clams (figure 4(B)) [141], turtles (figure 4(C)) [142, 143], and insects, including locusts (figure 4(D)) [27, 144], beetles (figure 4(E)) [28, 145–158], cockroaches (figure 4(F)) [159–165], and moths [166–170].

….

The advantages of using entire animals as cyborgs are multifold. For robotics, augmented animals possess inherent features that address some of the long-standing challenges within the field, including power consumption and damage tolerance, by taking advantage of animal metabolism [172], tissue healing, and other adaptive behaviors. In particular, biohybrid robotic jellyfish, composed of a self-contained microelectronic swim controller embedded into live Aurelia aurita moon jellyfish, consumed one to three orders of magnitude less power per mass than existing swimming robots [172], and cyborg insects can make use of the insect’s hemolymph directly as a fuel source [173].

So, sometimes there’s a distinction and sometimes there’s not. I take this to mean that the field is still emerging and that’s reflected in evolving terminology.

Here’s a link to and a citation for the paper,

Biohybrid robots: recent progress, challenges, and perspectives by Victoria A Webster-Wood, Maria Guix, Nicole W Xu, Bahareh Behkam, Hirotaka Sato, Deblina Sarkar, Samuel Sanchez, Masahiro Shimizu and Kevin Kit Parker. Bioinspiration & Biomimetics, Volume 18, Number 1, 015001. DOI: 10.1088/1748-3190/ac9c3b. Published 8 November 2022. © 2022 The Author(s). Published by IOP Publishing Ltd.

This paper is open access.

A few notes about BBVA and other items

BBVA is Banco Bilbao Vizcaya Argentaria according to its Wikipedia entry, Note: Links have been removed,

Banco Bilbao Vizcaya Argentaria, S.A. (Spanish pronunciation: [ˈbaŋko βilˈβao βiθˈkaʝa aɾxenˈtaɾja]), better known by its initialism BBVA, is a Spanish multinational financial services company based in Madrid and Bilbao, Spain. It is one of the largest financial institutions in the world, and is present mainly in Spain, Portugal, Mexico, South America, Turkey, Italy and Romania.[2]

BBVA’s OpenMind is, from their About us page,

OpenMind: BBVA’s knowledge community

OpenMind is a non-profit project run by BBVA that aims to contribute to the generation and dissemination of knowledge about fundamental issues of our time, in an open and free way. The project is materialized in an online dissemination community.

Sharing knowledge for a better future.

At OpenMind we want to help people understand the main phenomena affecting our lives; the opportunities and challenges that we face in areas such as science, technology, humanities or economics. Analyzing the impact of scientific and technological advances on the future of the economy, society and our daily lives is the project’s main objective, which always starts on the premise that a broader and greater quality knowledge will help us to make better individual and collective decisions.

As for other items, you can find my latest (biorobotic, cyborg, or bionic, depending on what terminology you want to use) jellyfish story in this June 6, 2024 posting. The Biohybrid Futures project mentioned in the press release can be found here, and the Rebooting Democracy project (unexpected in the context of an emerging science/technology), also mentioned in the press release, can be found here on the University of Southampton website.

Finally, you can find more on these stories (science/technology announcements and/or ethics research/issues) here by searching for ‘robots’ (tag and category), ‘cyborgs’ (tag), ‘machine/flesh’ (tag), ‘neuroprosthetic’ (tag), and ‘human enhancement’ (category).

Implantable brain-computer interface collaborative community (iBCI-CC) launched

That’s quite a mouthful: ‘implantable brain-computer interface collaborative community (iBCI-CC)’. I assume the organization will be popularly known by its abbreviation. A March 11, 2024 Mass General Brigham news release (also on EurekAlert) announces the iBCI-CC’s launch, Note: Mass stands for Massachusetts,

Mass General Brigham is establishing the Implantable Brain-Computer Interface Collaborative Community (iBCI-CC). This is the first Collaborative Community in the clinical neurosciences that has participation from the U.S. Food and Drug Administration (FDA).

BCIs are devices that interface with the nervous system and use software to interpret neural activity. Commonly, they are designed for improved access to communication or other technologies for people with physical disability. Implantable BCIs are investigational devices that hold the promise of unlocking new frontiers in restorative neurotechnology, offering potential breakthroughs in neurorehabilitation and in restoring function for people living with neurologic disease or injury.

The iBCI-CC (https://www.ibci-cc.org/) is a groundbreaking initiative aimed at fostering collaboration among diverse stakeholders to accelerate the development, safety and accessibility of iBCI technologies. The iBCI-CC brings together researchers, clinicians, medical device manufacturers, patient advocacy groups and individuals with lived experience of neurological conditions. This collaborative effort aims to propel the field of iBCIs forward by employing harmonized approaches that drive continuous innovation and ensure equitable access to these transformative technologies.

One of the first milestones for the iBCI-CC was to engage the participation of the FDA. “Brain-computer interfaces have the potential to restore lost function for patients suffering from a variety of neurological conditions. However, there are clinical, regulatory, coverage and payment questions that remain, which may impede patient access to this novel technology,” said David McMullen, M.D., Director of the Office of Neurological and Physical Medicine Devices in the FDA’s Center for Devices and Radiological Health (CDRH), and FDA member of the iBCI-CC. “The IBCI-CC will serve as an open venue to identify, discuss and develop approaches for overcoming these hurdles.”

The iBCI-CC will hold regular meetings open both to its members and the public to ensure inclusivity and transparency. Mass General Brigham will serve as the convener of the iBCI-CC, providing administrative support and ensuring alignment with the community’s objectives.

Over the past year, the iBCI-CC was organized by the interdisciplinary collaboration of leaders including Leigh Hochberg, MD, PhD, an internationally respected leader in BCI development and clinical testing and director of the Center for Neurotechnology and Neurorecovery at Massachusetts General Hospital; Jennifer French, MBA, executive director of the Neurotech Network and a Paralympic silver medalist; and Joe Lennerz, MD, PhD, a regulatory science expert and director of the Pathology Innovation Collaborative Community. These three organizers lead a distinguished group of Charter Signatories representing a diverse range of expertise and organizations.

“As a neurointensive care physician, I know how many patients with neurologic disorders could benefit from these devices,” said Dr. Hochberg. “Increasing discoveries in academia and the launch of multiple iBCI and related neurotech companies means that the time is right to identify common goals and metrics so that iBCIs are not only safe and effective, but also have thoroughly considered the design and function preferences of the people who hope to use them”.

Jennifer French said, “Bringing diverse perspectives together, including those with lived experience, is a critical component to help address complex issues facing this field.” French has decades of experience working in the neurotech and patient advocacy fields. Living with a spinal cord injury, she also uses an implanted neurotech device for daily functions. “This ecosystem of neuroscience is on the cusp to collectively move the field forward by addressing access to the latest groundbreaking technology, in an equitable and ethical way. We can’t wait to engage and recruit the broader BCI community.”

Joe Lennerz, MD, PhD, emphasized, “Engaging in pre-competitive initiatives offers an often-overlooked avenue to drive meaningful progress. The collaboration of numerous thought leaders plays a pivotal role, with a crucial emphasis on regulatory engagement to unlock benefits for patients.”

The iBCI-CC is supported by key stakeholders within the Mass General Brigham system. Merit Cudkowicz, MD, MSc, chair of the Neurology Department, director of the Sean M. Healey and AMG Center for ALS at Massachusetts General Hospital, and Julianne Dorn Professor of Neurology at Harvard Medical School, said, “There is tremendous excitement in the ALS [amyotrophic lateral sclerosis, or Lou Gehrig’s disease] community for new devices that could ease and improve the ability of people with advanced ALS to communicate with their family, friends, and care partners. This important collaborative community will help to speed the development of a new class of neurologic devices to help our patients.”

Bailey McGuire, program manager of strategy and operations at Mass General Brigham’s Data Science Office, said, “We are thrilled to convene the iBCI-CC at Mass General Brigham’s DSO. By providing an administrative infrastructure, we want to help the iBCI-CC advance regulatory science and accelerate the availability of iBCI solutions that incorporate novel hardware and software that can benefit individuals with neurological conditions. We’re excited to help in this incredible space.”

For more information about the iBCI-CC, please visit https://www.ibci-cc.org/.

About Mass General Brigham

Mass General Brigham is an integrated academic health care system, uniting great minds to solve the hardest problems in medicine for our communities and the world. Mass General Brigham connects a full continuum of care across a system of academic medical centers, community and specialty hospitals, a health insurance plan, physician networks, community health centers, home care, and long-term care services. Mass General Brigham is a nonprofit organization committed to patient care, research, teaching, and service to the community. In addition, Mass General Brigham is one of the nation’s leading biomedical research organizations with several Harvard Medical School teaching hospitals. For more information, please visit massgeneralbrigham.org.

About the iBCI-CC Organizers:

Leigh Hochberg, MD, PhD is a neurointensivist at Massachusetts General Hospital’s Department of Neurology, where he directs the MGH Center for Neurotechnology and Neurorecovery. He is also the IDE Sponsor-Investigator and Director of the BrainGate clinical trials, conducted by a consortium of scientists and clinicians at Brown, Emory, MGH, VA Providence, Stanford, and UC-Davis; the L. Herbert Ballou University Professor of Engineering and Professor of Brain Science at Brown University; Senior Lecturer on Neurology at Harvard Medical School; and Associate Director, VA RR&D Center for Neurorestoration and Neurotechnology in Providence.

Jennifer French, MBA, is the Executive Director of Neurotech Network, a nonprofit organization that focuses on education and advocacy of neurotechnologies. She serves on several Boards including the IEEE Neuroethics Initiative, Institute of Neuroethics, OpenMind platform, BRAIN Initiative Multi-Council and Neuroethics Working Groups, and the American Brain Coalition. She is the author of On My Feet Again (Neurotech Press, 2013) and is co-author of Bionic Pioneers (Neurotech Press, 2014). French lives with tetraplegia due to a spinal cord injury. She is an early user of an experimental implanted neural prosthesis for paralysis and is the Past-President and Founding member of the North American SCI Consortium.

Joe Lennerz, MD PhD, serves as the Chief Scientific Officer at BostonGene, an AI analytics and genomics startup based in Boston. Dr. Lennerz obtained a PhD in neurosciences, specializing in electrophysiology. He works on biomarker development and migraine research. Additionally, he is the co-founder and leader of the Pathology Innovation Collaborative Community, a regulatory science initiative focusing on diagnostics and software as a medical device (SaMD), convened by the Medical Device Innovation Consortium. He also serves as the co-chair of the federal Clinical Laboratory Fee Schedule (CLFS) advisory panel to the Centers for Medicare & Medicaid Services (CMS).

It’s been a while since I’ve come across BrainGate (see Leigh Hochberg’s bio in the above news release), which was last mentioned here in an April 2, 2021 posting, “BrainGate demonstrates a high-bandwidth wireless brain-computer interface (BCI).”

Here are two of my more recent postings about brain-computer interfaces,

This next one is an older posting but perhaps the most relevant to the announcement of this collaborative community’s purpose,

There’s a lot more on brain-computer interfaces (BCI) here, just use the term in the blog search engine.

Neural (brain) implants and hype (long read)

There was a big splash a few weeks ago when it was announced that a brain implant from Neuralink (an Elon Musk company) had been surgically inserted into the company’s first human patient.

Getting approval

David Tuffley, senior lecturer in Applied Ethics & CyberSecurity at Griffith University (Australia), provides a good overview of the road Neuralink took to getting FDA (US Food and Drug Administration) approval for human clinical trials in his May 29, 2023 essay for The Conversation, Note: Links have been removed,

Since its founding in 2016, Elon Musk’s neurotechnology company Neuralink has had the ambitious mission to build a next-generation brain implant with at least 100 times more brain connections than devices currently approved by the US Food and Drug Administration (FDA).

The company has now reached a significant milestone, having received FDA approval to begin human trials. So what were the issues keeping the technology in the pre-clinical trial phase for as long as it was? And have these concerns been addressed?

Neuralink is making a Class III medical device known as a brain-computer interface (BCI). The device connects the brain to an external computer via a Bluetooth signal, enabling continuous communication back and forth.

The device itself is a coin-sized unit called a Link. It’s implanted within a small disk-shaped cutout in the skull using a precision surgical robot. The robot splices a thousand tiny threads from the Link to certain neurons in the brain. [emphasis mine] Each thread is about a quarter the diameter of a human hair.

The company says the device could enable precise control of prosthetic limbs, giving amputees natural motor skills. It could revolutionise treatment for conditions such as Parkinson’s disease, epilepsy and spinal cord injuries. It also shows some promise for potential treatment of obesity, autism, depression, schizophrenia and tinnitus.

Several other neurotechnology companies and researchers have already developed BCI technologies that have helped people with limited mobility regain movement and complete daily tasks.

In February 2021, Musk said Neuralink was working with the FDA to secure permission to start initial human trials later that year. But human trials didn’t commence in 2021.

Then, in March 2022, Neuralink made a further application to the FDA to establish its readiness to begin humans trials.

One year and three months later, on May 25 2023, Neuralink finally received FDA approval for its first human clinical trial. Given how hard Neuralink has pushed for permission to begin, we can assume it will begin very soon. [emphasis mine]

The approval has come less than six months after the US Office of the Inspector General launched an investigation into Neuralink over potential animal welfare violations. [emphasis mine]

In accessible language, Tuffley goes on to discuss the FDA’s specific technical issues with implants and how they were addressed in his May 29, 2023 essay.

More about how Neuralink’s implant works and some concerns

Canadian Broadcasting Corporation (CBC) journalist Andrew Chang offers an almost 13-minute video, “Neuralink brain chip’s first human patient. How does it work?” Chang is a little overenthused for my taste, but he offers some good information about neural implants, along with informative graphics, in his presentation.

So, Tuffley was right about Neuralink getting ready quickly for human clinical trials as you can guess from the title of Chang’s CBC video.

Jennifer Korn announced that recruitment had started in her September 20, 2023 article for CNN (Cable News Network), Note: Links have been removed,

Elon Musk’s controversial biotechnology startup Neuralink opened up recruitment for its first human clinical trial Tuesday, according to a company blog.

After receiving approval from an independent review board, Neuralink is set to begin offering brain implants to paralysis patients as part of the PRIME Study, the company said. PRIME, short for Precise Robotically Implanted Brain-Computer Interface, is being carried out to evaluate both the safety and functionality of the implant.

Trial patients will have a chip surgically placed in the part of the brain that controls the intention to move. The chip, installed by a robot, will then record and send brain signals to an app, with the initial goal being “to grant people the ability to control a computer cursor or keyboard using their thoughts alone,” the company wrote.

Those with quadriplegia [sometimes known as tetraplegia] due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS) may qualify for the six-year-long study – 18 months of at-home and clinic visits followed by follow-up visits over five years. Interested people can sign up in the patient registry on Neuralink’s website.

Musk has been working on Neuralink’s goal of using implants to connect the human brain to a computer for five years, but the company so far has only tested on animals. The company also faced scrutiny after a monkey died in project testing in 2022 as part of efforts to get the animal to play Pong, one of the first video games.

I mentioned three Reuters investigative journalists who were reporting on Neuralink’s animal abuse allegations (emphasized in Tuffley’s essay) in a July 7, 2023 posting, “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” Later that year, Neuralink was cleared by the US Department of Agriculture (see the September 24, 2023 article by Mahnoor Jehangir for BNN Breaking).

Plus, Neuralink was being investigated over more allegations, this time regarding hazardous pathogens, according to a February 9, 2023 article by Rachel Levy for Reuters,

The U.S. Department of Transportation said on Thursday it is investigating Elon Musk’s brain-implant company Neuralink over the potentially illegal movement of hazardous pathogens.

A Department of Transportation spokesperson told Reuters about the probe after the Physicians Committee for Responsible Medicine (PCRM), an animal-welfare advocacy group, wrote to Secretary of Transportation Pete Buttigieg earlier on Thursday to alert it of records it obtained on the matter.

PCRM said it obtained emails and other documents that suggest unsafe packaging and movement of implants removed from the brains of monkeys. These implants may have carried infectious diseases in violation of federal law, PCRM said.

There’s an update about the hazardous materials in the next section. Spoiler alert: the company got fined.

Neuralink’s first human implant

A January 30, 2024 article (Associated Press with files from Reuters) on the Canadian Broadcasting Corporation’s (CBC) online news webspace heralded the latest about Neuralink’s human clinical trials,

The first human patient received an implant from Elon Musk’s computer-brain interface company Neuralink over the weekend, the billionaire says.

In a post Monday [January 29, 2024] on X, the platform formerly known as Twitter, Musk said that the patient received the implant the day prior and was “recovering well.” He added that “initial results show promising neuron spike detection.”

Spikes are activity by neurons, which the National Institutes of Health describe as cells that use electrical and chemical signals to send information around the brain and to the body.

The billionaire, who owns X and co-founded Neuralink, did not provide additional details about the patient.

When Neuralink announced in September [2023] that it would begin recruiting people, the company said it was searching for individuals with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis, commonly known as ALS or Lou Gehrig’s disease.

Neuralink reposted Musk’s Monday [January 29, 2024] post on X, but did not publish any additional statements acknowledging the human implant. The company did not immediately respond to requests for comment from The Associated Press or Reuters on Tuesday [January 30, 2024].

In a separate Monday [January 29, 2024] post on X, Musk said that the first Neuralink product is called “Telepathy” — which, he said, will enable users to control their phones or computers “just by thinking.” He said initial users would be those who have lost use of their limbs.

The startup’s PRIME Study is a trial for its wireless brain-computer interface to evaluate the safety of the implant and surgical robot.
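For readers wondering what “neuron spike detection” means in practice: a spike is the brief voltage deflection an individual neuron produces, and a brain-computer interface counts those deflections and translates the counts into commands such as cursor movement. Purely as an illustration of the idea (this is not Neuralink’s pipeline; every signal, threshold and weight below is invented), here is a minimal Python sketch of threshold-based spike counting feeding a toy cursor decoder:

```python
# Toy illustration of a BCI decode loop: detect "spikes" in simulated voltage
# traces by threshold crossing, then map per-channel spike counts to a 2D
# cursor velocity. All signals and parameters are made up for demonstration.

import numpy as np

rng = np.random.default_rng(0)

def detect_spikes(trace, threshold):
    """Return sample indices where the trace crosses the threshold upward."""
    above = trace > threshold
    # Count a spike wherever the signal goes from below to above the threshold.
    return np.flatnonzero(~above[:-1] & above[1:]) + 1

# Simulate four electrode channels of noisy data with sparse spike-like bumps.
n_samples, n_channels = 1000, 4
noise = rng.normal(0.0, 1.0, size=(n_samples, n_channels))
spike_events = (rng.random((n_samples, n_channels)) < 0.01) * 8.0
traces = noise + spike_events

# Count threshold crossings per channel.
counts = np.array([detect_spikes(traces[:, ch], threshold=4.0).size
                   for ch in range(n_channels)])

# Hypothetical decoder: a fixed weight matrix maps spike counts to (dx, dy).
# A real BCI would learn these weights from calibration sessions with the user.
weights = np.array([[1.0, -1.0, 0.0, 0.0],   # channel contributions to x velocity
                    [0.0, 0.0, 1.0, -1.0]])  # channel contributions to y velocity
dx, dy = weights @ counts

print("spike counts per channel:", counts)
print("decoded cursor velocity:", dx, dy)
```

Real systems add spike sorting, adaptive thresholds and decoders trained on calibration data, but the basic loop (detect spikes, count them, map counts to movement) has the same shape.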

Now for the hazardous materials, from the same January 30, 2024 article, Note: A link has been removed,

Earlier this month [January 2024], a Reuters investigation found that Neuralink was fined for violating U.S. Department of Transportation (DOT) rules regarding the movement of hazardous materials. During inspections of the company’s facilities in Texas and California in February 2023, DOT investigators found the company had failed to register itself as a transporter of hazardous material.

They also found improper packaging of hazardous waste, including the flammable liquid Xylene. Xylene can cause headaches, dizziness, confusion, loss of muscle co-ordination and even death, according to the U.S. Centers for Disease Control and Prevention.

The records do not say why Neuralink would need to transport hazardous materials or whether any harm resulted from the violations.

Skeptical thoughts about Elon Musk and Neuralink

Earlier this month (February 2024), the British Broadcasting Corporation (BBC) published an article by health reporters Jim Reed and Joe McFadden that highlights the history of brain implants and the possibilities, and notes some of Elon Musk’s more outrageous claims for Neuralink’s brain implants,

Elon Musk is no stranger to bold claims – from his plans to colonise Mars to his dreams of building transport links underneath our biggest cities. This week the world’s richest man said his Neuralink division had successfully implanted its first wireless brain chip into a human.

Is he right when he says this technology could – in the long term – save the human race itself?

Sticking electrodes into brain tissue is really nothing new.

In the 1960s and 70s electrical stimulation was used to trigger or suppress aggressive behaviour in cats. By the early 2000s monkeys were being trained to move a cursor around a computer screen using just their thoughts.

“It’s nothing novel, but implantable technology takes a long time to mature, and reach a stage where companies have all the pieces of the puzzle, and can really start to put them together,” says Anne Vanhoestenberghe, professor of active implantable medical devices, at King’s College London.

Neuralink is one of a growing number of companies and university departments attempting to refine and ultimately commercialise this technology. The focus, at least to start with, is on paralysis and the treatment of complex neurological conditions.

Reed and McFadden’s February 2024 BBC article describes a few of the other brain implant efforts, Note: Links have been removed,

One of its [Neuralink’s] main rivals, a start-up called Synchron backed by funding from investment firms controlled by Bill Gates and Jeff Bezos, has already implanted its stent-like device into 10 patients.

Back in December 2021, Philip O’Keefe, a 62-year old Australian who lives with a form of motor neurone disease, composed the first tweet using just his thoughts to control a cursor.

And researchers at Lausanne University in Switzerland have shown it is possible for a paralysed man to walk again by implanting multiple devices to bypass damage caused by a cycling accident.

In a research paper published this year, they demonstrated a signal could be beamed down from a device in his brain to a second device implanted at the base of his spine, which could then trigger his limbs to move.

Some people living with spinal injuries are sceptical about the sudden interest in this new kind of technology.

“These breakthroughs get announced time and time again and don’t seem to be getting any further along,” says Glyn Hayes, who was paralysed in a motorbike accident in 2017, and now runs public affairs for the Spinal Injuries Association.

“If I could have anything back, it wouldn’t be the ability to walk. It would be putting more money into a way of removing nerve pain, for example, or ways to improve bowel, bladder and sexual function.” [emphasis mine]

Musk, however, is focused on something far more grand for Neuralink implants, from Reed and McFadden’s February 2024 BBC article, Note: A link has been removed,

But for Elon Musk, “solving” brain and spinal injuries is just the first step for Neuralink.

The longer-term goal is “human/AI symbiosis” [emphasis mine], something he describes as “species-level important”.

Musk himself has already talked about a future where his device could allow people to communicate with a phone or computer “faster than a speed typist or auctioneer”.

In the past, he has even said saving and replaying memories may be possible, although he recognised “this is sounding increasingly like a Black Mirror episode.”

One of the experts quoted in Reed and McFadden’s February 2024 BBC article asks a pointed question,

… “At the moment, I’m struggling to see an application that a consumer would benefit from, where they would take the risk of invasive surgery,” says Prof Vanhoestenberghe.

“You’ve got to ask yourself, would you risk brain surgery just to be able to order a pizza on your phone?”

Rae Hodge’s February 11, 2024 article for Salon about Elon Musk and his hyped-up Neuralink implant is worth reading in its entirety but, for those who don’t have the time or need a little persuading, here are a few excerpts, Note 1: Be warned, Hodge provides more detail about the animal cruelty allegations; Note 2: Links have been removed,

Elon Musk’s controversial brain-computer interface (BCI) tech, Neuralink, has supposedly been implanted in its first recipient — and as much as I want to see progress for treatment of paralysis and neurodegenerative disease, I’m not celebrating. I bet the neuroscientists he reportedly drove out of the company aren’t either, especially not after seeing the gruesome torture of test monkeys and apparent cover-up that paved the way for this moment. 

All of which is an ethics horror show on its own. But the timing of Musk’s overhyped implant announcement gives it an additional insulting subtext. Football players are currently in a battle for their lives against concussion-based brain diseases that plague autopsy reports of former NFL players. And Musk’s boast of false hope came just two weeks before living players take the field in the biggest and most brutal game of the year. [2024 Super Bowl LVIII]

ESPN’s Kevin Seifert reports neuro-damage is up this year as “players suffered a total of 52 concussions from the start of training camp to the beginning of the regular season. The combined total of 213 preseason and regular season concussions was 14% higher than 2021 but within range of the three-year average from 2018 to 2020 (203).”

I’m a big fan of body-tech: pacemakers, 3D-printed hips and prosthetic limbs that allow you to wear your wedding ring again after 17 years. Same for brain chips. But BCI is the slow-moving front of body-tech development for good reason. The brain is too understudied. Consequences of the wrong move are dire. Overpromising marketable results on profit-driven timelines — on the backs of such a small community of researchers in a relatively new field — would be either idiotic or fiendish. 

Brown University’s research in the sector goes back to the 1990s. Since the emergence of a floodgate-opening 2002 study and the first implant in 2004 by med-tech company BrainGate, more promising results have inspired broader investment into careful research. But BrainGate’s clinical trials started back in 2009, and as noted by Business Insider’s Hilary Brueck, are expected to continue until 2038 — with only 15 participants who have devices installed. 

Anne Vanhoestenberghe is a professor of active implantable medical devices at King’s College London. In a recent release, she cautioned against the kind of hype peddled by Musk.

“Whilst there are a few other companies already using their devices in humans and the neuroscience community have made remarkable achievements with those devices, the potential benefits are still significantly limited by technology,” she said. “Developing and validating core technology for long term use in humans takes time and we need more investments to ensure we do the work that will underpin the next generation of BCIs.” 

Neuralink is a metal coin in your head that connects to something as flimsy as an app. And we’ve seen how Elon treats those. We’ve also seen corporate goons steal a veteran’s prosthetic legs — and companies turn brain surgeons and dentists into repo-men by having them yank anti-epilepsy chips out of people’s skulls, and dentures out of their mouths. 

“I think we have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury,” Musk said at a 2023 tech summit, adding that the chip could possibly “make up for whatever lost capacity somebody has.”

Maybe BCI can. But only in the careful hands of scientists who don’t have Musk squawking “go faster!” over their shoulders. His greedy frustration with the speed of BCI science is telling, as is the animal cruelty it reportedly prompted.

There have been other examples of Musk’s grandiosity. Notably, David Lee expressed skepticism about the hyperloop in his August 13, 2013 article for BBC news online,

Is Elon Musk’s Hyperloop just a pipe dream?

Much like the pun in the headline, the bright idea of transporting people using some kind of vacuum-like tube is neither new nor imaginative.

There was Robert Goddard, considered the “father of modern rocket propulsion”, who claimed in 1909 that his vacuum system could suck passengers from Boston to New York at 1,200mph.

And then there were Soviet plans for an amphibious monorail – mooted in 1934 – in which two long pods would start their journey attached to a metal track before flying off the end and slipping into the water like a two-fingered Kit Kat dropped into some tea.

So ever since inventor and entrepreneur Elon Musk hit the world’s media with his plans for the Hyperloop, a healthy dose of scepticism has been in the air.

“This is by no means a new idea,” says Rod Muttram, formerly of Bombardier Transportation and Railtrack.

“It has been previously suggested as a possible transatlantic transport system. The only novel feature I see is the proposal to put the tubes above existing roads.”

Here’s the latest I’ve found on hyperloop, from the Hyperloop Wikipedia entry,

As of 2024, some companies continued to pursue technology development under the hyperloop moniker, however, one of the biggest, well funded players, Hyperloop One, declared bankruptcy and ceased operations in 2023.[15]

Musk is impatient and impulsive as noted in a September 12, 2023 posting by Mike Masnick on Techdirt, Note: A link has been removed,

The Batshit Crazy Story Of The Day Elon Musk Decided To Personally Rip Servers Out Of A Sacramento Data Center

Back on Christmas Eve [December 24, 2022] of last year there were some reports that Elon Musk was in the process of shutting down Twitter’s Sacramento data center. In that article, a number of ex-Twitter employees were quoted about how much work it would be to do that cleanly, noting that there’s a ton of stuff hardcoded in Twitter code referring to that data center (hold that thought).

That same day, Elon tweeted out that he had “disconnected one of the more sensitive server racks.”

Masnick follows with a story of reckless behaviour from someone who should have known better.

Ethics of implants—where to look for more information

While Musk doesn’t use the term when he describes a “human/AI symbiosis” (presumably by way of a neural implant), he’s talking about a cyborg. Here’s a 2018 paper, which looks at some of the implications,

Do you want to be a cyborg? The moderating effect of ethics on neural implant acceptance by Eva Reinares-Lara, Cristina Olarte-Pascual, and Jorge Pelegrín-Borondo. Computers in Human Behavior Volume 85, August 2018, Pages 43-53 DOI: https://doi.org/10.1016/j.chb.2018.03.032

This paper is open access.

Getting back to Neuralink, I have two blog posts that discuss the company and the ethics of brain implants from way back in 2021.

First, there’s Jazzy Benes’ March 1, 2021 posting on Santa Clara University’s Markkula Center for Applied Ethics blog. It stands out as it includes a discussion of the disabled community’s issues, Note: Links have been removed,

In the heart of Silicon Valley we are constantly enticed by the newest technological advances. With the big influencers Grimes [a Canadian musician and the mother of three children with Elon Musk] and Lil Uzi Vert publicly announcing their willingness to become experimental subjects for Elon Musk’s Neuralink brain implantation device, we are left wondering if future technology will actually give us “the knowledge of the Gods.” Is it part of the natural order for humans to become omniscient beings? Who will have access to the devices? What other ethical considerations must be discussed before releasing such technology to the public?

A significant issue that arises from developing technologies for the disabled community is the assumption that disabled persons desire the abilities of what some abled individuals may define as “normal.” Individuals with disabilities may object to technologies intended to make them fit an able-bodied norm. “Normal” is relative to each individual, and it could be potentially harmful to use a deficit view of disability, which means judging a disability as a deficiency. However, this is not to say that all disabled individuals will reject a technology that may enhance their abilities. Instead, I believe it is a consideration that must be recognized when developing technologies for the disabled community, and it can only be addressed through communication with disabled persons. As a result, I believe this is a conversation that must be had with the community for whom the technology is developed–disabled persons.

With technologies that aim to address disabilities, we walk a fine line between therapeutics and enhancement. Though not the first neural implant medical device, the Link may have been the first BCI system openly discussed for its potential transhumanism uses, such as “enhanced cognitive abilities, memory storage and retrieval, gaming, telepathy, and even symbiosis with machines.” …

Benes also discusses transhumanism, privacy issues, and consent issues. It’s a thoughtful reading experience.

Second is a July 9, 2021 posting by anonymous on the University of California at Berkeley School of Information blog, which provides more insight into privacy and other issues associated with data collection (and introduced me to the concept of decisional interference),

As the development of microchips furthers and advances in neuroscience occur, the possibility for seamless brain-machine interfaces, where a device decodes inputs from the user’s brain to perform functions, becomes more of a reality. These various forms of these technologies already exist. However, technological advances have made implantable and portable devices possible. Imagine a future where humans don’t need to talk to each other, but rather can transmit their thoughts directly to another person. This idea is the eventual goal of Elon Musk, the founder of Neuralink. Currently, Neuralink is one of the main companies involved in the advancement of this type of technology. Analysis of the Neuralink’s technology and their overall mission statement provide an interesting insight into the future of this type of human-computer interface and the potential privacy and ethical concerns with this technology.

As this technology further develops, several privacy and ethical concerns come into question. To begin, using Solove’s Taxonomy as a privacy framework, many areas of potential harm are revealed. In the realm of information collection, there is much risk. Brain-computer interfaces, depending on where they are implanted, could have access to people’s most private thoughts and emotions. This information would need to be transmitted to another device for processing. The collection of this information by companies such as advertisers would represent a major breach of privacy. Additionally, there is risk to the user from information processing. These devices must work concurrently with other devices and often wirelessly. Given the widespread importance of cloud computing in much of today’s technology, offloading information from these devices to the cloud would be likely. Having the data stored in a database puts the user at the risk of secondary use if proper privacy policies are not implemented. The trove of information stored within the information collected from the brain is vast. These datasets could be combined with existing databases such as browsing history on Google to provide third parties with unimaginable context on individuals. Lastly, there is risk for information dissemination, more specifically, exposure. The information collected and processed by these devices would need to be stored digitally. Keeping such private information, even if anonymized, would be a huge potential for harm, as the contents of the information may in itself be re-identifiable to a specific individual. Lastly there is risk for invasions such as decisional interference. Brain-machine interfaces would not only be able to read information in the brain but also write information. This would allow the device to make potential emotional changes in its users, which be a major example of decisional interference. …

For the most recent Neuralink and brain implant ethics piece, there’s this February 14, 2024 essay on The Conversation, which, unusually for this publication, was solicited by the editors, Note: Links have been removed,

In January 2024, Musk announced that Neuralink implanted its first chip in a human subject’s brain. The Conversation reached out to two scholars at the University of Washington School of Medicine – Nancy Jecker, a bioethicist, and Andrew Ko, a neurosurgeon who implants brain chip devices – for their thoughts on the ethics of this new horizon in neuroscience.

Information about the implant, however, is scarce, aside from a brochure aimed at recruiting trial subjects. Neuralink did not register at ClinicalTrials.gov, as is customary, and required by some academic journals. [all emphases mine]

Some scientists are troubled by this lack of transparency. Sharing information about clinical trials is important because it helps other investigators learn about areas related to their research and can improve patient care. Academic journals can also be biased toward positive results, preventing researchers from learning from unsuccessful experiments.

Fellows at the Hastings Center, a bioethics think tank, have warned that Musk’s brand of “science by press release, while increasingly common, is not science. [emphases mine]” They advise against relying on someone with a huge financial stake in a research outcome to function as the sole source of information.

When scientific research is funded by government agencies or philanthropic groups, its aim is to promote the public good. Neuralink, on the other hand, embodies a private equity model [emphasis mine], which is becoming more common in science. Firms pooling funds from private investors to back science breakthroughs may strive to do good, but they also strive to maximize profits, which can conflict with patients’ best interests.

In 2022, the U.S. Department of Agriculture investigated animal cruelty at Neuralink, according to a Reuters report, after employees accused the company of rushing tests and botching procedures on test animals in a race for results. The agency’s inspection found no breaches, according to a letter from the USDA secretary to lawmakers, which Reuters reviewed. However, the secretary did note an “adverse surgical event” in 2019 that Neuralink had self-reported.

In a separate incident also reported by Reuters, the Department of Transportation fined Neuralink for violating rules about transporting hazardous materials, including a flammable liquid.

…the possibility that the device could be increasingly shown to be helpful for people with disabilities, but become unavailable due to loss of research funding. For patients whose access to a device is tied to a research study, the prospect of losing access after the study ends can be devastating. [emphasis mine] This raises thorny questions about whether it is ever ethical to provide early access to breakthrough medical interventions prior to their receiving full FDA approval.

Not registering a clinical trial would seem to suggest there won’t be much oversight. As for Musk’s “science by press release” activities, I hope those will be treated with more skepticism by mainstream media, although that seems unlikely given the current situation with journalism (more about that in a future post).

As for the issues associated with private equity models for science research and the problem of losing access to devices after a clinical trial is ended, my April 5, 2022 posting, “Going blind when your neural implant company flirts with bankruptcy (long read)” offers some cautionary tales, in addition to being the most comprehensive piece I’ve published on ethics and brain implants.

My July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” offers a brief overview of the international scene.

First round of seed funding announced for NSF (US National Science Foundation) Institute for Trustworthy AI in Law & Society (TRAILS)

Having published a post yesterday (February 21, 2024) about an earlier, January 2024 US National Science Foundation (NSF) funding announcement for the TRAILS (Trustworthy AI in Law & Society) Institute, I’m following up with an announcement about the initiative’s first round of seed funding.

From a TRAILS undated ‘story‘ by Tom Ventsias on the initiative’s website (and published January 24, 2024 as a University of Maryland news release on EurekAlert),

The Institute for Trustworthy AI in Law & Society (TRAILS) has unveiled an inaugural round of seed grants designed to integrate a greater diversity of stakeholders into the artificial intelligence (AI) development and governance lifecycle, ultimately creating positive feedback loops to improve trustworthiness, accessibility and efficacy in AI-infused systems.

The eight grants announced on January 24, 2024—ranging from $100K to $150K apiece and totaling just over $1.5 million—were awarded to interdisciplinary teams of faculty associated with the institute. Funded projects include developing AI chatbots to assist with smoking cessation, designing animal-like robots that can improve autism-specific support at home, and exploring how people use and rely upon AI-generated language translation systems.

All eight projects fall under the broader mission of TRAILS, which is to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“At the speed with which AI is developing, our seed grant program will enable us to keep pace—or even stay one step ahead—by incentivizing cutting-edge research and scholarship that spans AI design, development and governance,” said Hal Daumé III, a professor of computer science at the University of Maryland who is the director of TRAILS.

After TRAILS was launched in May 2023 with a $20 million award from the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST), lead faculty met to brainstorm how the institute could best move forward with research, innovation and outreach that would have a meaningful impact.

They determined a seed grant program could quickly leverage the wide range of academic talent at TRAILS’ four primary institutions. This includes the University of Maryland’s expertise in computing and human-computer interaction; George Washington University’s strengths in systems engineering and AI as it relates to law and governance; Morgan State University’s work in addressing bias and inequity in AI; and Cornell University’s research in human behavior and decision-making.

“NIST and NSF’s support of TRAILS enables us to create a structured mechanism to reach across academic and institutional boundaries in search of innovative solutions,” said David Broniatowski, an associate professor of engineering management and systems engineering at George Washington University who leads TRAILS activities on the GW campus. “Seed funding from TRAILS will enable multidisciplinary teams to identify opportunities for their research to have impact, and to build the case for even larger, multi-institutional efforts.”

Further discussions were held at a TRAILS faculty retreat to identify seed grant guidelines and collaborative themes that mirror TRAILS’ primary research thrusts—participatory design, methods and metrics, evaluating trust, and participatory governance.

“Some of the funded projects are taking a fresh look at ideas we may have already been working on individually, and others are taking an entirely new approach to timely, pressing issues involving AI and machine learning,” said Virginia Byrne, an assistant professor of higher education & student affairs at Morgan State who is leading TRAILS activities on that campus and who served on the seed grant review committee.

A second round of seed funding will be announced later this year, said Darren Cambridge, who was recently hired as managing director of TRAILS to lead its day-to-day operations.

Projects selected in the first round are eligible for a renewal, while other TRAILS faculty—or any faculty member at the four primary TRAILS institutions—can submit new proposals for consideration, Cambridge said.

Ultimately, the seed funding program is expected to strengthen and incentivize other TRAILS activities that are now taking shape, including K–12 education and outreach programs, AI policy seminars and workshops on Capitol Hill, and multiple postdoc opportunities for early-career researchers.

“We want TRAILS to be the ‘go-to’ resource for educators, policymakers and others who are seeking answers and solutions on how to build, manage and use AI systems that will benefit all of society,” Cambridge said.

The eight projects selected for the first round of TRAILS seed-funding are:

Chung Hyuk Park and Zoe Szajnfarber from GW and Hernisa Kacorri from UMD aim to improve the support infrastructure and access to quality care for families of autistic children. Early interventions are strongly correlated with positive outcomes, while provider shortages and financial burdens have raised challenges—particularly for families without sufficient resources and experience. The researchers will develop novel parent-robot teaming for the home, advance the assistive technology, and assess the impact of teaming to promote more trust in human-robot collaborative settings.

Soheil Feizi from UMD and Robert Brauneis from GW will investigate various issues surrounding text-to-image [emphasis mine] generative AI models like Stable Diffusion, DALL-E 2, and Midjourney, focusing on myriad legal, aesthetic and computational aspects that are currently unresolved. A key question is how copyright law might adapt if these tools create works in an artist’s style. The team will explore how generative AI models represent individual artists’ styles, and whether those representations are complex and distinctive enough to form stable objects of protection. The researchers will also explore legal and technical questions to determine if specific artworks, especially rare and unique ones, have already been used to train AI models.

Huaishu Peng and Ge Gao from UMD will work with Malte Jung from Cornell to increase trust-building in embodied AI systems, which bridge the gap between computers and human physical senses. Specifically, the researchers will explore embodied AI systems in the form of miniaturized on-body or desktop robotic systems that can enable the exchange of nonverbal cues between blind and sighted individuals, an essential component of efficient collaboration. The researchers will also examine multiple factors—both physical and mental—in order to gain a deeper understanding of both groups’ values related to teamwork facilitated by embodied AI.

Marine Carpuat and Ge Gao from UMD will explore “mental models”—how humans perceive things—for language translation systems used by millions of people daily. They will focus on how individuals, depending on their language fluency and familiarity with the technology, make sense of their “error boundary”—that is, deciding whether an AI-generated translation is correct or incorrect. The team will also develop innovative techniques to teach users how to improve their mental models as they interact with machine translation systems.

Hal Daumé III, Furong Huang and Zubin Jelveh from UMD and Donald Braman from GW will propose new philosophies grounded in law to conceptualize, evaluate and achieve “effort-aware fairness,” which involves algorithms for determining whether an individual or a group of individuals is discriminated against in terms of equality of effort. The researchers will develop new metrics, evaluate fairness of datasets, and design novel algorithms that enable AI auditors to uncover and potentially correct unfair decisions.

Lorien Abroms and David Broniatowski from GW will recruit smokers to study the reliability of using generative chatbots, such as ChatGPT, as the basis for a digital smoking cessation program. Additional work will examine the acceptability by smokers and their perceptions of trust in using this rapidly evolving technology for help to quit smoking. The researchers hope their study will directly inform future digital interventions for smoking cessation and/or modifying other health behaviors.

Adam Aviv from GW and Michelle Mazurek from UMD will examine bias, unfairness and untruths such as sexism, racism and other forms of misrepresentation that come out of certain AI and machine learning systems. Though some systems have public warnings of potential biases, the researchers want to explore how users understand these warnings, if they recognize how biases may manifest themselves in the AI-generated responses, and how users attempt to expose, mitigate and manage potentially biased responses.

Susan Ariel Aaronson and David Broniatowski from GW plan to create a prototype of a searchable, easy-to-use website to enable policymakers to better utilize academic research related to trustworthy and participatory AI. The team will analyze research publications by TRAILS-affiliated researchers to ascertain which ones may have policy implications. Then, each relevant publication will be summarized and categorized by research questions, issues, keywords, and relevant policymaking uses. The resulting database prototype will enable the researchers to test the utility of this resource for policymakers over time.

Yes, things are moving quickly where AI is concerned. There’s text-to-image being investigated by Soheil Feizi and Robert Brauneis and, since the funding announcement in January 2024, text-to-video has been announced (OpenAI’s Sora was previewed February 15, 2024). I wonder if that will be added to the project.

One more comment: the project by Huaishu Peng, Ge Gao, and Malte Jung on “… trust-building in embodied AI systems …” brings to mind Elon Musk’s stated goal of using brain implants for “human/AI symbiosis.” (I have more about that in an upcoming post.) Hopefully, the website for policymakers proposed by Susan Ariel Aaronson and David Broniatowski will be able to keep up with what’s happening in the field of AI, including research on the impact of private investments primarily designed for generating profits.

Prioritizing ethical & social considerations in emerging technologies—$16M in US National Science Foundation funding

I haven’t seen this much interest in the ethics and social impacts of emerging technologies in years. It seems that the latest AI (artificial intelligence) panic has stimulated interest not only in regulation but in ethics too.

The latest information I have on this topic comes from a January 9, 2024 US National Science Foundation (NSF) news release (also received via email),

NSF and philanthropic partners announce $16 million in funding to prioritize ethical and social considerations in emerging technologies

ReDDDoT is a collaboration with five philanthropic partners and crosses all disciplines of science and engineering

The U.S. National Science Foundation today launched a new $16 million program in collaboration with five philanthropic partners that seeks to ensure ethical, legal, community and societal considerations are embedded in the lifecycle of technology’s creation and use. The Responsible Design, Development and Deployment of Technologies (ReDDDoT) program aims to help create technologies that promote the public’s wellbeing and mitigate potential harms.

“The design, development and deployment of technologies have broad impacts on society,” said NSF Director Sethuraman Panchanathan. “As discoveries and innovations are translated to practice, it is essential that we engage and enable diverse communities to participate in this work. NSF and its philanthropic partners share a strong commitment to creating a comprehensive approach for co-design through soliciting community input, incorporating community values and engaging a broad array of academic and professional voices across the lifecycle of technology creation and use.”

The ReDDDoT program invites proposals from multidisciplinary, multi-sector teams that examine and demonstrate the principles, methodologies and impacts associated with responsible design, development and deployment of technologies, especially those specified in the “CHIPS and Science Act of 2022.” In addition to NSF, the program is funded and supported by the Ford Foundation, the Patrick J. McGovern Foundation, Pivotal Ventures, Siegel Family Endowment and the Eric and Wendy Schmidt Fund for Strategic Innovation.

“In recognition of the role responsible technologists can play to advance human progress, and the danger unaccountable technology poses to social justice, the ReDDDoT program serves as both a collaboration and a covenant between philanthropy and government to center public interest technology into the future of progress,” said Darren Walker, president of the Ford Foundation. “This $16 million initiative will cultivate expertise from public interest technologists across sectors who are rooted in community and grounded by the belief that innovation, equity and ethics must equally be the catalysts for technological progress.”

The broad goals of ReDDDoT include:  

* Stimulating activity and filling gaps in research, innovation and capacity building in the responsible design, development, and deployment of technologies.
* Creating broad and inclusive communities of interest that bring together key stakeholders to better inform practices for the design, development, and deployment of technologies.
* Educating and training the science, technology, engineering, and mathematics workforce on approaches to responsible design, development, and deployment of technologies.
* Accelerating pathways to societal and economic benefits while developing strategies to avoid or mitigate societal and economic harms.
* Empowering communities, including economically disadvantaged and marginalized populations, to participate in all stages of technology development, including the earliest stages of ideation and design.

Phase 1 of the program solicits proposals for Workshops, Planning Grants, or the creation of Translational Research Coordination Networks, while Phase 2 solicits full project proposals. The initial areas of focus for 2024 include artificial intelligence, biotechnology or natural and anthropogenic disaster prevention or mitigation. Future iterations of the program may consider other key technology focus areas enumerated in the CHIPS and Science Act.

For more information about ReDDDoT, visit the program website or register for an informational webinar on Feb. 9, 2024, at 2 p.m. ET.

Statements from NSF’s Partners

“The core belief at the heart of ReDDDoT – that technology should be shaped by ethical, legal, and societal considerations as well as community values – also drives the work of the Patrick J. McGovern Foundation to build a human-centered digital future for all. We’re pleased to support this partnership, committed to advancing the development of AI, biotechnology, and climate technologies that advance equity, sustainability, and justice.” – Vilas Dhar, President, Patrick J. McGovern Foundation

“From generative AI to quantum computing, the pace of technology development is only accelerating. Too often, technological advances are not accompanied by discussion and design that considers negative impacts or unrealized potential. We’re excited to support ReDDDoT as an opportunity to uplift new and often forgotten perspectives that critically examine technology’s impact on civic life, and advance Siegel Family Endowment’s vision of technological change that includes and improves the lives of all people.” – Katy Knight, President and Executive Director of Siegel Family Endowment

Only eight months earlier, another big NSF funding project was announced, this one focused on AI and promoting trust, from a May 4, 2023 University of Maryland (UMD) news release (also on EurekAlert), Note: A link has been removed,

The University of Maryland has been chosen to lead a multi-institutional effort supported by the National Science Foundation (NSF) that will develop new artificial intelligence (AI) technologies designed to promote trust and mitigate risks, while simultaneously empowering and educating the public.

The NSF Institute for Trustworthy AI in Law & Society (TRAILS) announced on May 4, 2023, unites specialists in AI and machine learning with social scientists, legal scholars, educators and public policy experts. The multidisciplinary team will work with impacted communities, private industry and the federal government to determine what trust in AI looks like, how to develop technical solutions for AI that can be trusted, and which policy models best create and sustain trust.

Funded by a $20 million award from NSF, the new institute is expected to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“As artificial intelligence continues to grow exponentially, we must embrace its potential for helping to solve the grand challenges of our time, as well as ensure that it is used both ethically and responsibly,” said UMD President Darryll J. Pines. “With strong federal support, this new institute will lead in defining the science and innovation needed to harness the power of AI for the benefit of the public good and all humankind.”

In addition to UMD, TRAILS will include faculty members from George Washington University (GW) and Morgan State University, with more support coming from Cornell University, the National Institute of Standards and Technology (NIST), and private sector organizations like the DataedX Group, Arthur AI, Checkstep, FinRegLab and Techstars.

At the heart of establishing the new institute is the consensus that AI is currently at a crossroads. AI-infused systems have great potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems, but today’s systems are developed and deployed in a process that is opaque and insular to the public, and therefore, often untrustworthy to those affected by the technology.

“We’ve structured our research goals to educate, learn from, recruit, retain and support communities whose voices are often not recognized in mainstream AI development,” said Hal Daumé III, a UMD professor of computer science who is lead principal investigator of the NSF award and will serve as the director of TRAILS.

Inappropriate trust in AI can result in many negative outcomes, Daumé said. People often “overtrust” AI systems to do things they’re fundamentally incapable of. This can lead to people or organizations giving up their own power to systems that are not acting in their best interest. At the same time, people can also “undertrust” AI systems, leading them to avoid using systems that could ultimately help them.

Given these conditions—and the fact that AI is increasingly being deployed to mediate society’s online communications, determine health care options, and offer guidelines in the criminal justice system—it has become urgent to ensure that people’s trust in AI systems matches those same systems’ level of trustworthiness.

TRAILS has identified four key research thrusts to promote the development of AI systems that can earn the public’s trust through broader participation in the AI ecosystem.

The first, known as participatory AI, advocates involving human stakeholders in the development, deployment and use of these systems. It aims to create technology in a way that aligns with the values and interests of diverse groups of people, rather than being controlled by a few experts or solely driven by profit.

Leading the efforts in participatory AI is Katie Shilton, an associate professor in UMD’s College of Information Studies who specializes in ethics and sociotechnical systems. Tom Goldstein, a UMD associate professor of computer science, will lead the institute’s second research thrust, developing advanced machine learning algorithms that reflect the values and interests of the relevant stakeholders.

Daumé, Shilton and Goldstein all have appointments in the University of Maryland Institute for Advanced Computer Studies, which is providing administrative and technical support for TRAILS.

David Broniatowski, an associate professor of engineering management and systems engineering at GW, will lead the institute’s third research thrust of evaluating how people make sense of the AI systems that are developed, and the degree to which their levels of reliability, fairness, transparency and accountability will lead to appropriate levels of trust. Susan Ariel Aaronson, a research professor of international affairs at GW, will use her expertise in data-driven change and international data governance to lead the institute’s fourth thrust of participatory governance and trust.

Virginia Byrne, an assistant professor of higher education and student affairs at Morgan State, will lead community-driven projects related to the interplay between AI and education. According to Daumé, the TRAILS team will rely heavily on Morgan State’s leadership—as Maryland’s preeminent public urban research university—in conducting rigorous, participatory community-based research with broad societal impacts.

Additional academic support will come from Valerie Reyna, a professor of human development at Cornell, who will use her expertise in human judgment and cognition to advance efforts focused on how people interpret their use of AI.

Federal officials at NIST will collaborate with TRAILS in the development of meaningful measures, benchmarks, test beds and certification methods—particularly as they apply to important topics essential to trust and trustworthiness such as safety, fairness, privacy, transparency, explainability, accountability, accuracy and reliability.

“The ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio.

Today’s announcement [May 4, 2023] is the latest in a series of federal grants establishing a cohort of National Artificial Intelligence Research Institutes. This recent investment in seven new AI institutes, totaling $140 million, follows two previous rounds of awards.

“Maryland is at the forefront of our nation’s scientific innovation thanks to our talented workforce, top-tier universities, and federal partners,” said U.S. Sen. Chris Van Hollen (D-Md.). “This National Science Foundation award for the University of Maryland—in coordination with other Maryland-based research institutions including Morgan State University and NIST—will promote ethical and responsible AI development, with the goal of helping us harness the benefits of this powerful emerging technology while limiting the potential risks it poses. This investment entrusts Maryland with a critical priority for our shared future, recognizing the unparalleled ingenuity and world-class reputation of our institutions.” 

The NSF, in collaboration with government agencies and private sector leaders, has now invested close to half a billion dollars in the AI institutes ecosystem—an investment that expands a collaborative AI research network into almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “[They] are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

As noted in the UMD news release, this funding is part of a ‘bundle’; here’s more from the May 4, 2023 US NSF news release announcing the full $140 million funding program, Note: Links have been removed,

The U.S. National Science Foundation, in collaboration with other federal agencies, higher education institutions and other stakeholders, today announced a $140 million investment to establish seven new National Artificial Intelligence Research Institutes. The announcement is part of a broader effort across the federal government to advance a cohesive approach to AI-related opportunities and risks.

The new AI Institutes will advance foundational AI research that promotes ethical and trustworthy AI systems and technologies, develop novel approaches to cybersecurity, contribute to innovative solutions to climate change, expand the understanding of the brain, and leverage AI capabilities to enhance education and public health. The institutes will support the development of a diverse AI workforce in the U.S. and help address the risks and potential harms posed by AI.  This investment means  NSF and its funding partners have now invested close to half a billion dollars in the AI Institutes research network, which reaches almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “These institutes are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

“These strategic federal investments will advance American AI infrastructure and innovation, so that AI can help tackle some of the biggest challenges we face, from climate change to health. Importantly, the growing network of National AI Research Institutes will promote responsible innovation that safeguards people’s safety and rights,” said White House Office of Science and Technology Policy Director Arati Prabhakar.

The new AI Institutes are interdisciplinary collaborations among top AI researchers and are supported by co-funding from the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST); U.S. Department of Homeland Security’s Science and Technology Directorate (DHS S&T); U.S. Department of Agriculture’s National Institute of Food and Agriculture (USDA-NIFA); U.S. Department of Education’s Institute of Education Sciences (ED-IES); U.S. Department of Defense’s Office of the Undersecretary of Defense for Research and Engineering (DoD OUSD R&E); and IBM Corporation (IBM).

“Foundational research in AI and machine learning has never been more critical to the understanding, creation and deployment of AI-powered systems that deliver transformative and trustworthy solutions across our society,” said NSF Assistant Director for Computer and Information Science and Engineering Margaret Martonosi. “These recent awards, as well as our AI Institutes ecosystem as a whole, represent our active efforts in addressing national economic and societal priorities that hinge on our nation’s AI capability and leadership.”

The new AI Institutes focus on six research themes:

Trustworthy AI

NSF Institute for Trustworthy AI in Law & Society (TRAILS)

Led by the University of Maryland, TRAILS aims to transform the practice of AI from one driven primarily by technological innovation to one driven with attention to ethics, human rights and support for communities whose voices have been marginalized into mainstream AI. TRAILS will be the first institute of its kind to integrate participatory design, technology, and governance of AI systems and technologies and will focus on investigating what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness. TRAILS is funded by a partnership between NSF and NIST.

Intelligent Agents for Next-Generation Cybersecurity

AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION)

Led by the University of California, Santa Barbara, this institute will develop novel approaches that leverage AI to anticipate and take corrective actions against cyberthreats that target the security and privacy of computer networks and their users. The team of researchers will work with experts in security operations to develop a revolutionary approach to cybersecurity, in which AI-enabled intelligent security agents cooperate with humans across the cyberdefense life cycle to jointly improve the resilience of security of computer systems over time. ACTION is funded by a partnership between NSF, DHS S&T, and IBM.

Climate Smart Agriculture and Forestry

AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE)

Led by the University of Minnesota Twin Cities, this institute aims to advance foundational AI by incorporating knowledge from agriculture and forestry sciences and leveraging these unique, new AI methods to curb climate effects while lifting rural economies. By creating a new scientific discipline and innovation ecosystem intersecting AI and climate-smart agriculture and forestry, our researchers and practitioners will discover and invent compelling AI-powered knowledge and solutions. Examples include AI-enhanced estimation methods of greenhouse gases and specialized field-to-market decision support tools. A key goal is to lower the cost of and improve accounting for carbon in farms and forests to empower carbon markets and inform decision making. The institute will also expand and diversify rural and urban AI workforces. AI-CLIMATE is funded by USDA-NIFA.

Neural and Cognitive Foundations of Artificial Intelligence

AI Institute for Artificial and Natural Intelligence (ARNI)

Led by Columbia University, this institute will draw together top researchers across the country to focus on a national priority: connecting the major progress made in AI systems to the revolution in our understanding of the brain. ARNI will meet the urgent need for new paradigms of interdisciplinary research between neuroscience, cognitive science and AI. This will accelerate progress in all three fields and broaden the transformative impact on society in the next decade. ARNI is funded by a partnership between NSF and DoD OUSD R&E.

AI for Decision Making

AI Institute for Societal Decision Making (AI-SDM)

Led by Carnegie Mellon University, this institute seeks to create human-centric AI for decision making to bolster effective response in uncertain, dynamic and resource-constrained scenarios like disaster management and public health. By bringing together an interdisciplinary team of AI and social science researchers, AI-SDM will enable emergency managers, public health officials, first responders, community workers and the public to make decisions that are data driven, robust, agile, resource efficient and trustworthy. The vision of the institute will be realized via development of AI theory and methods, translational research, training and outreach, enabled by partnerships with diverse universities, government organizations, corporate partners, community colleges, public libraries and high schools.

AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes

AI Institute for Inclusive Intelligent Technologies for Education (INVITE)

Led by the University of Illinois Urbana-Champaign, this institute seeks to fundamentally reframe how educational technologies interact with learners by developing AI tools and approaches to support three crucial noncognitive skills known to underlie effective learning: persistence, academic resilience and collaboration. The institute’s use-inspired research will focus on how children communicate STEM content, how they learn to persist through challenging work, and how teachers support and promote noncognitive skill development. The resultant AI-based tools will be integrated into classrooms to empower teachers to support learners in more developmentally appropriate ways.

AI Institute for Exceptional Education (AI4ExceptionalEd)

Led by the University at Buffalo, this institute will work toward universal speech and language screening for children. The framework, the AI screener, will analyze video and audio streams of children during classroom interactions and assess the need for evidence-based interventions tailored to individual needs of students. The institute will serve children in need of ability-based speech and language services, advance foundational AI technologies and enhance understanding of childhood speech and language development. The AI Institute for Exceptional Education was previously announced in January 2023. The INVITE and AI4ExceptionalEd institutes are funded by a partnership between NSF and ED-IES.

Statements from NSF’s Federal Government Funding Partners

“Increasing AI system trustworthiness while reducing its risks will be key to unleashing AI’s potential benefits and ensuring our shared societal values,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “Today, the ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them.”

“The ACTION Institute will help us better assess the opportunities and risks of rapidly evolving AI technology and its impact on DHS missions,” said Dimitri Kusnezov, DHS under secretary for science and technology. “This group of researchers and their ambition to push the limits of fundamental AI and apply new insights represents a significant investment in cybersecurity defense. These partnerships allow us to collectively remain on the forefront of leading-edge research for AI technologies.”

“In the tradition of USDA National Institute of Food and Agriculture investments, this new institute leverages the scientific power of U.S. land-grant universities informed by close partnership with farmers, producers, educators and innovators to address the grand challenge of rising greenhouse gas concentrations and associated climate change,” said Acting NIFA Director Dionne Toombs. “This innovative center will address the urgent need to counter climate-related threats, lower greenhouse gas emissions, grow the American workforce and increase new rural opportunities.”

“The leading-edge in AI research inevitably draws from our, so far, limited understanding of human cognition. This AI Institute seeks to unify the fields of AI and neuroscience to bring advanced designs and approaches to more capable and trustworthy AI, while also providing better understanding of the human brain,” said Bindu Nair, director, Basic Research Office, Office of the Undersecretary of Defense for Research and Engineering. “We are proud to partner with NSF in this critical field of research, as continued advancement in these areas holds the potential for further and significant benefits to national security, the economy and improvements in quality of life.”

“We are excited to partner with NSF on these two AI institutes,” said IES Director Mark Schneider. “We hope that they will provide valuable insights into how to tap modern technologies to improve the education sciences — but more importantly we hope that they will lead to better student outcomes and identify ways to free up the time of teachers to deliver more informed individualized instruction for the students they care so much about.” 

Learn more about the NSF AI Institutes by visiting nsf.gov.

Two things I noticed: (1) no mention of including ethics training or concepts in science and technology education, and (2) no mention of integrating ethics and social issues into any of the AI Institutes. So, it seems that ‘Responsible Design, Development and Deployment of Technologies (ReDDDoT)’ occupies its own fiefdom.

Some sobering thoughts

Things can go terribly wrong with new technology, as seen in the hit British television series, Mr. Bates vs. The Post Office (based on a true story). Here’s more from a January 9, 2024 posting by Ani Blundel for tellyvisions.org,

… what is this show that’s caused the entire country to rise up as one to defend the rights of the lowly sub-postal worker? Known as the “British Post Office scandal,” the incidents first began in 1999 when the U.K. postal system began to switch to digital systems, using the Horizon Accounting system to track the monies brought in. However, the IT system was faulty from the start, and rather than blame the technology, the British government accused, arrested, persecuted, and convicted over 700 postal workers of fraud and theft. This continued through 2015 when the glitch was finally recognized, and in 2019, the convictions were ruled to be a miscarriage of justice.

Here’s the series synopsis:

The drama tells the story of one of the greatest miscarriages of justice in British legal history. Hundreds of innocent sub-postmasters and postmistresses were wrongly accused of theft, fraud, and false accounting due to a defective IT system. Many of the wronged workers were prosecuted, some of whom were imprisoned for crimes they never committed, and their lives were irreparably ruined by the scandal. Following the landmark Court of Appeal decision to overturn their criminal convictions, dozens of former sub-postmasters and postmistresses have been exonerated on all counts as they battled to finally clear their names. They fought for over ten years, finally proving their innocence and sealing a resounding victory, but all involved believe the fight is not over yet, not by a long way.

Here’s a video trailer for ‘Mr. Bates vs. The Post Office’:

More from Blundel’s January 9, 2024 posting, Note: A link has been removed,

The outcry from the general public against the government’s bureaucratic mismanagement and abuse of employees has been loud and sustained enough that Prime Minister Rishi Sunak had to come out with a statement condemning what happened back during the 2009 incident. Further, the current Justice Secretary, Alex Chalk, is now trying to figure out the fastest way to exonerate the hundreds of sub-post managers and sub-postmistresses who were wrongfully convicted back then and if there are steps to be taken to punish the post office a decade later.

It’s a horrifying story and the worst I’ve seen so far, but, sadly, it’s not the only one of its kind.

Too often people’s concerns and worries about new technology are dismissed or trivialized. Somehow, all the work done to establish ethical standards and develop trust seems to be used as a kind of sop to the concerns rather than being integrated into the implementation of life-altering technologies.

Canadian scientists still being muzzled and a call for action on values and ethics in the Canadian federal public service

I’m starting with the older news, a survey finding that Canadian scientists are still being muzzled, before moving on to a more recent survey in which workers in the Canadian public service (where most Canadian scientists are employed) criticize the government’s values and ethics.

Muzzles, anyone?

It’s not exactly surprising to hear that Canadian scientists are still being muzzled. For another recent story, see my November 7, 2023 posting, “Money and its influence on Canada’s fisheries and oceans,” for some specifics (two of the authors are associated with Dalhousie University, Nova Scotia, Canada).

This December 13, 2023 essay by Alana Westwood, Manjulika E. Robertson and Samantha M. Chu (all of Dalhousie University, although none were listed as authors on the ‘money, fisheries, and oceans’ paper) appeared on The Conversation (h/t December 14, 2023 news item on phys.org). The authors describe some recent research into the Canadian situation, specifically since the 2015 election, when the Liberals formed the government and ‘removed’ the muzzles placed on scientists by the previous Conservative government,

We recently surveyed 741 environmental researchers across Canada in two separate studies into interference. We circulated our survey through scientific societies related to environmental fields, as well as directly emailing Canadian authors of peer-reviewed research in environmental disciplines.

Researchers were asked (1) if they believed they had experienced interference in their work, (2) the sources and types of this interference, and (3) the subsequent effects on their career satisfaction and well-being.

We also asked demographic information to understand whether researchers’ perceptions of interference differed by career stage, research area or identity.

Although overall ability to communicate is improving, interference is a pervasive issue in Canada, including from government, private industry and academia. We found 92 per cent of the environmental researchers reported having experienced interference with their ability to communicate or conduct their research in some form.

Interference also manifested in different ways and already-marginalized researchers experienced worse outcomes.

The writers go on to offer a history of the interference (there’s also a more detailed history in this May 20, 2015 Canadian Broadcasting Corporation [CBC] online news article by Althea Manasan) before offering more information about results from the two recent surveys, Note: Links have been removed,

In our survey, respondents indicated that, overall, their ability to communicate with the public has improved in recent years. Of the respondents aware of the government’s scientific integrity policies, roughly half of them attribute positive changes to them.

Others argued that the 2015 change in government [from Conservative to Liberal] had the biggest influence. In the first few months of their tenure, the Liberal government created a new cabinet position, the Minister of Science (this position was absorbed into the role of Minister of Innovation, Science, and Industry in 2019), and appointed a chief science advisor among other changes.

Though the ability to communicate has generally improved, many of the researchers argued interference still goes on in subtler ways. These included undue restriction on what kind of environmental research they can do, and funding to pursue them. Many respondents attributed those restrictions to the influence of private industry [emphasis mine].

Respondents identified the major sources of external interference as management, workplace policies, and external research partners. The chief motivations for interference, as the scientists saw it, included downplaying environmental risks, justifying an organization’s current position on an issue and avoiding contention.

Our most surprising finding was almost half of respondents said they limited their communications with the public and policymakers due to fears of negative backlash and reduced career opportunities.

In addition, interference had not been experienced equally. Early career and marginalized scientists — including those who identify as women, racialized, living with a disability and 2SLGBTQI+ — reported facing significantly more interference than their counterparts.

Scientists studying climate change, pollution, environmental impact assessments and threatened species were also more likely to experience interference with their work than scientists in other disciplines.

The researchers used a single survey as the basis for two studies concerning interference in science,

Interference in science: scientists’ perspectives on their ability to communicate and conduct environmental research in Canada by Manjulika E. Robertson, Samantha M. Chu, Anika Cloutier, Philippe Mongeon, Don A. Driscoll, Tej Heer, and Alana R. Westwood. FACETS 8 (1) 30 November 2023 DOI: https://doi.org/10.1139/facets-2023-0005

This paper is open access.

Do environmental researchers from marginalized groups experience greater interference? Understanding scientists’ perceptions by Samantha M. Chu, Manjulika E. Robertson, Anika Cloutier, Suchinta Arif, and Alana R. Westwood. FACETS 30 November 2023 DOI: https://doi.org/10.1139/facets-2023-0006

This paper is open access.

This next bit is on a somewhat related topic.

The Canadian government’s public service: values and ethics

Before launching into the latest news, here’s a little background. In 2016, the newly elected Liberal government implemented a new payroll system for the Canadian civil/public service. It was a débacle, one which continues to this day (for the latest news I could find, see this September 1, 2023 article by Sam Konnert for CBC online news).

It was preventable and both the Conservative and Liberal governments of the day are responsible. You can get more details from my December 27, 2019 posting; scroll down to “The Minister of Digital Government and a bureaucratic débacle” and read on from there. In short, elected officials of both the Liberal and Conservative governments refused to listen when employees (both from the government and from the contractor) expressed grave concerns about the proposed pay system.

Now for public service employee morale, from a February 7, 2024 article by Robyn Miller for CBC news online, Note: Links have been removed,

Unions representing federal public servants say the government needs to do more to address dissatisfaction among the workforce after a recent report found some employees are unable to feel pride in their work.

“It’s more difficult now to be proud to be a public servant because of people’s perceptions of the institution and because of Canada’s role on the global stage,” said one participant who testified as part of the Deputy Ministers’ Task Team on Values and Ethics Report.

The report was published in late December [2023] by members of a task force assembled by Privy Council Clerk John Hannaford.

It’s the first major values and ethics review since an earlier report titled A Strong Foundation was released nearly 30 years ago.

Alex Silas, a regional executive vice-president of the Public Service Alliance of Canada, said the union supports the recommendations in the report but wants to see action.

“What we’ve seen historically, unfortunately, is that the values and ethics proposed by the federal government are not implemented in the workplaces of the federal government,” Silas said.

According to the report, it drew its findings from more than 90 conversations with public servants and external stakeholders starting in September 2023.

The report notes “public servants must provide frank and professional advice, without partisan considerations or fear of criticism or political reprisals.” [emphasis mine]

“The higher up the food chain you go, the less accountability seems to exist,” said one participant.

So, either elected officials and/or higher-ups don’t listen when you speak up, or you’re afraid to speak up for fear of criticism and/or reprisals. Plus, there’s outright interference, as noted in the survey of scientists.

For the curious, here’s a link to the Deputy Ministers’ Task Team on Values and Ethics Report to the Clerk of the Privy Council (Canada 2023).

Let’s hope this airing of dirty laundry leads to some changes.