Neural (brain) implants and hype (long read)

There was a big splash a few weeks ago when it was announced that Neuralink (an Elon Musk company) had surgically implanted its brain device in its first human patient.

Getting approval

David Tuffley, senior lecturer in Applied Ethics & CyberSecurity at Griffith University (Australia), provides a good overview of the road Neuralink took to FDA (US Food and Drug Administration) approval for human clinical trials in his May 29, 2023 essay for The Conversation, Note: Links have been removed,

Since its founding in 2016, Elon Musk’s neurotechnology company Neuralink has had the ambitious mission to build a next-generation brain implant with at least 100 times more brain connections than devices currently approved by the US Food and Drug Administration (FDA).

The company has now reached a significant milestone, having received FDA approval to begin human trials. So what were the issues keeping the technology in the pre-clinical trial phase for as long as it was? And have these concerns been addressed?

Neuralink is making a Class III medical device known as a brain-computer interface (BCI). The device connects the brain to an external computer via a Bluetooth signal, enabling continuous communication back and forth.

The device itself is a coin-sized unit called a Link. It’s implanted within a small disk-shaped cutout in the skull using a precision surgical robot. The robot splices a thousand tiny threads from the Link to certain neurons in the brain. [emphasis mine] Each thread is about a quarter the diameter of a human hair.

The company says the device could enable precise control of prosthetic limbs, giving amputees natural motor skills. It could revolutionise treatment for conditions such as Parkinson’s disease, epilepsy and spinal cord injuries. It also shows some promise for potential treatment of obesity, autism, depression, schizophrenia and tinnitus.

Several other neurotechnology companies and researchers have already developed BCI technologies that have helped people with limited mobility regain movement and complete daily tasks.

In February 2021, Musk said Neuralink was working with the FDA to secure permission to start initial human trials later that year. But human trials didn’t commence in 2021.

Then, in March 2022, Neuralink made a further application to the FDA to establish its readiness to begin human trials.

One year and three months later, on May 25, 2023, Neuralink finally received FDA approval for its first human clinical trial. Given how hard Neuralink has pushed for permission to begin, we can assume it will begin very soon. [emphasis mine]

The approval has come less than six months after the US Office of the Inspector General launched an investigation into Neuralink over potential animal welfare violations. [emphasis mine]

In the rest of his May 29, 2023 essay, Tuffley goes on to discuss, in accessible language, the FDA’s specific technical concerns about the implants and how they were addressed.
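
Tuffley’s description is also a good reminder that the engineering challenge here is as much about data as about surgery: a device with roughly a thousand electrode threads generates far more raw signal than a Bluetooth radio can carry. Here’s a minimal back-of-the-envelope sketch, in Python, of that bandwidth problem; every number in it is an assumption chosen for illustration, not a Neuralink specification.

# Why an implant can't simply stream raw brain signals over Bluetooth.
# All figures below are illustrative assumptions, not Neuralink specs.
import struct

N_CHANNELS = 1024         # assumed electrode count
SAMPLE_RATE_HZ = 20_000   # assumed per-channel sampling rate
BITS_PER_SAMPLE = 10      # assumed ADC resolution

raw_bps = N_CHANNELS * SAMPLE_RATE_HZ * BITS_PER_SAMPLE
print(f"Raw data rate: {raw_bps / 1e6:.1f} Mbit/s")   # ~204.8 Mbit/s

# Bluetooth Low Energy delivers roughly 1-2 Mbit/s in practice, so the
# implant must reduce the data on-chip, e.g. by detecting spike events
# and transmitting only compact records (channel id + timestamp):
event = struct.pack("<HI", 42, 123_456)   # channel 42 fired at tick 123456
print(f"{len(event)} bytes per spike event")

The arithmetic is the point: at those assumed rates, the raw stream runs to a couple of hundred megabits per second while the radio manages one or two, which is why so much of the processing has to happen on the implant itself.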

More about how Neuralink’s implant works and some concerns

Canadian Broadcasting Corporation (CBC) journalist Andrew Chang offers an almost 13-minute video, “Neuralink brain chip’s first human patient. How does it work?” Chang is a little overenthused for my taste, but he offers some good information about neural implants, along with informative graphics, in his presentation.

So, Tuffley was right about Neuralink moving quickly to human clinical trials, as you can guess from the title of Chang’s CBC video.

Jennifer Korn announced that recruitment had started in her September 20, 2023 article for CNN (Cable News Network), Note: Links have been removed,

Elon Musk’s controversial biotechnology startup Neuralink opened up recruitment for its first human clinical trial Tuesday, according to a company blog.

After receiving approval from an independent review board, Neuralink is set to begin offering brain implants to paralysis patients as part of the PRIME Study, the company said. PRIME, short for Precise Robotically Implanted Brain-Computer Interface, is being carried out to evaluate both the safety and functionality of the implant.

Trial patients will have a chip surgically placed in the part of the brain that controls the intention to move. The chip, installed by a robot, will then record and send brain signals to an app, with the initial goal being “to grant people the ability to control a computer cursor or keyboard using their thoughts alone,” the company wrote.

Those with quadriplegia [sometimes known as tetraplegia] due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS) may qualify for the six-year-long study – 18 months of at-home and clinic visits followed by follow-up visits over five years. Interested people can sign up in the patient registry on Neuralink’s website.

Musk has been working on Neuralink’s goal of using implants to connect the human brain to a computer for five years, but the company so far has only tested on animals. The company also faced scrutiny after a monkey died in project testing in 2022 as part of efforts to get the animal to play Pong, one of the first video games.

I mentioned three Reuters investigative journalists who were reporting on Neuralink’s animal abuse allegations (emphasized in Tuffley’s essay) in a July 7, 2023 posting, “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” Later that year, Neuralink was cleared by the US Department of Agriculture (see the September 24, 2023 article by Mahnoor Jehangir for BNN Breaking).

Plus, Neuralink was being investigated over further allegations, this time regarding hazardous pathogens, according to a February 9, 2023 article by Rachel Levy for Reuters,

The U.S. Department of Transportation said on Thursday it is investigating Elon Musk’s brain-implant company Neuralink over the potentially illegal movement of hazardous pathogens.

A Department of Transportation spokesperson told Reuters about the probe after the Physicians Committee of Responsible Medicine (PCRM), an animal-welfare advocacy group, wrote to Secretary of Transportation Pete Buttigieg earlier on Thursday to alert it of records it obtained on the matter.

PCRM said it obtained emails and other documents that suggest unsafe packaging and movement of implants removed from the brains of monkeys. These implants may have carried infectious diseases in violation of federal law, PCRM said.

There’s an update about the hazardous materials in the next section. Spoiler alert: the company got fined.

Neuralink’s first human implant

A January 30, 2024 article (Associated Press with files from Reuters) on the Canadian Broadcasting Corporation’s (CBC) online news webspace heralded the latest about Neuralink’s human clinical trials,

The first human patient received an implant from Elon Musk’s computer-brain interface company Neuralink over the weekend, the billionaire says.

In a post Monday [January 29, 2024] on X, the platform formerly known as Twitter, Musk said that the patient received the implant the day prior and was “recovering well.” He added that “initial results show promising neuron spike detection.”

Spikes are activity by neurons, which the National Institutes of Health describe as cells that use electrical and chemical signals to send information around the brain and to the body.

The billionaire, who owns X and co-founded Neuralink, did not provide additional details about the patient.

When Neuralink announced in September [2023] that it would begin recruiting people, the company said it was searching for individuals with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis, commonly known as ALS or Lou Gehrig’s disease.

Neuralink reposted Musk’s Monday [January 29, 2024] post on X, but did not publish any additional statements acknowledging the human implant. The company did not immediately respond to requests for comment from The Associated Press or Reuters on Tuesday [January 30, 2024].

In a separate Monday [January 29, 2024] post on X, Musk said that the first Neuralink product is called “Telepathy” — which, he said, will enable users to control their phones or computers “just by thinking.” He said initial users would be those who have lost use of their limbs.

The startup’s PRIME Study is a trial for its wireless brain-computer interface to evaluate the safety of the implant and surgical robot.
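
Since “neuron spike detection” is doing a lot of work in Musk’s announcement, here’s a minimal sketch of what detecting a spike can mean in practice. The median-based noise estimate is a standard trick from the spike-sorting literature (Quian Quiroga et al., 2004); the code is purely illustrative and says nothing about Neuralink’s actual algorithms.

# Toy threshold-based spike detection on one simulated electrode channel.
# Illustrative only; real systems do this per channel, in hardware.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 20_000)    # background noise
signal[5_000] -= 8.0                     # inject two artificial "spikes"
signal[12_000] -= 8.0                    # (extracellular spikes deflect downward)

# Robust noise estimate: median(|x|) / 0.6745 approximates the noise std dev
sigma = np.median(np.abs(signal)) / 0.6745
threshold = -4.0 * sigma                 # flag deflections beyond ~4 sigma

spikes = np.where(signal < threshold)[0]
print("Spike samples:", spikes)          # expect indices near 5000 and 12000

Everything downstream, including the cursor control described in Korn’s CNN article, depends on turning those detected events into a usable signal, so “promising neuron spike detection” really is the first box to tick.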

Now for the hazardous materials, from the same January 30, 2024 article, Note: A link has been removed,

Earlier this month [January 2024], a Reuters investigation found that Neuralink was fined for violating U.S. Department of Transportation (DOT) rules regarding the movement of hazardous materials. During inspections of the company’s facilities in Texas and California in February 2023, DOT investigators found the company had failed to register itself as a transporter of hazardous material.

They also found improper packaging of hazardous waste, including the flammable liquid Xylene. Xylene can cause headaches, dizziness, confusion, loss of muscle co-ordination and even death, according to the U.S. Centers for Disease Control and Prevention.

The records do not say why Neuralink would need to transport hazardous materials or whether any harm resulted from the violations.

Skeptical thoughts about Elon Musk and Neuralink

Earlier this month (February 2024), the British Broadcasting Corporation (BBC) published an article by health reporters Jim Reed and Joe McFadden that highlights the history of brain implants and their possibilities, and notes some of Elon Musk’s more outrageous claims for Neuralink’s brain implants,

Elon Musk is no stranger to bold claims – from his plans to colonise Mars to his dreams of building transport links underneath our biggest cities. This week the world’s richest man said his Neuralink division had successfully implanted its first wireless brain chip into a human.

Is he right when he says this technology could – in the long term – save the human race itself?

Sticking electrodes into brain tissue is really nothing new.

In the 1960s and 70s electrical stimulation was used to trigger or suppress aggressive behaviour in cats. By the early 2000s monkeys were being trained to move a cursor around a computer screen using just their thoughts.

“It’s nothing novel, but implantable technology takes a long time to mature, and reach a stage where companies have all the pieces of the puzzle, and can really start to put them together,” says Anne Vanhoestenberghe, professor of active implantable medical devices, at King’s College London.

Neuralink is one of a growing number of companies and university departments attempting to refine and ultimately commercialise this technology. The focus, at least to start with, is on paralysis and the treatment of complex neurological conditions.

Reed and McFadden’s February 2024 BBC article describes a few of the other brain implant efforts, Note: Links have been removed,

One of its [Neuralink’s] main rivals, a start-up called Synchron backed by funding from investment firms controlled by Bill Gates and Jeff Bezos, has already implanted its stent-like device into 10 patients.

Back in December 2021, Philip O’Keefe, a 62-year-old Australian who lives with a form of motor neurone disease, composed the first tweet using just his thoughts to control a cursor.

And researchers at Lausanne University in Switzerland have shown it is possible for a paralysed man to walk again by implanting multiple devices to bypass damage caused by a cycling accident.

In a research paper published this year, they demonstrated a signal could be beamed down from a device in his brain to a second device implanted at the base of his spine, which could then trigger his limbs to move.

Some people living with spinal injuries are sceptical about the sudden interest in this new kind of technology.

“These breakthroughs get announced time and time again and don’t seem to be getting any further along,” says Glyn Hayes, who was paralysed in a motorbike accident in 2017, and now runs public affairs for the Spinal Injuries Association.

“If I could have anything back, it wouldn’t be the ability to walk. It would be putting more money into a way of removing nerve pain, for example, or ways to improve bowel, bladder and sexual function.” [emphasis mine]

Musk, however, is focused on something far more grand for Neuralink implants, from Reed and McFadden’s February 2024 BBC article, Note: A link has been removed,

But for Elon Musk, “solving” brain and spinal injuries is just the first step for Neuralink.

The longer-term goal is “human/AI symbiosis” [emphasis mine], something he describes as “species-level important”.

Musk himself has already talked about a future where his device could allow people to communicate with a phone or computer “faster than a speed typist or auctioneer”.

In the past, he has even said saving and replaying memories may be possible, although he recognised “this is sounding increasingly like a Black Mirror episode.”

One of the experts quoted in Reed and McFadden’s February 2024 BBC article asks a pointed question,

… “At the moment, I’m struggling to see an application that a consumer would benefit from, where they would take the risk of invasive surgery,” says Prof Vanhoestenberghe.

“You’ve got to ask yourself, would you risk brain surgery just to be able to order a pizza on your phone?”

Rae Hodge’s February 11, 2024 article for Salon about Elon Musk and his hyped-up Neuralink implant is worth reading in its entirety but for those who don’t have the time or need a little persuading, here are a few excerpts, Note 1: This is a warning; Hodge provides more detail about the animal cruelty allegations; Note 2: Links have been removed,

Elon Musk’s controversial brain-computer interface (BCI) tech, Neuralink, has supposedly been implanted in its first recipient — and as much as I want to see progress for treatment of paralysis and neurodegenerative disease, I’m not celebrating. I bet the neuroscientists he reportedly drove out of the company aren’t either, especially not after seeing the gruesome torture of test monkeys and apparent cover-up that paved the way for this moment. 

All of which is an ethics horror show on its own. But the timing of Musk’s overhyped implant announcement gives it an additional insulting subtext. Football players are currently in a battle for their lives against concussion-based brain diseases that plague autopsy reports of former NFL players. And Musk’s boast of false hope came just two weeks before living players take the field in the biggest and most brutal game of the year. [2024 Super Bowl LVIII]

ESPN’s Kevin Seifert reports neuro-damage is up this year as “players suffered a total of 52 concussions from the start of training camp to the beginning of the regular season. The combined total of 213 preseason and regular season concussions was 14% higher than 2021 but within range of the three-year average from 2018 to 2020 (203).”

I’m a big fan of body-tech: pacemakers, 3D-printed hips and prosthetic limbs that allow you to wear your wedding ring again after 17 years. Same for brain chips. But BCI is the slow-moving front of body-tech development for good reason. The brain is too understudied. Consequences of the wrong move are dire. Overpromising marketable results on profit-driven timelines — on the backs of such a small community of researchers in a relatively new field — would be either idiotic or fiendish. 

Brown University’s research in the sector goes back to the 1990s. Since the emergence of a floodgate-opening 2002 study and the first implant in 2004 by med-tech company BrainGate, more promising results have inspired broader investment into careful research. But BrainGate’s clinical trials started back in 2009, and as noted by Business Insider’s Hilary Brueck, are expected to continue until 2038 — with only 15 participants who have devices installed. 

Anne Vanhoestenberghe is a professor of active implantable medical devices at King’s College London. In a recent release, she cautioned against the kind of hype peddled by Musk.

“Whilst there are a few other companies already using their devices in humans and the neuroscience community have made remarkable achievements with those devices, the potential benefits are still significantly limited by technology,” she said. “Developing and validating core technology for long term use in humans takes time and we need more investments to ensure we do the work that will underpin the next generation of BCIs.” 

Neuralink is a metal coin in your head that connects to something as flimsy as an app. And we’ve seen how Elon treats those. We’ve also seen corporate goons steal a veteran’s prosthetic legs — and companies turn brain surgeons and dentists into repo-men by having them yank anti-epilepsy chips out of people’s skulls, and dentures out of their mouths. 

“I think we have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury,” Musk said at a 2023 tech summit, adding that the chip could possibly “make up for whatever lost capacity somebody has.”

Maybe BCI can. But only in the careful hands of scientists who don’t have Musk squawking “go faster!” over their shoulders. His greedy frustration with the speed of BCI science is telling, as is the animal cruelty it reportedly prompted.

There have been other examples of Musk’s grandiosity. Notably, David Lee expressed skepticism about the hyperloop in his August 13, 2013 article for BBC News online,

Is Elon Musk’s Hyperloop just a pipe dream?

Much like the pun in the headline, the bright idea of transporting people using some kind of vacuum-like tube is neither new nor imaginative.

There was Robert Goddard, considered the “father of modern rocket propulsion”, who claimed in 1909 that his vacuum system could suck passengers from Boston to New York at 1,200mph.

And then there were Soviet plans for an amphibious monorail – mooted in 1934 – in which two long pods would start their journey attached to a metal track before flying off the end and slipping into the water like a two-fingered Kit Kat dropped into some tea.

So ever since inventor and entrepreneur Elon Musk hit the world’s media with his plans for the Hyperloop, a healthy dose of scepticism has been in the air.

“This is by no means a new idea,” says Rod Muttram, formerly of Bombardier Transportation and Railtrack.

“It has been previously suggested as a possible transatlantic transport system. The only novel feature I see is the proposal to put the tubes above existing roads.”

Here’s the latest I’ve found on hyperloop, from the Hyperloop Wikipedia entry,

As of 2024, some companies continued to pursue technology development under the hyperloop moniker; however, one of the biggest, well-funded players, Hyperloop One, declared bankruptcy and ceased operations in 2023.

Musk is impatient and impulsive, as noted in a September 12, 2023 posting by Mike Masnick on Techdirt, Note: A link has been removed,

The Batshit Crazy Story Of The Day Elon Musk Decided To Personally Rip Servers Out Of A Sacramento Data Center

Back on Christmas Eve [December 24, 2022] of last year there were some reports that Elon Musk was in the process of shutting down Twitter’s Sacramento data center. In that article, a number of ex-Twitter employees were quoted about how much work it would be to do that cleanly, noting that there’s a ton of stuff hardcoded in Twitter code referring to that data center (hold that thought).

That same day, Elon tweeted out that he had “disconnected one of the more sensitive server racks.”

Masnick follows with a story of reckless behaviour from someone who should have known better.

Ethics of implants—where to look for more information

While Musk doesn’t use the term when he describes a “human/AI symbiosis” (presumably by way of a neural implant), he’s talking about a cyborg. Here’s a 2018 paper, which looks at some of the implications,

Do you want to be a cyborg? The moderating effect of ethics on neural implant acceptance by Eva Reinares-Lara, Cristina Olarte-Pascual, and Jorge Pelegrín-Borondo. Computers in Human Behavior, Volume 85, August 2018, Pages 43-53. DOI: https://doi.org/10.1016/j.chb.2018.03.032

This paper is open access.

Getting back to Neuralink, I have two blog posts that discuss the company and the ethics of brain implants from way back in 2021.

First, there’s Jazzy Benes’ March 1, 2021 posting on Santa Clara University’s Markkula Center for Applied Ethics blog. It stands out as it includes a discussion of the disabled community’s issues, Note: Links have been removed,

In the heart of Silicon Valley we are constantly enticed by the newest technological advances. With the big influencers Grimes [a Canadian musician and the mother of three children with Elon Musk] and Lil Uzi Vert publicly announcing their willingness to become experimental subjects for Elon Musk’s Neuralink brain implantation device, we are left wondering if future technology will actually give us “the knowledge of the Gods.” Is it part of the natural order for humans to become omniscient beings? Who will have access to the devices? What other ethical considerations must be discussed before releasing such technology to the public?

A significant issue that arises from developing technologies for the disabled community is the assumption that disabled persons desire the abilities of what some abled individuals may define as “normal.” Individuals with disabilities may object to technologies intended to make them fit an able-bodied norm. “Normal” is relative to each individual, and it could be potentially harmful to use a deficit view of disability, which means judging a disability as a deficiency. However, this is not to say that all disabled individuals will reject a technology that may enhance their abilities. Instead, I believe it is a consideration that must be recognized when developing technologies for the disabled community, and it can only be addressed through communication with disabled persons. As a result, I believe this is a conversation that must be had with the community for whom the technology is developed–disabled persons.

With technologies that aim to address disabilities, we walk a fine line between therapeutics and enhancement. Though not the first neural implant medical device, the Link may have been the first BCI system openly discussed for its potential transhumanism uses, such as “enhanced cognitive abilities, memory storage and retrieval, gaming, telepathy, and even symbiosis with machines.” …

Benes also discusses transhumanism, privacy issues, and consent issues. It’s a thoughtful reading experience.

Second is a July 9, 2021 posting by anonymous on the University of California at Berkeley School of Information blog, which provides more insight into privacy and other issues associated with data collection (and introduced me to the concept of decisional interference),

As the development of microchips furthers and advances in neuroscience occur, the possibility for seamless brain-machine interfaces, where a device decodes inputs from the user’s brain to perform functions, becomes more of a reality. Various forms of these technologies already exist. However, technological advances have made implantable and portable devices possible. Imagine a future where humans don’t need to talk to each other, but rather can transmit their thoughts directly to another person. This idea is the eventual goal of Elon Musk, the founder of Neuralink. Currently, Neuralink is one of the main companies involved in the advancement of this type of technology. Analysis of Neuralink’s technology and their overall mission statement provides an interesting insight into the future of this type of human-computer interface and the potential privacy and ethical concerns with this technology.

As this technology further develops, several privacy and ethical concerns come into question. To begin, using Solove’s Taxonomy as a privacy framework, many areas of potential harm are revealed. In the realm of information collection, there is much risk. Brain-computer interfaces, depending on where they are implanted, could have access to people’s most private thoughts and emotions. This information would need to be transmitted to another device for processing. The collection of this information by companies such as advertisers would represent a major breach of privacy. Additionally, there is risk to the user from information processing. These devices must work concurrently with other devices and often wirelessly. Given the widespread importance of cloud computing in much of today’s technology, offloading information from these devices to the cloud would be likely. Having the data stored in a database puts the user at the risk of secondary use if proper privacy policies are not implemented. The trove of information stored within the information collected from the brain is vast. These datasets could be combined with existing databases such as browsing history on Google to provide third parties with unimaginable context on individuals. There is also risk for information dissemination, more specifically, exposure. The information collected and processed by these devices would need to be stored digitally. Keeping such private information, even if anonymized, would be a huge potential for harm, as the contents of the information may in itself be re-identifiable to a specific individual. Lastly, there is risk for invasions such as decisional interference. Brain-machine interfaces would not only be able to read information in the brain but also write information. This would allow the device to make potential emotional changes in its users, which would be a major example of decisional interference. …

For the most recent Neuralink and brain implant ethics piece, there’s this February 14, 2024 essay on The Conversation, which, unusually for this publication, was solicited by the editors, Note: Links have been removed,

In January 2024, Musk announced that Neuralink implanted its first chip in a human subject’s brain. The Conversation reached out to two scholars at the University of Washington School of Medicine – Nancy Jecker, a bioethicist, and Andrew Ko, a neurosurgeon who implants brain chip devices – for their thoughts on the ethics of this new horizon in neuroscience.

Information about the implant, however, is scarce, aside from a brochure aimed at recruiting trial subjects. Neuralink did not register at ClinicalTrials.gov, as is customary and required by some academic journals. [all emphases mine]

Some scientists are troubled by this lack of transparency. Sharing information about clinical trials is important because it helps other investigators learn about areas related to their research and can improve patient care. Academic journals can also be biased toward positive results, preventing researchers from learning from unsuccessful experiments.

Fellows at the Hastings Center, a bioethics think tank, have warned that Musk’s brand of “science by press release, while increasingly common, is not science. [emphases mine]” They advise against relying on someone with a huge financial stake in a research outcome to function as the sole source of information.

When scientific research is funded by government agencies or philanthropic groups, its aim is to promote the public good. Neuralink, on the other hand, embodies a private equity model [emphasis mine], which is becoming more common in science. Firms pooling funds from private investors to back science breakthroughs may strive to do good, but they also strive to maximize profits, which can conflict with patients’ best interests.

In 2022, the U.S. Department of Agriculture investigated animal cruelty at Neuralink, according to a Reuters report, after employees accused the company of rushing tests and botching procedures on test animals in a race for results. The agency’s inspection found no breaches, according to a letter from the USDA secretary to lawmakers, which Reuters reviewed. However, the secretary did note an “adverse surgical event” in 2019 that Neuralink had self-reported.

In a separate incident also reported by Reuters, the Department of Transportation fined Neuralink for violating rules about transporting hazardous materials, including a flammable liquid.

…the possibility that the device could be increasingly shown to be helpful for people with disabilities, but become unavailable due to loss of research funding. For patients whose access to a device is tied to a research study, the prospect of losing access after the study ends can be devastating. [emphasis mine] This raises thorny questions about whether it is ever ethical to provide early access to breakthrough medical interventions prior to their receiving full FDA approval.

Not registering a clinical trial would seem to suggest there won’t be much oversight. As for Musk’s “science by press release” activities, I hope those will be treated with more skepticism by mainstream media, although that seems unlikely given the current situation with journalism (more about that in a future post).

As for the issues associated with private equity models for science research and the problem of losing access to devices after a clinical trial is ended, my April 5, 2022 posting, “Going blind when your neural implant company flirts with bankruptcy (long read)” offers some cautionary tales, in addition to being the most comprehensive piece I’ve published on ethics and brain implants.

My July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” offers a brief overview of the international scene.

First round of seed funding announced for NSF (US National Science Foundation) Institute for Trustworthy AI in Law & Society (TRAILS)

Having published an earlier (January 2024) US National Science Foundation (NSF) funding announcement for the TRAILS (Trustworthy AI in Law & Society) Institute just yesterday (February 21, 2024), I’m following up with an announcement about the initiative’s first round of seed funding.

From an undated TRAILS ‘story‘ by Tom Ventsias on the initiative’s website (and published January 24, 2024 as a University of Maryland news release on EurekAlert),

The Institute for Trustworthy AI in Law & Society (TRAILS) has unveiled an inaugural round of seed grants designed to integrate a greater diversity of stakeholders into the artificial intelligence (AI) development and governance lifecycle, ultimately creating positive feedback loops to improve trustworthiness, accessibility and efficacy in AI-infused systems.

The eight grants announced on January 24, 2024—ranging from $100K to $150K apiece and totaling just over $1.5 million—were awarded to interdisciplinary teams of faculty associated with the institute. Funded projects include developing AI chatbots to assist with smoking cessation, designing animal-like robots that can improve autism-specific support at home, and exploring how people use and rely upon AI-generated language translation systems.

All eight projects fall under the broader mission of TRAILS, which is to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“At the speed with which AI is developing, our seed grant program will enable us to keep pace—or even stay one step ahead—by incentivizing cutting-edge research and scholarship that spans AI design, development and governance,” said Hal Daumé III, a professor of computer science at the University of Maryland who is the director of TRAILS.

After TRAILS was launched in May 2023 with a $20 million award from the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST), lead faculty met to brainstorm how the institute could best move forward with research, innovation and outreach that would have a meaningful impact.

They determined a seed grant program could quickly leverage the wide range of academic talent at TRAILS’ four primary institutions. This includes the University of Maryland’s expertise in computing and human-computer interaction; George Washington University’s strengths in systems engineering and AI as it relates to law and governance; Morgan State University’s work in addressing bias and inequity in AI; and Cornell University’s research in human behavior and decision-making.

“NIST and NSF’s support of TRAILS enables us to create a structured mechanism to reach across academic and institutional boundaries in search of innovative solutions,” said David Broniatowski, an associate professor of engineering management and systems engineering at George Washington University who leads TRAILS activities on the GW campus. “Seed funding from TRAILS will enable multidisciplinary teams to identify opportunities for their research to have impact, and to build the case for even larger, multi-institutional efforts.”

Further discussions were held at a TRAILS faculty retreat to identify seed grant guidelines and collaborative themes that mirror TRAILS’ primary research thrusts—participatory design, methods and metrics, evaluating trust, and participatory governance.

“Some of the funded projects are taking a fresh look at ideas we may have already been working on individually, and others are taking an entirely new approach to timely, pressing issues involving AI and machine learning,” said Virginia Byrne, an assistant professor of higher education & student affairs at Morgan State who is leading TRAILS activities on that campus and who served on the seed grant review committee.

A second round of seed funding will be announced later this year, said Darren Cambridge, who was recently hired as managing director of TRAILS to lead its day-to-day operations.

Projects selected in the first round are eligible for a renewal, while other TRAILS faculty—or any faculty member at the four primary TRAILS institutions—can submit new proposals for consideration, Cambridge said.

Ultimately, the seed funding program is expected to strengthen and incentivize other TRAILS activities that are now taking shape, including K–12 education and outreach programs, AI policy seminars and workshops on Capitol Hill, and multiple postdoc opportunities for early-career researchers.

“We want TRAILS to be the ‘go-to’ resource for educators, policymakers and others who are seeking answers and solutions on how to build, manage and use AI systems that will benefit all of society,” Cambridge said.

The eight projects selected for the first round of TRAILS seed-funding are:

Chung Hyuk Park and Zoe Szajnfarber from GW and Hernisa Kacorri from UMD aim to improve the support infrastructure and access to quality care for families of autistic children. Early interventions are strongly correlated with positive outcomes, while provider shortages and financial burdens have raised challenges—particularly for families without sufficient resources and experience. The researchers will develop novel parent-robot teaming for the home, advance the assistive technology, and assess the impact of teaming to promote more trust in human-robot collaborative settings.

Soheil Feizi from UMD and Robert Brauneis from GW will investigate various issues surrounding text-to-image [emphasis mine] generative AI models like Stable Diffusion, DALL-E 2, and Midjourney, focusing on myriad legal, aesthetic and computational aspects that are currently unresolved. A key question is how copyright law might adapt if these tools create works in an artist’s style. The team will explore how generative AI models represent individual artists’ styles, and whether those representations are complex and distinctive enough to form stable objects of protection. The researchers will also explore legal and technical questions to determine if specific artworks, especially rare and unique ones, have already been used to train AI models.

Huaishu Peng and Ge Gao from UMD will work with Malte Jung from Cornell to increase trust-building in embodied AI systems, which bridge the gap between computers and human physical senses. Specifically, the researchers will explore embodied AI systems in the form of miniaturized on-body or desktop robotic systems that can enable the exchange of nonverbal cues between blind and sighted individuals, an essential component of efficient collaboration. The researchers will also examine multiple factors—both physical and mental—in order to gain a deeper understanding of both groups’ values related to teamwork facilitated by embodied AI.

Marine Carpuat and Ge Gao from UMD will explore “mental models”—how humans perceive things—for language translation systems used by millions of people daily. They will focus on how individuals, depending on their language fluency and familiarity with the technology, make sense of their “error boundary”—that is, deciding whether an AI-generated translation is correct or incorrect. The team will also develop innovative techniques to teach users how to improve their mental models as they interact with machine translation systems.

Hal Daumé III, Furong Huang and Zubin Jelveh from UMD and Donald Braman from GW will propose new philosophies grounded in law to conceptualize, evaluate and achieve “effort-aware fairness,” which involves algorithms for determining whether an individual or a group of individuals is discriminated against in terms of equality of effort. The researchers will develop new metrics, evaluate fairness of datasets, and design novel algorithms that enable AI auditors to uncover and potentially correct unfair decisions.

Lorien Abroms and David Broniatowski from GW will recruit smokers to study the reliability of using generative chatbots, such as ChatGPT, as the basis for a digital smoking cessation program. Additional work will examine the acceptability by smokers and their perceptions of trust in using this rapidly evolving technology for help to quit smoking. The researchers hope their study will directly inform future digital interventions for smoking cessation and/or modifying other health behaviors.

Adam Aviv from GW and Michelle Mazurek from UMD will examine bias, unfairness and untruths such as sexism, racism and other forms of misrepresentation that come out of certain AI and machine learning systems. Though some systems have public warnings of potential biases, the researchers want to explore how users understand these warnings, if they recognize how biases may manifest themselves in the AI-generated responses, and how users attempt to expose, mitigate and manage potentially biased responses.

Susan Ariel Aaronson and David Broniatowski from GW plan to create a prototype of a searchable, easy-to-use website to enable policymakers to better utilize academic research related to trustworthy and participatory AI. The team will analyze research publications by TRAILS-affiliated researchers to ascertain which ones may have policy implications. Then, each relevant publication will be summarized and categorized by research questions, issues, keywords, and relevant policymaking uses. The resulting database prototype will enable the researchers to test the utility of this resource for policymakers over time.

Yes, things are moving quickly where AI is concerned. There’s text-to-image being investigated by Soheil Feizi and Robert Brauneis and, since the funding announcement in early January 2024, text-to-video has been announced (OpenAI’s Sora was previewed February 15, 2024). I wonder if that will be added to the project.

One more comment: Huaishu Peng, Ge Gao, and Malte Jung’s project for “… trust-building in embodied AI systems …” brings to mind Elon Musk’s stated goal of using brain implants for “human/AI symbiosis.” (I have more about that in an upcoming post.) Hopefully, Susan Ariel Aaronson and David Broniatowski’s proposed website for policymakers will be able to keep up with what’s happening in the field of AI, including research on the impact of private investments primarily designed for generating profits.

Prioritizing ethical & social considerations in emerging technologies—$16M in US National Science Foundation funding

I haven’t seen this much interest in the ethics and social impacts of emerging technologies in years. It seems that the latest AI (artificial intelligence) panic has stimulated interest not only in regulation but in ethics too.

The latest information I have on this topic comes from a January 9, 2024 US National Science Foundation (NSF) news release (also received via email),

NSF and philanthropic partners announce $16 million in funding to prioritize ethical and social considerations in emerging technologies

ReDDDoT is a collaboration with five philanthropic partners and crosses all disciplines of science and engineering

The U.S. National Science Foundation today launched a new $16 million program in collaboration with five philanthropic partners that seeks to ensure ethical, legal, community and societal considerations are embedded in the lifecycle of technology’s creation and use. The Responsible Design, Development and Deployment of Technologies (ReDDDoT) program aims to help create technologies that promote the public’s wellbeing and mitigate potential harms.

“The design, development and deployment of technologies have broad impacts on society,” said NSF Director Sethuraman Panchanathan. “As discoveries and innovations are translated to practice, it is essential that we engage and enable diverse communities to participate in this work. NSF and its philanthropic partners share a strong commitment to creating a comprehensive approach for co-design through soliciting community input, incorporating community values and engaging a broad array of academic and professional voices across the lifecycle of technology creation and use.”

The ReDDDoT program invites proposals from multidisciplinary, multi-sector teams that examine and demonstrate the principles, methodologies and impacts associated with responsible design, development and deployment of technologies, especially those specified in the “CHIPS and Science Act of 2022.” In addition to NSF, the program is funded and supported by the Ford Foundation, the Patrick J. McGovern Foundation, Pivotal Ventures, Siegel Family Endowment and the Eric and Wendy Schmidt Fund for Strategic Innovation.

“In recognition of the role responsible technologists can play to advance human progress, and the danger unaccountable technology poses to social justice, the ReDDDoT program serves as both a collaboration and a covenant between philanthropy and government to center public interest technology into the future of progress,” said Darren Walker, president of the Ford Foundation. “This $16 million initiative will cultivate expertise from public interest technologists across sectors who are rooted in community and grounded by the belief that innovation, equity and ethics must equally be the catalysts for technological progress.”

The broad goals of ReDDDoT include:

* Stimulating activity and filling gaps in research, innovation and capacity building in the responsible design, development, and deployment of technologies.
* Creating broad and inclusive communities of interest that bring together key stakeholders to better inform practices for the design, development, and deployment of technologies.
* Educating and training the science, technology, engineering, and mathematics workforce on approaches to responsible design, development, and deployment of technologies.
* Accelerating pathways to societal and economic benefits while developing strategies to avoid or mitigate societal and economic harms.
* Empowering communities, including economically disadvantaged and marginalized populations, to participate in all stages of technology development, including the earliest stages of ideation and design.

Phase 1 of the program solicits proposals for Workshops, Planning Grants, or the creation of Translational Research Coordination Networks, while Phase 2 solicits full project proposals. The initial areas of focus for 2024 include artificial intelligence, biotechnology, or natural and anthropogenic disaster prevention or mitigation. Future iterations of the program may consider other key technology focus areas enumerated in the CHIPS and Science Act.

For more information about ReDDDoT, visit the program website or register for an informational webinar on Feb. 9, 2024, at 2 p.m. ET.

Statements from NSF’s Partners

“The core belief at the heart of ReDDDoT – that technology should be shaped by ethical, legal, and societal considerations as well as community values – also drives the work of the Patrick J. McGovern Foundation to build a human-centered digital future for all. We’re pleased to support this partnership, committed to advancing the development of AI, biotechnology, and climate technologies that advance equity, sustainability, and justice.” – Vilas Dhar, President, Patrick J. McGovern Foundation

“From generative AI to quantum computing, the pace of technology development is only accelerating. Too often, technological advances are not accompanied by discussion and design that considers negative impacts or unrealized potential. We’re excited to support ReDDDoT as an opportunity to uplift new and often forgotten perspectives that critically examine technology’s impact on civic life, and advance Siegel Family Endowment’s vision of technological change that includes and improves the lives of all people.” – Katy Knight, President and Executive Director of Siegel Family Endowment

Only eight months ago, another big NSF funding project was announced, this time focused on AI and promoting trust, from a May 4, 2023 University of Maryland (UMD) news release (also on EurekAlert), Note: A link has been removed,

The University of Maryland has been chosen to lead a multi-institutional effort supported by the National Science Foundation (NSF) that will develop new artificial intelligence (AI) technologies designed to promote trust and mitigate risks, while simultaneously empowering and educating the public.

The NSF Institute for Trustworthy AI in Law & Society (TRAILS), announced on May 4, 2023, unites specialists in AI and machine learning with social scientists, legal scholars, educators and public policy experts. The multidisciplinary team will work with impacted communities, private industry and the federal government to determine what trust in AI looks like, how to develop technical solutions for AI that can be trusted, and which policy models best create and sustain trust.

Funded by a $20 million award from NSF, the new institute is expected to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“As artificial intelligence continues to grow exponentially, we must embrace its potential for helping to solve the grand challenges of our time, as well as ensure that it is used both ethically and responsibly,” said UMD President Darryll J. Pines. “With strong federal support, this new institute will lead in defining the science and innovation needed to harness the power of AI for the benefit of the public good and all humankind.”

In addition to UMD, TRAILS will include faculty members from George Washington University (GW) and Morgan State University, with more support coming from Cornell University, the National Institute of Standards and Technology (NIST), and private sector organizations like the DataedX Group, Arthur AI, Checkstep, FinRegLab and Techstars.

At the heart of establishing the new institute is the consensus that AI is currently at a crossroads. AI-infused systems have great potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems, but today’s systems are developed and deployed in a process that is opaque and insular to the public, and therefore, often untrustworthy to those affected by the technology.

“We’ve structured our research goals to educate, learn from, recruit, retain and support communities whose voices are often not recognized in mainstream AI development,” said Hal Daumé III, a UMD professor of computer science who is lead principal investigator of the NSF award and will serve as the director of TRAILS.

Inappropriate trust in AI can result in many negative outcomes, Daumé said. People often “overtrust” AI systems to do things they’re fundamentally incapable of. This can lead to people or organizations giving up their own power to systems that are not acting in their best interest. At the same time, people can also “undertrust” AI systems, leading them to avoid using systems that could ultimately help them.

Given these conditions—and the fact that AI is increasingly being deployed to mediate society’s online communications, determine health care options, and offer guidelines in the criminal justice system—it has become urgent to ensure that people’s trust in AI systems matches those same systems’ level of trustworthiness.

TRAILS has identified four key research thrusts to promote the development of AI systems that can earn the public’s trust through broader participation in the AI ecosystem.

The first, known as participatory AI, advocates involving human stakeholders in the development, deployment and use of these systems. It aims to create technology in a way that aligns with the values and interests of diverse groups of people, rather than being controlled by a few experts or solely driven by profit.

Leading the efforts in participatory AI is Katie Shilton, an associate professor in UMD’s College of Information Studies who specializes in ethics and sociotechnical systems. Tom Goldstein, a UMD associate professor of computer science, will lead the institute’s second research thrust, developing advanced machine learning algorithms that reflect the values and interests of the relevant stakeholders.

Daumé, Shilton and Goldstein all have appointments in the University of Maryland Institute for Advanced Computer Studies, which is providing administrative and technical support for TRAILS.

David Broniatowski, an associate professor of engineering management and systems engineering at GW, will lead the institute’s third research thrust of evaluating how people make sense of the AI systems that are developed, and the degree to which their levels of reliability, fairness, transparency and accountability will lead to appropriate levels of trust. Susan Ariel Aaronson, a research professor of international affairs at GW, will use her expertise in data-driven change and international data governance to lead the institute’s fourth thrust of participatory governance and trust.

Virginia Byrne, an assistant professor of higher education and student affairs at Morgan State, will lead community-driven projects related to the interplay between AI and education. According to Daumé, the TRAILS team will rely heavily on Morgan State’s leadership—as Maryland’s preeminent public urban research university—in conducting rigorous, participatory community-based research with broad societal impacts.

Additional academic support will come from Valerie Reyna, a professor of human development at Cornell, who will use her expertise in human judgment and cognition to advance efforts focused on how people interpret their use of AI.

Federal officials at NIST will collaborate with TRAILS in the development of meaningful measures, benchmarks, test beds and certification methods—particularly as they apply to important topics essential to trust and trustworthiness such as safety, fairness, privacy, transparency, explainability, accountability, accuracy and reliability.

“The ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio.

Today’s announcement [May 4, 2023] is the latest in a series of federal grants establishing a cohort of National Artificial Intelligence Research Institutes. This recent investment in seven new AI institutes, totaling $140 million, follows two previous rounds of awards.

“Maryland is at the forefront of our nation’s scientific innovation thanks to our talented workforce, top-tier universities, and federal partners,” said U.S. Sen. Chris Van Hollen (D-Md.). “This National Science Foundation award for the University of Maryland—in coordination with other Maryland-based research institutions including Morgan State University and NIST—will promote ethical and responsible AI development, with the goal of helping us harness the benefits of this powerful emerging technology while limiting the potential risks it poses. This investment entrusts Maryland with a critical priority for our shared future, recognizing the unparalleled ingenuity and world-class reputation of our institutions.” 

The NSF, in collaboration with government agencies and private sector leaders, has now invested close to half a billion dollars in the AI institutes ecosystem—an investment that expands a collaborative AI research network into almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “[They] are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

As noted in the UMD news release, this funding is part of a ‘bundle’; here’s more from the May 4, 2023 US NSF news release announcing the full $140 million funding program, Note: Links have been removed,

The U.S. National Science Foundation, in collaboration with other federal agencies, higher education institutions and other stakeholders, today announced a $140 million investment to establish seven new National Artificial Intelligence Research Institutes. The announcement is part of a broader effort across the federal government to advance a cohesive approach to AI-related opportunities and risks.

The new AI Institutes will advance foundational AI research that promotes ethical and trustworthy AI systems and technologies, develop novel approaches to cybersecurity, contribute to innovative solutions to climate change, expand the understanding of the brain, and leverage AI capabilities to enhance education and public health. The institutes will support the development of a diverse AI workforce in the U.S. and help address the risks and potential harms posed by AI. This investment means NSF and its funding partners have now invested close to half a billion dollars in the AI Institutes research network, which reaches almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “These institutes are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

“These strategic federal investments will advance American AI infrastructure and innovation, so that AI can help tackle some of the biggest challenges we face, from climate change to health. Importantly, the growing network of National AI Research Institutes will promote responsible innovation that safeguards people’s safety and rights,” said White House Office of Science and Technology Policy Director Arati Prabhakar.

The new AI Institutes are interdisciplinary collaborations among top AI researchers and are supported by co-funding from the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST); U.S. Department of Homeland Security’s Science and Technology Directorate (DHS S&T); U.S. Department of Agriculture’s National Institute of Food and Agriculture (USDA-NIFA); U.S. Department of Education’s Institute of Education Sciences (ED-IES); U.S. Department of Defense’s Office of the Undersecretary of Defense for Research and Engineering (DoD OUSD R&E); and IBM Corporation (IBM).

“Foundational research in AI and machine learning has never been more critical to the understanding, creation and deployment of AI-powered systems that deliver transformative and trustworthy solutions across our society,” said NSF Assistant Director for Computer and Information Science and Engineering Margaret Martonosi. “These recent awards, as well as our AI Institutes ecosystem as a whole, represent our active efforts in addressing national economic and societal priorities that hinge on our nation’s AI capability and leadership.”

The new AI Institutes focus on six research themes:

Trustworthy AI

NSF Institute for Trustworthy AI in Law & Society (TRAILS)

Led by the University of Maryland, TRAILS aims to transform the practice of AI from one driven primarily by technological innovation to one driven with attention to ethics, human rights and support for communities whose voices have been marginalized into mainstream AI. TRAILS will be the first institute of its kind to integrate participatory design, technology, and governance of AI systems and technologies and will focus on investigating what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness. TRAILS is funded by a partnership between NSF and NIST.

Intelligent Agents for Next-Generation Cybersecurity

AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION)

Led by the University of California, Santa Barbara, this institute will develop novel approaches that leverage AI to anticipate and take corrective actions against cyberthreats that target the security and privacy of computer networks and their users. The team of researchers will work with experts in security operations to develop a revolutionary approach to cybersecurity, in which AI-enabled intelligent security agents cooperate with humans across the cyberdefense life cycle to jointly improve the resilience of security of computer systems over time. ACTION is funded by a partnership between NSF, DHS S&T, and IBM.

Climate Smart Agriculture and Forestry

AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE)

Led by the University of Minnesota Twin Cities, this institute aims to advance foundational AI by incorporating knowledge from agriculture and forestry sciences and leveraging these unique, new AI methods to curb climate effects while lifting rural economies. By creating a new scientific discipline and innovation ecosystem intersecting AI and climate-smart agriculture and forestry, our researchers and practitioners will discover and invent compelling AI-powered knowledge and solutions. Examples include AI-enhanced estimation methods of greenhouse gases and specialized field-to-market decision support tools. A key goal is to lower the cost of and improve accounting for carbon in farms and forests to empower carbon markets and inform decision making. The institute will also expand and diversify rural and urban AI workforces. AI-CLIMATE is funded by USDA-NIFA.

Neural and Cognitive Foundations of Artificial Intelligence

AI Institute for Artificial and Natural Intelligence (ARNI)

Led by Columbia University, this institute will draw together top researchers across the country to focus on a national priority: connecting the major progress made in AI systems to the revolution in our understanding of the brain. ARNI will meet the urgent need for new paradigms of interdisciplinary research between neuroscience, cognitive science and AI. This will accelerate progress in all three fields and broaden the transformative impact on society in the next decade. ARNI is funded by a partnership between NSF and DoD OUSD R&E.

AI for Decision Making

AI Institute for Societal Decision Making (AI-SDM)

Led by Carnegie Mellon University, this institute seeks to create human-centric AI for decision making to bolster effective response in uncertain, dynamic and resource-constrained scenarios like disaster management and public health. By bringing together an interdisciplinary team of AI and social science researchers, AI-SDM will enable emergency managers, public health officials, first responders, community workers and the public to make decisions that are data driven, robust, agile, resource efficient and trustworthy. The vision of the institute will be realized via development of AI theory and methods, translational research, training and outreach, enabled by partnerships with diverse universities, government organizations, corporate partners, community colleges, public libraries and high schools.

AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes

AI Institute for Inclusive Intelligent Technologies for Education (INVITE)

Led by the University of Illinois Urbana-Champaign, this institute seeks to fundamentally reframe how educational technologies interact with learners by developing AI tools and approaches to support three crucial noncognitive skills known to underlie effective learning: persistence, academic resilience and collaboration. The institute’s use-inspired research will focus on how children communicate STEM content, how they learn to persist through challenging work, and how teachers support and promote noncognitive skill development. The resultant AI-based tools will be integrated into classrooms to empower teachers to support learners in more developmentally appropriate ways.

AI Institute for Exceptional Education (AI4ExceptionalEd)

Led by the University at Buffalo, this institute will work toward universal speech and language screening for children. The framework, the AI screener, will analyze video and audio streams of children during classroom interactions and assess the need for evidence-based interventions tailored to individual needs of students. The institute will serve children in need of ability-based speech and language services, advance foundational AI technologies and enhance understanding of childhood speech and language development. The AI Institute for Exceptional Education was previously announced in January 2023. The INVITE and AI4ExceptionalEd institutes are funded by a partnership between NSF and ED-IES.

Statements from NSF’s Federal Government Funding Partners

“Increasing AI system trustworthiness while reducing its risks will be key to unleashing AI’s potential benefits and ensuring our shared societal values,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “Today, the ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them.”

“The ACTION Institute will help us better assess the opportunities and risks of rapidly evolving AI technology and its impact on DHS missions,” said Dimitri Kusnezov, DHS under secretary for science and technology. “This group of researchers and their ambition to push the limits of fundamental AI and apply new insights represents a significant investment in cybersecurity defense. These partnerships allow us to collectively remain on the forefront of leading-edge research for AI technologies.”

“In the tradition of USDA National Institute of Food and Agriculture investments, this new institute leverages the scientific power of U.S. land-grant universities informed by close partnership with farmers, producers, educators and innovators to address the grand challenge of rising greenhouse gas concentrations and associated climate change,” said Acting NIFA Director Dionne Toombs. “This innovative center will address the urgent need to counter climate-related threats, lower greenhouse gas emissions, grow the American workforce and increase new rural opportunities.”

“The leading-edge in AI research inevitably draws from our, so far, limited understanding of human cognition. This AI Institute seeks to unify the fields of AI and neuroscience to bring advanced designs and approaches to more capable and trustworthy AI, while also providing better understanding of the human brain,” said Bindu Nair, director, Basic Research Office, Office of the Undersecretary of Defense for Research and Engineering. “We are proud to partner with NSF in this critical field of research, as continued advancement in these areas holds the potential for further and significant benefits to national security, the economy and improvements in quality of life.”

“We are excited to partner with NSF on these two AI institutes,” said IES Director Mark Schneider. “We hope that they will provide valuable insights into how to tap modern technologies to improve the education sciences — but more importantly we hope that they will lead to better student outcomes and identify ways to free up the time of teachers to deliver more informed individualized instruction for the students they care so much about.” 

Learn more about the NSF AI Institutes by visiting nsf.gov.

Two things I noticed: (1) no mention of including ethics training or concepts in science and technology education and (2) no mention of integrating ethics and social issues into any of the AI Institutes. So, it seems that ‘Responsible Design, Development and Deployment of Technologies (ReDDDoT)’ occupies its own fiefdom.

Some sobering thoughts

Things can go terribly wrong with new technology, as seen in the British television hit series, Mr. Bates vs. The Post Office (based on a true story), from a January 9, 2024 posting by Ani Blundel for tellyvisions.org,

… what is this show that’s caused the entire country to rise up as one to defend the rights of the lowly sub-postal worker? Known as the “British Post Office scandal,” the incidents first began in 1999 when the U.K. postal system began to switch to digital systems, using the Horizon Accounting system to track the monies brought in. However, the IT system was faulty from the start, and rather than blame the technology, the British government accused, arrested, persecuted, and convicted over 700 postal workers of fraud and theft. This continued through 2015 when the glitch was finally recognized, and in 2019, the convictions were ruled to be a miscarriage of justice.
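How does a faulty accounting system turn into criminal prosecutions? Duplicated transactions were reportedly among the Horizon defects. Here’s a minimal, entirely hypothetical Python sketch (not Horizon’s actual code) of how one such flaw, a message retry with no duplicate check, can make a till look short of money that never existed:

class BranchLedger:
    """Toy model of a central ledger for a single post office branch."""
    def __init__(self):
        self.recorded_sales = 0   # what the central system believes was taken in
        self.seen_ids = set()     # transaction ids already processed

    def record_sale_buggy(self, amount, txn_id):
        # No duplicate check: a retried message gets counted twice.
        self.recorded_sales += amount

    def record_sale_safe(self, amount, txn_id):
        # Idempotent version: the same transaction id is applied only once.
        if txn_id not in self.seen_ids:
            self.seen_ids.add(txn_id)
            self.recorded_sales += amount

ledger = BranchLedger()
actual_cash = 100  # one real 100-pound sale rung up at the counter

# The terminal sends the sale, times out waiting for an acknowledgement,
# and automatically resends the identical message.
for attempt in range(2):
    ledger.record_sale_buggy(100, "T-001")

shortfall = ledger.recorded_sales - actual_cash
print(f"system expects {ledger.recorded_sales}, till holds {actual_cash}, "
      f"apparent shortfall {shortfall}")
# prints: system expects 200, till holds 100, apparent shortfall 100

Reconcile that inflated ledger against the real cash in the till and the ‘missing’ 100 pounds looks like theft, which is roughly the position hundreds of sub-postmasters found themselves in.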

Here’s the series synopsis:

The drama tells the story of one of the greatest miscarriages of justice in British legal history. Hundreds of innocent sub-postmasters and postmistresses were wrongly accused of theft, fraud, and false accounting due to a defective IT system. Many of the wronged workers were prosecuted, some of whom were imprisoned for crimes they never committed, and their lives were irreparably ruined by the scandal. Following the landmark Court of Appeal decision to overturn their criminal convictions, dozens of former sub-postmasters and postmistresses have been exonerated on all counts as they battled to finally clear their names. They fought for over ten years, finally proving their innocence and sealing a resounding victory, but all involved believe the fight is not over yet, not by a long way.

Here’s a video trailer for ‘Mr. Bates vs. The Post Office’,

More from Blundel’s January 9, 2024 posting, Note: A link has been removed,

The outcry from the general public against the government’s bureaucratic mismanagement and abuse of employees has been loud and sustained enough that Prime Minister Rishi Sunak had to come out with a statement condemning what happened back during the 2009 incident. Further, the current Justice Secretary, Alex Chalk, is now trying to figure out the fastest way to exonerate the hundreds of sub-post managers and sub-postmistresses who were wrongfully convicted back then and if there are steps to be taken to punish the post office a decade later.

It’s a horrifying story and the worst I’ve seen so far but, sadly, it’s not the only one of its kind.

Too often people’s concerns and worries about new technology are dismissed or trivialized. Somehow, all the work done to establish ethical standards and develop trust seems to be used as a kind of sop to the concerns rather than being integrated into the implementation of life-altering technologies.

Canadian scientists still being muzzled and a call for action on values and ethics in the Canadian federal public service

I’m starting with the older news about a survey finding that Canadian scientists are being muzzled, before moving on to a more recent survey in which workers in the Canadian public service (where most Canadian scientists are employed) criticize the government’s values and ethics.

Muzzles, anyone?

It’s not exactly surprising to hear that Canadian scientists are still being muzzled (for another recent story, see my November 7, 2023 posting, “Money and its influence on Canada’s fisheries and oceans,” for some specifics; two of the authors of that paper are associated with Dalhousie University, Nova Scotia, Canada).

This December 13, 2023 essay is by Alana Westwood, Manjulika E. Robertson and Samantha M. Chu (all of Dalhousie University, but none listed as authors on the ‘money, fisheries, and oceans’ paper) on The Conversation (h/t December 14, 2023 news item on phys.org). The authors describe some recent research into the Canadian situation, specifically since the 2015 election, when the Liberals formed the government and ‘removed’ the muzzles placed on scientists by the previous Conservative government,

We recently surveyed 741 environmental researchers across Canada in two separate studies into interference. We circulated our survey through scientific societies related to environmental fields, as well as directly emailing Canadian authors of peer-reviewed research in environmental disciplines.

Researchers were asked (1) if they believed they had experienced interference in their work, (2) the sources and types of this interference, and (3) the subsequent effects on their career satisfaction and well-being.

We also asked demographic information to understand whether researchers’ perceptions of interference differed by career stage, research area or identity.

Although overall ability to communicate is improving, interference is a pervasive issue in Canada, including from government, private industry and academia. We found 92 per cent of the environmental researchers reported having experienced interference with their ability to communicate or conduct their research in some form.

Interference also manifested in different ways and already-marginalized researchers experienced worse outcomes.

The writers go on to offer a history of the interference (there’s also a more detailed history in this May 20, 2015 Canadian Broadcasting Corporation [CBC] online news article by Althea Manasan) before offering more information about results from the two recent surveys, Note: Links have been removed,

In our survey, respondents indicated that, overall, their ability to communicate with the public has improved in recent years. Of the respondents aware of the government’s scientific integrity policies, roughly half of them attribute positive changes to them.

Others argued that the 2015 change in government [from Conservative to Liberal] had the biggest influence. In the first few months of their tenure, the Liberal government created a new cabinet position, the Minister of Science (this position was absorbed into the role of Minister of Innovation, Science, and Industry in 2019), and appointed a chief science advisor among other changes.

Though the ability to communicate has generally improved, many of the researchers argued interference still goes on in subtler ways. These included undue restriction on what kind of environmental research they can do, and funding to pursue them. Many respondents attributed those restrictions to the influence of private industry [emphasis mine].

Respondents identified the major sources of external interference as management, workplace policies, and external research partners. The chief motivations for interference, as the scientists saw it, included downplaying environmental risks, justifying an organization’s current position on an issue and avoiding contention.

Our most surprising finding was almost half of respondents said they limited their communications with the public and policymakers due to fears of negative backlash and reduced career opportunities.

In addition, interference had not been experienced equally. Early career and marginalized scientists — including those who identify as women, racialized, living with a disability and 2SLGBTQI+ — reported facing significantly more interference than their counterparts.

Scientists studying climate change, pollution, environmental impact assessments and threatened species were also more likely to experience interference with their work than scientists in other disciplines.

The researchers used a single survey as the basis for two studies concerning interference in science,

Interference in science: scientists’ perspectives on their ability to communicate and conduct environmental research in Canada by Manjulika E. Robertson, Samantha M. Chu, Anika Cloutier, Philippe Mongeon, Don A. Driscoll, Tej Heer, and Alana R. Westwood. FACETS 8 (1) 30 November 2023 DOI: https://doi.org/10.1139/facets-2023-0005

This paper is open access.

Do environmental researchers from marginalized groups experience greater interference? Understanding scientists’ perceptions by Samantha M. Chu, Manjulika E. Robertson, Anika Cloutier, Suchinta Arif, and Alana R. Westwood. FACETS, 30 November 2023. DOI: https://doi.org/10.1139/facets-2023-0006

This paper is open access.

This next bit is on a somewhat related topic.

The Canadian government’s public service: values and ethics

Before launching into the latest news, here’s a little background. In 2016 the newly elected Liberal government implemented a new payroll system for the Canadian civil/public service. It was a débacle, which continues to this day (for the latest news I could find, see this September 1, 2023 article by Sam Konnert for CBC online news).

It was preventable and both the Conservative and Liberal governments of the day are responsible. You can get more details from my December 27, 2019 posting; scroll down to “The Minister of Digital Government and a bureaucratic débacle” and read on from there. In short, elected officials of both the Liberal and Conservative governments refused to listen when employees (both from the government and from the contractor) expressed grave concerns about the proposed pay system.

Now for public service employee morale, from a February 7, 2024 article by Robyn Miller for CBC news online, Note: Links have been removed,

Unions representing federal public servants say the government needs to do more to address dissatisfaction among the workforce after a recent report found some employees are unable to feel pride in their work.

“It’s more difficult now to be proud to be a public servant because of people’s perceptions of the institution and because of Canada’s role on the global stage,” said one participant who testified as part of the Deputy Ministers’ Task Team on Values and Ethics Report.

The report was published in late December [2023] by members of a task force assembled by Privy Council Clerk John Hannaford.

It’s the first major values and ethics review since an earlier report titled A Strong Foundation was released nearly 30 years ago.

Alex Silas, a regional executive vice-president of the Public Service Alliance of Canada, said the union supports the recommendations in the report but wants to see action.

“What we’ve seen historically, unfortunately, is that the values and ethics proposed by the federal government are not implemented in the workplaces of the federal government,” Silas said.

According to the report, it drew its findings from more than 90 conversations with public servants and external stakeholders starting in September 2023.

The report notes “public servants must provide frank and professional advice, without partisan considerations or fear of criticism or political reprisals.” [emphasis mine]

“The higher up the food chain you go, the less accountability seems to exist,” said one participant.

So, either elected officials and/or higher-ups don’t listen when you speak up, or you’re afraid to speak up for fear of criticism and/or reprisals. Plus, there’s outright interference, as noted in the survey of scientists.

For the curious, here’s a link to the Deputy Ministers’ Task Team on Values and Ethics Report to the Clerk of the Privy Council (Canada 2023).

Let’s hope this airing of dirty laundry leads to some changes.

Synthetic human embryos—what now? (2 of 2)

The term they’re using in the Weizmann Institute of Science’s (Israel) announcement is “a genuinely accurate human embryo model.” This is in contrast to previous announcements, including the one from the University of Cambridge team highlighted in Part 1.

From a September 6, 2023 news item on phys.org, Note: A link has been removed,

A research team headed by Prof. Jacob Hanna at the Weizmann Institute of Science has created complete models of human embryos from stem cells cultured in the lab—and managed to grow them outside the womb up to day 14. As reported today [September 6, 2023] in Nature, these synthetic embryo models had all the structures and compartments characteristic of this stage, including the placenta, yolk sac, chorionic sac and other external tissues that ensure the models’ dynamic and adequate growth.

Cellular aggregates derived from human stem cells in previous studies could not be considered genuinely accurate human embryo models, because they lacked nearly all the defining hallmarks of a post-implantation embryo. In particular, they failed to contain several cell types that are essential to the embryo’s development, such as those that form the placenta and the chorionic sac. In addition, they did not have the structural organization characteristic of the embryo and revealed no dynamic ability to progress to the next developmental stage.

Given their authentic complexity, the human embryo models obtained by Hanna’s group may provide an unprecedented opportunity to shed new light on the embryo’s mysterious beginnings. Little is known about the early embryo because it is so difficult to study, for both ethical and technical reasons, yet its initial stages are crucial to its future development. During these stages, the clump of cells that implants itself in the womb on the seventh day of its existence becomes, within three to four weeks, a well-structured embryo that already contains all the body organs.

“The drama is in the first month, the remaining eight months of pregnancy are mainly lots of growth,” Hanna says. “But that first month is still largely a black box. Our stem cell–derived human embryo model offers an ethical and accessible way of peering into this box. It closely mimics the development of a real human embryo, particularly the emergence of its exquisitely fine architecture.”

A stem cell–derived human embryo model at a developmental stage equivalent to that of a day 14 embryo. The model has all the compartments that define this stage: the yolk sac (yellow) and the part that will become the embryo itself, topped by the amnion (blue) – all enveloped by cells that will become the placenta (pink). Courtesy: Weizmann Institute of Science

A September 6, 2023 Weizmann Institute of Science press release, which originated the news item, offers a wealth of detail, Note: Links have been removed,

Letting the embryo model say “Go!”

Hanna’s team built on their previous experience in creating synthetic stem cell–based models of mouse embryos. As in that research, the scientists made no use of fertilized eggs or a womb. Rather, they started out with human cells known as pluripotent stem cells, which have the potential to differentiate into many, though not all, cell types. Some were derived from adult skin cells that had been reverted to “stemness.” Others were the progeny of human stem cell lines that had been cultured for years in the lab.

The researchers then used Hanna’s recently developed method to reprogram pluripotent stem cells so as to turn the clock further back: to revert these cells to an even earlier state – known as the naïve state – in which they are capable of becoming anything, that is, specializing into any type of cell. This stage corresponds to day 7 of the natural human embryo, around the time it implants itself in the womb. Hanna’s team had in fact been the first to start describing methods to generate human naïve stem cells, back in 2013; they continued to improve these methods, which stand at the heart of the current project, over the years.

The scientists divided the cells into three groups. The cells intended to develop into the embryo were left as is. The cells in each of the other groups were treated only with chemicals, without any need for genetic modification, so as to turn on certain genes, which was intended to cause these cells to differentiate toward one of three tissue types needed to sustain the embryo: placenta, yolk sac or the extraembryonic mesoderm membrane that ultimately creates the chorionic sac.

Soon after being mixed together under optimized, specifically developed conditions, the cells formed clumps, about 1 percent of which self-organized into complete embryo-like structures. “An embryo is self-driven by definition; we don’t need to tell it what to do – we must only unleash its internally encoded potential,” Hanna says. “It’s critical to mix in the right kinds of cells at the beginning, which can only be derived from naïve stem cells that have no developmental restrictions. Once you do that, the embryo-like model itself says, ‘Go!’”

The stem cell–based embryo-like structures (termed SEMs) developed normally outside the womb for 8 days, reaching a developmental stage equivalent to day 14 in human embryonic development. That’s the point at which natural embryos acquire the internal structures that enable them to proceed to the next stage: developing the progenitors of body organs.

Complete human embryo models match classic diagrams in terms of structure and cell identity

When the researchers compared the inner organization of their stem cell–derived embryo models with illustrations and microscopic anatomy sections in classical embryology atlases from the 1960s, they found an uncanny structural resemblance between the models and the natural human embryos at the corresponding stage. Every compartment and supporting structure was not only there, but in the right place, size and shape. Even the cells that make the hormone used in pregnancy testing were there and active: When the scientists applied secretions from these cells to a commercial pregnancy test, it came out positive.

In fact, the study has already produced a finding that may open a new direction of research into early pregnancy failure. The researchers discovered that if the embryo is not enveloped by placenta-forming cells in the right manner at day 3 of the protocol (corresponding to day 10 in natural embryonic development), its internal structures, such as the yolk sac, fail to properly develop.

“An embryo is not static. It must have the right cells in the right organization, and it must be able to progress – it’s about being and becoming,” Hanna says. “Our complete embryo models will help researchers address the most basic questions about what determines its proper growth.”

This ethical approach to unlocking the mysteries of the very first stages of embryonic development could open numerous research paths. It might help reveal the causes of many birth defects and types of infertility. It could also lead to new technologies for growing transplant tissues and organs. And it could offer a way around experiments that cannot be performed on live embryos – for example, determining the effects of exposure to drugs or other substances on fetal development.

For people who are visually inclined, there are two videos embedded in the September 6, 2023 Weizmann Institute of Science press release.

Here’s a link to and a citation for the paper,

Complete human day 14 post-implantation embryo models from naïve ES cells by Bernardo Oldak, Emilie Wildschutz, Vladyslav Bondarenko, Mehmet-Yunus Comar, Cheng Zhao, Alejandro Aguilera-Castrejon, Shadi Tarazi, Sergey Viukov, Thi Xuan Ai Pham, Shahd Ashouokhi, Dmitry Lokshtanov, Francesco Roncato, Eitan Ariel, Max Rose, Nir Livnat, Tom Shani, Carine Joubran, Roni Cohen, Yoseph Addadi, Muriel Chemla, Merav Kedmi, Hadas Keren-Shaul, Vincent Pasque, Sophie Petropoulos, Fredrik Lanner, Noa Novershtern & Jacob H. Hanna. Nature (2023) DOI: https://doi.org/10.1038/s41586-023-06604-5 Published: 06 September 2023

This paper is behind a paywall.

As for the question I asked in the head “what now?” I have absolutely no idea.

Synthetic human embryos—what now? (1 of 2)

Usually, there’s a rough chronological order to how I introduce the research, but this time I’m looking at the term used to describe it, following up with the various news releases and commentaries about the research, and finishing with a Canadian perspective.

After writing this post (but before it was published), the Weizmann Institute of Science (Israel) made their September 6, 2023 announcement and things changed a bit. That’s in Part two.

Say what you really mean (a terminology issue)

First, it might be useful to investigate the term, ‘synthetic human embryos’ as Julian Hitchcock does in his June 29, 2023 article on Bristows website (h/t Mondaq’s July 5, 2023 news item), Note: Links have been removed,

“Synthetic Embryos” are neither Synthetic nor Embryos. So why are editors giving that name to stem cell-based models of human development?

One of the less convincing aspects of the last fortnight’s flurry of announcements about advances in simulating early human development (see here) concerned their name. Headlines galore (in newspapers and scientific journals) referred to “synthetic embryos“.

But embryo models, however impressive, are not embryos. To claim that the fundamental stages of embryo development that we learnt at school – fertilisation, cleavage and compaction – could now be bypassed to achieve the same result would be wrong. Nor are these objects “synthesised”: indeed, their interest to us lies in the ways in which they organise themselves. The researchers merely place the stem cells in a matrix in appropriate conditions, then stand back and watch them do it. Scientists were therefore unhappy about this use of the term in news media, and relieved when the International Society for Stem Cell Research (ISSCR) stepped in with a press release:

“Unlike some recent media reports describing this research, the ISSCR advises against using the term “synthetic embryo” to describe embryo models, because it is inaccurate and can create confusion. Integrated embryo models are neither synthetic nor embryos. While these models can replicate aspects of the early-stage development of human embryos, they cannot and will not develop to the equivalent of postnatal stage humans. Further, the ISSCR Guidelines prohibit the transfer of any embryo model to the uterus of a human or an animal.”

Although this was the ISSCR’s first attempt to put that position to the public, it had already made that recommendation to the research community two years previously. Its 2021 Guidelines for Stem Cell Research and Clinical Translation had recommended researchers to “promote accurate, current, balanced, and responsive public representations of stem cell research”. In particular:

“While organoids, chimeras, embryo models, and other stem cell-based models are useful research tools offering possibilities for further scientific progress, limitations on the current state of scientific knowledge and regulatory constraints must be clearly explained in any communications with the public or media. Suggestions that any of the current in vitro models can recapitulate an intact embryo, human sentience or integrated brain function are unfounded overstatements that should be avoided and contradicted with more precise characterizations of current understanding.”

Here’s a little bit about Hitchcock from his Bristows profile page,

  • Diploma Medical School, University of Birmingham (1975-78)
  • LLB, University of Wolverhampton
  • Diploma in Intellectual Property Law & Practice, University of Bristol
  • Qualified 1998

Following an education in medicine at the University of Birmingham and a career as a BBC science producer, Julian has focused on the law and regulation of life science technologies since 1997, practising in England and Australia. He joined Bristows with Alex Denoon in 2018.

Hitchcock’s June 29, 2023 article comments on why this term is being used,

I have a lot of sympathy with the position of the science writers and editors incurring the scientists’ ire. First, why should journalists have known of the ISSCR’s recommendations on the use of the term “synthetic embryo”? A journalist who found Recommendation 4.1 of the ISSCR Guidelines would probably not have found them specific enough to address the point, and the academic introduction containing the missing detail is hard to find. …

My second reason for being sympathetic to the use of the terrible term is that no suitable alternative has been provided, other than in the Stem Cell Reports paper, which recommends the umbrella terms “embryo models” or “stem cell based embryo models”. …

When asked why she had used the term “synthetic embryo”, the journalist I contacted remarked that, “We’re still working out the right language and it’s something we’re discussing and will no doubt evolve along with the science”.

It is absolutely in the public’s interest (and in the interest of science), that scientific research is explained in terms that the public understands. There is, therefore, a need, I think, for the scientific community to supply a name to the media or endure the penalties of misinformation …

In such an intensely competitive field of research, disagreement among researchers, even as to names, is inevitable. In consequence, however, journalists and their audiences are confronted by a slew of terms which may or may not be synonymous or overlapping, with no agreed term [emphasis mine] for the overall class of stem cell based embryo models. We cannot blame them if they make up snappy titles of their own [emphasis mine]. …

The announcement

The earliest date for the announcement at the International Society for Stem Cell Research meeting that I can find is Hannah Devlin’s June 14, 2023 article in The Guardian newspaper, Note: A link has been removed,

Scientists have created synthetic human embryos using stem cells, in a groundbreaking advance that sidesteps the need for eggs or sperm.

Scientists say these model embryos, which resemble those in the earliest stages of human development, could provide a crucial window on the impact of genetic disorders and the biological causes of recurrent miscarriage.

However, the work also raises serious ethical and legal issues as the lab-grown entities fall outside current legislation in the UK and most other countries.

The structures do not have a beating heart or the beginnings of a brain, but include cells that would typically go on to form the placenta, yolk sac and the embryo itself.

Prof Magdalena Żernicka-Goetz, of the University of Cambridge and the California Institute of Technology, described the work in a plenary address on Wednesday [June 14, 2023] at the International Society for Stem Cell Research’s annual meeting in Boston.

The (UK) Science Media Centre made expert comments available in a June 14, 2023 posting “expert reaction to Guardian reporting news of creation of synthetic embryos using stem cells.”

Two days later, this June 16, 2023 essay by Kathryn MacKay, Senior Lecturer in Bioethics, University of Sydney (Australia), appeared on The Conversation (h/t June 16, 2023 news item on phys.org), Note: Links have been removed,

Researchers have created synthetic human embryos using stem cells, according to media reports. Remarkably, these embryos have reportedly been created from embryonic stem cells, meaning they do not require sperm and ova.

This development, widely described as a breakthrough that could help scientists learn more about human development and genetic disorders, was revealed this week in Boston at the annual meeting of the International Society for Stem Cell Research.

The research, announced by Professor Magdalena Żernicka-Goetz of the University of Cambridge and the California Institute of Technology, has not yet been published in a peer-reviewed journal. But Żernicka-Goetz told the meeting these human-like embryos had been made by reprogramming human embryonic stem cells.

So what does all this mean for science, and what ethical issues does it present?

MacKay goes on to answer her own questions, from the June 16, 2023 essay, Note: A link has been removed,

One of these quandaries arises around whether their creation really gets us away from the use of human embryos.

Robin Lovell-Badge, the head of stem cell biology and developmental genetics at the Francis Crick Institute in London UK, reportedly said that if these human-like embryos can really model human development in the early stages of pregnancy, then we will not have to use human embryos for research.

At the moment, it is unclear if this is the case for two reasons.

First, the embryos were created from human embryonic stem cells, so it seems they do still need human embryos for their creation. Perhaps more light will be shed on this when Żernicka-Goetz’s research is published.

Second, there are questions about the extent to which these human-like embryos really can model human development.

Professor Magdalena Żernicka-Goetz’s research is published

Almost two weeks later the research from the Cambridge team (there are other teams and countries also racing; see Part two for the news from Sept. 6, 2023) was published, from a June 27, 2023 news item on ScienceDaily,

Cambridge scientists have created a stem cell-derived model of the human embryo in the lab by reprogramming human stem cells. The breakthrough could help research into genetic disorders and in understanding why and how pregnancies fail.

Published today [Tuesday, June 27, 2023] in the journal Nature, this embryo model is an organised three-dimensional structure derived from pluripotent stem cells that replicate some developmental processes that occur in early human embryos.

Use of such models allows experimental modelling of embryonic development during the second week of pregnancy. They can help researchers gain basic knowledge of the developmental origins of organs and specialised cells such as sperm and eggs, and facilitate understanding of early pregnancy loss.

A June 27, 2023 University of Cambridge press release (also on EurekAlert), which originated the news item, provides more detail about the work,

“Our human embryo-like model, created entirely from human stem cells, gives us access to the developing structure at a stage that is normally hidden from us due to the implantation of the tiny embryo into the mother’s womb,” said Professor Magdalena Zernicka-Goetz in the University of Cambridge’s Department of Physiology, Development and Neuroscience, who led the work.

She added: “This exciting development allows us to manipulate genes to understand their developmental roles in a model system. This will let us test the function of specific factors, which is difficult to do in the natural embryo.”

In natural human development, the second week of development is an important time when the embryo implants into the uterus. This is the time when many pregnancies are lost.

The new advance enables scientists to peer into the mysterious ‘black box’ period of human development – usually following implantation of the embryo in the uterus – to observe processes never directly observed before.

Understanding these early developmental processes holds the potential to reveal some of the causes of human birth defects and diseases, and to develop tests for these in pregnant women.

Until now, the processes could only be observed in animal models, using cells from zebrafish and mice, for example.

Legal restrictions in the UK currently prevent the culture of natural human embryos in the lab beyond day 14 of development: this time limit was set to correspond to the stage where the embryo can no longer form a twin. [emphasis mine]

Until now, scientists have only been able to study this period of human development using donated human embryos. This advance could reduce the need for donated human embryos in research.

Zernicka-Goetz says that while these models can mimic aspects of the development of human embryos, they cannot and will not develop to the equivalent of postnatal stage humans.

Over the past decade, Zernicka-Goetz’s group in Cambridge has been studying the earliest stages of pregnancy, in order to understand why some pregnancies fail and some succeed.

In 2021 and then in 2022 her team announced in Developmental Cell, Nature and Cell Stem Cell journals that they had finally created model embryos from mouse stem cells that can develop to form a brain-like structure, a beating heart, and the foundations of all other organs of the body.

The new models derived from human stem cells do not have a brain or beating heart, but they include cells that would typically go on to form the embryo, placenta and yolk sac, and develop to form the precursors of germ cells (that will form sperm and eggs).

Many pregnancies fail at the point when these three types of cells, which orchestrate implantation into the uterus, begin to send mechanical and chemical signals to each other, telling the embryo how to develop properly.

There are clear regulations governing stem cell-based models of human embryos and all researchers doing embryo modelling work must first be approved by ethics committees. Journals require proof of this ethics review before they accept scientific papers for publication. Zernicka-Goetz’s laboratory holds these approvals.

“It is against the law and FDA regulations to transfer any embryo-like models into a woman for reproductive aims. These are highly manipulated human cells and their attempted reproductive use would be extremely dangerous,” said Dr Insoo Hyun, Director of the Center for Life Sciences and Public Learning at Boston’s Museum of Science and a member of Harvard Medical School’s Center for Bioethics.

Zernicka-Goetz also holds a position at the California Institute of Technology and is a NOMIS Distinguished Scientist and Scholar awardee.

The research was funded by the Wellcome Trust and Open Philanthropy.

(There’s more about legal concerns further down in this post.)

Here’s a link to and a citation for the paper,

Pluripotent stem cell-derived model of the post-implantation human embryo by Bailey A. T. Weatherbee, Carlos W. Gantner, Lisa K. Iwamoto-Stohl, Riza M. Daza, Nobuhiko Hamazaki, Jay Shendure & Magdalena Zernicka-Goetz. Nature (2023) DOI: https://doi.org/10.1038/s41586-023-06368-y Published: 27 June 2023

This paper is open access.

Published the same day (June 27, 2023) is a paper (citation and link follow) also focused on studying human embryonic development using stem cells. First, there’s this from the Abstract,

Investigating human development is a substantial scientific challenge due to the technical and ethical limitations of working with embryonic samples. In the face of these difficulties, stem cells have provided an alternative to experimentally model inaccessible stages of human development in vitro …

This time the work is from a US/German team,

Self-patterning of human stem cells into post-implantation lineages by Monique Pedroza, Seher Ipek Gassaloglu, Nicolas Dias, Liangwen Zhong, Tien-Chi Jason Hou, Helene Kretzmer, Zachary D. Smith & Berna Sozen. Nature (2023) DOI: https://doi.org/10.1038/s41586-023-06354-4 Published: 27 June 2023

The paper is open access.

Legal concerns and a Canadian focus

A July 25, 2023 essay by Françoise Baylis and Jocelyn Downie of Dalhousie University (Nova Scotia, Canada) for The Conversation (h/t July 25, 2023 article on phys.org) covers the advantages of doing this work before launching into a discussion of legislation and limits in the UK and, more extensively, in Canada, Note: Links have been removed,

This research could increase our understanding of human development and genetic disorders, help us learn how to prevent early miscarriages, lead to improvements in fertility treatment, and — perhaps — eventually allow for reproduction without using sperm and eggs.

Synthetic human embryos — also called embryoid bodies, embryo-like structures or embryo models — mimic the development of “natural human embryos,” those created by fertilization. Synthetic human embryos include the “cells that would typically go on to form the embryo, placenta and yolk sac, and develop to form the precursors of germ cells (that will form sperm and eggs).”

Though research involving natural human embryos is legal in many jurisdictions, it remains controversial. For some people, research involving synthetic human embryos is less controversial because these embryos cannot “develop to the equivalent of postnatal stage humans.” In other words, these embryos are non-viable and cannot result in live births.

Now, for a closer look at the legalities in the UK and in Canada, from the July 25, 2023 essay, Note: Links have been removed,

The research presented by Żernicka-Goetz at the ISSCR meeting took place in the United Kingdom. It was conducted in accordance with the Human Fertilisation and Embryology Act 1990, with the approval of the U.K. Stem Cell Bank Steering Committee.

U.K. law limits the research use of human embryos to 14 days of development. An embryo is defined as “a live human embryo where fertilisation is complete, and references to an embryo include an egg in the process of fertilisation.”

Synthetic embryos are not created by fertilization and therefore, by definition, the 14-day limit on human embryo research does not apply to them. This means that synthetic human embryo research beyond 14 days can proceed in the U.K.

The door to the touted potential benefits — and ethical controversies — seems wide open in the U.K.

While the law in the U.K. does not apply to synthetic human embryos, the law in Canada clearly does. This is because the legal definition of an embryo in Canada is not limited to embryos created by fertilization [emphasis mine].

The Assisted Human Reproduction Act (the AHR Act) defines an embryo as “a human organism during the first 56 days of its development following fertilization or creation, excluding any time during which its development has been suspended.”

Based on this definition, the AHR Act applies to embryos created by reprogramming human embryonic stem cells — in other words, synthetic human embryos — provided such embryos qualify as human organisms.

A synthetic human embryo is a human organism. It is of the species Homo sapiens, and is thus human. It also qualifies as an organism — a life form — alongside other organisms created by means of fertilization, asexual reproduction, parthenogenesis or cloning.

Given that the AHR Act applies to synthetic human embryos, there are legal limits on their creation and use in Canada.

First, human embryos — including synthetic human embryos – can only be created for the purposes of “creating a human being, improving or providing instruction in assisted reproduction procedures.”

Given the state of the science, it follows that synthetic human embryos could legally be created for the purpose of improving assisted reproduction procedures.

Second, “spare” or “excess” human embryos — including synthetic human embryos — originally created for one of the permitted purposes, but no longer wanted for this purpose, can be used for research. This research must be done in accordance with the consent regulations which specify that consent must be for a “specific research project.”

Finally, all research involving human embryos — including synthetic human embryos — is subject to the 14-day rule. The law stipulates that: “No person shall knowingly… maintain an embryo outside the body of a female person after the fourteenth day of its development following fertilization or creation, excluding any time during which its development has been suspended.”

Putting this all together, the creation of synthetic embryos for improving assisted human reproduction procedures is permitted, as is research using “spare” or “excess” synthetic embryos originally created for this purpose — provided there is specific consent and the research does not exceed 14 days.

This means that while synthetic human embryos may be useful for limited research on pre-implantation embryo development, they are not available in Canada for research on post-implantation embryo development beyond 14 days.
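To pull the essay’s chain of reasoning together, here’s a toy Python decision helper encoding the three tests just described (permitted creation purpose, specific consent, and the 14-day rule). It’s a sketch of the essay’s summary, not legal advice, and the predicates grossly simplify the statutory language:

# Toy decision helper for the Canadian AHR Act limits *as described in the
# essay above*. Not legal advice; the predicates are simplifications.

PERMITTED_CREATION_PURPOSES = {
    "creating a human being",
    "improving assisted reproduction procedures",
    "providing instruction in assisted reproduction procedures",
}

def research_use_permitted(creation_purpose, has_specific_consent, days_of_development):
    """Would research use of a (synthetic) human embryo be permitted,
    per the rules as summarized in the essay?"""
    if creation_purpose not in PERMITTED_CREATION_PURPOSES:
        return False  # embryos may only be created for a permitted purpose
    if not has_specific_consent:
        return False  # consent must name the specific research project
    return days_of_development <= 14  # no maintenance beyond day 14

# A spare synthetic embryo created to improve assisted reproduction,
# studied with specific consent to day 13: permitted. To day 21: not.
print(research_use_permitted("improving assisted reproduction procedures", True, 13))  # True
print(research_use_permitted("improving assisted reproduction procedures", True, 21))  # False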

The authors close with this comment about the prospects for expanding Canada’s 14-day limit, from the July 25, 2023 essay,

… any argument will have to overcome the political reality that the federal government is unlikely to open up the Pandora’s box of amending the AHR Act.

It therefore seems likely that synthetic human embryo research will remain limited in Canada for the foreseeable future.

As mentioned, in September 2023 there was a new development. See: Part two.

Ethical nanobiotechnology

This paper on ethics (aside: I have a few comments after the news release and citation) comes from the US Pacific Northwest National Laboratory (PNNL) according to a July 12, 2023 news item on phys.org,

Prosthetics moved by thoughts. Targeted treatments for aggressive brain cancer. Soldiers with enhanced vision or bionic ears. These powerful technologies sound like science fiction, but they’re becoming possible thanks to nanoparticles.

“In medicine and other biological settings, nanotechnology is amazing and helpful, but it could be harmful if used improperly,” said Pacific Northwest National Laboratory (PNNL) chemist Ashley Bradley, part of a team of researchers who conducted a comprehensive survey of nanobiotechnology applications and policies.

Their research, available in Health Security, works to sum up the very large, active field of nanotechnology in biology applications, draw attention to regulatory gaps, and offer areas for further consideration.

A July 12, 2023 PNNL news release (also on EurekAlert), which originated the news item, delves further into the topic, Note: A link has been removed,

“In our research, we learned there aren’t many global regulations yet,” said Bradley. “And we need to create a common set of rules to figure out the ethical boundaries.”

Nanoparticles, big differences

Nanoparticles are clusters of molecules with different properties than large amounts of the same substances. In medicine and other biology applications, these properties allow nanoparticles to act as the packaging that delivers treatments through cell walls and the difficult to cross blood-brain barrier.

“You can think of the nanoparticles a little bit like the plastic around shredded cheese,” said PNNL chemist Kristin Omberg. “It makes it possible to get something perishable directly where you want it, but afterwards you’ve got to deal with a whole lot of substance where it wasn’t before.”

Unfortunately, dealing with nanoparticles in new places isn’t straightforward. Carbon is pencil lead, nano carbon conducts electricity. The same material may have different properties at the nanoscale, but most countries still regulate it the same as bulk material, if the material is regulated at all.

For example, zinc oxide, a material that was stable and unreactive as a pigment in white paint, is now accumulating in oceans when used as nanoparticles in sunscreen, warranting a call to create alternative reef-safe sunscreens. And although fats and lipids aren’t regulated, the researchers suggest which agencies could weigh in on regulations were fats to become after-treatment byproducts.

The article also inventories national and international agencies, organizations, and governing bodies with an interest in understanding how nanoparticles break down or react in a living organism and the environmental life cycle of a nanoparticle. Because nanobiotechnology spans materials science, biology, medicine, environmental science, and tech, these disparate research and regulatory disciplines must come together, often for the first time, to fully understand the impact on humans and the environment.

Dual use: Good for us, bad for us

Like other quickly growing fields, there’s a time lag between the promise of new advances and the possibilities of unintended uses.

“There were so many more applications than we thought there were,” said Bradley, who collected exciting nanobio examples such as Alzheimer’s treatment, permanent contact lenses, organ replacement, and enhanced muscle recovery, among others.

The article also highlights concerns about crossing the blood-brain barrier, thought-initiated control of computers, and nano-enabled DNA editing where the researchers suggest more caution, questioning, and attention could be warranted. This attention spans everything from deep fundamental research and regulations all the way to what Omberg called “the equivalent of tattoo removal” if home-DNA splicing attempts go south.

The researchers draw parallels to more established fields such as synthetic bio and pharmacology, which offer lessons to be learned from current concerns such as the unintended consequences of fentanyl and opioids. They believe these fields also offer examples of innovative coordination between science and ethics, such as synthetic bio’s IGEM [The International Genetically Engineered Machine competition]—student competition, to think about not just how to create, but also to shape the use and control of new technologies.

Omberg said unusually enthusiastic early reviewers of the article contributed even more potential uses and concerns, demonstrating that experts in many fields recognize ethical nanobiotechnology is an issue to get in front of. “This is a train that’s going. It will be sad if 10 years from now, we haven’t figured how to talk about it.”

Funding for the team’s research was supported by PNNL’s Biorisk Beyond the List National Security Directorate Objective.
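An aside on the release’s ‘big differences’ point: one standard reason the same substance behaves differently at the nanoscale is surface-to-volume scaling; dividing a lump of material into nanoparticles multiplies the exposed, chemically active surface enormously (quantum-size effects matter too). A quick back-of-the-envelope calculation in Python (my illustration, not from the paper) makes the point:

def total_surface_area(total_volume, edge):
    """Total surface area (m^2) when a volume (m^3) is divided into cubes
    with the given edge length (m)."""
    n_cubes = total_volume / edge**3
    return n_cubes * 6 * edge**2  # six faces per cube

volume = 1e-6  # one cubic centimetre of material, expressed in m^3
for edge in (1e-2, 1e-6, 1e-8):  # a 1 cm block, 1 micron grains, 10 nm particles
    area = total_surface_area(volume, edge)
    print(f"cube edge {edge:.0e} m -> total surface area {area:.4g} m^2")
# a 1 cm block: 0.0006 m^2; 1 micron grains: 6 m^2; 10 nm particles: 600 m^2

That’s a million-fold jump in reactive surface from the same cubic centimetre of material, which helps explain why a pigment that sat inertly in paint can behave quite differently as a sunscreen nanoparticle, and why regulating by bulk identity alone can miss the hazard.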

Here’s a link to and a citation for the paper,

The Promise of Emergent Nanobiotechnologies for In Vivo Applications and Implications for Safety and Security by Anne M. Arnold, Ashley M. Bradley, Karen L. Taylor, Zachary C. Kennedy, and Kristin M. Omberg. Health Security, Volume 20, Issue 5, October 2022, pp. 408–423. DOI: https://doi.org/10.1089/hs.2022.0014 Published online: 17 Oct 2022

This paper is open access.

You can find out more about IGEM (The International Genetically Engineered Machine competition) here.

Comments (brief)

It seems a little odd that the news release (“Prosthetics moved by thoughts …”) and the paper both reference neurotechnology without ever mentioning it by name. Here’s the reference from the paper, Note: Links have been removed,

Nanoparticles May Be Developed to Facilitate Cognitive Enhancements

The development and implementation of NPs that enhance cognitive function has yet to be realized. However, recent advances on the micro- and macro-level with neural–machine interfacing provide the building blocks necessary to develop this technology on the nanoscale. A noninvasive brain–computer interface to control a robotic arm was developed by teams at 2 universities.157 A US-based company, Neuralink, [emphasis mine] is at the forefront of implementing implantable, intracortical microelectrodes that provide an interface between the human brain and technology.158,159 Utilization of intracortical microelectrodes may ultimately provide thought-initiated access and control of computers and mobile devices, and possibly expand cognitive function by accessing underutilized areas of the brain.158

Neuralink (founded by Elon Musk) is controversial for its animal testing practices. You can find out more in Björn Ólafsson’s May 30, 2023 article for Sentient Media.

The focus on nanoparticles as the key factor in the various technologies and applications mentioned seems narrow but necessary given the breadth of topics covered in the paper as the authors themselves note in the paper’s abstract,

… In this article, while not comprehensive, we attempt to illustrate the breadth and promise of bionanotechnology developments, and how they may present future safety and security challenges. Specifically, we address current advancements to streamline the development of engineered NPs for in vivo applications and provide discussion on nano–bio interactions, NP in vivo delivery, nanoenhancement of human performance, nanomedicine, and the impacts of NPs on human health and the environment.

They have a good overview of the history and discussions about nanotechnology risks and regulation. It’s international in scope with a heavy emphasis on US efforts, as one would expect.

For anyone who’s interested in the neurotechnology end of things, I’ve got a July 17, 2023 commentary “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report.” The report was launched July 13, 2023 during UNESCO’s Global dialogue on the ethics of neurotechnology (see my July 7, 2023 posting about the then upcoming dialogue for links to more UNESCO information). Both the July 17 and July 7, 2023 postings included additional information about Neuralink.

ChatGPT and the academic cheating industry

I have two items on ChatGPT and academic cheating. The first (from April 2023) deals with the economic impact on people who make their living by writing the papers for the cheaters and the second (from May 2023) deals with unintended consequences for the cheaters (the students not the contract writers).

Making a living in Kenya

Martin K.N. Siele’s April 21, 2023 article for restofworld.org (a website where you can find “Reporting [on] Global Tech Stories”) provides a perspective that’s unfamiliar to me, Note: Links have been removed,

For the past nine years, Collins, a 27-year-old freelance writer, has been making money by writing assignments for students in the U.S. — over 13,500 kilometers away from Nanyuki in central Kenya, where he lives. He is part of the “contract cheating” industry, known locally as simply “academic writing.” Collins writes college essays on topics including psychology, sociology, and economics. Occasionally, he is even granted direct access to college portals, allowing him to submit tests and assignments, participate in group discussions, and talk to professors using students’ identities. In 2022, he made between $900 and $1,200 a month from this work.

Lately, however, his earnings have dropped to $500–$800 a month. Collins links this to the meteoric rise of ChatGPT and other generative artificial intelligence tools.

“Last year at a time like this, I was getting, on average, 50 to 70 assignments, including discussions which are shorter, around 150 words each, and don’t require much research,” Collins told Rest of World. “Right now, on average, I get around 30 to 40-something assignments.” He requested to be identified only by his first name to avoid jeopardizing his accounts on platforms where he finds clients.

In January 2023, online learning platform Study surveyed more than 1,000 American students and over 100 educators. More than 89% of the students said they had used ChatGPT for help with a homework assignment. Nearly half admitted to using ChatGPT for an at-home test or quiz, 53% had used it to write an essay, and 22% had used it for outlining one.

Collins now fears that the rise of AI could significantly reduce students’ reliance on freelancers like him in the long term, affecting their income. Meanwhile, he depends on ChatGPT to generate the content he used to outsource to other freelance writers.

While 17 states in the U.S. have banned contract cheating, it has not been a problem for freelancers in Kenya, concerned about providing for themselves and their families. Despite being the largest economy in East Africa, Kenya has the region’s highest unemployment rate, with 5.7% of the labor force out of work in 2021. Around 25.8% of the population is estimated to live in extreme poverty. This situation makes the country a potent hub for freelance workers. According to the Online Labour Index (OLI), an economic indicator that measures the global online gig economy, Kenya accounts for 1% of the world’s online freelance workforce, ranking 15th overall and second only to Egypt in Africa. About 70% of online freelancers in Kenya offer writing and translation services.

Not everyone agrees with Collins about the impact that AI tools such as ChatGPT are having on ghostwriters’ bottom lines, but everyone agrees there is an impact. If you have time, do read Siele’s April 21, 2023 article in its entirety.

The dark side of using contract writing services

This May 10, 2023 essay on The Conversation by Nathalie Wierdak (Teaching Fellow) and Lynnaire Sheridan (Senior lecturer), both at the University of Otago, takes a more standard perspective, initially (Note: Links have been removed; h/t phys.org May 11, 2023 news item),

Since the launch of ChatGPT in late 2022, academics have expressed concern over the impact the artificial intelligence service could have on student work.

But educational institutions trying to safeguard academic integrity could be looking in the wrong direction. Yes, ChatGPT raises questions about how to assess students’ learning. However, it should be less of a concern than the persistent and pervasive use of ghostwriting services.

Essentially, academic ghostwriting is when a student submits a piece of work as their own which is, in fact, written by someone else. Often dubbed “contract cheating,” the outsourcing of assessment to ghostwriters undermines student learning.

But contract cheating is increasingly commonplace as time-poor students juggle jobs to meet the soaring costs of education. And the internet creates the perfect breeding ground for willing ghostwriting entrepreneurs.

In New Zealand, 70-80% of tertiary students engage in some form of cheating. While most of this academic misconduct was collusion with peers or plagiarism, the emergence of artificial intelligence has been described as a battle academia will inevitably lose.

It is time a new approach is taken by universities.

Allowing the use of ChatGPT by students could help reduce the use of contract cheating by doing the heavy lifting of academic work while still giving students the opportunity to learn.

This essay seems to have been written as a counterpoint to Siele’s article. Here’s where the May 10, 2023 essay gets interesting,

Universities have been cracking down on ghost writing to ensure quality education, to protect their students from blackmail and to even prevent international espionage [emphasis mine].

Contract cheating websites store personal data making students unwittingly vulnerable to extortion to avoid exposure and potential expulsion from their institution, or the loss of their qualification.

Some researchers are warning there is an even greater risk – that private student data will fall into the hands of foreign state actors.

Preventing student engagement with contract cheating sites, or at least detecting students who use them, avoids the likelihood of graduates in critical job roles being targeted for nationally sensitive data.

Given the underworld associated with ghostwriting, artificial intelligence has the potential to bust the contract cheating economy. This would keep students safer by providing them with free, instant and accessible resources.

If you have time to read it in its entirety, there are other advantages to AI-enhanced learning mentioned in the May 10, 2023 essay.

Should robots have rights? Confucianism offers some ideas

Fascinating, although I’m not sure I entirely understand the argument.

This May 24, 2023 Carnegie Mellon University (CMU) news release (also on EurekAlert but published May 25, 2023) has Professor Tae Wan Kim’s clarification, Note: Links have been removed,

Philosophers and legal scholars have explored significant aspects of the moral and legal status of robots, with some advocating for giving robots rights. As robots assume more roles in the world, a new analysis reviewed research on robot rights, concluding that granting rights to robots is a bad idea. Instead, the article looks to Confucianism to offer an alternative.

The analysis, by a researcher at Carnegie Mellon University (CMU), appears in Communications of the ACM, published by the Association for Computing Machinery.

“People are worried about the risks of granting rights to robots,” notes Tae Wan Kim, Associate Professor of Business Ethics at CMU’s Tepper School of Business, who conducted the analysis. “Granting rights is not the only way to address the moral status of robots: Envisioning robots as rites bearers—not as rights bearers—could work better.”

Although many believe that respecting robots should lead to granting them rights, Kim argues for a different approach. Confucianism, an ancient Chinese belief system, focuses on the social value of achieving harmony; individuals are made distinctively human by their ability to conceive of interests not purely in terms of personal self-interest, but in terms that include a relational and a communal self. This, in turn, requires a unique perspective on rites, with people enhancing themselves morally by participating in proper rituals.

When considering robots, Kim suggests that the Confucian alternative of assigning rites—or what he calls role obligations—to robots is more appropriate than giving robots rights. The concept of rights is often adversarial and competitive, and potential conflict between humans and robots is concerning.

“Assigning role obligations to robots encourages teamwork, which triggers an understanding that fulfilling those obligations should be done harmoniously,” explains Kim. “Artificial intelligence (AI) imitates human intelligence, so for robots to develop as rites bearers, they must be powered by a type of AI that can imitate humans’ capacity to recognize and execute team activities—and a machine can learn that ability in various ways.”

Kim acknowledges that some will question why robots should be treated respectfully in the first place. “To the extent that we make robots in our image, if we don’t treat them well, as entities capable of participating in rites, we degrade ourselves,” he suggests.

Various non-natural entities—such as corporations—are considered people and even assume some Constitutional rights. In addition, humans are not the only species with moral and legal status; in most developed societies, moral and legal considerations preclude researchers from gratuitously using animals for lab experiments.

Here’s a link to and a citation for the paper,

Should Robots Have Rights or Rites? by Tae Wan Kim and Alan Strudler. Communications of the ACM, June 2023, Vol. 66, No. 6, pp. 78-85. DOI: 10.1145/3571721

This work is licensed under a Creative Commons Attribution 4.0 license (http://creativecommons.org/licenses/by/4.0/). In other words, this paper is open access.

The paper is quite readable, as academic papers go (Note: Links have been removed),

Boston Dynamics recently released a video introducing Atlas, a six-foot bipedal humanoid robot capable of search and rescue missions. Part of the video contained employees apparently abusing Atlas (for example, kicking, hitting it with a hockey stick, pushing it with a heavy ball). The video quickly raised a public and academic debate regarding how humans should treat robots. A robot, in some sense, is nothing more than software embedded in hardware, much like a laptop computer. If it is your property and kicking it harms no one nor infringes on anyone’s rights, it’s okay to kick it, although that would be a stupid thing to do. Likewise, there seems to be no significant reason that kicking a robot should be deemed as a moral or legal wrong. However, the question—”What do we owe to robots?”—is not that simple. Philosophers and legal scholars have seriously explored and defended some significant aspects of the moral and legal status of robots—and their rights.3,6,15,16,24,29,36 In fact, various non-natural entities—for example, corporations—are treated as persons and even enjoy some constitutional rights.a In addition, humans are not the only species that get moral and legal status. In most developed societies, for example, moral and legal considerations preclude researchers from gratuitously using animals for lab experiments. The fact that corporations are treated as persons and animals are recognized as having some rights does not entail that robots should be treated analogously.

Connie Lin’s May 26, 2023 article for Fast Company “Confucianism for robots? Ethicist says that’s better than giving them full rights” offers a brief overview and more comments from Kim. For the curious, you can find out more about Boston Dynamics and Atlas here.

Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report

Launched on Thursday, July 13, 2023 during UNESCO’s (United Nations Educational, Scientific, and Cultural Organization) “Global dialogue on the ethics of neurotechnology,” the report “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends” by Daniel S. Hain, Roman Jurowetzki, Mariagrazia Squicciarini, and Lihui Xu ties together the usual measures of national scientific supremacy (number of papers published and number of patents filed) with information on corporate investment in the field. Consequently, it provides better insight into the international neurotechnology scene than is sometimes found in these kinds of reports. By the way, the report is open access.

Here’s what I mean, from the report‘s short summary,

Since 2013, government investments in this field have exceeded $6 billion. Private investment has also seen significant growth, with annual funding experiencing a 22-fold increase from 2010 to 2020, reaching $7.3 billion and totaling $33.2 billion.

This investment has translated into a 35-fold growth in neuroscience publications between 2000-2021 and a 20-fold growth in innovations between 2000-2020, as proxied by patents. However, not all are poised to benefit from such developments, as big divides emerge.

Over 80% of high-impact neuroscience publications are produced by only ten countries, while 70% of countries contributed fewer than 10 such papers over the period considered. Similarly, only five countries hold 87% of IP5 neurotech patents.

This report sheds light on the neurotechnology ecosystem, that is, what is being developed, where and by whom, and informs about how neurotechnology interacts with other technological trajectories, especially Artificial Intelligence [emphasis mine]. [p. 2]

The money aspect is eye-opening even when you already have your suspicions. Also, it’s not entirely unexpected to learn that only ten countries produce over 80% of the high-impact neurotech papers and that only five countries hold 87% of the IP5 neurotech patents, but it is stunning to see it in context. (If you’re not familiar with the term ‘IP5 patents’, scroll down in this post to the relevant subhead. Hint: It means the patent was filed in one of the top five jurisdictions; I’ll leave you to guess which ones those might be.)

“Since 2013 …” isn’t quite as informative as the authors may have hoped. I wish they had given a time frame for government investments similar to what they did for corporate investments (e.g., 2010 – 2020). Also, is the $6B (likely in USD) government investment cumulative or an estimated annual number? To sum up, I would have appreciated parallel structure and specificity.
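
One bit of arithmetic the summary invites: if annual private funding grew 22-fold between 2010 and 2020 to reach $7.3 billion, the implied 2010 baseline is roughly $7.3B ÷ 22 ≈ $330 million a year, which suggests the $33.2 billion figure is the cumulative total over the decade. There’s no comparable way to decompose the $6 billion government figure.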

Nitpicks aside, there’s some very good material intended for policy makers. On that note, some of the analysis is beyond me; I haven’t used anything even somewhat close to their analytical tools in years and years. This commentary reflects my interests and a very rapid reading. One last thing: this is being written from a Canadian perspective. With those caveats in mind, here’s some of what I found.

A definition, social issues, country statistics, and more

There’s a definition for neurotechnology and a second mention of artificial intelligence being used in concert with neurotechnology. From the report‘s executive summary,

Neurotechnology consists of devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of animals or human beings. It is poised to revolutionize our understanding of the brain and to unlock innovative solutions to treat a wide range of diseases and disorders.

Similarly to Artificial Intelligence (AI), and also due to its convergence with AI, neurotechnology may have profound societal and economic impact, beyond the medical realm. As neurotechnology directly relates to the brain, it triggers ethical considerations about fundamental aspects of human existence, including mental integrity, human dignity, personal identity, freedom of thought, autonomy, and privacy [emphases mine]. Its potential for enhancement purposes and its accessibility further amplify its prospective social and societal implications.

The recent discussions held at UNESCO’s Executive Board further show Member States’ desire to address the ethics and governance of neurotechnology through the elaboration of a new standard-setting instrument on the ethics of neurotechnology, to be adopted in 2025. To this end, it is important to explore the neurotechnology landscape, delineate its boundaries, key players, and trends, and shed light on neurotech’s scientific and technological developments. [p. 7]

Here’s how they sourced the data for the report,

The present report addresses such a need for evidence in support of policy making in relation to neurotechnology by devising and implementing a novel methodology on data from scientific articles and patents:

● We detect topics over time and extract relevant keywords using a transformer-based language model fine-tuned for scientific text. Publication data for the period 2000-2021 are sourced from the Scopus database and encompass journal articles and conference proceedings in English. The 2,000 most cited publications per year are further used in in-depth content analysis.
● Keywords are identified through Named Entity Recognition and used to generate search queries for conducting a semantic search on patents’ titles and abstracts, using another language model developed for patent text. This allows us to identify patents associated with the identified neuroscience publications and their topics. The patent data used in the present analysis are sourced from the European Patent Office’s Worldwide Patent Statistical Database (PATSTAT). We consider IP5 patents filed between 2000-2020 having an English language abstract and exclude patents solely related to pharmaceuticals.

This approach allows mapping the advancements detailed in scientific literature to the technological applications contained in patent applications, allowing for an analysis of the linkages between science and technology. This almost fully automated novel approach allows repeating the analysis as neurotechnology evolves. [pp. 8-9]
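
The report doesn’t include code, but for readers who want a concrete sense of that second step, here’s a minimal sketch of keyword-driven semantic search over patent text using the sentence-transformers library. The model name, example texts, and similarity threshold are my assumptions for illustration; the report doesn’t name the patent-specific model it used.

```python
# Rough sketch of keyword-driven semantic search over patent titles and
# abstracts, loosely following the report's description. Model choice,
# texts, and threshold are assumptions, not the authors' configuration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("allenai-specter")  # stand-in scientific-text model

# Search queries built from keywords extracted (via NER, per the report)
# from highly cited neuroscience publications.
queries = [
    "brain-computer interface implantable electrode array",
    "deep brain stimulation for Parkinson's disease",
]

# Patent titles + abstracts; in the report these come from PATSTAT.
patents = [
    "Implantable intracortical microelectrode for neural recording ...",
    "Method for adjusting stimulation parameters in a DBS system ...",
    "Chemical composition for industrial lubricants ...",
]

query_emb = model.encode(queries, convert_to_tensor=True)
patent_emb = model.encode(patents, convert_to_tensor=True)
scores = util.cos_sim(query_emb, patent_emb)  # one row per query

THRESHOLD = 0.5  # assumed cut-off; the report doesn't publish one
for qi, query in enumerate(queries):
    hits = [patents[pi] for pi in range(len(patents))
            if float(scores[qi][pi]) >= THRESHOLD]
    print(f"{query!r}: {len(hits)} candidate neurotech patent(s)")
```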

Findings in bullet points,

Key stylized facts are:

● The field of neuroscience has witnessed a remarkable surge in the overall number of publications since 2000, exhibiting a nearly 35-fold increase over the period considered, reaching 1.2 million in 2021. The annual number of publications in neuroscience has nearly tripled since 2000, exceeding 90,000 publications a year in 2021. This increase became even more pronounced since 2019.
● The United States leads in terms of neuroscience publication output (40%), followed by the United Kingdom (9%), Germany (7%), China (5%), Canada (4%), Japan (4%), Italy (4%), France (4%), the Netherlands (3%), and Australia (3%). These countries account for over 80% of neuroscience publications from 2000 to 2021.
● Big divides emerge, with 70% of countries in the world having fewer than 10 high-impact neuroscience publications between 2000 and 2021.
● Specific neurotechnology-related research trends between 2000 and 2021 include:
○ An increase in Brain-Computer Interface (BCI) research around 2010, maintaining a consistent presence ever since.
○ A significant surge in Epilepsy Detection research in 2017 and 2018, reflecting the increased use of AI and machine learning in healthcare.
○ Consistent interest in Neuroimaging Analysis, which peaks around 2004, likely because of its importance in brain activity and language comprehension studies.
○ While peaking in 2016 and 2017, Deep Brain Stimulation (DBS) remains a persistent area of research, underlining its potential in treating conditions like Parkinson’s disease and essential tremor.
● Between 2000 and 2020, the total number of patent applications in this field increased significantly, experiencing a 20-fold increase from less than 500 to over 12,000. In terms of annual figures, a consistent upward trend in neurotechnology-related patent applications emerges, with a notable doubling observed between 2015 and 2020.
● The United States accounts for nearly half of all worldwide patent applications (47%). Other major contributors include South Korea (11%), China (10%), Japan (7%), Germany (7%), and France (5%). These five countries together account for 87% of IP5 neurotech patents applied for between 2000 and 2020.
○ The United States has historically led the field, with a peak around 2010, a decline towards 2015, and a recovery up to 2020.
○ South Korea emerged as a significant contributor after 1990, overtaking Germany in the late 2000s to become the second-largest developer of neurotechnology. By the late 2010s, South Korea’s annual neurotechnology patent applications approximated those of the United States.
○ China exhibits a sharp increase in neurotechnology patent applications in the mid-2010s, bringing it on par with the United States in terms of application numbers.
● The United States ranks highest in both scientific publications and patents, indicating its strong ability to transform knowledge into marketable inventions. China, France, and Korea excel in leveraging knowledge to develop patented innovations. Conversely, countries such as the United Kingdom, Germany, Italy, Canada, Brazil, and Australia lag behind in effectively translating neurotech knowledge into patentable innovations.
● In terms of patent quality measured by forward citations, the leading countries are Germany, US, China, Japan, and Korea.
● A breakdown of patents by technology field reveals that Computer Technology is the most important field in neurotechnology, exceeding Medical Technology, Biotechnology, and Pharmaceuticals. The growing importance of algorithmic applications, including neural computing techniques, also emerges by looking at the increase in patent applications in these fields between 2015-2020. Compared to the reference year, computer technologies-related patents in neurotech increased by 355% and by 92% in medical technology.
● An analysis of the specialization patterns of the top-5 countries developing neurotechnologies reveals that Germany has been specializing in chemistry-related technology fields, whereas Asian countries, particularly South Korea and China, focus on computer science and electrical engineering-related fields. The United States exhibits a balanced configuration with specializations in both chemistry and computer science-related fields.
● The entities – i.e. both companies and other institutions – leading worldwide innovation in the neurotech space are: IBM (126 IP5 patents, US), Ping An Technology (105 IP5 patents, CH), Fujitsu (78 IP5 patents, JP), Microsoft (76 IP5 patents, US)1, Samsung (72 IP5 patents, KR), Sony (69 IP5 patents, JP), and Intel (64 IP5 patents, US).

This report further proposes a pioneering taxonomy of neurotechnologies based on International Patent Classification (IPC) codes.

● 67 distinct patent clusters in neurotechnology are identified, which mirror the diverse research and development landscape of the field. The 20 most prominent neurotechnology groups, particularly in areas like multimodal neuromodulation, seizure prediction, neuromorphic computing [emphasis mine], and brain-computer interfaces, point to potential strategic areas for research and commercialization.
● The variety of patent clusters identified mirrors the breadth of neurotechnology’s potential applications, from medical imaging and limb rehabilitation to sleep optimization and assistive exoskeletons.
● The development of a baseline IPC-based taxonomy for neurotechnology offers a structured framework that enriches our understanding of this technological space, and can facilitate research, development and analysis. The identified key groups mirror the interdisciplinary nature of neurotechnology and underscore the potential impact of neurotechnology, not only in healthcare but also in areas like information technology and biomaterials, with non-negligible effects over societies and economies.

1 If we consider Microsoft Technology Licensing LLC and Microsoft Corporation as being under the same umbrella, Microsoft leads worldwide developments with 127 IP5 patents. Similarly, if we were to consider that Siemens AG and Siemens Healthcare GmbH belong to the same conglomerate, Siemens would appear much higher in the ranking, in third position, with 84 IP5 patents. The distribution of intellectual property assets across companies belonging to the same conglomerate is frequent and mirrors strategic as well as operational needs and features, among others. [pp. 9-11]

Surprises and comments

Interesting and helpful to learn that “neurotechnology interacts with other technological trajectories, especially Artificial Intelligence;” this has changed and improved my understanding of neurotechnology.

It was unexpected to find Canada in the top ten countries producing neuroscience papers. However, finding out that the country lags in translating its ‘neuro’ knowledge into patentable innovation is not entirely a surprise.

It can’t be an accident that countries with major ‘electronics and computing’ companies lead in patents. These companies do have researchers but they also buy startups to acquire patents. They (and ‘patent trolls’) will also file patents preemptively. For the patent trolls, it’s a moneymaking proposition and for the large companies, it’s a way of protecting their own interests and/or (I imagine) forcing a sale.

The mention of neuromorphic (brainlike) computing in the taxonomy section was surprising and puzzling. Up to this point, I’d thought of neuromorphic computing as an alternative or addition to standard computing, but the authors have blurred the lines as per UNESCO’s definition of neurotechnology (specifically, “… emulate the structure and function of the neural systems of animals or human beings”). Again, this report is broadening my understanding of neurotechnology. Of course, it took two instances, the definition and the taxonomy, before I quite grasped the point.

What’s puzzling is that neuromorphic engineering, a broader term that includes neuromorphic computing, isn’t used or mentioned. (For an explanation of the terms neuromorphic computing and neuromorphic engineering, there’s my June 23, 2023 posting, “Neuromorphic engineering: an overview.”)

The report

I won’t have time for everything. Here are some of the highlights from my admittedly personal perspective.

It’s not only about curing disease

From the report,

Neurotechnology’s applications however extend well beyond medicine [emphasis mine], and span from research, to education, to the workplace, and even people’s everyday life. Neurotechnology-based solutions may enhance learning and skill acquisition and boost focus through brain stimulation techniques. For instance, early research finds that brain-zapping caps appear to boost memory for at least one month (Berkeley, 2022). This could one day be used at home to enhance memory functions [emphasis mine]. They can further enable new ways to interact with the many digital devices we use in everyday life, transforming the way we work, live and interact. One example is the Sound Awareness wristband developed by a Stanford team (Neosensory, 2022) which enables individuals to “hear” by converting sound into tactile feedback, so that sound impaired individuals can perceive spoken words through their skin. Takagi and Nishimoto (2023) analyzed the brain scans taken through Magnetic Resonance Imaging (MRI) as individuals were shown thousands of images. They then trained a generative AI tool called Stable Diffusion2 on the brain scan data of the study’s participants, thus creating images that roughly corresponded to the real images shown. While this does not correspond to reading the mind of people, at least not yet, and some limitations of the study have been highlighted (Parshall, 2023), it nevertheless represents an important step towards developing the capability to interface human thoughts with computers [emphasis mine], via brain data interpretation.

While the above examples may sound somewhat like science fiction, the recent uptake of generative Artificial Intelligence applications and of large language models such as ChatGPT or Bard, demonstrates that the seemingly impossible can quickly become an everyday reality. At present, anyone can purchase online electroencephalogram (EEG) devices for a few hundred dollars [emphasis mine], to measure the electrical activity of their brain for meditation, gaming, or other purposes. [pp. 14-15]

This is a very impressive achievement. Some of the research cited was published earlier this year (2023). The extraordinary speed is a testament to the efforts of the authors and their teams. It’s also a testament to how quickly the field is moving.

I’m glad to see the mention of and focus on consumer neurotechnology. (While the authors don’t speculate, I am free to do so.) Consumer neurotechnology could be viewed as one of the steps toward normalizing a cyborg future for all of us. Yes, we have books, television programmes, movies, and video games, which all normalize the idea, but the people depicted have been severely injured and require the augmentation. With consumer neurotechnology, you have easily accessible devices being used to enhance people who aren’t injured; they just want to be ‘better’.

This phrase seemed particularly striking “… an important step towards developing the capability to interface human thoughts with computers” in light of some claims made by the Australian military in my June 13, 2023 posting “Mind-controlled robots based on graphene: an Australian research story.” (My posting has an embedded video demonstrating the Brain Robotic Interface (BRI) in action. Also, see the paragraph below the video for my ‘measured’ response.)

There’s no mention of the military in the report which seems more like a deliberate rather than inadvertent omission given the importance of military innovation where technology is concerned.

This section gives a good overview of government initiatives (in the report it’s followed by a table of the programmes),

Thanks to the promises it holds, neurotechnology has garnered significant attention from both governments and the private sector and is considered by many as an investment priority. According to the International Brain Initiative (IBI), brain research funding has become increasingly important over the past ten years, leading to a rise in large-scale state-led programs aimed at advancing brain intervention technologies (International Brain Initiative, 2021). Since 2013, initiatives such as the United States’ Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the European Union’s Human Brain Project (HBP), as well as major national initiatives in China, Japan and South Korea have been launched with significant funding support from the respective governments. The Canadian Brain Research Strategy, initially operated as a multi-stakeholder coalition on brain research, is also actively seeking funding support from the government to transform itself into a national research initiative (Canadian Brain Research Strategy, 2022). A similar proposal is also seen in the case of the Australian Brain Alliance, calling for the establishment of an Australian Brain Initiative (Australian Academy of Science, n.d.). [pp. 15-16]

Privacy

There are some concerns such as these,

Beyond the medical realm, research suggests that emotional responses of consumers related to preferences and risks can be concurrently tracked by neurotechnology, such as neuroimaging, and that neural data can better predict market-level outcomes than traditional behavioral data (Karmarkar and Yoon, 2016). As such, neural data is increasingly sought after in the consumer market for purposes such as digital phenotyping4, neurogaming5, and neuromarketing6 (UNESCO, 2021). This surge in demand gives rise to risks like hacking, unauthorized data reuse, extraction of privacy-sensitive information, digital surveillance, criminal exploitation of data, and other forms of abuse. These risks prompt the question of whether neural data needs distinct definition and safeguarding measures.

These issues are particularly relevant today as a wide range of electroencephalogram (EEG) headsets that can be used at home are now available in consumer markets for purposes that range from meditation assistance to controlling electronic devices through the mind. Imagine an individual is using one of these devices to play a neurofeedback game, which records the person’s brain waves during the game. Without the person being aware, the system can also identify the patterns associated with an undiagnosed mental health condition, such as anxiety. If the game company sells this data to third parties, e.g. health insurance providers, this may lead to an increase of insurance fees based on undisclosed information. This hypothetical situation would represent a clear violation of mental privacy and of unethical use of neural data.

Another example is in the field of advertising, where companies are increasingly interested in using neuroimaging to better understand consumers’ responses to their products or advertisements, a practice known as neuromarketing. For instance, a company might use neural data to determine which advertisements elicit the most positive emotional responses in consumers. While this can help companies improve their marketing strategies, it raises significant concerns about mental privacy. Questions arise in relation to consumers being aware or not that their neural data is being used, and in the extent to which this can lead to manipulative advertising practices that unfairly exploit unconscious preferences. Such potential abuses underscore the need for explicit consent and rigorous data protection measures in the use of neurotechnology for neuromarketing purposes. [pp. 21-22]

Legalities

Some countries already have laws and regulations regarding neurotechnology data,

At the national level, only a few countries have enacted laws and regulations to protect mental integrity or have included neuro-data in personal data protection laws (UNESCO, University of Milan-Bicocca (Italy) and State University of New York – Downstate Health Sciences University, 2023). Examples are the constitutional reform undertaken by Chile (Republic of Chile, 2021), the Charter for the responsible development of neurotechnologies of the Government of France (Government of France, 2022), and the Digital Rights Charter of the Government of Spain (Government of Spain, 2021). They propose different approaches to the regulation and protection of human rights in relation to neurotechnology. Countries such as the UK are also examining under which circumstances neural data may be considered as a special category of data under the general data protection framework (i.e. UK’s GDPR) (UK’s Information Commissioner’s Office, 2023) [p. 24]

As you can see, these are recent laws. There doesn’t seem to be any similar attempt here in Canada, although an act currently being reviewed in Parliament could conceivably cover neural data. This is from my May 1, 2023 posting,

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). [emphasis added July 11, 2023] You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

My focus at the time was artificial intelligence. Now, after reading this UNESCO report and briefly revisiting the ISED summary and the detailed descriptions of the act on ISED’s Canada’s Digital Charter webpage, I don’t see anything that specifies neural data, but it isn’t excluded either.

IP5 patents

Here’s the explanation (the footnote is included at the end of the excerpt),

IP5 patents represent a subset of overall patents filed worldwide, which have the characteristic of having been filed in at least one of the top intellectual property offices (IPOs) worldwide (the so-called IP5, namely the Chinese National Intellectual Property Administration, CNIPA (formerly SIPO); the European Patent Office, EPO; the Japan Patent Office, JPO; the Korean Intellectual Property Office, KIPO; and the United States Patent and Trademark Office, USPTO) as well as in another country, which may or may not be an IP5. This signals their potential applicability worldwide, as their inventiveness and industrial viability have been validated by at least two leading IPOs. This gives these patents a sort of “quality” check, also since patenting inventions is costly and if applicants try to protect the same invention in several parts of the world, this normally mirrors that the applicant has expectations about their importance and expected value. If we were to conduct the same analysis using information about individually considered patents applied for worldwide, i.e. without filtering for quality nor considering patent families, we would risk conducting a biased analysis based on duplicated data. Also, as patentability standards vary across countries and IPOs, and what matters for patentability is the existence (or not) of prior art in the IPO considered, we would risk mixing real innovations with patents related to catching-up phenomena in countries that are not at the forefront of the technology considered.

9 The five IP offices (IP5) is a forum of the five largest intellectual property offices in the world that was set up to improve the efficiency of the examination process for patents worldwide. The IP5 Offices together handle about 80% of the world’s patent applications, and 95% of all work carried out under the Patent Cooperation Treaty (PCT), see http://www.fiveipoffices.org. (Dernis et al., 2015) [p. 31]
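
For the concretely minded, the IP5 filter boils down to a simple rule: a patent family counts if it was filed at one of the five big offices and at at least one additional office. Here’s a tiny sketch of that rule as I read it (my own illustration, not anything from the report):

```python
# Minimal sketch of the IP5 patent-family filter as described in the report:
# filed at one or more of the five largest IP offices AND at one or more
# additional offices. My own illustration, not the report's code.
IP5_OFFICES = {"CNIPA", "EPO", "JPO", "KIPO", "USPTO"}

def is_ip5_family(filing_offices: set[str]) -> bool:
    """True if the patent family was filed at an IP5 office plus at
    least one additional office (which may also be an IP5 office)."""
    ip5_filings = filing_offices & IP5_OFFICES
    return len(ip5_filings) >= 1 and len(filing_offices) >= 2

# Example: filed with the USPTO and the EPO -> counts as an IP5 patent.
print(is_ip5_family({"USPTO", "EPO"}))           # True
print(is_ip5_family({"USPTO"}))                  # False (single office only)
print(is_ip5_family({"CIPO", "IP Australia"}))   # False (no IP5 office)
```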

AI assistance on this report

As noted earlier, I have next to no experience with the analytical tools, not having attempted this kind of work in several years. Here’s an example of what they were doing,

We utilize a combination of text embeddings based on Bidirectional Encoder Representations from Transformers (BERT), dimensionality reduction, and hierarchical clustering inspired by the BERTopic methodology12 to identify latent themes within research literature. Latent themes or topics in the context of topic modeling represent clusters of words that frequently appear together within a collection of documents (Blei, 2012). These groupings are not explicitly labeled but are inferred through computational analysis examining patterns in word usage. These themes are ‘hidden’ within the text, only to be revealed through this analysis. …

We further utilize OpenAI’s GPT-4 model to enrich our understanding of topics’ keywords and to generate topic labels (OpenAI, 2023), thus supplementing expert review of the broad interdisciplinary corpus. Recently, GPT-4 has shown impressive results in medical contexts across various evaluations (Nori et al., 2023), making it a useful tool to enhance the information obtained from prior analysis stages, and to complement them. The automated process enhances the evaluation workflow, effectively emphasizing neuroscience themes pertinent to potential neurotechnology patents. Notwithstanding existing concerns about hallucinations (Lee, Bubeck and Petro, 2023) and errors in generative AI models, this methodology employs the GPT-4 model for summarization and interpretation tasks, which significantly mitigates the likelihood of hallucinations. Since the model is constrained to the context provided by the keyword collections, it limits the potential for fabricating information outside of the specified boundaries, thereby enhancing the accuracy and reliability of the output. [pp. 33-34]

I couldn’t resist adding the ChatGPT paragraph given all of the recent hoopla about it.
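
To make the quoted methodology more concrete, here’s a minimal sketch of a BERTopic-style workflow with a GPT-4 labeling step bolted on. It’s my own illustration under stated assumptions (the embedding model, the prompt, and the toy documents are mine); the authors don’t publish their actual pipeline or parameters.

```python
# Minimal sketch of a BERTopic-style workflow (BERT embeddings -> UMAP
# dimensionality reduction -> HDBSCAN clustering) with a GPT-4 labeling
# step. Models, parameters, and prompts here are assumed for illustration.
from bertopic import BERTopic
from openai import OpenAI

# Toy abstracts; a real run needs thousands of documents for the
# clustering to work (the report uses the 2,000 most cited papers/year).
docs = [
    "Closed-loop deep brain stimulation for Parkinsonian tremor ...",
    "EEG-based seizure prediction using convolutional networks ...",
    "Memristive synapse arrays for neuromorphic computing ...",
]

# BERTopic bundles embedding, dimensionality reduction, and hierarchical
# clustering behind one interface.
topic_model = BERTopic(embedding_model="all-MiniLM-L6-v2")  # assumed model
topics, probs = topic_model.fit_transform(docs)

# Each discovered topic is a cluster of co-occurring keywords.
keywords = [word for word, _ in topic_model.get_topic(0)]

# Ask GPT-4 for a short label, constrained to the cluster's keywords;
# constraining the context is what the report argues limits hallucination.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": ("Suggest a short topic label for a cluster of "
                    f"neuroscience papers with these keywords: {keywords}"),
    }],
)
print(response.choices[0].message.content)
```

The design point worth noticing is the one the report makes: GPT-4 is handed a bounded keyword context and asked only to summarize it, rather than being asked open-ended questions, which limits the room for fabrication.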

Multimodal neuromodulation and neuromorphic computing patents

I think this gives a pretty good indication of the activity on the patent front,

The largest, coherent topic, termed “multimodal neuromodulation,” comprises 535 patents detailing methodologies for deep or superficial brain stimulation designed to address neurological and psychiatric ailments. These patented technologies interact with various points in neural circuits to induce either Long-Term Potentiation (LTP) or Long-Term Depression (LTD), offering treatment for conditions such as obsession, compulsion, anxiety, depression, Parkinson’s disease, and other movement disorders. The modalities encompass implanted deep-brain stimulators (DBS), Transcranial Magnetic Stimulation (TMS), and transcranial Direct Current Stimulation (tDCS). Among the most representative documents for this cluster are patents with titles: Electrical stimulation of structures within the brain or Systems and methods for enhancing or optimizing neural stimulation therapy for treating symptoms of Parkinson’s disease and or other movement disorders. [p. 65]

Given my longstanding interest in memristors, which (I believe) have to a large extent helped to stimulate research into neuromorphic computing, this had to be included. Then, there was the brain-computer interfaces cluster,

A cluster identified as “Neuromorphic Computing” consists of 366 patents primarily focused on devices designed to mimic human neural networks for efficient and adaptable computation. The principal elements of these inventions are resistive memory cells and artificial synapses. They exhibit properties similar to the neurons and synapses in biological brains, thus granting these devices the ability to learn and modulate responses based on rewards, akin to the adaptive cognitive capabilities of the human brain.

The primary technology classes associated with these patents fall under specific IPC codes, representing the fields of neural network models, analog computers, and static storage structures. Essentially, these classifications correspond to technologies that are key to the construction of computers and exhibit cognitive functions similar to human brain processes.

Examples for this cluster include neuromorphic processing devices that leverage variations in resistance to store and process information, artificial synapses exhibiting spike-timing dependent plasticity, and systems that allow event-driven learning and reward modulation within neuromorphic computers.

In relation to neurotechnology as a whole, the “neuromorphic computing” cluster holds significant importance. It embodies the fusion of neuroscience and technology, thereby laying the basis for the development of adaptive and cognitive computational systems. Understanding this specific cluster provides a valuable insight into the progressing domain of neurotechnology, promising potential advancements across diverse fields, including artificial intelligence and healthcare.

The “Brain-Computer Interfaces” cluster, consisting of 146 patents, embodies a key aspect of neurotechnology that focuses on improving the interface between the brain and external devices. The technology classification codes associated with these patents primarily refer to methods or devices for treatment or protection of eyes and ears, devices for introducing media into, or onto, the body, and electric communication techniques, which are foundational elements of brain-computer interface (BCI) technologies.

Key patents within this cluster include a brain-computer interface apparatus adaptable to use environment and method of operating thereof, a double closed circuit brain-machine interface system, and an apparatus and method of brain-computer interface for device controlling based on brain signal. These inventions mainly revolve around the concept of using brain signals to control external devices, such as robotic arms, and improving the classification performance of these interfaces, even after long periods of non-use.

The inventions described in these patents improve the accuracy of device control, maintain performance over time, and accommodate multiple commands, thus significantly enhancing the functionality of BCIs.

Other identified technologies include systems for medical image analysis, limb rehabilitation, tinnitus treatment, sleep optimization, assistive exoskeletons, and advanced imaging techniques, among others. [pp. 66-67]

Having sections on neuromorphic computing and brain-computer interface patents in immediate proximity led to more speculation on my part. Imagine how much easier it would be to initiate a BCI connection if it’s powered with a neuromorphic (brainlike) computer/device. [ETA July 21, 2023: Following on from that thought, it might be more than just easier to initiate a BCI connection. Could a brainlike computer become part of your brain? Why not? It’s been successfully argued that a robotic wheelchair was part of someone’s body; see my January 30, 2013 posting and scroll down about 40% of the way.]
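
As an aside, for anyone who hasn’t run into spike-timing-dependent plasticity (STDP), the learning rule those artificial-synapse patents keep invoking, here’s a toy illustration (mine, with made-up constants): a synapse strengthens when the input spike arrives just before the output spike and weakens when it arrives just after.

```python
# Toy illustration of spike-timing-dependent plasticity (STDP), the learning
# rule mentioned in the neuromorphic computing patent cluster. Constants are
# made up for illustration; real devices implement this in analog hardware.
import math

A_PLUS, A_MINUS = 0.01, 0.012   # learning rates (assumed values)
TAU = 20.0                      # time constant in milliseconds (assumed)

def stdp_weight_change(t_pre: float, t_post: float) -> float:
    """Weight update for one pre/post spike pair.
    Pre before post (causal) -> potentiation (positive change);
    post before pre (acausal) -> depression (negative change)."""
    dt = t_post - t_pre
    if dt > 0:   # input helped cause the output spike: strengthen
        return A_PLUS * math.exp(-dt / TAU)
    else:        # input arrived too late: weaken
        return -A_MINUS * math.exp(dt / TAU)

print(stdp_weight_change(10.0, 15.0))  # pre 5 ms before post -> positive
print(stdp_weight_change(15.0, 10.0))  # pre 5 ms after post -> negative
```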

Neurotech policy debates

The report concludes with this,

Neurotechnology is a complex and rapidly evolving technological paradigm whose trajectories have the power to shape people’s identity, autonomy, privacy, sentiments, behaviors and overall well-being, i.e. the very essence of what it means to be human.

Designing and implementing careful and effective norms and regulations ensuring that neurotechnology is developed and deployed in an ethical manner, for the good of individuals and for society as a whole, calls for a careful identification and characterization of the issues at stake. This entails shedding light on the whole neurotechnology ecosystem, that is what is being developed, where and by whom, and also understanding how neurotechnology interacts with other developments and technological trajectories, especially AI. Failing to do so may result in ineffective (at best) or distorted policies and policy decisions, which may harm human rights and human dignity.

Addressing the need for evidence in support of policy making, the present report offers first-time robust data and analysis shedding light on the neurotechnology landscape worldwide. To this end, it proposes and implements an innovative approach that leverages artificial intelligence and deep learning on data from scientific publications and patents to identify scientific and technological developments in the neurotech space. The methodology proposed represents a scientific advance in itself, as it constitutes a quasi-automated replicable strategy for the detection and documentation of neurotechnology-related breakthroughs in science and innovation, to be repeated over time to account for the evolution of the sector. Leveraging this approach, the report further proposes an IPC-based taxonomy for neurotechnology which provides a structured framework for the exploration of neurotechnology, to enable future research, development and analysis. The innovative methodology proposed is very flexible and can in fact be leveraged to investigate different emerging technologies, as they arise.

In terms of technological trajectories, we uncover a shift in the neurotechnology industry, with greater emphasis being put on computer and medical technologies in recent years, compared to traditionally dominant trajectories related to biotechnology and pharmaceuticals. This shift warrants close attention from policymakers, and calls for attention in relation to the latest (converging) developments in the field, especially AI and related methods and applications and neurotechnology.

This is all the more important as the observed growth and specialization patterns are unfolding in the context of regulatory environments that, generally, are either non-existent or not fit for purpose. Given the sheer implications and impact of neurotechnology on the very essence of human beings, this lack of regulation poses key challenges related to the possible infringement of mental integrity, human dignity, personal identity, privacy, freedom of thought, and autonomy, among others. Furthermore, issues surrounding accessibility and the potential for neurotech enhancement applications trigger significant concerns, with far-reaching implications for individuals and societies. [pp. 72-73]

Last words about the report

Informative, readable, and thought-provoking. And, it helped broaden my understanding of neurotechnology.

Future endeavours?

I’m hopeful that one of these days one of these groups (UNESCO, Canadian Science Policy Centre, or ???) will tackle the issue of business bankruptcy in the neurotechnology sector. It has already occurred, as noted in my April 5, 2022 posting, “Going blind when your neural implant company flirts with bankruptcy [long read].” That story opens with a woman going blind in a New York subway when her neural implant fails. It’s how she found out that the company that supplied her implant was going out of business.

In my July 7, 2023 posting about the UNESCO July 2023 dialogue on neurotechnology, I’ve included information on Neuralink (one of Elon Musk’s companies) and its approval (despite some investigations) by the US Food and Drug Administration to start human clinical trials. Scroll down about 75% of the way to the “Food for thought” subhead where you will find stories about allegations made against Neuralink.

The end

If you want to know more about the field, the report offers a seven-page bibliography, and there’s a lot of material on this blog; you could start with the December 3, 2019 posting “Neural and technological inequalities,” which features an article mentioning a discussion between two scientists. Surprisingly (to me), the source article is in Fast Company (“a leading progressive business media brand,” according to their tagline).

I have two categories you may want to check: Human Enhancement and Neuromorphic Engineering. There are also a number of tags: neuromorphic computing, machine/flesh, brainlike computing, cyborgs, neural implants, neuroprosthetics, memristors, and more.

Should you have any observations or corrections, please feel free to leave them in the Comments section of this posting.