Category Archives: social implications

Credibility slips according to US survey on public perceptions of scientists

Figure 1. Perceptions of scientists’ credibility. [downloaded from https://www.annenbergpublicpolicycenter.org/annenberg-survey-finds-public-perceptions-of-scientists-credibility-slips/]

A June 26, 2024 news item on ScienceDaily describes the research, which resulted in the graphic you see above,

New analyses from the Annenberg Public Policy Center find that public perceptions of scientists’ credibility — measured as their competence, trustworthiness, and the extent to which they are perceived to share an individual’s values — remain high, but their perceived competence and trustworthiness eroded somewhat between 2023 and 2024. The research also found that public perceptions of scientists working in artificial intelligence (AI) differ from those of scientists as a whole.

A June 26, 2024 Annenberg Public Policy Center of the University of Pennsylvania news release (also on EurekAlert and also received by email), which originated the news item, describes a series of surveys and how the information was gathered, and offers more detail about the results, Note 1: All links removed; Note 2: You can find links and citations for papers mentioned in the news release at the end of this posting.

From 2018-2022, the Annenberg Public Policy Center (APPC) of the University of Pennsylvania relied on national cross-sectional surveys to monitor perceptions of science and scientists. In 2023-24, APPC moved to a nationally representative empaneled sample to make it possible to observe changes in individual perceptions.

The February 2024 findings, released today to coincide with the address by National Academy of Sciences President Marcia McNutt on “The State of the Science,” come from an analysis of responses from an empaneled national probability sample of U.S. adults surveyed in February 2023 (n=1,638 respondents), November 2023 (n=1,538), and February 2024 (n=1,555).

Drawing on the 2022 cross-sectional data, in an article titled “Factors Assessing Science’s Self-Presentation model and their effect on conservatives’ and liberals’ support for funding science,” published in Proceedings of the National Academy of Sciences (September 2023), Annenberg-affiliated researchers Yotam Ophir (State University of New York at Buffalo and an APPC distinguished research fellow), Dror Walter (Georgia State University and an APPC distinguished research fellow), and Patrick E. Jamieson and Kathleen Hall Jamieson of the Annenberg Public Policy Center isolated factors that underlie perceptions of scientists (Factors Assessing Science’s Self-Presentation, or FASS). These factors predict public support for increased funding of science and support for federal funding of basic research.

The five factors in FASS are whether science and scientists are perceived to be credible and prudent, and whether they are perceived to overcome bias, correct error (self-correcting), and whether their work benefits people like the respondent and the country as a whole (beneficial). In a 2024 publication titled “The Politicization of Climate Science: Media Consumption, Perceptions of Science and Scientists, and Support for Policy” (May 26, 2024) in the Journal of Health Communication, the same team showed that these five factors mediate the relationship between exposure to media sources such as Fox News and outcomes such as belief in anthropogenic climate change, perception of the threat it poses, and support for climate-friendly policies such as a carbon tax.

Speaking about the FASS model, Jamieson, director of the Annenberg Public Policy Center and director of the survey, said that “because our 13 core questions reliably reduce to five factors with significant predictive power, the ASK survey’s core questions make it possible to isolate both stability and changes in public perception of science and scientists across time.” (See the appendix for the list of questions.)

The new research finds that while scientists are held in high regard, two of the three dimensions that make up credibility – perceptions of competence and trustworthiness – showed a small but statistically significant drop from 2023 to 2024, as did both measures of beneficial. The 2024 survey data also indicate that the public considers AI scientists less credible than scientists in general, with notably fewer people saying that AI scientists are competent and trustworthy and “share my values” than scientists generally.

“Although confidence in science remains high overall, the survey reveals concerns about AI science,” Jamieson said. “The finding is unsurprising. Generative AI is an emerging area of science filled with both great promise and great potential peril.”

The data are based on Annenberg Science Knowledge (ASK) waves of the Annenberg Science and Public Health (ASAPH) surveys conducted in 2023 and 2024. The findings labeled 2023 are based on a February 2023 survey, and the findings labeled 2024 are based on combined ASAPH survey half-samples surveyed in November 2023 and February 2024.

For further details, download the toplines and a series of figures that accompany this summary.

Perceptions of scientists overall

In the FASS model, perceptions of scientists’ credibility are assessed through perceptions of whether scientists are competent, trustworthy, and “share my values.” The first two of those values slipped in the most recent survey. In 2024, 70% of those surveyed strongly or somewhat agree that scientists are competent (down from 77% in 2023) and 59% strongly or somewhat agree that scientists are trustworthy (down from 67% in 2023). (See figure 1 [see the first item in this post], and figs. 2-4 for other findings.)

The survey also found that in 2024, fewer people felt that scientists’ findings benefit “the country as a whole” and “benefit people like me.” In 2024, 66% strongly or somewhat agreed that findings benefit the country as a whole (down from 75% in 2023). Belief that scientists’ findings “benefit people like me” also declined, to 60% from 68%. Taken together, those two questions make up the beneficial factor of FASS. (See fig. 5.)

The findings follow sustained attacks on climate and Covid-19-related science and, more recently, public concerns about the rapid development and deployment of artificial intelligence.

Comparing perceptions of scientists in general with climate and AI scientists

Credibility: When respondents were asked about the three factors underlying scientists’ credibility, AI scientists received lower ratings on all three. (See fig. 6.)

  • Competent: 70% strongly/somewhat agree that scientists are competent, but only 62% for climate scientists and 49% for AI scientists.
  • Trustworthy: 59% agree scientists are trustworthy, 54% agree for climate scientists, 28% for AI scientists.
  • Share my values: More respondents (38%) agree that climate scientists “share my values” than say the same of scientists in general (36%) or AI scientists (15%). More people disagree with this for AI scientists (35%) than for the others.

Prudence: Asked whether they agree or disagree that science by various groups of scientists “creates unintended consequences and replaces older problems with new ones,” over half of those surveyed (59%) agree that AI scientists create unintended consequences and just 9% disagree. (See fig. 7.)

Overcoming bias: Just 42% of those surveyed agree that scientists “are able to overcome human and political biases,” but only 21% feel that way about AI scientists. In fact, 41% disagree that AI scientists are able to overcome human and political biases. In another area, just 23% agree that AI scientists provide unbiased conclusions in their area of inquiry and 38% disagree. (See fig. 8.)

Self-correction: Self-correction, or “organized skepticism expressed in expectations sustaining a culture of critique,” as the FASS paper puts it, is considered by some as a “hallmark of science.” AI scientists are seen as less likely than scientists or climate scientists to take action to prevent fraud; take responsibility for mistakes; or to have mistakes that are caught by peer review. (See fig. 9.)

Benefits: Asked about the benefits from scientists’ findings, 60% agree that scientists’ “findings benefit people like me,” though just 44% agree for climate scientists and 35% for AI scientists. Asked about whether findings benefit the country as a whole, 66% agree for scientists, 50% for climate scientists and 41% for AI scientists. (See fig. 10.)

Your best interest: The survey also asked respondents how much trust they have in scientists to act in the best interest of people like you. (This specific trust measure is not a part of the FASS battery.) Respondents have less trust in AI scientists than in others: 41% have a great deal/a lot of trust in medical scientists; 39% in climate scientists; 36% in scientists; and 12% in AI scientists. (See fig. 11.)

The data from ASK surveys have been used to date in four peer-reviewed papers:

  • Using 2019 ASK data: Jamieson, K. H., McNutt, M., Kiermer, V., & Sever, R. (2019). Signaling the trustworthiness of science. Proceedings of the National Academy of Sciences, 116(39), 19231-19236.
  • Using 2022 ASK data: Ophir, Y., Walter, D., Jamieson, P. E., & Jamieson, K. H. (2023). Factors Assessing Science’s Self-Presentation model and their effect on conservatives’ and liberals’ support for funding science. Proceedings of the National Academy of Sciences, 120(38), e2213838120.
  • Using 2024 ASK data: Lupia, A., Allison, D. B., Jamieson, K. H., Heimberg, J., Skipper, M., & Wolf, S. M. (2024). Trends in US public confidence in science and opportunities for progress. Proceedings of the National Academy of Sciences, 121(11), e2319488121.
  • Using Nov 2023 and Feb 2024 ASK data: Ophir, Y., Walter, D., Jamieson, P. E., & Jamieson, K. H. (2024). The politicization of climate science: Media consumption, perceptions of science and scientists, and support for policy. Journal of Health Communication, 29(sup1): 18-27.
     

APPC’s ASAPH survey

The survey data come from the 17th and 18th waves of a nationally representative panel of U.S. adults, first empaneled in April 2021, conducted for the Annenberg Public Policy Center by SSRS, an independent market research company. These waves of the Annenberg Science and Public Health (ASAPH) knowledge survey were fielded February 22-28, 2023, November 14-20, 2023, and February 6-12, 2024, and have margins of sampling error (MOE) of ± 3.2, 3.3 and 3.4 percentage points at the 95% confidence level. In November 2023, half of the sample was asked about “scientists” and the other half “climate scientists.” In February 2024, those initially asked about “scientists” were asked about “scientists studying AI” and the other half “scientists.” This provided two half samples addressing specific areas of study, while all panelists were asked about “scientists” generally. All figures are rounded to the nearest whole number and may not add to 100%. Combined subcategories may not add to totals in the topline and text due to rounding.

The policy center has been tracking the American public’s knowledge, beliefs, and behaviors regarding vaccination, Covid-19, flu, maternal health, climate change, and other consequential health issues through this survey panel for over three years. In addition to Jamieson, the APPC team includes Shawn Patterson Jr., who analyzed the data; Patrick E. Jamieson, director of the Annenberg Health and Risk Communication Institute, who developed the questions; and Ken Winneg, managing director of survey research, who supervised the fielding of the survey.
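
For readers who like to check the arithmetic, here is a minimal sketch (mine, not APPC’s or SSRS’s) of the margin of error you would expect from simple random samples of the quoted sizes, assuming maximum variance (p = 0.5) and a 95% confidence level. The published figures of ±3.2 to 3.4 percentage points are somewhat larger, which is what you would expect once survey weighting and panel design effects are factored in.

```python
# Back-of-the-envelope margin of error (MOE) for a simple random sample,
# assuming maximum variance (p = 0.5) and a 95% confidence level (z = 1.96).
# The MOEs APPC reports (+/-3.2 to 3.4 points) are larger than this baseline,
# presumably reflecting weighting and design effects in the SSRS panel.
import math

def srs_moe(n: int, z: float = 1.96) -> float:
    """Margin of error in percentage points for a simple random sample of size n."""
    return 100 * z * math.sqrt(0.25 / n)

for n in (1638, 1538, 1555):  # the three ASAPH wave sample sizes quoted above
    print(f"n={n}: +/-{srs_moe(n):.1f} percentage points (SRS baseline)")
```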

Here are links to and citations for the papers listed above in the June 26, 2024 news release,

Using 2019 ASK data: Signaling the trustworthiness of science by Kathleen Hall Jamieson, Marcia McNutt, Veronique Kiermer, and Richard Sever. Proceedings of the National Academy of Sciences (PNAS), 116 (39), 19231-19236, September 23, 2019. DOI: https://doi.org/10.1073/pnas.1913039116

Using 2022 ASK data: Factors Assessing Science’s Self-Presentation model and their effect on conservatives’ and liberals’ support for funding science by Yotam Ophir, Dror Walter, Patrick E. Jamieson, and Kathleen Hall Jamieson. Proceedings of the National Academy of Sciences (PNAS), 120 (38), e2213838120, September 11, 2023. DOI: https://doi.org/10.1073/pnas.2213838120

Using 2024 ASK data: Trends in US public confidence in science and opportunities for progress by Arthur Lupia, David B. Allison, Kathleen Hall Jamieson, Jennifer Heimberg, Magdalena Skipper, and Susan M. Wolf. Proceedings of the National Academy of Sciences (PNAS), 121 (11), e2319488121, March 4, 2024. DOI: https://doi.org/10.1073/pnas.2319488121

Using Nov 2023 and Feb 2024 ASK data: The politicization of climate science: Media consumption, perceptions of science and scientists, and support for policy by Yotam Ophir, Dror Walter, Patrick E. Jamieson & Kathleen Hall Jamieson. Journal of Health Communication, 29 (sup1): 18-27. DOI: https://doi.org/10.1080/10810730.2024.2357571 Published online: 26 May 2024

The 2019 paper ‘Signaling …’ has been featured here before in a September 30, 2019 posting, “Do you believe in science?” In addition to some of my comments, I embedded Adam Lambert’s version of Cher’s song ‘Do You Believe in Love?’ where you’ll see Cher brush away a few tears as she listens to her dance hit made into a love ballad.

The 2024 paper ‘Trends …’ has also been featured here, albeit briefly, in an April 8, 2024 posting, “Trust in science remains high but public questions scientists’ adherence to science’s norms.”

Math + data + history lead to cliodynamics and history crisis detection

This February 18, 2024 essay for The Conversation by Daniel Hoyer, senior researcher, historian, and complexity scientist at the University of Toronto, discusses history as scientific data, Note: Links have been removed,

American humorist and writer Mark Twain is believed to have once said, “History doesn’t repeat itself, but it often rhymes.”

I’ve been working as a historian and complexity scientist for the better part of a decade, and I often think about this phrase as I follow different strands of the historical record and notice the same patterns over and over.

My background is in ancient history. As a young researcher, I tried to understand why the Roman Empire became so big and what ultimately led to its downfall. Then, during my doctoral studies, I met the evolutionary biologist turned historian Peter Turchin, and that meeting had a profound impact on my work.

I joined Turchin and a few others who were establishing a new field – a new way to investigate history. It was called cliodynamics after Clio, the ancient Greek muse of history, and dynamics, the study of how complex systems change over time. Cliodynamics marshals scientific and statistical tools to better understand the past.

The aim is to treat history as a “natural” science, using statistical methods, computational simulations and other tools adapted from evolutionary theory, physics and complexity science to understand why things happened the way that they did.

By turning historical knowledge into scientific “data”, we can run analyses and test hypotheses about historical processes, just like any other science.

Hoyer’s essay is fascinating and I can’t really do it justice with a few excerpts but hopefully these will tempt you into reading more of his and his colleagues’ work, from the February 18, 2024 essay.

Since 2011, my colleagues and I have been compiling an enormous amount of information about the past and storing it in a unique collection called the Seshat: Global History Databank. Seshat involves the contribution of over 100 researchers from around the world.

We create structured, analysable information by surveying the huge amount of scholarship available about the past. For instance, we can record a society’s population as a number, or answer questions about whether something was present or absent. Like, did a society have professional bureaucrats? Or, did it maintain public irrigation works?

Our goal is to find out what drove these societies into crisis, and then what factors seem to have determined whether people could course-correct to stave off devastation.

But why? Right now, we are living in an age of polycrisis – a state where social, political, economic, environmental and other systems are not only deeply interrelated, but nearly all of them are under strain or experiencing some kind of disaster or extreme upheaval.

Poring through the historical record, we have started noticing some very important themes rhyming through history. Even major ecological disasters and unpredictable climates are nothing new.

One of the most common patterns that has jumped out is how extreme inequality shows up in nearly every case of major crisis. When big gaps exist between the haves and have-nots, not just in material wealth but also access to positions of power, this breeds frustration, dissent and turmoil.

“Ages of discord”, as Turchin dubbed periods of great social unrest and violence, produce some of history’s most devastating events. This includes the US civil war of the 1860s, the early 20th-century Russian Revolution, and the Taiping rebellion against the Chinese Qing dynasty, often said to be the deadliest civil war in history.

All of these cases saw people become frustrated at extreme wealth inequality, along with lack of inclusion in the political process. Frustration bred anger, and eventually erupted into fighting that killed millions and affected many more.

Perhaps one of the most surprising things is that inequality seems to be just as corrosive for the elites themselves. This is because the accumulation of so much wealth and power leads to intense infighting between them, which ripples throughout society.

My colleague, political scientist Jack Goldstone, came up with a theory to explain this [“gap between the wealthy who can afford services and the growing number who cannot”] in the early 1990s, called structural demographic theory. He took an in-depth look at the French Revolution, often seen as the archetypal popular revolt. Goldstone was able to show that a lot of the fighting and grievances were driven by frustrated elites, not only by the “masses”, as is the common understanding.

These elites were finding it harder and harder to get a seat at the table with the French royal court. Goldstone noted that the reason these tensions became so inflamed and exploded is because the state had been losing its grip on the country for decades due to mismanagement of resources and from all of the entrenched privileges that the elites were fighting so hard to retain.

If the past teaches us anything, it is that trying to hold on to systems and policies that refuse to appropriately adapt and respond to changing circumstances – like climate change or growing unrest among a population – usually ends in disaster. Those with the means and opportunity to enact change must do so, or at least not stand in the way when reform is needed.

Our goal as cliodynamicists is to uncover patterns – not just to see how what we are doing today rhymes with the past – but to help find better ways forward.

If you have the time, please do read Hoyer’s February 18, 2024 essay in its entirety (h/t February 19, 2024 news item on phys.org). The Seshat Global History Databank can be found here.
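
As an aside, for anyone wondering what the “structured, analysable information” Hoyer describes might look like in practice, here is a purely hypothetical sketch (my illustration, not the actual Seshat schema): numeric variables such as population sit alongside present/absent/unknown answers to questions like the ones he mentions.

```python
# A hypothetical illustration of coding historical scholarship as structured data,
# in the spirit of the Seshat: Global History Databank. This is NOT the real
# Seshat schema; field names and values are invented for the example.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SocietyRecord:
    name: str
    period: str                                # e.g. "Principate, 1-200 CE"
    population: Optional[int]                  # best scholarly estimate, if any
    professional_bureaucrats: Optional[bool]   # True = present, False = absent, None = unknown
    public_irrigation_works: Optional[bool]

# Example entry: a rough, commonly cited population estimate for the early Roman Empire.
rome = SocietyRecord(
    name="Roman Empire",
    period="Principate, 1-200 CE",
    population=60_000_000,
    professional_bureaucrats=True,
    public_irrigation_works=True,
)
print(rome)
```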

Resurrection consent for digital cloning of the dead

It’s a bit disconcerting to think that one might be resurrected, in this case, digitally, but Dr Masaki Iwasaki has helpfully published a study on attitudes to digital cloning and resurrection consent, which could prove helpful when establishing one’s final wishes.

A January 4, 2024 De Gruyter (publisher) press release (repurposed from a January 4, 2024 blog posting on De Gruyter.com) explains the idea and the study,

In a 2014 episode of sci-fi series Black Mirror, a grieving young widow reconnects with her dead husband using an app that trawls his social media history to mimic his online language, humor and personality. It works. She finds solace in the early interactions – but soon wants more.   

Such a scenario is no longer fiction. In 2017, the company Eternime aimed to create an avatar of a dead person using their digital footprint, but this “Skype for the dead” didn’t catch on. The machine-learning and AI algorithms just weren’t ready for it. Neither were we.

Now, in 2024, amid exploding use of ChatGPT-like programs, similar efforts are on the way. But should digital resurrection be allowed at all? And are we prepared for the legal battles over what constitutes consent?

In a study published in the Asian Journal of Law and Economics, Dr Masaki Iwasaki of Harvard Law School and currently an assistant professor at Seoul National University, explores how the deceased’s consent (or otherwise) affects attitudes to digital resurrection.

US adults were presented with scenarios where a woman in her 20s dies in a car accident. A company offers to bring a digital version of her back, but her consent is, at first, ambiguous. What should her friends decide?

Two options – one where the deceased has consented to digital resurrection and another where she hasn’t – were read by participants at random. They then answered questions about the social acceptability of bringing her back on a five-point rating scale, considering other factors such as ethics and privacy concerns.

Results showed that expressed consent shifted acceptability two points higher compared to dissent. “Although I expected societal acceptability for digital resurrection to be higher when consent was expressed, the stark difference in acceptance rates – 58% for consent versus 3% for dissent – was surprising,” says Iwasaki. “This highlights the crucial role of the deceased’s wishes in shaping public opinion on digital resurrection.”

In fact, 59% of respondents disagreed with their own digital resurrection, and around 40% of respondents did not find any kind of digital resurrection socially acceptable, even with expressed consent. “While the will of the deceased is important in determining the societal acceptability of digital resurrection, other factors such as ethical concerns about life and death, along with general apprehension towards new technology are also significant,” says Iwasaki.  

The results reflect a discrepancy between existing law and public sentiment. People’s general feelings – that the dead’s wishes should be respected – are actually not protected in most countries. The digitally recreated John Lennon in the film Forrest Gump, or the animated hologram of Amy Winehouse, reveals that the ‘rights’ of the dead are easily overridden by those in the land of the living.

So, is your digital destiny something to consider when writing your will? It probably should be but in the current absence of clear legal regulations on the subject, the effectiveness of documenting your wishes in such a way is uncertain. For a start, how such directives are respected varies by legal jurisdiction. “But for those with strong preferences documenting their wishes could be meaningful,” says Iwasaki. “At a minimum, it serves as a clear communication of one’s will to family and associates, and may be considered when legal foundations are better established in the future.”

It’s certainly a conversation worth having now. Many generative AI chatbot services, such as Replika (“The AI companion who cares”) and Project December (“Simulate the dead”), already enable conversations with chatbots replicating real people’s personalities. The service ‘You, Only Virtual’ (YOV) allows users to upload someone’s text messages, emails and voice conversations to create a ‘versona’ chatbot. And, in 2020, Microsoft obtained a patent to create chatbots from text, voice and image data for living people as well as for historical figures and fictional characters, with the option of rendering in 2D or 3D.

Iwasaki says he’ll investigate this and the digital resurrection of celebrities in future research. “It’s necessary first to discuss what rights should be protected, to what extent, then create rules accordingly,” he explains. “My research, building upon prior discussions in the field, argues that the opt-in rule requiring the deceased’s consent for digital resurrection might be one way to protect their rights.”

There is a link to the study in the press release above but here is a citation, of sorts,

Digital Cloning of the Dead: Exploring the Optimal Default Rule by Masaki Iwasaki. Asian Journal of Law and Economics. DOI: https://doi.org/10.1515/ajle-2023-0125 Published Online: 2023-12-27

This paper is open access.

Neural (brain) implants and hype (long read)

There was a big splash a few weeks ago when it was announced that the brain implant from Neuralink (an Elon Musk company) had been surgically inserted into its first human patient.

Getting approval

David Tuffley, senior lecturer in Applied Ethics & CyberSecurity at Griffith University (Australia), provides a good overview of the road Neuralink took to getting FDA (US Food and Drug Administration) approval for human clinical trials in his May 29, 2023 essay for The Conversation, Note: Links have been removed,

Since its founding in 2016, Elon Musk’s neurotechnology company Neuralink has had the ambitious mission to build a next-generation brain implant with at least 100 times more brain connections than devices currently approved by the US Food and Drug Administration (FDA).

The company has now reached a significant milestone, having received FDA approval to begin human trials. So what were the issues keeping the technology in the pre-clinical trial phase for as long as it was? And have these concerns been addressed?

Neuralink is making a Class III medical device known as a brain-computer interface (BCI). The device connects the brain to an external computer via a Bluetooth signal, enabling continuous communication back and forth.

The device itself is a coin-sized unit called a Link. It’s implanted within a small disk-shaped cutout in the skull using a precision surgical robot. The robot splices a thousand tiny threads from the Link to certain neurons in the brain. [emphasis mine] Each thread is about a quarter the diameter of a human hair.

The company says the device could enable precise control of prosthetic limbs, giving amputees natural motor skills. It could revolutionise treatment for conditions such as Parkinson’s disease, epilepsy and spinal cord injuries. It also shows some promise for potential treatment of obesity, autism, depression, schizophrenia and tinnitus.

Several other neurotechnology companies and researchers have already developed BCI technologies that have helped people with limited mobility regain movement and complete daily tasks.

In February 2021, Musk said Neuralink was working with the FDA to secure permission to start initial human trials later that year. But human trials didn’t commence in 2021.

Then, in March 2022, Neuralink made a further application to the FDA to establish its readiness to begin human trials.

One year and three months later, on May 25 2023, Neuralink finally received FDA approval for its first human clinical trial. Given how hard Neuralink has pushed for permission to begin, we can assume it will begin very soon. [emphasis mine]

The approval has come less than six months after the US Office of the Inspector General launched an investigation into Neuralink over potential animal welfare violations. [emphasis mine]

In accessible language, Tuffley goes on to discuss the FDA’s specific technical issues with implants and how they were addressed in his May 29, 2023 essay.

More about how Neuralink’s implant works and some concerns

Canadian Broadcasting Corporation (CBC) journalist Andrew Chang offers an almost 13 minute video, “Neuralink brain chip’s first human patient. How does it work?” Chang is a little overenthused for my taste but he offers some good information about neural implants, along with informative graphics in his presentation.

So, Tuffley was right about Neuralink getting ready quickly for human clinical trials as you can guess from the title of Chang’s CBC video.

Jennifer Korn announced that recruitment had started in her September 20, 2023 article for CNN (Cable News Network), Note: Links have been removed,

Elon Musk’s controversial biotechnology startup Neuralink opened up recruitment for its first human clinical trial Tuesday, according to a company blog.

After receiving approval from an independent review board, Neuralink is set to begin offering brain implants to paralysis patients as part of the PRIME Study, the company said. PRIME, short for Precise Robotically Implanted Brain-Computer Interface, is being carried out to evaluate both the safety and functionality of the implant.

Trial patients will have a chip surgically placed in the part of the brain that controls the intention to move. The chip, installed by a robot, will then record and send brain signals to an app, with the initial goal being “to grant people the ability to control a computer cursor or keyboard using their thoughts alone,” the company wrote.

Those with quadriplegia [sometimes known as tetraplegia] due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS) may qualify for the six-year-long study – 18 months of at-home and clinic visits followed by follow-up visits over five years. Interested people can sign up in the patient registry on Neuralink’s website.

Musk has been working on Neuralink’s goal of using implants to connect the human brain to a computer for five years, but the company so far has only tested on animals. The company also faced scrutiny after a monkey died in project testing in 2022 as part of efforts to get the animal to play Pong, one of the first video games.

I mentioned three Reuters investigative journalists who were reporting on Neuralink’s animal abuse allegations (emphasized in Tuffley’s essay) in a July 7, 2023 posting, “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” Later that year, Neuralink was cleared by the US Department of Agriculture (see September 24, 2023 article by Mahnoor Jehangir for BNN Breaking).

Plus, Neuralink was being investigated over more allegations according to a February 9, 2023 article by Rachel Levy for Reuters, this time regarding hazardous pathogens,

The U.S. Department of Transportation said on Thursday it is investigating Elon Musk’s brain-implant company Neuralink over the potentially illegal movement of hazardous pathogens.

A Department of Transportation spokesperson told Reuters about the probe after the Physicians Committee for Responsible Medicine (PCRM), an animal-welfare advocacy group, wrote to Secretary of Transportation Pete Buttigieg earlier on Thursday to alert it of records it obtained on the matter.

PCRM said it obtained emails and other documents that suggest unsafe packaging and movement of implants removed from the brains of monkeys. These implants may have carried infectious diseases in violation of federal law, PCRM said.

There’s an update about the hazardous materials in the next section. Spoiler alert, the company got fined.

Neuralink’s first human implant

A January 30, 2024 article (Associated Press with files from Reuters) on the Canadian Broadcasting Corporation’s (CBC) online news webspace heralded the latest about Neuralink’s human clinical trials,

The first human patient received an implant from Elon Musk’s computer-brain interface company Neuralink over the weekend, the billionaire says.

In a post Monday [January 29, 2024] on X, the platform formerly known as Twitter, Musk said that the patient received the implant the day prior and was “recovering well.” He added that “initial results show promising neuron spike detection.”

Spikes are activity by neurons, which the National Institutes of Health describe as cells that use electrical and chemical signals to send information around the brain and to the body.

The billionaire, who owns X and co-founded Neuralink, did not provide additional details about the patient.

When Neuralink announced in September [2023] that it would begin recruiting people, the company said it was searching for individuals with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis, commonly known as ALS or Lou Gehrig’s disease.

Neuralink reposted Musk’s Monday [January 29, 2024] post on X, but did not publish any additional statements acknowledging the human implant. The company did not immediately respond to requests for comment from The Associated Press or Reuters on Tuesday [January 30, 2024].

In a separate Monday [January 29, 2024] post on X, Musk said that the first Neuralink product is called “Telepathy” — which, he said, will enable users to control their phones or computers “just by thinking.” He said initial users would be those who have lost use of their limbs.

The startup’s PRIME Study is a trial for its wireless brain-computer interface to evaluate the safety of the implant and surgical robot.

Now for the hazardous materials, from the January 30, 2024 article, Note: A link has been removed,

Earlier this month [January 2024], a Reuters investigation found that Neuralink was fined for violating U.S. Department of Transportation (DOT) rules regarding the movement of hazardous materials. During inspections of the company’s facilities in Texas and California in February 2023, DOT investigators found the company had failed to register itself as a transporter of hazardous material.

They also found improper packaging of hazardous waste, including the flammable liquid Xylene. Xylene can cause headaches, dizziness, confusion, loss of muscle co-ordination and even death, according to the U.S. Centers for Disease Control and Prevention.

The records do not say why Neuralink would need to transport hazardous materials or whether any harm resulted from the violations.

Skeptical thoughts about Elon Musk and Neuralink

Earlier this month (February 2024), the British Broadcasting Corporation (BBC) published an article by health reporters Jim Reed and Joe McFadden that highlights the history of brain implants and their possibilities, and notes some of Elon Musk’s more outrageous claims for Neuralink’s brain implants,

Elon Musk is no stranger to bold claims – from his plans to colonise Mars to his dreams of building transport links underneath our biggest cities. This week the world’s richest man said his Neuralink division had successfully implanted its first wireless brain chip into a human.

Is he right when he says this technology could – in the long term – save the human race itself?

Sticking electrodes into brain tissue is really nothing new.

In the 1960s and 70s electrical stimulation was used to trigger or suppress aggressive behaviour in cats. By the early 2000s monkeys were being trained to move a cursor around a computer screen using just their thoughts.

“It’s nothing novel, but implantable technology takes a long time to mature, and reach a stage where companies have all the pieces of the puzzle, and can really start to put them together,” says Anne Vanhoestenberghe, professor of active implantable medical devices, at King’s College London.

Neuralink is one of a growing number of companies and university departments attempting to refine and ultimately commercialise this technology. The focus, at least to start with, is on paralysis and the treatment of complex neurological conditions.

Reed and McFadden’s February 2024 BBC article describes a few of the other brain implant efforts, Note: Links have been removed,

One of its [Neuralink’s] main rivals, a start-up called Synchron backed by funding from investment firms controlled by Bill Gates and Jeff Bezos, has already implanted its stent-like device into 10 patients.

Back in December 2021, Philip O’Keefe, a 62-year old Australian who lives with a form of motor neurone disease, composed the first tweet using just his thoughts to control a cursor.

And researchers at Lausanne University in Switzerland have shown it is possible for a paralysed man to walk again by implanting multiple devices to bypass damage caused by a cycling accident.

In a research paper published this year, they demonstrated a signal could be beamed down from a device in his brain to a second device implanted at the base of his spine, which could then trigger his limbs to move.

Some people living with spinal injuries are sceptical about the sudden interest in this new kind of technology.

“These breakthroughs get announced time and time again and don’t seem to be getting any further along,” says Glyn Hayes, who was paralysed in a motorbike accident in 2017, and now runs public affairs for the Spinal Injuries Association.

“If I could have anything back, it wouldn’t be the ability to walk. It would be putting more money into a way of removing nerve pain, for example, or ways to improve bowel, bladder and sexual function.” [emphasis mine]

Musk, however, is focused on something far more grand for Neuralink implants, from Reed and McFadden’s February 2024 BBC article, Note: A link has been removed,

But for Elon Musk, “solving” brain and spinal injuries is just the first step for Neuralink.

The longer-term goal is “human/AI symbiosis” [emphasis mine], something he describes as “species-level important”.

Musk himself has already talked about a future where his device could allow people to communicate with a phone or computer “faster than a speed typist or auctioneer”.

In the past, he has even said saving and replaying memories may be possible, although he recognised “this is sounding increasingly like a Black Mirror episode.”

One of the experts quoted in Reed and McFadden’s February 2024 BBC article asks a pointed question,

… “At the moment, I’m struggling to see an application that a consumer would benefit from, where they would take the risk of invasive surgery,” says Prof Vanhoestenberghe.

“You’ve got to ask yourself, would you risk brain surgery just to be able to order a pizza on your phone?”

Rae Hodge’s February 11, 2024 article about Elon Musk and his hyped up Neuralink implant for Salon is worth reading in its entirety but for those who don’t have the time or need a little persuading, here are a few excerpts, Note 1: This is a warning; Hodge provides more detail about the animal cruelty allegations; Note 2: Links have been removed,

Elon Musk’s controversial brain-computer interface (BCI) tech, Neuralink, has supposedly been implanted in its first recipient — and as much as I want to see progress for treatment of paralysis and neurodegenerative disease, I’m not celebrating. I bet the neuroscientists he reportedly drove out of the company aren’t either, especially not after seeing the gruesome torture of test monkeys and apparent cover-up that paved the way for this moment. 

All of which is an ethics horror show on its own. But the timing of Musk’s overhyped implant announcement gives it an additional insulting subtext. Football players are currently in a battle for their lives against concussion-based brain diseases that plague autopsy reports of former NFL players. And Musk’s boast of false hope came just two weeks before living players take the field in the biggest and most brutal game of the year. [2024 Super Bowl LVIII]

ESPN’s Kevin Seifert reports neuro-damage is up this year as “players suffered a total of 52 concussions from the start of training camp to the beginning of the regular season. The combined total of 213 preseason and regular season concussions was 14% higher than 2021 but within range of the three-year average from 2018 to 2020 (203).”

I’m a big fan of body-tech: pacemakers, 3D-printed hips and prosthetic limbs that allow you to wear your wedding ring again after 17 years. Same for brain chips. But BCI is the slow-moving front of body-tech development for good reason. The brain is too understudied. Consequences of the wrong move are dire. Overpromising marketable results on profit-driven timelines — on the backs of such a small community of researchers in a relatively new field — would be either idiotic or fiendish. 

Brown University’s research in the sector goes back to the 1990s. Since the emergence of a floodgate-opening 2002 study and the first implant in 2004 by med-tech company BrainGate, more promising results have inspired broader investment into careful research. But BrainGate’s clinical trials started back in 2009, and as noted by Business Insider’s Hilary Brueck, are expected to continue until 2038 — with only 15 participants who have devices installed. 

Anne Vanhoestenberghe is a professor of active implantable medical devices at King’s College London. In a recent release, she cautioned against the kind of hype peddled by Musk.

“Whilst there are a few other companies already using their devices in humans and the neuroscience community have made remarkable achievements with those devices, the potential benefits are still significantly limited by technology,” she said. “Developing and validating core technology for long term use in humans takes time and we need more investments to ensure we do the work that will underpin the next generation of BCIs.” 

Neuralink is a metal coin in your head that connects to something as flimsy as an app. And we’ve seen how Elon treats those. We’ve also seen corporate goons steal a veteran’s prosthetic legs — and companies turn brain surgeons and dentists into repo-men by having them yank anti-epilepsy chips out of people’s skulls, and dentures out of their mouths. 

“I think we have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury,” Musk said at a 2023 tech summit, adding that the chip could possibly “make up for whatever lost capacity somebody has.”

Maybe BCI can. But only in the careful hands of scientists who don’t have Musk squawking “go faster!” over their shoulders. His greedy frustration with the speed of BCI science is telling, as is the animal cruelty it reportedly prompted.

There have been other examples of Musk’s grandiosity. Notably, David Lee expressed skepticism about hyperloop in his August 13, 2013 article for BBC News online,

Is Elon Musk’s Hyperloop just a pipe dream?

Much like the pun in the headline, the bright idea of transporting people using some kind of vacuum-like tube is neither new nor imaginative.

There was Robert Goddard, considered the “father of modern rocket propulsion”, who claimed in 1909 that his vacuum system could suck passengers from Boston to New York at 1,200mph.

And then there were Soviet plans for an amphibious monorail – mooted in 1934 – in which two long pods would start their journey attached to a metal track before flying off the end and slipping into the water like a two-fingered Kit Kat dropped into some tea.

So ever since inventor and entrepreneur Elon Musk hit the world’s media with his plans for the Hyperloop, a healthy dose of scepticism has been in the air.

“This is by no means a new idea,” says Rod Muttram, formerly of Bombardier Transportation and Railtrack.

“It has been previously suggested as a possible transatlantic transport system. The only novel feature I see is the proposal to put the tubes above existing roads.”

Here’s the latest I’ve found on hyperloop, from the Hyperloop Wikipedia entry,

As of 2024, some companies continued to pursue technology development under the hyperloop moniker; however, one of the biggest, well-funded players, Hyperloop One, declared bankruptcy and ceased operations in 2023.[15]

Musk is impatient and impulsive as noted in a September 12, 2023 posting by Mike Masnick on Techdirt, Note: A link has been removed,

The Batshit Crazy Story Of The Day Elon Musk Decided To Personally Rip Servers Out Of A Sacramento Data Center

Back on Christmas Eve [December 24, 2022] of last year there were some reports that Elon Musk was in the process of shutting down Twitter’s Sacramento data center. In that article, a number of ex-Twitter employees were quoted about how much work it would be to do that cleanly, noting that there’s a ton of stuff hardcoded in Twitter code referring to that data center (hold that thought).

That same day, Elon tweeted out that he had “disconnected one of the more sensitive server racks.”

Masnick follows with a story of reckless behaviour from someone who should have known better.

Ethics of implants—where to look for more information

While Musk doesn’t use the term when he describes a “human/AI symbiosis” (presumably by way of a neural implant), he’s talking about a cyborg. Here’s a 2018 paper, which looks at some of the implications,

Do you want to be a cyborg? The moderating effect of ethics on neural implant acceptance by Eva Reinares-Lara, Cristina Olarte-Pascual, and Jorge Pelegrín-Borondo. Computers in Human Behavior, Volume 85, August 2018, Pages 43-53. DOI: https://doi.org/10.1016/j.chb.2018.03.032

This paper is open access.

Getting back to Neuralink, I have two blog posts that discuss the company and the ethics of brain implants from way back in 2021.

First, there’s Jazzy Benes’ March 1, 2021 posting on the Santa Clara University’s Markkula Center for Applied Ethics blog. It stands out as it includes a discussion of the disabled community’s issues, Note: Links have been removed,

In the heart of Silicon Valley we are constantly enticed by the newest technological advances. With the big influencers Grimes [a Canadian musician and the mother of three children with Elon Musk] and Lil Uzi Vert publicly announcing their willingness to become experimental subjects for Elon Musk’s Neuralink brain implantation device, we are left wondering if future technology will actually give us “the knowledge of the Gods.” Is it part of the natural order for humans to become omniscient beings? Who will have access to the devices? What other ethical considerations must be discussed before releasing such technology to the public?

A significant issue that arises from developing technologies for the disabled community is the assumption that disabled persons desire the abilities of what some abled individuals may define as “normal.” Individuals with disabilities may object to technologies intended to make them fit an able-bodied norm. “Normal” is relative to each individual, and it could be potentially harmful to use a deficit view of disability, which means judging a disability as a deficiency. However, this is not to say that all disabled individuals will reject a technology that may enhance their abilities. Instead, I believe it is a consideration that must be recognized when developing technologies for the disabled community, and it can only be addressed through communication with disabled persons. As a result, I believe this is a conversation that must be had with the community for whom the technology is developed–disabled persons.

With technologies that aim to address disabilities, we walk a fine line between therapeutics and enhancement. Though not the first neural implant medical device, the Link may have been the first BCI system openly discussed for its potential transhumanism uses, such as “enhanced cognitive abilities, memory storage and retrieval, gaming, telepathy, and even symbiosis with machines.” …

Benes also discusses transhumanism, privacy issues, and consent issues. It’s a thoughtful reading experience.

Second is a July 9, 2021 posting by anonymous on the University of California at Berkeley School of Information blog which provides more insight into privacy and other issues associated with data collection (and introduced me to the concept of decisional interference),

As the development of microchips furthers and advances in neuroscience occur, the possibility for seamless brain-machine interfaces, where a device decodes inputs from the user’s brain to perform functions, becomes more of a reality. These various forms of these technologies already exist. However, technological advances have made implantable and portable devices possible. Imagine a future where humans don’t need to talk to each other, but rather can transmit their thoughts directly to another person. This idea is the eventual goal of Elon Musk, the founder of Neuralink. Currently, Neuralink is one of the main companies involved in the advancement of this type of technology. Analysis of the Neuralink’s technology and their overall mission statement provide an interesting insight into the future of this type of human-computer interface and the potential privacy and ethical concerns with this technology.

As this technology further develops, several privacy and ethical concerns come into question. To begin, using Solove’s Taxonomy as a privacy framework, many areas of potential harm are revealed. In the realm of information collection, there is much risk. Brain-computer interfaces, depending on where they are implanted, could have access to people’s most private thoughts and emotions. This information would need to be transmitted to another device for processing. The collection of this information by companies such as advertisers would represent a major breach of privacy. Additionally, there is risk to the user from information processing. These devices must work concurrently with other devices and often wirelessly. Given the widespread importance of cloud computing in much of today’s technology, offloading information from these devices to the cloud would be likely. Having the data stored in a database puts the user at the risk of secondary use if proper privacy policies are not implemented. The trove of information stored within the information collected from the brain is vast. These datasets could be combined with existing databases such as browsing history on Google to provide third parties with unimaginable context on individuals. Lastly, there is risk for information dissemination, more specifically, exposure. The information collected and processed by these devices would need to be stored digitally. Keeping such private information, even if anonymized, would be a huge potential for harm, as the contents of the information may in itself be re-identifiable to a specific individual. Lastly, there is risk for invasions such as decisional interference. Brain-machine interfaces would not only be able to read information in the brain but also write information. This would allow the device to make potential emotional changes in its users, which would be a major example of decisional interference. …

For the most recent Neuralink and brain implant ethics piece, there’s this February 14, 2024 essay on The Conversation, which, unusually for this publication, was solicited by the editors, Note: Links have been removed,

In January 2024, Musk announced that Neuralink implanted its first chip in a human subject’s brain. The Conversation reached out to two scholars at the University of Washington School of Medicine – Nancy Jecker, a bioethicist, and Andrew Ko, a neurosurgeon who implants brain chip devices – for their thoughts on the ethics of this new horizon in neuroscience.

Information about the implant, however, is scarce, aside from a brochure aimed at recruiting trial subjects. Neuralink did not register at ClinicalTrials.gov, as is customary, and required by some academic journals. [all emphases mine]

Some scientists are troubled by this lack of transparency. Sharing information about clinical trials is important because it helps other investigators learn about areas related to their research and can improve patient care. Academic journals can also be biased toward positive results, preventing researchers from learning from unsuccessful experiments.

Fellows at the Hastings Center, a bioethics think tank, have warned that Musk’s brand of “science by press release, while increasingly common, is not science. [emphases mine]” They advise against relying on someone with a huge financial stake in a research outcome to function as the sole source of information.

When scientific research is funded by government agencies or philanthropic groups, its aim is to promote the public good. Neuralink, on the other hand, embodies a private equity model [emphasis mine], which is becoming more common in science. Firms pooling funds from private investors to back science breakthroughs may strive to do good, but they also strive to maximize profits, which can conflict with patients’ best interests.

In 2022, the U.S. Department of Agriculture investigated animal cruelty at Neuralink, according to a Reuters report, after employees accused the company of rushing tests and botching procedures on test animals in a race for results. The agency’s inspection found no breaches, according to a letter from the USDA secretary to lawmakers, which Reuters reviewed. However, the secretary did note an “adverse surgical event” in 2019 that Neuralink had self-reported.

In a separate incident also reported by Reuters, the Department of Transportation fined Neuralink for violating rules about transporting hazardous materials, including a flammable liquid.

…the possibility that the device could be increasingly shown to be helpful for people with disabilities, but become unavailable due to loss of research funding. For patients whose access to a device is tied to a research study, the prospect of losing access after the study ends can be devastating. [emphasis mine] This raises thorny questions about whether it is ever ethical to provide early access to breakthrough medical interventions prior to their receiving full FDA approval.

Not registering a clinical trial would seem to suggest there won’t be much oversight. As for Musk’s “science by press release” activities, I hope those will be treated with more skepticism by mainstream media although that seems unlikely given the current situation with journalism (more about that in a future post).

As for the issues associated with private equity models for science research and the problem of losing access to devices after a clinical trial is ended, my April 5, 2022 posting, “Going blind when your neural implant company flirts with bankruptcy (long read)” offers some cautionary tales, in addition to being the most comprehensive piece I’ve published on ethics and brain implants.

My July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” offers a brief overview of the international scene.

First round of seed funding announced for NSF (US National Science Foundation) Institute for Trustworthy AI in Law & Society (TRAILS)

Having published an earlier January 2024 US National Science Foundation (NSF) funding announcement for the TRAILS (Trustworthy AI in Law & Society) Institute yesterday (February 21, 2024), I’m following up with an announcement about the initiative’s first round of seed funding.

From a TRAILS undated ‘story‘ by Tom Ventsias on the initiative’s website (and published January 24, 2024 as a University of Maryland news release on EurekAlert),

The Institute for Trustworthy AI in Law & Society (TRAILS) has unveiled an inaugural round of seed grants designed to integrate a greater diversity of stakeholders into the artificial intelligence (AI) development and governance lifecycle, ultimately creating positive feedback loops to improve trustworthiness, accessibility and efficacy in AI-infused systems.

The eight grants announced on January 24, 2024—ranging from $100K to $150K apiece and totaling just over $1.5 million—were awarded to interdisciplinary teams of faculty associated with the institute. Funded projects include developing AI chatbots to assist with smoking cessation, designing animal-like robots that can improve autism-specific support at home, and exploring how people use and rely upon AI-generated language translation systems.

All eight projects fall under the broader mission of TRAILS, which is to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“At the speed with which AI is developing, our seed grant program will enable us to keep pace—or even stay one step ahead—by incentivizing cutting-edge research and scholarship that spans AI design, development and governance,” said Hal Daumé III, a professor of computer science at the University of Maryland who is the director of TRAILS.

After TRAILS was launched in May 2023 with a $20 million award from the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST), lead faculty met to brainstorm how the institute could best move forward with research, innovation and outreach that would have a meaningful impact.

They determined a seed grant program could quickly leverage the wide range of academic talent at TRAILS’ four primary institutions. This includes the University of Maryland’s expertise in computing and human-computer interaction; George Washington University’s strengths in systems engineering and AI as it relates to law and governance; Morgan State University’s work in addressing bias and inequity in AI; and Cornell University’s research in human behavior and decision-making.

“NIST and NSF’s support of TRAILS enables us to create a structured mechanism to reach across academic and institutional boundaries in search of innovative solutions,” said David Broniatowski, an associate professor of engineering management and systems engineering at George Washington University who leads TRAILS activities on the GW campus. “Seed funding from TRAILS will enable multidisciplinary teams to identify opportunities for their research to have impact, and to build the case for even larger, multi-institutional efforts.”

Further discussions were held at a TRAILS faculty retreat to identify seed grant guidelines and collaborative themes that mirror TRAILS’ primary research thrusts—participatory design, methods and metrics, evaluating trust, and participatory governance.

“Some of the funded projects are taking a fresh look at ideas we may have already been working on individually, and others are taking an entirely new approach to timely, pressing issues involving AI and machine learning,” said Virginia Byrne, an assistant professor of higher education & student affairs at Morgan State who is leading TRAILS activities on that campus and who served on the seed grant review committee.

A second round of seed funding will be announced later this year, said Darren Cambridge, who was recently hired as managing director of TRAILS to lead its day-to-day operations.

Projects selected in the first round are eligible for a renewal, while other TRAILS faculty—or any faculty member at the four primary TRAILS institutions—can submit new proposals for consideration, Cambridge said.

Ultimately, the seed funding program is expected to strengthen and incentivize other TRAILS activities that are now taking shape, including K–12 education and outreach programs, AI policy seminars and workshops on Capitol Hill, and multiple postdoc opportunities for early-career researchers.

“We want TRAILS to be the ‘go-to’ resource for educators, policymakers and others who are seeking answers and solutions on how to build, manage and use AI systems that will benefit all of society,” Cambridge said.

The eight projects selected for the first round of TRAILS seed funding are:

Chung Hyuk Park and Zoe Szajnfarber from GW and Hernisa Kacorri from UMD aim to improve the support infrastructure and access to quality care for families of autistic children. Early interventions are strongly correlated with positive outcomes, while provider shortages and financial burdens have raised challenges—particularly for families without sufficient resources and experience. The researchers will develop novel parent-robot teaming for the home, advance the assistive technology, and assess the impact of teaming to promote more trust in human-robot collaborative settings.

Soheil Feizi from UMD and Robert Brauneis from GW will investigate various issues surrounding text-to-image [emphasis mine] generative AI models like Stable Diffusion, DALL-E 2, and Midjourney, focusing on myriad legal, aesthetic and computational aspects that are currently unresolved. A key question is how copyright law might adapt if these tools create works in an artist’s style. The team will explore how generative AI models represent individual artists’ styles, and whether those representations are complex and distinctive enough to form stable objects of protection. The researchers will also explore legal and technical questions to determine if specific artworks, especially rare and unique ones, have already been used to train AI models.

Huaishu Peng and Ge Gao from UMD will work with Malte Jung from Cornell to increase trust-building in embodied AI systems, which bridge the gap between computers and human physical senses. Specifically, the researchers will explore embodied AI systems in the form of miniaturized on-body or desktop robotic systems that can enable the exchange of nonverbal cues between blind and sighted individuals, an essential component of efficient collaboration. The researchers will also examine multiple factors—both physical and mental—in order to gain a deeper understanding of both groups’ values related to teamwork facilitated by embodied AI.

Marine Carpuat and Ge Gao from UMD will explore “mental models”—how humans perceive things—for language translation systems used by millions of people daily. They will focus on how individuals, depending on their language fluency and familiarity with the technology, make sense of their “error boundary”—that is, deciding whether an AI-generated translation is correct or incorrect. The team will also develop innovative techniques to teach users how to improve their mental models as they interact with machine translation systems.

Hal Daumé III, Furong Huang and Zubin Jelveh from UMD and Donald Braman from GW will propose new philosophies grounded in law to conceptualize, evaluate and achieve “effort-aware fairness,” which involves algorithms for determining whether an individual or a group of individuals is discriminated against in terms of equality of effort. The researchers will develop new metrics, evaluate fairness of datasets, and design novel algorithms that enable AI auditors to uncover and potentially correct unfair decisions.

Lorien Abroms and David Broniatowski from GW will recruit smokers to study the reliability of using generative chatbots, such as ChatGPT, as the basis for a digital smoking cessation program. Additional work will examine the acceptability by smokers and their perceptions of trust in using this rapidly evolving technology for help to quit smoking. The researchers hope their study will directly inform future digital interventions for smoking cessation and/or modifying other health behaviors.

Adam Aviv from GW and Michelle Mazurek from UMD will examine bias, unfairness and untruths such as sexism, racism and other forms of misrepresentation that come out of certain AI and machine learning systems. Though some systems have public warnings of potential biases, the researchers want to explore how users understand these warnings, if they recognize how biases may manifest themselves in the AI-generated responses, and how users attempt to expose, mitigate and manage potentially biased responses.

Susan Ariel Aaronson and David Broniatowski from GW plan to create a prototype of a searchable, easy-to-use website to enable policymakers to better utilize academic research related to trustworthy and participatory AI. The team will analyze research publications by TRAILS-affiliated researchers to ascertain which ones may have policy implications. Then, each relevant publication will be summarized and categorized by research questions, issues, keywords, and relevant policymaking uses. The resulting database prototype will enable the researchers to test the utility of this resource for policymakers over time.
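Before moving on, a quick illustration of the “error boundary” idea from the Carpuat–Gao translation project described above, since it may not be obvious what that judgment looks like in practice. The short Python sketch below is mine, not the researchers’: it assumes a translation system exposes per-token probabilities (many neural systems can) and compares the model’s average confidence against an arbitrary threshold to decide when a translation should be flagged for human verification. The function names and the threshold value are illustrative assumptions, not anything published by TRAILS.

# Illustrative sketch only; not TRAILS project code.
# The "error boundary": deciding whether an AI-generated translation can be
# trusted as-is or needs checking. A crude machine-side proxy is the model's
# own confidence, e.g. the mean log-probability of the tokens it produced,
# compared against a threshold the user or interface has effectively chosen.

import math
from typing import List

def mean_logprob(token_probs: List[float]) -> float:
    """Average log-probability of the generated tokens (higher = more confident)."""
    return sum(math.log(p) for p in token_probs) / len(token_probs)

def needs_verification(token_probs: List[float], threshold: float = -0.7) -> bool:
    """Flag a translation for human review when confidence falls below the threshold.
    The threshold here is an arbitrary illustration, not a published figure."""
    return mean_logprob(token_probs) < threshold

# Hypothetical per-token probabilities for two translations:
confident = [0.92, 0.88, 0.95, 0.90]   # high confidence -> not flagged
shaky     = [0.55, 0.40, 0.62, 0.35]   # low confidence  -> flagged
for probs in (confident, shaky):
    print(needs_verification(probs))

A user’s mental model of the system is, in effect, a personal version of that threshold: the better calibrated it is to their language fluency and to the system’s actual behavior, the more accurately they can tell a usable translation from one that needs checking.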

Yes, things are moving quickly where AI is concerned. Text-to-image models are being investigated by Soheil Feizi and Robert Brauneis and, since the January 2024 funding announcement, text-to-video has been announced (OpenAI’s Sora was previewed on February 15, 2024). I wonder whether it will be added to the project.

One more comment: the project from Huaishu Peng, Ge Gao, and Malte Jung on “… trust-building in embodied AI systems …” brings to mind Elon Musk’s stated goal of using brain implants for “human/AI symbiosis.” (I have more about that in an upcoming post.) Hopefully, the website for policymakers proposed by Susan Ariel Aaronson and David Broniatowski will be able to keep up with what’s happening in the field of AI, including research on the impact of private investments designed primarily to generate profits.