Tag Archives: United Nations Educational, Scientific and Cultural Organization (UNESCO)

Neural (brain) implants and hype (long read)

There was a big splash a few weeks ago when it was announced that a brain implant from Neuralink (an Elon Musk company) had been surgically inserted into its first human patient.

Getting approval

David Tuffley, senior lecturer in Applied Ethics & CyberSecurity at Griffith University (Australia), provides a good overview of the road Neuralink took to get FDA (US Food and Drug Administration) approval for human clinical trials in his May 29, 2023 essay for The Conversation, Note: Links have been removed,

Since its founding in 2016, Elon Musk’s neurotechnology company Neuralink has had the ambitious mission to build a next-generation brain implant with at least 100 times more brain connections than devices currently approved by the US Food and Drug Administration (FDA).

The company has now reached a significant milestone, having received FDA approval to begin human trials. So what were the issues keeping the technology in the pre-clinical trial phase for as long as it was? And have these concerns been addressed?

Neuralink is making a Class III medical device known as a brain-computer interface (BCI). The device connects the brain to an external computer via a Bluetooth signal, enabling continuous communication back and forth.

The device itself is a coin-sized unit called a Link. It’s implanted within a small disk-shaped cutout in the skull using a precision surgical robot. The robot splices a thousand tiny threads from the Link to certain neurons in the brain. [emphasis mine] Each thread is about a quarter the diameter of a human hair.

The company says the device could enable precise control of prosthetic limbs, giving amputees natural motor skills. It could revolutionise treatment for conditions such as Parkinson’s disease, epilepsy and spinal cord injuries. It also shows some promise for potential treatment of obesity, autism, depression, schizophrenia and tinnitus.

Several other neurotechnology companies and researchers have already developed BCI technologies that have helped people with limited mobility regain movement and complete daily tasks.

In February 2021, Musk said Neuralink was working with the FDA to secure permission to start initial human trials later that year. But human trials didn’t commence in 2021.

Then, in March 2022, Neuralink made a further application to the FDA to establish its readiness to begin human trials.

One year and three months later, on May 25 2023, Neuralink finally received FDA approval for its first human clinical trial. Given how hard Neuralink has pushed for permission to begin, we can assume it will begin very soon. [emphasis mine]

The approval has come less than six months after the US Office of the Inspector General launched an investigation into Neuralink over potential animal welfare violations. [emphasis mine]

In the rest of his May 29, 2023 essay, Tuffley goes on to discuss, in accessible language, the FDA’s specific technical issues with the implants and how they were addressed.

More about how Neuralink’s implant works and some concerns

Canadian Broadcasting Corporation (CBC) journalist Andrew Chang offers an almost 13-minute video, “Neuralink brain chip’s first human patient. How does it work?” Chang is a little overenthused for my taste, but he offers some good information about neural implants, along with informative graphics, in his presentation.

As you can guess from the title of Chang’s CBC video, Tuffley was right about Neuralink moving quickly into human clinical trials.

In her September 20, 2023 article for CNN (Cable News Network), Jennifer Korn announced that recruitment had started, Note: Links have been removed,

Elon Musk’s controversial biotechnology startup Neuralink opened up recruitment for its first human clinical trial Tuesday, according to a company blog.

After receiving approval from an independent review board, Neuralink is set to begin offering brain implants to paralysis patients as part of the PRIME Study, the company said. PRIME, short for Precise Robotically Implanted Brain-Computer Interface, is being carried out to evaluate both the safety and functionality of the implant.

Trial patients will have a chip surgically placed in the part of the brain that controls the intention to move. The chip, installed by a robot, will then record and send brain signals to an app, with the initial goal being “to grant people the ability to control a computer cursor or keyboard using their thoughts alone,” the company wrote.

Those with quadriplegia [sometimes known as tetraplegia] due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS) may qualify for the six-year-long study – 18 months of at-home and clinic visits followed by follow-up visits over five years. Interested people can sign up in the patient registry on Neuralink’s website.

Musk has been working on Neuralink’s goal of using implants to connect the human brain to a computer for five years, but the company so far has only tested on animals. The company also faced scrutiny after a monkey died in project testing in 2022 as part of efforts to get the animal to play Pong, one of the first video games.

I mentioned three Reuters investigative journalists who were reporting on Neuralink’s animal abuse allegations (emphasized in Tuffley’s essay) in a July 7, 2023 posting, “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” Later that year, Neuralink was cleared by the US Department of Agriculture (see the September 24, 2023 article by Mahnoor Jehangir for BNN Breaking).

Plus, Neuralink was being investigated over further allegations, this time regarding hazardous pathogens, according to a February 9, 2023 article by Rachel Levy for Reuters,

The U.S. Department of Transportation said on Thursday it is investigating Elon Musk’s brain-implant company Neuralink over the potentially illegal movement of hazardous pathogens.

A Department of Transportation spokesperson told Reuters about the probe after the Physicians Committee of Responsible Medicine (PCRM), an animal-welfare advocacy group, wrote to Secretary of Transportation Pete Buttigieg earlier on Thursday to alert it of records it obtained on the matter.

PCRM said it obtained emails and other documents that suggest unsafe packaging and movement of implants removed from the brains of monkeys. These implants may have carried infectious diseases in violation of federal law, PCRM said.

There’s an update about the hazardous materials in the next section. Spoiler alert: the company got fined.

Neuralink’s first human implant

A January 30, 2024 article (Associated Press with files from Reuters) on the Canadian Broadcasting Corporation’s (CBC) online news webspace heralded the latest about Neuralink’s human clinical trials,

The first human patient received an implant from Elon Musk’s computer-brain interface company Neuralink over the weekend, the billionaire says.

In a post Monday [January 29, 2024] on X, the platform formerly known as Twitter, Musk said that the patient received the implant the day prior and was “recovering well.” He added that “initial results show promising neuron spike detection.”

Spikes are activity by neurons, which the National Institutes of Health describe as cells that use electrical and chemical signals to send information around the brain and to the body.

The billionaire, who owns X and co-founded Neuralink, did not provide additional details about the patient.

When Neuralink announced in September [2023] that it would begin recruiting people, the company said it was searching for individuals with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis, commonly known as ALS or Lou Gehrig’s disease.

Neuralink reposted Musk’s Monday [January 29, 2024] post on X, but did not publish any additional statements acknowledging the human implant. The company did not immediately respond to requests for comment from The Associated Press or Reuters on Tuesday [January 30, 2024].

In a separate Monday [January 29, 2024] post on X, Musk said that the first Neuralink product is called “Telepathy” — which, he said, will enable users to control their phones or computers “just by thinking.” He said initial users would be those who have lost use of their limbs.

The startup’s PRIME Study is a trial for its wireless brain-computer interface to evaluate the safety of the implant and surgical robot.

Now for the hazardous materials, from the same January 30, 2024 article, Note: A link has been removed,

Earlier this month [January 2024], a Reuters investigation found that Neuralink was fined for violating U.S. Department of Transportation (DOT) rules regarding the movement of hazardous materials. During inspections of the company’s facilities in Texas and California in February 2023, DOT investigators found the company had failed to register itself as a transporter of hazardous material.

They also found improper packaging of hazardous waste, including the flammable liquid Xylene. Xylene can cause headaches, dizziness, confusion, loss of muscle co-ordination and even death, according to the U.S. Centers for Disease Control and Prevention.

The records do not say why Neuralink would need to transport hazardous materials or whether any harm resulted from the violations.

Skeptical thoughts about Elon Musk and Neuralink

Earlier this month (February 2024), the British Broadcasting Corporation (BBC) published an article by health reporters Jim Reed and Joe McFadden that highlights the history of brain implants and their possibilities, and notes some of Elon Musk’s more outrageous claims for Neuralink’s brain implants,

Elon Musk is no stranger to bold claims – from his plans to colonise Mars to his dreams of building transport links underneath our biggest cities. This week the world’s richest man said his Neuralink division had successfully implanted its first wireless brain chip into a human.

Is he right when he says this technology could – in the long term – save the human race itself?

Sticking electrodes into brain tissue is really nothing new.

In the 1960s and 70s electrical stimulation was used to trigger or suppress aggressive behaviour in cats. By the early 2000s monkeys were being trained to move a cursor around a computer screen using just their thoughts.

“It’s nothing novel, but implantable technology takes a long time to mature, and reach a stage where companies have all the pieces of the puzzle, and can really start to put them together,” says Anne Vanhoestenberghe, professor of active implantable medical devices, at King’s College London.

Neuralink is one of a growing number of companies and university departments attempting to refine and ultimately commercialise this technology. The focus, at least to start with, is on paralysis and the treatment of complex neurological conditions.

Reed and McFadden’s February 2024 BBC article describes a few of the other brain implant efforts, Note: Links have been removed,

One of its [Neuralink’s] main rivals, a start-up called Synchron backed by funding from investment firms controlled by Bill Gates and Jeff Bezos, has already implanted its stent-like device into 10 patients.

Back in December 2021, Philip O’Keefe, a 62-year old Australian who lives with a form of motor neurone disease, composed the first tweet using just his thoughts to control a cursor.

And researchers at Lausanne University in Switzerland have shown it is possible for a paralysed man to walk again by implanting multiple devices to bypass damage caused by a cycling accident.

In a research paper published this year, they demonstrated a signal could be beamed down from a device in his brain to a second device implanted at the base of his spine, which could then trigger his limbs to move.

Some people living with spinal injuries are sceptical about the sudden interest in this new kind of technology.

“These breakthroughs get announced time and time again and don’t seem to be getting any further along,” says Glyn Hayes, who was paralysed in a motorbike accident in 2017, and now runs public affairs for the Spinal Injuries Association.

“If I could have anything back, it wouldn’t be the ability to walk. It would be putting more money into a way of removing nerve pain, for example, or ways to improve bowel, bladder and sexual function.” [emphasis mine]

Musk, however, is focused on something far more grand for Neuralink implants, from Reed and McFadden’s February 2024 BBC article, Note: A link has been removed,

But for Elon Musk, “solving” brain and spinal injuries is just the first step for Neuralink.

The longer-term goal is “human/AI symbiosis” [emphasis mine], something he describes as “species-level important”.

Musk himself has already talked about a future where his device could allow people to communicate with a phone or computer “faster than a speed typist or auctioneer”.

In the past, he has even said saving and replaying memories may be possible, although he recognised “this is sounding increasingly like a Black Mirror episode.”

One of the experts quoted in Reed and McFadden’s February 2024 BBC article asks a pointed question,

… “At the moment, I’m struggling to see an application that a consumer would benefit from, where they would take the risk of invasive surgery,” says Prof Vanhoestenberghe.

“You’ve got to ask yourself, would you risk brain surgery just to be able to order a pizza on your phone?”

Rae Hodge’s February 11, 2024 Salon article about Elon Musk and his hyped-up Neuralink implant is worth reading in its entirety but, for those who don’t have the time or need a little persuading, here are a few excerpts, Note 1: This is a warning; Hodge provides more detail about the animal cruelty allegations; Note 2: Links have been removed,

Elon Musk’s controversial brain-computer interface (BCI) tech, Neuralink, has supposedly been implanted in its first recipient — and as much as I want to see progress for treatment of paralysis and neurodegenerative disease, I’m not celebrating. I bet the neuroscientists he reportedly drove out of the company aren’t either, especially not after seeing the gruesome torture of test monkeys and apparent cover-up that paved the way for this moment. 

All of which is an ethics horror show on its own. But the timing of Musk’s overhyped implant announcement gives it an additional insulting subtext. Football players are currently in a battle for their lives against concussion-based brain diseases that plague autopsy reports of former NFL players. And Musk’s boast of false hope came just two weeks before living players take the field in the biggest and most brutal game of the year. [2024 Super Bowl LVIII]

ESPN’s Kevin Seifert reports neuro-damage is up this year as “players suffered a total of 52 concussions from the start of training camp to the beginning of the regular season. The combined total of 213 preseason and regular season concussions was 14% higher than 2021 but within range of the three-year average from 2018 to 2020 (203).”

I’m a big fan of body-tech: pacemakers, 3D-printed hips and prosthetic limbs that allow you to wear your wedding ring again after 17 years. Same for brain chips. But BCI is the slow-moving front of body-tech development for good reason. The brain is too understudied. Consequences of the wrong move are dire. Overpromising marketable results on profit-driven timelines — on the backs of such a small community of researchers in a relatively new field — would be either idiotic or fiendish. 

Brown University’s research in the sector goes back to the 1990s. Since the emergence of a floodgate-opening 2002 study and the first implant in 2004 by med-tech company BrainGate, more promising results have inspired broader investment into careful research. But BrainGate’s clinical trials started back in 2009, and as noted by Business Insider’s Hilary Brueck, are expected to continue until 2038 — with only 15 participants who have devices installed. 

Anne Vanhoestenberghe is a professor of active implantable medical devices at King’s College London. In a recent release, she cautioned against the kind of hype peddled by Musk.

“Whilst there are a few other companies already using their devices in humans and the neuroscience community have made remarkable achievements with those devices, the potential benefits are still significantly limited by technology,” she said. “Developing and validating core technology for long term use in humans takes time and we need more investments to ensure we do the work that will underpin the next generation of BCIs.” 

Neuralink is a metal coin in your head that connects to something as flimsy as an app. And we’ve seen how Elon treats those. We’ve also seen corporate goons steal a veteran’s prosthetic legs — and companies turn brain surgeons and dentists into repo-men by having them yank anti-epilepsy chips out of people’s skulls, and dentures out of their mouths. 

“I think we have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury,” Musk said at a 2023 tech summit, adding that the chip could possibly “make up for whatever lost capacity somebody has.”

Maybe BCI can. But only in the careful hands of scientists who don’t have Musk squawking “go faster!” over their shoulders. His greedy frustration with the speed of BCI science is telling, as is the animal cruelty it reportedly prompted.

There have been other examples of Musk’s grandiosity. Notably, David Lee expressed skepticism about the hyperloop in his August 13, 2013 article for BBC News online,

Is Elon Musk’s Hyperloop just a pipe dream?

Much like the pun in the headline, the bright idea of transporting people using some kind of vacuum-like tube is neither new nor imaginative.

There was Robert Goddard, considered the “father of modern rocket propulsion”, who claimed in 1909 that his vacuum system could suck passengers from Boston to New York at 1,200mph.

And then there were Soviet plans for an amphibious monorail – mooted in 1934 – in which two long pods would start their journey attached to a metal track before flying off the end and slipping into the water like a two-fingered Kit Kat dropped into some tea.

So ever since inventor and entrepreneur Elon Musk hit the world’s media with his plans for the Hyperloop, a healthy dose of scepticism has been in the air.

“This is by no means a new idea,” says Rod Muttram, formerly of Bombardier Transportation and Railtrack.

“It has been previously suggested as a possible transatlantic transport system. The only novel feature I see is the proposal to put the tubes above existing roads.”

Here’s the latest I’ve found on hyperloop, from the Hyperloop Wikipedia entry,

As of 2024, some companies continued to pursue technology development under the hyperloop moniker; however, one of the biggest, well-funded players, Hyperloop One, declared bankruptcy and ceased operations in 2023.

Musk is impatient and impulsive, as noted in a September 12, 2023 posting by Mike Masnick on Techdirt, Note: A link has been removed,

The Batshit Crazy Story Of The Day Elon Musk Decided To Personally Rip Servers Out Of A Sacramento Data Center

Back on Christmas Eve [December 24, 2022] of last year there were some reports that Elon Musk was in the process of shutting down Twitter’s Sacramento data center. In that article, a number of ex-Twitter employees were quoted about how much work it would be to do that cleanly, noting that there’s a ton of stuff hardcoded in Twitter code referring to that data center (hold that thought).

That same day, Elon tweeted out that he had “disconnected one of the more sensitive server racks.”

Masnick follows with a story of reckless behaviour from someone who should have known better.

Ethics of implants—where to look for more information

While Musk doesn’t use the term when he describes a “human/AI symbiosis” (presumably by way of a neural implant), he’s talking about a cyborg. Here’s a 2018 paper, which looks at some of the implications,

Do you want to be a cyborg? The moderating effect of ethics on neural implant acceptance by Eva Reinares-Lara, Cristina Olarte-Pascual, and Jorge Pelegrín-Borondo. Computers in Human Behavior Volume 85, August 2018, Pages 43-53 DOI: https://doi.org/10.1016/j.chb.2018.03.032

This paper is open access.

Getting back to Neuralink, I have two blog posts that discuss the company and the ethics of brain implants from way back in 2021.

First, there’s Jazzy Benes’ March 1, 2021 posting on Santa Clara University’s Markkula Center for Applied Ethics blog. It stands out because it includes a discussion of the disabled community’s issues, Note: Links have been removed,

In the heart of Silicon Valley we are constantly enticed by the newest technological advances. With the big influencers Grimes [a Canadian musician and the mother of three children with Elon Musk] and Lil Uzi Vert publicly announcing their willingness to become experimental subjects for Elon Musk’s Neuralink brain implantation device, we are left wondering if future technology will actually give us “the knowledge of the Gods.” Is it part of the natural order for humans to become omniscient beings? Who will have access to the devices? What other ethical considerations must be discussed before releasing such technology to the public?

A significant issue that arises from developing technologies for the disabled community is the assumption that disabled persons desire the abilities of what some abled individuals may define as “normal.” Individuals with disabilities may object to technologies intended to make them fit an able-bodied norm. “Normal” is relative to each individual, and it could be potentially harmful to use a deficit view of disability, which means judging a disability as a deficiency. However, this is not to say that all disabled individuals will reject a technology that may enhance their abilities. Instead, I believe it is a consideration that must be recognized when developing technologies for the disabled community, and it can only be addressed through communication with disabled persons. As a result, I believe this is a conversation that must be had with the community for whom the technology is developed–disabled persons.

With technologies that aim to address disabilities, we walk a fine line between therapeutics and enhancement. Though not the first neural implant medical device, the Link may have been the first BCI system openly discussed for its potential transhumanism uses, such as “enhanced cognitive abilities, memory storage and retrieval, gaming, telepathy, and even symbiosis with machines.” …

Benes also discusses transhumanism, privacy issues, and consent issues. It’s a thoughtful reading experience.

Second is an anonymous July 9, 2021 posting on the University of California at Berkeley School of Information blog, which provides more insight into privacy and other issues associated with data collection (and introduced me to the concept of decisional interference),

As the development of microchips furthers and advances in neuroscience occur, the possibility for seamless brain-machine interfaces, where a device decodes inputs from the user’s brain to perform functions, becomes more of a reality. These various forms of these technologies already exist. However, technological advances have made implantable and portable devices possible. Imagine a future where humans don’t need to talk to each other, but rather can transmit their thoughts directly to another person. This idea is the eventual goal of Elon Musk, the founder of Neuralink. Currently, Neuralink is one of the main companies involved in the advancement of this type of technology. Analysis of the Neuralink’s technology and their overall mission statement provide an interesting insight into the future of this type of human-computer interface and the potential privacy and ethical concerns with this technology.

As this technology further develops, several privacy and ethical concerns come into question. To begin, using Solove’s Taxonomy as a privacy framework, many areas of potential harm are revealed. In the realm of information collection, there is much risk. Brain-computer interfaces, depending on where they are implanted, could have access to people’s most private thoughts and emotions. This information would need to be transmitted to another device for processing. The collection of this information by companies such as advertisers would represent a major breach of privacy. Additionally, there is risk to the user from information processing. These devices must work concurrently with other devices and often wirelessly. Given the widespread importance of cloud computing in much of today’s technology, offloading information from these devices to the cloud would be likely. Having the data stored in a database puts the user at the risk of secondary use if proper privacy policies are not implemented. The trove of information stored within the information collected from the brain is vast. These datasets could be combined with existing databases such as browsing history on Google to provide third parties with unimaginable context on individuals. Lastly, there is risk for information dissemination, more specifically, exposure. The information collected and processed by these devices would need to be stored digitally. Keeping such private information, even if anonymized, would be a huge potential for harm, as the contents of the information may in itself be re-identifiable to a specific individual. Lastly there is risk for invasions such as decisional interference. Brain-machine interfaces would not only be able to read information in the brain but also write information. This would allow the device to make potential emotional changes in its users, which would be a major example of decisional interference. …

For the most recent Neuralink and brain implant ethics piece, there’s this February 14, 2024 essay on The Conversation, which, unusually for this publication, was solicited by the editors, Note: Links have been removed,

In January 2024, Musk announced that Neuralink implanted its first chip in a human subject’s brain. The Conversation reached out to two scholars at the University of Washington School of Medicine – Nancy Jecker, a bioethicist, and Andrew Ko, a neurosurgeon who implants brain chip devices – for their thoughts on the ethics of this new horizon in neuroscience.

Information about the implant, however, is scarce, aside from a brochure aimed at recruiting trial subjects. Neuralink did not register at ClinicalTrials.gov, as is customary, and required by some academic journals. [all emphases mine]

Some scientists are troubled by this lack of transparency. Sharing information about clinical trials is important because it helps other investigators learn about areas related to their research and can improve patient care. Academic journals can also be biased toward positive results, preventing researchers from learning from unsuccessful experiments.

Fellows at the Hastings Center, a bioethics think tank, have warned that Musk’s brand of “science by press release, while increasingly common, is not science. [emphases mine]” They advise against relying on someone with a huge financial stake in a research outcome to function as the sole source of information.

When scientific research is funded by government agencies or philanthropic groups, its aim is to promote the public good. Neuralink, on the other hand, embodies a private equity model [emphasis mine], which is becoming more common in science. Firms pooling funds from private investors to back science breakthroughs may strive to do good, but they also strive to maximize profits, which can conflict with patients’ best interests.

In 2022, the U.S. Department of Agriculture investigated animal cruelty at Neuralink, according to a Reuters report, after employees accused the company of rushing tests and botching procedures on test animals in a race for results. The agency’s inspection found no breaches, according to a letter from the USDA secretary to lawmakers, which Reuters reviewed. However, the secretary did note an “adverse surgical event” in 2019 that Neuralink had self-reported.

In a separate incident also reported by Reuters, the Department of Transportation fined Neuralink for violating rules about transporting hazardous materials, including a flammable liquid.

…the possibility that the device could be increasingly shown to be helpful for people with disabilities, but become unavailable due to loss of research funding. For patients whose access to a device is tied to a research study, the prospect of losing access after the study ends can be devastating. [emphasis mine] This raises thorny questions about whether it is ever ethical to provide early access to breakthrough medical interventions prior to their receiving full FDA approval.

Not registering a clinical trial would seem to suggest there won’t be much oversight. As for Musk’s “science by press release” activities, I hope those will be treated with more skepticism by mainstream media although that seems unlikely given the current situation with journalism (more about that in a future post).

As for the issues associated with private equity models for science research and the problem of losing access to devices after a clinical trial is ended, my April 5, 2022 posting, “Going blind when your neural implant company flirts with bankruptcy (long read)” offers some cautionary tales, in addition to being the most comprehensive piece I’ve published on ethics and brain implants.

My July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” offers a brief overview of the international scene.

November 2023 science events with a UK flavour

This list of events, which is in date order (more or less), comes courtesy of the UK’s Sense about Science organization. Self-described as “… an independent charity that promotes the public interest in sound science and evidence,” the organization sent out a November 13, 2023 announcement (received via email) offering a good range of events focused on science, evidence, and understanding the science you’re getting.

Greenwich (England) and Glasgow (Scotland) Skeptics pub talks

Here’s more from the Sense about Science November 13, 2023 announcement,

Greenwich and Glasgow Skeptics pub talks

Want to engage with us about the importance of evidence? We have two public talks coming up, which can be a great opportunity to learn more about our work, meet some of our team and explore how everyone can use evidence as a tool to improve our lives.

We’ll be at Davy’s Wine Vaults in Greenwich at 7pm tomorrow [Tuesday, November 14, 2023] and Admiral Woods Bar in Glasgow on Tuesday 21 November 2023.

I found out more about Greenwich Skeptics in the Pub (from the Skeptics in the Pub (SitP) website),

Welcome to Greenwich Skeptics in the Pub!

Greenwich SitP is currently the only branch of SitP in South East London. The idea is simple: Once a month, we all meet up in a pub to hear a guest speaker and enjoy a drink or three

[Image caption: The Royal Park of Greenwich and the National Maritime Museum, from the Observatory. Backdrop: the Canary Wharf business district. Source: Wikipedia Commons]

Our chosen pub is the Davy’s Wine Vaults (161 Greenwich High Road, SE10 8JA) and usually we meet on the second Tuesday of every month. Talks will begin at 7:30pm. Although the talks are free and open to all, we would appreciate a small contribution towards covering speakers’ expenses (suggested donation: £3).

Our Next Talk

The Power of Asking for Evidence

Munkhbayar Elkins & Tushita Bagga
Sense about Science

14 November 2023 Tuesday 19:30

In a time of misinformation, purchasable blue ticks, and spurious claims to be ‘following the science’, how do we ask the right questions of information we find from social media, companies, and politicians? 66% of people think it’s important the government shows the public all the evidence used to make policy decisions. And yet, the sources of data used in policy making become more complex, modelling and big data being two key examples. But you don’t need to be an expert to ask the right questions. This talk will cover how to ask about the data behind the issues that matter to you, be that climate change or local healthcare policies. With examples of how people asking for evidence have made a real difference, we’ll show you how you can too, and why this is more important than ever in the lead up to a general election next year.

Munkhbayar is senior research and policy officer at Sense about Science, with a BA in International Relations and an MSc in Security Studies. He works closely with decision-makers, world-leading researchers and community groups to raise the standard of evidence in public life. He wants to promote transparency of evidence standard across government to ensure accountability and to equip society with the right skills to scrutinise 21st century decision-making.

Tushita serves as a Policy and Campaigns Officer at Sense about Science, where she works on the upcoming Transparency of Evidence Standard campaign and is responsible for co-ordinating the annual Evidence Week event at UK Parliament. She recently completed her master’s degree in social policy research at the London School of Economics. Her previous work has focused on the role of ethics in academics interacting with marginalised communities and in news media representations of public health approaches to addressing the opioid epidemic. Tush is passionate about the accessible dissemination of social science research to the public and is driven to enable the masses to critically analyse complex policy concepts.

NB: This talk replaces the one which was originally advertised.

A week later, on Tuesday, November 21, 2023, the same talk will be given by a different speaker in a Glasgow (Scotland) pub,

The power of asking for evidence – Annie Howitt (Sense About Science)

November 21 [2023] @ 8:15 pm – 10:00 pm

In a time of misinformation, purchasable blue ticks, and spurious claims to be ‘following the science’, how do we ask the right questions of information we find from social media, companies, and politicians? 61% of people think it’s important the government shows the public all the evidence used to make policy decisions. And yet, the sources of data used in policy making become more complex, modelling and big data being two key examples. But you don’t need to be an expert to ask the right questions. This talk will cover how to ask about the data behind the issues that matter to you, be that climate change or local healthcare policies. With examples of how people asking for evidence have made a real difference, we’ll show you how you can too.

About the speaker: Annie is the Communities officer at the charity Sense about Science. During her PhD researching pancreatic cancer, she realised that so much of our understanding of cancer biology and treatments is inaccessible to the people it affects the most. That’s how she found Sense about Science, which works with researchers to equip the public, policymakers and media with good questions and insights into evidence, particularly on difficult issues. Recently, Sense about Science has published What Counts? (a scoping inquiry into how well the government’s evidence for covid-19 decisions served society), guides to understanding data science and AI. It also runs Evidence Week in Parliament at Westminster and in Holyrood, bringing together policy makers, researchers and the public, and, in partnership with the journal Nature, the John Maddox Prize for courageously advancing public discourse with sound science.

This event is free to attend, although we will be asking for donations at the end of the talk. Participants are under no obligation whatsoever to donate; however, please rest assured that the money we collect doesn’t end up in anyone’s pocket – it is used to fund our overhead costs, and travel/accommodation for our speakers who come from further afield.

Accessibility: The Admiral Woods Bar now has a functioning lift which can take wheelchair users (or others who are unable to manage stairs) down to the function room. There is also a disabled toilet in the function room too. To help us accommodate you if you require to use these facilities we recommend you email us in advance: contact@glasgowskeptics.com

Venue

The Admiral Woods Bar
29 Waterloo Street
Glasgow, G2 6BZ
United Kingdom

UNESCO (Global) Media (and) Information Literacy Week 2023: a webinar on Thursday, November 16, 2023

According to their November 13, 2023 announcement, Sense about Science will be chairing a panel discussion,

UNESCO [Global] Media [and] Information Literacy week webinar

Join us online as we chair a live panel discussion on what infrastructure is needed for people to access sound evidence, find trustworthy sources, and engage in informed debate.

“What societal infrastructure is needed for information literate citizens to thrive?” is hosted by the International Federation of Library Associations and Institutions (IFLA) to mark UNESCO [Global] Media [and] Information Literacy Week at 2pm GMT Thursday 16 November 2023 – register for free to participate in discussions.

There are more details on the International Federation of Library Associations and Institutions (IFLA) event page,

Schedule (Time Zone: New York)

  • 9:00 – 9:10: Welcome
  • 9:10 – 9:15: Introduction
  • 9:15 – 10:00: Live Panel Discussion
  • 10:00 – 10:30: Live Q&A

Note: Presumably these are morning hours, i.e., 9 a.m. ET.

Speakers

  • Host: Ning Zou, Chair, Information Literacy Section, IFLA|Associate Director for Student Academic Services and Learning Design at Harvard University Graduate School of Education
  • Panel Chair: David Schley, Deputy Director, Sense about Science
  • Angeline Djampou, Head, Knowledge and Publications Management Unit, UN Environment Programme 
  • TBC Deborah Jacobs,  Stichting IFLA Global Libraries (SIGL) Board of Directors 
  • Stephen Wyber,  Director of Policy and Advocacy, IFLA 

Theme and Focus

When the introduction of disposable beverage containers increased litter in the US, the response of producers was to launch a keep America beautiful campaign that placed the blame on consumers – the end users. In many countries it has taken over half a century for regulators to step in and deal with the problem of waste by, for example, prohibiting the use of free plastic bags or by making retailers take back unwanted packaging. But we still largely blame consumers for waste, despite them having little choice in practice about how goods are packaged.

Are we at risk of doing the same for consumers of information, overwhelmed by the volume of material available but not in control over what content is presented to them – by blaming poor information literacy for the spread of false information and misunderstanding?

While empowering citizens with information literacy is unquestionably good, is it enough? Or are we setting people up to fail in an attention economy where information providers surface content that maximised engagement, with no interest in whether it is accurate or useful? Is it fair to blame someone for naïvely sharing bad information when they are only fed corroborating material, or should we challenge the absence of regulation and oversight of how information is curated by social media platforms and search engines?

What infrastructure is needed for people to access sound evidence, find trustworthy sources, and genuinely engage in informed societal debate?

Join the IFLA Information Literacy Section and the School Library Section co-sponsored Global MIL Webinar and have a rich conversation with the invited panelists.

Registration is required and free.

Royal Statistical Society (RSS) workshop on developing accessible health statistics on Monday, November 20, 2023

This is the last event noted in the November 13, 2023 Sense about Science announcement,

Royal Statistical Society workshop on developing accessible health statistics

On Monday 20 November 2023, our Deputy-director David Schley will be part of a panel discussing how organisations producing health statistics across the UK can ensure their data is accessible and meaningful to the public.

This is a hybrid event at the Royal Statistical Society, run by the Official Statistics section but open to the public for a fee.

I have more details from the RSS’s event page,

Official Statistics and Health: Developing coherent and accessible health statistics: a UK perspective (Online)

Date: Monday 20 November 2023, 1.00PM – 5.30PM [GMT?]
Location: Online

Event costs:

Concessionary RSS Fellow £10
RSS CStat/GradStat £12.50
RSS Fellow £15
Non-members £20

During this afternoon of discussion, we will be exploring with our panels the approaches and challenges faced by organisations producing health related statistics across the UK to ensure the numbers and messages produced are accessible and meaningful to the public and other users.

In the first session (1-3pm), the panel will cover work across government departments and organisations to create a coherent system to produce comparable statistics across the four nations of the UK. They will touch upon the importance of presenting a coherent picture across the UK, at national and subnational levels, the data challenges, including how the definitions used can change the meaning of the statistics produced and how the public understand them.  We will hear the experiences from people working in that area to improve the coherence of our statistical system to those that used these statistics to inform policy. 

Our panel will include the head of the Office for Statistics Regulation Ed Humpherson, Lucy Vickers, Deputy Director – Statistics & Data Science at the Department for Health and Social Care, Julie Stanborough, Deputy Director for Health and Social Care Analysis at the Office for National Statistics with colleagues Michelle Waters and Heidi Wilson who work together with colleagues across the four nations on improving the UK-wide coherence on health statistics. They will be joined by William Perks (Head of health, social services and population statistics, Welsh Government), and colleagues from Scotland and Northern Ireland. We are also looking to bring into the discussion the perspectives from local authorities around the challenges of low-granularity meaningful statistics.

After a break, the second session (3.30-5.30pm) will discuss how we communicate statistics to users in a sensitive and accessible manner. The language of statistics, especially in the health context, can be extremely technical and emotionally charged with words such as ‘risks’, ‘hazards’ and ‘uncertainty’. Those terms have a very specific meaning for a statistician which differs from the one the general public gives to these words. In this session, our panellists will share their experience in communicating sometimes complex concepts to a wide audience, balancing transparent and accurate reporting with accessibility. They will share what they have tried, what worked and what did not, and ideas to communicate clearly in that area, in a time where misleading information spreads fast and mistakes in communication have the potential to damage the trust users have in the organisations producing the statistics.

The second panel will include both statistics producers (the ONS engagement hub lead, Lucy Vickers from DHSC, and William Perks from Welsh Government) and the head of the Office for Statistics Regulation Ed Humpherson, individuals who champion public understanding of statistics (David Schley from Sense about Science, Rhian Davies, an RSS Statistics Ambassador), and charity and user groups.

Book now

Final note

Thank you to the librarians for this:

When the introduction of disposable beverage containers increased litter in the US, the response of producers was to launch a keep America beautiful campaign that placed the blame on consumers [emphasis mine] – the end users. In many countries it has taken over half a century for regulators to step in and deal with the problem of waste by, for example, prohibiting the use of free plastic bags or by making retailers take back unwanted packaging. But we still largely blame consumers for waste, despite them having little choice in practice about how goods are packaged. [emphases mine]

Are we at risk of doing the same for consumers of information, overwhelmed by the volume of material available but not in control over what content is presented to them – by blaming poor information literacy for the spread of false information and misunderstanding? …

Hopefully, there’s something to your taste in this range of upcoming events.

Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report

Launched on Thursday, July 13, 2023 during UNESCO’s (United Nations Educational, Scientific, and Cultural Organization) “Global dialogue on the ethics of neurotechnology,” this report ties together the usual measures of national scientific supremacy (number of papers published and number of patents filed) with information on corporate investment in the field. Consequently, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends” by Daniel S. Hain, Roman Jurowetzki, Mariagrazia Squicciarini, and Lihui Xu provides better insight into the international neurotechnology scene than is sometimes found in these kinds of reports. By the way, the report is open access.

Here’s what I mean, from the report‘s short summary,

Since 2013, government investments in this field have exceeded $6 billion. Private investment has also seen significant growth, with annual funding experiencing a 22-fold increase from 2010 to 2020, reaching $7.3 billion and totaling $33.2 billion.

This investment has translated into a 35-fold growth in neuroscience publications between 2000-2021 and 20-fold growth in innovations between 2000-2020, as proxied by patents. However, not all are poised to benefit from such developments, as big divides emerge.

Over 80% of high-impact neuroscience publications are produced by only ten countries, while 70% of countries contributed fewer than 10 such papers over the period considered. Similarly, only five countries hold 87% of IP5 neurotech patents.

This report sheds light on the neurotechnology ecosystem, that is, what is being developed, where and by whom, and informs about how neurotechnology interacts with other technological trajectories, especially Artificial Intelligence [emphasis mine]. [p. 2]
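To put those figures together: a 22-fold increase reaching $7.3 billion in 2020 implies annual private funding of roughly $7.3B ÷ 22, or about $330 million, back in 2010, and the $33.2 billion presumably refers to the cumulative private investment over the decade, although the summary doesn’t say so explicitly.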

The money aspect is eye-opening even when you already have your suspicions. Also, it’s not entirely unexpected to learn that only ten countries produce over 80% of the high impact neurotech papers and that only five countries hold 87% of the IP5 neurotech patents but it is stunning to see it in context. (If you’re not familiar with the term ‘IP5 patents’, scroll down in this post to the relevant subhead. Hint: It means the patent was filed in one of the top five jurisdictions; I’ll leave you to guess which ones those might be.)

“Since 2013 …” isn’t quite as informative as the authors may have hoped. I wish they had given a time frame for government investments similar to what they did for corporate investments (e.g., 2010 – 2020). Also, is the $6B (likely in USD) government investment cumulative or an estimated annual number? To sum up, I would have appreciated parallel structure and specificity.

Nitpicks aside, there’s some very good material intended for policy makers. On that note, some of the analysis is beyond me. I haven’t used anything even somewhat close to their analytical tools in years and years. This commentary reflects my interests and a very rapid reading. One last thing: this is being written from a Canadian perspective. With those caveats in mind, here’s some of what I found.

A definition, social issues, country statistics, and more

There’s a definition for neurotechnology and a second mention of artificial intelligence being used in concert with neurotechnology. From the report‘s executive summary,

Neurotechnology consists of devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of animals or human beings. It is poised to revolutionize our understanding of the brain and to unlock innovative solutions to treat a wide range of diseases and disorders.

Similarly to Artificial Intelligence (AI), and also due to its convergence with AI, neurotechnology may have profound societal and economic impact, beyond the medical realm. As neurotechnology directly relates to the brain, it triggers ethical considerations about fundamental aspects of human existence, including mental integrity, human dignity, personal identity, freedom of thought, autonomy, and privacy [emphases mine]. Its potential for enhancement purposes and its accessibility further amplify its prospective social and societal implications.

The recent discussions held at UNESCO’s Executive Board further show Member States’ desire to address the ethics and governance of neurotechnology through the elaboration of a new standard-setting instrument on the ethics of neurotechnology, to be adopted in 2025. To this end, it is important to explore the neurotechnology landscape, delineate its boundaries, key players, and trends, and shed light on neurotech’s scientific and technological developments. [p. 7]

Here’s how they sourced the data for the report,

The present report addresses such a need for evidence in support of policy making in relation to neurotechnology by devising and implementing a novel methodology on data from scientific articles and patents:

● We detect topics over time and extract relevant keywords using transformer-based language models fine-tuned for scientific text. Publication data for the period 2000-2021 are sourced from the Scopus database and encompass journal articles and conference proceedings in English. The 2,000 most cited publications per year are further used in in-depth content analysis.
● Keywords are identified through Named Entity Recognition and used to generate search queries for conducting a semantic search on patents’ titles and abstracts, using another language model developed for patent text. This allows us to identify patents associated with the identified neuroscience publications and their topics. The patent data used in the present analysis are sourced from the European Patent Office’s Worldwide Patent Statistical Database (PATSTAT). We consider IP5 patents filed between 2000-2020 having an English language abstract and exclude patents solely related to pharmaceuticals.

This approach allows mapping the advancements detailed in scientific literature to the technological applications contained in patent applications, allowing for an analysis of the linkages between science and technology. This almost fully automated novel approach allows repeating the analysis as neurotechnology evolves. [pp. 8-9]
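To make that pipeline a little more concrete, here is a rough, illustrative sketch in Python of the general shape of such an approach: pull candidate keywords out of publication abstracts with named entity recognition, then use those keywords as semantic-search queries against patent titles and abstracts. To be clear, this is my own toy reconstruction, not the authors’ code, and the spaCy and sentence-transformers models named below are generic stand-ins for the fine-tuned scientific and patent-text language models the report describes.

```python
# Toy sketch of a publication-to-patent matching pipeline, loosely modelled on the
# report's description. Requires: pip install spacy sentence-transformers
# and: python -m spacy download en_core_web_sm
import spacy
from sentence_transformers import SentenceTransformer, util

nlp = spacy.load("en_core_web_sm")                  # stand-in for a science-tuned NER model
encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in for a patent-text language model

def extract_keywords(abstracts):
    """Pull named entities out of publication abstracts to use as search queries."""
    keywords = set()
    for text in abstracts:
        for ent in nlp(text).ents:
            keywords.add(ent.text.lower())
    return sorted(keywords)

def match_patents(keywords, patent_texts, threshold=0.5):
    """Semantic search: link each keyword to patent titles/abstracts above a similarity cutoff."""
    kw_emb = encoder.encode(keywords, convert_to_tensor=True)
    pat_emb = encoder.encode(patent_texts, convert_to_tensor=True)
    sims = util.cos_sim(kw_emb, pat_emb)            # keyword x patent similarity matrix
    matches = {}
    for i, keyword in enumerate(keywords):
        hits = [patent_texts[j] for j in range(len(patent_texts)) if sims[i][j] >= threshold]
        if hits:
            matches[keyword] = hits
    return matches

# Example with made-up keywords and patent snippets
patent_snippets = ["Implantable electrode array for neural signal acquisition.",
                   "Method for brewing coffee with a pressurized capsule."]
print(match_patents(["brain-computer interface", "neural electrode"], patent_snippets, threshold=0.3))
```

The real methodology also tracks topics over time and restricts the in-depth analysis to the 2,000 most cited publications per year; the keyword-to-patent matching step sketched above is simply the part that links the two datasets.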

Findings in bullet points,

Key stylized facts are:
● The field of neuroscience has witnessed a remarkable surge in the overall number of publications since 2000, exhibiting a nearly 35-fold increase over the period considered, reaching 1.2 million in 2021. The annual number of publications in neuroscience has nearly tripled since 2000, exceeding 90,000 publications a year in 2021. This increase became even more pronounced since 2019.
● The United States leads in terms of neuroscience publication output (40%), followed by the United Kingdom (9%), Germany (7%), China (5%), Canada (4%), Japan (4%), Italy (4%), France (4%), the Netherlands (3%), and Australia (3%). These countries account for over 80% of neuroscience publications from 2000 to 2021.
● Big divides emerge, with 70% of countries in the world having less than 10 high-impact neuroscience publications between 2000 to 2021.
● Specific neurotechnology-related research trends between 2000 and 2021 include:
○ An increase in Brain-Computer Interface (BCI) research around 2010, maintaining a consistent presence ever since.
○ A significant surge in Epilepsy Detection research in 2017 and 2018, reflecting the increased use of AI and machine learning in healthcare.
○ Consistent interest in Neuroimaging Analysis, which peaks around 2004, likely because of its importance in brain activity and language comprehension studies.
○ While peaking in 2016 and 2017, Deep Brain Stimulation (DBS) remains a persistent area of research, underlining its potential in treating conditions like Parkinson’s disease and essential tremor.
● Between 2000 and 2020, the total number of patent applications in this field increased significantly, experiencing a 20-fold increase from less than 500 to over 12,000. In terms of annual figures, a consistent upward trend in neurotechnology-related patent applications emerges, with a notable doubling observed between 2015 and 2020.
● The United States account for nearly half of all worldwide patent applications (47%). Other major contributors include South Korea (11%), China (10%), Japan (7%), Germany (7%), and France (5%). These five countries together account for 87% of IP5 neurotech patents applied between 2000 and 2020.
○ The United States has historically led the field, with a peak around 2010, a decline towards 2015, and a recovery up to 2020.
○ South Korea emerged as a significant contributor after 1990, overtaking Germany in the late 2000s to become the second-largest developer of neurotechnology. By the late 2010s, South Korea’s annual neurotechnology patent applications approximated those of the United States.
○ China exhibits a sharp increase in neurotechnology patent applications in the mid-2010s, bringing it on par with the United States in terms of application numbers.
● The United States ranks highest in both scientific publications and patents, indicating their strong ability to transform knowledge into marketable inventions. China, France, and Korea excel in leveraging knowledge to develop patented innovations. Conversely, countries such as the United Kingdom, Germany, Italy, Canada, Brazil, and Australia lag behind in effectively translating neurotech knowledge into patentable innovations.
● In terms of patent quality measured by forward citations, the leading countries are Germany, US, China, Japan, and Korea.
● A breakdown of patents by technology field reveals that Computer Technology is the most important field in neurotechnology, exceeding Medical Technology, Biotechnology, and Pharmaceuticals. The growing importance of algorithmic applications, including neural computing techniques, also emerges by looking at the increase in patent applications in these fields between 2015-2020. Compared to the reference year, computer technologies-related patents in neurotech increased by 355% and by 92% in medical technology.
● An analysis of the specialization patterns of the top-5 countries developing neurotechnologies reveals that Germany has been specializing in chemistry-related technology fields, whereas Asian countries, particularly South Korea and China, focus on computer science and electrical engineering-related fields. The United States exhibits a balanced configuration with specializations in both chemistry and computer science-related fields.
● The entities – i.e. both companies and other institutions – leading worldwide innovation in the neurotech space are: IBM (126 IP5 patents, US), Ping An Technology (105 IP5 patents, CH), Fujitsu (78 IP5 patents, JP), Microsoft (76 IP5 patents, US)1, Samsung (72 IP5 patents, KR), Sony (69 IP5 patents, JP) and Intel (64 IP5 patents, US)

This report further proposes a pioneering taxonomy of neurotechnologies based on International Patent Classification (IPC) codes.

• 67 distinct patent clusters in neurotechnology are identified, which mirror the diverse research and development landscape of the field. The 20 most prominent neurotechnology groups, particularly in areas like multimodal neuromodulation, seizure prediction, neuromorphic computing [emphasis mine], and brain-computer interfaces, point to potential strategic areas for research and commercialization.
• The variety of patent clusters identified mirrors the breadth of neurotechnology’s potential applications, from medical imaging and limb rehabilitation to sleep optimization and assistive exoskeletons.
• The development of a baseline IPC-based taxonomy for neurotechnology offers a structured framework that enriches our understanding of this technological space, and can facilitate research, development and analysis. The identified key groups mirror the interdisciplinary nature of neurotechnology and underscores the potential impact of neurotechnology, not only in healthcare but also in areas like information technology and biomaterials, with non-negligible effects over societies and economies.

1 If we consider Microsoft Technology Licensing LLC and Microsoft Corporation as being under the same umbrella, Microsoft leads worldwide developments with 127 IP5 patents. Similarly, if we were to consider that Siemens AG and Siemens Healthcare GmbH belong to the same conglomerate, Siemens would appear much higher in the ranking, in third position, with 84 IP5 patents. The distribution of intellectual property assets across companies belonging to the same conglomerate is frequent and mirrors strategic as well as operational needs and features, among others. [pp. 9-11]

Surprises and comments

Interesting and helpful to learn that “neurotechnology interacts with other technological trajectories, especially Artificial Intelligence;” this has changed and improved my understanding of neurotechnology.

It was unexpected to find Canada in the top ten countries producing neuroscience papers. However, finding out that the country lags in translating its ‘neuro’ knowledge into patentable innovation is not entirely a surprise.

It can’t be an accident that countries with major ‘electronics and computing’ companies lead in patents. These companies do have researchers but they also buy startups to acquire patents. They (and ‘patent trolls’) will also file patents preemptively. For the patent trolls, it’s a moneymaking proposition and for the large companies, it’s a way of protecting their own interests and/or (I imagine) forcing a sale.

The mention of neuromorphic (brainlike) computing in the taxonomy section was surprising and puzzling. Up to this point, I’ve thought of neuromorphic computing as a kind of alternative or addition to standard computing, but the authors have blurred the lines as per UNESCO’s definition of neurotechnology (specifically, “… emulate the structure and function of the neural systems of animals or human beings”). Again, this report is broadening my understanding of neurotechnology. Of course, it took two instances, the definition and the taxonomy, before I quite grasped it.

What’s puzzling is that neuromorphic engineering, a broader term that includes neuromorphic computing, isn’t used or mentioned. (For an explanation of the terms neuromorphic computing and neuromorphic engineering, there’s my June 23, 2023 posting, “Neuromorphic engineering: an overview.”)

The report

I won’t have time for everything. Here are some of the highlights from my admittedly personal perspective.

It’s not only about curing disease

From the report,

Neurotechnology’s applications however extend well beyond medicine [emphasis mine], and span from research, to education, to the workplace, and even people’s everyday life. Neurotechnology-based solutions may enhance learning and skill acquisition and boost focus through brain stimulation techniques. For instance, early research finds that brain-zapping caps appear to boost memory for at least one month (Berkeley, 2022). This could one day be used at home to enhance memory functions [emphasis mine]. They can further enable new ways to interact with the many digital devices we use in everyday life, transforming the way we work, live and interact. One example is the Sound Awareness wristband developed by a Stanford team (Neosensory, 2022) which enables individuals to “hear” by converting sound into tactile feedback, so that sound impaired individuals can perceive spoken words through their skin. Takagi and Nishimoto (2023) analyzed the brain scans taken through Magnetic Resonance Imaging (MRI) as individuals were shown thousands of images. They then trained a generative AI tool called Stable Diffusion on the brain scan data of the study’s participants, thus creating images that roughly corresponded to the real images shown. While this does not correspond to reading the mind of people, at least not yet, and some limitations of the study have been highlighted (Parshall, 2023), it nevertheless represents an important step towards developing the capability to interface human thoughts with computers [emphasis mine], via brain data interpretation.

While the above examples may sound somewhat like science fiction, the recent uptake of generative Artificial Intelligence applications and of large language models such as ChatGPT or Bard, demonstrates that the seemingly impossible can quickly become an everyday reality. At present, anyone can purchase online electroencephalogram (EEG) devices for a few hundred dollars [emphasis mine], to measure the electrical activity of their brain for meditation, gaming, or other purposes. [pp. 14-15]

This is a very impressive achievement. Some of the research cited was published earlier this year (2023). The extraordinary speed is a testament to the efforts of the authors and their teams. It’s also a testament to how quickly the field is moving.

I’m glad to see the mention of and focus on consumer neurotechnology. (While the authors don’t speculate, I am free to do so.) Consumer neurotechnology could be viewed as one of the steps toward normalizing a cyborg future for all of us. Yes, we have books, television programmes, movies, and video games, which all normalize the idea, but the people depicted have been severely injured and require the augmentation. With consumer neurotechnology, you have easily accessible devices being used to enhance people who aren’t injured; they just want to be ‘better’.

This phrase seemed particularly striking “… an important step towards developing the capability to interface human thoughts with computers” in light of some claims made by the Australian military in my June 13, 2023 posting “Mind-controlled robots based on graphene: an Australian research story.” (My posting has an embedded video demonstrating the Brain Robotic Interface (BRI) in action. Also, see the paragraph below the video for my ‘measured’ response.)

There’s no mention of the military in the report, which seems like a deliberate rather than an inadvertent omission, given the importance of military innovation where technology is concerned.

This section gives a good overview of government initiatives (in the report it’s followed by a table of the programmes),

Thanks to the promises it holds, neurotechnology has garnered significant attention from both governments and the private sector and is considered by many as an investment priority. According to the International Brain Initiative (IBI), brain research funding has become increasingly important over the past ten years, leading to a rise in large-scale state-led programs aimed at advancing brain intervention technologies (International Brain Initiative, 2021). Since 2013, initiatives such as the United States’ Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the European Union’s Human Brain Project (HBP), as well as major national initiatives in China, Japan and South Korea have been launched with significant funding support from the respective governments. The Canadian Brain Research Strategy, initially operated as a multi-stakeholder coalition on brain research, is also actively seeking funding support from the government to transform itself into a national research initiative (Canadian Brain Research Strategy, 2022). A similar proposal is also seen in the case of the Australian Brain Alliance, calling for the establishment of an Australian Brain Initiative (Australian Academy of Science, n.d.). [pp. 15-16]

Privacy

There are some concerns such as these,

Beyond the medical realm, research suggests that emotional responses of consumers
related to preferences and risks can be concurrently tracked by neurotechnology, such
as neuroimaging and that neural data can better predict market-level outcomes than
traditional behavioral data (Karmarkar and Yoon, 2016). As such, neural data is
increasingly sought after in the consumer market for purposes such as digital
phenotyping, neurogaming, and neuromarketing (UNESCO, 2021). This surge in demand gives rise to risks like hacking, unauthorized data reuse, extraction of privacy-sensitive information, digital surveillance, criminal exploitation of data, and other forms of abuse. These risks prompt the question of whether neural data needs distinct definition and safeguarding measures.

These issues are particularly relevant today as a wide range of electroencephalogram (EEG) headsets that can be used at home are now available in consumer markets for purposes that range from meditation assistance to controlling electronic devices through the mind. Imagine an individual is using one of these devices to play a neurofeedback game, which records the person’s brain waves during the game. Without the person being aware, the system can also identify the patterns associated with an undiagnosed mental health condition, such as anxiety. If the game company sells this data to third parties, e.g. health insurance providers, this may lead to an increase in insurance fees based on undisclosed information. This hypothetical situation would represent a clear violation of mental privacy and an unethical use of neural data.

Another example is in the field of advertising, where companies are increasingly interested in using neuroimaging to better understand consumers’ responses to their products or advertisements, a practice known as neuromarketing. For instance, a company might use neural data to determine which advertisements elicit the most positive emotional responses in consumers. While this can help companies improve their marketing strategies, it raises significant concerns about mental privacy. Questions arise in relation to consumers being aware or not that their neural data is being used, and in the extent to which this can lead to manipulative advertising practices that unfairly exploit unconscious preferences. Such potential abuses underscore the need for explicit consent and rigorous data protection measures in the use of neurotechnology for neuromarketing purposes. [pp. 21-22]

Legalities

Some countries already have laws and regulations regarding neurotechnology data,

At the national level, only a few countries have enacted laws and regulations to protect mental integrity or have included neuro-data in personal data protection laws (UNESCO, University of Milan-Bicocca (Italy) and State University of New York – Downstate Health Sciences University, 2023). Examples are the constitutional reform undertaken by Chile (Republic of Chile, 2021), the Charter for the responsible development of neurotechnologies of the Government of France (Government of France, 2022), and the Digital Rights Charter of the Government of Spain (Government of Spain, 2021). They propose different approaches to the regulation and protection of human rights in relation to neurotechnology. Countries such as the UK are also examining under which circumstances neural data may be considered as a special category of data under the general data protection framework (i.e. UK’s GDPR) (UK’s Information Commissioner’s Office, 2023) [p. 24]

As you can see, these are recent laws. There doesn’t seem to be any attempt here in Canada even though there is an act being reviewed in Parliament that could conceivably include neural data. This is from my May 1, 2023 posting,

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). [emphasis added July 11, 2023] You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

My focus at the time was artificial intelligence and, now, after reading this UNESCO report and briefly looking at the Innovation, Science and Economic Development (ISED) Canada summary and a detailed series of descriptions of the act on ISED’s Canada’s Digital Charter webpage, I don’t see anything that specifies neural data but it’s not excluded either.

IP5 patents

Here’s the explanation (the footnote is included at the end of the excerpt),

IP5 patents represent a subset of overall patents filed worldwide, which have the
characteristic of having been filed in at least one of the top intellectual property offices (IPOs)
worldwide (the so-called IP5, namely the Chinese National Intellectual Property
Administration, CNIPA (formerly SIPO); the European Patent Office, EPO; the Japan
Patent Office, JPO; the Korean Intellectual Property Office, KIPO; and the United States
Patent and Trademark Office, USPTO) as well as another country, which may or may not be an IP5. This signals their potential applicability worldwide, as their inventiveness and industrial viability have been validated by at least two leading IPOs. This gives these patents a sort of “quality” check, also since patenting inventions is costly and if applicants try to protect the same invention in several parts of the world, this normally mirrors that the applicant has expectations about their importance and expected value. If we were to conduct the same analysis using information about individually considered patent applied worldwide, i.e. without filtering for quality nor considering patent families, we would risk conducting a biased analysis based on duplicated data. Also, as patentability standards vary across countries and IPOs, and what matters for patentability is the existence (or not) of prior art in the IPO considered, we would risk mixing real innovations with patents related to catching up phenomena in countries that are not at the forefront of the technology considered.

9 The five IP offices (IP5) is a forum of the five largest intellectual property offices in the world that was set up to improve the efficiency of the examination process for patents worldwide. The IP5 Offices together handle about 80% of the world’s patent applications, and 95% of all work carried out under the Patent Cooperation Treaty (PCT), see http://www.fiveipoffices.org. (Dernis et al., 2015) [p. 31]

AI assistance on this report

As noted earlier, I have next to no experience with the analytical tools, having not attempted this kind of work in several years. Here’s an example of what they were doing,

We utilize a combination of text embeddings based on Bidirectional Encoder
Representations from Transformer (BERT), dimensionality reduction, and hierarchical
clustering inspired by the BERTopic methodology to identify latent themes within
research literature. Latent themes or topics in the context of topic modeling represent
clusters of words that frequently appear together within a collection of documents (Blei, 2012). These groupings are not explicitly labeled but are inferred through computational analysis examining patterns in word usage. These themes are ‘hidden’ within the text, only to be revealed through this analysis. …

We further utilize OpenAI’s GPT-4 model to enrich our understanding of topics’ keywords and to generate topic labels (OpenAI, 2023), thus supplementing expert review of the broad interdisciplinary corpus. Recently, GPT-4 has shown impressive results in medical contexts across various evaluations (Nori et al., 2023), making it a useful tool to enhance the information obtained from prior analysis stages, and to complement them. The automated process enhances the evaluation workflow, effectively emphasizing neuroscience themes pertinent to potential neurotechnology patents. Notwithstanding existing concerns about hallucinations (Lee, Bubeck and Petro, 2023) and errors in generative AI models, this methodology employs the GPT-4 model for summarization and interpretation tasks, which significantly mitigates the likelihood of hallucinations. Since the model is constrained to the context provided by the keyword collections, it limits the potential for fabricating information outside of the specified boundaries, thereby enhancing the accuracy and reliability of the output. [pp. 33-34]

I couldn’t resist adding the ChatGPT paragraph given all of the recent hoopla about it.
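
Being a novice where these analytical tools are concerned, I found it easier to grasp the methodology by sketching it out for myself. What follows is my own toy illustration of a BERTopic-style pipeline, not the report’s actual code; the model name, the parameters, and the handful of made-up ‘abstracts’ are all placeholders. It simply strings together the same kinds of steps the authors describe: embed the documents, reduce the dimensionality, cluster, and pull out keywords for each cluster (which, in the report, are then handed to GPT-4 for labelling).

```python
# Toy BERTopic-style sketch (illustrative only; not the report's code).
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np

# A handful of made-up "abstracts" standing in for thousands of real ones.
abstracts = [
    "Deep brain stimulation electrode for treating Parkinson's disease tremor",
    "Transcranial magnetic stimulation coil and method for treating depression",
    "Resistive memory cell acting as an artificial synapse for neuromorphic computing",
    "Neuromorphic processor with a spike-timing dependent plasticity learning rule",
    "EEG headset and brain-computer interface for controlling a robotic arm",
    "Brain signal classification method for a brain-machine interface device",
]

# 1. Embed each document with a BERT-family sentence encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(abstracts)

# 2. Reduce dimensionality before clustering (the report's pipeline uses a
#    BERTopic-style reduction; plain PCA keeps this sketch simple).
reduced = PCA(n_components=3).fit_transform(embeddings)

# 3. Hierarchical clustering: each cluster is a latent theme (topic).
labels = AgglomerativeClustering(n_clusters=3).fit_predict(reduced)

# 4. Characterize each topic by its most frequent terms; in the report, these
#    keyword sets are what gets handed to GPT-4 to generate readable labels.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(abstracts)
terms = vectorizer.get_feature_names_out()
for topic in sorted(set(labels)):
    rows = np.where(labels == topic)[0]
    freq = np.asarray(counts[rows].sum(axis=0)).ravel()
    keywords = [terms[i] for i in freq.argsort()[::-1][:5]]
    print(f"Topic {topic}: {', '.join(keywords)}")
```

On a real corpus of tens of thousands of publications and patents the details differ, of course, but the shape of the workflow is the same.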

Multimodal neuromodulation and neuromorphic computing patents

I think this gives a pretty good indication of the activity on the patent front,

The largest, coherent topic, termed “multimodal neuromodulation,” comprises 535
patents detailing methodologies for deep or superficial brain stimulation designed to
address neurological and psychiatric ailments. These patented technologies interact with various points in neural circuits to induce either Long-Term Potentiation (LTP) or Long-Term Depression (LTD), offering treatment for conditions such as obsession, compulsion, anxiety, depression, Parkinson’s disease, and other movement disorders. The modalities encompass implanted deep-brain stimulators (DBS), Transcranial Magnetic Stimulation (TMS), and transcranial Direct Current Stimulation (tDCS). Among the most representative documents for this cluster are patents with titles: Electrical stimulation of structures within the brain or Systems and methods for enhancing or optimizing neural stimulation therapy for treating symptoms of Parkinson’s disease and/or other movement disorders. [p. 65]

Given my longstanding interest in memristors, which (I believe) have to a large extent helped to stimulate research into neuromorphic computing, this had to be included. Then, there was the brain-computer interfaces cluster,

A cluster identified as “Neuromorphic Computing” consists of 366 patents primarily
focused on devices designed to mimic human neural networks for efficient and adaptable computation. The principal elements of these inventions are resistive memory cells and artificial synapses. They exhibit properties similar to the neurons and synapses in biological brains, thus granting these devices the ability to learn and modulate responses based on rewards, akin to the adaptive cognitive capabilities of the human brain.

The primary technology classes associated with these patents fall under specific IPC
codes, representing the fields of neural network models, analog computers, and static
storage structures. Essentially, these classifications correspond to technologies that are key to the construction of computers and exhibit cognitive functions similar to human brain processes.

Examples for this cluster include neuromorphic processing devices that leverage
variations in resistance to store and process information, artificial synapses exhibiting
spike-timing dependent plasticity, and systems that allow event-driven learning and
reward modulation within neuromorphic computers.

In relation to neurotechnology as a whole, the “neuromorphic computing” cluster holds significant importance. It embodies the fusion of neuroscience and technology, thereby laying the basis for the development of adaptive and cognitive computational systems. Understanding this specific cluster provides a valuable insight into the progressing domain of neurotechnology, promising potential advancements across diverse fields, including artificial intelligence and healthcare.

The “Brain-Computer Interfaces” cluster, consisting of 146 patents, embodies a key aspect of neurotechnology that focuses on improving the interface between the brain and external devices. The technology classification codes associated with these patents primarily refer to methods or devices for treatment or protection of eyes and ears, devices for introducing media into, or onto, the body, and electric communication techniques, which are foundational elements of brain-computer interface (BCI) technologies.

Key patents within this cluster include a brain-computer interface apparatus adaptable to use environment and method of operating thereof, a double closed circuit brain-machine interface system, and an apparatus and method of brain-computer interface for device controlling based on brain signal. These inventions mainly revolve around the concept of using brain signals to control external devices, such as robotic arms, and improving the classification performance of these interfaces, even after long periods of non-use.

The inventions described in these patents improve the accuracy of device control, maintain performance over time, and accommodate multiple commands, thus significantly enhancing the functionality of BCIs.

Other identified technologies include systems for medical image analysis, limb rehabilitation, tinnitus treatment, sleep optimization, assistive exoskeletons, and advanced imaging techniques, among others. [pp. 66-67]
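
As an aside, for anyone wondering what ‘spike-timing dependent plasticity’ actually does, here’s a toy sketch of the standard pair-based STDP rule (my own illustration, not drawn from any of the patents; the constants are arbitrary placeholders). The gist is that a synapse is strengthened when the presynaptic neuron fires just before the postsynaptic one, and weakened when the order is reversed, which is roughly how these neuromorphic devices ‘learn’.

```python
# Toy pair-based STDP rule (illustrative only; constants are placeholders).
import math

def stdp_weight_change(dt_ms: float,
                       a_plus: float = 0.01, a_minus: float = 0.012,
                       tau_plus: float = 20.0, tau_minus: float = 20.0) -> float:
    """dt_ms = t_post - t_pre, in milliseconds."""
    if dt_ms > 0:       # pre fired before post -> potentiation (LTP)
        return a_plus * math.exp(-dt_ms / tau_plus)
    elif dt_ms < 0:     # post fired before pre -> depression (LTD)
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

# A presynaptic spike 5 ms before a postsynaptic spike strengthens the
# synapse; 5 ms after, it weakens it.
print(stdp_weight_change(5.0))   # positive change (LTP)
print(stdp_weight_change(-5.0))  # negative change (LTD)
```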

Having sections on neuromorphic computing and brain-computer interface patents in immediate proximity led to more speculation on my part. Imagine how much easier it would be to initiate a BCI connection if it’s powered with a neuromorphic (brainlike) computer/device. [ETA July 21, 2023: Following on from that thought, it might be more than just easier to initiate a BCI connection. Could a brainlike computer become part of your brain? Why not? It’s been successfully argued that a robotic wheelchair was part of someone’s body; see my January 30, 2013 posting and scroll down about 40% of the way.]

Neurotech policy debates

The report concludes with this,

Neurotechnology is a complex and rapidly evolving technological paradigm whose
trajectories have the power to shape people’s identity, autonomy, privacy, sentiments,
behaviors and overall well-being, i.e. the very essence of what it means to be human.

Designing and implementing careful and effective norms and regulations ensuring that neurotechnology is developed and deployed in an ethical manner, for the good of
individuals and for society as a whole, call for a careful identification and characterization of the issues at stake. This entails shedding light on the whole neurotechnology ecosystem, that is what is being developed, where and by whom, and also understanding how neurotechnology interacts with other developments and technological trajectories, especially AI. Failing to do so may result in ineffective (at best) or distorted policies and policy decisions, which may harm human rights and human dignity.

Addressing the need for evidence in support of policy making, the present report offers first-time robust data and analysis shedding light on the neurotechnology landscape worldwide. To this end, it proposes and implements an innovative approach that leverages artificial intelligence and deep learning on data from scientific publications and paten[t]s to identify scientific and technological developments in the neurotech space. The methodology proposed represents a scientific advance in itself, as it constitutes a quasi-automated replicable strategy for the detection and documentation of neurotechnology-related breakthroughs in science and innovation, to be repeated over time to account for the evolution of the sector. Leveraging this approach, the report further proposes an IPC-based taxonomy for neurotechnology which allows for a structured framework to the exploration of neurotechnology, to enable future research, development and analysis. The innovative methodology proposed is very flexible and can in fact be leveraged to investigate different emerging technologies, as they arise.

In terms of technological trajectories, we uncover a shift in the neurotechnology industry, with greater emphasis being put on computer and medical technologies in recent years, compared to traditionally dominant trajectories related to biotechnology and pharmaceuticals. This shift warrants close attention from policymakers, and calls for attention in relation to the latest (converging) developments in the field, especially AI and related methods and applications and neurotechnology.

This is all the more important as the observed growth and specialization patterns are unfolding in the context of regulatory environments that, generally, are either nonexistent or not fit for purpose. Given the sheer implications and impact of neurotechnology on the very essence of human beings, this lack of regulation poses key challenges related to the possible infringement of mental integrity, human dignity, personal identity, privacy, freedom of thought, and autonomy, among others. Furthermore, issues surrounding accessibility and the potential for neurotech enhancement applications trigger significant concerns, with far-reaching implications for individuals and societies. [pp. 72-73]

Last words about the report

Informative, readable, and thought-provoking. And, it helped broaden my understanding of neurotechnology.

Future endeavours?

I’m hopeful that one of these days one of these groups (UNESCO, Canadian Science Policy Centre, or ???) will tackle the issue of business bankruptcy in the neurotechnology sector. It has already occurred, as noted in my “Going blind when your neural implant company flirts with bankruptcy [long read]” April 5, 2022 posting. That story opens with a woman going blind in a New York subway when her neural implant fails. That’s how she found out that the company which supplied her implant was going out of business.

In my July 7, 2023 posting about the UNESCO July 2023 dialogue on neurotechnology, I’ve included information on Neuralink (one of Elon Musk’s companies) and its approval (despite some investigations) by the US Food and Drug Administration to start human clinical trials. Scroll down about 75% of the way to the “Food for thought” subhead where you will find stories about allegations made against Neuralink.

The end

If you want to know more about the field, the report offers a seven-page bibliography, and there’s a lot of material here; you can start with this December 3, 2019 posting, “Neural and technological inequalities,” which features an article mentioning a discussion between two scientists. Surprisingly (to me), the source article is in Fast Company (“a leading progressive business media brand,” according to their tagline).

I have two categories you may want to check: Human Enhancement and Neuromorphic Engineering. There are also a number of tags: neuromorphic computing, machine/flesh, brainlike computing, cyborgs, neural implants, neuroprosthetics, memristors, and more.

Should you have any observations or corrections, please feel free to leave them in the Comments section of this posting.

Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO

While there’s a great deal of attention and hyperbole attached to artificial intelligence (AI) these days, it seems that neurotechnology may be quietly gaining much needed attention. (For those who are interested, at the end of this posting, there’ll be a bit more information to round out what you’re seeing in the UNESCO material.)

Now, here’s news of an upcoming UNESCO (United Nations Educational, Scientific, and Cultural Organization) meeting on neurotechnology, from a June 6, 2023 UNESCO press release (also received via email), Note: Links have been removed,

The Member States of the Executive Board of UNESCO
have approved the proposal of the Director General to hold a global
dialogue to develop an ethical framework for the growing and largely
unregulated Neurotechnology sector, which may threaten human rights and
fundamental freedoms. A first international conference will be held at
UNESCO Headquarters on 13 July 2023.

“Neurotechnology could help solve many health issues, but it could
also access and manipulate people’s brains, and produce information
about our identities, and our emotions. It could threaten our rights to
human dignity, freedom of thought and privacy. There is an urgent need
to establish a common ethical framework at the international level, as
UNESCO has done for artificial intelligence,” said UNESCO
Director-General Audrey Azoulay.

UNESCO’s international conference, taking place on 13 July [2023], will start
exploring the immense potential of neurotechnology to solve neurological
problems and mental disorders, while identifying the actions needed to
address the threats it poses to human rights and fundamental freedoms.
The dialogue will involve senior officials, policymakers, civil society
organizations, academics and representatives of the private sector from
all regions of the world.

Lay the foundations for a global ethical framework

The dialogue will also be informed by a report by UNESCO’s
International Bioethics Committee (IBC) on the “Ethical Issues of
Neurotechnology”, and a UNESCO study proposing first time evidence on
the neurotechnology landscape, innovations, key actors worldwide and
major trends.

The ultimate goal of the dialogue is to advance a better understanding
of the ethical issues related to the governance of neurotechnology,
informing the development of the ethical framework to be approved by 193
member states of UNESCO – similar to the way in which UNESCO
established the global ethical frameworks on the human genome (1997),
human genetic data (2003) and artificial intelligence (2021).

UNESCO’s global standard on the Ethics of Artificial Intelligence has
been particularly effective and timely, given the latest developments
related to Generative AI, the pervasiveness of AI technologies and the
risks they pose to people, democracies, and jobs. The convergence of
neural data and artificial intelligence poses particular challenges, as
already recognized in UNESCO’s AI standard.

Neurotech could reduce the burden of disease…

Neurotechnology covers any kind of device or procedure which is designed
to “access, monitor, investigate, assess, manipulate, and/or emulate
the structure and function of neural systems”. [1] Neurotechnological
devices range from “wearables”, to non-invasive brain computer
interfaces such as robotic limbs, to brain implants currently being
developed [2] with the goal of treating disabilities such as paralysis.

One in eight people worldwide live with a mental or neurological
disorder, triggering care-related costs that account for up to a third
of total health expenses in developed countries. These burdens are
growing in low- and middle-income countries too. Globally these expenses
are expected to grow – the number of people aged over 60 is projected
to double by 2050 to 2.1 billion (WHO 2022). Neurotechnology has the
vast potential to reduce the number of deaths and disabilities caused by
neurological disorders, such as Epilepsy, Alzheimer’s, Parkinson’s
and Stroke.

… but also threaten Human Rights

Without ethical guardrails, these technologies can pose serious risks, as
brain information can be accessed and manipulated, threatening
fundamental rights and fundamental freedoms, which are central to the
notion of human identity, freedom of thought, privacy, and memory. In
its report published in 2021 [3], UNESCO’s IBC documents these risks
and proposes concrete actions to address them.

Neural data – which capture the individual’s reactions and basic
emotions – is in high demand in consumer markets. Unlike the data
gathered on us by social media platforms, most neural data is generated
unconsciously, therefore we cannot give our consent for its use. If
sensitive data is extracted, and then falls into the wrong hands, the
individual may suffer harmful consequences.

Brain-Computer-Interfaces (BCIs) implanted at a time during which a
child or teenager is still undergoing neurodevelopment may disrupt the
‘normal’ maturation of the brain. It may be able to transform young
minds, shaping their future identity with long-lasting, perhaps
permanent, effects.

Memory modification techniques (MMT) may enable scientists to alter the
content of a memory, reconstructing past events. For now, MMT relies on
the use of drugs, but in the future it may be possible to insert chips
into the brain. While this could be beneficial in the case of
traumatised people, such practices can also distort an individual’s
sense of personal identity.

Risk of exacerbating global inequalities and generating new ones

Currently 50% of Neurotech Companies are in the US, and 35% in Europe
and the UK. Because neurotechnology could usher in a new generation of
‘super-humans’, this would further widen the education, skills, wealth
and opportunities’ gap within and between countries, giving those with
the most advanced technology an unfair advantage.

UNESCO’s Ethics of neurotechnology webpage can be found here. As for the July 13, 2023 dialogue/conference, here are some of the details from UNESCO’s International Conference on the Ethics of Neurotechnology webpage,

UNESCO will organize an International Conference on the Ethics of Neurotechnology on the theme “Building a framework to protect and promote human rights and fundamental freedoms” at UNESCO Headquarters in Paris, on 13 July 2023, from 9:00 [CET; Central European Time] in Room I.

The Conference will explore the immense potential of neurotechnology and address the ethical challenges it poses to human rights and fundamental freedoms. It will bring together policymakers and experts, representatives of civil society and UN organizations, academia, media, and private sector companies, to prepare a solid foundation for an ethical framework on the governance of neurotechnology.

UNESCO International Conference on Ethics of Neurotechnology: Building a framework to protect and promote human rights and fundamental freedoms
13 July 2023 – 9:30 am – 13 July 2023 – 6:30 pm [CET; Central European Time]
Location: UNESCO Headquarters, Paris, France
Room: Room I
Type: Cat II – Intergovernmental meeting, other than international conference of States
Arrangement type: Hybrid
Language(s): French, Spanish, English, Arabic
Contact: Rajarajeswari Pajany

Registration

Click here to register

A high-level session with ministers and policy makers focusing on policy actions and international cooperation will be featured in the Conference. Renowned experts will also be invited to discuss technological advancements in Neurotechnology and ethical challenges and human rights Implications. Two fireside chats will be organized to enrich the discussions focusing on the private sector, public awareness raising and public engagement. The Conference will also feature a new study of UNESCO’s Social and Human Sciences Sector shedding light on innovations in neurotechnology, key actors worldwide and key areas of development.

As one of the most promising technologies of our time, neurotechnology is providing new treatments and improving preventative and therapeutic options for millions of individuals suffering from neurological and mental illness. Neurotechnology is also transforming other aspects of our lives, from student learning and cognition to virtual and augmented reality systems and entertainment. While we celebrate these unprecedented opportunities, we must be vigilant against new challenges arising from the rapid and unregulated development and deployment of this innovative technology, including among others the risks to mental integrity, human dignity, personal identity, autonomy, fairness and equity, and mental privacy. 

UNESCO has been at the forefront of promoting an ethical approach to neurotechnology. UNESCO’s International Bioethics Committee (IBC) has examined the benefits and drawbacks from an ethical perspective in a report published in December 2021. The Organization has also led UN-wide efforts on this topic, collaborating with other agencies and academic institutions to organize expert roundtables, raise public awareness and produce publications. With a global mandate on bioethics and ethics of science and technology, UNESCO has been asked by the IBC, its expert advisory body, to consider developing a global standard on this topic.

A July 13, 2023 agenda and a little Canadian content

I have a link to the ‘provisional programme‘ for “Towards an Ethical Framework in the Protection and Promotion of Human Rights and Fundamental Freedoms,” the July 13, 2023 UNESCO International Conference on Ethics of Neurotechnology. Keeping in mind that this could (and likely will) change,

13 July 2023, Room I,
UNESCO HQ Paris, France,

9:00 – 9:15 Welcoming Remarks (TBC)
• António Guterres, Secretary-General of the United Nations
• Audrey Azoulay, Director-General of UNESCO

9:15 – 10:00 Keynote Addresses (TBC)
• Gabriel Boric, President of Chile
• Narendra Modi, Prime Minister of India
• Pedro Sánchez Pérez-Castejón, Prime Minister of Spain
• Volker Türk, UN High Commissioner for Human Rights
• Amandeep Singh Gill, UN Secretary-General’s Envoy on Technology

10:15 – 11:00 Scene-Setting Address

1:00 – 13:00 High-Level Session: Regulations and policy actions

14:30 – 15:30 Expert Session: Technological advancement and opportunities

15:45 – 16:30 Fireside Chat: Launch of the UNESCO publication “Unveiling the neurotechnology landscape: scientific advancements, innovations and major trends”

16:30 – 17:30 Expert Session: Ethical challenges and human rights implications

17:30 – 18:15 Fireside Chat: “Why neurotechnology matters for all”

18:15 – 18:30 Closing Remarks

While I haven’t included the speakers’ names (for the most part), I do want to note some Canadian participation in the person of Dr. Judy Illes from the University of British Columbia. She’s a Professor of Neurology, Distinguished University Scholar in Neuroethics, and Director, Neuroethics Canada, and President of the International Brain Initiative (IBI).

Illes is in the “Expert Session: Ethical challenges and human rights implications.”

If you have time, do look at the provisional programme just to get a sense of the range of speakers and their involvement in an astonishing array of organizations. E.g., there’s the IBI (in Judy Illes’s bio), which at this point is largely (and surprisingly) supported by (from About Us) “Fonds de recherche du Québec, and the Institute of Neuroscience, Mental Health and Addiction of the Canadian Institutes of Health Research. Operational support for the IBI is also provided by the Japan Brain/MINDS Beyond and WorldView Studios”.

More food for thought

Neither the UNESCO July 2023 meeting, which tilts, understandably, to social justice issues vis-à-vis neurotechnology, nor the Canadian Science Policy Centre (CSPC) May 2023 meeting (see my May 12, 2023 posting: Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023), based on the publicly available agendas, seems to mention practical matters such as an implant company going out of business. Still, it’s possible the issue will be mentioned at the UNESCO conference. Unfortunately, the May 2023 CSPC panel has not been posted online.

(See my April 5, 2022 posting “Going blind when your neural implant company flirts with bankruptcy [long read].” Even skimming it will give you some pause.) The 2019 OECD Recommendation on Responsible Innovation in Neurotechnology doesn’t cover/mention the issue of business bankruptcy either.

Taking a look at business practices seems particularly urgent given this news from a May 25, 2023 article by Rachael Levy, Marisa Taylor, and Akriti Sharma for Reuters, Note: A link has been removed,

Elon Musk’s Neuralink received U.S. Food and Drug Administration (FDA) clearance for its first-in-human clinical trial, a critical milestone for the brain-implant startup as it faces U.S. probes over its handling of animal experiments.

The FDA approval “represents an important first step that will one day allow our technology to help many people,” Neuralink said in a tweet on Thursday, without disclosing details of the planned study. It added it is not recruiting for the trial yet and said more details would be available soon.

The FDA acknowledged in a statement that the agency cleared Neuralink to use its brain implant and surgical robot for trials on patients but declined to provide more details.

Neuralink and Musk did not respond to Reuters requests for comment.

The critical milestone comes as Neuralink faces federal scrutiny [emphasis mine] following Reuters reports about the company’s animal experiments.

Neuralink employees told Reuters last year that the company was rushing and botching surgeries on monkeys, pigs and sheep, resulting in more animal deaths [emphasis mine] than necessary, as Musk pressured staff to receive FDA approval. The animal experiments produced data intended to support the company’s application for human trials, the sources said.

If you have time, it’s well worth reading the article in its entirety. Neuralink is being investigated for a number of alleged violations.

Slightly more detail has been added by a May 26, 2023 Associated Press (AP) article on the Canadian Broadcasting Corporation’s news online website,

Elon Musk’s brain implant company, Neuralink, says it’s gotten permission from U.S. regulators to begin testing its device in people.

The company made the announcement on Twitter Thursday evening but has provided no details about a potential study, which was not listed on the U.S. government database of clinical trials.

Officials with the Food and Drug Administration (FDA) wouldn’t confirm or deny whether it had granted the approval, but press officer Carly Kempler said in an email that the agency “acknowledges and understands” that Musk’s company made the announcement. [emphases mine]

The AP article offers additional context on the international race to develop brain-computer interfaces.

Update: It seems the FDA gave its approval later on May 26, 2023. (See the May 26, 2023 updated Reuters article by Rachael Levy, Marisa Taylor and Akriti Sharma and/or David Tuffley’s (senior lecturer at Griffith University) May 29, 2023 essay on The Conversation.)

For anyone who’s curious about previous efforts to examine ethics and social implications with regard to implants, prosthetics (Note: Increasingly, prosthetics include a neural component), and the brain, I have a couple of older posts: “Prosthetics and the human brain,” a March 8, 2013 posting, and “The ultimate DIY: ‘How to build a robotic man’ on BBC 4,” a January 30, 2013 posting.

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations

Dear friend,

I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)

Ethics, the natural world, social justice, eeek, and AI

Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration,’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.

Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.

My friend, this reference to the de Young exhibit may seem off topic, but I promise it isn’t, in more ways than one. The de Young Museum in San Francisco also held an AI and art show, “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021). From the exhibitions page,

In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]

Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]

As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)

Social justice

While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.

In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.

Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]

From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,

Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …

The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.

Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”

Eeek

You will find as you go through the ‘imitation game’ a pod with a screen showing your movements through the rooms in realtime on a screen. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,

Project Description

Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.

There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.

‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.

For the curious, there’s a description of the other VAG ‘imitation game’ installations provided by CDM students on the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage.

In recovery from an existential crisis (meditations)

There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence,” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence and its use in and impact on creative visual culture.

I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.

It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.

It’s worth going more than once to the show as there is so much to experience.

Why did they do that?

Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell if the curators intended the experience to be disorienting, but it verges on chaos, especially when the exhibition is crowded.

I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.

One of the differences between those shows and the “Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.

By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories, all of them associated with science/technology. This makes for a different kind of show, so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.

AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.

Where were Ai-Da and Dall-E-2 and the others?

Oh friend, I was hoping for a robot. Those Roomba paintbots didn’t do much for me. All they did was lie there on the floor.

To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.

Ai-Da was at the Glastonbury Festival in the UK from June 23 to 26, 2022. Here’s Ai-Da and her Billie Eilish (one of the Glastonbury 2022 headliners) portrait. [downloaded from https://www.ai-darobot.com/exhibition]

Ai-Da was first featured here in a December 17, 2021 posting about performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.

Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.

She has her own website.

If not Ai-Da, what about Dall-E-2? Aaron Hertzmann’s June 20, 2022 commentary, “Give this AI a few words of description and it produces a stunning image – but is it art?” investigates for Salon (Note: Links have been removed),

DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),

“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”

There are other AI artists. In my August 16, 2019 posting, I had this,

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.

As might be expected not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),

Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.

As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.

They have not, in actuality, revealed one secret or solved a single mystery.

What they have done is generate feel-good stories about AI.

Take the reports about the Modigliani and Picasso paintings.

These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.

In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.

The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
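If you’re curious what ‘neural style transfer’ looks like in practice, here is a rough sketch in the spirit of the original Gatys et al. approach, written in Python with PyTorch and torchvision (it assumes those are installed and will download pretrained VGG-19 weights). The layer choices and weights are my own illustrative guesses, not Oxia Palus’s actual pipeline: a network’s intermediate features stand in for ‘content’, Gram matrices of those features stand in for ‘style’, and an output image is optimized to match both.

```python
# Illustrative neural style transfer sketch (not Oxia Palus's pipeline).
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # early-to-deep conv layers capture "style"
CONTENT_LAYER = 21                  # a mid-depth layer captures "content"

def features(img: torch.Tensor):
    """Collect style and content feature maps for a 1x3xHxW image tensor."""
    style, content, x = [], None, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(feat: torch.Tensor) -> torch.Tensor:
    """Correlations between feature channels: a crude summary of 'style'."""
    _, c, h, w = feat.shape
    flat = feat.view(c, h * w)
    return flat @ flat.t() / (c * h * w)

def style_transfer(content_img, style_img, steps=200, style_weight=1e6):
    """Optimize an image to keep the content image's layout in the style image's manner."""
    target = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    style_grams = [gram(f) for f in features(style_img)[0]]
    content_feat = features(content_img)[1].detach()
    for _ in range(steps):
        opt.zero_grad()
        s_feats, c_feat = features(target)
        loss = F.mse_loss(c_feat, content_feat)
        loss = loss + style_weight * sum(
            F.mse_loss(gram(f), g) for f, g in zip(s_feats, style_grams))
        loss.backward()
        opt.step()
    return target.detach()

# Usage idea: pass an X-ray of the hidden sketch as the "content" image and one
# of the artist's finished paintings as the "style" image, both as 1x3xHxW tensors.
```

As Drimmer argues, nothing in this process recovers lost information; it only repaints existing imagery in a borrowed manner.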

As you can ‘see’, my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.

Visual culture: seeing into the future

The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, not mentioned in the write-up, are represented, while theatre and other performing arts are not mentioned or represented. That’s not a surprise.

In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.

Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.

Chung’s collaboration is one of the few ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.

Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.

Learning about robots, automatons, artificial intelligence, and more

I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.

It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly; beefing up its website with background information about its current shows would be a good place to start.

Robots, automata, and artificial intelligence

Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. I have a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,

The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:

The Al-Jazari automatons

The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.

As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.

If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC News Radio news item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘, for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.

AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.

*OpenMind BBVA is a Spanish multinational financial services company, Banco Bilbao Vizcaya Argentaria (BBVA), which runs the non-profit project, OpenMind (About us page) to disseminate information on robotics and so much more.*

You can’t always get what you want

My friend,

I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.

Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,

I have more about ticket prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”

And, from later in my posting,

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

That last quote brings me back to my comment about theatre and performing arts not being part of the show. Of course, the curators couldn’t do it all but a website with my hoped-for background and additional information could have helped to solve the problem.

The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

US-centric

My friend,

I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show with significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)

The Americans, of course, are very important developers in the field of AI but they are not alone and it would have been nice to have seen something from Asia and/or Africa and/or something from one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black Communities; for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)

As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.

I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that so much pioneering work on machine learning was done at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),

Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.

Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning”,[25][26] and have continued to give public talks together.[27][28]

Some of Hinton’s work was started in the US but, since 1987, he has pursued his interests at the University of Toronto. He wasn’t widely seen as having been proven right until 2012, when his group’s deep neural network swept a major image recognition competition. Katrina Onstad’s February 29, 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.

Then there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about visual arts and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?

You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and an instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US sci-fi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential Canadian success story, i.e., leaving Canada and only getting noticed here after success elsewhere.)

Of course, there are the CDM student projects but the projects seem less like an exploration of visual culture than an exploration of technology and industry requirements, from the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage, Note: A link has been removed,

In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].

Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?

Playing well with others

It’s always a mystery to me why the Vancouver cultural scene seems to be composed of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.

For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.

There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme: in 2017, the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramon y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.

In fact, where were the science and technology communities for this show?

On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.

At this year’s conference, they have at least two sessions that indicate interests similar to the VAG’s. First, there’s Immersive Visualization for Research, Science and Art which includes AI and machine learning along with other related topics. There’s also, Frontiers Talk: Art in the Age of AI: Can Computers Create Art?

This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.

Last time SIGGRAPH was here the organizers seemed interested in outreach and they offered some free events.

In the end

It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.

On July 27, 2022, the VAG held a virtual event with an artist,

Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.

Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,

… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.

Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Illiac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.

It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.

Do go. Do enjoy, my friend.

Let’s celebrate the International Day of Mathematics March 14, 2022 even if it is a little late

A March 14, 2022 UNESCO (United Nations Educational, Scientific and Cultural Organization) announcement (received via email) focuses on mathematics,

Despite the omnipresence of mathematics in our daily lives, in our phones, credit cards, cars etc., there may not be enough mathematicians to solve the complex challenges we face, from climate change to pandemics, a new UNESCO study finds.

Some 41% of the global population is at risk from flooding caused by tropical cyclones. Thanks to new mathematical models and better algorithms, the path of a tropical cyclone can now be predicted up to a week in advance.  In 2019, it could only be predicted five days in advance and, in the 1970s, just 36 hours ahead. Longer visibility gives municipal authorities precious additional time to plan the evacuation of populations in highly exposed areas.

This is just one of many case studies in Mathematics for Action, a new UNESCO publication released on 14 March to mark International Mathematics Day. “The study demonstrates why it makes sense for governments to include a mathematician on their team of scientific advisors”, says Christiane Rousseau of the Department of Mathematics and Statistics at the University of Montréal in Canada, who led the development of the toolkit.  

Mathematical methods to design vaccines

“The COVID-19 pandemic has really brought mathematical modelling into the public eye”, she adds. “Two years ago, who would have thought that a term such as ‘flattening the curve’ would become part of the public lexicon?” Similarly, news stories referring to mathematical terms such as the basic reproduction rate (R0) of the virus or ‘herd immunity’ through mass vaccination have become regular features. Mathematical methods themselves have been used to design vaccines more efficiently and to model vaccine hesitancy as a social phenomenon.

But the utility of mathematics does not stop there. For Norbert Hounkonnou, President of the Network of African Science Academies, “the Mathematics for Action toolkit is a revolutionary policy-oriented tool. It showcases the decisive role of mathematics in contributing to solving the world’s most pressing challenges and in achieving the 2030 Sustainable Development Goals”.

One of these goals is to end poverty. The toolkit describes, for example, how researchers were able to compile poverty maps of 552 villages and communities in Senegal and identify areas in need of greater public investment, despite missing census data. By applying mathematical tools like machine learning algorithms (artificial intelligence), the researchers were able to establish the extent of poverty in specific areas.

Scenarios for the future

How are the many services nature provides, such as freshwater, medicinal plants or crops to be priced? Two research studies in Mathematics for Action do just that by quantifying the value of ecosystem services and biodiversity of large estuaries in North America and Asia.

The toolkit describes how mathematical models enable the exploration of multiple “what-if” scenarios to inform the decision-making process. Scientists use climate models in combination with storylines to produce plausible alternative scenarios for the future.

“The shortage of quality mathematics teachers around the world is a threat to training a sufficient number of mathematicians and scientists capable of meeting the challenges of the contemporary world”, warn Merrilyn Goos and Anjum Halai, the two Vice-Presidents of the International Commission on Mathematical Instruction, two authors of the toolkit.

Read the toolkit Mathematics for action: supporting science-based decision-making

The International Day of Mathematics was proclaimed by UNESCO in 2019 to draw attention to the extensive contribution that mathematics makes to social progress and the plethora of vocations that mathematics offers to boys and girls.

Mathematics for Action: Supporting Science-Based Decision Making is a series of policy briefs produced by UNESCO, the Centre de recherches mathématiques of Canada, the International Mathematical Union, the International Science Council and their partners.

The Centre de recherches mathématiques (CRM) was the manager of the toolkit project, which was produced by a consortium composed of the:

African Institute for Mathematical Sciences (AIMS)

African Mathematical Union (AMU)

Centre de recherches mathématiques (CRM)

UNESCO Cat II centre CIMPA (Centre international de mathématiques pures et appliquées)

European Mathematical Society (EMS)

Institut des Sciences mathématiques et de leurs interactions (INSMI) au CNRS [Centre national de la recherche scientifique]

Institut de valorisation des données (IVADO), Canada

International Commission on Mathematical Instruction (ICMI)

International Mathematical Union (IMU)

International Science Council (ISC)

I just noticed that March 14, 2022 is also Pi Day (from its Wikipedia entry; Note: Links have been removed),

Pi Day is an annual celebration of the mathematical constant π (pi). Pi Day is observed on March 14 (3/14 in the month/day format) since 3, 1, and 4 are the first three significant figures of π.[2][3] It was founded in 1988 by Larry Shaw, an employee of the Exploratorium. Celebrations often involve eating pie or holding pi recitation competitions. In 2009, the United States House of Representatives supported the designation of Pi Day.[4] UNESCO’s 40th General Conference designated Pi Day as the International Day of Mathematics in November 2019.[5][6] Alternative dates for the holiday include July 22[alpha 1] (22/7, an approximation of π) and June 28 (6.28, an approximation of 2π or tau).

As you can see from the entry, it’s no coincidence that Pi Day and the International Day of Mathematics are celebrated on the same day.
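Getting back to the modelling language in the UNESCO release (‘flattening the curve’, R0, herd immunity), here is a toy SIR (susceptible-infectious-recovered) model in Python. The parameter values are invented for illustration and are not fitted to COVID-19 data; the point is simply to show how R0 relates to the height of the epidemic curve and to the herd-immunity threshold.

```python
# Toy SIR epidemic model (illustrative parameters only, not COVID-19 estimates).
def sir(beta: float, gamma: float, days: int = 300, i0: float = 1e-4):
    """Track the infectious fraction of a population over time.
    beta: daily transmission rate, gamma: daily recovery rate."""
    s, i, r = 1.0 - i0, i0, 0.0
    infectious = []
    for _ in range(days):
        new_infections = beta * s * i     # contacts between susceptible and infectious
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infectious.append(i)
    return infectious

gamma = 1 / 10                            # roughly a 10-day infectious period
for beta in (0.35, 0.20):                 # with and without distancing measures
    curve = sir(beta, gamma)
    r0 = beta / gamma
    print(f"R0 = {r0:.1f}: peak infectious fraction = {max(curve):.1%}, "
          f"herd-immunity threshold = {1 - 1 / r0:.0%}")
```

Lowering the transmission rate lowers R0, which flattens and delays the peak of the curve; that, in essence, is what the modellers were showing policy makers.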

UNESCO’s first global recommendations on the ethics of artificial intelligence (AI) announced

This makes a nice accompaniment to my commentary (December 3, 2021 posting) on the Nature of Things programme (telecast by the Canadian Broadcasting Corporation), The Machine That Feels.

Here’s UNESCO’s (United Nations Educational, Scientific and Cultural Organization) November 25, 2021 press release making the announcement (also received via email),

UNESCO member states adopt the first ever global agreement [recommendation] on the Ethics of Artificial Intelligence

Paris, 25 Nov [2021] – Audrey Azoulay, Director-General of UNESCO presented Thursday the first ever global standard on the ethics of artificial intelligence adopted by the member states of UNESCO at the General Conference.

This historical text defines the common values and principles which will guide the construction of the necessary legal infrastructure to ensure the healthy development of AI.

AI is pervasive, and enables many of our daily routines – booking flights, steering driverless cars, and personalising our morning news feeds. AI also supports the decision-making of governments and the private sector.

AI technologies are delivering remarkable results in highly specialized fields such as cancer screening and building inclusive environments for people with disabilities. They also help combat global problems like climate change and world hunger, and help reduce poverty by optimizing economic aid.

But the technology is also bringing new unprecedented challenges. We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable AI technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues.

In 2018, Audrey Azoulay, Director-General of UNESCO, launched an ambitious project: to give the world an ethical framework for the use of artificial intelligence. Three years later, thanks to the mobilization of hundreds of experts from around the world and intense international negotiations, the 193 UNESCO’s member states have just officially adopted this ethical framework.

“The world needs rules for artificial intelligence to benefit humanity. The Recommendation on the ethics of AI is a major answer. It sets the first global normative framework while giving States the responsibility to apply it at their level. UNESCO will support its 193 Member States in its implementation and ask them to report regularly on their progress and practices”, said Audrey Azoulay, UNESCO Director-General.

The content of the recommendation

The Recommendation [emphasis mine] aims to realize the advantages AI brings to society and reduce the risks it entails. It ensures that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy.

*Protecting data

The Recommendation calls for action beyond what tech firms and governments are doing to guarantee individuals more protection by ensuring transparency, agency and control over their personal data. It states that individuals should all be able to access or even erase records of their personal data. It also includes actions to improve data protection and an individual’s knowledge of, and right to control, their own data. It also increases the ability of regulatory bodies around the world to enforce this.

*Banning social scoring and mass surveillance

The Recommendation explicitly bans the use of AI systems for social scoring and mass surveillance. These types of technologies are very invasive, they infringe on human rights and fundamental freedoms, and they are used in a broad way. The Recommendation stresses that when developing regulatory frameworks, Member States should consider that ultimate responsibility and accountability must always lie with humans and that AI technologies should not be given legal personality themselves.

*Helping to monitor and evaluate

The Recommendation also sets the ground for tools that will assist in its implementation. Ethical Impact Assessment is intended to help countries and companies developing and deploying AI systems to assess the impact of those systems on individuals, on society and on the environment. Readiness Assessment Methodology helps Member States to assess how ready they are in terms of legal and technical infrastructure. This tool will assist in enhancing the institutional capacity of countries and recommend appropriate measures to be taken in order to ensure that ethics are implemented in practice. In addition, the Recommendation encourages Member States to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee auditing and continuous monitoring efforts.

*Protecting the environment

The Recommendation emphasises that AI actors should favour data, energy and resource-efficient AI methods that will help ensure that AI becomes a more prominent tool in the fight against climate change and on tackling environmental issues. The Recommendation asks governments to assess the direct and indirect environmental impact throughout the AI system life cycle. This includes its carbon footprint, energy consumption and the environmental impact of raw material extraction for supporting the manufacturing of AI technologies. It also aims at reducing the environmental impact of AI systems and data infrastructures. It incentivizes governments to invest in green tech, and if there are disproportionate negative impact of AI systems on the environment, the Recommendation instruct that they should not be used.

“Decisions impacting millions of people should be fair, transparent and contestable. These new technologies must help us address the major challenges in our world today, such as increased inequalities and the environmental crisis, and not deepening them,” said Gabriela Ramos, UNESCO’s Assistant Director General for Social and Human Sciences.

Emerging technologies such as AI have proven their immense capacity to deliver for good. However, its negative impacts that are exacerbating an already divided and unequal world, should be controlled. AI developments should abide by the rule of law, avoiding harm, and ensuring that when harm happens, accountability and redressal mechanisms are at hand for those affected.

If I read this properly (and it took me a little while), this is an agreement on the nature of the recommendations themselves and not an agreement to uphold them.

You can find more background information about the process for developing the framework outlined in the press release on the Recommendation on the ethics of artificial intelligence webpage. I was curious as to the composition of the Ad Hoc Expert Group (AHEG) for the Recommendation; they had varied representation from every continent. (FYI, the US and Mexico represented North America.)

Jean-Pierre Luminet awarded UNESCO’s Kalinga prize for Popularizing Science

Before getting to the news about Jean-Pierre Luminet, astrophysicist, poet, sculptor, and more, there’s the prize itself.

Established in 1951, a scant five years after UNESCO (United Nations Educational, Scientific and Cultural Organization) was founded in 1945, the Kalinga Prize for the Popularization of Science is the organization’s oldest prize. Here’s more from the UNESCO Kalinga Prize for the Popularization of Science webpage,

The UNESCO Kalinga Prize for the Popularization of Science is an international award to reward exceptional contributions made by individuals in communicating science to society and promoting the popularization of science. It is awarded to persons who have had a distinguished career as writer, editor, lecturer, radio, television, or web programme director, or film producer in helping interpret science, research and technology to the public. UNESCO Kalinga Prize winners know the potential power of science, technology, and research in improving public welfare, enriching the cultural heritage of nations and providing solutions to societal problems on the local, regional and global level.

The UNESCO Kalinga Prize for the Popularization of Science is UNESCO’s oldest prize, created in 1951 following a donation from Mr Bijoyanand Patnaik, Founder and President of the Kalinga Foundation Trust in India. Today, the Prize is funded by the Kalinga Foundation Trust, the Government of the State of Orissa, India, and the Government of India (Department of Science and Technology).

Jean-Pierre Luminet

From the November 4, 2021 UNESCO press release (also received via email),

French scientist and author Jean-Pierre Luminet will be awarded the 2021 UNESCO Kalinga Prize for the Popularization of Science. The prize-giving ceremony will take place online on 5 November as part of the celebration of World Science Day for Peace and Development.

An independent international jury selected Jean-Pierre Luminet recognizing his longstanding commitment to the popularization of science. Mr Luminet is a distinguished astrophysicist and cosmologist who has been promoting the values of scientific research through a wide variety of media: he has created popular science books and novels, beautifully illustrated exhibition catalogues, poetry, audiovisual materials for children and documentaries, notably “Du Big Bang au vivant” with Hubert Reeves. He is also an artist, engraver and sculptor and has collaborated with composers on musicals inspired by the sounds of the Universe.

His publications are model examples for communicating science to the public. Their scientific content is precise, rigorous and always state-of-the-art. He has written seven “scientific novels”, including “Le Secret de Copernic”, published in 2006. His recent book “Le destin de l’univers : trous noirs et énergie sombre”, about black holes and dark energy, was written for the general public and was praised for its outstanding scientific, historical, and literary qualities. Jean-Pierre Luminet’s work has been translated into many languages including Chinese and Korean.

There is a page for Luminet in both the French-language and English-language Wikipedias. If you have the language skills, you might want to check out the French-language entry as I found it to be more stylishly written.

Compare,

De par ses activités de poète, essayiste, romancier et scénariste, dans une œuvre voulant lier science, histoire, musique et art, il est également Officier des Arts et des Lettres.

With,

… Luminet has written fifteen science books,[4] seven historical novels,[4] TV documentaries,[5] and six poetry collections. He is an artist, an engraver, a sculptor, and a musician.

My rough translation of the French,

As a poet, essayist, novelist, and screenwriter, in a body of work that brings together science, history, music and art, he is truly someone who has enriched the French cultural inheritance (which is what it means to be an Officer of Arts and Letters or Officier des Arts et des Lettres; see the English-language entry for Ordre des Arts et des Lettres).

In any event, congratulations to M. Luminet.

Not a pretty picture: Canada and a patent rights waiver for COVID-19 vaccines

At about 7:15 am PT this morning, May 13, 2021, I saw Dr. Mona Nemer’s (Canada’s Chief Science Advisor) tweet (Note: I’m sorry the formatting isn’t better),

Maryse de la Giroday @frogheart: “Does this mean Canada will support a waiver on patent rights for COVID-19 vaccines?” (7:18 AM · May 13, 2021)

Dr. Mona Nemer @ChiefSciCan: “The global health crisis of the past year has underscored the critical importance of openly sharing scientific information. We are one step closer to making #openscience a reality around the world. So pleased that my office was part of these discussions. http://webcast.unesco.org/events/2021-05-OS-IGM/” (6:40 AM · May 13, 2021 · Twitter Web App), quoting:

Canada at UNESCO @Canada2UNESCO (May 6): “@Canada2UNESCO is partaking in negotiations today on the draft recommendation on #OpenScience The benefits of #science and #technology to health, the #economy and #development should be available to all.”

No reply. No surprise.

Brief summary of Canada’s COVID-19 patent rights nonwaiver

You’ll find more about the UNESCO meeting on open science in last week’s May 7, 2021 posting (Listen in on a UNESCO (United Nations Educational, Scientific and Cultural Organization) meeting [about Open Science]).

At the time, I noted a disparity in Canada’s policies centering on open science and patents; scroll down to the “Comments on open science and intellectual property in Canada” subsection for a more nuanced analysis. For those who don’t have the patience and/or the time, it boils down to this:

  1. Canada is happily participating in a UNESCO meeting on open science,
  2. the 2021 Canadian federal budget just dedicated a big chunk of money to augmenting Canada’s national patent strategy, and
  3. Canada is “willing to discuss” a waiver at the World Trade Organization (WTO) meetings.

I predicted UNESCO would see our representative’s enthusiastic participation while our representative at the WTO meeting would dance around the topic without committing to anything. Sadly, it’s starting to look like I was right.

Leigh Beadon in a May 12, 2021 posting on Techdirt reveals the situation is worse than I thought (Note: Links have been removed),

Few things illustrate the broken state of our global intellectual property system better than the fact that, well over a year into this devastating pandemic and in the face of a strong IP waiver push by some of the hardest hit countries, patents are still holding back the production of life-saving vaccines. And of all the countries opposing a waiver at the WTO (or withholding support for it, which is functionally the same thing), Canada might be the most frustrating [emphasis mine].

Canada is the biggest hoarder [emphasis mine] of vaccine pre-orders, having secured enough to vaccinate the population five times over. Despite this, it has constantly run into supply problems and lagged behind comparable countries when it comes to administering the vaccines on a per capita basis. In response to criticism of its hoarding, the government continues to focus on its plans to donate all surplus doses to the COVAX vaccine sharing program — but these promises were somewhat more convincing before Canada became the only G7 country to withdraw doses from COVAX. Despite all this, and despite pressure from experts who explain how vaccine hoarding will prolong the pandemic for everyone, the country has continually refused to voice its support for a TRIPS patent waiver at the WTO.

Momentum for changing Canada’s position on a COVID-19 vaccine patent rights waiver?

Maclean’s magazine has a May 10, 2021 open letter to Prime Minister Justin Trudeau,

Dear Prime Minister Trudeau,

The only way to combat this pandemic successfully is through a massive global vaccination campaign on a scale and timeline never before undertaken. This requires the production of effective tools and technologies to fight COVID-19 at scale and coordinated global distribution efforts.

The Trade-Related Aspect of Intellectual Property Rights (TRIPS) agreement at the World Trade Organization (WTO) is leading to the opposite outcome. Vaccine production is hindered by granting pharmaceutical companies monopoly power through protection of intellectual property rights, industrial designs and trade secrets. Pharmaceutical companies’ refusal to engage in health technology knowledge transfer makes large-scale, global vaccine production in (and for) low- and middle-income countries all but impossible. The current distribution of vaccines globally speaks to these obstacles.

Hundreds of civil society groups, the World Health Organization (WHO), and the elected governments of over 100 countries, including India, Afghanistan, Bangladesh, Nepal, Pakistan and Sri Lanka have come together and stated that current intellectual property protections reduce the availability of vaccines for protecting their people. On May 5, 2021 the United States also announced its intention to support a temporary waiver for vaccines at the WTO.

We are writing to ask our Canadian government to demonstrate its commitment to an equitable global pandemic response by supporting a temporary waiver of the TRIPS agreement. But clearly that is a necessary but not a sufficient first step. We recognize that scaling up vaccine production requires more than just a waiver of intellectual property rights, so we further request that our government support the WHO’s COVID-19 Technology Access Pool (C-TAP) to facilitate knowledge sharing and work with the WTO to address the supply chain and export constraints currently impeding vaccine production. Finally, because vaccines must be rolled out as part of an integrated strategy to end the acute phase of the epidemic, we request that Canada support the full scope of the TRIPS waiver, which extends to all essential COVID-19 products and technologies, including vaccines, diagnostics and therapeutics.

The status quo is clearly not working fast enough to end the acute phase of the pandemic globally. This waiver respects global intellectual property frameworks and takes advantage of existing provisions for exceptions during emergencies, as enshrined in the TRIPS agreement. Empowering countries to take measures to protect their own people is fundamental to bringing this pandemic to an end.

Anand Giridharadas (author of the 2018 book, Winners Take All: The Elite Charade of Changing the World) also makes the case for a patent rights waiver in his May 11, 2021 posting on The Ink, Note: A link has been removed,

Patents are temporary monopolies granted to inventors, to reward invention and thus encourage more of it. But what happens when you invent a drug that people around the world require to stay alive? What happens when, furthermore, that drug was built in part on technology the public paid for? Are there limits to intellectual property?

For years, activists have pressured the United States government to break or suspend patents in particular cases, as with HIV/Aids. They have had little luck. Indeed, the United States has often fought developing countries when they try to break patents to do right by their citizens, choosing American drug companies over dying people.

So it was a dramatic swerve when, last week, the Biden administration announced that it supported a waiver of the patents for Covid vaccines.

Not long afterward, I reached out to several leading activists for vaccine access to understand the significance of the announcement and where we go from here.

In all this talk about patents and social justice and, whether it’s directly referenced or not, money, the only numbers I’ve seen, until recently, have been numbers of doses and aggregate costs.

How much does a single vaccine dose cost?

A Sunday, April 11, 2021 article by Krassen Nikolov for EURACTIV provides an answer about the cost in one region, the European Union,

“Pfizer cost €12, then €15.50. The Commission now signs contracts for €19,50”, Bulgarian Prime Minister Boyko Borissov revealed on Sunday [April 11, 2021].

The European Commission is in talks with Pfizer for the supply of COVID-19 vaccines in 2022 and 2023. Borissov said the contracts provide for €19.50 per dose.

Under an agreement with the vaccine producing companies, the European Commission has so far refused to reveal the price of vaccines. However, last December Belgian Secretary of State Eva De Bleeker shared on Twitter the vaccine prices negotiated by the Commission, as well as the number of doses purchased by her government. Then, it became known that the AstraZeneca jab costs €1.78 compared to €12 for Pfizer-BioNTech.

From €12 to €19.50, that’s an increase of more than 60%. I wonder how Pfizer justifies such a hefty increase?

According to a March 16, 2021 article by Swikar Oli for the National Post (a Canadian newspaper), these are cheap, ‘pandemic special’ prices,

A top Pfizer executive told shareholders the company is looking at a “significant opportunity” to raise the price of its Pfizer-BioNTech COVID-19 vaccine.

While addressing investors at the virtual Barclays Global Healthcare Conference last week, Pfizer CFO Frank D’Amelio noted they could raise prices when the virus becomes endemic, meaning it’s regularly found in clusters around the globe, according to a transcript of the conference posted on Pfizer’s website.

Current vaccine pricing models are pandemic-related, D’Amelio explained. After the pandemic is defeated and “normal market conditions” arrive, he noted the window would open for a “significant opportunity…from a pricing perspective.”

“So the one price that we published is the price with the U.S. of $19.50 per dose. Obviously, that’s not a normal price like we typically get for a vaccine, $150, $175 [emphasis mine] per dose,” he said, “So pandemic pricing.”

If I remember rightly, as you increase production, you lower costs per unit. In other words, it’s cheaper per item to produce a dozen than one, which is why your bakery charges you less money per bun or cake if you purchase by the dozen.
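To make that point concrete, here is a back-of-the-envelope sketch in Python of how fixed costs get spread over a production run. The numbers are entirely hypothetical (they are not Pfizer’s figures); the only point is that cost per dose should fall as volume rises.

```python
# Hypothetical economies-of-scale illustration; all figures are invented.
def cost_per_dose(fixed_costs: float, variable_cost: float, doses: int) -> float:
    """Fixed costs (R&D, facilities) amortized over the production run,
    plus the marginal cost of making each dose."""
    return fixed_costs / doses + variable_cost

FIXED = 2_000_000_000      # made-up R&D and plant investment, in dollars
VARIABLE = 5.0             # made-up materials and labour cost per dose

for doses in (100_000_000, 1_000_000_000, 3_000_000_000):
    print(f"{doses:>13,} doses -> ${cost_per_dose(FIXED, VARIABLE, doses):.2f} per dose")
```

On those invented numbers, tripling the production run cuts the per-dose cost substantially, which is what you’d expect from any mass-produced good.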

During this pandemic, Pfizer has been producing huge amounts of vaccine, which they would not expect to do should the disease become endemic. As Pfizer has increased production, I would think the price should be dropping but according to the Bulgarian prime minister, it’s not.

They don’t seem to be changing the vaccine as new variants arrive. So, raising the prices doesn’t seem to be linked to research issues and as for the new production facilities, surely those didn’t cost billions.

Canada and COVID-19 money

Speaking of money, Canada has a COVID-19 billionaire according to a December 23, 2020 article (Meet The 50 Doctors, Scientists And Healthcare Entrepreneurs Who Became Pandemic Billionaires In 2020) by Giacomo Tognini for Forbes.

I have a bit more about Carl Hansen (COVID-19 billionaire) and his company, AbCellera, in my December 30, 2020 posting.

I wonder how much the Canadian life sciences community has to do with Canada’s hesitancy over a COVID-19 vaccine patent rights waiver.