Category Archives: regulation

UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes

This is the closest I’ve ever gotten to writing a gossip column (see my October 18, 2023 posting and scroll down to the “Insight into political jockeying [i.e., some juicy news bits]” subhead) for the first half.

Given the role that Canadian researchers (for more about that see my May 25, 2023 posting and scroll down to “The Panic” subhead) have played in the development of artificial intelligence (AI), it’s been surprising that the Canadian Broadcasting Corporation (CBC) has given very little coverage to the event in the UK. However, there is an October 31, 2023 article by Kelvin Chan and Jill Lawless for the Associated Press posted on the CBC website,

Digital officials, tech company bosses and researchers are converging Wednesday [November 1, 2023] at a former codebreaking spy base [Bletchley Park] near London [UK] to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence.

The two-day summit focusing on so-called frontier AI notched up an early achievement with officials from 28 nations and the European Union signing an agreement on safe and responsible development of the technology.

Frontier AI is shorthand for the latest and most powerful general purpose systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They’re underpinned by foundation models, which power chatbots like OpenAI’s ChatGPT and Google’s Bard and are trained on vast pools of information scraped from the internet.

The AI Safety Summit is a labour of love for British Prime Minister Rishi Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.[emphasis mine]

But U.S. Vice President Kamala Harris may divert attention Wednesday [November 1, 2023] with a separate speech in London setting out the Biden administration’s more hands-on approach.

Canada’s Minister of Innovation, Science and Industry François-Philippe Champagne said AI would not be constrained by national borders, and therefore interoperability between different regulations being put in place was important.

As the meeting began, U.K. Technology Secretary Michelle Donelan announced that the 28 countries and the European Union had signed the Bletchley Declaration on AI Safety. It outlines the “urgent need to understand and collectively manage potential risks through a new joint global effort.”

South Korea has agreed to host a mini virtual AI summit in six months, followed by an in-person one in France in a year’s time, the U.K. government said.

Chris Stokel-Walker’s October 31, 2023 article for Fast Company presents a critique of the summit prior to the opening, Note: Links have been removed,

… one problem, critics say: The summit, which begins on November 1, is too insular and its participants are homogeneous—an especially damning critique for something that’s trying to tackle the huge, possibly intractable questions around AI. The guest list is made up of 100 of the great and good of governments, including representatives from China, Europe, and Vice President Kamala Harris. And it also includes luminaries within the tech sector. But precious few others—which means a lack of diversity in discussions about the impact of AI.

“Self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI,” says Carsten Jung, a senior economist at the Institute for Public Policy Research, a progressive think tank that recently published a report advising on key policy pillars it believes should be discussed at the summit. (Jung isn’t on the guest list.) “We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.”

Kriti Sharma, chief product officer for legal tech at Thomson Reuters, who will be watching from the wings, not receiving an invite, is similarly circumspect about the goals of the summit. “I hope to see leaders moving past the doom to take practical steps to address known issues and concerns in AI, giving businesses the clarity they urgently need,” she says. “Ideally, I’d like to see movement towards putting some fundamental AI guardrails in place, in the form of a globally aligned, cross-industry regulatory framework.”

But it’s uncertain whether the summit will indeed discuss the more practical elements of AI. Already it seems as if the gathering is designed to quell public fears around AI while convincing those developing AI products that the U.K. will not take too strong an approach in regulating the technology, perhaps in contrast to near neighbors in the European Union, who have been open about their plans to ensure the technology is properly fenced in to ensure user safety.

Already, there are suggestions that the summit has been drastically downscaled in its ambitions, with leadership passing to others, including the United States, where President Biden just announced a sweeping executive order on AI, and the United Nations, which announced its AI advisory board last week.

Ingrid Lunden in her October 31, 2023 article for TechCrunch is more blunt,

As we wrote yesterday, the U.K. is partly using this event — the first of its kind, as it has pointed out — to stake out a territory for itself on the AI map — both as a place to build AI businesses, but also as an authority in the overall field.

That, coupled with the fact that the topics and approach are focused on potential issues, makes the affair feel like one very grand photo opportunity and PR exercise, a way for the government to show itself off in the most positive way at the same time that it slides down in the polls and it also faces a disastrous, bad-look inquiry into how it handled the COVID-19 pandemic. On the other hand, the U.K. does have the credentials for a seat at the table, so if the government is playing a hand here, it’s able to do it because its cards are strong.

The subsequent guest list, predictably, leans more toward organizations and attendees from the U.K. It’s also almost as revealing to see who is not participating.

Lunden’s October 30, 2023 article “Existential risk? Regulatory capture? AI for one and all? A look at what’s going on with AI in the UK” includes a little ‘inside’ information,

That high-level aspiration is also reflected in who is taking part: top-level government officials, captains of industry, and notable thinkers in the space are among those expected to attend. (Latest late entry: Elon Musk; latest no’s reportedly include President Biden, Justin Trudeau and Olaf Scholz.) [Scholz’s no was mentioned in my October 18, 2023 posting]

It sounds exclusive, and it is: “Golden tickets” (as Azeem Azhar, a London-based tech founder and writer, describes them) to the Summit are in scarce supply. Conversations will be small and mostly closed. So because nature abhors a vacuum, a whole raft of other events and news developments have sprung up around the Summit, looping in the many other issues and stakeholders at play. These have included talks at the Royal Society (the U.K.’s national academy of sciences); a big “AI Fringe” conference that’s being held across multiple cities all week; many announcements of task forces; and more.

Earlier today, a group of 100 trade unions and rights campaigners sent a letter to the prime minister saying that the government is “squeezing out” their voices in the conversation by not having them be a part of the Bletchley Park event. (They may not have gotten their golden tickets, but they were definitely canny how they objected: The group publicized its letter by sharing it with no less than the Financial Times, the most elite of economic publications in the country.)

And normal people are not the only ones who have been snubbed. “None of the people I know have been invited,” Carissa Véliz, a tutor in philosophy at the University of Oxford, said during one of the AI Fringe events today [October 30, 2023].

More broadly, the summit has become an anchor and only one part of the bigger conversation going on right now. Last week, U.K. prime minister Rishi Sunak outlined an intention to launch a new AI safety institute and a research network in the U.K. to put more time and thought into AI implications; a group of prominent academics, led by Yoshua Bengio [University of Montreal, Canada] and Geoffrey Hinton [University of Toronto, Canada], published a paper called “Managing AI Risks in an Era of Rapid Progress” to put their collective oar into the waters; and the UN announced its own task force to explore the implications of AI. Today [October 30, 2023], U.S. president Joe Biden issued the country’s own executive order to set standards for AI security and safety.

There are a couple more articles from the BBC (British Broadcasting Corporation) covering the start of the summit, a November 1, 2023 article by Zoe Kleinman & Tom Gerken, “King Charles: Tackle AI risks with urgency and unity” and another November 1, 2023 article, this time by Tom Gerken & Imran Rahman-Jones, “Rishi Sunak: AI firms cannot ‘mark their own homework‘.”

Politico offers more US-centric coverage of the event with a November 1, 2023 article by Mark Scott, Tom Bristow and Gian Volpicelli, “US and China join global leaders to lay out need for AI rulemaking,” a November 1, 2023 article by Vincent Manancourt and Eugene Daniels, “Kamala Harris seizes agenda as Rishi Sunak’s AI summit kicks off,” and a November 1, 2023 article by Vincent Manancourt, Eugene Daniels and Brendan Bordelon, “‘Existential to who[m]?’ US VP Kamala Harris urges focus on near-term AI risks.”

I want to draw special attention to the second Politico article,

Kamala just showed Rishi who’s boss.

As British Prime Minister Rishi Sunak’s showpiece artificial intelligence event kicked off in Bletchley Park on Wednesday, 50 miles south in the futuristic environs of the American Embassy in London, U.S. Vice President Kamala Harris laid out her vision for how the world should govern artificial intelligence.

It was a raw show of U.S. power on the emerging technology.

Did she or was this an aggressive interpretation of events?

AI safety talks at Bletchley Park in November 2023

There’s a very good article about the upcoming AI (artificial intelligence) safety talks on the British Broadcasting Corporation (BBC) news website (plus some juicy perhaps even gossipy news about who may not be attending the event) but first, here’s the August 24, 2023 UK government press release making the announcement,

Iconic Bletchley Park to host UK AI Safety Summit in early November [2023]

Major global event to take place on the 1st and 2nd of November [2023].

– UK to host world first summit on artificial intelligence safety in November

– Talks will explore and build consensus on rapid, international action to advance safety at the frontier of AI technology

– Bletchley Park, one of the birthplaces of computer science, to host the summit

International governments, leading AI companies and experts in research will unite for crucial talks in November on the safe development and use of frontier AI technology, as the UK Government announces Bletchley Park as the location for the UK summit.

The major global event will take place on the 1st and 2nd November to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly.

To be hosted at Bletchley Park in Buckinghamshire, a significant location in the history of computer science development and once the home of British Enigma codebreaking – it will see coordinated action to agree a set of rapid, targeted measures for furthering safety in global AI use.

Preparations for the summit are already in full flow, with Matt Clifford and Jonathan Black recently appointed as the Prime Minister’s Representatives. Together they’ll spearhead talks and negotiations, as they rally leading AI nations and experts over the next three months to ensure the summit provides a platform for countries to work together on further developing a shared approach to agree the safety measures needed to mitigate the risks of AI.

Prime Minister Rishi Sunak said:

“The UK has long been home to the transformative technologies of the future, so there is no better place to host the first ever global AI safety summit than at Bletchley Park this November.

To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead.

With the combined strength of our international partners, thriving AI industry and expert academic community, we can secure the rapid international action we need for the safe and responsible development of AI around the world.”

Technology Secretary Michelle Donelan said:

“International collaboration is the cornerstone of our approach to AI regulation, and we want the summit to result in leading nations and experts agreeing on a shared approach to its safe use.

The UK is consistently recognised as a world leader in AI and we are well placed to lead these discussions. The location of Bletchley Park as the backdrop will reaffirm our historic leadership in overseeing the development of new technologies.

AI is already improving lives from new innovations in healthcare to supporting efforts to tackle climate change, and November’s summit will make sure we can all realise the technology’s huge benefits safely and securely for decades to come.”

The summit will also build on ongoing work at international forums including the OECD, Global Partnership on AI, Council of Europe, and the UN and standards-development organisations, as well as the recently agreed G7 Hiroshima AI Process.

The UK boasts strong credentials as a world leader in AI. The technology employs over 50,000 people, directly supports one of the Prime Minister’s five priorities by contributing £3.7 billion to the economy, and is the birthplace of leading AI companies such as Google DeepMind. It has also invested more on AI safety research than any other nation, backing the creation of the Foundation Model Taskforce with an initial £100 million.

Foreign Secretary James Cleverly said:

“No country will be untouched by AI, and no country alone will solve the challenges posed by this technology. In our interconnected world, we must have an international approach.

The origins of modern AI can be traced back to Bletchley Park. Now, it will also be home to the global effort to shape the responsible use of AI.”

Bletchley Park’s role in hosting the summit reflects the UK’s proud tradition of being at the frontier of new technology advancements. Since Alan Turing’s celebrated work some eight decades ago, computing and computer science have become fundamental pillars of life both in the UK and across the globe.

Iain Standen, CEO of the Bletchley Park Trust, said:

“Bletchley Park Trust is immensely privileged to have been chosen as the venue for the first major international summit on AI safety this November, and we look forward to welcoming the world to our historic site.

It is fitting that the very spot where leading minds harnessed emerging technologies to influence the successful outcome of World War 2 will, once again, be the crucible for international co-ordinated action.

We are incredibly excited to be providing the stage for discussions on global safety standards, which will help everyone manage and monitor the risks of artificial intelligence.”

The roots of AI can be traced back to the leading minds who worked at Bletchley during World War 2, with codebreakers Jack Good and Donald Michie among those who went on to write extensive works on the technology. In November [2023], it will once again take centre stage as the international community comes together to agree on important guardrails which ensure the opportunities of AI can be realised, and its risks safely managed.

The announcement follows the UK government allocating £13 million to revolutionise healthcare research through AI, unveiled last week. The funding supports a raft of new projects including transformations to brain tumour surgeries, new approaches to treating chronic nerve pain, and a system to predict a patient’s risk of developing future health problems based on existing conditions.

Tom Gerken’s August 24, 2023 BBC news article (an analysis by Zoe Kleinman follows as part of the article) fills in a few blanks, Note: Links have been removed,

World leaders will meet with AI companies and experts on 1 and 2 November for the discussions.

The global talks aim to build an international consensus on the future of AI.

The summit will take place at Bletchley Park, where Alan Turing, one of the pioneers of modern computing, worked during World War Two.

It is unknown which world leaders will be invited to the event, with a particular question mark over whether the Chinese government or tech giant Baidu will be in attendance.

The BBC has approached the government for comment.

The summit will address how the technology can be safely developed through “internationally co-ordinated action” but there has been no confirmation of more detailed topics.

It comes after US tech firm Palantir rejected calls to pause the development of AI in June, with its boss Alex Karp saying it was only those with “no products” who wanted a pause.

And in July [2023], children’s charity the Internet Watch Foundation called on Mr Sunak to tackle AI-generated child sexual abuse imagery, which it says is on the rise.

Kleinman’s analysis includes this, Note: A link has been removed,

Will China be represented? Currently there is a distinct east/west divide in the AI world but several experts argue this is a tech that transcends geopolitics. Some say a UN-style regulator would be a better alternative to individual territories coming up with their own rules.

If the government can get enough of the right people around the table in early November [2023], this is perhaps a good subject for debate.

Three US AI giants – OpenAI, Anthropic and Palantir – have all committed to opening London headquarters.

But there are others going in the opposite direction – British DeepMind co-founder Mustafa Suleyman chose to locate his new AI company InflectionAI in California. He told the BBC the UK needed to cultivate a more risk-taking culture in order to truly become an AI superpower.

Many of those who worked at Bletchley Park decoding messages during WW2 went on to write and speak about AI in later years, including codebreakers Irving John “Jack” Good and Donald Michie.

Soon after the War, [Alan] Turing proposed the imitation game – later dubbed the “Turing test” – which seeks to identify whether a machine can behave in a way indistinguishable from a human.

There is a Bletchley Park website, which sells tickets for tours.

Insight into political jockeying (i.e., some juicy news bits)

This was recently reported by the BBC, in an October 17 (?), 2023 news article by Jessica Parker & Zoe Kleinman on BBC News online,

German Chancellor Olaf Scholz may turn down his invitation to a major UK summit on artificial intelligence, the BBC understands.

While no guest list has been published of an expected 100 participants, some within the sector say it’s unclear if the event will attract top leaders.

A government source insisted the summit is garnering “a lot of attention” at home and overseas.

The two-day meeting is due to bring together leading politicians as well as independent experts and senior execs from the tech giants, who are mainly US based.

The first day will bring together tech companies and academics for a discussion chaired by the Secretary of State for Science, Innovation and Technology, Michelle Donelan.

The second day is set to see a “small group” of people, including international government figures, in meetings run by PM Rishi Sunak.

Though no final decision has been made, it is now seen as unlikely that the German Chancellor will attend.

That could spark concerns of a “domino effect” with other world leaders, such as the French President Emmanuel Macron, also unconfirmed.

Government sources say there are heads of state who have signalled a clear intention to turn up, and the BBC understands that high-level representatives from many US-based tech giants are going.

The foreign secretary confirmed in September [2023] that a Chinese representative has been invited, despite controversy.

Some MPs within the UK’s ruling Conservative Party believe China should be cut out of the conference after a series of security rows.

It is not known whether there has been a response to the invitation.

China is home to a huge AI sector and has already created its own set of rules to govern responsible use of the tech within the country.

The US, a major player in the sector and the world’s largest economy, will be represented by Vice-President Kamala Harris.

Britain is hoping to position itself as a key broker as the world wrestles with the potential pitfalls and risks of AI.

However, Berlin is thought to want to avoid any messy overlap with G7 efforts, after the group of leading democratic countries agreed to create an international code of conduct.

Germany is also the biggest economy in the EU – which is itself aiming to finalise its own landmark AI Act by the end of this year.

It includes grading AI tools depending on how significant they are, so for example an email filter would be less tightly regulated than a medical diagnosis system.

The European Commission President Ursula von der Leyen is expected at next month’s summit, while it is possible Berlin could send a senior government figure such as its vice chancellor, Robert Habeck.

A source from the Department for Science, Innovation and Technology said: “This is the first time an international summit has focused on frontier AI risks and it is garnering a lot of attention at home and overseas.

“It is usual not to confirm senior attendance at major international events until nearer the time, for security reasons.”

Fascinating, eh?

Symposium on “Enabling the Nanotechnology Revolution” on October 10, 2023, in-person in Washington, DC or virtual

It’s the 20th anniversary of the US National Nanotechnology Initiative (NNI) and, now, scientists and policymakers will be celebrating and analyzing the results on October 10, 2023 according to a September 18, 2023 post on the JD Supra Nano and Other Emerging Chemical Technologies blog, Note: A link has been removed,

On October 10, 2023, the National Nanotechnology Coordination Office (NNCO) will host a symposium entitled “Enabling the Nanotechnology Revolution: Celebrating the 20th Anniversary of the 21st Century Nanotechnology Research and Development Act” at the National Academies of Sciences, Engineering, and Medicine. Experts will address the importance of nanotechnology in microelectronics, optics, advanced polymers, quantum engineering, medicine, education, manufacturing, and more. Discussions will also focus on the environmental, health, and safety implications of nanomaterials, as well as the National Nanotechnology Initiative (NNI) community’s efforts around inclusion, diversity, equity, and access.

You can register and find more information on the National Nanotechnology Initiative (NNI) anniversary symposium webpage, Note: A link has been removed,

Scientists and engineers across many fields and disciplines are united by their work at the nanoscale. Their diverse efforts have helped produce everything from faster microchips to powerful mRNA vaccines. The transformative impact of this work has been spurred by the coordination and focus on U.S. nanotechnology established by the 21st Century Nanotechnology Research and Development Act in 2003. Celebrating such a broad impact and envisioning the future can be quite challenging, but this event will bring together voices from across the emerging technology landscape. There will be experts who can speak on the importance of nanotechnology in quantum engineering, optics, EHS, plastics, DEIA, microelectronics, medicine, education, manufacturing, and more. We can’t predict what will emerge from this lively discussion between researchers, policymakers, members of industry, educators, and the public, but the conversation can only benefit from including more diverse perspectives – especially yours.

You have the option of registering for in-person or virtual attendance.

Here’s the:

AGENDA

9:00-9:05   Welcome and Introduction

9:05-9:30   Opening Remarks on the NNI

9:30-10:15  Morning Keynote

10:15-10:30  Coffee Break

10:30-11:15  Panel: Responsible Development

11:15-12:00  Panel: Fundamental Research

12:00-1:00  Lunch and Networking

1:00-1:45  Keynote Panel: The Future of Nanotechnology

1:45-2:30  Panel: Workforce Development

2:30-2:45  Break

2:45-3:30  Panel: Infrastructure

3:30-4:15  Panel: Commercialization

4:15-5:00  Closing Keynote

Reception to follow

If you’re curious about the panelists and speakers, you will find a list with pictures and links to profile pages on the NNI’s anniversary symposium webpage.

Ethical nanobiotechnology

This paper on ethics (aside: I have a few comments after the news release and citation) comes from the US Pacific Northwest National Laboratory (PNNL) according to a July 12, 2023 news item on phys.org,

Prosthetics moved by thoughts. Targeted treatments for aggressive brain cancer. Soldiers with enhanced vision or bionic ears. These powerful technologies sound like science fiction, but they’re becoming possible thanks to nanoparticles.

“In medicine and other biological settings, nanotechnology is amazing and helpful, but it could be harmful if used improperly,” said Pacific Northwest National Laboratory (PNNL) chemist Ashley Bradley, part of a team of researchers who conducted a comprehensive survey of nanobiotechnology applications and policies.

Their research, available in Health Security, works to sum up the very large, active field of nanotechnology in biology applications, draw attention to regulatory gaps, and offer areas for further consideration.

A July 12, 2023 PNNL news release (also on EurekAlert), which originated the news item, delves further into the topic, Note: A link has been removed,

“In our research, we learned there aren’t many global regulations yet,” said Bradley. “And we need to create a common set of rules to figure out the ethical boundaries.”

Nanoparticles, big differences

Nanoparticles are clusters of molecules with different properties than large amounts of the same substances. In medicine and other biology applications, these properties allow nanoparticles to act as the packaging that delivers treatments through cell walls and the difficult-to-cross blood-brain barrier.

“You can think of the nanoparticles a little bit like the plastic around shredded cheese,” said PNNL chemist Kristin Omberg. “It makes it possible to get something perishable directly where you want it, but afterwards you’ve got to deal with a whole lot of substance where it wasn’t before.”

Unfortunately, dealing with nanoparticles in new places isn’t straightforward. Carbon is pencil lead; nano carbon conducts electricity. The same material may have different properties at the nanoscale, but most countries still regulate it the same as bulk material, if the material is regulated at all.

For example, zinc oxide, a material that was stable and unreactive as a pigment in white paint, is now accumulating in oceans when used as nanoparticles in sunscreen, warranting a call to create alternative reef-safe sunscreens. And although fats and lipids aren’t regulated, the researchers suggest which agencies could weigh in on regulations were fats to become after-treatment byproducts.

The article also inventories national and international agencies, organizations, and governing bodies with an interest in understanding how nanoparticles break down or react in a living organism and the environmental life cycle of a nanoparticle. Because nanobiotechnology spans materials science, biology, medicine, environmental science, and tech, these disparate research and regulatory disciplines must come together, often for the first time, to fully understand the impact on humans and the environment.

Dual use: Good for us, bad for us

Like other quickly growing fields, there’s a time lag between the promise of new advances and the possibilities of unintended uses.

“There were so many more applications than we thought there were,” said Bradley, who collected exciting nanobio examples such as Alzheimer’s treatment, permanent contact lenses, organ replacement, and enhanced muscle recovery, among others.

The article also highlights concerns about crossing the blood-brain barrier, thought-initiated control of computers, and nano-enabled DNA editing where the researchers suggest more caution, questioning, and attention could be warranted. This attention spans everything from deep fundamental research and regulations all the way to what Omberg called “the equivalent of tattoo removal” if home-DNA splicing attempts go south.

The researchers draw parallels to more established fields such as synthetic bio and pharmacology, which offer lessons to be learned from current concerns such as the unintended consequences of fentanyl and opioids. They believe these fields also offer examples of innovative coordination between science and ethics, such as synthetic bio’s IGEM [The International Genetically Engineered Machine competition]—student competition, to think about not just how to create, but also to shape the use and control of new technologies.

Omberg said unusually enthusiastic early reviewers of the article contributed even more potential uses and concerns, demonstrating that experts in many fields recognize ethical nanobiotechnology is an issue to get in front of. “This is a train that’s going. It will be sad if 10 years from now, we haven’t figured how to talk about it.”

Funding for the team’s research was supported by PNNL’s Biorisk Beyond the List National Security Directorate Objective.

Here’s a link to and a citation for the paper,

The Promise of Emergent Nanobiotechnologies for In Vivo Applications and Implications for Safety and Security by Anne M. Arnold, Ashley M. Bradley, Karen L. Taylor, Zachary C. Kennedy, and Kristin M. Omberg. Health Security, Vol. 20, No. 5, October 2022, pp. 408–423. DOI: https://doi.org/10.1089/hs.2022.0014 Published online: October 17, 2022.

This paper is open access.

You can find out more about IGEM (The International Genetically Engineered Machine competition) here.

Comments (brief)

It seems a little odd that the news release (“Prosthetics moved by thoughts …”) and the paper both reference neurotechnology without ever mentioning it by name. Here’s the reference from the paper, Note: Links have been removed,

Nanoparticles May Be Developed to Facilitate Cognitive Enhancements

The development and implementation of NPs that enhance cognitive function has yet to be realized. However, recent advances on the micro- and macro-level with neural–machine interfacing provide the building blocks necessary to develop this technology on the nanoscale. A noninvasive brain–computer interface to control a robotic arm was developed by teams at 2 universities.157 A US-based company, Neuralink, [emphasis mine] is at the forefront of implementing implantable, intracortical microelectrodes that provide an interface between the human brain and technology.158,159 Utilization of intracortical microelectrodes may ultimately provide thought-initiated access and control of computers and mobile devices, and possibly expand cognitive function by accessing underutilized areas of the brain.158

Neuralink (founded by Elon Musk) is controversial for its animal testing practices. You can find out more in Björn Ólafsson’s May 30, 2023 article for Sentient Media.

The focus on nanoparticles as the key factor in the various technologies and applications mentioned seems narrow but necessary, given the breadth of topics covered in the paper, as the authors themselves note in the abstract,

… In this article, while not comprehensive, we attempt to illustrate the breadth and promise of bionanotechnology developments, and how they may present future safety and security challenges. Specifically, we address current advancements to streamline the development of engineered NPs for in vivo applications and provide discussion on nano–bio interactions, NP in vivo delivery, nanoenhancement of human performance, nanomedicine, and the impacts of NPs on human health and the environment.

They have a good overview of the history and discussions about nanotechnology risks and regulation. It’s international in scope with a heavy emphasis on US efforts, as one would expect.

For anyone who’s interested in the neurotechnology end of things, I’ve got a July 17, 2023 commentary “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report.” The report was launched July 13, 2023 during UNESCO’s Global dialogue on the ethics of neurotechnology (see my July 7, 2023 posting about the then upcoming dialogue for links to more UNESCO information). Both the July 17 and July 7, 2023 postings included additional information about Neuralink.

Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report

Launched on Thursday, July 13, 2023 during UNESCO’s (United Nations Educational, Scientific, and Cultural Organization) “Global dialogue on the ethics of neurotechnology,” the report ties together the usual measures of national scientific supremacy (number of papers published and number of patents filed) with information on corporate investment in the field. Consequently, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends” by Daniel S. Hain, Roman Jurowetzki, Mariagrazia Squicciarini, and Lihui Xu provides better insight into the international neurotechnology scene than is sometimes found in these kinds of reports. By the way, the report is open access.

Here’s what I mean, from the report‘s short summary,

Since 2013, government investments in this field have exceeded $6 billion. Private investment has also seen significant growth, with annual funding experiencing a 22-fold increase from 2010 to 2020, reaching $7.3 billion and totaling $33.2 billion.

This investment has translated into a 35-fold growth in neuroscience publications between 2000-2021 and 20-fold growth in innovations between 2000-2020, as proxied by patents. However, not all are poised to benefit from such developments, as big divides emerge.

Over 80% of high-impact neuroscience publications are produced by only ten countries, while 70% of countries contributed fewer than 10 such papers over the period considered. Similarly, five countries only hold 87% of IP5 neurotech patents.

This report sheds light on the neurotechnology ecosystem, that is, what is being developed, where and by whom, and informs about how neurotechnology interacts with other technological trajectories, especially Artificial Intelligence [emphasis mine]. [p. 2]

The money aspect is eye-opening even when you already have your suspicions. Also, it’s not entirely unexpected to learn that only ten countries produce over 80% of the high impact neurotech papers and that only five countries hold 87% of the IP5 neurotech patents but it is stunning to see it in context. (If you’re not familiar with the term ‘IP5 patents’, scroll down in this post to the relevant subhead. Hint: It means the patent was filed in one of the top five jurisdictions; I’ll leave you to guess which ones those might be.)

“Since 2013 …” isn’t quite as informative as the authors may have hoped. I wish they had given a time frame for government investments similar to what they did for corporate investments (e.g., 2010 – 2020). Also, is the $6B (likely in USD) government investment cumulative or an estimated annual number? To sum up, I would have appreciated parallel structure and specificity.

Nitpicks aside, there’s some very good material intended for policy makers. On that note, some of the analysis is beyond me; I haven’t used anything even somewhat close to their analytical tools in years and years. This commentary reflects my interests and a very rapid reading. One last thing, this is being written from a Canadian perspective. With those caveats in mind, here’s some of what I found.

A definition, social issues, country statistics, and more

There’s a definition for neurotechnology and a second mention of artificial intelligence being used in concert with neurotechnology. From the report‘s executive summary,

Neurotechnology consists of devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of animals or human beings. It is poised to revolutionize our understanding of the brain and to unlock innovative solutions to treat a wide range of diseases and disorders.

Similarly to Artificial Intelligence (AI), and also due to its convergence with AI, neurotechnology may have profound societal and economic impact, beyond the medical realm. As neurotechnology directly relates to the brain, it triggers ethical considerations about fundamental aspects of human existence, including mental integrity, human dignity, personal identity, freedom of thought, autonomy, and privacy [emphases mine]. Its potential for enhancement purposes and its accessibility further amplifies its prospective social and societal implications.

The recent discussions held at UNESCO’s Executive Board further shows Member States’ desire to address the ethics and governance of neurotechnology through the elaboration of a new standard-setting instrument on the ethics of neurotechnology, to be adopted in 2025. To this end, it is important to explore the neurotechnology landscape, delineate its boundaries, key players, and trends, and shed light on neurotech’s scientific and technological developments. [p. 7]

Here’s how they sourced the data for the report,

The present report addresses such a need for evidence in support of policy making in relation to neurotechnology by devising and implementing a novel methodology on data from scientific articles and patents:

● We detect topics over time and extract relevant keywords using a transformer-based language model fine-tuned for scientific text. Publication data for the period 2000-2021 are sourced from the Scopus database and encompass journal articles and conference proceedings in English. The 2,000 most cited publications per year are further used in in-depth content analysis.
● Keywords are identified through Named Entity Recognition and used to generate search queries for conducting a semantic search on patents’ titles and abstracts, using another language model developed for patent text. This allows us to identify patents associated with the identified neuroscience publications and their topics. The patent data used in the present analysis are sourced from the European Patent Office’s Worldwide Patent Statistical Database (PATSTAT). We consider IP5 patents filed between 2000-2020 having an English language abstract and exclude patents solely related to pharmaceuticals.

This approach allows mapping the advancements detailed in scientific literature to the technological applications contained in patent applications, allowing for an analysis of the linkages between science and technology. This almost fully automated novel approach allows repeating the analysis as neurotechnology evolves. [pp. 8-9]
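The quoted two-step pipeline, extracting topic keywords from publications and then running a semantic search over patent titles and abstracts, can be sketched in miniature. The report uses fine-tuned transformer embeddings; this stdlib-only toy substitutes bag-of-words cosine similarity for those embeddings, and the topic keywords and patent records are entirely hypothetical.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts (a crude stand-in for transformer embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_patents(topic_keywords, patents, threshold=0.2):
    """Score each patent's title+abstract against the topic query;
    keep those above the threshold, best match first."""
    query = vectorize(" ".join(topic_keywords))
    hits = []
    for p in patents:
        score = cosine(query, vectorize(p["title"] + " " + p["abstract"]))
        if score >= threshold:
            hits.append((round(score, 2), p["title"]))
    return sorted(hits, reverse=True)

# Hypothetical data for illustration only.
topic = ["brain", "computer", "interface", "eeg", "signal"]
patents = [
    {"title": "Brain computer interface apparatus",
     "abstract": "Decoding eeg signal to control a device via a brain interface"},
    {"title": "Pharmaceutical composition",
     "abstract": "A compound for treating hypertension"},
]
print(match_patents(topic, patents))
```

The pharmaceutical patent scores zero against the neurotech query and falls out, loosely mirroring the report’s exclusion of purely pharmaceutical patents.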

Findings in bullet points,

Key stylized facts are:

● The field of neuroscience has witnessed a remarkable surge in the overall number of publications since 2000, exhibiting a nearly 35-fold increase over the period considered, reaching 1.2 million in 2021. The annual number of publications in neuroscience has nearly tripled since 2000, exceeding 90,000 publications a year in 2021. This increase became even more pronounced since 2019.
● The United States leads in terms of neuroscience publication output (40%), followed by the United Kingdom (9%), Germany (7%), China (5%), Canada (4%), Japan (4%), Italy (4%), France (4%), the Netherlands (3%), and Australia (3%). These countries account for over 80% of neuroscience publications from 2000 to 2021.
● Big divides emerge, with 70% of countries in the world having less than 10 high-impact neuroscience publications between 2000 to 2021.
● Specific neurotechnology-related research trends between 2000 and 2021 include:
○ An increase in Brain-Computer Interface (BCI) research around 2010, maintaining a consistent presence ever since.
○ A significant surge in Epilepsy Detection research in 2017 and 2018, reflecting the increased use of AI and machine learning in healthcare.
○ Consistent interest in Neuroimaging Analysis, which peaks around 2004, likely because of its importance in brain activity and language comprehension studies.
○ While peaking in 2016 and 2017, Deep Brain Stimulation (DBS) remains a persistent area of research, underlining its potential in treating conditions like Parkinson’s disease and essential tremor.
● Between 2000 and 2020, the total number of patent applications in this field increased significantly, experiencing a 20-fold increase from less than 500 to over 12,000. In terms of annual figures, a consistent upward trend in neurotechnology-related patent applications emerges, with a notable doubling observed between 2015 and 2020.
● The United States account for nearly half of all worldwide patent applications (47%). Other major contributors include South Korea (11%), China (10%), Japan (7%), Germany (7%), and France (5%). These five countries together account for 87% of IP5 neurotech patents applied between 2000 and 2020.
○ The United States has historically led the field, with a peak around 2010, a decline towards 2015, and a recovery up to 2020.
○ South Korea emerged as a significant contributor after 1990, overtaking Germany in the late 2000s to become the second-largest developer of neurotechnology. By the late 2010s, South Korea’s annual neurotechnology patent applications approximated those of the United States.
○ China exhibits a sharp increase in neurotechnology patent applications in the mid-2010s, bringing it on par with the United States in terms of application numbers.
● The United States ranks highest in both scientific publications and patents, indicating their strong ability to transform knowledge into marketable inventions. China, France, and Korea excel in leveraging knowledge to develop patented innovations. Conversely, countries such as the United Kingdom, Germany, Italy, Canada, Brazil, and Australia lag behind in effectively translating neurotech knowledge into patentable innovations.
● In terms of patent quality measured by forward citations, the leading countries are Germany, US, China, Japan, and Korea.
● A breakdown of patents by technology field reveals that Computer Technology is the most important field in neurotechnology, exceeding Medical Technology, Biotechnology, and Pharmaceuticals. The growing importance of algorithmic applications, including neural computing techniques, also emerges by looking at the increase in patent applications in these fields between 2015-2020. Compared to the reference year, computer technologies-related patents in neurotech increased by 355% and by 92% in medical technology.
● An analysis of the specialization patterns of the top-5 countries developing neurotechnologies reveals that Germany has been specializing in chemistry-related technology fields, whereas Asian countries, particularly South Korea and China, focus on computer science and electrical engineering-related fields. The United States exhibits a balanced configuration with specializations in both chemistry and computer science-related fields.
● The entities – i.e. both companies and other institutions – leading worldwide innovation in the neurotech space are: IBM (126 IP5 patents, US), Ping An Technology (105 IP5 patents, CH), Fujitsu (78 IP5 patents, JP), Microsoft (76 IP5 patents, US)1, Samsung (72 IP5 patents, KR), Sony (69 IP5 patents, JP) and Intel (64 IP5 patents, US)

This report further proposes a pioneering taxonomy of neurotechnologies based on International Patent Classification (IPC) codes.

• 67 distinct patent clusters in neurotechnology are identified, which mirror the diverse research and development landscape of the field. The 20 most prominent neurotechnology groups, particularly in areas like multimodal neuromodulation, seizure prediction, neuromorphic computing [emphasis mine], and brain-computer interfaces, point to potential strategic areas for research and commercialization.
• The variety of patent clusters identified mirrors the breadth of neurotechnology’s potential applications, from medical imaging and limb rehabilitation to sleep optimization and assistive exoskeletons.
• The development of a baseline IPC-based taxonomy for neurotechnology offers a structured framework that enriches our understanding of this technological space, and can facilitate research, development and analysis. The identified key groups mirror the interdisciplinary nature of neurotechnology and underscores the potential impact of neurotechnology, not only in healthcare but also in areas like information technology and biomaterials, with non-negligible effects over societies and economies.

1 If we consider Microsoft Technology Licensing LLC and Microsoft Corporation as being under the same umbrella, Microsoft leads worldwide developments with 127 IP5 patents. Similarly, if we were to consider that Siemens AG and Siemens Healthcare GmbH belong to the same conglomerate, Siemens would appear much higher in the ranking, in third position, with 84 IP5 patents. The distribution of intellectual property assets across companies belonging to the same conglomerate is frequent and mirrors strategic as well as operational needs and features, among others. [pp. 9-11]

Surprises and comments

Interesting and helpful to learn that “neurotechnology interacts with other technological trajectories, especially Artificial Intelligence;” this has changed and improved my understanding of neurotechnology.

It was unexpected to find Canada in the top ten countries producing neuroscience papers. However, finding out that the country lags in translating its ‘neuro’ knowledge into patentable innovation is not entirely a surprise.

It can’t be an accident that countries with major ‘electronics and computing’ companies lead in patents. These companies do have researchers but they also buy startups to acquire patents. They (and ‘patent trolls’) will also file patents preemptively. For the patent trolls, it’s a moneymaking proposition and for the large companies, it’s a way of protecting their own interests and/or (I imagine) forcing a sale.

The mention of neuromorphic (brainlike) computing in the taxonomy section was surprising and puzzling. Up to this point, I’ve thought of neuromorphic computing as a kind of alternative or addition to standard computing but the authors have blurred the lines as per UNESCO’s definition of neurotechnology (specifically, “… emulate the structure and function of the neural systems of animals or human beings”). Again, this report is broadening my understanding of neurotechnology. Of course, it took two instances, the definition and the taxonomy, before I quite grasped it.

What’s puzzling is that neuromorphic engineering, a broader term that includes neuromorphic computing, isn’t used or mentioned. (For an explanation of the terms neuromorphic computing and neuromorphic engineering, there’s my June 23, 2023 posting, “Neuromorphic engineering: an overview.” )

The report

I won’t have time for everything. Here are some of the highlights from my admittedly personal perspective.

It’s not only about curing disease

From the report,

Neurotechnology’s applications however extend well beyond medicine [emphasis mine], and span from research, to education, to the workplace, and even people’s everyday life. Neurotechnology-based solutions may enhance learning and skill acquisition and boost focus through brain stimulation techniques. For instance, early research finds that brain-zapping caps appear to boost memory for at least one month (Berkeley, 2022). This could one day be used at home to enhance memory functions [emphasis mine]. They can further enable new ways to interact with the many digital devices we use in everyday life, transforming the way we work, live and interact. One example is the Sound Awareness wristband developed by a Stanford team (Neosensory, 2022) which enables individuals to “hear” by converting sound into tactile feedback, so that sound impaired individuals can perceive spoken words through their skin. Takagi and Nishimoto (2023) analyzed the brain scans taken through Magnetic Resonance Imaging (MRI) as individuals were shown thousands of images. They then trained a generative AI tool called Stable Diffusion2 on the brain scan data of the study’s participants, thus creating images that roughly corresponded to the real images shown. While this does not correspond to reading the mind of people, at least not yet, and some limitations of the study have been highlighted (Parshall, 2023), it nevertheless represents an important step towards developing the capability to interface human thoughts with computers [emphasis mine], via brain data interpretation.

While the above examples may sound somewhat like science fiction, the recent uptake of generative Artificial Intelligence applications and of large language models such as ChatGPT or Bard, demonstrates that the seemingly impossible can quickly become an everyday reality. At present, anyone can purchase online electroencephalogram (EEG) devices for a few hundred dollars [emphasis mine], to measure the electrical activity of their brain for meditation, gaming, or other purposes. [pp. 14-15]

This is a very impressive achievement. Some of the research cited was published earlier this year (2023). The extraordinary speed is a testament to the efforts by the authors and their teams. It’s also a testament to how quickly the field is moving.

I’m glad to see the mention of and focus on consumer neurotechnology. (While the authors don’t speculate, I am free to do so.) Consumer neurotechnology could be viewed as one of the steps toward normalizing a cyborg future for all of us. Yes, we have books, television programmes, movies, and video games, which all normalize the idea, but the people depicted have been severely injured and require the augmentation. With consumer neurotechnology, you have easily accessible devices being used to enhance people who aren’t injured; they just want to be ‘better’.

This phrase seemed particularly striking “… an important step towards developing the capability to interface human thoughts with computers” in light of some claims made by the Australian military in my June 13, 2023 posting “Mind-controlled robots based on graphene: an Australian research story.” (My posting has an embedded video demonstrating the Brain Robotic Interface (BRI) in action. Also, see the paragraph below the video for my ‘measured’ response.)

There’s no mention of the military in the report which seems more like a deliberate rather than inadvertent omission given the importance of military innovation where technology is concerned.

This section gives a good overview of government initiatives (in the report it’s followed by a table of the programmes),

Thanks to the promises it holds, neurotechnology has garnered significant attention from both governments and the private sector and is considered by many as an investment priority. According to the International Brain Initiative (IBI), brain research funding has become increasingly important over the past ten years, leading to a rise in large-scale state-led programs aimed at advancing brain intervention technologies (International Brain Initiative, 2021). Since 2013, initiatives such as the United States’ Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the European Union’s Human Brain Project (HBP), as well as major national initiatives in China, Japan and South Korea have been launched with significant funding support from the respective governments. The Canadian Brain Research Strategy, initially operated as a multi-stakeholder coalition on brain research, is also actively seeking funding support from the government to transform itself into a national research initiative (Canadian Brain Research Strategy, 2022). A similar proposal is also seen in the case of the Australian Brain Alliance, calling for the establishment of an Australian Brain Initiative (Australian Academy of Science, n.d.). [pp. 15-16]

Privacy

There are some concerns such as these,

Beyond the medical realm, research suggests that emotional responses of consumers related to preferences and risks can be concurrently tracked by neurotechnology, such as neuroimaging, and that neural data can better predict market-level outcomes than traditional behavioral data (Karmarkar and Yoon, 2016). As such, neural data is increasingly sought after in the consumer market for purposes such as digital phenotyping4, neurogaming5, and neuromarketing6 (UNESCO, 2021). This surge in demand gives rise to risks like hacking, unauthorized data reuse, extraction of privacy-sensitive information, digital surveillance, criminal exploitation of data, and other forms of abuse. These risks prompt the question of whether neural data needs distinct definition and safeguarding measures.

These issues are particularly relevant today as a wide range of electroencephalogram (EEG) headsets that can be used at home are now available in consumer markets for purposes that range from meditation assistance to controlling electronic devices through the mind. Imagine an individual is using one of these devices to play a neurofeedback game, which records the person’s brain waves during the game. Without the person being aware, the system can also identify the patterns associated with an undiagnosed mental health condition, such as anxiety. If the game company sells this data to third parties, e.g. health insurance providers, this may lead to an increase of insurance fees based on undisclosed information. This hypothetical situation would represent a clear violation of mental privacy and of unethical use of neural data.

Another example is in the field of advertising, where companies are increasingly interested in using neuroimaging to better understand consumers’ responses to their products or advertisements, a practice known as neuromarketing. For instance, a company might use neural data to determine which advertisements elicit the most positive emotional responses in consumers. While this can help companies improve their marketing strategies, it raises significant concerns about mental privacy. Questions arise in relation to consumers being aware or not that their neural data is being used, and in the extent to which this can lead to manipulative advertising practices that unfairly exploit unconscious preferences. Such potential abuses underscore the need for explicit consent and rigorous data protection measures in the use of neurotechnology for neuromarketing purposes. [pp. 21-22]

Legalities

Some countries already have laws and regulations regarding neurotechnology data,

At the national level, only a few countries have enacted laws and regulations to protect mental integrity or have included neuro-data in personal data protection laws (UNESCO, University of Milan-Bicocca (Italy) and State University of New York – Downstate Health Sciences University, 2023). Examples are the constitutional reform undertaken by Chile (Republic of Chile, 2021), the Charter for the responsible development of neurotechnologies of the Government of France (Government of France, 2022), and the Digital Rights Charter of the Government of Spain (Government of Spain, 2021). They propose different approaches to the regulation and protection of human rights in relation to neurotechnology. Countries such as the UK are also examining under which circumstances neural data may be considered as a special category of data under the general data protection framework (i.e. UK’s GDPR) (UK’s Information Commissioner’s Office, 2023) [p. 24]

As you can see, these are recent laws. There doesn’t seem to be any attempt here in Canada even though there is an act being reviewed in Parliament that could conceivably include neural data. This is from my May 1, 2023 posting,

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). [emphasis added July 11, 2023] You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

My focus at the time was artificial intelligence and, now, after reading this UNESCO report and briefly looking at the Innovation, Science and Economic Development (ISED) Canada summary and a detailed series of descriptions of the act on ISED’s Canada’s Digital Charter webpage, I don’t see anything that specifies neural data but it’s not excluded either.

IP5 patents

Here’s the explanation (the footnote is included at the end of the excerpt),

IP5 patents represent a subset of overall patents filed worldwide, which have the characteristic of having been filed in at least one of the top intellectual property offices (IPO) worldwide (the so-called IP5, namely the Chinese National Intellectual Property Administration, CNIPA (formerly SIPO); the European Patent Office, EPO; the Japan Patent Office, JPO; the Korean Intellectual Property Office, KIPO; and the United States Patent and Trademark Office, USPTO) as well as another country, which may or may not be an IP5. This signals their potential applicability worldwide, as their inventiveness and industrial viability have been validated by at least two leading IPOs. This gives these patents a sort of “quality” check, also since patenting inventions is costly and if applicants try to protect the same invention in several parts of the world, this normally mirrors that the applicant has expectations about their importance and expected value. If we were to conduct the same analysis using information about individually considered patents applied worldwide, i.e. without filtering for quality nor considering patent families, we would risk conducting a biased analysis based on duplicated data. Also, as patentability standards vary across countries and IPOs, and what matters for patentability is the existence (or not) of prior art in the IPO considered, we would risk mixing real innovations with patents related to catching up phenomena in countries that are not at the forefront of the technology considered.

9 The five IP offices (IP5) is a forum of the five largest intellectual property offices in the world that was set up to improve the efficiency of the examination process for patents worldwide. The IP5 Offices together handle about 80% of the world’s patent applications, and 95% of all work carried out under the Patent Cooperation Treaty (PCT), see http://www.fiveipoffices.org. (Dernis et al., 2015) [p. 31]
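The qualifying rule in the excerpt reduces to a simple filter: a patent family counts as an IP5 patent if it was filed at one of the five offices and at least one other office. A minimal sketch of that rule, with illustrative (not report-sourced) office lists:

```python
# The five largest intellectual property offices named in the report.
IP5_OFFICES = {"CNIPA", "EPO", "JPO", "KIPO", "USPTO"}

def is_ip5_patent(filing_offices):
    """A patent family qualifies if it was filed at one of the IP5 offices
    and at at least one other office (which may or may not be an IP5)."""
    offices = set(filing_offices)
    return bool(offices & IP5_OFFICES) and len(offices) >= 2

# Illustrative checks (CIPO and IPONZ stand for Canada's and New Zealand's offices).
print(is_ip5_patent(["USPTO", "EPO"]))    # two IP5 offices: qualifies
print(is_ip5_patent(["USPTO", "CIPO"]))   # one IP5 plus another office: qualifies
print(is_ip5_patent(["USPTO"]))           # single office: excluded
print(is_ip5_patent(["CIPO", "IPONZ"]))   # no IP5 filing: excluded
```

The two-office requirement is what gives the subset its “quality check” character: filing in multiple jurisdictions is costly, so it proxies for the applicant’s own valuation of the invention.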

AI assistance on this report

As noted earlier, I have next to no experience with the analytical tools, having not attempted this kind of work in several years. Here’s an example of what they were doing,

We utilize a combination of text embeddings based on Bidirectional Encoder Representations from Transformer (BERT), dimensionality reduction, and hierarchical clustering inspired by the BERTopic methodology12 to identify latent themes within research literature. Latent themes or topics in the context of topic modeling represent clusters of words that frequently appear together within a collection of documents (Blei, 2012). These groupings are not explicitly labeled but are inferred through computational analysis examining patterns in word usage. These themes are ‘hidden’ within the text, only to be revealed through this analysis. …

We further utilize OpenAI’s GPT-4 model to enrich our understanding of topics’ keywords and to generate topic labels (OpenAI, 2023), thus supplementing expert review of the broad interdisciplinary corpus. Recently, GPT-4 has shown impressive results in medical contexts across various evaluations (Nori et al., 2023), making it a useful tool to enhance the information obtained from prior analysis stages, and to complement them. The automated process enhances the evaluation workflow, effectively emphasizing neuroscience themes pertinent to potential neurotechnology patents. Notwithstanding existing concerns about hallucinations (Lee, Bubeck and Petro, 2023) and errors in generative AI models, this methodology employs the GPT-4 model for summarization and interpretation tasks, which significantly mitigates the likelihood of hallucinations. Since the model is constrained to the context provided by the keyword collections, it limits the potential for fabricating information outside of the specified boundaries, thereby enhancing the accuracy and reliability of the output. [pp. 33-34]

I couldn’t resist adding the ChatGPT paragraph given all of the recent hoopla about it.
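The quoted workflow, embed documents, cluster them, then label each cluster by its characteristic terms, can be illustrated with a deliberately simplified sketch. Here word-count vectors stand in for BERT embeddings, a greedy cosine-similarity pass stands in for dimensionality reduction plus hierarchical clustering, and the most frequent terms stand in for GPT-4-generated labels; the abstracts are invented.

```python
import math
from collections import Counter

def vec(text):
    """Word-count vector (stand-in for a BERT embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda c: math.sqrt(sum(v * v for v in c.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def cluster(docs, threshold=0.3):
    """Greedy clustering: add each doc to the first cluster whose running
    centroid is similar enough, otherwise start a new cluster."""
    clusters = []  # each: {"centroid": Counter, "docs": [str]}
    for d in docs:
        v = vec(d)
        for c in clusters:
            if cosine(v, c["centroid"]) >= threshold:
                c["centroid"] += v
                c["docs"].append(d)
                break
        else:
            clusters.append({"centroid": Counter(v), "docs": [d]})
    return clusters

def topic_label(c, n=3):
    """Most frequent cluster terms stand in for GPT-4 topic labeling."""
    return [t for t, _ in c["centroid"].most_common(n)]

# Invented abstracts for illustration only.
docs = [
    "eeg brain interface decoding brain signal",
    "brain interface eeg electrode signal decoding",
    "seizure detection epilepsy eeg machine learning",
]
for c in cluster(docs):
    print(topic_label(c), len(c["docs"]))
```

The two BCI-flavoured abstracts merge into one cluster while the epilepsy-detection one stands apart, which is the shape of result the report’s far more sophisticated pipeline produces at scale.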

Multimodal neuromodulation and neuromorphic computing patents

I think this gives a pretty good indication of the activity on the patent front,

The largest, coherent topic, termed “multimodal neuromodulation,” comprises 535 patents detailing methodologies for deep or superficial brain stimulation designed to address neurological and psychiatric ailments. These patented technologies interact with various points in neural circuits to induce either Long-Term Potentiation (LTP) or Long-Term Depression (LTD), offering treatment for conditions such as obsession, compulsion, anxiety, depression, Parkinson’s disease, and other movement disorders. The modalities encompass implanted deep-brain stimulators (DBS), Transcranial Magnetic Stimulation (TMS), and transcranial Direct Current Stimulation (tDCS). Among the most representative documents for this cluster are patents with titles: Electrical stimulation of structures within the brain or Systems and methods for enhancing or optimizing neural stimulation therapy for treating symptoms of Parkinson’s disease and or other movement disorders. [p.65]

Given my longstanding interest in memristors, which (I believe) have to a large extent helped to stimulate research into neuromorphic computing, this had to be included. Then, there was the brain-computer interfaces cluster,

A cluster identified as “Neuromorphic Computing” consists of 366 patents primarily
focused on devices designed to mimic human neural networks for efficient and adaptable computation. The principal elements of these inventions are resistive memory cells and artificial synapses. They exhibit properties similar to the neurons and synapses in biological brains, thus granting these devices the ability to learn and modulate responses based on rewards, akin to the adaptive cognitive capabilities of the human brain.

The primary technology classes associated with these patents fall under specific IPC codes, representing the fields of neural network models, analog computers, and static storage structures. Essentially, these classifications correspond to technologies that are key to the construction of computers and exhibit cognitive functions similar to human brain processes.

Examples for this cluster include neuromorphic processing devices that leverage variations in resistance to store and process information, artificial synapses exhibiting spike-timing dependent plasticity, and systems that allow event-driven learning and reward modulation within neuromorphic computers.

In relation to neurotechnology as a whole, the “neuromorphic computing” cluster holds significant importance. It embodies the fusion of neuroscience and technology, thereby laying the basis for the development of adaptive and cognitive computational systems. Understanding this specific cluster provides a valuable insight into the progressing domain of neurotechnology, promising potential advancements across diverse fields, including artificial intelligence and healthcare.

The “Brain-Computer Interfaces” cluster, consisting of 146 patents, embodies a key aspect of neurotechnology that focuses on improving the interface between the brain and external devices. The technology classification codes associated with these patents primarily refer to methods or devices for treatment or protection of eyes and ears, devices for introducing media into, or onto, the body, and electric communication techniques, which are foundational elements of brain-computer interface (BCI) technologies.

Key patents within this cluster include a brain-computer interface apparatus adaptable to use environment and method of operating thereof, a double closed circuit brain-machine interface system, and an apparatus and method of brain-computer interface for device controlling based on brain signal. These inventions mainly revolve around the concept of using brain signals to control external devices, such as robotic arms, and improving the classification performance of these interfaces, even after long periods of non-use.

The inventions described in these patents improve the accuracy of device control, maintain performance over time, and accommodate multiple commands, thus significantly enhancing the functionality of BCIs.

Other identified technologies include systems for medical image analysis, limb rehabilitation, tinnitus treatment, sleep optimization, assistive exoskeletons, and advanced imaging techniques, among others. [pp. 66-67]

Having sections on neuromorphic computing and brain-computer interface patents in immediate proximity led to more speculation on my part. Imagine how much easier it would be to initiate a BCI connection if it’s powered with a neuromorphic (brainlike) computer/device. [ETA July 21, 2023: Following on from that thought, it might be more than just easier to initiate a BCI connection. Could a brainlike computer become part of your brain? Why not? It’s been successfully argued that a robotic wheelchair was part of someone’s body; see my January 30, 2013 posting and scroll down about 40% of the way.]

Neurotech policy debates

The report concludes with this,

Neurotechnology is a complex and rapidly evolving technological paradigm whose trajectories have the power to shape people’s identity, autonomy, privacy, sentiments, behaviors and overall well-being, i.e. the very essence of what it means to be human.

Designing and implementing careful and effective norms and regulations ensuring that neurotechnology is developed and deployed in an ethical manner, for the good of individuals and for society as a whole, calls for a careful identification and characterization of the issues at stake. This entails shedding light on the whole neurotechnology ecosystem, that is, what is being developed, where and by whom, and also understanding how neurotechnology interacts with other developments and technological trajectories, especially AI. Failing to do so may result in ineffective (at best) or distorted policies and policy decisions, which may harm human rights and human dignity.

Addressing the need for evidence in support of policy making, the present report offers first-time robust data and analysis shedding light on the neurotechnology landscape worldwide. To this end, it proposes and implements an innovative approach that leverages artificial intelligence and deep learning on data from scientific publications and paten[t]s to identify scientific and technological developments in the neurotech space. The methodology proposed represents a scientific advance in itself, as it constitutes a quasi-automated replicable strategy for the detection and documentation of neurotechnology-related breakthroughs in science and innovation, to be repeated over time to account for the evolution of the sector. Leveraging this approach, the report further proposes an IPC-based taxonomy for neurotechnology which allows for a structured framework for the exploration of neurotechnology, to enable future research, development and analysis. The innovative methodology proposed is very flexible and can in fact be leveraged to investigate different emerging technologies, as they arise.

In terms of technological trajectories, we uncover a shift in the neurotechnology industry, with greater emphasis being put on computer and medical technologies in recent years, compared to traditionally dominant trajectories related to biotechnology and pharmaceuticals. This shift warrants close attention from policymakers, and calls for attention in relation to the latest (converging) developments in the field, especially AI and related methods and applications and neurotechnology.

This is all the more important as the observed growth and specialization patterns are unfolding in the context of regulatory environments that, generally, are either non-existent or not fit for purpose. Given the sheer implications and impact of neurotechnology on the very essence of human beings, this lack of regulation poses key challenges related to the possible infringement of mental integrity, human dignity, personal identity, privacy, freedom of thought, and autonomy, among others. Furthermore, issues surrounding accessibility and the potential for neurotech enhancement applications trigger significant concerns, with far-reaching implications for individuals and societies. [pp. 72-73]
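Out of curiosity, the IPC-based taxonomy the report proposes can be imagined, at its crudest, as a tally of patents by IPC subclass. Here’s a minimal Python sketch using made-up patent records and codes; the report’s actual method applies deep learning to full publication and patent texts, so this only illustrates the classification-tallying idea,

```python
from collections import Counter

# Hypothetical patent records: (title, list of IPC codes assigned to it).
# Titles echo the report's examples; the codes are illustrative only.
patents = [
    ("Electrical stimulation of structures within the brain", ["A61N1/36"]),
    ("Neuromorphic processing device", ["G06N3/063", "G11C11/54"]),
    ("Artificial synapse with spike-timing dependent plasticity", ["G06N3/063"]),
    ("Brain-computer interface apparatus", ["A61B5/24", "G06F3/01"]),
]

# Tally patents by IPC subclass (the first four characters of a code),
# which gives one coarse level of an IPC-based taxonomy.
subclass_counts = Counter(code[:4] for _, codes in patents for code in codes)

for subclass, count in subclass_counts.most_common():
    print(subclass, count)
```

Repeating a tally like this over time, as the report suggests, is what would let an analyst track shifts between, say, medical-device subclasses and computing subclasses.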

Last words about the report

Informative, readable, and thought-provoking. And, it helped broaden my understanding of neurotechnology.

Future endeavours?

I’m hopeful that one of these days one of these groups (UNESCO, Canadian Science Policy Centre, or ???) will tackle the issue of business bankruptcy in the neurotechnology sector. It has already occurred, as noted in my “Going blind when your neural implant company flirts with bankruptcy [long read]” April 5, 2022 posting. That story opens with a woman going blind in a New York subway when her neural implant fails. It’s how she found out the company that supplied her implant was going out of business.

In my July 7, 2023 posting about the UNESCO July 2023 dialogue on neurotechnology, I’ve included information on Neuralink (one of Elon Musk’s companies) and its approval (despite some investigations) by the US Food and Drug Administration to start human clinical trials. Scroll down about 75% of the way to the “Food for thought” subhead where you will find stories about allegations made against Neuralink.

The end

If you want to know more about the field, the report offers a seven-page bibliography, and there’s a lot of material here; you could start with my December 3, 2019 posting “Neural and technological inequalities,” which features an article mentioning a discussion between two scientists. Surprisingly (to me), the source article is in Fast Company (“a leading progressive business media brand,” according to their tagline).

I have two categories you may want to check: Human Enhancement and Neuromorphic Engineering. There are also a number of tags: neuromorphic computing, machine/flesh, brainlike computing, cyborgs, neural implants, neuroprosthetics, memristors, and more.

Should you have any observations or corrections, please feel free to leave them in the Comments section of this posting.

Research on how the body will react to nanomedicines is inconsistent

This is a good general introductory video to nano gold but I have two caveats. It’s very ‘hypey’ (as in hyperbolic) and, as of 2023, it’s eight years old. The information still looks pretty good (after all it was produced by Nature magazine) but should you be watching this five years from now, the situation may have changed. (h/t January 5, 2023 news item on Nanowerk)

The video, which includes information about how nano gold can be used to deliver nanomedicines, is embedded in Morteza Mahmoudi’s (Assistant Professor of Radiology, Michigan State University) January 5, 2023 essay on The Conversation about a lack of research on how the body reacts to nanomedicines, Note: Links have been removed,

Nanomedicines took the spotlight during the COVID-19 pandemic. Researchers are using these very small and intricate materials to develop diagnostic tests and treatments. Nanomedicine is already used for various diseases, such as the COVID-19 vaccines and therapies for cardiovascular disease. The “nano” refers to the use of particles that are only a few hundred nanometers in size, which is significantly smaller than the width of a human hair.

Although researchers have developed several methods to improve the reliability of nanotechnologies, the field still faces one major roadblock: a lack of a standardized way to analyze biological identity, or how the body will react to nanomedicines. This is essential information in evaluating how effective and safe new treatments are.

I’m a researcher studying overlooked factors in nanomedicine development. In our recently published research, my colleagues and I found that analyses of biological identity are highly inconsistent across proteomics facilities that specialize in studying proteins.

Nanoparticles (white disks) can be used to deliver treatment to cells (blue). (Image: Brenda Melendez and Rita Serda, National Cancer Institute, National Institutes of Health, CC BY-NC) [downloaded from https://www.nanowerk.com/nanotechnology-news2/newsid=62097.php]

Mahmoudi’s January 5, 2023 essay describes testing a group of laboratories’ analyses of samples he and his team submitted to them (Note: Links have been removed),

We wanted to test how consistently these proteomics facilities analyzed protein corona samples. To do this, my colleagues and I sent biologically identical protein coronas to 17 different labs in the U.S. for analysis.

We had striking results: Less than 2% of the proteins the labs identified were the same.

Our results reveal an extreme lack of consistency in the analyses researchers use to understand how nanomedicines work in the body. This may pose a significant challenge not only to ensuring the accuracy of diagnostics, but also the effectiveness and safety of treatments based on nanomedicines.

… my team and I have identified several critical but often overlooked factors that can influence the performance of a nanomedicine, such as a person’s sex, prior medical conditions and disease type. …

Mahmoudi is pointing out that it’s very early days for nanomedicines and there’s a lot of work still to be done.
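For those wondering what “less than 2% of the proteins the labs identified were the same” measures, it’s essentially a set overlap across labs. A small Python sketch with made-up protein lists (the real study sent identical protein corona samples to 17 labs),

```python
# Hypothetical protein identifications from three labs analyzing the same
# protein corona sample; names are illustrative, not from the study.
lab_results = {
    "lab_A": {"albumin", "fibrinogen", "apoA1", "transferrin"},
    "lab_B": {"albumin", "apoA1", "complement_C3", "haptoglobin"},
    "lab_C": {"albumin", "apoA1", "IgG_heavy_chain", "fibrinogen"},
}

# Proteins reported by every lab, versus all proteins reported by any lab.
common = set.intersection(*lab_results.values())
union = set.union(*lab_results.values())

overlap_fraction = len(common) / len(union)
print(sorted(common))              # ['albumin', 'apoA1']
print(round(overlap_fraction, 2))  # 0.29
```

In this toy example the labs agree on roughly 29% of identifications; the study’s finding was that, across real facilities, the comparable figure fell below 2%.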

Here’s a link to and a citation for the paper Mahmoudi and his team had published on this topic,

Measurements of heterogeneity in proteomics analysis of the nanoparticle protein corona across core facilities by Ali Akbar Ashkarran, Hassan Gharibi, Elizabeth Voke, Markita P. Landry, Amir Ata Saei & Morteza Mahmoudi. Nature Communications volume 13, Article number: 6610 (2022) DOI: https://doi.org/10.1038/s41467-022-34438-8 Published 03 November 2022

This paper is open access.

Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27)

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

Months after the first reading in June 2022, Bill C-27 was mentioned here in a September 15, 2022 posting about a Canadian Science Policy Centre (CSPC) event featuring a panel discussion about the proposed legislation, artificial intelligence in particular. I dug down and found commentaries and additional information about the proposed bill with special attention to AIDA.

It seems discussion has been reactivated since the second reading was completed on April 24, 2023 and the bill was referred to committee for further discussion. (A report and third reading are still to come in the House of Commons and then there are three readings in the Senate before this legislation can be passed.)

Christian Paas-Lang has written an April 24, 2023 article for CBC (Canadian Broadcasting Corporation) news online that highlights concerns centred on AI from three cross-party Members of Parliament (MPs),

Once the domain of a relatively select group of tech workers, academics and science fiction enthusiasts, the debate over the future of artificial intelligence has been thrust into the mainstream. And a group of cross-party MPs say Canada isn’t yet ready to take on the challenge.

The popularization of AI as a subject of concern has been accelerated by the introduction of ChatGPT, an AI chatbot produced by OpenAI that is capable of generating a broad array of text, code and other content. ChatGPT relies on content published on the internet as well as training from its users to improve its responses.

ChatGPT has prompted such a fervour, said Katrina Ingram, founder of the group Ethically Aligned AI, because of its novelty and effectiveness. 

“I would argue that we’ve had AI enabled infrastructure or technologies around for quite a while now, but we haven’t really necessarily been confronted with them, you know, face to face,” she told CBC Radio’s The House [radio segment embedded in article] in an interview that aired Saturday [April 22, 2023].

Ingram said the technology has prompted a series of concerns: about the livelihoods of professionals like artists and writers, about privacy, data collection and surveillance and about whether chatbots like ChatGPT can be used as tools for disinformation.

With the popularization of AI as an issue has come a similar increase in concern about regulation, and Ingram says governments must act now.

“We are contending with these technologies right now. So it’s really imperative that governments are able to pick up the pace,” she told host Catherine Cullen.

That sentiment — the need for speed — is one shared by three MPs from across party lines who are watching the development of the AI issue. Conservative MP Michelle Rempel Garner, NDP MP Brian Masse and Nathaniel Erskine-Smith of the Liberals also joined The House for an interview that aired Saturday.

“This is huge. This is the new oil,” said Masse, the NDP’s industry critic, referring to how oil had fundamentally shifted economic and geopolitical relationships, leading to a great deal of good but also disasters — and AI could do the same.

Issues of both speed and substance

The three MPs are closely watching Bill C-27, a piece of legislation currently being debated in the House of Commons that includes Canada’s first federal regulations on AI.

But each MP expressed concern that the bill may not be ready in time and changes would be needed [emphasis mine].

“This legislation was tabled in June of last year [2022], six months before ChatGPT was released and it’s like it’s obsolete. It’s like putting in place a framework to regulate scribes four months after the printing press came out,” Rempel Garner said. She added that it was wrongheaded to move the discussion of AI away from Parliament and segment it off to a regulatory body.

Am I the only person who sees a problem with “the bill may not be ready in time and changes would be needed”? I don’t understand the rush (or how these people get elected). The point of a bill is to examine the ideas and make changes to it before it becomes legislation. Given how fluid the situation appears to be, a strong argument can be made for the current process, which is three readings in the House of Commons, along with a committee report, and three readings in the Senate before a bill, if successful, is passed into legislation.

Of course, the fluidity of the situation could also be an argument for starting over as Michael Geist’s (Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa and member of the Centre for Law, Technology and Society) April 19, 2023 post on his eponymous blog suggests, Note: Links have been removed,

As anyone who has tried ChatGPT will know, at the bottom of each response is an option to ask the AI system to “regenerate response”. Despite increasing pressure on the government to move ahead with Bill C-27’s Artificial Intelligence and Data Act (AIDA), the right response would be to hit the regenerate button and start over. AIDA may be well-meaning and the issue of AI regulation critically important, but the bill is limited in principles and severely lacking in detail, leaving virtually all of the heavy lifting to a regulation-making process that will take years to unfold. While no one should doubt the importance of AI regulation, Canadians deserve better than virtue signalling on the issue with a bill that never received a full public consultation.

What prompts this post is a public letter based out of MILA that calls on the government to urgently move ahead with the bill signed by some of Canada’s leading AI experts. The letter states: …

When the signatories to the letter suggest that there is prospect of moving AIDA forward before the summer, it feels like a ChatGPT error. There are a maximum of 43 days left on the House of Commons calendar until the summer. In all likelihood, it will be less than that. Bill C-27 is really three bills in one: major privacy reform, the creation of a new privacy tribunal, and AI regulation. I’ve watched the progress of enough bills to know that this just isn’t enough time to conduct extensive hearings on the bill, conduct a full clause-by-clause review, debate and vote in the House, and then conduct another review in the Senate. At best, Bill C-27 could make some headway at committee, but getting it passed with a proper review is unrealistic.

Moreover, I am deeply concerned about a Parliamentary process that could lump together these three bills in an expedited process. …

For anyone unfamiliar with MILA, it is also known as Quebec’s Artificial Intelligence Institute. (They seem to have replaced institute with ecosystem since the last time I checked.) You can see the document and list of signatories here.

Geist has a number of posts and podcasts focused on the bill and the easiest way to find them is to use the search term ‘Bill C-27’.

Maggie Arai at the University of Toronto’s Schwartz Reisman Institute for Technology and Society provides a brief overview titled, Five things to know about Bill C-27, in her April 18, 2022 commentary,

On June 16, 2022, the Canadian federal government introduced Bill C-27, the Digital Charter Implementation Act 2022, in the House of Commons. Bill C-27 is not entirely new, following in the footsteps of Bill C-11 (the Digital Charter Implementation Act 2020). Bill C-11 failed to pass, dying on the Order Paper when the Governor General dissolved Parliament to hold the 2021 federal election. While some aspects of C-27 will likely be familiar to those who followed the progress of Bill C-11, there are several key differences.

After noting the differences, Arai had this to say, from her April 18, 2022 commentary,

The tabling of Bill C-27 represents an exciting step forward for Canada as it attempts to forge a path towards regulating AI that will promote innovation of this advanced technology, while simultaneously offering consumers assurance and protection from the unique risks this new technology poses. This second attempt towards the CPPA and PIDPTA is similarly positive, and addresses the need for updated and increased consumer protection, privacy, and data legislation.

However, as the saying goes, the devil is in the details. As we have outlined, several aspects of how Bill C-27 will be implemented are yet to be defined, and how the legislation will interact with existing social, economic, and legal dynamics also remains to be seen.

There are also sections of C-27 that could be improved, including areas where policymakers could benefit from the insights of researchers with domain expertise in areas such as data privacy, trusted computing, platform governance, and the social impacts of new technologies. In the coming weeks, the Schwartz Reisman Institute will present additional commentaries from our community that explore the implications of C-27 for Canadians when it comes to privacy, protection against harms, and technological governance.

Bryan Short’s September 14, 2022 posting (The Absolute Bare Minimum: Privacy and the New Bill C-27) on the Open Media website critiques two of the three bills included in Bill C-27, Note: Links have been removed,

The Canadian government has taken the first step towards creating new privacy rights for people in Canada. After a failed attempt in 2020 and three years of inaction since the proposal of the digital charter, the government has tabled another piece of legislation aimed at giving people in Canada the privacy rights they deserve.

In this post, we’ll explore how Bill C-27 compares to Canada’s current privacy legislation, how it stacks up against our international peers, and what it means for you. This post considers two of the three acts being proposed in Bill C-27, the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Tribunal Act (PIDTA), and doesn’t discuss the Artificial Intelligence and Data Act [emphasis mine]. The latter Act’s engagement with very new and complex issues means we think it deserves its own consideration separate from existing privacy proposals, and will handle it as such.

If we were to give Bill C-27’s CPPA and PIDTA a grade, it’d be a D. This is legislation that does the absolute bare minimum for privacy protections in Canada, and in some cases it will make things actually worse. If they were proposed and passed a decade ago, we might have rated it higher. However, looking ahead at predictable movement in data practices over the next ten – or even twenty – years, these laws will be out of date the moment they are passed, and leave people in Canada vulnerable to a wide range of predatory data practices. For detailed analysis, read on – but if you’re ready to raise your voice, go check out our action calling for positive change before C-27 passes!

Taking this all into account, Bill C-27 isn’t yet the step forward for privacy in Canada that we need. While it’s an improvement upon the last privacy bill that the government put forward, it misses so many areas that are critical for improvement, like failing to put people in Canada above the commercial interests of companies.

If Open Media has followed up with an AIDA critique, I have not been able to find it on their website.

A final SNUR (from the US Environmental Protection Agency) for MWCNTs (multiwalled carbon nanotubes)

SNUR means ‘significant new use rules’ and it’s been a long while since I’ve stumbled across any rulings from the US Environmental Protection Agency (EPA) that concern nanomaterials. From a September 30, 2022 news item on Nanotechnology News by Lynn L. Bergeson,

On September 29, 2022, the U.S. Environmental Protection Agency (EPA) issued final significant new use rules (SNUR) under the Toxic Substances Control Act (TSCA) for certain chemical substances that were the subject of premanufacture notices (PMN), including multi-walled carbon nanotubes (MWCNT) (generic). 87 Fed. Reg. 58999. See https://www.federalregister.gov/documents/2022/09/29/2022-21042/significant-new-use-rules-on-certain-chemical-substances-21-25e The SNUR requires persons who intend to manufacture (defined by statute to include import) or process the chemical substance identified generically as MWCNTs (PMN P-20-72) for an activity that is designated as a significant new use to notify EPA at least 90 days before commencing that activity. Persons may not commence manufacture or processing for the significant new use until EPA has conducted a review of the notice, made an appropriate determination on the notice, and taken such actions as are required by that determination. The SNUR will be effective on November 28, 2022.

Hazard communication: Requirements as specified in 40 C.F.R. Section 721.72(a) through (d), (f), (g)(1), (g)(3), and (g)(5). For purposes of Section 721.72(g)(1), this substance may cause: eye irritation; respiratory sensitization; skin sensitization; carcinogenicity; and specific target organ toxicity. For purposes of Section 721.72(g)(3), this substance may cause unknown aquatic toxicity. Alternative hazard and warning statements that meet the criteria of the Globally Harmonized System of Classification and Labeling of Chemicals (GHS) and Occupational Safety and Health Administration (OSHA) Hazard Communication Standard (HCS) may be used.

The September 30, 2022 news item lists more significant new uses.
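The 90-day notice requirement works out to simple date arithmetic (keeping in mind that manufacture still can’t begin until EPA finishes its review). A sketch with a hypothetical notification date,

```python
from datetime import date, timedelta

# Under the SNUR, a manufacturer must notify EPA at least 90 days before
# commencing a significant new use; the date below is purely hypothetical.
notice_date = date(2022, 11, 28)

# Earliest possible commencement, assuming EPA's review concludes in time.
earliest_commencement = notice_date + timedelta(days=90)
print(earliest_commencement)  # 2023-02-26
```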

US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs)

If you’ve been longing for an opportunity to discover more and to engage in discussion about brain-machine interfaces (BMIs) and their legal, technical, and ethical issues, an opportunity is just a day away. From a September 20, 2022 (US) National Academies of Sciences, Engineering, and Medicine (NAS/NASEM or National Academies) notice (received via email),

Sept. 22-23 [2022] Workshop Explores Technical, Legal, Ethical Issues Raised by Brain-Machine Interfaces [official title: Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop]

Technological developments and advances in understanding of the human brain have led to the development of new Brain-Machine Interface technologies. These include technologies that “read” the brain to record brain activity and decode its meaning, and those that “write” to the brain to manipulate activity in specific brain regions. Right now, most of these interface technologies are medical devices placed inside the brain or other parts of the nervous system – for example, devices that use deep brain stimulation to modulate the tremors of Parkinson’s disease.

But tech companies are developing mass-market wearable devices that focus on understanding emotional states or intended movements, such as devices used to detect fatigue, boost alertness, or enable thoughts to control gaming and other digital-mechanical systems. Such applications raise ethical and legal issues, including risks that thoughts or mood might be accessed or manipulated by companies, governments, or others; risks to privacy; and risks related to a widening of social inequalities.

A virtual workshop [emphasis mine] hosted by the National Academies of Sciences, Engineering, and Medicine on Sept. 22-23 [2022] will explore the present and future of these technologies and the ethical, legal, and regulatory issues they raise.

The workshop will run from 12:15 p.m. to 4:25 p.m. ET on Sept. 22 and from noon to 4:30 p.m. ET on Sept. 23. View agenda and register.

For those who might want a peek at the agenda before downloading it, I have listed the titles for the sessions (from my downloaded agenda; Note: I’ve reformatted the information; there are no breaks, discussion periods, or Q&As included),

Sept. 22, 2022 Draft Agenda

12:30 pm ET Brain-Machine and Related Neural Interface Technologies: The State and Limitations of the Technology

2:30 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Movement

Sept. 23, 2022 Draft Agenda

12:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Mood and Affect

2:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Thought, Communication, and Memory

4:00 pm ET Concluding Thoughts from Workshop Planning Committee

Regarding terminology, there’s brain-machine interface (BMI), which I think is a more generic term that includes: brain-computer interface (BCI), neural interface and/or neural implant. There are other terms as well, including this one in the title of my September 17, 2020 posting, “Turning brain-controlled wireless electronic prostheses [emphasis mine] into reality plus some ethical points.” I have a more recent April 5, 2022 posting, which is a very deep dive, “Going blind when your neural implant company flirts with bankruptcy (long read).” As you can see, various social issues associated with these devices have been of interest to me.

I’m not sure quite what to make of the session titles. There doesn’t seem to be all that much emphasis on ethical and legal issues but perhaps that’s the role the various speakers will play.