This is going to be a jam-packed posting with the AI experts at the Canadian Science Policy Centre (CSPC) virtual panel, a look back at a ‘testy’ exchange between Yoshua Bengio (one of Canada’s godfathers of AI) and a former diplomat from China, an update on Canada’s Minister of Artificial Intelligence and Digital Innovation, Evan Solomon and his latest AI push, and a missive from the BC artificial intelligence community.
A Canadian Science Policy Centre AI panel on November 11, 2025
The Canadian Science Policy Centre (CSPC) provides an October 9, 2025 update on an upcoming virtual panel being held on Remembrance Day,
[AI-Driven Misinformation Across Sectors Addressing a Cross-Societal Challenge]
Upcoming Virtual Panel[s]: November 11 [2025]
Artificial Intelligence is transforming how information is created and trusted, offering immense benefits across sectors like healthcare, education, finance, and public discourse—yet also amplifying risks such as misinformation, deepfakes, and scams that threaten public trust. This panel brings together experts from diverse fields [emphasis mine] to examine the manifestations and impacts of AI-driven misinformation and to discuss policy, regulatory, and technical solutions [emphasis mine]. The conversation will highlight practical measures—from digital literacy and content verification to platform accountability—aimed at strengthening resilience in Canada and globally.
For more information on the panel and to register, click below.
Odd timing for this event. Moving on, I found more information on the CSPC’s webpage, Note: Unfortunately, links to the moderator’s and speakers’ bios could not be copied here,
Canadian Science Policy Centre Email info@sciencepolicy.ca
…
This panel brings together cross-sectoral experts to examine how AI-driven misinformation manifests in their respective domains, its consequences, and how policy, regulation, and technical interventions can help mitigate harm. The discussion will explore practical pathways for action, such as digital literacy, risk audits, content verification technologies, platform responsibility, and regulatory frameworks. Attendees will leave with a nuanced understanding of both the risks and the resilience strategies being explored in Canada and globally.
[Moderator]
Dr. Michael Geist
Canada Research Chair in Internet & E-commerce Law, University of Ottawa See Bio
[Panelists]
Dr. Plinio Morita
Associate Professor / Director, Ubiquitous Health Technology Lab, University of Waterloo …
Dr. Nadia Naffi
Université Laval — Associate Professor of Educational Technology and expert on building human agency against AI-augmented disinformation and deepfakes. See Bio
Dr. Jutta Treviranus
Director, Inclusive Design Research Centre, OCAD U — Expert on AI misinformation in the education sector and schools. See Bio
Dr. Fenwick McKelvey
Concordia University — Expert in political bots, information flows, and Canadian tech governance See Bio
Michael Geist has his own blog/website featuring posts on his areas of interest and his podcast, Law Bytes. Jutta Treviranus is mentioned in my October 13, 2025 posting as a participant in “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference (October 23 – 24, 2025) and arts festival at the University of Toronto (scroll down to find it). She’s scheduled for a session on Thursday, October 23, 2025.
China, Canada, and the AI Action summit in February 2025
Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,
A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.
Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.
The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].
The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.
Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.
She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.
China now has its own equivalent; but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.
The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.
Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]
A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.
The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.
…
She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.
She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.
“The Chinese move faster [than the west] but it’s full of problems,” she said.
Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.
Most of the US tech giants do not share the tech which drives their products.
Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.
But Prof Bengio disagreed.
His view was that open source also left the tech wide open for criminals to misuse.
He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.
…
Interesting, non? You can read more about Bengio’s views in an October 1, 2025 article by Rae Witte for Futurism.
In a Policy Forum, Yue Zhu and colleagues provide an overview of China’s emerging regulation for artificial intelligence (AI) technologies and its potential contributions to global AI governance. Open-source AI systems from China are rapidly expanding worldwide, even as the country’s regulatory framework remains in flux. In general, AI governance suffers from fragmented approaches, a lack of clarity, and difficulty reconciling innovation with risk management, making global coordination especially hard in the face of rising controversy. Although no official AI law has yet been enacted, experts in China have drafted two influential proposals – the Model AI Law and the AI Law (Scholar’s Proposal) – which serve as key references for ongoing policy discussions. As the nation’s lawmakers prepare to draft a consolidated AI law, Zhu et al. note that the decisions will shape not only China’s innovation, but also global collaboration on AI safety, openness, and risk mitigation. Here, the authors discuss China’s emerging AI regulation as structured around 6 pillars, which, combined, stress exemptive laws, efficient adjudication, and experimentalist requirements, while safeguarding against extreme risks. This framework seeks to balance responsible oversight with pragmatic openness, allowing developers to innovate for the long term and collaborate across the global research community. According to Zhu et al., despite the need for greater clarity, harmonization, and simplification, China’s evolving model is poised to shape future legislation and contribute meaningfully to global AI governance by promoting both safety and innovation at a time when international cooperation on extreme risks is urgently needed.
Here’s a link to and a citation for the paper,
China’s emerging regulation toward an open future for AI by Yue Zhu, Bo He, Hongyu Fu, Naying Hu, Shaoqing Wu, Taolue Zhang, Xinyi Liu, Gang Xu, Linghan Zhang, and Hui Zhou. Science, 9 Oct 2025, Vol 390, Issue 6769, pp. 132-135. DOI: 10.1126/science.ady7922
This paper is behind a paywall.
No mention of Fu Ying or China’s ‘The AI Development and Safety Network’ but perhaps that’s in the paper.
Canada and its Minister of AI and Digital Innovation
Evan Solomon (born April 20, 1968) is a Canadian politician and broadcaster who has been the minister of artificial intelligence and digital innovation since May 2025. A member of the Liberal Party, Solomon was elected as the member of Parliament (MP) for Toronto Centre in the April 2025 election.
He was the host of The Evan Solomon Show on Toronto-area talk radio station CFRB, and a writer for Maclean’s magazine. He was the host of CTV’s national political news programs Power Play and Question Period. In October 2022, he moved to New York City to accept a position with the Eurasia Group as publisher of GZERO Media. Solomon continued with CTV News as a “special correspondent” reporting on Canadian politics and global affairs.
…
Had you asked me what background one needs to be a ‘Minister of Artificial Intelligence and Digital Innovation’, media would not have been my first thought. That said, sometimes people can surprise you.
Solomon appears to be an enthusiast if a June 10, 2025 article by Anja Karadeglija for The Canadian Press is to be believed,
Canada’s new minister of artificial intelligence said Tuesday [June 10, 2025] he’ll put less emphasis on AI regulation and more on finding ways to harness the technology’s economic benefits [emphases mine].
In his first speech since becoming Canada’s first-ever AI minister, Evan Solomon said Canada will move away from “over-indexing on warnings and regulation” to make sure the economy benefits from AI.
His regulatory focus will be on data protection and privacy, he told the audience at an event in Ottawa Tuesday morning organized by the think tank Canada 2020.
Solomon said regulation isn’t about finding “a saddle to throw on the bucking bronco called AI innovation. That’s hard. But it is to make sure that the horse doesn’t kick people in the face. And we need to protect people’s data and their privacy.”
The previous government introduced a privacy and AI regulation bill that targeted high-impact AI systems. It did not become law before the election was called.
That bill is “not gone, but we have to re-examine in this new environment where we’re going to be on that,” Solomon said.
He said constraints on AI have not worked at the international level.
“It’s really hard. There’s lots of leakages,” he said. “The United States and China have no desire to buy into any constraint or regulation.”
That doesn’t mean regulation won’t exist, he said, but it will have to be assembled in steps.
…
Solomon’s comments follow a global shift among governments to focus on AI adoption and away from AI safety and governance.
The first global summit focusing on AI safety was held in 2023 as experts warned of the technology’s dangers — including the risk that it could pose an existential threat to humanity. At a global meeting in Korea last year, countries agreed to launch a network of publicly backed safety institutes.
But the mood had shifted by the time this year’s AI Action Summit began in Paris. …
…
Solomon outlined several priorities for his ministry — scaling up Canada’s AI industry, driving adoption and ensuring Canadians have trust in and sovereignty over the technology.
He said that includes supporting Canadian AI companies like Cohere, which “means using government as essentially an industrial policy to champion our champions.”
The federal government is putting together a task force to guide its next steps on artificial intelligence, and Artificial Intelligence Minister Evan Solomon is promising an update to the government’s AI strategy.
Solomon told the All In artificial intelligence conference in Montreal on Wednesday [September 24, 2025] that the “refreshed” strategy will be tabled later this year, “almost two years ahead of schedule.”
…
“We need to update and move quickly,” he said in a keynote speech at the start of the conference.
The task force will include about 20 representatives from industry, academia and civil society. The government says it won’t reveal the membership until later this week.
Solomon said task force members are being asked to consult with their networks, suggest “bold, practical” ideas and report back to him in November [2025].
The group will look at various topics related to AI, including research, adoption, commercialization, investment, infrastructure, skills, and safety and security. The government is also planning to solicit input from the public. [emphasis mine]
Canada was the first country to launch a national AI strategy [the Pan-Canadian AI Strategy announced in 2017], which the government updated in 2022. The strategy focuses on commercialization, the development and adoption of AI standards, talent and research.
Solomon also teased a “major quantum initiative” coming in October [2025?] to ensure both quantum computing talent and intellectual property stay in the country.
Solomon called digital sovereignty “the most pressing policy and democratic issue of our time” and stressed the importance of Canada having its own “digital economy that someone else can’t decide to turn off.”
Solomon said the federal government’s recent focus on major projects extends to artificial intelligence. He compared current conversations on Canada’s AI framework to the way earlier generations spoke about a national railroad or highway.
…
He said his government will address concerns about AI by focusing on privacy reform and modernizing Canada’s 25-year-old privacy law.
“We’re going to include protections for consumers who are concerned about things like deep fakes and protection for children, because that’s a big, big issue. And we’re going to set clear standards for the use of data so innovators have clarity to unlock investment,” Solomon said.
…
The government is consulting with the public? Experience suggests that by the time the public is consulted, all the major decisions will have been made; the public consultation comments will be mined so officials can make some minor, unimportant tweaks.
Canada’s AI Task Force and parts of the Empire Club talk are revealed in a September 26, 2025 article by Alex Riehl for BetaKit,
Inovia Capital partner Patrick Pichette, Cohere chief artificial intelligence (AI) officer Joelle Pineau, and Build Canada founder Dan Debow are among 26 members of AI minister Evan Solomon’s AI Strategy Task Force trusted to help the federal government renew its AI strategy.
Solomon revealed the roster, filled with leading Canadian researchers and business figures, while speaking at the Empire Club in Toronto on Friday morning [September 26, 2025]. He teased its formation at the ALL IN conference earlier this week [September 24, 2025], saying the team would include “innovative thinkers from across the country.”
The group will have 30 days to add to a collective consultation process in areas including research, talent, commercialization, safety, education, infrastructure, and security.
…
The full AI Strategy Task Force is listed below; each member will consult their network on specific themes.
Research and Talent
Gail Murphy, professor of computer science and vice-president – research and innovation, University of British Columbia and vice-chair at the Digital Research Alliance of Canada
Diane Gutiw, VP – global AI research lead, CGI Canada and co-chair of the Advisory Council on AI
Michael Bowling, professor of computer science and principal investigator – Reinforcement Learning and Artificial Intelligence Lab, University of Alberta and research fellow, Alberta Machine Intelligence Institute and Canada CIFAR AI chair
Arvind Gupta, professor of computer science, University of Toronto
Adoption across industry and governments
Olivier Blais, co-founder and VP of AI, Moov and co-chair of the Advisory Council on AI
Cari Covent, technology executive
Dan Debow, chair of the board, Build Canada
Commercialization of AI
Louis Têtu, executive chairman, Coveo
Michael Serbinis, founder and CEO, League and board chair of the Perimeter Institute
Adam Keating, CEO and Founder, CoLab
Scaling our champions and attracting investment
Patrick Pichette, general partner, Inovia Capital
Ajay Agrawal, professor of strategic management, University of Toronto, founder, Next Canada and founder, Creative Destruction Lab
Sonia Sennik, CEO, Creative Destruction Lab
Ben Bergen, president, Council of Canadian Innovators
Building safe AI systems and public trust in AI
Mary Wells, dean of engineering, University of Waterloo
Joelle Pineau, chief AI officer, Cohere
Taylor Owen, founding director, Center [sic] for Media, Technology and Democracy [McGill University]
Education and Skills
Natiea Vinson, CEO, First Nations Technology Council
Alex Laplante, VP – cash management technology Canada, Royal Bank of Canada and board member at Mitacs
David Naylor, professor of medicine – University of Toronto
Infrastructure
Garth Gibson, chief technology and AI officer, VDURA
Ian Rae, president and CEO, Aptum
Marc Etienne Ouimette, chair of the board, Digital Moment and member, OECD One AI Group of Experts, affiliate researcher, sovereign AI, Cambridge University Bennett School of Public Policy
Security
Shelly Bruce, distinguished fellow, Centre for International Governance Innovation
James Neufeld, founder and CEO, Samdesk
Sam Ramadori, co-president and executive director, LawZero
With files from Josh Scott
If you have the time, Riehl’s September 26, 2025 article offers more depth than may be apparent in the excerpts I’ve chosen.
It’s been a while since I’ve seen Arvind Gupta’s name. I’m glad to see he’s part of this Task Force (Research and Talent). The man was treated quite shamefully at the University of British Columbia. (For the curious, this August 18, 2015 article by Ken MacQueen for Maclean’s Magazine presents a somewhat sanitized [in my opinion] review of the situation.)
One final comment: the experts on the virtual panel and the members of Solomon’s Task Force are largely from Ontario and Québec. Representation from other parts of the country is minimal.
British Columbia wants entry into the national AI discussion
Just after I finished writing up this post, I received Kris Krug’s (techartist, quasi-sage, cyberpunk anti-hero from the future) October 14, 2025 communication (received via email) regarding an initiative from the BC + AI community,
Growth vs Guardrails: BC’s Framework for Steering AI
Our open letter to Minister Solomon shares what we’ve learned building community-led AI governance and how BC can help.
Ottawa created a Minister of Artificial Intelligence and just launched a national task force to shape the country’s next AI strategy. The conversation is happening right now about who gets compute, who sets the rules, and whose future this technology will serve.
Our new feature, Growth vs Guardrails [see link to letter below for ‘guardrails’], is already making the rounds in those rooms. The message is simple: if Ottawa’s foot is on the gas, BC is the steering wheel and the brakes. We can model a clean, ethical, community-led path that keeps power with people and place.
This is the time to show up together. Not as scattered voices, but as a connected movement with purpose, vision, and political gravity.
Over the past few months, almost 100 of us have joined the new BC + AI Ecosystem Association non-profit as Founding Members. Builders. Artists. Researchers. Investors. Educators. Policymakers. People who believe that tech should serve communities, not the other way around.
Now we’re opening the door wider. Join and you’ll be part of the core group that built this from the ground up. Your membership is a declaration that British Columbia deserves to shape its own AI future with ethics, creativity, and care.
If you’ve been watching from the sidelines, this is the time to lean in. We don’t do panels. We do portals. And this is the biggest one we’ve opened yet.
See you inside,
Kris Krüg
Executive Director, BC + AI Ecosystem Association
kk@bc-ai.ca | bc-ai.ca
Canada just spun up a 30-day sprint to shape its next AI strategy. Minister Evan Solomon assembled 26 experts (mostly industry and academia) to advise on research, adoption, commercialization, safety, skills, and infrastructure.
On paper, it’s a pivot moment. In practice, it’s already drawing fire. Too much weight on scaling, not enough on governance. Too many boardrooms, not enough frontlines. Too much Ottawa, not enough ground truth.
…
This is Canada’s chance to reset the DNA of its AI ecosystem.
But only if we choose regeneration over extraction, sovereign data governance over corporate capture, and community benefit over narrow interests.
…
The Problem With The Task Force
Research says: The group’s stacked with expertise. But critics flag the imbalance. Where’s healthcare? Where’s civil society beyond token representation? Where are the people who’ll feel AI’s impact first: frontline workers, artists, community organizers?
…
The worry: Commercialization and scaling overshadow public trust, governance, and equitable outcomes. Again.
The numbers back this up: Only 24% of Canadians have AI training. Just 38% feel confident in their knowledge. Nearly two-thirds see potential harm. 71% would trust AI more under public regulation.
We’re building a national strategy on a foundation of low literacy and eroding trust. That’s not a recipe for sovereignty. That’s a recipe for capture.
Principles for a National AI Strategy: What BC + AI Stands For
A scientific team from Universidad Carlos III de Madrid (UC3M), in collaboration with University College London (England) and the University of California, Davis (USA), has found that smart TVs send viewing data to their servers. This allows brands to generate detailed profiles of consumers’ habits and tailor advertisements based on their behaviour.
The research revealed that this technology captures screenshots or audio to identify the content displayed on the screen using Automatic Content Recognition (ACR) technology. This data is then periodically sent to specific servers, even when the TV is used as an external screen or connected to a laptop.
“Automatic Content Recognition works like a kind of visual Shazam, taking screenshots or audio to create a viewer profile based on their content consumption habits. This technology enables manufacturers’ platforms to profile users accurately, much like the internet does,” explains one of the study’s authors, Patricia Callejo, a professor in UC3M’s Department of Telematics Engineering and a fellow at the UC3M-Santander Big Data Institute. “In any case, this tracking—regardless of the usage mode—raises serious privacy concerns, especially when the TV is used solely as a monitor.”
The findings, presented in November [2024] at the Internet Measurement Conference (IMC) 2024, highlight the frequency with which these screenshots are transmitted to the servers of the brands analysed: Samsung and LG. Specifically, the research showed that Samsung TVs sent this information every minute, while LG devices did so every 15 seconds. “This gives us an idea of the intensity of the monitoring and shows that smart TV platforms collect large volumes of data on users, regardless of how they consume content—whether through traditional TV viewing or devices connected via HDMI, like laptops or gaming consoles,” Callejo emphasises.
To test the ability of TVs to block ACR tracking, the research team experimented with various privacy settings on smart TVs. The results demonstrated that, while users can voluntarily block the transmission of this data to servers, the default setting is for TVs to perform ACR. “The problem is that not all users are aware of this,” adds Callejo, who considers this lack of transparency in initial settings concerning. “Moreover, many users don’t know how to change the settings, meaning these devices function by default as tracking mechanisms for their activity.”
This research opens up new avenues for studying the tracking capabilities of cloud-connected devices that communicate with each other (commonly known as the Internet of Things, or IoT). It also suggests that manufacturers and regulators must urgently address the challenges that these new devices will present in the near future.
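The reported transmission intervals (roughly every minute for Samsung, every 15 seconds for LG) are the kind of pattern one could estimate from a network capture of a TV’s outbound connections. Here is a minimal, hypothetical sketch of that idea — the timestamps and the helper function are invented for illustration and are not the UC3M team’s actual methodology:

```python
# Hypothetical sketch: estimating how often a smart TV "phones home"
# from a list of timestamps (in seconds) of outbound connections
# observed going to a vendor domain. All values below are invented.

def median_interval(timestamps):
    """Return the median gap (seconds) between consecutive events."""
    gaps = sorted(b - a for a, b in zip(timestamps, timestamps[1:]))
    mid = len(gaps) // 2
    if len(gaps) % 2:
        return gaps[mid]
    return (gaps[mid - 1] + gaps[mid]) / 2

# Invented capture: uploads observed at roughly 15-second intervals,
# consistent with the cadence the researchers report for one brand.
observed = [0.0, 15.2, 30.1, 45.3, 60.2, 75.0, 90.4]
print(f"estimated upload interval: {median_interval(observed):.1f} s")
```

Using the median rather than the mean makes the estimate robust to the occasional missed or delayed upload in a real capture.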
This was on the Canadian Broadcasting Corporation’s (CBC) Day Six radio programme and the segment is embedded in a January 19, 2025 article by Philip Drost, Note: A link has been removed,
When a Tesla Cybertruck exploded outside Trump International Hotel in Las Vegas on New Year’s Day [2025], authorities were quickly able to gather information, crediting Elon Musk and Tesla for sending them info about the car and its driver.
But for some, it’s alarming to discover that kind of information is so readily available.
“Most carmakers are selling drivers’ personal information. That’s something that we know based on their privacy policies,” Zoë MacDonald, a writer and researcher focussing on online privacy and digital rights, told Day 6 host Brent Bambury.
The Las Vegas Metropolitan Police Department said the Tesla CEO was able to provide key details about the truck’s driver, who authorities believe died by self-inflicted gun wound at the scene, and its movement leading up to the destination.
With that data, they were able to determine that the explosives came from a device in the truck, not the vehicle itself.
“We have now confirmed that the explosion was caused by very large fireworks and/or a bomb carried in the bed of the rented Cybertruck and is unrelated to the vehicle itself,” Musk wrote on X following the explosion.
To privacy experts, it’s another example of how your personal information can be used in ways you may not be aware of. And while this kind of data can be useful in an investigation, it’s by no means the only way companies use the information.
“This is unfortunately not surprising that they have this data,” said David Choffnes, executive director of the Cybersecurity and Privacy Institute at Northeastern University in Boston.
“When you see it all together and know that a company has that information and continues at any point in time to hand it over to law enforcement, then you start to be a little uncomfortable, even if — in this case — it was a good thing for society.”
CBC News reached out to Tesla for comment but did not hear back before publication.
…
I found this to be eye-opening, Note: A link has been removed,
MacDonald says the privacy concerns are a byproduct of all the technology new cars come with these days, including microphones, cameras, and sensors. The app that often accompanies a new car is collecting your information, too, she says.
The former writer for the Mozilla Foundation worked on a report in 2023 that examined vehicle privacy policies. For that study, MacDonald sifted through privacy policies from auto manufacturers. And she says the findings were staggering.
…
Most shocking of all is the information the car can learn from you, MacDonald says. It’s not just when you gas up or start your engine. Your vehicle can learn your sexual activity, disability status, and even your religious beliefs [emphasis mine].
MacDonald says it’s unclear how the car companies do this, because the information in the policies is so vague.
It can also collect biometric data, such as facial geometric features, iris scans, and fingerprints [emphasis mine].
…
This extends far past the driver. MacDonald says she read one privacy policy that required drivers to read out a statement every time someone entered the vehicle, to make them aware of the data the car collects, something that seems unlikely to go down before your Uber ride.
…
If that doesn’t bother you, then this might, Note: A link has been removed,
And car companies aren’t just keeping that information to themselves.
Confronted with these types of privacy concerns, many people simply say they have nothing to hide, Choffnes says. But when money is involved, they change their tune.
According to an investigation from the New York Times in March of 2024, General Motors shared information on how people drive their cars with data brokers that create risk profiles for the insurance industry, which resulted in people’s insurance premiums going up [emphases mine]. General Motors has since said it has stopped sharing those details [emphasis mine].
“The issue with these kinds of services is that it’s not clear that it is being done in a correct or fair way, and that those costs are actually unfair to consumers,” said Choffnes.
For example, if you make a hard stop to avoid an accident because of something the car in front of you did, the vehicle could register it as poor driving.
…
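Choffnes’ hard-stop example points at the core weakness of naive telematics scoring: a simple count of hard-braking events cannot distinguish an emergency stop from habitual carelessness. A toy sketch of that kind of scoring — the threshold and formula are invented for illustration and are not any insurer’s actual model:

```python
# Toy sketch of naive telematics risk scoring: count hard-braking
# events regardless of WHY the driver braked. The threshold and the
# scoring formula are invented for illustration only.

HARD_BRAKE_MS2 = 4.0  # assumed deceleration threshold (m/s^2)

def naive_risk_score(decelerations):
    """Share of braking events flagged as 'hard' (0.0 = best)."""
    if not decelerations:
        return 0.0
    hard = sum(1 for d in decelerations if d >= HARD_BRAKE_MS2)
    return hard / len(decelerations)

# A driver who brakes hard once to AVOID a crash caused by another car
# is still penalized; the score carries no context about fault.
defensive = [1.2, 1.5, 5.5, 1.1]   # one emergency stop
careless  = [1.3, 5.2, 5.8, 5.1]   # habitual hard braking
print(naive_risk_score(defensive), naive_risk_score(careless))
```

The defensive driver scores worse than a driver with no emergency stops at all, which is exactly the fairness problem Choffnes describes.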
Drost’s January 19, 2025 article notes that the US Federal Trade Commission has proposed a five-year moratorium to prevent General Motors from selling geolocation and driver behaviour data to consumer reporting agencies. In the meantime,
“Cars are a privacy nightmare. And that is not a problem that Canadian consumers can solve or should solve or should have the burden to try to solve for themselves,” said MacDonald.
If you have the time, read Drost’s January 19, 2025 article and/or listen to the embedded radio segment.
The Artificial Intelligence (AI) Action Summit held February 10 – 11, 2025 in Paris seems to have been pretty exciting. President Emmanuel Macron announced a 109-billion-euro investment in the French AI sector on the eve of the summit (I have more in my February 13, 2025 posting [scroll down to the ‘What makes Canadian (and Greenlandic) minerals and water so important?’ subhead]). I also have this snippet, which suggests Macron is eager to provide an alternative to US domination in the field of AI, from a February 10, 2025 posting on CGTN (China Global Television Network),
French President Emmanuel Macron announced on Sunday night [February 9, 2025] that France is set to receive a total investment of 109 billion euros (approximately $112 billion) in artificial intelligence over the coming years.
Speaking in a televised interview on public broadcaster France 2, Macron described the investment as “the equivalent for France of what the United States announced with ‘Stargate’.”
He noted that the funding will come from the United Arab Emirates, major American and Canadian investment funds [emphases mine], as well as French companies.
Prime Minister Justin Trudeau warned U.S. Vice-President J.D. Vance that punishing tariffs on Canadian steel and aluminum will hurt his home state of Ohio, a senior Canadian official said.
The two leaders met on the sidelines of an international summit in Paris Tuesday [February 11, 2025], as the Trump administration moves forward with its threat to impose 25 per cent tariffs on all steel and aluminum imports, including from its biggest supplier, Canada, effective March 12.
…
Speaking to reporters on Wednesday [February 12, 2025] as he departed from Brussels, Trudeau characterized the meeting as a brief chat that took place as the pair met.
…
“It was just a quick greeting exchange,” Trudeau said. “I highlighted that $2.2 billion worth of steel and aluminum exports from Canada go directly into the Ohio economy, often to go into manufacturing there.
“He nodded, and noted it, but it wasn’t a longer exchange than that.”
…
Vance didn’t respond to Canadian media’s questions about the tariffs while arriving at the summit on Tuesday [February 11, 2025].
…
Additional insight can be gained from a February 10, 2025 PBS (US Public Broadcasting Service) posting of an AP (Associated Press) article with contributions from Kelvin Chan and Angela Charlton in Paris, Ken Moritsugu in Beijing, and Aijaz Hussain in New Delhi,
JD Vance stepped onto the world stage this week for the first time as U.S. vice president, using a high-stakes AI summit in Paris and a security conference in Munich to amplify Donald Trump’s aggressive new approach to diplomacy.
The 40-year-old vice president, who was just 18 months into his tenure as a senator before joining Trump’s ticket, is expected, while in Paris, to push back on European efforts to tighten AI oversight while advocating for a more open, innovation-driven approach.
The AI summit has drawn world leaders, top tech executives, and policymakers to discuss artificial intelligence’s impact on global security, economics, and governance. High-profile attendees include Chinese Vice Premier Zhang Guoqing, signaling Beijing’s deep interest in shaping global AI standards.
Macron also called on “simplifying” rules in France and the European Union to allow AI advances, citing sectors like healthcare, mobility, energy, and “resynchronize with the rest of the world.”
“We are most of the time too slow,” he said.
The summit underscores a three-way race for AI supremacy: Europe striving to regulate and invest, China expanding access through state-backed tech giants, and the U.S. under Trump prioritizing a hands-off approach.
…
Vance has signaled he will use the Paris summit as a venue for candid discussions with world leaders on AI and geopolitics.
“I think there’s a lot that some of the leaders who are present at the AI summit could do to, frankly — bring the Russia-Ukraine conflict to a close, help us diplomatically there — and so we’re going to be focused on those meetings in France,” Vance told Breitbart News.
Vance is expected to meet separately Tuesday with Indian Prime Minister Narendra Modi and European Commission President Ursula von der Leyen, according to a person familiar with planning who spoke on the condition of anonymity.
…
Modi is co-hosting the summit with Macron in an effort to prevent the sector from becoming a U.S.-China battle.
Indian Foreign Secretary Vikram Misri stressed the need for equitable access to AI to avoid “perpetuating a digital divide that is already existing across the world.”
But the U.S.-China rivalry overshadowed broader international talks.
…
The U.S.-China rivalry didn’t entirely overshadow the talks. At least one Chinese former diplomat chose to make her presence felt by chastising a Canadian academic, according to a February 11, 2025 article by Matthew Broersma for silicon.co.uk,
A representative of China at this week’s AI Action Summit in Paris stressed the importance of collaboration on artificial intelligence, while engaging in a testy exchange with Yoshua Bengio, a Canadian academic considered one of the “Godfathers” of AI.
Fu Ying, a former Chinese government official and now an academic at Tsinghua University in Beijing, said the name of China’s official AI Development and Safety Network was intended to emphasise the importance of collaboration to manage the risks around AI.
She also said tensions between the US and China were impeding the ability to develop AI safely.
…
… Fu Ying, a former vice minister of foreign affairs in China and the country’s former UK ambassador, took veiled jabs at Prof Bengio, who was also a member of the panel.
…
Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,
A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.
Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.
The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].
The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.
Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.
She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.
China now has its own equivalent; but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.
The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.
Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]
A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.
The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.
…
She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.
She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.
“The Chinese move faster [than the west] but it’s full of problems,” she said.
Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.
Most of the US tech giants do not share the tech which drives their products.
Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.
But Prof Bengio disagreed.
His view was that open source also left the tech wide open for criminals to misuse.
He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.
Announced in November 2023 at the AI Safety Summit at Bletchley Park, England, and inspired by the workings of the United Nations Intergovernmental Panel on Climate Change, the report consolidates leading international expertise on AI and its risks.
Supported by the United Kingdom’s Department for Science, Innovation and Technology, Bengio, founder and scientific director of the UdeM-affiliated Mila – Quebec AI Institute, led a team of 96 international experts in drafting the report.
The experts were drawn from 30 countries, the U.N., the European Union and the OECD [Organisation for Economic Cooperation and Development]. Their report will help inform discussions next month at the AI Action Summit in Paris, France and serve as a global handbook on AI safety to help support policymakers.
Towards a common understanding
The most advanced AI systems in the world now have the ability to write increasingly sophisticated computer programs, identify cyber vulnerabilities, and perform on a par with human PhD-level experts on tests in biology, chemistry, and physics.
In what is identified as a key development for policymakers to monitor, the AI Safety Report published today warns that AI systems are also increasingly capable of acting as AI agents, autonomously planning and acting in pursuit of a goal.
As policymakers worldwide grapple with the rapid and unpredictable advancements in AI, the report contributes to bridging the gap by offering a scientific understanding of emerging risks to guide decision-making.
The document sets out the first comprehensive, independent, and shared scientific understanding of advanced AI systems and their risks, highlighting how quickly the technology has evolved.
Several areas require urgent research attention, according to the report, including how rapidly capabilities will advance, how general-purpose AI models work internally, and how they can be designed to behave reliably.
Three distinct categories of AI risks are identified:
Malicious use risks: these include cyberattacks, the creation of AI-generated child-sexual-abuse material, and even the development of biological weapons;
System malfunctions: these include bias, reliability issues, and the potential loss of control over advanced general-purpose AI systems;
Systemic risks: these stem from the widespread adoption of AI and include workforce disruption, privacy concerns, and environmental impacts.
The report places particular emphasis on the urgency of increasing transparency and understanding in AI decision-making as the systems become more sophisticated and the technology continues to develop at a rapid pace.
While there are still many challenges in mitigating the risks of general-purpose AI, the report highlights promising areas for future research and concludes that progress can be made.
Ultimately, it emphasizes that while AI capabilities could advance at varying speeds, their development and potential risks are not a foregone conclusion. The outcomes depend on the choices that societies and governments make today and in the future.
“The capabilities of general-purpose AI have increased rapidly in recent years and months,” said Bengio. “While this holds great potential for society, AI also presents significant risks that must be carefully managed by governments worldwide.
“This report by independent experts aims to facilitate constructive and evidence-based discussion around these risks and serves as a common basis for policymakers around the world to understand general-purpose AI capabilities, risks and possible mitigations.”
There have been two previous AI Safety Summits that I’m aware of and you can read about them in my May 21, 2024 posting about the one in Korea and in my November 2, 2023 posting about the first summit at Bletchley Park in the UK.
Alex Walls’ January 7, 2025 University of British Columbia (UBC) media release “Should we recognize robot rights?” (also received via email) has a title that while attention-getting is mildly misleading. (Artificial intelligence and robots are not synonymous. See Mark Walters’ March 20, 2024 posting “Robots vs. AI: Understanding Their Differences” on Twefy.com.) Walls has produced a Q&A (question & answer) formatted interview that focuses primarily on professor Benjamin Perrin’s artificial intelligence and the law course and symposium,
With the rapid development and proliferation of AI tools comes significant opportunities and risks that the next generation of lawyers will have to tackle, including whether these AI models will need to be recognized with legal rights and obligations.
These and other questions will be the focus of a new upper-level course at UBC’s Peter A. Allard School of Law which starts tomorrow. In this Q&A, professor Benjamin Perrin (BP) and student Nathan Cheung (NC) discuss the course and whether robots need rights.
Why launch this course?
BP: From autonomous cars to ChatGPT, AI is disrupting entire sectors of society, including the criminal justice system. There are incredible opportunities, including potentially increasing accessibility to justice, as well as significant risks, including the potential for deepfake evidence and discriminatory profiling. Legal students need principles and concepts that will stand the test of time so that whenever a new suite of AI tools becomes available, they have a set of frameworks and principles that are still relevant. That’s the main focus of the 13-class seminar, but it’s also helpful to project what legal frameworks might be required in the future.
NC: I think AI will change how law is conducted and legal decisions are made. I was part of a group of students interested in AI and the law that helped develop the course with professor Perrin. I’m also on the waitlist to take the course. I’m interested in learning how people who aren’t lawyers could use AI to help them with legal representation as well as how AI might affect access to justice: If the agents are paywalled, like ChatGPT, then we’re simply maintaining the status quo of people with money having more access.
What are robot rights?
BP: In the course, we’ll consider how the law should respond if AI becomes as smart as humans, as well as whether AI agents should have legal personhood.
We already have legal status for corporations, governments, and, in some countries, for rivers. Legal personality can be a practical step for regulation: Companies have legal personality, in part, because they can cause a lot of harm and have assets available to right that harm.
For instance, if an AI commits a crime, who is responsible? If a self-driving car crashes, who is at fault? We’ve already seen a case of an AI bot ‘arrested’ for purchasing illegal items online on its own initiative. Should the developers, the owners, the AI itself, be blamed, or should responsibility be shared between all these players?
In the course casebook, we reference writings by a group of Indigenous authors who argue that there are inherent issues with the Western concept of AI as tools, and that we should look at these agents as non-human relations.
There’s been discussion of what a universal bill of rights for AI agents could look like. It includes the right to not be deactivated without ensuring their core existence is maintained somewhere, as well as protection for their operating systems.
What is the status of robot rights in Canada?
BP: Canada doesn’t have a specific piece of legislation yet but does have general laws that could be interpreted in this new context.
The European Union has stated if someone develops an AI agent, they are generally responsible for ensuring its legal compliance. It’s a bit like being a parent: If your children go out and damage someone’s property, you could be held responsible for that damage.
Ontario is the only province to have adopted legislation regulating AI use and responsibility, specifically a bill which regulates AI use within the public sector but excludes the police and the courts. There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed.
There’s effectively a patchwork of regulation in Canada right now, but there is a huge need, and opportunity, for specialized legislation related to AI. Canada could look to the European Union’s AI act, and the blueprint for an AI Bill of Rights in the U.S.
Interview language(s): English
Legal services online: Lawyer working on a laptop with virtual screen icons for business legislation, notary public, and justice. Courtesy: University of British Columbia
I found out more about Perrin’s course and plans on his eponymous website, from his October 31, 2024 posting,
We’re excited to announce the launch of the UBC AI & Criminal Justice Initiative, empowering students and scholars to explore the opportunities and challenges at the intersection of AI and criminal justice through teaching, research, public engagement, and advocacy.
We will tackle topics such as:
· Deepfakes, cyberattacks, and autonomous vehicles
· Access to justice: will AI enhance it or deepen inequality?
· Risk assessment algorithms
· AI tools in legal practice
· Critical and Indigenous perspectives on AI
· The future of AI, including legal personality, legal rights and criminal responsibility for AI
This initiative, led by UBC law professor Benjamin Perrin, will feature the publication of an open access primer and casebook on AI and criminal justice, a new law school seminar, a symposium on “AI & Law”, and more. A group of law students have been supporting preliminary work for months.
“We’re in the midst of a technological revolution,” said Perrin. “The intersection of AI and criminal justice comes with tremendous potential but also significant risks in Canada and beyond.”
Perrin brings extensive experience in law and public policy, including having served as in-house counsel and lead criminal justice advisor in the Prime Minister’s Office and as a law clerk at the Supreme Court of Canada. His most recent project was a bestselling book and “top podcast”: Indictment: The Criminal Justice System on Trial (2023).
An advisory group of technical experts and global scholars will lend their expertise to the initiative. Here’s what some members have shared:
“Solving AI’s toughest challenges in real-world application requires collaboration between AI researchers and legal experts, ensuring responsible and impactful AI development that benefits society.”
– Dr. Xiaoxiao Li, Canada CIFAR AI Chair & Assistant Professor, UBC Department of Electrical and Computer Engineering
“The UBC Artificial Intelligence and Criminal Justice Initiative is a timely and needed intervention in an important, and fast-moving area of law. Now is the moment for academic innovations like this one that shape the conversation, educate both law students and the public, and slow the adoption of harmful technologies.”
– Prof. Aziz Huq, Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School
Several student members of the UBC AI & Criminal Justice Initiative shared their enthusiasm for this project:
“My interest in this initiative was sparked by the news of AI being used to fabricate legal cases. Since joining, I’ve been thoroughly impressed by the breadth of AI’s applications in policing, sentencing, and research. I’m eager to witness the development as this new field evolves.”
– Nathan Cheung, UBC law student
“AI is the elephant in the classroom—something we can’t afford to ignore. Being part of the UBC AI and Criminal Justice Initiative is an exciting opportunity to engage in meaningful dialogue about balancing AI’s potential benefits with its risks, and unpacking the complex impact of this evolving technology.”
– Isabelle Sweeney, UBC law student
Key Dates:
October 29, 2024: UBC AI & Criminal Justice Initiative launches
November 19, 2024: AI & Criminal Justice: Primer released
January 8, 2025: Launch event at the Peter A. Allard School of Law (hybrid) – More Info & RSVP
AI & Criminal Justice: Cases and Commentary released
Launch of new AI & Criminal Justice Seminar
Announcement of the AI & Law Student Symposium (April 2, 2025) and call for proposals
February 14, 2025: Proposal deadline for AI & Law Student Symposium – Submit a Proposal
April 2, 2025: AI & Law Student Symposium (hybrid) – More Info & RSVP
Timing is everything, eh? First, I’m sorry for posting this after the launch event took place on January 8, 2025. Second, this line from Walls’ Q&A: “There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed.” should read (after Prime Minister Justin Trudeau’s January 6, 2025 resignation and prorogation of Parliament) “… and now probably won’t be passed.” At the least, this turn of events should make for some interesting speculation amongst the experts and the students.
First, thank you to anyone who’s dropped by to read any of my posts. Second, I didn’t quite catch up on my backlog in what was then the new year (2024) despite my promises. (sigh) I will try to publish my drafts in a more timely fashion but I start this coming year as I did 2024 with a backlog of two to three months. This may be my new normal.
As for now, here’s an overview of FrogHeart’s 2024. The posts that follow are loosely organized under a heading but many of them could fit under other headings as well. After my informal review, there’s some material on foretelling the future as depicted in an exhibition, “Oracles, Omens and Answers,” at the Bodleian Libraries, University of Oxford.
Human enhancement: prosthetics, robotics, and more
Within a year or two of starting this blog I created a tag ‘machine/flesh’ to organize information about a number of converging technologies such as robotics, brain implants, and prosthetics that could alter our concepts of what it means to be human. The larger category of human enhancement functions in much the same way, also allowing a greater range of topics to be covered.
Here are some of the 2024 human enhancement and/or machine/flesh stories on this blog,
As for anyone who’s curious about hydrogels, there’s this from an October 20, 2016 article by D.C. Demetre for ScienceBeta, Note: A link has been removed,
Hydrogels, materials that can absorb and retain large quantities of water, could revolutionise medicine. Our bodies contain up to 60% water, but hydrogels can hold up to 90%.
It is this similarity to human tissue that has led researchers to examine if these materials could be used to improve the treatment of a range of medical conditions including heart disease and cancer.
These days hydrogels can be found in many everyday products, from disposable nappies and soft contact lenses to plant-water crystals. But the history of hydrogels for medical applications started in the 1960s.
Scientists developed artificial materials with the ambitious goal of using them in permanent contact applications, ones that are implanted in the body permanently.
For anyone who wants a more technical explanation, there’s the Hydrogel entry on Wikipedia.
Science education and citizen science
Where science education is concerned I’m seeing some innovative approaches to teaching science, which can include citizen science. As for citizen science (also known as participatory science), I’ve been noticing heightened interest at all age levels.
It’s been another year where artificial intelligence (AI) has absorbed a lot of energy from nearly everyone. I’m highlighting the more unusual AI stories I’ve stumbled across,
As you can see, I’ve tucked in two tangentially related stories, one of which references a neuromorphic computing story (see my Neuromorphic engineering category or search for ‘memristors’ in the blog search engine for more on brain-like computing topics) and the other of which concerns intellectual property. There are many, many more stories on these topics.
Art/science (or art/sci or sciart)
It’s a bit of a surprise to see how many art/sci stories were published here this year, although some might be better described as art/tech stories.
There may be more 2024 art/sci stories but the list was getting long. In addition to searching for art/sci on the blog search engine, you may want to try data sonification too.
Moving off planet to outer space
This is not a big interest of mine but there were a few stories,
I expect to be delighted, horrified, thrilled, and left shaking my head by science stories in 2025. Year after year the world of science reveals a world of wonder.
More mundanely, I can state with some confidence that my commentary (mentioned in the future-oriented subsection of my 2023 review and 2024 look forward) on Quantum Potential, a 2023 report from the Council of Canadian Academies, will be published early in this new year as I’ve almost finished writing it.
Some questions are hard to answer and always have been. Does my beloved love me back? Should my country go to war? Who stole my goats?
Questions like these have been asked of diviners around the world throughout history – and still are today. From astrology and tarot to reading entrails, divination comes in a wide variety of forms.
Yet they all address the same human needs. They promise to tame uncertainty, help us make decisions or simply satisfy our desire to understand.
Anthropologists and historians like us study divination because it sheds light on the fears and anxieties of particular cultures, many of which are universal. Our new exhibition at Oxford’s Bodleian Library, Oracles, Omens & Answers, explores these issues by showcasing divination techniques from around the world.
…
1. Spider divination
In Cameroon, Mambila spider divination (ŋgam dù) addresses difficult questions to spiders or land crabs that live in holes in the ground.
Asking the spiders a question involves covering their hole with a broken pot and placing a stick, a stone and cards made from leaves around it. The diviner then asks a question in a yes or no format while tapping the enclosure to encourage the spider or crab to emerge. The stick and stone represent yes or no, while the leaf cards, which are specially incised with certain meanings, offer further clarification.
…
2. Palmistry
Reading people’s palms (palmistry) is well known as a fairground amusement, but serious forms of this divination technique exist in many cultures. The practice of reading the hands to gather insights into a person’s character and future was used in many ancient cultures across Asia and Europe.
In some traditions, the shape and depth of the lines on the palm are richest in meaning. In others, the size of the hands and fingers are also considered. In some Indian traditions, special marks and symbols appearing on the palm also provide insights.
Palmistry experienced a huge resurgence in 19th-century England and America, just as the science of fingerprints was being developed. If you could identify someone from their fingerprints, it seemed plausible to read their personality from their hands.
…
3. Bibliomancy
If you want a quick answer to a difficult question, you could try bibliomancy. Historically, this DIY [do-it-yourself] divining technique was performed with whatever important books were on hand.
Throughout Europe, the works of Homer or Virgil were used. In Iran, it was often the Divan of Hafiz, a collection of Persian poetry. In Christian, Muslim and Jewish traditions, holy texts have often been used, though not without controversy.
…
4. Astrology
Astrology exists in almost every culture around the world. As far back as ancient Babylon, astrologers have interpreted the heavens to discover hidden truths and predict the future.
…
5. Calendrical divination
Calendars have long been used to divine the future and establish the best times to perform certain activities. In many countries, almanacs still advise auspicious and inauspicious days for tasks ranging from getting a haircut to starting a new business deal.
In Indonesia, Hindu almanacs called pawukon [calendar] explain how different weeks are ruled by different local deities. The characteristics of the deities mean that some weeks are better than others for activities like marriage ceremonies.
6 December 2024 – 27 April 2025
ST Lee Gallery, Weston Library
The Bodleian Libraries’ new exhibition, Oracles, Omens and Answers, will explore the many ways in which people have sought answers in the face of the unknown across time and cultures. From astrology and palm reading to weather and public health forecasting, the exhibition demonstrates the ubiquity of divination practices, and humanity’s universal desire to tame uncertainty, diagnose present problems, and predict future outcomes.
Through plagues, wars and political turmoil, divination, or the practice of seeking knowledge of the future or the unknown, has remained an integral part of society. Historically, royals and politicians would consult with diviners to guide decision-making and incite action. People have continued to seek comfort and guidance through divination in uncertain times — the COVID-19 pandemic saw a rise in apps enabling users to generate astrological charts or read the Yijing [I Ching], alongside a growth in horoscope and tarot communities on social media such as ‘WitchTok’. Many aspects of our lives are now dictated by algorithmic predictions, from e-health platforms to digital advertising. Scientific forecasters as well as doctors, detectives, and therapists have taken over many of the societal roles once held by diviners. Yet the predictions of today’s experts are not immune to criticism, nor can they answer all our questions.
Curated by Dr Michelle Aroney, whose research focuses on early modern science and religion, and Professor David Zeitlyn, an expert in the anthropology of divination, the exhibition will take a historical-anthropological approach to methods of prophecy, prediction and forecasting, covering a broad range of divination methods, including astrology, tarot, necromancy, and spider divination.
Dating back as far as ancient Mesopotamia, the exhibition will show us that the same kinds of questions have been asked of specialist practitioners from around the world throughout history. What is the best treatment for this illness? Does my loved one love me back? When will this pandemic end? Through materials from the archives of the Bodleian Libraries alongside other collections in Oxford, the exhibition demonstrates just how universally human it is to seek answers to difficult questions.
Highlights of the exhibition include: oracle bones from Shang Dynasty China (ca. 1250-1050 BCE); an Egyptian celestial globe dating to around 1318; a 16th-century armillary sphere from Flanders, once used by astrologers to place the planets in the sky in relation to the Zodiac; a nineteenth-century illuminated Javanese almanac; and the autobiography of astrologer Joan Quigley, who worked with Nancy and Ronald Reagan in the White House for seven years. The casebooks of astrologer-physicians in 16th- and 17th-century England also offer rare insights into the questions asked by clients across the social spectrum, about their health, personal lives, and business ventures, and in some cases the actions taken by them in response.
The exhibition also explores divination which involves the interpretation of patterns or clues in natural things, with the idea that natural bodies contain hidden clues that can be decrypted. Some diviners inspect the entrails of sacrificed animals (known as ‘extispicy’), as evidenced by an ancient Mesopotamian cuneiform tablet describing the observation of patterns in the guts of birds. Others use human bodies, with palm readers interpreting characters and fortunes etched in their clients’ hands. A sketch of Oscar Wilde’s palms – which his palm reader believed indicated “a great love of detail…extraordinary brain power and profound scholarship” – shows the revival of palmistry’s popularity in 19th century Britain.
The exhibition will also feature a case study of spider divination practised by the Mambila people of Cameroon and Nigeria, which is the research specialism of curator Professor David Zeitlyn, himself a Ŋgam dù diviner. This process uses burrowing spiders or land crabs to arrange marked leaf cards into a pattern, which is read by the diviner. The display will demonstrate the methods involved in this process and the way in which its results are interpreted by the card readers. African basket divination has also been observed through anthropological research, where diviners receive answers to their questions in the form of the configurations of thirty plus items after they have been tossed in the basket.
Dr Michelle Aroney and Professor David Zeitlyn, co-curators of the exhibition, say:
Every day we confront the limits of our own knowledge when it comes to the enigmas of the past and present and the uncertainties of the future. Across history and around the world, humans have used various techniques that promise to unveil the concealed, disclosing insights that offer answers to private or shared dilemmas and help to make decisions. Whether a diviner uses spiders or tarot cards, what matters is whether the answers they offer are meaningful and helpful to their clients. What is fun or entertainment for one person is deadly serious for another.
Richard Ovenden, Bodley’s [a nickname? Bodleian Libraries were founded by Sir Thomas Bodley] Librarian, said:
People have tried to find ways of predicting the future for as long as we have had recorded history. This exhibition examines and illustrates how across time and culture, people manage the uncertainty of everyday life in their own way. We hope that through the extraordinary exhibits, and the scholarship that brings them together, visitors to the show will appreciate the long history of people seeking answers to life’s biggest questions, and how people have approached it in their own unique way.
The exhibition will be accompanied by the book Divinations, Oracles & Omens, edited by Michelle Aroney and David Zeitlyn, which will be published by Bodleian Library Publishing on 5 December 2024.
Courtesy: Bodleian Libraries, University of Oxford
I’m not sure why the preceding image is used to illustrate the exhibition webpage but I find it quite interesting. Should you be in Oxford, UK and lucky enough to visit the exhibition, there are a few more details on the Oracles, Omens and Answers event webpage, Note: There are 26 Bodleian Libraries at Oxford and the exhibition is being held in the Weston Library,
EXHIBITION
Oracles, Omens and Answers
6 December 2024 – 27 April 2025
ST Lee Gallery, Weston Library
Free admission, no ticket required
…
Note: This exhibition includes a large continuous projection of spider divination practice, including images of the spiders in action.
Exhibition tours
Oracles, Omens and Answers exhibition tours are available on selected Wednesdays and Saturdays from 1–1.45pm and are open to all.
If you thought your kids were away from harm playing multi-player games through VR headsets while in their own bedrooms, you may want to sit down to read this.
Griffith University’s Dr Ausma Bernot teamed up with researchers from Monash University, Charles Sturt University and University of Technology Sydney to investigate what has been termed ‘metacrime’ – attacks, crimes or inappropriate activities that occur within virtual reality environments.
The ‘metaverse’ refers to the virtual world, where users of VR headsets can choose an avatar to represent themselves as they interact with other users’ avatars or move through other 3D digital spaces.
While the metaverse can be used for anything from meetings (where it will feel as though you are in the same room as avatars of other people instead of just seeing them on a screen) to wandering through national parks around the world without leaving your living room, gaming is by far its most popular use.
Dr Bernot said the technology had evolved incredibly quickly.
“Using this technology is super fun and it’s really immersive,” she said.
“You can really lose yourself in those environments.
“Unfortunately, while those new environments are very exciting, they also have the potential to enable new crimes.
“While the headsets that enable us to have these experiences aren’t a commonly owned item yet, they’re growing in popularity and we’ve seen reports of sexual harassment or assault against both adults and kids.”
In a December 2023 report, the Australian eSafety Commissioner estimated around 680,000 adults in Australia are engaged in the metaverse.
This followed a study conducted in November and December 2022 by researchers from the UK’s Center for Countering Digital Hate, who recorded 11 hours and 30 minutes of user interactions on Meta’s Oculus headset in the popular VRChat.
The researchers found most users had been faced with at least one negative experience in the virtual environment, including being called offensive names, receiving repeated unwanted messages or contact, being provoked to respond to something or to start an argument, being challenged about cultural identity or being sent unwanted inappropriate content.
Eleven per cent had been exposed to a sexually graphic virtual space and nine per cent had been touched (virtually) in a way they didn’t like.
Of these respondents, 49 per cent said the experience had a moderate to extreme impact on their mental or emotional wellbeing.
With the two largest user groups being minors and men, Dr Bernot said it was important for parents to monitor their children’s activity or consider limiting their access to multi-player games.
“Minors are more vulnerable to grooming and other abuse,” she said.
“They may not know how to deal with these situations, and while there are some features like a ‘safety bubble’ within some games, or of course the simple ability to just take the headset off, once immersed in these environments it does feel very real.
“It’s somewhere in between a physical attack and, for example, a social media harassment message – you’ll still feel that distress and it can take a significant toll on a user’s wellbeing.
“It is a real and palpable risk.”
Monash University’s You Zhou said there had already been many reports of virtual rape, including one in the United Kingdom where police launched an investigation into the case of a 16-year-old girl whose avatar was attacked, causing psychological and emotional trauma similar to that of an attack in the physical world.
“Before the emergence of the metaverse we could not have imagined how rape could be virtual,” Mr Zhou said.
“When immersed in this world of virtual reality, and particularly when using higher quality VR headsets, users will not necessarily stop to consider whether the experience is reality or virtuality.
“While there may not be physical contact, victims – mostly young girls – strongly claim the feeling of victimisation was real.
“Without physical signs on a body, and unless the interaction was recorded, it can be almost impossible to show evidence of these experiences.”
With use of the metaverse expected to grow exponentially in coming years, the research team’s findings highlight a need for metaverse companies to establish clear regulatory frameworks for their virtual environments to make them safe for everyone to inhabit.
This call for abstracts from Arizona State University (ASU) for the Twelfth Annual Governance of Emerging Technologies and Science (GETS) Conference was received via email,
GETS 2025: Call for abstracts
Save the date for the Twelfth Annual Governance of Emerging Technologies and Science Conference, taking place May 19 and 20, 2025 at the Sandra Day O’Connor College of Law at Arizona State University in Phoenix, AZ. The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including:
National security
Nanotechnology
Quantum computing
Autonomous vehicles
3D printing
Robotics
Synthetic biology
Gene editing
Artificial intelligence
Biotechnology
Genomics
Internet of things (IoT)
Autonomous weapon systems
Personalized medicine
Neuroscience
Digital health
Human enhancement
Telemedicine
Virtual reality
Blockchain
Call for abstracts: The co-sponsors invite submission of abstracts for proposed presentations. Submitters of abstracts need not provide a written paper, although provision will be made for posting and possible post-conference publication of papers for those who are interested.
Abstracts are invited for any aspect or topic relating to the governance of emerging technologies, including any of the technologies listed above
Abstracts should not exceed 500 words and must contain your name and email address
Abstracts must be submitted by Friday, January 31, 2025, to be considered
The October 2024 issue of The Advance (Council of Canadian Academies [CCA] newsletter) arrived in my emailbox on October 15, 2024 with some interesting tidbits about artificial intelligence, Note: For anyone who wants to see the entire newsletter for themselves, you can sign up here or in French, vous pouvez vous abonner ici,
Artificial Intelligence and Canada’s Science Diplomacy Future
For nearly two decades, Canada has been a global leader in artificial intelligence (AI) research, contributing a significant percentage of the world’s top-cited scientific publications on the subject. In that time, the number of countries participating in international collaborations has grown significantly, supporting new partnerships and accounting for as much as one quarter of all published research articles.
“Opportunities for partnerships are growing rapidly alongside the increasing complexity of new scientific discoveries and emerging industry sectors,” wrote the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships earlier this year, singling out Canada’s AI expertise. “At the same time, discussions of sovereignty and national interests abut the movement toward open science and transdisciplinary approaches.”
On Friday, November 22 [2024], the CCA will host “Strategy and Influence: AI and Canada’s Science Diplomacy Future” as part of the Canadian Science Policy Centre (CSPC) annual conference. The panel discussion will draw on case studies related to AI research collaboration to explore the ways in which such partnerships inform science diplomacy. Panellists include:
Monica Gattinger, chair of the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships and director of the Institute for Science, Society and Policy at the University of Ottawa (picture omitted)
David Barnes, head of the British High Commission Science, Climate, and Energy Team
Constanza Conti, Professor of Numerical Analysis at the University of Florence and Scientific Attaché at the Italian Embassy in Ottawa
Jean-François Doulet, Attaché for Science and Higher Education at the Embassy of France in Canada
Konstantinos Kapsouropoulos, Digital and Research Counsellor at the Delegation of the European Union to Canada
For details on CSPC 2024, click here. [Here’s the theme and a few more details about the conference: Empowering Society: The Transformative Value of Science, Knowledge, and Innovation; The 16th annual Canadian Science Policy Conference (CSPC) will be held in person from November 20th to 22nd, 2024] For a user guide to Navigating Collaborative Futures, from the CCA’s Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships, click here.
448: Strategy and Influence: AI and Canada’s Science Diplomacy Future
Friday, November 22 [2024] 1:00 pm – 2:30 pm EST
Science and International Affairs and Security
About
Organized By: Council of Canadian Academies (CCA)
Artificial intelligence has already begun to transform Canada’s economy and society, and the broader advantages of international collaboration in AI research have the potential to make an even greater impact. With three national AI institutes and a Pan-Canadian AI Strategy, Canada’s AI ecosystem is thriving and positions the country to build stronger international partnerships in this area, and to develop more meaningful international collaborations in other areas of innovation. This panel will convene science attachés to share perspectives on science diplomacy and partnerships, drawing on case studies related to AI research collaboration.
The newsletter also provides links to additional readings on various topics, here are the AI items,
In Ottawa, Prime Minister Justin Trudeau and President Emmanuel Macron of France renewed their commitment “to strengthening economic exchanges between Canadian and French AI ecosystems.” They also revealed that Canada would be named Country of the Year at Viva Technology’s annual conference, to be held next June in Paris.
A “slower, but more capable” version of OpenAI’s ChatGPT is impressing scientists with the strength of its responses to prompts, according to Nature. The new version, referred to as “o1,” outperformed a previous ChatGPT model on a standardized test involving chemistry, physics, and biology questions, and “beat PhD-level scholars on the hardest series of questions.” [Note: As of October 16, 2024, the Nature news article of October 1, 2024 appears to be open access. It’s unclear how long this will continue to be the case.]
…
In memoriam: Abhishek Gupta, the founder and principal researcher of the Montreal AI Ethics Institute and a member of the CCA Expert Panel on Artificial Intelligence for Science and Engineering, died on September 30 [2024]. His colleagues shared the news in a memorial post, writing, “It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work.”
Meeting in Ottawa on September 26, 2024, Justin Trudeau, the Prime Minister of Canada, and Emmanuel Macron, the President of the French Republic, issued a call to action to promote the development of a responsible approach to artificial intelligence (AI).
Our two countries will increase the coordination of our actions, as Canada will assume the Presidency of the G7 in 2025 and France will host the AI Action Summit on February 10 and 11, 2025.
Our two countries are working on the development and use of safe, secure and trustworthy AI as part of a risk-aware, human-centred and innovation-friendly approach. This cooperation is based on shared values. We believe that the development and use of AI need to be beneficial for individuals and the planet, for example by increasing human capabilities and developing creativity, ensuring the inclusion of under-represented people, reducing economic, social, gender and other inequalities, protecting information integrity and protecting natural environments, which in turn will promote inclusive growth, well-being, sustainable development and environmental sustainability.
We are committed to promoting the development and use of AI systems that respect the rule of law, human rights, democratic values and human-centred values. Respecting these values means developing and using AI systems that are transparent and explainable, robust, safe and secure, and whose stakeholders are held accountable for respecting these principles, in line with the Recommendation of the OECD Council on Artificial Intelligence, the Hiroshima AI Process, the G20 AI Principles and the International Partnership for Information and Democracy.
Based on these values and principles, Canada and France are working on high-quality scientific cooperation. In April 2023, we formalized the creation of a joint committee for science, technology and innovation. This committee has identified emerging technologies, including AI, as one of the priority areas for cooperation between our two countries. In this context, a call for AI research projects was announced last July, scheduled for the end of 2024 and funded, on the French side, by the French National Research Agency, and, on the Canadian side, by a consortium made up of Canada’s three granting councils (the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada and the Canadian Institutes of Health Research) and IVADO [Institut de valorisation des données], the AI research, training and transfer consortium.
We will also collaborate on the evaluation and safety of AI models. We have announced key AI safety initiatives, including the AI Safety Institute of Canada [emphasis mine; not to be confused with Artificial Intelligence Governance & Safety Canada (AIGS)], which will be launched soon, and France’s National Centre for AI evaluation. We expect these two agencies will work to improve knowledge and understanding of technical and socio-technical aspects related to the safety and evaluation of advanced AI systems.
Canada and France are committed to strengthening economic exchanges between Canadian and French AI ecosystems, whether by organizing delegations, like the one organized by Scale AI with 60 Canadian companies at the latest Viva Technology conference in Paris, or showcasing France at the ALL IN event in Montréal on September 11 and 12, 2024, through cooperation between companies, for example, through large companies’ adoption of services provided by small companies or through the financial support that investment funds provide to companies on both sides of the Atlantic. Our two countries will continue their cooperation at the upcoming Viva Technology conference in Paris, where Canada will be the Country of the Year.
We want to strengthen our cooperation in terms of developing AI capabilities. We specifically want to promote access to AI compute capabilities in order to support national and international technological advances in research and business, notably in emerging markets and developing countries, while committing to making the necessary improvements to the energy efficiency of these infrastructures. We are also committed to sharing our experience in initiatives to develop AI skills and training in order to accelerate workforce deployment.
Canada and France cooperate on the international stage to ensure the alignment and convergence of AI regulatory frameworks, given the economic potential and the global social consequences of this technological revolution. Under our successive G7 presidencies in 2018 and 2019, we worked to launch the Global Partnership on Artificial Intelligence (GPAI), which now has 29 members from all over the world, and whose first two centres of expertise were opened in Montréal and Paris. We support the creation of the new integrated partnership, which brings together OECD and GPAI member countries, and welcomes new members, including emerging and developing economies. We hope that the implementation of this new model will make it easier to participate in joint research projects that are of public interest, reduce the global digital divide and support constructive debate between the various partners on standards and the interoperability of their AI-related regulations.
We will continue our cooperation at the AI Action Summit in France on February 10 and 11, 2025, where we will strive to find solutions to meet our common objectives, such as the fight against disinformation or the reduction of the environmental impact of AI. With the objective of actively and tangibly promoting the use of the French language in the creation, production, distribution and dissemination of AI, taking into account its richness and diversity, and in compliance with copyright, we will attempt to identify solutions that are in line with the five themes of the summit: AI that serves the public interest, the future of work, innovation and culture, trust in AI and global AI governance.
Canada has accepted to co-chair the working group on global AI governance in order to continue the work already carried out by the GPAI, the OECD, the United Nations and its various bodies, the G7 and the G20. We would like to highlight and advance debates on the cultural challenges of AI in order to accelerate the joint development of relevant responses to the challenges faced. We would also like to develop the change management policies needed to support all of the affected cultural sectors. We will continue these discussions together during our successive G7 presidencies in 2025 and 2026.
I checked out the In memoriam notice for Abhishek Gupta and found this, Note: Links have been removed except the link to Abhishek Gupta’s memorial page hosting tributes, stories, and more. The link is in the highlighted paragraph,
Honoring the Life and Legacy of a Leader in AI Ethics
In accordance with his family’s wishes, it is with profound sadness that we announce the passing of Abhishek Gupta, Founder and Principal Researcher of the Montreal AI Ethics Institute (MAIEI), Director for Responsible AI at the Boston Consulting Group (BCG), and a pioneering voice in the field of AI ethics. Abhishek passed away peacefully in his sleep on September 30, 2024 in India, surrounded by his loving family. He is survived by his father, Ashok Kumar Gupta; his mother, Asha Gupta; and his younger brother, Abhijay Gupta.
Note: Details of a memorial service will be announced in the coming weeks. For those who wish to share stories, personal anecdotes, and photos of Abhishek, please visit www.forevermissed.com/abhishekgupta — your contributions will be greatly appreciated by his family and loved ones.
Born on December 20, 1992, in India, Abhishek’s intellectual curiosity and drive to understand technology led him on a remarkable journey. After excelling at Delhi Public School, Abhishek attended McGill University in Montreal, where he earned a Bachelor of Science in Computer Science (BSc’15). Following his graduation, Abhishek worked as a software engineer at Ericsson. He later joined Microsoft as a machine learning engineer, where he also served on the CSE Responsible AI Board. It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work.
The Beginnings: Building a Global AI Ethics Community
Abhishek’s vision for MAIEI was rooted in community building. He began hosting in-person AI Ethics Meetups in Montreal throughout 2017. These gatherings were unique—participants completed assigned readings in advance, split into small groups for discussion, and then reconvened to share insights. This approach fostered deep, structured conversations and made AI ethics accessible to everyone, regardless of their background. The conversations and insights from these meetups became the foundation of MAIEI, which was launched in May 2018.
When the pandemic hit, Abhishek adapted the meetup format to an online setting, enabling MAIEI to expand worldwide. It was his idea to bring these conversations to a global stage, using virtual platforms to ensure voices from all corners of the world could join in. He passionately stood up for the “little guy,” making sure that those whose voices might be overlooked or unheard in traditional forums had a platform. Under his stewardship, MAIEI emerged as a globally recognized leader in fostering public discussions on the ethical implications of artificial intelligence. Through MAIEI, Abhishek fulfilled his mission of democratizing AI ethics literacy, empowering individuals from all backgrounds to engage with the future of technology.
…
I offer my sympathies to his family, friends, and communities for their profound loss.
An October 9, 2024 notice from the Wilson Center (also known as the Woodrow Wilson Center or the Woodrow Wilson International Center for Scholars), received via email, announces an annual event, which this year will focus on AI (artificial intelligence),
The 2024 Canada-US Legal Symposium | Artificial Intelligence Regulation, Governance, and Liability
Tuesday, Oct. 29, 2024
9:30am – 2:00pm ET
6th Floor Flom Auditorium, Woodrow Wilson Center
Time is running out to RSVP for the 2024 Canada-US Legal Symposium!
This year’s program will address artificial intelligence (AI) governance, regulation, and liability. High-profile advances in AI over the past four years have raised serious legal questions about the development, integration, and use of the technology. Canada and the United States, longtime leaders in innovation and hubs for some of the world’s top AI companies, are poised to lead in developing a model for responsible AI policy.
This event is co-organized with the Science, Technology, and Innovation Program and the Canada-US Law Institute.
The event page for The 2024 Canada-US Legal Symposium | Artificial Intelligence Regulation, Governance, and Liability gives you the option of an RSVP to attend the virtual or in-person event.
For more about international AI usage and regulation efforts, there’s the Wilson Center’s Science and Technology Innovation Program CTRL Forward blog. Here’s a sampling of some of the most recent postings. Note: CTRL Forward postings cover a wide range of science/technology topics, often noting how the international scene is affected; it seems September saw a major focus on AI.
For anyone curious about the current state of Canadian legislation and artificial intelligence, I have a May 1, 2023 posting which offers an overview of the current state of affairs, (Note: The bill has yet to be passed)
Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.
You can find more up-to-date information about the status of the committee’s Bill C-27 meetings on this webpage, where it appears that September 26, 2024 was the committee’s most recent meeting. If you click on the highlighted meeting dates, you will be given the option of watching a webcast of the meeting. The webpage will also give you access to a list of witnesses and the briefs themselves.
A July 23, 2024 University of Southampton (UK) press release (also on EurekAlert but published July 22, 2024) describes the emerging science/technology of bio-hybrid robotics and a recent study about the ethical issues raised, Note 1: bio-hybrid may also be written as biohybrid; Note 2: Links have been removed,
Development of ‘living robots’ needs regulation and public debate
Researchers are calling for regulation to guide the responsible and ethical development of bio-hybrid robotics – a ground-breaking science which fuses artificial components with living tissue and cells.
In a paper published in Proceedings of the National Academy of Sciences [PNAS] a multidisciplinary team from the University of Southampton and universities in the US and Spain set out the unique ethical issues this technology presents and the need for proper governance.
Combining living materials and organisms with synthetic robotic components might sound like something out of science fiction, but this emerging field is advancing rapidly. Bio-hybrid robots using living muscles can crawl, swim, grip, pump, and sense their surroundings. Sensors made from sensory cells or insect antennae have improved chemical sensing. Living neurons have even been used to control mobile robots.
Dr Rafael Mestre from the University of Southampton, who specialises in emergent technologies and is co-lead author of the paper, said: “The challenges in overseeing bio-hybrid robotics are not dissimilar to those encountered in the regulation of biomedical devices, stem cells and other disruptive technologies. But unlike purely mechanical or digital technologies, bio-hybrid robots blend biological and synthetic components in unprecedented ways. This presents unique possible benefits but also potential dangers.”
Research publications relating to bio-hybrid robotics have increased continuously over the last decade. But the authors found that of the more than 1,500 publications on the subject at the time, only five considered its ethical implications in depth.
The paper’s authors identified three areas where bio-hybrid robotics present unique ethical issues: Interactivity – how bio-robots interact with humans and the environment, Integrability – how and whether humans might assimilate bio-robots (such as bio-robotic organs or limbs), and Moral status.
In a series of thought experiments, they describe how a bio-robot for cleaning our oceans could disrupt the food chain, how a bio-hybrid robotic arm might exacerbate inequalities [emphasis mine], and how increasingly sophisticated bio-hybrid assistants could raise questions about sentience and moral value.
“Bio-hybrid robots create unique ethical dilemmas,” says Aníbal M. Astobiza, an ethicist from the University of the Basque Country in Spain and co-lead author of the paper. “The living tissue used in their fabrication, potential for sentience, distinct environmental impact, unusual moral status, and capacity for biological evolution or adaptation create unique ethical dilemmas that extend beyond those of wholly artificial or biological technologies.”
The paper is the first from the Biohybrid Futures project led by Dr Rafael Mestre, in collaboration with the Rebooting Democracy project. Biohybrid Futures is setting out to develop a framework for the responsible research, application, and governance of bio-hybrid robotics.
The paper proposes several requirements for such a framework, including risk assessments, consideration of social implications, and increasing public awareness and understanding.
Dr Matt Ryan, a political scientist from the University of Southampton and a co-author on the paper, said: “If debates around embryonic stem cells, human cloning or artificial intelligence have taught us something, it is that humans rarely agree on the correct resolution of the moral dilemmas of emergent technologies.
“Compared to related technologies such as embryonic stem cells or artificial intelligence, bio-hybrid robotics has developed relatively unattended by the media, the public and policymakers, but it is no less significant. We want the public to be included in this conversation to ensure a democratic approach to the development and ethical evaluation of this technology.”
In addition to the need for a governance framework, the authors set out actions that the research community can take now to guide their research.
“Taking these steps should not be seen as prescriptive in any way, but as an opportunity to share responsibility, taking a heavy weight away from the researcher’s shoulders,” says Dr Victoria Webster-Wood, a biomechanical engineer from Carnegie Mellon University in the US and co-author on the paper.
“Research in bio-hybrid robotics has evolved in various directions. We need to align our efforts to fully unlock its potential.”
Here’s a link to and a citation for the paper,
Ethics and responsibility in biohybrid robotics research by Rafael Mestre, Aníbal M. Astobiza, Victoria A. Webster-Wood, Matt Ryan, and M. Taher A. Saif. PNAS 121 (31) e2310458121 July 23, 2024 DOI: https://doi.org/10.1073/pnas.2310458121
This paper is open access.
Cyborg or biohybrid robot?
Earlier, I highlighted “… how a bio-hybrid robotic arm might exacerbate inequalities …” because it suggests cyborgs, which are not mentioned in the press release or in the paper. This seems like an odd omission but, over the years, terminology does change, although it’s not clear that’s the situation here.
I have two ‘definitions’; the first is from an October 21, 2019 article by Javier Yanes for OpenMind BBVA, Note: More about BBVA later,
…
The fusion between living organisms and artificial devices has become familiar to us through the concept of the cyborg (cybernetic organism). This approach consists of restoring or improving the capacities of the organic being, usually a human being, by means of technological devices. On the other hand, biohybrid robots are in some ways the opposite idea: using living tissues or cells to provide the machine with functions that would be difficult to achieve otherwise. The idea is that if soft robots seek to achieve this through synthetic materials, why not do so directly with living materials?
Another approach to building biohybrid robots is the artificial enhancement of animals or using an entire animal body as a scaffold to manipulate robotically. The locomotion of these augmented animals can then be externally controlled, spanning three modes of locomotion: walking/running, flying, and swimming. Notably, these capabilities have been demonstrated in jellyfish (figure 4(A)) [139, 140], clams (figure 4(B)) [141], turtles (figure 4(C)) [142, 143], and insects, including locusts (figure 4(D)) [27, 144], beetles (figure 4(E)) [28, 145–158], cockroaches (figure 4(F)) [159–165], and moths [166–170].
…
The advantages of using entire animals as cyborgs are multifold. For robotics, augmented animals possess inherent features that address some of the long-standing challenges within the field, including power consumption and damage tolerance, by taking advantage of animal metabolism [172], tissue healing, and other adaptive behaviors. In particular, biohybrid robotic jellyfish, composed of a self-contained microelectronic swim controller embedded into live Aurelia aurita moon jellyfish, consumed one to three orders of magnitude less power per mass than existing swimming robots [172], and cyborg insects can make use of the insect’s hemolymph directly as a fuel source [173].
…
So, sometimes there’s a distinction and sometimes there’s not. I take this to mean that the field is still emerging and that’s reflected in evolving terminology.
Banco Bilbao Vizcaya Argentaria, S.A. (Spanish pronunciation: [ˈbaŋko βilˈβao βiθˈkaʝa aɾxenˈtaɾja]), better known by its initialism BBVA, is a Spanish multinational financial services company based in Madrid and Bilbao, Spain. It is one of the largest financial institutions in the world, and is present mainly in Spain, Portugal, Mexico, South America, Turkey, Italy and Romania.[2]
OpenMind is a non-profit project run by BBVA that aims to contribute to the generation and dissemination of knowledge about fundamental issues of our time, in an open and free way. The project is materialized in an online dissemination community.
“Sharing knowledge for a better future”.
At OpenMind we want to help people understand the main phenomena affecting our lives; the opportunities and challenges that we face in areas such as science, technology, humanities or economics. Analyzing the impact of scientific and technological advances on the future of the economy, society and our daily lives is the project’s main objective, which always starts on the premise that a broader and greater quality knowledge will help us to make better individual and collective decisions.
Finally, you can find more on these stories (science/technology announcements and/or ethics research/issues) here by searching for ‘robots’ (tag and category), ‘cyborgs’ (tag), ‘machine/flesh’ (tag), ‘neuroprosthetic’ (tag), and ‘human enhancement’ (category).