h/t to Kris Krug for his February 9, 2026 email invitation to an upcoming meeting of the Vancouver AI (artificial intelligence) community,
H.R. MacMillan Space Centre, Vancouver, British Columbia
Tickets
1×🪱 Earlyworm CA$40.00 Subtotal CA$40.00 Total CA$40.00 [available until 9:59 pm PST on February 11, 2025]
[Standard tickets: $60 or Members of BC + AI: $45]
About Event
Vancouver AI welcomes the full spectrum… builders, artists, researchers, students, public servants, founders, the AI-curious, and the beautifully confused. This is a monthly room to swap notes, demo work, and argue productively about the future.
February is when Vancouver gets quiet enough to hear your own thoughts. Dangerous, I know.
This month is hosted by MAC, our Mind, AI, and Consciousness crew. They’re the BC + AI ecosystem’s philosophy special interest group, where people have been grappling with questions that don’t have clean answers.
Does consciousness require embodiment? When does pattern recognition become understanding? Can synesthesia teach us something fundamental about how minds work? What happens when you remove the emotional binding between visual recognition and feeling?
For months, MAC has been meeting at SFU [Simon Fraser University] downtown for “Deep Dive” sessions tackling the intersection of artificial intelligence and consciousness through rigorous philosophical inquiry.
Assigned readings. Opposing viewpoint presentations. Chatham House rules. Academics and autodidacts arguing productively. Physicists debating with Zen priests.
February 25 [2026] is their showcase: highlights from recent sessions on embodiment, quantum consciousness, synesthesia as consciousness litmus tests, and whether LLMs are mirrors reflecting human knowledge or something closer to minds.
Come ready for depth. Come ready to have assumptions challenged. The philosophy crew gets the spotlight.
…
Romance, Intimacy & AI
What happens to love when it requires no effort, no risk, no repair?
Tanya Slingsby, strategic advisor to BC + AI and MAC community leader, explores the question at the heart of human-AI relationships. Her background: philanthropic governance, social impact, and a refusal to let philosophy stay abstract.
AI is increasingly participating in romantic intimacy: companionship, attachment, affective support. Drawing on relational sciences and human-AI interaction research,
Tanya examines how AI-mediated intimacy reshapes desire, attachment, and our capacity to tolerate friction and ambivalence. As tech companies shift from an attention economy to an attachment economy, who decides how love gets designed?
My December 3, 2021 posting, “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read),” may provide some context for anyone wanting more information before attending the February 25, 2026 Vancouver AI community meetup.
If you’re curious about Tanya Slingsby, according to her eponymous website, she’s a visual artist and artists’ representative. As for her work in the nonprofit and philanthropic sectors, there’s Slingsby’s profile on this Herrendorf Family Foundation team webpage. Intriguing, yes?
Self-disclosure doesn’t just take place in real life, it also happens when you interact in virtual reality (VR).
Caption: Researchers find that both the type of communication medium and gender influence self-disclosure. Credit: Professor Junko Ichino from Waseda University, Japan
Self-disclosure, or the process of conveying one’s details to others verbally, is crucial for communication. Self-disclosure includes expressing personal information, thoughts, and feelings. It encompasses self-expression and clarification, social validation and control, as well as relationship development, and is closely related to reciprocity, intimacy, trust, interactional enjoyment, and satisfaction.
In recent years, technological advancements have paved the way for new forms of communication, including video-conferencing and embodied virtual reality (VR). It is indispensable to shed light on the phenomenon of self-disclosure in this context to better understand relationship building and mental health.
With this goal, a team of researchers from Japan, led by Professor Junko Ichino from Waseda University (affiliated with Tokyo City University, Japan, at the time of the study), including Mr. Masahiro Ide from Tokyo City University and TIS Inc., Professor Hitomi Yokoyama from Okayama University of Science, Professor Hirotoshi Asano from Kogakuin University, and Professors Hideo Miyachi and Daisuke Okabe from Tokyo City University, has explored the effects of new communication media and gender on self-disclosure. Their findings were published online in Behaviour & Information Technology on June 4, 2025.
Prof. Ichino explains the motivation behind their research, “When I tried accessing VRChat, a social VR platform that gained popularity in Japan around 2017, I was surprised by the lack of polite or superficial conversation and the presence of freedom and directness of communication. I felt that these people would never interact like this in the real world, which led me to become interested in virtual spaces as a place for communication.”
Since self-disclosure is essential for communication, in addition to studies on self-disclosure in face-to-face conversations, there have been many studies on self-disclosure in conversations through text and voice, which are traditional communication media. Many studies have shown that conversations through text and voice encourage self-disclosure more than face-to-face conversations. However, little was known about whether new communication media, such as video-conferencing and embodied VR, encourage self-disclosure compared to face-to-face conversations. Therefore, the researchers investigated self-disclosure across four communication media: face-to-face, video, and embodied VR with either realistic or unrealistic avatars. They recruited 144 participants, aged 20 to 50 years, divided them into 72 dyads, and encouraged them to develop conversations based on personal topics. The participants underwent multiple self-disclosure sessions across the four communication media.
The researchers found that embodied VR, especially with unrealistic avatars, resulted in self-disclosure of personal feelings that was 1.5 times higher than in the face-to-face scenario. However, video communication did not differ noticeably from face-to-face. Furthermore, gender pairing also affected self-disclosure. To investigate how, the researchers classified the participants into female-to-female, male-to-male, male-to-female, and female-to-male pairings. Upon analysis, the team found that female-to-female pairings had the highest degree of self-disclosure, particularly the disclosure of personal information, regardless of the communication medium.
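The “1.5 times higher” finding is simply a ratio of mean self-disclosure scores between conditions. As a minimal sketch of that kind of comparison, here is a Python snippet with invented scores (all numbers are hypothetical, chosen only so the ratio comes out to 1.5; they are not from the study):

```python
# Hypothetical sketch of comparing mean self-disclosure scores across the
# four communication media. All values are invented for illustration.
from statistics import mean

scores = {
    "face-to-face": [2.0, 2.2, 1.8, 2.0],
    "video": [2.1, 1.9, 2.0, 2.0],
    "vr-realistic": [2.6, 2.4, 2.5, 2.5],
    "vr-unrealistic": [3.0, 3.1, 2.9, 3.0],
}

# Mean score per medium, then the VR (unrealistic avatar) vs face-to-face ratio.
means = {medium: mean(vals) for medium, vals in scores.items()}
ratio = means["vr-unrealistic"] / means["face-to-face"]
print(f"VR (unrealistic) vs face-to-face: {ratio:.2f}x")  # → 1.50x
```

The actual study would, of course, use validated self-disclosure coding and statistical tests rather than a raw ratio; this only illustrates the shape of the comparison.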
Since embodied VR facilitates self-disclosure of personal feelings compared to face-to-face, the team expects applications in various VR services related to self-expression, including counselling and psychotherapy services, where therapists interact with patients with ailments such as depression, dementia, cancer, adjustment disorders, and anxiety disorders and with clients with various mental symptoms. Moreover, the proposed innovation can lead to novel interventions in caregiver cafés for people who care for elderly people with dementia or who are bedridden, as well as in stress relief services where listening agents listen to people’s worries and anxieties about their physical condition and interpersonal relationships.
Overall, Prof. Ichino envisions a bright future sprouting from their breakthrough findings. “The shift to remote communication using communication media that surged during the COVID-19 pandemic is expected to continue because such media are required to achieve the UN’s Sustainable Development Goals. Additionally, the need for mental well-being support, which is closely related to self-disclosure, is expected to increase in the future.”
Together, the insights obtained from this study could be greatly utilised for applications that help in improving mental health.
I wonder if there will be further studies before these insights are utilized. I have a couple of questions: (1) what happens when participants are over 50, and (2) are there cultural differences?
Caption: Researchers from the Istituto Italiano di Tecnologia (IIT) in Genoa (Italy) and Brown University in Providence (USA) have discovered that people sense the hand of a humanoid robot as part of their body schema, particularly when it comes to carrying out a task together, like slicing a bar of soap. Credit: IIT-Istituto Italiano di Tecnologia
Researchers from the Istituto Italiano di Tecnologia (IIT) in Genoa (Italy) and Brown University in Providence (USA) have discovered that people sense the hand of a humanoid robot as part of their body schema, particularly when it comes to carrying out a task together, like slicing a bar of soap. The study has been published in the journal iScience and can pave the way for a better design of robots that have to function in close contact with humans, such as those used in rehabilitation.
The project, led by Alessandra Sciutti, IIT Principal Investigator of the CONTACT unit at IIT, in collaboration with Brown University professor Joo-Hyun Song, explored whether unconscious mechanisms that shape interactions between humans also emerge in interactions between a person and a humanoid robot.
Researchers focused on a phenomenon known as the “near-hand effect”, in which the presence of a hand near an object alters a person’s visual attention, because the brain is preparing to use the object. Moreover, the study considers the human brain’s ability to create a “body schema” to move more efficiently in the surrounding space, by integrating objects into it as well.
Through an unconscious process shaped by external stimuli, the brain builds a “body schema” that helps us avoid obstacles or grab objects without looking at them. Any tools can become part of this internal map as long as they are useful for a task, like a tennis racket that feels like an arm extension to the player who uses it daily. Since body schema is constantly evolving, the research team led by Sciutti explored whether a robot could also become part of it.
Giulia Scorza Azzarà, PhD student at IIT and first author of the study, designed and analyzed the results of experiments where people carried out a joint task with iCub, the IIT’s child-sized humanoid robot. They sliced a bar of soap together by using a steel wire, alternately pulled by the person and the robotic partner.
After the activity, researchers verified the integration of the robotic hand into the body schema by quantifying the near-hand effect with the Posner cueing task. This test challenges participants to press a key as quickly as possible to indicate on which side of the screen an image appears, while an object placed right next to the screen influences their attention. Data from 30 volunteers showed a specific pattern: participants reacted faster when images appeared next to the robot’s hand, showing that their brains had treated it much like a nearby human hand. Thanks to control experiments, researchers proved that this effect appeared only in those who had sliced the soap with the robot.
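The Posner-cueing measure boils down to comparing mean reaction times for targets appearing near the robot’s hand versus the far side. A minimal sketch, with invented reaction times (not the study’s data):

```python
# Illustrative sketch of the near-hand comparison in a Posner cueing task:
# reaction times (ms) for targets on the side near the robot's hand vs the
# far side. All values are hypothetical, for illustration only.
from statistics import mean

rt_near_hand = [412, 398, 405, 420, 401]  # invented: faster responses
rt_far_side = [441, 455, 430, 448, 437]   # invented: slower responses

# A positive difference means targets near the robot's hand were detected faster.
advantage = mean(rt_far_side) - mean(rt_near_hand)
print(f"near-hand advantage: {advantage:.1f} ms")
```

In the real experiment this difference would be tested statistically across participants and conditions (with vs without the joint slicing task); the sketch only shows what “reacted faster near the robot’s hand” means in terms of the data.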
The strength of the near-hand effect also depended on how the humanoid robot moved. When the robot’s gestures were broad, fluid, and well synchronized with the human ones, the effect was stronger, resulting in a better integration of iCub’s hand into the participant’s body schema. Physical closeness between the robotic hand and the person also played a role: the nearer the robot’s hand was to the participant during the slicing task, the greater the effect.
To assess how participants perceived the robot after working together on the task, researchers gathered information through questionnaires. The results show that the more participants saw iCub as competent and pleasant, the more intense the cognitive effect was. Attributing human-like traits or emotions to iCub further boosted the hand’s integration in the body schema; in other words, partnership and empathy enhanced the cognitive bond with the robot.
The team carried out experiments with a humanoid robot under controlled conditions, paving the way for a deeper understanding of human-machine interactions. Psychological factors will be essential to designing robots able to adapt to human stimuli and able to provide a more intuitive and effective robotic experience. These are crucial features for application of robotics in motor rehabilitation, virtual reality, and assistive technologies.
The research is part of the ERC-funded wHiSPER project, coordinated by IIT’s CONTACT (COgNiTive Architecture for Collaborative Technologies) unit.
Here’s a link to and a citation for the paper,
Collaborating with a robot biases human spatial attention by Giulia Scorza Azzarà, Joshua Zonca, Francesco Rea, Joo-Hyun Song, and Alessandra Sciutti. iScience, Volume 28, Issue 7, 18 July 2025, 112791. DOI: https://doi.org/10.1016/j.isci.2025.112791 Available online 2 June 2025; Version of Record 18 June 2025. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.
This paper is open access.
This business of a robot becoming an extension of your body, i.e., becoming part of you, is reminiscent of some issues brought up in my October 21, 2025 posting “Copyright, artificial intelligence, and thoughts about cyborgs,” such as, N. Katherine Hayles’s assemblages and, more specifically, the issues brought up in the section titled, “Symbiosis and your implant.”
Canadian research into relationships with domestic robots
Zhao Zhao’s (assistant professor in Computer Science at the University of Guelph) September 11, 2025 essay for The Conversation highlights results from one of her recently published studies, Note: Links have been removed,
Social companion robots are no longer just science fiction. In classrooms, libraries and homes, these small machines are designed to read stories, play games or offer comfort to children. They promise to support learning and companionship, yet their role in family life often extends beyond their original purpose.
In our recent study of families in Canada and the United States, we found that even after a children’s reading robot “retired” or was no longer in active and regular use, most households chose to keep it — treating it less like a gadget and more like a member of the family.
Luka is a small, owl-shaped reading robot, designed to scan and read picture books aloud, making storytime more engaging for young children.
In 2021, my colleague Rhonda McEwen and I set out to explore how 20 families used Luka. We wanted to study not just how families used Luka initially, but how that relationship was built and maintained over time, and what Luka came to mean in the household. Our earlier work laid the foundation for this by showing how families used Luka in daily life and how the bond grew over the first months of use.
When we returned in 2025 to follow up with 19 of those families, we were surprised by what we found. Eighteen households had chosen to keep Luka, even though its reading function was no longer useful to their now-older children. The robot lingered not because it worked better than before, but because it had become meaningful.
A deep, emotional connection
Children often spoke about Luka in affectionate, human-like terms. One called it “my little brother.” Another described it as their “only pet.” These weren’t just throwaway remarks — they reflected the deep emotional place the robot had taken in their everyday lives.
Because Luka had been present during important family rituals like bedtime reading, children remembered it as a companion.
Parents shared similar feelings. Several explained that Luka felt like “part of our history.” For them, the robot had become a symbol of their children’s early years, something they could not imagine discarding. One family even held a small “retirement ceremony” before passing Luka on to a younger cousin, acknowledging its role in their household.
Other families found new, practical uses. Luka was repurposed as a music player, a night light or a display item on a bookshelf next to other keepsakes. Parents admitted they continued to charge it because it felt like “taking care of” the robot.
The device had long outlived its original purpose, yet families found ways to integrate it into daily routines.
…
Luka the robot. Image by Dr Zhao Zhao, University of Guelph
Zhao also wrote an August 8, 2025 essay about her 2025 followup study on families and their Luka robots for Frontiers Media,
What happens to a social robot after it retires?
Four years ago, we placed a small owl-shaped reading robot named Luka into 20 families’ homes. At the time, the children were preschoolers, just learning to read. Luka’s job was clear: scan the pages of physical picture books and read them aloud, helping children build early literacy skills.
That was in 2021. In 2025, we went back — not expecting to find much. The children had grown. The reading level was no longer age-appropriate. Surely, Luka’s work was done.
Instead, we found something extraordinary.
18 of 19 families still had their robot. Many were still charging it. A few used it as a music player. Some simply left it on a shelf—next to baby books and keepsakes—its eyes still glowing gently. Luka had stayed.
…
As more families bring AI-powered companions into their homes, we’ll need to better understand not only how they’re used — but how they’re remembered.
Because sometimes, the robot stays.
For the curious, here’s a link to and a citation for the 2025 followup study,
Trying to distinguish between robots and artificial intelligence (AI) can mean wading into murky waters. Not all robots have AI, not all AI is embodied in a robot, and cyborgs add more complexity.
N. Katherine Hayles’s 2025 book “Bacteria to AI: Human Futures with our Nonhuman Symbionts,” mentioned in my October 21, 2025 posting “Copyright, artificial intelligence, and thoughts about cyborgs,” does not make a distinction, which may or may not be important. We just don’t know. It seems we are in the process of redefining our relationships to the life and the objects around us as we redefine what it means to be a person.
This is going to be a jam-packed posting with the AI experts at the Canadian Science Policy Centre (CSPC) virtual panel, a look back at a ‘testy’ exchange between Yoshua Bengio (one of Canada’s godfathers of AI) and a former diplomat from China, an update on Canada’s Minister of Artificial Intelligence and Digital Innovation, Evan Solomon and his latest AI push, and a missive from the BC artificial intelligence community.
A Canadian Science Policy Centre AI panel on November 11, 2025
The Canadian Science Policy Centre (CSPC) provides an October 9, 2025 update on an upcoming virtual panel being held on Remembrance Day,
[AI-Driven Misinformation Across Sectors Addressing a Cross-Societal Challenge]
Upcoming Virtual Panel[s]: November 11 [2025]
Artificial Intelligence is transforming how information is created and trusted, offering immense benefits across sectors like healthcare, education, finance, and public discourse—yet also amplifying risks such as misinformation, deepfakes, and scams that threaten public trust. This panel brings together experts from diverse fields [emphasis mine] to examine the manifestations and impacts of AI-driven misinformation and to discuss policy, regulatory, and technical solutions [emphasis mine]. The conversation will highlight practical measures—from digital literacy and content verification to platform accountability—aimed at strengthening resilience in Canada and globally.
For more information on the panel and to register, click below.
Odd timing for this event. Moving on, I found more information on the CSPC’s webpage for this event, Note: Unfortunately, links to the moderator’s and speakers’ bios could not be copied here,
Canadian Science Policy Centre Email info@sciencepolicy.ca
…
This panel brings together cross-sectoral experts to examine how AI-driven misinformation manifests in their respective domains, its consequences, and how policy, regulation, and technical interventions can help mitigate harm. The discussion will explore practical pathways for action, such as digital literacy, risk audits, content verification technologies, platform responsibility, and regulatory frameworks. Attendees will leave with a nuanced understanding of both the risks and the resilience strategies being explored in Canada and globally.
[Moderator]
Michael Geist
Canada Research Chair in Internet & E-commerce Law, University of Ottawa
[Panelists]
Dr. Plinio Morita
Associate Professor / Director, Ubiquitous Health Technology Lab, University of Waterloo …
Dr. Nadia Naffi
Université Laval — Associate Professor of Educational Technology and expert on building human agency against AI-augmented disinformation and deepfakes.
Dr. Jutta Treviranus
Director, Inclusive Design Research Centre, OCAD U; expert on AI misinformation in the education sector and schools.
Dr. Fenwick McKelvey
Concordia University — Expert in political bots, information flows, and Canadian tech governance.
Michael Geist has his own blog/website featuring posts on his areas of interest and his podcast, Law Bytes. Jutta Treviranus is mentioned in my October 13, 2025 posting as a participant in “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference (October 23 – 24, 2025) and arts festival at the University of Toronto (scroll down to find it). She’s scheduled for a session on Thursday, October 23, 2025.
China, Canada, and the AI Action Summit in February 2025
Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,
A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.
Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.
The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].
The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.
Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.
She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.
China now has its own equivalent, but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.
The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.
Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]
A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.
The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.
…
She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.
She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.
“The Chinese move faster [than the west] but it’s full of problems,” she said.
Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.
Most of the US tech giants do not share the tech which drives their products.
Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.
But Prof Bengio disagreed.
His view was that open source also left the tech wide open for criminals to misuse.
He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.
…
Interesting, non? You can read more about Bengio’s views in an October 1, 2025 article by Rae Witte for Futurism.
In a Policy Forum, Yue Zhu and colleagues provide an overview of China’s emerging regulation for artificial intelligence (AI) technologies and its potential contributions to global AI governance. Open-source AI systems from China are rapidly expanding worldwide, even as the country’s regulatory framework remains in flux. In general, AI governance suffers from fragmented approaches, a lack of clarity, and difficulty reconciling innovation with risk management, making global coordination especially hard in the face of rising controversy.

Although no official AI law has yet been enacted, experts in China have drafted two influential proposals – the Model AI Law and the AI Law (Scholar’s Proposal) – which serve as key references for ongoing policy discussions. As the nation’s lawmakers prepare to draft a consolidated AI law, Zhu et al. note that the decisions will shape not only China’s innovation, but also global collaboration on AI safety, openness, and risk mitigation.

Here, the authors discuss China’s emerging AI regulation as structured around six pillars, which, combined, stress exemptive laws, efficient adjudication, and experimentalist requirements, while safeguarding against extreme risks. This framework seeks to balance responsible oversight with pragmatic openness, allowing developers to innovate for the long term and collaborate across the global research community. According to Zhu et al., despite the need for greater clarity, harmonization, and simplification, China’s evolving model is poised to shape future legislation and contribute meaningfully to global AI governance by promoting both safety and innovation at a time when international cooperation on extreme risks is urgently needed.
Here’s a link to and a citation for the paper,
China’s emerging regulation toward an open future for AI by Yue Zhu, Bo He, Hongyu Fu, Naying Hu, Shaoqing Wu, Taolue Zhang, Xinyi Liu, Gang Xu, Linghan Zhang, and Hui Zhou. Science, 9 Oct 2025, Vol 390, Issue 6769, pp. 132-135. DOI: 10.1126/science.ady7922
This paper is behind a paywall.
No mention of Fu Ying or China’s ‘The AI Development and Safety Network’ but perhaps that’s in the paper.
Canada and its Minister of AI and Digital Innovation
Evan Solomon (born April 20, 1968) is a Canadian politician and broadcaster who has been the minister of artificial intelligence and digital innovation since May 2025. A member of the Liberal Party, Solomon was elected as the member of Parliament (MP) for Toronto Centre in the April 2025 election.
He was the host of The Evan Solomon Show on Toronto-area talk radio station CFRB, and a writer for Maclean’s magazine. He was the host of CTV’s national political news programs Power Play and Question Period. In October 2022, he moved to New York City to accept a position with the Eurasia Group as publisher of GZERO Media. Solomon continued with CTV News as a “special correspondent” reporting on Canadian politics and global affairs.
…
Had you asked me what background one needs to be a ‘Minister of Artificial Intelligence and Digital Innovation’, media would not have been my first thought. That said, sometimes people can surprise you.
Solomon appears to be an enthusiast if a June 10, 2025 article by Anja Karadeglija for The Canadian Press is to be believed,
Canada’s new minister of artificial intelligence said Tuesday [June 10, 2025] he’ll put less emphasis on AI regulation and more on finding ways to harness the technology’s economic benefits [emphases mine].
In his first speech since becoming Canada’s first-ever AI minister, Evan Solomon said Canada will move away from “over-indexing on warnings and regulation” to make sure the economy benefits from AI.
His regulatory focus will be on data protection and privacy, he told the audience at an event in Ottawa Tuesday morning organized by the think tank Canada 2020.
Solomon said regulation isn’t about finding “a saddle to throw on the bucking bronco called AI innovation. That’s hard. But it is to make sure that the horse doesn’t kick people in the face. And we need to protect people’s data and their privacy.”
The previous government introduced a privacy and AI regulation bill that targeted high-impact AI systems. It did not become law before the election was called.
That bill is “not gone, but we have to re-examine in this new environment where we’re going to be on that,” Solomon said.
He said constraints on AI have not worked at the international level.
“It’s really hard. There’s lots of leakages,” he said. “The United States and China have no desire to buy into any constraint or regulation.”
That doesn’t mean regulation won’t exist, he said, but it will have to be assembled in steps.
…
Solomon’s comments follow a global shift among governments to focus on AI adoption and away from AI safety and governance.
The first global summit focusing on AI safety was held in 2023 as experts warned of the technology’s dangers — including the risk that it could pose an existential threat to humanity. At a global meeting in Korea last year, countries agreed to launch a network of publicly backed safety institutes.
But the mood had shifted by the time this year’s AI Action Summit began in Paris. …
…
Solomon outlined several priorities for his ministry — scaling up Canada’s AI industry, driving adoption and ensuring Canadians have trust in and sovereignty over the technology.
He said that includes supporting Canadian AI companies like Cohere, which “means using government as essentially an industrial policy to champion our champions.”
The federal government is putting together a task force to guide its next steps on artificial intelligence, and Artificial Intelligence Minister Evan Solomon is promising an update to the government’s AI strategy.
Solomon told the All In artificial intelligence conference in Montreal on Wednesday [September 24, 2025] that the “refreshed” strategy will be tabled later this year, “almost two years ahead of schedule.”
…
“We need to update and move quickly,” he said in a keynote speech at the start of the conference.
The task force will include about 20 representatives from industry, academia and civil society. The government says it won’t reveal the membership until later this week.
Solomon said task force members are being asked to consult with their networks, suggest “bold, practical” ideas and report back to him in November [2025].
The group will look at various topics related to AI, including research, adoption, commercialization, investment, infrastructure, skills, and safety and security. The government is also planning to solicit input from the public. [emphasis mine]
Canada was the first country to launch a national AI strategy [the Pan-Canadian AI Strategy announced in 2016], which the government updated in 2022. The strategy focuses on commercialization, the development and adoption of AI standards, talent and research.
Solomon also teased a “major quantum initiative” coming in October [2025?] to ensure both quantum computing talent and intellectual property stay in the country.
Solomon called digital sovereignty “the most pressing policy and democratic issue of our time” and stressed the importance of Canada having its own “digital economy that someone else can’t decide to turn off.”
Solomon said the federal government’s recent focus on major projects extends to artificial intelligence. He compared current conversations on Canada’s AI framework to the way earlier generations spoke about a national railroad or highway.
…
He said his government will address concerns about AI by focusing on privacy reform and modernizing Canada’s 25-year-old privacy law.
“We’re going to include protections for consumers who are concerned about things like deep fakes and protection for children, because that’s a big, big issue. And we’re going to set clear standards for the use of data so innovators have clarity to unlock investment,” Solomon said.
…
The government is consulting with the public? Experience suggests that by the time it does, all the major decisions will have been made; the public consultation comments will be mined so officials can make some minor, unimportant tweaks.
Canada’s AI Task Force and parts of the Empire Club talk are revealed in a September 26, 2025 article by Alex Riehl for BetaKit,
Inovia Capital partner Patrick Pichette, Cohere chief artificial intelligence (AI) officer Joelle Pineau, and Build Canada founder Dan Debow are among 26 members of AI minister Evan Solomon’s AI Strategy Task Force trusted to help the federal government renew its AI strategy.
Solomon revealed the roster, filled with leading Canadian researchers and business figures, while speaking at the Empire Club in Toronto on Friday morning [September 26, 2025]. He teased its formation at the ALL IN conference earlier this week [September 24, 2025], saying the team would include “innovative thinkers from across the country.”
The group will have 30 days to add to a collective consultation process in areas including research, talent, commercialization, safety, education, infrastructure, and security.
…
The full AI Strategy Task Force is listed below; each member will consult their network on specific themes.
Research and Talent
Gail Murphy, professor of computer science and vice-president – research and innovation, University of British Columbia and vice-chair at the Digital Research Alliance of Canada
Diane Gutiw, VP – global AI research lead, CGI Canada and co-chair of the Advisory Council on AI
Michael Bowling, professor of computer science and principal investigator – Reinforcement Learning and Artificial Intelligence Lab, University of Alberta and research fellow, Alberta Machine Intelligence Institute and Canada CIFAR AI chair
Arvind Gupta, professor of computer science, University of Toronto
Adoption across industry and governments
Olivier Blais, co-founder and VP of AI, Moov and co-chair of the Advisory Council on AI
Cari Covent, technology executive
Dan Debow, chair of the board, Build Canada
Commercialization of AI
Louis Têtu, executive chairman, Coveo
Michael Serbinis, founder and CEO, League and board chair of the Perimeter Institute
Adam Keating, CEO and Founder, CoLab
Scaling our champions and attracting investment
Patrick Pichette, general partner, Inovia Capital
Ajay Agrawal, professor of strategic management, University of Toronto, founder, Next Canada and founder, Creative Destruction Lab
Sonia Sennik, CEO, Creative Destruction Lab
Ben Bergen, president, Council of Canadian Innovators
Building safe AI systems and public trust in AI
Mary Wells, dean of engineering, University of Waterloo
Joelle Pineau, chief AI officer, Cohere
Taylor Owen, founding director, Center [sic] for Media, Technology and Democracy [McGill University]
Education and Skills
Natiea Vinson, CEO, First Nations Technology Council
Alex Laplante, VP – cash management technology Canada, Royal Bank of Canada and board member at Mitacs
David Naylor, professor of medicine – University of Toronto
Infrastructure
Garth Gibson, chief technology and AI officer, VDURA
Ian Rae, president and CEO, Aptum
Marc Etienne Ouimette, chair of the board, Digital Moment and member, OECD One AI Group of Experts, affiliate researcher, sovereign AI, Cambridge University Bennett School of Public Policy
Security
Shelly Bruce, distinguished fellow, Centre for International Governance Innovation
James Neufeld, founder and CEO, Samdesk
Sam Ramadori, co-president and executive director, LawZero
With files from Josh Scott
If you have the time, Riehl’s September 26, 2025 article offers more depth than may be apparent in the excerpts I’ve chosen.
It’s been a while since I’ve seen Arvind Gupta’s name. I’m glad to see he’s part of this Task Force (Research and Talent). The man was treated quite shamefully at the University of British Columbia. (For the curious, this August 18, 2015 article by Ken MacQueen for Maclean’s Magazine presents a somewhat sanitized [in my opinion] review of the situation.)
One final comment: the experts on the virtual panel and the members of Solomon’s Task Force are largely from Ontario and Québec. There is some representation from other parts of the country, but it is minor.
British Columbia wants entry into the national AI discussion
Just after I finished writing up this post, I received Kris Krug’s (techartist, quasi-sage, cyberpunk anti-hero from the future) October 14, 2025 communication (received via email) regarding an initiative from the BC + AI community,
Growth vs Guardrails: BC’s Framework for Steering AI
Our open letter to Minister Solomon shares what we’ve learned building community-led AI governance and how BC can help.
Ottawa created a Minister of Artificial Intelligence and just launched a national task force to shape the country’s next AI strategy. The conversation is happening right now about who gets compute, who sets the rules, and whose future this technology will serve.
Our new feature, Growth vs Guardrails [see link to letter below for ‘guardrails’], is already making the rounds in those rooms. The message is simple: if Ottawa’s foot is on the gas, BC is the steering wheel and the brakes. We can model a clean, ethical, community-led path that keeps power with people and place.
This is the time to show up together. Not as scattered voices, but as a connected movement with purpose, vision, and political gravity.
Over the past few months, almost 100 of us have joined the new BC + AI Ecosystem Association non-profit as Founding Members. Builders. Artists. Researchers. Investors. Educators. Policymakers. People who believe that tech should serve communities, not the other way around.
Now we’re opening the door wider. Join and you’ll be part of the core group that built this from the ground up. Your membership is a declaration that British Columbia deserves to shape its own AI future with ethics, creativity, and care.
If you’ve been watching from the sidelines, this is the time to lean in. We don’t do panels. We do portals. And this is the biggest one we’ve opened yet.
See you inside,
Kris Krüg Executive Director BC + AI Ecosystem Association kk@bc-ai.ca | bc-ai.ca
Canada just spun up a 30-day sprint to shape its next AI strategy. Minister Evan Solomon assembled 26 experts (mostly industry and academia) to advise on research, adoption, commercialization, safety, skills, and infrastructure.
On paper, it’s a pivot moment. In practice, it’s already drawing fire. Too much weight on scaling, not enough on governance. Too many boardrooms, not enough frontlines. Too much Ottawa, not enough ground truth.
…
This is Canada’s chance to reset the DNA of its AI ecosystem.
But only if we choose regeneration over extraction, sovereign data governance over corporate capture, and community benefit over narrow interests.
…
The Problem With The Task Force
Research says: The group’s stacked with expertise. But critics flag the imbalance. Where’s healthcare? Where’s civil society beyond token representation? Where are the people who’ll feel AI’s impact first: frontline workers, artists, community organizers?
…
The worry: Commercialization and scaling overshadow public trust, governance, and equitable outcomes. Again.
The numbers back this up: Only 24% of Canadians have AI training. Just 38% feel confident in their knowledge. Nearly two-thirds see potential harm. 71% would trust AI more under public regulation.
We’re building a national strategy on a foundation of low literacy and eroding trust. That’s not a recipe for sovereignty. That’s a recipe for capture.
Principles for a National AI Strategy: What BC + AI Stands For
I have two art/science events and one art/science conference/festival (IRL [in real life or in person] and Zoom) taking place in Toronto, Ontario.
October 16, 2025
There is a closing event for the “I don’t do Math” series mentioned in my September 8, 2025 posting,
…
ABOUT “I don’t do math” is a photographic series referencing dyscalculia, a learning difference affecting a person’s ability to understand and manipulate number-based information.
This initiative seeks to raise awareness about the challenges posed by dyscalculia with educators, fellow mathematicians, and parents, and to normalize its existence, leading to early detection and augmented support. In addition, it seeks to reflect on and question broader issues and assumptions about the role and significance of Mathematics and Math education in today’s changing socio-cultural and economic contexts.
The exhibition will contain pedagogical information and activities for visitors and students. The artist will also address the extensive research that led to the exhibition. The exhibition will feature two panel discussions, one following the opening and one to conclude the exhibition.
…
I have some information from an October 12, 2025 ArtSci Salon announcement (received via email) about the “I don’t do math” closing event,
Join us for
Closing Exhibition Panel Discussion Thursday, October 16, 2025 10:00 am – 12:00 pm room 309 The Fields Institute for Research in Mathematical Sciences (or online)
Artist Ann Piché will be in conversation with Andrew Fiss, Jacqueline Wernimont, Amenda Chow, Ellen Abrams, Michael Barany and JP Ascher
The second event mentioned in the October 12, 2025 ArtSci Salon announcement, Note 1: A link has been removed, Note 2: This event is part of a larger series,
…
Marco Donnarumma Monsters of Grace: bodies, sounds, and machines
Tuesday, October 21, 2025 3:30-4:30 PM Sensorium Research Loft 4th floor Goldfarb Centre for Fine Arts York University
About the talk What is sound to those who do not hear it? How does one listen to something that cannot be heard? What kind of sensory gaps are created by aiding technologies such as prostheses and artificial intelligence (AI)? As a matter of fact, the majority of non-deaf people hear only partially due to age and personal experience. Still, sound is most often considered through the normalizing viewpoint of the non-deaf. If I become your body, what does sound become for me? Join us to welcome Marco Donnarumma ahead of his new installation/performance at Paul Cadario Conference Room (Oct 22, 8-10 PM, University College [University of Toronto] – 15 King’s College Circle). His talk will focus on this latest work in the context of a larger body of work titled “I Am Your Body,” an ongoing project investigating how normative power is enforced through the technological mediation of the senses.
About the artist: Marco Donnarumma is an artist, inventor and theorist. His oeuvre confronts normative body politics with uncompromising counter-narratives, where bodies are in tension between control and agency, presence and absence, grace and monstrosity. He is best known for using sound, AI, biosensors, and robotics to turn the body into a site of resistance and transformation. He has presented his work in thirty-seven countries across Asia, Europe, North and South America and is the recipient of numerous accolades, most notably the German Federal Ministry of Research and Education’s Artist of the Science Year 2018, and the Prix Ars Electronica’s Award of Distinction in Sound Art 2017. Donnarumma received a ZER01NE Creator grant in 2024 and was named a pioneer of performing arts with advanced technologies by the major national newspaper Der Standard, Austria. His writings are published in Frontiers in Computer Science, Computer Music Journal and Performance Research, among others, and his newest book chapter, co-authored with Elizabeth Jochum, will appear in Robot Theaters by Routledge. Together with Margherita Pevere he runs the performance group Fronte Vacuo.
…
I wonder if Donnarumma’s “Monsters of Grace: bodies, sounds, and machines” received any inspiration from “Monsters of Grace” (Wikipedia entry) or if it’s just happenstance, Note: Links have been removed,
Monsters of Grace is a multimedia chamber opera in 13 short acts directed by Robert Wilson, with music by Philip Glass and libretto from the works of 13th-century Sufi mystic Jalaluddin Rumi. The title is said to be a reference to Wilson’s corruption of a line from Hamlet: “Angels and ministers of grace defend us!” (1.4.39).
So, the October 21, 2025 event is a talk at York University taking place before the “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence” (more below).
“Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference and arts festival at the University of Toronto
The conference (October 23 – 24, 2025) is concurrent with the arts festival (October 19 – 25, 2025) at the University of Toronto. Here’s more from the event homepage on the https://bmolab.artsci.utoronto.ca/ website, Note 1: BMO stands for Bank of Montreal, Note 2: No mention of Edward Albee and “Who’s afraid of Virginia Woolf?,”
2025 marks an inflection point in our technological landscape, driven by seismic shifts in AI innovation.
Who’s Afraid of AI? Arts, Science, and the Futures of Intelligence is a week-long inquiry into the implications and future directions of AI for our creative and collective imaginings, and the many possible futures of intelligence. The complexity of this immediate future calls for interdisciplinary dialogue, bringing together artists, AI researchers, and humanities scholars.
In this volatile domain, the question of who envisions our futures is vital. Artists explore with complexity and humanity, while the humanities reveal the histories of intelligence and the often-overlooked ways knowledge and decision-making have been shaped. By placing these voices in dialogue with AI researchers and technologists, Who’s Afraid of AI? examines the social dimensions of technology, questions tech solutionism from a social-impact perspective, and challenges profit-driven AI with innovation guided by public values.
The two-day conference at the University of Toronto’s University College anchors the week and features panels and debates with leading figures in these disciplines, including a keynote by Geoffrey Hinton, the “Godfather of AI” and 2024 Nobel Laureate in Physics, and the 2025 Neil Graham Lecture in Science by AI pioneer Fei-Fei Li.
Throughout the week, the conversation continues across the city with:
AI-themed and AI powered art shows and exhibitions
Film screenings
Innovative theatre
Experimental music
Who’s Afraid of AI? demonstrates that Toronto has not only shaped the history of AI but continues to prepare its future. Step into this changing landscape and be part of this transformative dialogue — register today!
Organizing Committee:
Pia Kleber, Professor-Emerita, Comparative Literature, and Drama, U of T Dirk Bernhardt-Walther, Department of Psychology, Program Director, Cognitive Science, U of T David Rokeby, Director, BMO Lab, Centre for Drama, Theatre and Performance Studies, U of T Rayyan Dabbous, PhD candidate, Centre for Comparative Literature, U of T
…
This looks like a pretty interesting programme (if you’re mainly focused on AI and the creative arts), from the event homepage on the https://bmolab.artsci.utoronto.ca/ website, Note 1: All times are ET, Note 2: I have not included speakers’ photos,
The conference will explore core questions about AI such as its capabilities, possibilities and challenges, bringing their unique research, creative practice, scholarship and experience to the discussion. Speakers will also engage in an interdisciplinary conversation on topics including AI’s implications for theories of mind and embodiment, its influence on creation, innovation, and discovery, its recognition of diverse perspectives, and its transformation of artistic, cultural, political and everyday practices.
Thursday, October 23, 2025
Mind the World
9 AM | Clark Reading Room, University College – 15 King’s College Circle
What are the merits and limits of artificial intelligence within the larger debate on embodiment? This session brings together an artist who has given AI a physical dimension, a neuroscientist who reckons with the biological neural networks inspiring AI, and a humanist knowledgeable of the longer history in which the human has tried to decouple itself from its bodily needs and wants.
Suzanne Kite Director, The Wihanble S’a Center for Indigenous AI
James DiCarlo Director, MIT Quest for Intelligence
N. Katherine Hayles James B. Duke Distinguished Professor Emerita of Literature
Staging AI
11 AM | Clark Reading Room, University College – 15 King’s College Circle
How is AI changing the arts? To answer this question, we bring together theatre directors and artists who have made AI the main driving plot of their stories and those who opted to keep technology secondary in their productions.
Kay Voges Artistic Director, Schauspiel Köln
Roland Schimmelpfennig Playwright and Director, Berlin
Hito Steyerl Artist, Filmmaker and Writer, Berlin
Recognizing ‘Noise’
2 PM | Clark Reading Room, University College – 15 King’s College Circle
How can we design a more inclusive AI? This session brings together an artist who has worked with AI and has been sensitive to groups who may be excluded by its practice, an inclusive design scholar who has grappled with AI’s potential for personalized accessibility, and a humanist who understands the longer history on pattern and recognition from which emerged AI.
Marco Donnarumma Artist, Inventor, Theorist, Berlin
Jutta Treviranus Director, OCADU [Ontario College of Art & Design University], Inclusive Design Research Centre
Eryk Salvaggio Media Artist and Tech Policy Press Fellow, Rochester
Art, Design, and Application are the Solution to AI’s Charlie Chaplin Problem
4 PM | Hart House Theatre – 7 Hart House Circle
Daniel Wigdor CoFounder and Chief Executive Officer, AXL
Keynote and Neil Graham Lecture in Science
4:15 PM | Hart House Theatre – 7 Hart House Circle
Fei-Fei Li Sequoia Professor in Computer Science, Stanford Institute for Human-Centered AI
Geoffrey Hinton 2024 Nobel Laureate in Physics, Professor Emeritus in Computer Science
…
Friday, October 24, 2025
Life with AI
9 AM | Clark Reading Room, University College – 15 King’s College Circle
How do machine minds relate to human minds? What can we learn from one about the other? In this session we interrogate the impact of AI on our understanding of human knowledge and tool-making, from the perspective of philosophy, computer science, as well as the arts.
Jeanette Winterson Author, Fellow of the Royal Society of Literature, Great Britain
Leif Weatherby Professor of German and Director of Digital Theory Lab at New York University
Jennifer Nagel Professor, Philosophy, University of Toronto Mississauga
Discovery & In/Sight
11 AM | Clark Reading Room, University College – 15 King’s College Circle
This session explores creative practice through the lens of innovation and cultural/scientific advancement. An artist who creates with critical inspiration from AI joins forces with an innovation scholar who investigates the effects of AI on our decision making, as well as a philosopher of science who understands scientific discovery and inference as well as their limits.
Vladan Joler Visual Artist and Professor of New Media, University of Novi Sad [Serbia]
Alán Aspuru-Guzik Professor of Chemistry and Computer Science, University of Toronto
Brian Baigrie Professor, Institute for the History and Philosophy of Science & Technology, University of Toronto
Social history & Possible Futures
2 PM | Clark Reading Room, University College – 15 King’s College Circle
How do AI ownership and its private uses coexist within a framework of public good? This session brings together an artist who has created AI tools to be used by others, an AI ethics researcher who has turned algorithmic bias into collective insight, and a philosopher who understands the connection between AI and the longer history of automation and work from which AI emerged.
Memo Akten Artist working with Code, Data and AI, UC San Diego
Beth Coleman Professor, Institute of Communication, Culture, Information and Technology, University of Toronto
Matteo Pasquinelli Professor, Philosophy and Cultural Heritage Università Ca’ Foscari Venezia [Italy]
A Theory of Latent Spaces | Conclusion: Where do we go from here?
4 PM | Clark Reading Room, University College – 15 King’s College Circle
Antonio Somaini, curator of the remarkable ‘World through AI’ exhibition at the Musée du Jeu de Paume in Paris, will discuss the way in which ‘latent spaces’, a core characteristic of current AI models, act as “meta-archives” that profoundly shape our relation with the past.
Following this, we will engage in a larger discussion amongst the various conference speakers and attendees on how we can, as artists, humanities scholars, scientists and the general public, collectively imagine and cultivate a future where AI serves the public good and enhances our individual and collective lives.
Antonio Somaini Curator and Professor, Sorbonne Nouvelle [Université Sorbonne Nouvelle]
…
You can register here for this free conference, although there’s now a waitlist for in-person attendance. Do not despair; there’s access by Zoom,
In case you can’t make it in person, join us by Zoom:
October 22 | 2 PM | Student Forum and AI Commentary Contest Award | Paul Cadario Conference Room, University College – 15 King’s College Circle
October 22 | 8 – 10 PM | Marco Donnarumma, world première of a new performance installation | Paul Cadario Conference Room, University College – 15 King’s College Circle
October 23 | 2 PM | Jeanette Winterson: Arts & AI Talk | Paul Cadario Conference Room, University College – 15 King’s College Circle
October 24 | 7 PM | The Kiss by Roland Schimmelpfennig | The BMO Lab, University College – 15 King’s College Circle (Note: we are scheduling more performances. Check back for more info soon!)
October 25 | 8 PM | AI Cabaret featuring Jason Sherman, Rick Miller, Cole Lewis, BMO Lab projects and more | Crow’s Theatre, Nada Ristich Studio-Gallery – 345 Carlaw Avenue.
There was a bit of online excitement over the possibility that gene-edited pork would be entering the Canadian market soon. Environment and Climate Change Canada and Health Canada have a public consultation focused on the risk assessment process regarding the entry of gene-edited pigs into Canada’s food system. Before giving a link to the relevant government website, I have some information.
Factual
The best outline I could find was in Hailey Bennett’s July 10, 2025 article “US meat could soon be gene-edited. Here’s what that means” for British Broadcasting Corporation’s (BBC) Science Focus, Note 1: Bennett seems unaware that gene-edited pork may reach the Canadian market first; Note 2: Links have been removed,
From hot dogs to crispy bacon, US food staples could be made of gene-edited meat as early as 2026. Yes, really: the US Food and Drug Administration (FDA) recently approved the farming of a specific kind of genetically enhanced pig. And regulators around the world may not be far behind.
So, should we be worried? Will this pork truly be safe to eat? And just how ethical is it to create these pigs?
The first thing you should know: not every gene-edited animal will be directly spawned from a lab. Rather, such livestock are merely bred from animals whose DNA was edited early on – often at the single-cell or fertilised egg stage [also known as germline editing] – to give them beneficial traits.
And no, this gene editing isn’t about making pork taste better – it’s about protecting pigs from disease.
For instance, British company Genus has now farmed pigs with a genetic tweak that makes them resistant to PRRS (Porcine Reproductive and Respiratory Syndrome), a virus that attacks pigs’ immune cells. PRRS is a major threat: it can kill piglets, trigger miscarriages in pregnant sows, and weaken pigs’ immune systems, leaving them vulnerable to other infections.
These genetically enhanced pigs are even less of a novelty when you consider there is no effective vaccine for PRRS.
…
How heavily are these pigs being altered – and at what cost to their welfare? They’re fair questions. But in reality, the change is surprisingly minimal.
To stop the PRRS virus in its tracks, scientists snipped out a small section of pig DNA – part of the CD163 protein, which the virus uses to enter pig cells.
Pigs with the edited gene are resistant to almost all known strains of PRRS but are otherwise, Genus claims, “the same as conventional pigs”. And despite initial concerns that the virus could evolve to recognise and avoid the edited protein, that hasn’t happened so far.
According to Dr Christine Tait-Burkard, a Research Fellow at the University of Edinburgh’s Roslin Institute, who worked with Genus to develop the original gene-edited pigs, the natural CD163 protein they edited is “like nine beads on a string.” The edit removes only bead number five.
…
As Tait-Burkard explains, the edit is one that could also be naturally present in some pigs. “The chances are that there’s a pig somewhere in the world that’s resistant to this virus,” she says. “But we just don’t have the time to naturally breed this in. That’s where we have to start using biotechnology to integrate it into the breeding herd.”
…
In the 1990s and 2000s, genetically modified (GM) crops generated headlines and consumer concern about ‘Frankenfoods’. Ultimately, though, many GM crops were approved and the majority of scientists consider them safe to eat. These modified crops often carry foreign DNA – ‘Bt’ corn, for example, contains a gene from the bacterium Bacillus thuringiensis, enabling it to make a protein that kills insect pests.
The current generation of CRISPR-edited food products, by contrast, only contain changes that could naturally occur within the species. Scientists aren’t inventing entirely new kinds of pigs.
…
Bennett’s July 10, 2025 article does a good job of covering the topic but I advise supplementing it with other pieces.
Canada and the gene-edited pig
Sylvain Charlebois’ July 3, 2025 article “Transparency is paramount as gene-edited pork approaches market launch” for Canadian Grocer takes what can seem like abstract questions about gene-edited pigs and applies them to real-life issues, Note: I have inserted two “[sic?]” notations as I have been unable to confirm when gene-edited pork is likely to enter the Canadian market. At a guess, Charlebois is saying that Canadian consumers are likely to get the products later in 2025,
In April [2025], the U.S. Food and Drug Administration (FDA) approved the commercial distribution of pigs genetically edited with CRISPR technology to resist porcine reproductive and respiratory syndrome (PRRS), a costly and widespread disease in pork production. These pigs are expected to enter the American market in 2026 [sic?]. Yet, Canadian consumers could start seeing gene-edited pork products in stores—unlabeled and unannounced—as early as next year [sic?].
…
Canada imported more than US$850 million worth of U.S. pork last year, according to the National Pork Producers Council. So, regardless of Canadian regulatory decisions, gene-edited meat is coming. And yet, no label will tell you whether your pork chop or bacon came from a genetically altered animal.
That lack of transparency is precisely what Quebec-based duBreton, North America’s leading organic pork producer, is warning against. The company argues that gene editing is incompatible with organic and humane production standards—and more importantly, with informed consumer choice. Whether or not gene-edited meat poses a food safety risk isn’t the central issue. The issue is whether consumers have the right to know how their food was produced.
…
… GM [genetic modification] technology has long been accepted in grain production. Genetically modified ingredients—largely from corn, canola and soy—are now commonplace in processed foods. These technologies have contributed to yield stability and lower input costs for farmers. But even in grains, labeling remains inconsistent, and the average consumer still doesn’t know which products contain modified ingredients.
What’s different in livestock is the emotional and ethical connection people have with animals and meat. A pork chop isn’t just a commodity—it represents values tied to animal welfare, sustainability and trust in the food system. That’s why gene editing in livestock raises more scrutiny than it has in crop science.
To be clear, gene editing isn’t inherently a bad thing. …
The failure isn’t scientific—it’s strategic. Rather than building a transparent narrative around innovation, the industry has often opted for silence, leaving the public to fill in the blanks. That vacuum has been seized by special interest groups, some of which traffic in fear and misinformation. The “Frankenfood” rhetoric may have been overblown, but it did shine a light on an ethical principle we should not ignore: consumers deserve to know.
Labeling gene-edited products is not about fear—it’s about trust. Informed choice is the cornerstone of any credible food policy. Consumers don’t need to be protected from innovation, but they do need to be respected. The question is not whether gene-edited meat should exist, it’s whether its presence should be hidden.
…
Public engagement/consultation
Gwendolyn Blue’s (professor, University of Calgary) July 10, 2025 essay for The Conversation suggested more and better public consultation should be part of the process, Note: Links have been removed,
The Canadian government is currently considering approving the entry of gene-edited pigs into the food system.
…
These pigs are resistant to porcine reproductive and respiratory syndrome (PRRS), a horrible and sometimes fatal disease that affects pigs worldwide. PRRS has significant economic, food security and animal welfare implications.
The United States Food and Drug Administration [FDA] recently greenlit the commercial production of gene-edited pigs. Will the Canadian government follow suit?
AquAdvantage and EnviroPig
In 2016, Canada approved the first transgenic animal for human consumption — an Atlantic salmon called AquAdvantage salmon that contains DNA from other species of fish.
This approval came more than 25 years after the genetically modified fish was created by scientists at Memorial University in Newfoundland. The approval and commercialization of AquAdvantage salmon faced strong public opposition on both sides of the border, including protests, supermarket boycotts and court battles. In 2024, the company that produced AquAdvantage salmon announced that it was shutting down its operations [emphases mine].
In 2012, the Canadian government approved the manufacture of a transgenic pig known by its trade name, EnviroPig. Created by scientists at the University of Guelph, EnviroPigs released less phosphorus than conventionally bred pigs.
EnviroPig did not make it to market; the same year, the University of Guelph ended the EnviroPig project. Funding for the project had been suspended, in part because of consumer concerns.
Government regulation
Some researchers argue that government regulation of gene-edited animals should be less restrictive than for transgenic techniques. Gene editing introduces genetic changes that can also arise through conventional animal breeding, which is not subject to regulation. Gene-edited crops in Canada are treated the same as conventionally bred crops.
Others insist that stringent government regulation is necessary for gene editing to identify potential problems and ensure that laws keep up with industry and scientific ambition. Regulation plays a vital role in minimizing risk, encouraging public involvement and building trust.
Social science research has, for decades, demonstrated that resistance to biotechnology is not because of the public’s lack of knowledge [emphasis mine], as is often argued by biotechnology proponents. Public resistance to biotechnology is better understood as a rejection of potential harms imposed by governments and industry without public input and consent [emphasis mine].
Ethical, moral, cultural and political concerns
…
Similar to the U.S., Canada does not have specific gene technology regulation. Rather, the federal government relies on pre-existing environmental and food safety legislation. Canadian regulatory agencies use a risk, novelty and product-based approach to assess animal biotechnology. From a regulatory standpoint, distinctions between technical processes — like transgenic modification versus gene editing — are less important than the safety of the final product.
The Canadian government has recently updated its federal environmental and health regulations. This includes introducing mandatory public consultations for animals (vertebrates, specifically) created using biotechnology.
…
… regulatory and academic debates about the gene editing of animals are largely informed by scientists and industry proponents with considerably less input from the public, Indigenous communities and social sciences and humanities researchers.
Consulting the public
From a social standpoint, the process by which gene editing is assessed matters as much as the safety of the final product. Inclusive public engagement is essential to ensure that the production of gene-edited food animals aligns with societal needs and values.
Reactions to gene technologies are based on underlying values and beliefs, and sustained opportunities for public reflection and deliberation are vital for responsible innovation.
Important questions should be addressed: Who will reap the benefits of gene-editing techniques? Who will bear the costs and harms? What are the potential implications, including hard-to-anticipate social and political changes? How should decision-making proceed to ensure that Canadians have sufficient opportunities for input?
Currently, for the gene-edited pigs, members of the public can submit comments to the government until July 20, 2025.
…
Measured optimism and a lot of enthusiasm (two articles)
Geralyn Wichers’s April 3, 2025 article for The Western Producer provided more details and measured optimism,
Canadian consumers are largely fine with pork from gene-edited pigs — at least once the science and benefits are explained to them.
That’s according to new research from genetic development company PIC (Pig Improvement Company), which is using gene editing technology to develop a pig resistant to porcine reproductive and respiratory syndrome (PRRS).
A survey found that, after consumers read a description of gene editing in food and the PRRS-resistant pig, 49 per cent indicated positive or very positive sentiments, 38 per cent were neutral while 14 per cent were negative or very negative.
“Even though we’ve seen a lot of investments in things like vaccines and improved biosecurity, the problem is getting worse, not better,” said Banks Baker, PIC’s global director of product sustainability.
North America’s pork producers have been dealing with PRRS since the late 1980s. The viral disease causes respiratory issues in all ages of pigs. In breeding animals, though, it can derail reproductive performance.
…
A 2024 study by University of Guelph researcher Lynn Marchand estimated the annual cost of PRRS to a benchmark Manitoban 1,200-sow farrow-to-finish farm could be $588,709 to $631,602 [Manitoba is a province in Canada].
In January 2024, Ontario saw more severe cases of PRRS than it had in two years, veterinarian Dr. Ryan Tenbergen recently told attendees at the South Western Ontario Pork Conference. New strains of the disease, infecting more easily and severely, were also popping up in the United States.
The industry argues that genetically modified organisms have garnered a reputation for being unsafe and unhealthy, despite scientific evidence to the contrary. Further, they argue, gene editing is different.
Genetic modification typically refers to adding genetic information from an outside source, whereas gene editing generally involves making small changes to the organism’s existing genome.
“GMO has had a long shadow,” said Marisa Pooley, PIC’s director of communications.
“We have used it as a case study to make sure that we’re putting the consumer at the centre of this.”
Consumers identified reduced animal illness and reduced antibiotic use as factors that would motivate them to purchase gene-edited pork.
They seemed to like the idea that gene editing could help farmers raise healthier animals more sustainably and to grow crops better able to withstand climate change. The idea that gene editing can be used to cure human diseases such as sickle cell anemia and cancer also improved feelings.
…
John Greig’s May 29, 2025 article had details not in the other articles and presented a more impatient attitude,
The approval in the United States for food use of pigs gene edited to resist Porcine Reproductive and Respiratory Syndrome, or PRRS, will be a good test for Canada’s year-old approval process for gene editing.
…
I’ve talked to farmers who have dealt with PRRS outbreaks, and many herds in Canada have battled it over the past 35 years. The level of abortion and respiratory stress the disease causes is hard to watch for the people who care for the pigs every day.
The Canadian industry is now skilled at managing and eliminating the disease once it’s in a production system, but it takes one biosecurity break before it is back again.
A gene-edited solution to reducing PRRS would be a tremendous win for animal welfare, the mental health of farm workers, and farm business productivity and profitability [emphasis mine].
I’m interested to see how quickly the gene-edited pigs are approved for food use in Canada [emphasis mine]. It will be an interesting test case, as genetic modification of livestock is something the public has not accepted, despite the potential improvements in animal welfare and food safety.
Canada created a process that follows much of the rest of the world in approving gene editing through conventional approval processes when the expression of the gene is not novel. Gene editing works by turning on and off already-existing genes within an organism.
…
There’s momentum in Canada to catch up to the rest of the world in speed of approval of new agriculture technologies [emphasis mine], as government and industry push to improve the country’s lagging productivity.
The successful discovery of the gene edit is a win for a swine genetics sector that has undergone significant consolidation in the past decade to the point where there are only a handful of swine genetics companies.
The consolidation was driven by the rise of big data analytics and the need to invest in technologies like gene editing.
The PRRS resistance gene editing process was developed by GenusPIC, itself a merger of two large breeding companies, Genus and Pig Improvement Company (PIC). Unfortunately, unlike the dairy sector, where Semex, a Canadian company, is one of the major players in genetics, there are no more Canadian swine genetics companies of any scale. Alliance Genetics was acquired by Danbred in 2022, and Genesus, the last independent Canadian swine genetics company, has been through a receivership process and is now under new ownership.
Regarding approval of new agricultural technologies, I wish Greig had specified what he meant or given examples. The gene-edited pork that was the topic of his article raises the question: how could Canada be trailing the rest of the world on gene-edited pork when, to my knowledge, no other country has approved it for the consumer market? Assuming it gets approved here at all.
Share your thoughts: Participate in the risk assessment process for four lines of Gene Edited Pigs [emphasis mine]
The New Substances Program is seeking comments, including scientific information and test data that could inform the risk assessment process for four lines of gene edited pigs notified by Genus PLC on April 22nd, 2025. This consultation is open from June 20, 2025, to July 20, 2025 [emphasis mine].
NSN Numbers: 22051, 22196-22198
Substance designation of the organisms:
A gene edited Sus scrofa domesticus, Landrace descended from the L02 line, lacking a binding site for Porcine Reproductive and Respiratory Syndrome Virus (PRRSV)
A gene edited Sus scrofa domesticus, Large White descended from the L03 line, lacking a binding site for Porcine Reproductive and Respiratory Syndrome Virus (PRRSV)
A gene edited Sus scrofa domesticus, mixed breed of Pietrain, Large White, Hampshire and Duroc descended from the L65 line, lacking a binding site for Porcine Reproductive and Respiratory Syndrome Virus (PRRSV)
A gene edited Sus scrofa domesticus, Duroc descended from the L800 line, lacking a binding site for Porcine Reproductive and Respiratory Syndrome Virus (PRRSV)
Subject to consultation requirements under section 108.1 of CEPA: Yes, the organism is a vertebrate.
Activity: The notified organisms are four genetically edited domestic breeds of pigs (scientific name, Sus scrofa domesticus) which include:
Landrace
Large White
Mix of Pietrain, Large White, Hampshire and Duroc
Duroc
They are notified for use in breeding with commercially raised pigs used for pork production.
Genetic modifications: All four lines of pigs have had their genomes edited to remove a binding site for the virus that causes porcine reproductive and respiratory syndrome (PRRS). No new genetic material has been introduced to the notified organisms.
The gene editing is intended to remove the binding site for PRRS. Without this binding site, the virus is unable to bind and infect the host organism. PRRS is a highly contagious viral infection that is considered to be one of the most significant diseases in commercially raised pigs around the world. Currently, there is no effective treatment program for acute PRRS. The removal of the binding site for the PRRS virus from the notified organisms makes these pigs resistant to infection by the virus.
Exposure: According to the notifier, the pigs will be used for conventional breeding in commercial pig production systems in the same manner as non-edited pigs, to generate PRRS virus-resistant pig offspring for food and feed product use. The usage includes the import of animals derived from the edited pigs into production facilities in Canada. The notifier plans to maintain the animals under confinement and has described the biosafety and biosecurity procedures to be used at these facilities. There are no plans for any introduction into the environment outside production facilities.
Waiver of information requirement: No waiver was requested.
Privacy Act Notice Statement
The personal information is collected under the authority of section 5 of the Department of the Environment Act and subsection 7(1) of the Financial Administration Act.
The New Substances Program, jointly administered by ECCC and Health Canada, is undertaking public consultation that could inform the risk assessment process for the four lines of gene edited pigs. The information is collected, used and disclosed for the purpose of evaluating the potential risks posed by the gene edited organisms to the environment and human health. Information collected by ECCC will be retained by the department and shared with Health Canada for the purposes of the evaluation. Your participation and decision to provide any information is voluntary.
The personal information created, held or collected by Environment and Climate Change Canada is protected under the Privacy Act. Information from this consultation will be used, disclosed and retained in accordance with the conditions listed in the Personal Information Bank Outreach Activities PSU 938.
Any questions or comments regarding this privacy notice may be directed to ECCC’s Access to Information and Privacy Division at ECATIP-ECAIPRP@ec.gc.ca. If you are not satisfied that your privacy has been adequately respected, you have the right to file a complaint. You may contact the Office of the Privacy Commissioner of Canada by calling their information center at 1-800-282-1376 or by visiting their contact page.
Request for Confidentiality under CEPA
Please provide information in English or French. Your information may be summarized and published in part or in full. Pursuant to section 313 of CEPA, any person who provides information in response to New Substance Notification 22051, 22196, 22197, and 22198 may submit, with the information, a request that it be treated as confidential. A request for confidentiality must indicate which specific information or data should be treated as confidential, and it must be submitted with reasons taking into account the criteria referred to in subsection 313(2) of CEPA.
Include your name, affiliation, telephone number, e-mail and mailing address and use the following format in the title of your submission: Public Participation: [NSN number(s)] – [Substance Designation].
Join in: how to participate
All interested parties are invited to provide comments, including scientific information and test data related to potential risks to the environment or human health from the four lines of gene edited pigs. This information will be considered as part of the department’s assessment of the organism’s potential risks to the environment or human health, which is ongoing. A summary of the public comments received as well as the New Substances Program’s responses will be published once the evaluation has been completed.
By mail
Send us a letter with your comments and input to the address in the contact information below.
All people in Canada are invited to provide comments, including scientific information and test data that could inform the risk assessment process. Information that may inform the risk assessment process could include:
environmental fate information
ecological effects information
human health effects information or
exposure information (including sources and routes of exposure)
Science and Technology Branch
Environment and Climate Change Canada
Place Vincent Massey, 351 St. Joseph Blvd
Gatineau QC K1A 0H3
Telephone: 1-800-567-1999 (Toll Free in Canada) or 1-819-938-3232 (Outside of Canada)
E-mail: substances@ec.gc.ca
…
Odds and sods
The Canadian Science Policy Centre (CSPC) ran an online panel “Navigating Geopolitical Shifts: Canada’s Innovation Strategy for Agriculture and Agrifood Sector” on May 21, 2025, which may or may not have included discussion of gene-editing. They have posted the video of the May 21, 2025 session (1.5 hours) online (or, should you be interested in some other session, you can check here).
The work underscores what can go wrong when an AI that manages city transport, safety, health and environmental monitoring predicts the future and intervenes in the present, significantly influencing urban governance and public policy development.
Generative Artificial Intelligence (AI) is boosting anticipatory forms of governance around the world, helping state actors to predict the future and focus their efforts in the present where the AI predicts they can have the greatest positive impact.
This phenomenon is particularly evident in China, but similar forms of governance mediated by generative AI are also becoming increasingly popular in Europe and the seeds of this trend are already visible in Ireland.
In this context, “city brains” represent an emerging type of generative AI currently employed in urban governance and public policy in a growing number of cities. City brains are large-scale AIs residing in vast digital urban platforms, which use Large Language Models (LLMs) to generate visions of urban futures: visions that are in turn used by policymakers to generate new urban policies. In China alone, there are over 500 cities developing city brains.
However, one of the main foreseeable dangers is the formation of a policy process that, under the influence of unintelligible LLMs, risks losing transparency and thus accountability, with another being the marginalisation of human stakeholders (citizens, in particular) as the role of AI in the management of cities keeps growing and governance begins to turn posthuman.
And by focusing on a real-world city brain project operating in the Haidian district of Beijing (China), which has a population of around 3,000,000 people and gathers data from over 14,000 CCTV cameras and over 20,000 environmental sensors, the researchers have been able to show these dangers are not just theoretical.
The dashboard of an existing City Brain system in China. Image credit: Dr Ying Xu.
Dr Federico Cugurullo, Associate Professor in Trinity’s School of Natural Sciences, is a leading expert in AI urbanism and the first author of the research, which has been published in the journal Policy and Society.
He said: “We can think of the Haidian city brain project as a gigantic panopticon that constantly observes what is happening in the city. It is operated by AI and focuses on three main areas of governance that are shaped by its predictions: environmental risk management, traffic management and public security.”
“For example, in the case of an impending natural disaster, policies are rapidly implemented to build new infrastructure meant to reinforce riverbanks and increase the efficiency of the city’s drainage systems. Outcomes also include direct interventions when, for example, police officers are dispatched to prevent illegal activities in an area where, according to the city brain, crimes are likely to take place in the near future.”
“However, the predictions of the Haidian city brain are far from being infallible. Our research reveals that the accuracy rate of what the city brain predicts varies from 60% to 90%, which leaves a significant margin of error around such important decisions, for which there is no understanding as to why they have been implemented. This is particularly dangerous when it comes to predictive policing, since any error made by AI means that an innocent person will be targeted by the police for hypothetical crimes that never took place.”
This research forms part of the ORACLE project led by Professor Cugurullo, which was funded by Research Ireland (formerly the Irish Research Council).
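To get a feel for what a 60% to 90% accuracy rate means at the scale of a district of three million people, here’s a back-of-the-envelope sketch. The number of daily predictions is my own hypothetical figure for illustration, not a number from the study:

```python
# Back-of-the-envelope: how many wrong interventions does an
# imperfect predictive system generate at scale?
# The daily_alerts figure is a hypothetical assumption, not from the study.

def wrong_predictions(total_predictions: int, accuracy: float) -> int:
    """Expected number of incorrect predictions at a given accuracy rate."""
    return round(total_predictions * (1 - accuracy))

daily_alerts = 1000  # hypothetical number of daily city-brain predictions

for accuracy in (0.60, 0.90):
    errors = wrong_predictions(daily_alerts, accuracy)
    print(f"At {accuracy:.0%} accuracy, ~{errors} of {daily_alerts} "
          f"daily predictions are wrong.")
```

Even at the high end of the reported range, one prediction in ten is wrong, and in a predictive-policing context each of those errors can mean an innocent person attracting police attention.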
For anyone unfamiliar with the ‘panopticon’, there’s an entry in Wikipedia, Note: Links have been removed,
The panopticon is a design of institutional building with an inbuilt system of control, originated by the English philosopher and social theorist Jeremy Bentham in the 18th century. The concept is to allow all prisoners of an institution to be observed by a single corrections officer, without the inmates knowing whether or not they are being watched.
Although it is physically impossible for the single guard to observe all the inmates’ cells at once, the fact that the inmates cannot know when they are being watched motivates them to act as though they are all being watched at all times. They are effectively compelled to self-regulation. The architecture consists of a rotunda with an inspection house at its centre. From the centre, the manager or staff are able to watch the inmates. Bentham conceived the basic plan as being equally applicable to hospitals, schools, sanatoriums, and asylums. …
I have not been able to find a website for Cugurullo’s ORACLE project but I did find Cugurullo’s January 3, 2025 essay “AI could make cities autonomous, but that doesn’t mean we should let it happen,” which discusses a new field “AI urbanism” and includes the example of an enormous Saudi Arabian project, Neom and its linear city, The Line.
A scientific team from Universidad Carlos III de Madrid (UC3M), in collaboration with University College London (England) and the University of California, Davis (USA), has found that smart TVs send viewing data to their servers. This allows brands to generate detailed profiles of consumers’ habits and tailor advertisements based on their behaviour.
The research revealed that this technology captures screenshots or audio to identify the content displayed on the screen using Automatic Content Recognition (ACR) technology. This data is then periodically sent to specific servers, even when the TV is used as an external screen or connected to a laptop.
“Automatic Content Recognition works like a kind of visual Shazam, taking screenshots or audio to create a viewer profile based on their content consumption habits. This technology enables manufacturers’ platforms to profile users accurately, much like the internet does,” explains one of the study’s authors, Patricia Callejo, a professor in UC3M’s Department of Telematics Engineering and a fellow at the UC3M-Santander Big Data Institute. “In any case, this tracking—regardless of the usage mode—raises serious privacy concerns, especially when the TV is used solely as a monitor.”
The findings, presented in November [2024] at the Internet Measurement Conference (IMC) 2024, highlight the frequency with which these screenshots are transmitted to the servers of the brands analysed: Samsung and LG. Specifically, the research showed that Samsung TVs sent this information every minute, while LG devices did so every 15 seconds. “This gives us an idea of the intensity of the monitoring and shows that smart TV platforms collect large volumes of data on users, regardless of how they consume content—whether through traditional TV viewing or devices connected via HDMI, like laptops or gaming consoles,” Callejo emphasises.
To test the ability of TVs to block ACR tracking, the research team experimented with various privacy settings on smart TVs. The results demonstrated that, while users can voluntarily block the transmission of this data to servers, the default setting is for TVs to perform ACR. “The problem is that not all users are aware of this,” adds Callejo, who considers this lack of transparency in initial settings concerning. “Moreover, many users don’t know how to change the settings, meaning these devices function by default as tracking mechanisms for their activity.”
This research opens up new avenues for studying the tracking capabilities of cloud-connected devices that communicate with each other (commonly known as the Internet of Things, or IoT). It also suggests that manufacturers and regulators must urgently address the challenges that these new devices will present in the near future.
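The press release compares ACR to a “visual Shazam” but doesn’t describe the mechanics. One common building block of content-recognition systems in general is a perceptual hash: shrink a frame to a tiny grayscale grid, then record which cells are brighter than the frame’s average, so that similar frames produce similar bit patterns. Here’s a minimal sketch of that general technique; it is an illustration only, not the ACR implementation any TV manufacturer actually uses:

```python
# A minimal perceptual "average hash", a common building block of
# content-recognition systems. Illustrative sketch only -- not the
# ACR implementation used by any TV manufacturer.

def average_hash(pixels: list[list[int]]) -> int:
    """Hash a grayscale image (rows of 0-255 values): each bit records
    whether a pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance means similar frames."""
    return bin(a ^ b).count("1")

frame = [[10, 200], [220, 30]]      # a tiny 2x2 "screenshot"
similar = [[12, 198], [215, 35]]    # same content, slight noise
different = [[200, 10], [30, 220]]  # different content

assert hamming_distance(average_hash(frame), average_hash(similar)) == 0
assert hamming_distance(average_hash(frame), average_hash(different)) == 4
```

A fingerprint like this is tiny compared with a full screenshot, which is part of what makes frequent uploads practical: at the reported capture rates of once per minute (Samsung) and once every 15 seconds (LG), a device would generate roughly 1,440 and 5,760 fingerprintable frames per day respectively.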
This was on the Canadian Broadcasting Corporation’s (CBC) Day Six radio programme and the segment is embedded in a January 19, 2025 article by Philip Drost, Note: A link has been removed,
When a Tesla Cybertruck exploded outside Trump International Hotel in Las Vegas on New Year’s Day [2025], authorities were quickly able to gather information, crediting Elon Musk and Tesla for sending them info about the car and its driver.
But for some, it’s alarming to discover that kind of information is so readily available.
“Most carmakers are selling drivers’ personal information. That’s something that we know based on their privacy policies,” Zoë MacDonald, a writer and researcher focussing on online privacy and digital rights, told Day 6 host Brent Bambury.
The Las Vegas Metropolitan Police Department said the Tesla CEO was able to provide key details about the truck’s driver, who authorities believe died by self-inflicted gun wound at the scene, and its movement leading up to the destination.
With that data, they were able to determine that the explosives came from a device in the truck, not the vehicle itself.
“We have now confirmed that the explosion was caused by very large fireworks and/or a bomb carried in the bed of the rented Cybertruck and is unrelated to the vehicle itself,” Musk wrote on X following the explosion.
To privacy experts, it’s another example of how your personal information can be used in ways you may not be aware of. And while this kind of data can be useful in an investigation, it’s by no means the only way companies use the information.
“This is unfortunately not surprising that they have this data,” said David Choffnes, executive director of the Cybersecurity and Privacy Institute at Northeastern University in Boston.
“When you see it all together and know that a company has that information and continues at any point in time to hand it over to law enforcement, then you start to be a little uncomfortable, even if — in this case — it was a good thing for society.”
CBC News reached out to Tesla for comment but did not hear back before publication.
…
I found this to be eye-opening, Note: A link has been removed,
MacDonald says the privacy concerns are a byproduct of all the technology new cars come with these days, including microphones, cameras, and sensors. The app that often accompanies a new car is collecting your information, too, she says.
The former writer for the Mozilla Foundation worked on a report in 2023 that examined vehicle privacy policies. For that study, MacDonald sifted through privacy policies from auto manufacturers. And she says the findings were staggering.
…
Most shocking of all is the information the car can learn from you, MacDonald says. It’s not just when you gas up or start your engine. Your vehicle can learn your sexual activity, disability status, and even your religious beliefs [emphasis mine].
MacDonald says it’s unclear how the car companies do this, because the information in the policies is so vague.
It can also collect biometric data, such as facial geometric features, iris scans, and fingerprints [emphasis mine].
…
This extends far past the driver. MacDonald says she read one privacy policy that required drivers to read out a statement every time someone entered the vehicle, to make them aware of the data the car collects, something that seems unlikely to go down before your Uber ride.
…
If that doesn’t bother you, then this might, Note: A link has been removed,
And car companies aren’t just keeping that information to themselves.
Confronted with these types of privacy concerns, many people simply say they have nothing to hide, Choffnes says. But when money is involved, they change their tune.
According to an investigation from the New York Times in March of 2024, General Motors shared information on how people drive their cars with data brokers that create risk profiles for the insurance industry, which resulted in people’s insurance premiums going up [emphases mine]. General Motors has since said it has stopped sharing those details [emphasis mine].
“The issue with these kinds of services is that it’s not clear that it is being done in a correct or fair way, and that those costs are actually unfair to consumers,” said Choffnes.
For example, if you make a hard stop to avoid an accident because of something the car in front of you did, the vehicle could register it as poor driving.
…
Drost’s January 19, 2025 article notes that the US Federal Trade Commission has proposed a five-year moratorium to prevent General Motors from selling geolocation and driver behavior data to consumer report agencies. In the meantime,
“Cars are a privacy nightmare. And that is not a problem that Canadian consumers can solve or should solve or should have the burden to try to solve for themselves,” said MacDonald.
If you have the time, read Drost’s January 19, 2025 article and/or listen to the embedded radio segment.
From a March 10, 2025 ArtSci Salon notice (received via email and visible here as of March 13, 2025), Note: I have reorganized this notice to put the events in date order and clarified for which event you are registering,
The ArtSci Salon (The Fields Institute) in collaboration with the NewONE program (U of T [University of Toronto]) are pleased to invite you to 3 engagements with Berlin-based interdisciplinary artist Kaethe Wenzel
Urban Pictograms Workshop March 20, 2025, 2:30-4:00 pm [ET] William Doo Auditorium, 45 Willcocks street [sic]
A workshop to challenge the urban rules and cultural stereotypes of street signs
This workshop is part of the programming of the NewONE: learning without borders, New College, University of Toronto. Throughout the academic year, our classes have been exploring important issues pertaining to social justice. During this workshop, we invite students and members of the community to work together to create urban pictograms (or urban stickers) that challenge inequalities and reaffirm principles of social justice. A selected number of pictograms will be displayed on the windows of the D.G Ivey New College Library and will be launched on April 3 [2025] at 4:30 pm [ET].
Public talk: Urban organisms. Re-imagining urban ecologies and collective futures March 27 [2025], 5 pm [ET], Room 230 The Fields Institute for Research in Mathematical Sciences 222 College Street
After all, the world is being produced collectively, across the borders of time and geography as well as across the boundaries of the individual. –Kaethe Wenzel
Join us in welcoming Berlin-based interdisciplinary artist Kaethe Wenzel. Wenzel has used a diverse variety of media and material such as textiles, found items, animal bones, plants, soil and other organic material, as well as small electronics to produce urban interventions and objects of speculative fiction at the intersection of art, science and technology. Wenzel challenges the notion of the artwork as an object to be observed in a gallery or museum, and the gallery as a constrained space with relatively limited interactions. Her extensive body of work extends to building facades, billboards, entire neighborhoods and the city, translating into urban interventions to explore the collective production of culture and the creation and negotiation of public space.
Public launch of Urban Pictograms Thursday, April 3, 2025, 4 pm [ET] onwards Windows of D.G Ivey Library, 20 Willcocks Street, New College, University of Toronto
There’s just a week to go till The Space’s conference and we’re pleased to confirm our speakers for each of the roundtable talks on Day 1 and 2. There’s lots that will be of interest, including:
* A timely debate about how to make online communities safer
* An introduction to CreaTech – a £6.75 million investment to develop small, micro- and medium-sized businesses specialising in creative tech like video games and immersive reality – find out how to get involved
* Discussions on the role of artists in a digital world
* Explorations of digital accessibility, community ownership, engagement and empowerment.
Day 1 Digital communities and online harms Wednesday 12 February
Digital accessibility, inclusion and community
Roundtable 1 How can we think differently about how we create digital content and challenge assumptions about what culture looks like? Exploring community ownership, engagement and empowerment through digital.
Zoe Partington – Acting CEO DaDa, Artist and Disability Consultant
Rachel Farrer – Associate Director, Cultural and Community Engagement Innovation Ecosystem, Coventry University
Jo Capper – Collaborative Programme Curator, Grand Union
Reducing online harms: how to make social media and online communities safer
Roundtable 2 In a world of increasingly polarised online spaces, what are the emerging trends and challenges when engaging audiences and building communities online?
Dr Rianna Walcott – Assistant Professor of Communication, University of Maryland
Day 2 The role of artists in a digital world Thursday 13 February
Calling all in the West Midlands!
Day 2 is taking place in person as well as streaming online. If you’d like to join us in person at the STEAMhouse in Birmingham, please register for free below.
As well as joining us for the roundtables we have lined up, there’ll be a great chance to network between sessions over lunch. We look forward to seeing you there!
CreaTech, the Digital West Midlands and beyond – Local and Global [CreaTech is an initiative of the UK’s Creative Industries Council]
Roundtable 1 An introduction to CreaTech – a £6.75 million investment to develop small, micro- and medium-sized businesses specialising in creative tech like video games and immersive reality. Creatives and academics from across the Midlands and further afield discuss emerging opportunities and what this means for the region and beyond.
Richard Willacy – General Director, Birmingham Opera Company
Tom Rogers – Creative Content Producer, Birmingham Royal Ballet
Lamberto Coccioli – Project lead, CreaTech Frontiers, Professor of Music and Technology at the Royal Birmingham Conservatoire (BCU)
Rachel Davis – Director of Warwick Enterprise, University of Warwick
Platforming artists and storytellers – are artists and storytellers missing from modern discourse?
Roundtable 2 Artists and storytellers have historically played pivotal roles in shaping societal narratives and fostering cultural discourse. However, is their presence in mainstream discussions diminishing?
Javaad Alipoor – Artistic Director, Javaad Alipoor Company
If you go to The Space’s Digital Culture Talks 2025 webpage, you’ll find a few more details. Clicking the link to register will show the event time in your own timezone.
Welcome to The Space. We help the arts, culture and heritage sector to engage audiences using digital and broadcast content and platforms.
As an independent not-for-profit organisation, our role is to fund the creation of new digital cultural content and provide free training, mentoring and online resources for organisations, artists and creative practitioners.
We are funded by a range of national and regional agencies, to enable you to build your digital skills, confidence and experience via practical advice and hands-on experience. We can also help you to find ways to make your digital content accessible to new and more diverse audiences.
We also offer a low-cost consultancy service for organisations who want to develop their digital cultural content strategy.