Category Archives: social implications

What do you reveal about yourself in the era of video communication and embodied virtual reality?

Self-disclosure doesn’t just take place in real life; it also happens when you interact in virtual reality (VR).

Caption: Researchers find that both the type of communication medium and gender influence self-disclosure. Credit: Professor Junko Ichino from Waseda University, Japan

A July 24, 2025 Waseda University (Japan) press release, also on EurekAlert, describes research into how relationships are developed in embodied virtual reality (VR) and in video communication, Note: A link has been removed,

Self-disclosure, or the process of conveying one’s details to others verbally, is crucial for communication. Self-disclosure includes expressing personal information, thoughts, and feelings. It encompasses self-expression and clarification, social validation and control, as well as relationship development, and is closely related to reciprocity, intimacy, trust, interactional enjoyment, and satisfaction.

In recent years, technological advancements have paved the way for new forms of communication, including video-conferencing and embodied virtual reality (VR). Shedding light on self-disclosure in these contexts is indispensable for better understanding relationship building and mental health.

With this goal, a team of researchers from Japan, led by Professor Junko Ichino from Waseda University (affiliated with Tokyo City University, Japan, at the time of the study), including Mr. Masahiro Ide from Tokyo City University and TIS Inc., Professor Hitomi Yokoyama from Okayama University of Science, Professor Hirotoshi Asano from Kogakuin University, and Professors Hideo Miyachi and Daisuke Okabe from Tokyo City University, explored the effects of new communication media and gender on self-disclosure. Their findings were published online in Behaviour & Information Technology on June 4, 2025.

Prof. Ichino explains the motivation behind their research, “When I tried accessing VRChat, a social VR platform that gained popularity in Japan around 2017, I was surprised by the lack of polite or superficial conversation and the presence of freedom and directness of communication. I felt that these people would never interact like this in the real world, which led me to become interested in virtual spaces as a place for communication.”

Since self-disclosure is essential for communication, many studies have examined it not only in face-to-face conversations but also in conversations through text and voice, the traditional communication media. Many of these studies have shown that conversations through text and voice encourage self-disclosure more than face-to-face conversations do. However, little was known about whether new communication media, such as video-conferencing and embodied VR, also encourage self-disclosure compared to face-to-face conversations. Therefore, the researchers investigated self-disclosure across four communication media: face-to-face, video, and embodied VR with either realistic or unrealistic avatars. They recruited 144 participants, aged 20 to 50 years, divided them into 72 dyads, and encouraged them to develop conversations based on personal topics. The participants took part in multiple self-disclosure sessions across the four communication media.

The researchers found that embodied VR, especially with unrealistic avatars, resulted in self-disclosure of personal feelings that was 1.5 times higher than in the face-to-face scenario. Video communication, however, did not differ noticeably from face-to-face conversation. Furthermore, gender pairing also affected self-disclosure. To investigate how, the researchers classified the participants into female-to-female, male-to-male, male-to-female, and female-to-male pairings. Upon analysis, the team found that female-to-female pairings had the highest degree of self-disclosure, particularly the disclosure of personal information, regardless of the communication medium.

Since embodied VR facilitates self-disclosure of personal feelings compared to face-to-face, the team expects applications in various VR services related to self-expression, including counselling and psychotherapy services, where therapists interact with patients with ailments such as depression, dementia, cancer, adjustment disorders, and anxiety disorders and with clients with various mental symptoms. Moreover, the proposed innovation can lead to novel interventions in caregiver cafés for people who care for elderly people with dementia or who are bedridden, as well as in stress relief services where listening agents listen to people’s worries and anxieties about their physical condition and interpersonal relationships.

Overall, Prof. Ichino envisions a bright future sprouting from their breakthrough findings. “The shift to remote communication using communication media that surged during the COVID-19 pandemic is expected to continue because such media are required to achieve the UN’s Sustainable Development Goals. Additionally, the need for mental well-being support, which is closely related to self-disclosure, is expected to increase in the future.”

Together, the insights obtained from this study could be greatly utilised for applications that help in improving mental health.

I wonder if there will be further studies before these insights are put to use. I have a couple of questions: (1) what happens when participants are over 50 and (2) are there cultural differences?

Here’s a link to and a citation for the paper,

Effects of new communication media and gender on self-disclosure by Junko Ichino, Masahiro Ide, Hitomi Yokoyama, Hirotoshi Asano, Hideo Miyachi & Daisuke Okabe. Behaviour & Information Technology DOI: 10.1080/0144929X.2025.2507690 Published: June 4, 2025

This paper is open access.

A collaborating robot as part of your “extended” body

Caption: Researchers from the Istituto Italiano di Tecnologia (IIT) in Genoa (Italy) and Brown University in Providence (USA) have discovered that people sense the hand of a humanoid robot as part of their body schema, particularly when it comes to carrying out a task together, like slicing a bar of soap. Credit: IIT-Istituto Italiano di Tecnologia

A September 12, 2025 Istituto Italiano di Tecnologia (IIT) press release (also on EurekAlert but published on September 11, 2025) describes some intriguing research into robot/human relationships,

Researchers from the Istituto Italiano di Tecnologia (IIT) in Genoa (Italy) and Brown University in Providence (USA) have discovered that people sense the hand of a humanoid robot as part of their body schema, particularly when it comes to carrying out a task together, like slicing a bar of soap. The study has been published in the journal iScience and can pave the way for a better design of robots that have to function in close contact with humans, such as those used in rehabilitation.

The project, led by Alessandra Sciutti, Principal Investigator of the CONTACT unit at IIT, in collaboration with Brown University professor Joo-Hyun Song, explored whether unconscious mechanisms that shape interactions between humans also emerge in interactions between a person and a humanoid robot.

Researchers focused on a phenomenon known as the “near-hand effect”, in which the presence of a hand near an object alters a person’s visual attention because the brain is preparing to use the object. The study also considers the human brain’s ability to create a “body schema” to move more efficiently in the surrounding space, integrating objects into it as well.

Through an unconscious process shaped by external stimuli, the brain builds a “body schema” that helps us avoid obstacles or grab objects without looking at them. Any tool can become part of this internal map as long as it is useful for a task, like a tennis racket that feels like an arm extension to the player who uses it daily. Since the body schema is constantly evolving, the research team led by Sciutti explored whether a robot could also become part of it.

Giulia Scorza Azzarà, PhD student at IIT and first author of the study, designed and analyzed the results of experiments in which people carried out a joint task with iCub, IIT’s child-sized humanoid robot. They sliced a bar of soap together using a steel wire, alternately pulled by the person and the robotic partner.

After the activity, researchers verified the integration of the robotic hand into the body schema, quantifying the near-hand effect with the Posner cueing task. This test challenges participants to press a key as quickly as possible to indicate on which side of the screen an image appears, while an object placed right next to the screen influences their attention. Data from 30 volunteers showed a specific pattern: participants reacted faster when images appeared next to the robot’s hand, showing that their brains had treated it much like a nearby hand. Thanks to control experiments, researchers proved that this effect appeared only in those who had sliced the soap with the robot.

The strength of the near-hand effect also depended on how the humanoid robot moved. When the robot’s gestures were broad, fluid, and well synchronized with the human ones, the effect was stronger, resulting in a better integration of iCub’s hand into the participant’s body schema. Physical closeness between the robotic hand and the person also played a role: the nearer the robot’s hand was to the participant during the slicing task, the greater the effect.

To assess how participants perceived the robot after working together on the task, researchers gathered information through questionnaires. The results show that the more participants saw iCub as competent and pleasant, the more intense the cognitive effect was. Attributing human-like traits or emotions to iCub further boosted the hand’s integration in the body schema; in other words, partnership and empathy enhanced the cognitive bond with the robot.

The team carried out experiments with a humanoid robot under controlled conditions, paving the way for a deeper understanding of human-machine interactions. Psychological factors will be essential to designing robots able to adapt to human stimuli and able to provide a more intuitive and effective robotic experience. These are crucial features for application of robotics in motor rehabilitation, virtual reality, and assistive technologies.

The research is part of the ERC-funded wHiSPER project, coordinated by IIT’s CONTACT (COgNiTive Architecture for Collaborative Technologies) unit.

Here’s a link to and a citation for the paper,

Collaborating with a robot biases human spatial attention by Giulia Scorza Azzarà, Joshua Zonca, Francesco Rea, Joo-Hyun Song, Alessandra Sciutti. iScience Volume 28, Issue 7, 18 July 2025, 112791 DOI: https://doi.org/10.1016/j.isci.2025.112791 Available online 2 June 2025, Version of Record 18 June 2025. Published under a Creative Commons CC BY 4.0 license.

This paper is open access.

This business of a robot becoming an extension of your body, i.e., becoming part of you, is reminiscent of some issues brought up in my October 21, 2025 posting “Copyright, artificial intelligence, and thoughts about cyborgs,” such as, N. Katherine Hayles’s assemblages and, more specifically, the issues brought up in the section titled, “Symbiosis and your implant.”

Canadian research into relationships with domestic robots

Zhao Zhao’s (assistant professor in Computer Science at the University of Guelph) September 11, 2025 essay for The Conversation highlights results from one of her recently published studies, Note: Links have been removed,

Social companion robots are no longer just science fiction. In classrooms, libraries and homes, these small machines are designed to read stories, play games or offer comfort to children. They promise to support learning and companionship, yet their role in family life often extends beyond their original purpose.

In our recent study of families in Canada and the United States, we found that even after a children’s reading robot “retired” or was no longer in active and regular use, most households chose to keep it — treating it less like a gadget and more like a member of the family.

Luka is a small, owl-shaped reading robot, designed to scan and read picture books aloud, making storytime more engaging for young children.

In 2021, my colleague Rhonda McEwen and I set out to explore how 20 families used Luka. We wanted to study not just how families used Luka initially, but how that relationship was built and maintained over time, and what Luka came to mean in the household. Our earlier work laid the foundation for this by showing how families used Luka in daily life and how the bond grew over the first months of use.

When we returned in 2025 to follow up with 19 of those families, we were surprised by what we found. Eighteen households had chosen to keep Luka, even though its reading function was no longer useful to their now-older children. The robot lingered not because it worked better than before, but because it had become meaningful.

A deep, emotional connection

Children often spoke about Luka in affectionate, human-like terms. One called it “my little brother.” Another described it as their “only pet.” These weren’t just throwaway remarks — they reflected the deep emotional place the robot had taken in their everyday lives.

Because Luka had been present during important family rituals like bedtime reading, children remembered it as a companion.

Parents shared similar feelings. Several explained that Luka felt like “part of our history.” For them, the robot had become a symbol of their children’s early years, something they could not imagine discarding. One family even held a small “retirement ceremony” before passing Luka on to a younger cousin, acknowledging its role in their household.

Other families found new, practical uses. Luka was repurposed as a music player, a night light or a display item on a bookshelf next to other keepsakes. Parents admitted they continued to charge it because it felt like “taking care of” the robot.

The device had long outlived its original purpose, yet families found ways to integrate it into daily routines.

Luka the robot. Image by Dr Zhao Zhao, University of Guelph

Zhao also wrote an August 8, 2025 essay about her 2025 follow-up study on families and their Luka robots for Frontiers Media,

What happens to a social robot after it retires? 

Four years ago, we placed a small owl-shaped reading robot named Luka into 20 families’ homes. At the time, the children were preschoolers, just learning to read. Luka’s job was clear: scan the pages of physical picture books and read them aloud, helping children build early literacy skills. 

That was in 2021. In 2025, we went back — not expecting to find much. The children had grown. The reading level was no longer age-appropriate. Surely, Luka’s work was done. 

Instead, we found something extraordinary.

18 of 19 families still had their robot. Many were still charging it. A few used it as a music player. Some simply left it on a shelf—next to baby books and keepsakes—its eyes still glowing gently. Luka had stayed.

As more families bring AI-powered companions into their homes, we’ll need to better understand not only how they’re used — but how they’re remembered.

Because sometimes, the robot stays.

For the curious, here’s a link to and a citation for the 2025 follow-up study,

The robot that stayed: understanding how children and families engage with a retired social robot by Zhao Zhao, Rhonda McEwen. Front. Robot. AI, 07 August 2025 Sec. Human-Robot Interaction Volume 12 – 2025 DOI: https://doi.org/10.3389/frobt.2025.1628089

This paper is open access.

Where does this leave us?

Trying to distinguish between robots and artificial intelligence (AI) can mean wading into murky waters. Not all robots have AI, not all AI is embodied in a robot, and cyborgs add more complexity.

N. Katherine Hayles’ 2025 book “Bacteria to AI: Human Futures with Our Nonhuman Symbionts,” mentioned in my October 21, 2025 posting “Copyright, artificial intelligence, and thoughts about cyborgs,” does not make a distinction, which may or may not be important. We just don’t know. It seems we are in the process of redefining our relationships to the life and objects around us as we redefine what it means to be a person.

Toronto’s ArtSci Salon is hosting a couple more October 2025 events

I have two art/science events and one art/science conference/festival (IRL [in real life or in person] and Zoom) taking place in Toronto, Ontario.

October 16, 2025

There is a closing event for the “I don’t do Math” series mentioned in my September 8, 2025 posting,

ABOUT
“I don’t do math” is a photographic series referencing dyscalculia, a learning difference affecting a person’s ability to understand and manipulate number-based information.

This initiative seeks to raise awareness about the challenges posed by dyscalculia with educators, fellow mathematicians, and parents, and to normalize its existence, leading to early detection and augmented support. In addition, it seeks to reflect on and question broader issues and assumptions about the role and significance of Mathematics and Math education in today’s changing socio-cultural and economic contexts. 

The exhibition will contain pedagogical information and activities for visitors and students. The artist will also address the extensive research that led to the exhibition. The exhibition will feature two panel discussions: one following the opening and one to conclude the exhibition.

I have some information from an October 12, 2025 ArtSci Salon announcement (received via email) about the “I don’t do math” closing event,

Join us for

Closing Exhibition Panel Discussion
Thursday, October 16, 2025
10:00 am – 12:00 pm, Room 309
The Fields Institute for Research in Mathematical Sciences (or online)

Artist Ann Piché will be in conversation with
Andrew Fiss, Jacqueline Wernimont, Amenda Chow, Ellen Abrams, Michael Barany and JP Ascher

RSVP here

October 21, 2025

The second event mentioned in the October 12, 2025 ArtSci Salon announcement, Note 1: A link has been removed, Note 2: This event is part of a larger series,

Marco Donnarumma 
Monsters of Grace: bodies, sounds, and machines

Tuesday, October 21, 2025
3:30-4:30 PM
Sensorium Research Loft 
4th floor
Goldfarb Centre for Fine Arts
York University

About the talk
What is sound to those who do not hear it? How does one listen to something that cannot be heard? What kind of sensory gaps are created by aiding technologies such as prostheses and artificial intelligence (AI)? As a matter of fact, the majority of non-deaf people hear only partially due to age and personal experience. Still, sound is most often considered through the normalizing viewpoint of the non-deaf. If I become your body, what does sound become for me? Join us to welcome Marco Donnarumma ahead of his new installation/performance at the Paul Cadario Conference Room (Oct 22, 8-10 PM, University College [University of Toronto], 15 King’s College Circle). His talk will focus on this latest work in the context of a larger body of work titled “I Am Your Body,” an ongoing project investigating how normative power is enforced through the technological mediation of the senses.

About the artist:
Marco Donnarumma is an artist, inventor and theorist. His oeuvre confronts normative body politics with uncompromising counter-narratives, where bodies are in tension between control and agency, presence and absence, grace and monstrosity. He is best known for using sound, AI, biosensors, and robotics to turn the body into a site of resistance and transformation. He has presented his work in thirty-seven countries across Asia, Europe, North and South America and is the recipient of numerous accolades, most notably the German Federal Ministry of Research and Education’s Artist of the Science Year 2018, and the Prix Ars Electronica’s Award of Distinction in Sound Art 2017. Donnarumma received a ZER01NE Creator grant in 2024 and was named a pioneer of performing arts with advanced technologies by the major national newspaper Der Standard, Austria. His writings are published in Frontiers in Computer Science, Computer Music Journal and Performance Research, among others, and his newest book chapter, co-authored with Elizabeth Jochum, will appear in Robot Theaters by Routledge. Together with Margherita Pevere he runs the performance group Fronte Vacuo.


I wonder if Donnarumma’s “Monsters of Grace: bodies, sounds, and machines” received any inspiration from “Monsters of Grace” (Wikipedia entry) or if it’s just happenstance, Note: Links have been removed,

Monsters of Grace is a multimedia chamber opera in 13 short acts directed by Robert Wilson, with music by Philip Glass and libretto from the works of 13th-century Sufi mystic Jalaluddin Rumi. The title is said to be a reference to Wilson’s corruption of a line from Hamlet: “Angels and ministers of grace defend us!” (1.4.39).

So, the October 21, 2025 event is a talk at York University taking place before the “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence” (more below).

“Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference and arts festival at the University of Toronto

The conference (October 23 – 24, 2025) is concurrent with the arts festival (October 19 – 25, 2025) at the University of Toronto. Here’s more from the event homepage on the https://bmolab.artsci.utoronto.ca/ website, Note 1: BMO stands for Bank of Montreal, Note 2: No mention of Edward Albee and “Who’s afraid of Virginia Woolf?,”

2025 marks an inflection point in our technological landscape, driven by seismic shifts in AI innovation.

Who’s Afraid of AI? Arts, Science, and the Futures of Intelligence is a week-long inquiry into the implications and future directions of AI for our creative and collective imaginings, and the many possible futures of intelligence. The complexity of this immediate future calls for interdisciplinary dialogue, bringing together artists, AI researchers, and humanities scholars.

In this volatile domain, the question of who envisions our futures is vital. Artists explore with complexity and humanity, while the humanities reveal the histories of intelligence and the often-overlooked ways knowledge and decision-making have been shaped. By placing these voices in dialogue with AI researchers and technologists, Who’s Afraid of AI? examines the social dimensions of technology, questions tech solutionism from a social-impact perspective, and challenges profit-driven AI with innovation guided by public values.

The two-day conference at the University of Toronto’s University College anchors the week and features panels and debates with leading figures in these disciplines, including a keynote by 2024 Nobel Laureate in Physics Geoffrey Hinton, the “Godfather of AI,” and the 2025 Neil Graham Lecture in Science by Fei-Fei Li, an AI pioneer.

Throughout the week, the conversation continues across the city with:

  • AI-themed and AI powered art shows and exhibitions
  • Film screenings
  • Innovative theatre
  • Experimental music

Who’s Afraid of AI? demonstrates that Toronto has not only shaped the history of AI but continues to prepare its future. Step into this changing landscape and be part of this transformative dialogue — register today!

Organizing Committee:

Pia Kleber, Professor Emerita, Comparative Literature and Drama, U of T
Dirk Bernhardt-Walther, Department of Psychology, Program Director, Cognitive Science, U of T
David Rokeby, Director, BMO Lab, Centre for Drama, Theatre and Performance Studies, U of T
Rayyan Dabbous, PhD candidate, Centre for Comparative Literature, U of T

This looks like a pretty interesting programme (if you’re mainly focused on AI and the creative arts), from the event homepage on the https://bmolab.artsci.utoronto.ca/ website, Note 1: All times are ET, Note 2: I have not included speakers’ photos,

The conference will explore core questions about AI such as its capabilities, possibilities and challenges, bringing their unique research, creative practice, scholarship and experience to the discussion. Speakers will also engage in an interdisciplinary conversation on topics including AI’s implications for theories of mind and embodiment, its influence on creation, innovation, and discovery, its recognition of diverse perspectives, and its transformation of artistic, cultural, political and everyday practices.

Thursday, October 23, 2025

Mind the World

9 AM | Clark Reading Room, University College – 15 King’s College Circle

What are the merits and limits of artificial intelligence within the larger debate on embodiment? This session brings together an artist who has given AI a physical dimension, a neuroscientist who reckons with the biological neural networks inspiring AI, and a humanist knowledgeable of the longer history in which the human has tried to decouple itself from its bodily needs and wants.

Suzanne Kite
Director, The Wihanble S’a Center for Indigenous AI

James DiCarlo
Director, MIT Quest for Intelligence

N. Katherine Hayles
James B. Duke Distinguished Professor Emerita of Literature

Staging AI

11 AM | Clark Reading Room, University College – 15 King’s College Circle

How is AI changing the arts? To answer this question, we bring together theatre directors and artists who have made AI the main driving plot of their stories and those who opted to keep technology secondary in their productions.

Kay Voges
Artistic Director, Schauspiel Köln

Roland Schimmelpfennig
Playwright and Director, Berlin

Hito Steyerl
Artist, Filmmaker and Writer, Berlin

Recognizing ‘Noise’

2 PM | Clark Reading Room, University College – 15 King’s College Circle

How can we design a more inclusive AI? This session brings together an artist who has worked with AI and has been sensitive to groups who may be excluded by its practice, an inclusive design scholar who has grappled with AI’s potential for personalized accessibility, and a humanist who understands the longer history on pattern and recognition from which emerged AI.

Marco Donnarumma
Artist, Inventor, Theorist, Berlin

Jutta Treviranus
Director, OCADU [Ontario College of Art & Design University],
Inclusive Design Research Centre

Eryk Salvaggio
Media Artist and Tech Policy Press Fellow, Rochester

Art, Design, and Application are the Solution to AI’s Charlie Chaplin Problem

4 PM | Hart House Theatre – 7 Hart House Circle

Daniel Wigdor
CoFounder and Chief Executive Officer, AXL

Keynote and Neil Graham Lecture in Science

4:15 PM | Hart House Theatre – 7 Hart House Circle

Fei-Fei Li
Sequoia Professor in Computer Science, Stanford Institute for Human-Centered AI

Geoffrey Hinton
2024 Nobel Laureate in Physics, Professor Emeritus in Computer Science

Friday, October 24, 2025

Life with AI

9 AM | Clark Reading Room, University College – 15 King’s College Circle

How do machine minds relate to human minds? What can we learn from one about the other? In this session we interrogate the impact of AI on our understanding of human knowledge and tool-making, from the perspective of philosophy, computer science, as well as the arts.

Jeanette Winterson
Author, Fellow of the Royal Society of Literature, Great Britain

Leif Weatherby
Professor of German and Director of Digital Theory Lab at
New York University

Jennifer Nagel
Professor, Philosophy, University of Toronto Mississauga

Discovery & In/Sight

11 AM | Clark Reading Room, University College – 15 King’s College Circle

This session explores creative practice through the lens of innovation and cultural/scientific advancement. An artist who creates with critical inspiration from AI joins forces with an innovation scholar who investigates the effects of AI on our decision making, as well as a philosopher of science who understands scientific discovery and inference as well as their limits.

Vladan Joler
Visual Artist and Professor of
New Media, University of Novi Sad [Serbia]

Alán Aspuru-Guzik
Professor of Chemistry and Computer Science, University of Toronto

Brian Baigrie
Professor, Institute for the History and Philosophy of Science & Technology, University of Toronto

Social History & Possible Futures

2 PM | Clark Reading Room, University College – 15 King’s College Circle

How do AI ownership and its private uses coexist within a framework of public good? This session brings together an artist who has created AI tools to be used by others, an AI ethics researcher who has turned algorithmic bias into collective insight, and a philosopher who understands the connection between AI and the longer history of automation and work from which it emerged.

Memo Akten
Artist working with Code, Data and AI, UC San Diego

Beth Coleman
Professor, Institute of Communication, Culture, Information and Technology, University of Toronto

Matteo Pasquinelli
Professor, Philosophy and Cultural Heritage, Università Ca’ Foscari Venezia [Italy]

A Theory of Latent Spaces | Conclusion: Where do we go from here?

4 PM | Clark Reading Room, University College – 15 King’s College Circle

Antonio Somaini, curator of the remarkable ‘World through AI’ exhibition at the Musée du Jeu de Paume in Paris, will discuss the way in which ‘latent spaces’, a core characteristic of current AI models, function as “meta-archives” that profoundly shape our relation with the past.

Following this, we will engage in a larger discussion amongst the various conference speakers and attendees on how we can, as artists, humanities scholars, scientists and the general public, collectively imagine and cultivate a future where AI serves the public good and enhances our individual and collective lives.

Antonio Somaini
Curator and Professor, Sorbonne Nouvelle [Université Sorbonne Nouvelle]

You can register here for this free conference, although there’s now a waitlist for in-person attendance. Do not despair; there’s access by Zoom,

In case you can’t make it in person, join us by Zoom:

Link: https://utoronto.zoom.us/j/82603012955

Webinar ID: 826 0301 2955

Passcode: 512183

I have not forgotten the festival, from the event homepage on the https://bmolab.artsci.utoronto.ca/ website,

Events Also Happening

October 22 | 2 PM | Student Forum and AI Commentary Contest Award | Paul Cadario Conference Room, University College – 15 King’s College Circle

October 22 | 8 – 10 PM | Marco Donnarumma, world première of a new performance installation | Paul Cadario Conference Room, University College – 15 King’s College Circle

October 23 | 2 PM | Jeanette Winterson: Arts & AI Talk | Paul Cadario Conference Room, University College – 15 King’s College Circle

October 24 | 7 PM | The Kiss by Roland Schimmelpfennig | The BMO Lab, University College – 15 King’s College Circle (Note: we are scheduling more performances. Check back for more info soon!)

October 25 | 8 PM | AI Cabaret featuring Jason Sherman, Rick Miller, Cole Lewis, BMO Lab projects and more | Crow’s Theatre, Nada Ristich Studio-Gallery – 345 Carlaw Avenue.

Get tickets for the AI Cabaret

(Use promo code AICAB for 100% discount)

Enjoy!

AI governance in the real-world ‘city brain’ project: possible pitfalls

An April 16, 2025 Trinity College Dublin press release illustrates the pervasiveness of generative artificial intelligence (Generative AI, GenAI, or GAI) in the management of cities,

The work underscores what can go wrong when an AI that manages city transport, safety, health and environmental monitoring predicts the future and intervenes in the present, significantly influencing urban governance and public policy development.

Generative Artificial Intelligence (AI) is boosting anticipatory forms of governance around the world, helping state actors to predict the future and focus their efforts in the present where the AI predicts they can have the greatest positive impact.

This phenomenon is particularly evident in China, but similar forms of governance mediated by generative AI are also becoming increasingly popular in Europe and the seeds of this trend are already visible in Ireland.

In this context, “city brains” represent an emerging type of generative AI currently employed in urban governance and public policy in a growing number of cities. City brains are large-scale AIs residing in vast digital urban platforms, which use Large Language Models (LLMs) to generate visions of urban futures: visions that are in turn used by policymakers to generate new urban policies. In China alone, there are over 500 cities developing city brains.

However, one of the main foreseeable dangers is the formation of a policy process that, under the influence of unintelligible LLMs, risks losing transparency and thus accountability, with another being the marginalisation of human stakeholders (citizens, in particular) as the role of AI in the management of cities keeps growing and governance begins to turn posthuman.

And by focusing on a real-world city brain project operating in the Haidian district of Beijing (China), which has a population of around 3,000,000 people and gathers data from over 14,000 CCTV cameras and over 20,000 environmental sensors, the researchers have been able to show these dangers are not just theoretical.

Caption: A person views the interactive dashboard of an existing City Brain system in China at a public exhibition centre. Image credit: Dr Ying Xu.

Dr Federico Cugurullo, Associate Professor in Trinity’s School of Natural Sciences, is a leading expert in AI urbanism and the first author of the research, which has been published in the journal Policy and Society.

He said: “We can think of the Haidian city brain project as a gigantic panopticon that constantly observes what is happening in the city. It is operated by AI and focuses on three main areas of governance that are shaped by its predictions: environmental risk management, traffic management and public security.”

“For example, in the case of an impending natural disaster, policies are rapidly implemented to build new infrastructure meant to reinforce riverbanks and increase the efficiency of the city’s drainage systems. Outcomes also include direct interventions when, for example, police officers are dispatched to prevent illegal activities in an area where, according to the city brain, crimes are likely to take place in the near future.”

“However, the predictions of the Haidian city brain are far from being infallible. Our research reveals that the accuracy rate of what the city brain predicts varies from 60% to 90%, which leaves a significant margin of error around such important decisions, for which there is no understanding as to why they have been implemented. This is particularly dangerous when it comes to predictive policing, since any error made by AI means that an innocent person will be targeted by the police for hypothetical crimes that never took place.”
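The danger the researchers describe here is partly a base-rate problem: even a seemingly high accuracy rate produces mostly false alarms when the predicted event is rare. A rough back-of-the-envelope sketch (the base rate and the equal-sensitivity/specificity simplification are illustrative assumptions, not figures from the study):

```python
# Illustrative calculation (hypothetical numbers, not from the Haidian
# study): why a 60-90% accuracy rate is worrying when the event being
# predicted is rare.

def positive_predictive_value(accuracy: float, base_rate: float) -> float:
    """Probability that a flagged person would actually offend, assuming
    sensitivity == specificity == accuracy (a deliberate simplification)."""
    true_pos = accuracy * base_rate
    false_pos = (1 - accuracy) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 1,000 people in a flagged area would actually commit a crime.
base_rate = 0.001
for acc in (0.60, 0.90):
    ppv = positive_predictive_value(acc, base_rate)
    print(f"accuracy {acc:.0%}: {ppv:.2%} of flagged people are true positives")
```

Under these toy numbers, even at 90% accuracy fewer than 1% of flagged individuals would be genuine positives, which is why unexplained, unaccountable predictions are so troubling in a policing context.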

This research forms part of the ORACLE project led by Professor Cugurullo, which was funded by Research Ireland (formerly the Irish Research Council).

Here’s a link to and a citation for the paper,

When AIs become oracles: generative artificial intelligence, anticipatory urban governance, and the future of cities by Federico Cugurullo, Ying Xu. Policy and Society, Volume 44, Issue 1, January 2025, Pages 98–115, DOI: https://doi.org/10.1093/polsoc/puae025 Published online: 01 August 2024

This paper is open access.

For anyone unfamiliar with the ‘panopticon’, there’s an entry in Wikipedia, Note: Links have been removed,

The panopticon is a design of institutional building with an inbuilt system of control, originated by the English philosopher and social theorist Jeremy Bentham in the 18th century. The concept is to allow all prisoners of an institution to be observed by a single corrections officer, without the inmates knowing whether or not they are being watched.

Although it is physically impossible for the single guard to observe all the inmates’ cells at once, the fact that the inmates cannot know when they are being watched motivates them to act as though they are all being watched at all times. They are effectively compelled to self-regulation. The architecture consists of a rotunda with an inspection house at its centre. From the centre, the manager or staff are able to watch the inmates. Bentham conceived the basic plan as being equally applicable to hospitals, schools, sanatoriums, and asylums. …

I have not been able to find a website for Cugurullo’s ORACLE project but I did find Cugurullo’s January 3, 2025 essay “AI could make cities autonomous, but that doesn’t mean we should let it happen,” which discusses a new field “AI urbanism” and includes the example of an enormous Saudi Arabian project, Neom and its linear city, The Line.

Is your smart TV or your car spying on you?

Simple answer: Yes.

Smart television sets (TVs)

A December 10, 2024 Universidad Carlos III de Madrid press release (also on EurekAlert) offers details about the data collected by smart TVs,

A scientific team from Universidad Carlos III de Madrid (UC3M), in collaboration with University College London (England) and the University of California, Davis (USA), has found that smart TVs send viewing data to their servers. This allows brands to generate detailed profiles of consumers’ habits and tailor advertisements based on their behaviour.

The research revealed that this technology captures screenshots or audio to identify the content displayed on the screen using Automatic Content Recognition (ACR) technology. This data is then periodically sent to specific servers, even when the TV is used as an external screen or connected to a laptop.

“Automatic Content Recognition works like a kind of visual Shazam, taking screenshots or audio to create a viewer profile based on their content consumption habits. This technology enables manufacturers’ platforms to profile users accurately, much like the internet does,” explains one of the study’s authors, Patricia Callejo, a professor in UC3M’s Department of Telematics Engineering and a fellow at the UC3M-Santander Big Data Institute. “In any case, this tracking—regardless of the usage mode—raises serious privacy concerns, especially when the TV is used solely as a monitor.”

The findings, presented in November [2024] at the Internet Measurement Conference (IMC) 2024, highlight the frequency with which these screenshots are transmitted to the servers of the brands analysed: Samsung and LG. Specifically, the research showed that Samsung TVs sent this information every minute, while LG devices did so every 15 seconds. “This gives us an idea of the intensity of the monitoring and shows that smart TV platforms collect large volumes of data on users, regardless of how they consume content—whether through traditional TV viewing or devices connected via HDMI, like laptops or gaming consoles,” Callejo emphasises.

To test the ability of TVs to block ACR tracking, the research team experimented with various privacy settings on smart TVs. The results demonstrated that, while users can voluntarily block the transmission of this data to servers, the default setting is for TVs to perform ACR. “The problem is that not all users are aware of this,” adds Callejo, who considers this lack of transparency in initial settings concerning. “Moreover, many users don’t know how to change the settings, meaning these devices function by default as tracking mechanisms for their activity.”
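The actual ACR pipelines in these TVs are proprietary, but the "visual Shazam" idea Callejo describes can be sketched with a toy perceptual hash: a captured frame is reduced to a compact fingerprint that the server matches against a content database. Everything below (the tiny frame, the hash scheme) is a simplified illustration, not anything measured in the paper:

```python
# Toy sketch of ACR-style content matching: reduce a frame to a compact
# perceptual fingerprint ("average hash") and compare fingerprints by
# Hamming distance. Real ACR systems are proprietary and far more
# sophisticated; this only illustrates the general principle.

def average_hash(pixels: list[list[int]]) -> int:
    """Fingerprint a tiny grayscale frame: each bit records whether a
    pixel is brighter than the frame's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests the same content."""
    return bin(a ^ b).count("1")

frame = [[10, 200], [220, 30]]       # the TV's captured "screenshot"
reference = [[12, 198], [210, 35]]   # an entry in the content database
print(hamming(average_hash(frame), average_hash(reference)))  # 0 => match
```

The key privacy point survives the simplification: only the short fingerprint needs to leave the device, yet it suffices to identify what is on screen.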

This research opens up new avenues for studying the tracking capabilities of cloud-connected devices that communicate with each other (commonly known as the Internet of Things, or IoT). It also suggests that manufacturers and regulators must urgently address the challenges that these new devices will present in the near future.

Here’s a link to and a citation for the paper,

Watching TV with the Second-Party: A First Look at Automatic Content Recognition Tracking in Smart TVs by Gianluca Anselmi, Yash Vekaria, Alexander D’Souza, Patricia Callejo, Anna Maria Mandalari, Zubair Shafiq. IMC ’24: Proceedings of the 2024 ACM on Internet Measurement Conference Pages 622 – 634 DOI: https://doi.org/10.1145/3646547.3689013 Published: 04 November 2024

This paper is open access.

Cars

This was on the Canadian Broadcasting Corporation’s (CBC) Day Six radio programme and the segment is embedded in a January 19, 2025 article by Philip Drost, Note: A link has been removed,

When a Tesla Cybertruck exploded outside Trump International Hotel in Las Vegas on New Year’s Day [2025], authorities were quickly able to gather information, crediting Elon Musk and Tesla for sending them info about the car and its driver. 

But for some, it’s alarming to discover that kind of information is so readily available.

“Most carmakers are selling drivers’ personal information. That’s something that we know based on their privacy policies,” Zoë MacDonald, a writer and researcher focussing on online privacy and digital rights, told Day 6 host Brent Bambury.

The Las Vegas Metropolitan Police Department said the Tesla CEO was able to provide key details about the truck’s driver, who authorities believe died by self-inflicted gunshot wound at the scene, and its movement leading up to the destination. 

With that data, they were able to determine that the explosives came from a device in the truck, not the vehicle itself.  

“We have now confirmed that the explosion was caused by very large fireworks and/or a bomb carried in the bed of the rented Cybertruck and is unrelated to the vehicle itself,” Musk wrote on X following the explosion.

To privacy experts, it’s another example of how your personal information can be used in ways you may not be aware of. And while this kind of data can be useful in an investigation, it’s by no means the only way companies use the information.  

“This is unfortunately not surprising that they have this data,” said David Choffnes, executive director of the Cybersecurity and Privacy Institute at Northeastern University in Boston.

“When you see it all together and know that a company has that information and continues at any point in time to hand it over to law enforcement, then you start to be a little uncomfortable, even if — in this case — it was a good thing for society.”

CBC News reached out to Tesla for comment but did not hear back before publication. 

I found this to be eye-opening, Note: A link has been removed,

MacDonald says the privacy concerns are a byproduct of all the technology new cars come with these days, including microphones, cameras, and sensors. The app that often accompanies a new car is collecting your information, too, she says.

The former writer for the Mozilla Foundation worked on a report in 2023 that examined vehicle privacy policies. For that study, MacDonald sifted through privacy policies from auto manufacturers. And she says the findings were staggering.

Most shocking of all is the information the car can learn from you, MacDonald says. It’s not just when you gas up or start your engine. Your vehicle can learn your sexual activity, disability status, and even your religious beliefs [emphasis mine].

MacDonald says it’s unclear how the car companies do this, because the information in the policies is so vague.

It can also collect biometric data, such as facial geometric features, iris scans, and fingerprints [emphasis mine].

This extends far past the driver. MacDonald says she read one privacy policy that required drivers to read out a statement every time someone entered the vehicle, to make them aware of the data the car collects, something that seems unlikely to go down before your Uber ride.

If that doesn’t bother you, then this might, Note: A link has been removed,

And car companies aren’t just keeping that information to themselves.

Confronted with these types of privacy concerns, many people simply say they have nothing to hide, Choffnes says. But when money is involved, they change their tune. 

According to an investigation from the New York Times in March of 2024, General Motors shared information on how people drive their cars with data brokers that create risk profiles for the insurance industry, which resulted in people’s insurance premiums going up [emphases mine]. General Motors has since said it has stopped sharing those details [emphasis mine].

“The issue with these kinds of services is that it’s not clear that it is being done in a correct or fair way, and that those costs are actually unfair to consumers,” said Choffnes. 

For example, if you make a hard stop to avoid an accident because of something the car in front of you did, the vehicle could register it as poor driving.
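Choffnes’s hard-stop example can be made concrete. A hypothetical sketch (no real telematics API; the threshold and sampling rate are invented for illustration) of how a naive "hard braking" counter turns one defensive manoeuvre into multiple "risky" events:

```python
# Hypothetical illustration (not a real telematics system): a naive
# hard-braking detector can mislabel defensive driving as risky.

def hard_brake_events(speeds_kmh: list[float],
                      threshold_kmh_per_s: float = 15.0) -> int:
    """Count samples where speed drops faster than the threshold,
    assuming one speed reading per second."""
    return sum(
        1
        for prev, cur in zip(speeds_kmh, speeds_kmh[1:])
        if prev - cur > threshold_kmh_per_s
    )

# A driver brakes hard once to avoid a collision caused by another car:
trip = [60, 60, 58, 30, 10, 0]  # km/h, one reading per second
print(hard_brake_events(trip))  # 2 "risky" events from one defensive stop
```

A risk profile built from counters like this has no way of knowing the stop was the safest possible action, which is exactly the fairness problem Choffnes raises.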

Drost’s January 19, 2025 article notes that the US Federal Trade Commission has proposed a five-year moratorium to prevent General Motors from selling geolocation and driver behaviour data to consumer report agencies. In the meantime,

“Cars are a privacy nightmare. And that is not a problem that Canadian consumers can solve or should solve or should have the burden to try to solve for themselves,” said MacDonald.

If you have the time, read Drost’s January 19, 2025 article and/or listen to the embedded radio segment.

Urban organisms: 3 ArtSci Salon events with Kaethe Wenzel in Toronto, Canada during March and April 2025

From a March 10, 2025 ArtSci Salon notice (received via email and visible here as of March 13, 2025), Note: I have reorganized this notice to put the events in date order and clarified for which event you are registering,

The ArtSci Salon (The Fields Institute) in collaboration with the NewONE program (U of T [University of Toronto]) are pleased to invite you to 3 engagements with Berlin-based interdisciplinary artist Kaethe Wenzel

Urban Pictograms Workshop
March 20, 2025, 2:30-4:00 pm [ET]
William Doo Auditorium,
45 Willcocks street
[sic]

A workshop to challenge the urban rules and cultural stereotypes of street signs

This workshop is part of the programming of the NewONE: learning without borders, New College, University of Toronto. Throughout the academic year, our classes have been exploring important issues pertaining to social justice. During this workshop, we invite students and members of the community to work together to create urban pictograms (or urban stickers) that challenge inequalities and reaffirm principles of social justice. A selected number of pictograms will be displayed on the windows of the D.G Ivey New College Library and will be launched on April 3 [2025] at 4:30 pm [ET].

Register here to participate in the March 20, 2025 workshop

Public talk: Urban organisms. Re-imagining urban ecologies and collective futures
March 27 [2025], 5 pm [ET], Room 230
The Fields Institute for Research in Mathematical Sciences
222 College Street

After all, the world is being produced collectively, across the borders of time and geography as well as across the boundaries of the individual. 
–Kaethe Wenzel

Join us in welcoming Berlin-based interdisciplinary artist Kaethe Wenzel. Wenzel has used a diverse variety of media and material such as textiles, found items, animal bones, plants, soil and other organic material, as well as small electronics to produce urban interventions and objects of speculative fiction at the intersection of art, science and technology. Wenzel challenges the notion of the artwork as an object to be observed in a gallery or museum, and the gallery as a constrained space with relatively limited interactions. Her extensive body of work extends to building facades, billboards, entire neighborhoods and the city, translating into urban interventions to explore the collective production of culture and the creation and negotiation of public space.

Public launch of Urban Pictograms 
Thursday, April 3, 2025, 4 pm [ET] onwards
Windows of D.G Ivey Library,
20 Willcocks Street,
New College, University of Toronto

Register here to participate in the March 20, 2025 workshop

Enjoy!

For anyone curious about the NewONE program, you can find more here at the University of Toronto.

Digital Culture Talks presented by The Space online February 12 – 13, 2025

A February 5, 2025 notice (received via email) from The Space, a UK Arts organization, announced a two-day series of talks on digital culture,

Digital Culture Talks 2025!

There’s just a week to go till The Space’s conference and we’re pleased to confirm our speakers for each of the roundtable talks on Day 1 and 2. There’s lots that will be of interest, including:

* A timely debate about how to make online communities safer
* An introduction to CreaTech – a £6.75 million investment to develop small, micro- and medium-sized businesses specialising in creative tech like video games and immersive reality – find out how to get involved
* Discussions on the role of artists in a digital world
* Explorations of digital accessibility, community ownership, engagement and empowerment.

Find out more here and below

Day 1
Digital communities and online harms
Wednesday 12 February

Digital accessibility, inclusion and community

Roundtable 1
How can we think differently about how we create digital content and challenge assumptions about what culture looks like? Exploring community ownership, engagement and empowerment through digital.

  • Zoe Partington – Acting CEO DaDa, Artist and Disability Consultant
  • Rachel Farrer – Associate Director, Cultural and Community Engagement Innovation Ecosystem, Coventry University
  • Parminder Dosanjh – Creative Director, Creative Black County
  • Jo Capper – Collaborative Programme Curator, Grand Union

Reducing online harms, how to make social media and online communities safer

Roundtable 2
In a world of increasingly polarised online spaces, what are the emerging trends and challenges when engaging audiences and building communities online?

Day 2
The role of artists in a digital world
Thursday 13 February

Calling all in the West Midlands!

Day 2 is taking place in person as well as streaming online. If you’d like to join us in person at the STEAMhouse in Birmingham, please register for free below.

As well as joining us for the great roundtables we have lined up, there’ll be a great chance to network in between sessions over lunch. Look forward to seeing you there!

Join us in person!

CreaTech, the Digital West Midlands and beyond – Local and Global [CreaTech is an initiative of the UK’s Creative Industries Council]

Roundtable 1
An introduction to CreaTech – a £6.75 million investment to develop small, micro- and medium-sized businesses specialising in creative tech like video games and immersive reality. Creatives and academics from across the Midlands and further afield discuss arising opportunities and what this means for the region and beyond.

  • Richard Willacy – General Director, Birmingham Opera Company 
  • Tom Rogers – Creative Content Producer, Birmingham Royal Ballet
  • Louise Latter – Head of Programme, BOM
  • Lamberto Coccioli – Project lead, CreaTech Frontiers, Professor of Music and Technology at the Royal Birmingham Conservatoire (BCU) 
  • Rachel Davis – Director of Warwick Enterprise, University of Warwick 

Platforming artists and storytellers – are artists and storytellers missing from modern discourse?

Roundtable 2
Artists and storytellers have historically played pivotal roles in shaping societal narratives and fostering cultural discourse. However, is their presence in mainstream discussions diminishing?

Come and join in the conversation!

Register to join us online

If you go to The Space’s Digital Culture Talks 2025 webpage, you’ll find a few more details. Clicking on the link to register will give you the event time appropriate to your timezone.

For anyone curious about The Space, from their homepage (scroll down about 60% of the way),

About us

Welcome to The Space. We help the arts, culture and heritage sector to engage audiences using digital and broadcast content and platforms.

As an independent not-for-profit organisation, our role is to fund the creation of new digital cultural content and provide free training, mentoring and online resources for organisations, artists and creative practitioners.

We are funded by a range of national and regional agencies, to enable you to build your digital skills, confidence and experience via practical advice and hands-on experience. We can also help you to find ways to make your digital content accessible to new and more diverse audiences.

We also offer a low-cost consultancy service for organisations who want to develop their digital cultural content strategy.

There you have it.