
A collaborating robot as part of your “extended” body

Caption: Researchers from the Istituto Italiano di Tecnologia (IIT) in Genoa (Italy) and Brown University in Providence (USA) have discovered that people sense the hand of a humanoid robot as part of their body schema, particularly when it comes to carrying out a task together, like slicing a bar of soap. Credit: IIT-Istituto Italiano di Tecnologia

A September 12, 2025 Istituto Italiano di Tecnologia (IIT) press release (also on EurekAlert but published on September 11, 2025) describes some intriguing research into robot/human relationships,

Researchers from the Istituto Italiano di Tecnologia (IIT) in Genoa (Italy) and Brown University in Providence (USA) have discovered that people sense the hand of a humanoid robot as part of their body schema, particularly when it comes to carrying out a task together, like slicing a bar of soap. The study has been published in the journal iScience and can pave the way for a better design of robots that have to function in close contact with humans, such as those used in rehabilitation.

The project, led by Alessandra Sciutti, IIT Principal Investigator of the CONTACT unit at IIT, in collaboration with Brown University professor Joo-Hyun Song, explored whether unconscious mechanisms that shape interactions between humans also emerge in interactions between a person and a humanoid robot.

Researchers focused on a phenomenon known as the “near-hand effect,” in which the presence of a hand near an object alters a person’s visual attention, because the brain is preparing to use the object. The study also considers the human brain’s ability to create a “body schema” to move more efficiently in the surrounding space, by integrating objects into it as well.

Through an unconscious process shaped by external stimuli, the brain builds a “body schema” that helps us avoid obstacles or grab objects without looking at them. Any tools can become part of this internal map as long as they are useful for a task, like a tennis racket that feels like an arm extension to the player who uses it daily. Since body schema is constantly evolving, the research team led by Sciutti explored whether a robot could also become part of it.

Giulia Scorza Azzarà, PhD student at IIT and first author of the study, designed and analyzed the results of experiments where people carried out a joint task with iCub, the IIT’s child-sized humanoid robot. They sliced a bar of soap together by using a steel wire, alternately pulled by the person and the robotic partner.

After the activity, researchers verified the integration of the robotic hand into the body schema, quantifying the near-hand effect with the Posner cueing task. This test challenges participants to press a key as quickly as possible to indicate on which side of the screen an image appears, while an object placed right next to the screen influences their attention. Data from 30 volunteers showed a specific pattern: participants reacted faster when images appeared next to the robot’s hand, showing that their brains had treated it much like a near hand. Thanks to control experiments, researchers proved that this effect appeared only in those who had sliced the soap with the robot.

The strength of the near-hand effect also depended on how the humanoid robot moved. When the robot’s gestures were broad, fluid, and well synchronized with the human ones, the effect was stronger, resulting in a better integration of iCub’s hand into the participant’s body schema. Physical closeness between the robotic hand and the person also played a role: the nearer the robot’s hand was to the participant during the slicing task, the greater the effect.

To assess how participants perceived the robot after working together on the task, researchers gathered information through questionnaires. The results show that the more participants saw iCub as competent and pleasant, the more intense the cognitive effect was. Attributing human-like traits or emotions to iCub further boosted the hand’s integration in the body schema; in other words, partnership and empathy enhanced the cognitive bond with the robot.

The team carried out experiments with a humanoid robot under controlled conditions, paving the way for a deeper understanding of human-machine interactions. Psychological factors will be essential to designing robots able to adapt to human stimuli and able to provide a more intuitive and effective robotic experience. These are crucial features for application of robotics in motor rehabilitation, virtual reality, and assistive technologies.

The research is part of the ERC-funded wHiSPER project, coordinated by IIT’s CONTACT (COgNiTive Architecture for Collaborative Technologies) unit.

Here’s a link to and a citation for the paper,

Collaborating with a robot biases human spatial attention by Giulia Scorza Azzarà, Joshua Zonca, Francesco Rea, Joo-Hyun Song, Alessandra Sciutti. iScience, Volume 28, Issue 7, 18 July 2025, 112791. DOI: https://doi.org/10.1016/j.isci.2025.112791 Available online 2 June 2025; version of record 18 June 2025. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.

This paper is open access.
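For readers curious how a cueing experiment like this boils down to numbers: the near-hand effect the IIT/Brown team measured is essentially a difference in mean reaction times between targets appearing beside the (robot) hand and targets appearing on the far side of the screen. Here is a minimal sketch of that contrast in Python, with hypothetical data and names for illustration only, not the paper’s actual analysis pipeline:

```python
# Hypothetical sketch of quantifying a near-hand effect from Posner-task
# reaction times. The trial data and function name are illustrative,
# not taken from the iScience study.
from statistics import mean

# Each trial: (side the target appeared on, reaction time in ms).
# The robot's hand sat beside the "near" side of the screen.
trials = [
    ("near", 312), ("near", 298), ("near", 305), ("near", 321),
    ("far", 341), ("far", 356), ("far", 330), ("far", 349),
]

def near_hand_effect(trials):
    """Mean RT on far-side trials minus mean RT on near-side trials (ms).

    A positive value means participants responded faster to targets
    appearing next to the hand, i.e., a near-hand advantage.
    """
    near = [rt for side, rt in trials if side == "near"]
    far = [rt for side, rt in trials if side == "far"]
    return mean(far) - mean(near)

print(near_hand_effect(trials))  # positive -> near-hand advantage
```

With these made-up numbers the effect comes out to 35 ms in favor of the near side; in the actual study, the size of this difference is what varied with the robot’s movement quality and proximity.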

This business of a robot becoming an extension of your body, i.e., becoming part of you, is reminiscent of some issues brought up in my October 21, 2025 posting “Copyright, artificial intelligence, and thoughts about cyborgs,” such as, N. Katherine Hayles’s assemblages and, more specifically, the issues brought up in the section titled, “Symbiosis and your implant.”

Canadian research into relationships with domestic robots

Zhao Zhao’s (assistant professor in Computer Science at the University of Guelph) September 11, 2025 essay for The Conversation highlights results from one of her recently published studies, Note: Links have been removed,

Social companion robots are no longer just science fiction. In classrooms, libraries and homes, these small machines are designed to read stories, play games or offer comfort to children. They promise to support learning and companionship, yet their role in family life often extends beyond their original purpose.

In our recent study of families in Canada and the United States, we found that even after a children’s reading robot “retired” or was no longer in active and regular use, most households chose to keep it — treating it less like a gadget and more like a member of the family.

Luka is a small, owl-shaped reading robot, designed to scan and read picture books aloud, making storytime more engaging for young children.

In 2021, my colleague Rhonda McEwen and I set out to explore how 20 families used Luka. We wanted to study not just how families used Luka initially, but how that relationship was built and maintained over time, and what Luka came to mean in the household. Our earlier work laid the foundation for this by showing how families used Luka in daily life and how the bond grew over the first months of use.

When we returned in 2025 to follow up with 19 of those families, we were surprised by what we found. Eighteen households had chosen to keep Luka, even though its reading function was no longer useful to their now-older children. The robot lingered not because it worked better than before, but because it had become meaningful.

A deep, emotional connection

Children often spoke about Luka in affectionate, human-like terms. One called it “my little brother.” Another described it as their “only pet.” These weren’t just throwaway remarks — they reflected the deep emotional place the robot had taken in their everyday lives.

Because Luka had been present during important family rituals like bedtime reading, children remembered it as a companion.

Parents shared similar feelings. Several explained that Luka felt like “part of our history.” For them, the robot had become a symbol of their children’s early years, something they could not imagine discarding. One family even held a small “retirement ceremony” before passing Luka on to a younger cousin, acknowledging its role in their household.

Other families found new, practical uses. Luka was repurposed as a music player, a night light or a display item on a bookshelf next to other keepsakes. Parents admitted they continued to charge it because it felt like “taking care of” the robot.

The device had long outlived its original purpose, yet families found ways to integrate it into daily routines.

Luka the robot. Image by Dr Zhao Zhao, University of Guelph

Zhao also wrote an August 8, 2025 essay about her 2025 followup study on families and their Luka robots for Frontiers Media,

What happens to a social robot after it retires? 

Four years ago, we placed a small owl-shaped reading robot named Luka into 20 families’ homes. At the time, the children were preschoolers, just learning to read. Luka’s job was clear: scan the pages of physical picture books and read them aloud, helping children build early literacy skills. 

That was in 2021. In 2025, we went back — not expecting to find much. The children had grown. The reading level was no longer age-appropriate. Surely, Luka’s work was done. 

Instead, we found something extraordinary.

18 of 19 families still had their robot. Many were still charging it. A few used it as a music player. Some simply left it on a shelf—next to baby books and keepsakes—its eyes still glowing gently. Luka had stayed.

As more families bring AI-powered companions into their homes, we’ll need to better understand not only how they’re used — but how they’re remembered.

Because sometimes, the robot stays.

For the curious, here’s a link to and a citation for the 2025 followup study,

The robot that stayed: understanding how children and families engage with a retired social robot by Zhao Zhao, Rhonda McEwen. Frontiers in Robotics and AI, Volume 12, 7 August 2025 (Section: Human-Robot Interaction). DOI: https://doi.org/10.3389/frobt.2025.1628089

This paper is open access.

Where does this leave us?

Trying to distinguish between robots and artificial intelligence (AI) can mean wading into murky waters. Not all robots have AI, not all AI is embodied in a robot, and cyborgs add still more complexity.

N. Katherine Hayles’ 2025 book “Bacteria to AI; Human Futures with our Nonhuman Symbionts” mentioned in my October 21, 2025 posting “Copyright, artificial intelligence, and thoughts about cyborgs” does not make a distinction, which may or may not be important. We just don’t know. It seems we are in the process of redefining our relationships to the life and the objects around us as we redefine what it means to be a person.

Copyright, artificial intelligence, and thoughts about cyborgs

I’ve been holding this one for a while and now it seems like a good followup to yesterday’s October 20, 2025 posting about “AI and the Art of Being Human,” which touches on co-writing, and my October 13, 2025 posting and its mention of “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference and arts festival at the University of Toronto (scroll down to the “Who’s Afraid of AI …” subhead).

With the advent of some of the latest advances in artificial intelligence (AI) and its use in creative content, the view on copyright (as a form of property) seems to be shifting. In putting this post together I’ve highlighted a blog posting that focuses on copyright and AI as it is commonly viewed. Following that piece, is a look at N. Katherine Hayles’ concept of AI as one of a number of cognitive assemblages and the implications of that concept where AI and copyright are concerned.

Then, it gets more complicated. What happens when your neural implant has an AI component? It’s a question asked by members of McMillan LLP, a Canadian business law firm, in their investigation of copyright. (The implication of this type of cognitive assemblage is not explicitly considered in Hayles’ work.) Following on the idea of a neural implant enhanced with AI, cyborg bugs (they too can have neural implants) are considered.

Uncomplicated vision of AI and copyright future

Glyn Moody’s May 15, 2025 posting on techdirt.com provides a very brief overview of the last 100 years of copyright and goes on to highlight some of the latest AI comments from tech industry titans, Note: Links have been removed,

For the last hundred years or so, the prevailing dogma has been that copyright is an unalloyed good [emphasis mine], and that more of it is better. Whether that was ever true is one question, but it is certainly not the case since we entered the digital era, for reasons explained at length in Walled Culture the book (free digital versions available). Despite that fact, recent attempts to halt the constant expansion and strengthening of copyright have all foundered. Part of the problem is that there has never been a constituency with enough political clout to counter the huge power of the copyright industry and its lobbyists.

Until now. The latest iteration of artificial intelligence has captured the attention of politicians around the world [emphasis mine]. It seems that the latter can’t do enough to promote and support it, in the hope of deriving huge economic benefits, both directly, in the form of local AI companies worth trillions, and indirectly, through increased efficiency and improved services. That current favoured status has given AI leaders permission to start saying the unsayable: that copyright is an obstacle to progress [emphasis mine], and should be reined in, or at least muzzled, in order to allow AI to reach its full potential. …

In its own suggestions for the AI Action Plan, Google spells out what this means:

Balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances. These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation. Balanced copyright laws that ensure access to publicly available scientific papers, for example, are essential for accelerating AI in science, particularly for applications that sift through scientific literature for insights or new hypotheses.

… some of the biggest personalities in the tech world have gone even further, reported here by TechCrunch:

Jack Dorsey, co-founder of Twitter (now X) and Square (now Block), sparked a weekend’s worth of debate around intellectual property, patents, and copyright, with a characteristically terse post declaring, “delete all IP law.”

X’s current owner, Elon Musk, quickly replied, “I agree.”

It’s not clear what exactly brought these comments on, but they come at a time when AI companies, including OpenAI (which Musk co-founded, competes with, and is challenging in court), are facing numerous lawsuits alleging that they’ve violated copyright to train their models.

Unsurprisingly, that bold suggestion provoked howls of outrage from various players in the copyright world. That was to be expected. But the fact that big names like Musk and Dorsey were happy to cause such a storm is indicative of the changed atmosphere in the world of copyright and beyond. Indeed, there are signs that the other main intellectual monopolies – patents and trademarks – are also under pressure. Calling into question the old ways of doing things in these fields will also weaken the presumption that copyright must be preserved in its current state.

Yes, it is interesting to see tech moguls such as Jack Dorsey and Elon Musk take a more ‘enlightened’ approach to copyright. However, there may be a few twists and turns to this story as it continues to develop.

Copyright and cognitive assemblages

I need to set the stage with something coming from N. Katherine Hayles’ 2025 book “Bacteria to AI; Human Futures with our Nonhuman Symbionts.” She suggests that we (humans) will be members in cognitive assemblages including bacteria, plants, cells, AI, and more. She then decouples cognition from consciousness and claims entities such as bacteria, etc. are capable of ‘nonconscious cognition’.

Hayles avoids the words ‘thinking’ and ‘thought’ by using cognition and providing this meaning for the word,

… “cognition is a process that interprets information within contexts that connect it with meaning” (Hayles 2017, 22 [in “Unthought: The Power of the Cognitive Nonconscious,” University of Chicago Press]) Note: Hayles quotes herself on pp. 8-9 in 2025’s “Bacteria to AI …”

Hayles then develops the notion of a cognitive assemblage made up of conscious (e.g. human) and nonconscious (e.g. AI agent) cognitions. The part that most interests me is where Hayles examines copyright and cognitive assemblages,

… what happens to the whole idea of intellectual property when an AI has perused copyrighted works during its training and incorporated them into its general sense of how to produce a picture of X or a poem about Y. Already artists and stakeholders are confronting similar issues in the age of remixing and modifying existing content. How much of a picture, or a song, needs to be altered for it not to count as copyright infringement? As legal cases like this work their way through the courts, collective intelligence will no doubt continue to spread through the cultures of developed countries, as more and more people come to rely on ChatGPT and similar models for more and more tasks. Thus our cultures edge toward the realization that the very idea of intellectual property as something owned by an individual who has exclusive rights to it may need to be rethought [emphasis mine] and reconceptualized on a basis consistent with the reality of collective intelligence [emphasis mine] and the pervasiveness of cognitive assemblages in producing products of value in the contemporary era. [pp. 226-227 in Hayles’ 2025 book, “Bacteria to AI …”]

It certainly seems as if the notion of intellectual property as personal property is being seriously challenged (and not by academics alone) but this state of affairs may be temporary. In particular, the tech titans see a benefit to loosening the rules now but what happens if they see an advantage to tightening the rules?

Neurotechnology, AI, and copyright

Neuralink states clearly that AI is part of its (and presumably other companies’) products, from “Neuralink and AI: Bridging the Gap Between Humans and Machines,” Note: Links have been removed,

The intersection of artificial intelligence (AI) and human cognition is no longer a distant sci-fi dream—it’s rapidly becoming reality. At the forefront of this revolution is Neuralink, a neurotechnology company founded by Elon Musk in 2016, dedicated to creating brain-computer interfaces (BCIs) that seamlessly connect the human brain to machines. With AI advancing at an unprecedented pace, Neuralink aims to bridge the gap between humans and technology, offering transformative possibilities for healthcare, communication, and even human evolution. In this article, we’ll explore how Neuralink and AI are reshaping our future, the science behind this innovation, its potential applications, and the ethical questions it raises.

Robbie Grant, Yue Fei, and Adelaide Egan (plus Articling Students: Aki Kamoshida and Sara Toufic) have given their April 17, 2025 article for McMillan LLP, a Canadian business law firm, a (I couldn’t resist the wordplay) ‘thought provoking’ title, “Who Owns a Thought? Navigating Legal Issues in Neurotech” for a very interesting read, Note 1: Links have been removed, Note 2: I’ve included the numbers for the footnotes but not the footnotes themselves,

The ongoing expansion of Neurotechnology (or “neurotech”) for consumers is raising questions related to privacy and ownership of one’s thoughts, as well as what will happen when technology can go beyond merely influencing humans and enter the realm of control [emphasis mine].

Last year, a group of McGill students built a mind-controlled wheelchair in just 30 days.[1] Brain2Qwerty, Meta’s neuroscience project which translates brain activity into text, claims to allow for users to “type” with their minds.[2] Neuralink, a company founded by Elon Musk [emphasis mine], is beginning clinical trials in Canada testing a fully wireless, remotely controllable device to be inserted into a user’s brain [emphasis mine].[3] This comes several years after the company released a video of a monkey playing videogames with its mind using a similar implantable device.

The authors have included a good description of neurotech, from their April 17, 2025 article,

Neurotech refers to technology that records, analyzes or modifies the neurons in the human nervous system. Neurotech can be broken down into three subcategories:

    Neuroimaging: technology that monitors brain structure and function;

    Neuromodulation: technology that influences brain function; and

    Brain-Computer Interfaces or “BCIs”: technology that facilitates direct communication between the brain’s electrical activity and an external device, sometimes referred to as brain-machine interfaces.[5]

In the medical and research context, neurotech has been deployed for decades in one form or another. Neuroimaging techniques such as EEG, MRI and PET have been used to study and analyze brain activity.[6] Neuromodulation has also been used for the treatment of various diseases, such as for deep brain stimulation for Parkinson’s disease[7] as well as for cochlear implants.[8] However, the potential for applications of neurotech beyond medical devices is a newer development, accelerated by the arrival of less intrusive neurotech devices, and innovations in artificial intelligence.

My interests here are not the same as the authors’, the focus in this posting is solely on intellectual property, from their April 17, 2025 article,

3.  Intellectual Property

As neurotech continues to advance, it is possible that it will be able to make sense of complex, subconscious data such as dreams. This will present a host of novel IP challenges, which stem from the unique nature of the data being captured, the potential for the technology to generate new insights, and the fundamental questions about ownership and rights in a realm where personal thoughts become part of the technological process.

Ownership of Summarized Data: When neurotech is able to capture subconscious thoughts, [emphasis mine] it will likely process this data into summaries that reflect aspects of an individual’s mental state. The ownership of such summaries, however, can become contentious. On the one hand, it could be argued that the individual, as the originator of their thoughts, should own the summaries. On the other hand, one could argue that the summaries would not exist but for the processing done by the technology and hence the summaries should not be owned (or exclusively owned) by the individual. The challenge may be in determining whether the summary is a transformation of the data that makes it the product of the technology, or whether it remains simply a condensed version of the individual’s thoughts, in which case it makes sense for the individual to retain ownership.

Ownership of Creative Outputs: The situation becomes more complicated if the neurotech produces creative outputs based on the subconscious thoughts captured by the technology. For example, if the neurotech uses subconscious imagery or emotions to create art, music, or other works, who owns the rights to these works? Is the individual whose thoughts were analyzed the creator of the work, or does the technology, which has facilitated and interpreted those thoughts, hold some ownership? This issue is especially pertinent in a world where AI-generated creations are already challenging traditional ideas of IP ownership. For example, in many jurisdictions, ownership of copyrightable works is tied to the individual who conceived them.[27] Uncertainty can arise in cases where works are created with neurotech, where the individual whose thoughts are captured may not be aware of the process, or their thoughts may have been altered or combined with other information to produce the works. These uncertainties could have significant implications for IP ownership, compensation, and the extent to which individuals can control or profit from the thoughts embedded in their own subconscious minds.

The reference to capturing data from subconscious thought and how that might be used in creative outputs is fascinating. This sounds like a description of one of Hayles’ cognitive assemblages with the complicating factor of a technology that is owned by a company. (Will Elon Musk be quite so cavalier about copyright when he could potentially own your thoughts and, consequently, your creative output?)

If you have the time (it’s an 11 minute read according to the authors), the whole April 17, 2025 article is worth it as the authors cover more issues (confidentiality, Health Canada oversight, etc.) than I have included here.

I also stumbled across the issue of neurotech tech companies and ownership of brain data (not copyright but you can see how this all begins to converge) in a February 29, 2024 posting “Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future” where I featured this quote (scroll down about 70% of the way),

Huth [Alexander Huth, assistant professor of Neuroscience and Computer Science at the University of Texas at Austin] and Tang [Jerry Tang, PhD student in the Department of Computer Science at the University of Texas Austin] concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste [Rafael Yuste, a Columbia University neuroscientist] said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

While I’m still with neurotech, there’s another aspect to be considered as noted in my April 5, 2022 posting “Going blind when your neural implant company flirts with bankruptcy (long read).” My long read is probably 15 mins. or more.

Ending on a neurotech device/implant note, here’s a November 20, 2024 University Health Network (UHN) news release burbling happily about their new clinical trial involving Neuralink,

UHN is proud to be selected as the first hospital in Canada to perform a pioneering neurosurgical procedure involving the Neuralink implantable device as part of the CAN-PRIME study, marking a significant milestone in the field of medical innovation.

This first procedure in Canada represents an exciting new research direction in neurosurgery and will involve the implantation of a wireless brain-computer interface (BCI) at UHN’s Toronto Western Hospital, the exclusive surgical site in Canada.

“We are incredibly proud to be at the forefront of this research advancement in neurosurgery,” says Dr. Kevin Smith, UHN’s President and CEO. “This progress is a testament to the dedication and expertise of our world-leading medical and research professionals, as well as our commitment to providing the most innovative and effective treatments for patients.

“As the first and exclusive surgical site in Canada to perform this procedure, we will be continuing to shape the future of neurological care and further defining our track record for doing what hasn’t been done.”

Neuralink has received Health Canada approval to begin recruiting for this clinical trial in Canada.

The goal of the CAN-PRIME Study (short for Canadian Precise Robotically Implanted Brain-Computer Interface), according to the study synopsis, is “to evaluate the safety of our implant (N1) and surgical robot (R1) and assess the initial functionality of our BCI for enabling people with quadriplegia to control external devices with their thoughts [emphasis mine].”

Patients with limited or no ability to use both hands due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS), may be eligible for the CAN-PRIME Study.

“This landmark surgery has the potential to transform and improve outcomes for patients who previously had limited options,” says Dr. Andres Lozano, the Alan and Susan Hudson Cornerstone Chair in Neurosurgery at UHN and lead of the CAN-PRIME study at UHN.

The procedure, which combines state-of-the-art technology and advanced surgical techniques, will be carried out by a multidisciplinary team of neurosurgeons, neuroscientists and medical experts at UHN.

“This is a perfect example of how scientific discovery, technological innovation, and clinical expertise come together to develop new approaches to continuously improve patient care,” says Dr. Brad Wouters, Executive Vice President of Science & Research at UHN. “As Canada’s No. 1 research hospital, we are proud to be leading this important trial in Canada that has the goal to improve the lives of individuals living with quadriplegia or ALS.”

The procedure has already generated significant attention within the medical community and further studies are planned to assess its long-term effectiveness and safety.

UHN is recognized for finding solutions beyond boundaries, achieving firsts and leading the development and implementation of the latest breakthroughs in health care to benefit patients across Canada, and around the world.

Not just human brains: cyborg bugs and other biohybrids

Brain-computer interfaces don’t have to passively accept instructions from humans; they could also be giving instructions to humans. I don’t have anything that makes the possibility explicit except by inference. For example, let’s look at cyborg bugs, from a May 13, 2025 article “We can turn bugs into flying, crawling RoboCops. Does that mean we should?” by Carlyn Zwarenstein for salon.com, Note: Links have been removed,

Imagine a tiny fly-like drone with delicate translucent wings and multi-lensed eyes, scouting out enemies who won’t even notice it’s there. Or a substantial cockroach-like robot, off on a little trip to check out a nuclear accident, wearing a cute little backpack, fearless, regardless of what the Geiger counter says. These little engineered creatures might engage in search and rescue — surveillance, environmental or otherwise — inspecting dangerous areas you would not want to send a human being into, like a tunnel or building that could collapse at any moment, or a facility where there’s been a gas leak.

These robots are blazing new ethical terrain. That’s because they are not animals performing tasks for humans, nor are they robots that draw inspiration from nature. The drone that looks like a fly is both machine and bug. The Madagascar hissing cockroach robot doesn’t just perfectly mimic the attributes that allow cockroaches to withstand radiation and poisonous air: it is a real life animal, and it is also a mechanical creature controlled remotely. These are tiny cyborgs, though even tinier ones exist, involving microbes like bacteria or even a type of white blood cell. Like fictional police officer Alex Murphy who is remade into RoboCop, these real-life cyborgs act via algorithms rather than free will.

Even as the technology for the creation of biohybrids, of which cyborgs are just the most ethically fraught category, has advanced in leaps and bounds, separate research on animal consciousness has been revealing the basis for considering insects just as we might other animals. (If you look at a tree of life, you will see that insects are indeed animals and therefore share part of our evolutionary history: even our nervous systems are not completely alien to theirs). Do we have the right to turn insects into cyborgs that we can control to do our bidding, including our military bidding, if they feel pain or have preferences or anxieties?

… the boundaries that keep an insect — a hawkmoth or cockroach, in one such project — under human control can be invisibly and automatically generated from the very backpack it wears, with researchers nudging it with neurostimulation pulses to guide it back within the boundaries of its invisible fence if it tries to stray away.

As a society, you can’t really say we’ve spent significant time considering the ethics of taking a living creature and using it literally as a machine, although reporter Ariel Yu, reviewing some of the factors to take into account in a 2024 story inspired by the backpack-wearing roaches, framed the ethical dilemma not in terms of the use of an animal as a machine — you could say using an ox to pull a cart is doing that — but specifically the fact that we’re now able to take direct control of an animal’s nervous system. Though as a society we haven’t really talked this through either, within the field of bioengineering, researchers are giving it some attention.

If it can be done to bugs and other creatures, why not to us? (ethics???)

The issues raised in Zwarenstein’s article could also be applied to humans. Given how I started this piece, ‘who owns a thought?’ could become ‘where did the thought come from?’ Could a brain-computer interface (BCI) enabled by AI be receiving thoughts from someone other than the person who has it implanted in their brain? And, if you’re the one with the BCI, how would you know? In short, could your BCI or other implant be hacked? That’s definitely a possibility researchers at Rice University (Texas, US) have prepared for, according to my March 27, 2025 posting, “New security protocol to protect miniaturized wireless medical implants from cyberthreats.”

Even with no ‘interference’, and setting aside the question of corporate ownership, if all the thoughts weren’t ‘yours’, would you still be you?

Symbiosis and your implant

I have a striking excerpt from a September 17, 2020 post (Turning brain-controlled wireless electronic prostheses into reality plus some ethical points),

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

This isn’t the first time I’ve used that excerpt or the first time I’ve waded into the ethics question regarding implants. For the curious, I mentioned the April 5, 2022 post “Going blind when your neural implant company flirts with bankruptcy (long read)” earlier and there’s a February 23, 2024 post “Neural (brain) implants and hype (long read)” as well as others.

So, who does own a thought?

Hayles’ notion of assemblages calls into question the idea of a ‘self’ or, if you will, an ‘I’. (Segue: Hayles will be in Toronto for the Who’s Afraid of AI? Arts, Sciences, and the Futures of Intelligence conference, October 23 – 24, 2025.) More questions are raised by some older research on our relationships with AI (see my December 3, 2021 posting “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)”) and by newer research (see my upcoming post “A collaborating robot as part of your “extended” body”).

While I seem to have wandered into labyrinthine philosophical questions, I suspect lawyers will work towards more concrete definitions so that any questions that arise, such as ‘who owns a thought?’, can be argued and resolved in court.