Tag Archives: N. Katherine Hayles

Hybrid human–AI collectives make the most accurate medical diagnoses

It almost seems as if researchers at the Max Planck Institute have been reading N. Katherine Hayles’ 2025 book, “Bacteria to AI: Human Futures with our Nonhuman Symbionts” mentioned in my October 21, 2025 posting and in my October 23, 2025 posting.

Caption: Hybrid diagnostic collectives consisting of humans and AI make significantly more accurate diagnoses than either medical professionals or AI systems alone. Credit: MPI for Human Development

A June 20, 2025 Max Planck Institute for Human Development press release (also on EurekAlert) focuses on research that explores a collaborative/cooperative relationship between humans and AI systems,

Diagnostic errors are among the most serious problems in everyday medical practice. AI systems—especially large language models (LLMs) like ChatGPT-4, Gemini, or Claude 3—offer new ways to efficiently support medical diagnoses. Yet these systems also entail considerable risks—for example, they can “hallucinate” and generate false information. In addition, they reproduce existing social or medical biases and make mistakes that are often perplexing to humans.  

An international research team, led by the Max Planck Institute for Human Development and in collaboration with partners from the Human Diagnosis Project (San Francisco) and the Institute of Cognitive Sciences and Technologies of the Italian National Research Council (CNR-ISTC Rome), investigated how humans and AI can best collaborate. The result: hybrid diagnostic collectives—groups consisting of human experts and AI systems—are significantly more accurate than collectives consisting solely of humans or AI. This holds particularly for complex, open-ended diagnostic questions with numerous possible solutions, rather than simple yes/no decisions. “Our results show that cooperation between humans and AI models has great potential to improve patient safety,” says lead author Nikolas Zöller, postdoctoral researcher at the Center for Adaptive Rationality of the Max Planck Institute for Human Development. 

Realistic simulations using more than 2,100 clinical vignettes 

The researchers used data from the Human Diagnosis Project, which provides clinical vignettes—short descriptions of medical case studies—along with the correct diagnoses. Using more than 2,100 of these vignettes, the study compared the diagnoses made by medical professionals with those of five leading AI models. In the central experiment, various diagnostic collectives were simulated: individuals, human collectives, AI models, and mixed human–AI collectives. In total, the researchers analyzed more than 40,000 diagnoses. Each was classified and evaluated according to international medical standards (SNOMED CT). 

Humans and machines complement each other—even in their errors 

The study shows that combining multiple AI models improved diagnostic quality. On average, the AI collectives outperformed 85% of human diagnosticians. However, there were numerous cases in which humans performed better. Interestingly, when AI failed, humans often knew the correct diagnosis. 
 
The biggest surprise was that combining both worlds led to a significant increase in accuracy. Even adding a single AI model to a group of human diagnosticians—or vice versa—substantially improved the result. The most reliable outcomes came from collective decisions involving multiple humans and multiple AIs. The explanation is that humans and AI make systematically different errors. When AI failed, a human professional could compensate for the mistake—and vice versa. This so-called error complementarity makes hybrid collectives so powerful. “It’s not about replacing humans with machines. Rather, we should view artificial intelligence as a complementary tool that unfolds its full potential in collective decision-making,” says co-author Stefan Herzog, Senior Research Scientist at the Max Planck Institute for Human Development.  
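To make the idea of error complementarity concrete, here is a toy simulation, not the study’s actual method or data: the case-difficulty probabilities, the 49-label diagnosis space, and the plurality-vote rule below are all invented for illustration. It assumes only what the press release describes, that humans and AI fail on systematically different cases, and that correct answers cluster on the true diagnosis while wrong answers scatter.

```python
# Toy model of error complementarity in hybrid diagnostic collectives.
# All numbers here are illustrative assumptions, not values from the PNAS paper.
import random
from collections import Counter

random.seed(42)

N_CASES = 2000
WRONG_LABELS = list(range(1, 50))  # label 0 stands for the correct diagnosis

def case_type():
    """Each vignette is easy for everyone, systematically hard for
    humans, or systematically hard for AI (complementary errors)."""
    r = random.random()
    return "easy" if r < 0.6 else ("hard_human" if r < 0.8 else "hard_ai")

def answer(kind, ctype):
    """One agent's top diagnosis: correct answers agree on label 0,
    wrong answers scatter across many plausible alternatives."""
    fails = (kind == "human" and ctype == "hard_human") or \
            (kind == "ai" and ctype == "hard_ai")
    return random.choice(WRONG_LABELS) if fails else 0

def accuracy(team):
    """Plurality vote per simulated vignette, ties broken at random."""
    correct = 0
    for _ in range(N_CASES):
        ctype = case_type()
        votes = Counter(answer(kind, ctype) for kind in team)
        top = max(votes.values())
        winner = random.choice([lab for lab, n in votes.items() if n == top])
        correct += (winner == 0)
    return correct / N_CASES

for name, team in [("1 human", ["human"]),
                   ("5 humans", ["human"] * 5),
                   ("5 AIs", ["ai"] * 5),
                   ("3 humans + 2 AIs", ["human"] * 3 + ["ai"] * 2)]:
    print(f"{name:17s} accuracy = {accuracy(team):.2f}")
```

In this toy setup, every single-type collective tops out around 80% because its members share blind spots, while the mixed team approaches 100%: whenever one kind of agent fails, the other kind still votes, in agreement, for the correct diagnosis.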

However, the researchers also emphasize the limitations of their work. The study only considered text-based case vignettes—not actual patients in real clinical settings. Whether the results can be transferred directly to practice remains a question for future studies to address. Likewise, the study focused solely on diagnosis, not treatment, and a correct diagnosis does not necessarily guarantee optimal treatment. 

It also remains uncertain how AI-based support systems will be accepted in practice by medical staff and patients. The potential risks of bias and discrimination by both AI and humans, particularly in relation to ethnic, social, or gender differences, likewise require further research. 


Wide range of applications for hybrid human–AI collectives 

The study is part of the Hybrid Human Artificial Collective Intelligence in Open-Ended Decision Making (HACID) project, funded under Horizon Europe, which aims to promote the development of future clinical decision-support systems through the smart integration of human and machine intelligence. The researchers see particular potential in regions where access to medical care is limited. Hybrid human–AI collectives could make a crucial contribution to greater healthcare equity in such areas. 

“The approach can also be transferred to other critical areas—such as the legal system, disaster response, or climate policy—anywhere that complex, high-risk decisions are needed. For example, the HACID project is also developing tools to enhance decision-making in climate adaptation,” says Vito Trianni, co-author and coordinator of the HACID project. 

In brief: 

  • Hybrid diagnostic collectives consisting of humans and AI make significantly more accurate diagnoses than either medical professionals or AI systems alone—because they make systematically different errors that cancel each other out. 
  • The study analyzed over 40,000 diagnoses made by humans and machines in response to more than 2,100 realistic clinical vignettes. 
  • Adding an AI model to a human collective—or vice versa—noticeably improved diagnostic quality; hybrid collective decisions made by several humans and machines achieved the best results. 
  • These findings highlight the potential for greater patient safety and more equitable healthcare, especially in underserved regions. However, further research is needed on practical implementation and ethical considerations. 

Here’s a link to and a citation for the paper,

Human–AI collectives most accurately diagnose clinical vignettes by Nikolas Zöller, Julian Berger, Irving Lin, Nathan Fu, Jayanth Komarneni, Gioele Barabucci, Kyle Laskowski, Victor Shia, Benjamin Harack, Eugene A. Chu, Vito Trianni, Ralf H. J. M. Kurvers, and Stefan M. Herzog. PNAS, June 13, 2025, 122 (24) e2426153122. DOI: https://doi.org/10.1073/pnas.2426153122

This paper is open access.

I have links to a couple of the projects mentioned in the press release, (1) Human Diagnosis Project (Human Dx) and (2) HACID: Hybrid Human Artificial Collective Intelligence in Open-Ended Domains or Hybrid Human Artificial Collective Intelligence in Open-Ended Decision Making (HACID). I’m not sure why there’s a difference in the name.

Additionally, more information about HACID can be inferred from its webpage on the AI-on-Demand (AIoD) website, according to these FAQs (Frequently Asked Questions),

What is the AI-on-Demand (AIoD) platform?

The AIoD platform is a collaborative, community-driven digital space that supports European research and innovation in Artificial Intelligence (AI), while promoting the European values of quality, trustworthiness, and explainability.

Is AIoD only for academic researchers?

Not at all. While it has a strong research foundation, AIoD also serves industry professionals, startups, students, and public organizations interested in leveraging or contributing to AI.

Interesting, eh?

A collaborating robot as part of your “extended” body

Caption: Researchers from the Istituto Italiano di Tecnologia (IIT) in Genoa (Italy) and Brown University in Providence (USA) have discovered that people sense the hand of a humanoid robot as part of their body schema, particularly when it comes to carrying out a task together, like slicing a bar of soap. Credit: IIT-Istituto Italiano di Tecnologia

A September 12, 2025 Istituto Italiano di Tecnologia (IIT) press release (also on EurekAlert but published on September 11, 2025) describes some intriguing research into robot/human relationships,

Researchers from the Istituto Italiano di Tecnologia (IIT) in Genoa (Italy) and Brown University in Providence (USA) have discovered that people sense the hand of a humanoid robot as part of their body schema, particularly when it comes to carrying out a task together, like slicing a bar of soap. The study has been published in the journal iScience and can pave the way for a better design of robots that have to function in close contact with humans, such as those used in rehabilitation.

The project, led by Alessandra Sciutti, Principal Investigator of the CONTACT unit at IIT, in collaboration with Brown University professor Joo-Hyun Song, explored whether unconscious mechanisms that shape interactions between humans also emerge in interactions between a person and a humanoid robot.

Researchers focused on a phenomenon known as the “near-hand effect”, in which the presence of a hand near an object alters a person’s visual attention, because the brain is preparing to use the object. The study also considers the human brain’s ability to create a “body schema” that lets it move more efficiently in the surrounding space, integrating objects into that schema as well.

Through an unconscious process shaped by external stimuli, the brain builds a “body schema” that helps us avoid obstacles or grab objects without looking at them. Any tool can become part of this internal map as long as it is useful for a task, like a tennis racket that feels like an arm extension to the player who uses it daily. Since the body schema is constantly evolving, the research team led by Sciutti explored whether a robot could also become part of it.

Giulia Scorza Azzarà, PhD student at IIT and first author of the study, designed and analyzed the results of experiments where people carried out a joint task with iCub, the IIT’s child-sized humanoid robot. They sliced a bar of soap together by using a steel wire, alternately pulled by the person and the robotic partner.

After the activity, researchers verified the integration of the robotic hand into the body schema, quantifying the near-hand effect with the Posner cueing task. This test challenges participants to press a key as quickly as possible to indicate on which side of the screen an image appears, while an object placed right next to the screen influences their attention. Data from 30 volunteers showed a specific pattern: participants reacted faster when images appeared next to the robot’s hand, showing that their brains had treated it much like a near hand. Thanks to control experiments, researchers proved that this effect appeared only in those who had sliced the soap with the robot.
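For readers curious what “quantifying the near-hand effect” looks like in practice, here is a minimal sketch of the arithmetic: the effect is typically reported as the reaction-time difference between targets appearing beside the hand and targets appearing on the opposite side. The trial data below are invented for illustration; the study’s real dataset, conditions, and statistical analysis are in the iScience paper.

```python
# Minimal sketch of quantifying a near-hand effect from Posner-task data.
# The trials below are hypothetical; only the arithmetic is illustrated.
from statistics import mean

# Each trial: (side where the target appeared, reaction time in ms).
# The robot's hand sits next to the "near" side of the screen.
trials = [
    ("near", 312), ("near", 298), ("near", 305), ("near", 290),
    ("far", 335), ("far", 341), ("far", 322), ("far", 330),
]

near_rts = [rt for side, rt in trials if side == "near"]
far_rts = [rt for side, rt in trials if side == "far"]

# A positive difference means faster responses beside the robot's hand,
# i.e., the hand is drawing attention much as a human hand would.
near_hand_effect = mean(far_rts) - mean(near_rts)
print(f"mean RT near robot hand: {mean(near_rts):.1f} ms")
print(f"mean RT far from hand:   {mean(far_rts):.1f} ms")
print(f"near-hand effect:        {near_hand_effect:.1f} ms")
```

A reliably positive difference for participants who had collaborated with iCub, and not for controls, is the pattern the researchers report.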

The strength of the near-hand effect also depended on how the humanoid robot moved. When the robot’s gestures were broad, fluid, and well synchronized with the human ones, the effect was stronger, resulting in a better integration of iCub’s hand into the participant’s body schema. Physical closeness between the robotic hand and the person also played a role: the nearer the robot’s hand was to the participant during the slicing task, the greater the effect.

To assess how participants perceived the robot after working together on the task, researchers gathered information through questionnaires. The results show that the more participants saw iCub as competent and pleasant, the more intense the cognitive effect was. Attributing human-like traits or emotions to iCub further boosted the hand’s integration in the body schema; in other words, partnership and empathy enhanced the cognitive bond with the robot.

The team carried out experiments with a humanoid robot under controlled conditions, paving the way for a deeper understanding of human-machine interactions. Psychological factors will be essential to designing robots able to adapt to human stimuli and provide a more intuitive and effective robotic experience. These are crucial features for applications of robotics in motor rehabilitation, virtual reality, and assistive technologies.

The research is part of the ERC-funded wHiSPER project, coordinated by IIT’s CONTACT (COgNiTive Architecture for Collaborative Technologies) unit.

Here’s a link to and a citation for the paper,

Collaborating with a robot biases human spatial attention by Giulia Scorza Azzarà, Joshua Zonca, Francesco Rea, Joo-Hyun Song, Alessandra Sciutti. iScience, Volume 28, Issue 7, 18 July 2025, 112791. DOI: https://doi.org/10.1016/j.isci.2025.112791 Available online 2 June 2025; Version of Record 18 June 2025. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.

This paper is open access.

This business of a robot becoming an extension of your body, i.e., becoming part of you, is reminiscent of some issues brought up in my October 21, 2025 posting “Copyright, artificial intelligence, and thoughts about cyborgs,” such as N. Katherine Hayles’ assemblages and, more specifically, the issues brought up in the section titled “Symbiosis and your implant.”

Canadian research into relationships with domestic robots

Zhao Zhao’s (assistant professor in Computer Science at the University of Guelph) September 11, 2025 essay for The Conversation highlights results from one of her recently published studies, Note: Links have been removed,

Social companion robots are no longer just science fiction. In classrooms, libraries and homes, these small machines are designed to read stories, play games or offer comfort to children. They promise to support learning and companionship, yet their role in family life often extends beyond their original purpose.

In our recent study of families in Canada and the United States, we found that even after a children’s reading robot “retired” or was no longer in active and regular use, most households chose to keep it — treating it less like a gadget and more like a member of the family.

Luka is a small, owl-shaped reading robot, designed to scan and read picture books aloud, making storytime more engaging for young children.

In 2021, my colleague Rhonda McEwen and I set out to explore how 20 families used Luka. We wanted to study not just how families used Luka initially, but how that relationship was built and maintained over time, and what Luka came to mean in the household. Our earlier work laid the foundation for this by showing how families used Luka in daily life and how the bond grew over the first months of use.

When we returned in 2025 to follow up with 19 of those families, we were surprised by what we found. Eighteen households had chosen to keep Luka, even though its reading function was no longer useful to their now-older children. The robot lingered not because it worked better than before, but because it had become meaningful.

A deep, emotional connection

Children often spoke about Luka in affectionate, human-like terms. One called it “my little brother.” Another described it as their “only pet.” These weren’t just throwaway remarks — they reflected the deep emotional place the robot had taken in their everyday lives.

Because Luka had been present during important family rituals like bedtime reading, children remembered it as a companion.

Parents shared similar feelings. Several explained that Luka felt like “part of our history.” For them, the robot had become a symbol of their children’s early years, something they could not imagine discarding. One family even held a small “retirement ceremony” before passing Luka on to a younger cousin, acknowledging its role in their household.

Other families found new, practical uses. Luka was repurposed as a music player, a night light or a display item on a bookshelf next to other keepsakes. Parents admitted they continued to charge it because it felt like “taking care of” the robot.

The device had long outlived its original purpose, yet families found ways to integrate it into daily routines.

Luka the robot. Image by Dr Zhao Zhao, University of Guelph

Zhao also wrote an August 8, 2025 essay about her 2025 followup study on families and their Luka robots for Frontiers Media,

What happens to a social robot after it retires? 

Four years ago, we placed a small owl-shaped reading robot named Luka into 20 families’ homes. At the time, the children were preschoolers, just learning to read. Luka’s job was clear: scan the pages of physical picture books and read them aloud, helping children build early literacy skills. 

That was in 2021. In 2025, we went back — not expecting to find much. The children had grown. The reading level was no longer age-appropriate. Surely, Luka’s work was done. 

Instead, we found something extraordinary.

18 of 19 families still had their robot. Many were still charging it. A few used it as a music player. Some simply left it on a shelf—next to baby books and keepsakes—its eyes still glowing gently. Luka had stayed.

As more families bring AI-powered companions into their homes, we’ll need to better understand not only how they’re used — but how they’re remembered.

Because sometimes, the robot stays.

For the curious, here’s a link to and a citation for the 2025 followup study,

The robot that stayed: understanding how children and families engage with a retired social robot by Zhao Zhao, Rhonda McEwen. Front. Robot. AI, 07 August 2025, Sec. Human-Robot Interaction, Volume 12 – 2025. DOI: https://doi.org/10.3389/frobt.2025.1628089

This paper is open access.

Where does this leave us?

Trying to distinguish between robots and artificial intelligence (AI) can mean wading into murky waters. Not all robots have AI, not all AI is embodied in a robot, and cyborgs add more complexity.

N. Katherine Hayles’ 2025 book “Bacteria to AI: Human Futures with our Nonhuman Symbionts” mentioned in my October 21, 2025 posting “Copyright, artificial intelligence, and thoughts about cyborgs” does not make a distinction, which may or may not be important. We just don’t know. It seems we are in the process of redefining our relationships to the life and the objects around us as we redefine what it means to be a person.

Copyright, artificial intelligence, and thoughts about cyborgs

I’ve been holding this one for a while and now, it seems like a good followup to yesterday’s October 20, 2025 posting about “AI and the Art of Being Human,” which touches on co-writing, and to my October 13, 2025 posting and its mention of “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference and arts festival at the University of Toronto (scroll down to the “Who’s Afraid of AI …” subhead).

With the advent of some of the latest advances in artificial intelligence (AI) and its use in creative content, the view on copyright (as a form of property) seems to be shifting. In putting this post together I’ve highlighted a blog posting that focuses on copyright and AI as it is commonly viewed. Following that piece is a look at N. Katherine Hayles’ concept of AI as one of a number of cognitive assemblages and the implications of that concept where AI and copyright are concerned.

Then, it gets more complicated. What happens when your neural implant has an AI component? It’s a question asked by members of McMillan LLP, a Canadian business law firm, in their investigation of copyright. (The implication of this type of cognitive assemblage is not explicitly considered in Hayles’ work.) Following on the idea of a neural implant enhanced with AI, cyborg bugs (they too can have neural implants) are considered.

Uncomplicated vision of AI and copyright future

Glyn Moody’s May 15, 2025 posting on techdirt.com provides a very brief overview of the last 100 years of copyright and goes on to highlight some of the latest AI comments from tech industry titans, Note: Links have been removed,

For the last hundred years or so, the prevailing dogma has been that copyright is an unalloyed good [emphasis mine], and that more of it is better. Whether that was ever true is one question, but it is certainly not the case since we entered the digital era, for reasons explained at length in Walled Culture the book (free digital versions available). Despite that fact, recent attempts to halt the constant expansion and strengthening of copyright have all foundered. Part of the problem is that there has never been a constituency with enough political clout to counter the huge power of the copyright industry and its lobbyists.

Until now. The latest iteration of artificial intelligence has captured the attention of politicians around the world [emphasis mine]. It seems that the latter can’t do enough to promote and support it, in the hope of deriving huge economic benefits, both directly, in the form of local AI companies worth trillions, and indirectly, through increased efficiency and improved services. That current favoured status has given AI leaders permission to start saying the unsayable: that copyright is an obstacle to progress [emphasis mine], and should be reined in, or at least muzzled, in order to allow AI to reach its full potential. …

In its own suggestions for the AI Action Plan, Google spells out what this means:

Balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances. These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation. Balanced copyright laws that ensure access to publicly available scientific papers, for example, are essential for accelerating AI in science, particularly for applications that sift through scientific literature for insights or new hypotheses.

… some of the biggest personalities in the tech world have gone even further, reported here by TechCrunch:

Jack Dorsey, co-founder of Twitter (now X) and Square (now Block), sparked a weekend’s worth of debate around intellectual property, patents, and copyright, with a characteristically terse post declaring, “delete all IP law.”

X’s current owner, Elon Musk, quickly replied, “I agree.”

It’s not clear what exactly brought these comments on, but they come at a time when AI companies, including OpenAI (which Musk co-founded, competes with, and is challenging in court), are facing numerous lawsuits alleging that they’ve violated copyright to train their models.

Unsurprisingly, that bold suggestion provoked howls of outrage from various players in the copyright world. That was to be expected. But the fact that big names like Musk and Dorsey were happy to cause such a storm is indicative of the changed atmosphere in the world of copyright and beyond. Indeed, there are signs that the other main intellectual monopolies – patents and trademarks – are also under pressure. Calling into question the old ways of doing things in these fields will also weaken the presumption that copyright must be preserved in its current state.

Yes, it is interesting to see tech moguls such as Jack Dorsey and Elon Musk take a more ‘enlightened’ approach to copyright. However, there may be a few twists and turns to this story as it continues to develop.

Copyright and cognitive assemblages

I need to set the stage with something coming from N. Katherine Hayles’ 2025 book “Bacteria to AI: Human Futures with our Nonhuman Symbionts.” She suggests that we (humans) will be members in cognitive assemblages including bacteria, plants, cells, AI, and more. She then decouples cognition from consciousness and claims that entities such as bacteria are capable of ‘nonconscious cognition’.

Hayles avoids the words ‘thinking’ and ‘thought’ by using cognition and providing this meaning for the word,

… “cognition is a process that interprets information within contexts that connect it with meaning” (Hayles 2017, 22 [in “Unthought: The Power of the Cognitive Nonconscious,” University of Chicago Press]). Note: Hayles quotes herself on pp. 8-9 in 2025’s “Bacteria to AI …”

Hayles then develops the notion of a cognitive assemblage made up of conscious (e.g. human) and nonconscious (e.g. AI agent) cognitions. The part that most interests me is where Hayles examines copyright and cognitive assemblages,

… what happens to the whole idea of intellectual property when an AI has perused copyrighted works during its training and incorporated them into its general sense of how to produce a picture of X or a poem about Y. Already artists and stakeholders are confronting similar issues in the age of remixing and modifying existing content. How much of a picture, or a song, needs to be altered for it not to count as copyright infringement? As legal cases like this work their way through the courts, collective intelligence will no doubt continue to spread through the cultures of developed countries, as more and more people come to rely on ChatGPT and similar models for more and more tasks. Thus our cultures edge toward the realization that the very idea of intellectual property as something owned by an individual who has exclusive rights to it may need to be rethought [emphasis mine] and reconceptualized on a basis consistent with the reality of collective intelligence [emphasis mine] and the pervasiveness of cognitive assemblages in producing products of value in the contemporary era. [pp. 226 – 227 in Hayles’ 2025 book, “Bacteria to AI …”]

It certainly seems as if the notion of intellectual property as personal property is being seriously challenged (and not by academics alone) but this state of affairs may be temporary. In particular, the tech titans see a benefit to loosening the rules now but what happens if they see an advantage to tightening the rules?

Neurotechnology, AI, and copyright

Neuralink states clearly that AI is part of their (and presumably other companies’) products, from “Neuralink and AI: Bridging the Gap Between Humans and Machines,” Note: Links have been removed,

The intersection of artificial intelligence (AI) and human cognition is no longer a distant sci-fi dream—it’s rapidly becoming reality. At the forefront of this revolution is Neuralink, a neurotechnology company founded by Elon Musk in 2016, dedicated to creating brain-computer interfaces (BCIs) that seamlessly connect the human brain to machines. With AI advancing at an unprecedented pace, Neuralink aims to bridge the gap between humans and technology, offering transformative possibilities for healthcare, communication, and even human evolution. In this article, we’ll explore how Neuralink and AI are reshaping our future, the science behind this innovation, its potential applications, and the ethical questions it raises.

Robbie Grant, Yue Fei, and Adelaide Egan (plus articling students Aki Kamoshida and Sara Toufic) have given their April 17, 2025 article for McMillan LLP, a Canadian business law firm, a (I couldn’t resist the wordplay) ‘thought-provoking’ title, “Who Owns a Thought? Navigating Legal Issues in Neurotech,” and it makes for a very interesting read, Note 1: Links have been removed, Note 2: I’ve included the numbers for the footnotes but not the footnotes themselves,

The ongoing expansion of Neurotechnology (or “neurotech”) for consumers is raising questions related to privacy and ownership of one’s thoughts, as well as what will happen when technology can go beyond merely influencing humans and enter the realm of control [emphasis mine].

Last year, a group of McGill students built a mind-controlled wheelchair in just 30 days.[1] Brain2Qwerty, Meta’s neuroscience project which translates brain activity into text, claims to allow for users to “type” with their minds.[2] Neuralink, a company founded by Elon Musk [emphasis mine], is beginning clinical trials in Canada testing a fully wireless, remotely controllable device to be inserted into a user’s brain [emphasis mine].[3] This comes several years after the company released a video of a monkey playing videogames with its mind using a similar implantable device.

The authors have included a good description of neurotech, from their April 17, 2025 article,

Neurotech refers to technology that records, analyzes or modifies the neurons in the human nervous system. Neurotech can be broken down into three subcategories:

    Neuroimaging: technology that monitors brain structure and function;

    Neuromodulation: technology that influences brain function; and

    Brain-Computer Interfaces or “BCIs”: technology that facilitates direct communication between the brain’s electrical activity and an external device, sometimes referred to as brain-machine interfaces.[5]

In the medical and research context, neurotech has been deployed for decades in one form or another. Neuroimaging techniques such as EEG, MRI and PET have been used to study and analyze brain activity.[6] Neuromodulation has also been used for the treatment of various diseases, such as for deep brain stimulation for Parkinson’s disease[7] as well as for cochlear implants.[8] However, the potential for applications of neurotech beyond medical devices is a newer development, accelerated by the arrival of less intrusive neurotech devices, and innovations in artificial intelligence.

My interests here are not the same as the authors’; the focus in this posting is solely on intellectual property, from their April 17, 2025 article,

3.  Intellectual Property

As neurotech continues to advance, it is possible that it will be able to make sense of complex, subconscious data such as dreams. This will present a host of novel IP challenges, which stem from the unique nature of the data being captured, the potential for the technology to generate new insights, and the fundamental questions about ownership and rights in a realm where personal thoughts become part of the technological process.

Ownership of Summarized Data: When neurotech is able to capture subconscious thoughts, [emphasis mine] it will likely process this data into summaries that reflect aspects of an individual’s mental state. The ownership of such summaries, however, can become contentious. On the one hand, it could be argued that the individual, as the originator of their thoughts, should own the summaries. On the other hand, one could argue that the summaries would not exist but for the processing done by the technology and hence the summaries should not be owned (or exclusively owned) by the individual. The challenge may be in determining whether the summary is a transformation of the data that makes it the product of the technology, or whether it remains simply a condensed version of the individual’s thoughts, in which case it makes sense for the individual to retain ownership.

Ownership of Creative Outputs: The situation becomes more complicated if the neurotech produces creative outputs based on the subconscious thoughts captured by the technology. For example, if the neurotech uses subconscious imagery or emotions to create art, music, or other works, who owns the rights to these works? Is the individual whose thoughts were analyzed the creator of the work, or does the technology, which has facilitated and interpreted those thoughts, hold some ownership? This issue is especially pertinent in a world where AI-generated creations are already challenging traditional ideas of IP ownership. For example, in many jurisdictions, ownership of copyrightable works is tied to the individual who conceived them.[27] Uncertainty can arise in cases where works are created with neurotech, where the individual whose thoughts are captured may not be aware of the process, or their thoughts may have been altered or combined with other information to produce the works. These uncertainties could have significant implications for IP ownership, compensation, and the extent to which individuals can control or profit from the thoughts embedded in their own subconscious minds.

The reference to capturing data from subconscious thought and how that might be used in creative outputs is fascinating. This sounds like a description of one of Hayles’ cognitive assemblages with the complicating factor of a technology that is owned by a company. (Will Elon Musk be quite so cavalier about copyright when he could potentially own your thoughts and, consequently, your creative output?)

If you have the time (it’s an 11 minute read according to the authors), the whole April 17, 2025 article is worth it as the authors cover more issues (confidentiality, Health Canada oversight, etc.) than I have included here.

I also stumbled across the issue of neurotech companies and ownership of brain data (not copyright but you can see how this all begins to converge) in a February 29, 2024 posting “Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future” where I featured this quote (scroll down about 70% of the way),

Huth [Alexander Huth, assistant professor of Neuroscience and Computer Science at the University of Texas at Austin] and Tang [Jerry Tang, PhD student in the Department of Computer Science at the University of Texas Austin] concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste [Rafael Yuste, a Columbia University neuroscientist] said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

While I’m still with neurotech, there’s another aspect to be considered as noted in my April 5, 2022 posting “Going blind when your neural implant company flirts with bankruptcy (long read).” My long read is probably 15 mins. or more.

Ending on a neurotech device/implant note, here’s a November 20, 2024 University Health Network (UHN) news release burbling happily about their new clinical trial involving Neuralink,

UHN is proud to be selected as the first hospital in Canada to perform a pioneering neurosurgical procedure involving the Neuralink implantable device as part of the CAN-PRIME study, marking a significant milestone in the field of medical innovation.

This first procedure in Canada represents an exciting new research direction in neurosurgery and will involve the implantation of a wireless brain-computer interface (BCI) at UHN’s Toronto Western Hospital, the exclusive surgical site in Canada.

“We are incredibly proud to be at the forefront of this research advancement in neurosurgery,” says Dr. Kevin Smith, UHN’s President and CEO. “This progress is a testament to the dedication and expertise of our world-leading medical and research professionals, as well as our commitment to providing the most innovative and effective treatments for patients.

“As the first and exclusive surgical site in Canada to perform this procedure, we will be continuing to shape the future of neurological care and further defining our track record for doing what hasn’t been done.”

Neuralink has received Health Canada approval to begin recruiting for this clinical trial in Canada.

The goal of the CAN-PRIME Study (short for Canadian Precise Robotically Implanted Brain-Computer Interface), according to the study synopsis, is “to evaluate the safety of our implant (N1) and surgical robot (R1) and assess the initial functionality of our BCI for enabling people with quadriplegia to control external devices with their thoughts [emphasis mine].”

Patients with limited or no ability to use both hands due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS), may be eligible for the CAN-PRIME Study.

“This landmark surgery has the potential to transform and improve outcomes for patients who previously had limited options,” says Dr. Andres Lozano, the Alan and Susan Hudson Cornerstone Chair in Neurosurgery at UHN and lead of the CAN-PRIME study at UHN.

The procedure, which combines state-of-the-art technology and advanced surgical techniques, will be carried out by a multidisciplinary team of neurosurgeons, neuroscientists and medical experts at UHN.

“This is a perfect example of how scientific discovery, technological innovation, and clinical expertise come together to develop new approaches to continuously improve patient care,” says Dr. Brad Wouters, Executive Vice President of Science & Research at UHN. “As Canada’s No. 1 research hospital, we are proud to be leading this important trial in Canada that has the goal to improve the lives of individuals living with quadriplegia or ALS.”

The procedure has already generated significant attention within the medical community and further studies are planned to assess its long-term effectiveness and safety.

UHN is recognized for finding solutions beyond boundaries, achieving firsts and leading the development and implementation of the latest breakthroughs in health care to benefit patients across Canada, and around the world.

Not just human brains: cyborg bugs and other biohybrids

Brain-computer interfaces don’t have to be passively accepting instructions from humans; they could also be giving instructions to humans. I don’t have anything that makes the possibility explicit except by inference. For example, let’s look at cyborg bugs, from a May 13, 2025 article “We can turn bugs into flying, crawling RoboCops. Does that mean we should?” by Carlyn Zwarenstein for salon.com, Note: Links have been removed,

Imagine a tiny fly-like drone with delicate translucent wings and multi-lensed eyes, scouting out enemies who won’t even notice it’s there. Or a substantial cockroach-like robot, off on a little trip to check out a nuclear accident, wearing a cute little backpack, fearless, regardless of what the Geiger counter says. These little engineered creatures might engage in search and rescue — surveillance, environmental or otherwise — inspecting dangerous areas you would not want to send a human being into, like a tunnel or building that could collapse at any moment, or a facility where there’s been a gas leak.

These robots are blazing new ethical terrain. That’s because they are not animals performing tasks for humans, nor are they robots that draw inspiration from nature. The drone that looks like a fly is both machine and bug. The Madagascar hissing cockroach robot doesn’t just perfectly mimic the attributes that allow cockroaches to withstand radiation and poisonous air: it is a real life animal, and it is also a mechanical creature controlled remotely. These are tiny cyborgs, though even tinier ones exist, involving microbes like bacteria or even a type of white blood cell. Like fictional police officer Alex Murphy who is remade into RoboCop, these real-life cyborgs act via algorithms rather than free will.

Even as the technology for the creation of biohybrids, of which cyborgs are just the most ethically fraught category, has advanced in leaps and bounds, separate research on animal consciousness has been revealing the basis for considering insects just as we might other animals. (If you look at a tree of life, you will see that insects are indeed animals and therefore share part of our evolutionary history: even our nervous systems are not completely alien to theirs). Do we have the right to turn insects into cyborgs that we can control to do our bidding, including our military bidding, if they feel pain or have preferences or anxieties?

… the boundaries that keep an insect — a hawkmoth or cockroach, in one such project — under human control can be invisibly and automatically generated from the very backpack it wears, with researchers nudging it with neurostimulation pulses to guide it back within the boundaries of its invisible fence if it tries to stray away.

As a society, you can’t really say we’ve spent significant time considering the ethics of taking a living creature and using it literally as a machine, although reporter Ariel Yu, reviewing some of the factors to take into account in a 2024 story inspired by the backpack-wearing roaches, framed the ethical dilemma not in terms of the use of an animal as a machine — you could say using an ox to pull a cart is doing that — but specifically the fact that we’re now able to take direct control of an animal’s nervous system. Though as a society we haven’t really talked this through either, within the field of bioengineering, researchers are giving it some attention.

If it can be done to bugs and other creatures, why not us (ethics???)

The issues raised in Zwarenstein’s article could also be applied to humans. Given how I started this piece, ‘who owns a thought’ could become where did the thought come from? Could a brain-computer interface (BCI) enabled by AI be receiving thoughts from someone other than the person who has it implanted in their brain? And, if you’re the one with the BCI, how would you know? In short, could your BCI or other implant be hacked? That’s definitely a possibility researchers at Rice University (Texas, US) have prepared for according to my March 27, 2025 posting, “New security protocol to protect miniaturized wireless medical implants from cyberthreats.”

Even with no ‘interference’ and setting aside the question of corporate ownership, if all the thoughts weren’t ‘yours’, would you still be you?

Symbiosis and your implant

I have a striking excerpt from a September 17, 2020 post (Turning brain-controlled wireless electronic prostheses into reality plus some ethical points),

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

This isn’t the first time I’ve used that excerpt or the first time I’ve waded into the ethics question regarding implants. For the curious, I mentioned the April 5, 2022 post “Going blind when your neural implant company flirts with bankruptcy (long read)” earlier and there’s a February 23, 2024 post “Neural (brain) implants and hype (long read)” as well as others.

So, who does own a thought?

Hayles’ notion of assemblages calls into question the idea of a ‘self’ or, if you will, an ‘I’. (Segue: Hayles will be in Toronto for the Who’s Afraid of AI? Arts, Sciences, and the Futures of Intelligence conference, October 23 – 24, 2025.) More questions have been raised by some older research about our relationships with AI: (1) see my December 3, 2021 posting “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)” and by newer research: (2) see my upcoming post “A collaborating robot as part of your ‘extended’ body.”

While I seem to have wandered into labyrinthine philosophical questions, I suspect lawyers will work towards more concrete definitions so that any questions that arise such as ‘who owns a thought’ can be argued and resolved in court.

Toronto’s ArtSci Salon is hosting a couple more October 2025 events

I have two art/science events and one art/science conference/festival (IRL [in real life or in person] and Zoom) taking place in Toronto, Ontario.

October 16, 2025

There is a closing event for the “I don’t do Math” series mentioned in my September 8, 2025 posting,

ABOUT
“I don’t do math” is a photographic series referencing dyscalculia, a learning difference affecting a person’s ability to understand and manipulate number-based information.

This initiative seeks to raise awareness about the challenges posed by dyscalculia with educators, fellow mathematicians, and parents, and to normalize its existence, leading to early detection and augmented support. In addition, it seeks to reflect on and question broader issues and assumptions about the role and significance of Mathematics and Math education in today’s changing socio-cultural and economic contexts. 

The exhibition will contain pedagogical information and activities for visitors and students. The artist will also address the extensive research that led to the exhibition. The exhibition will feature two panel discussions, one following the opening and one to conclude the exhibition.

I have some information from an October 12, 2025 ArtSci Salon announcement (received via email) about the “I don’t do math” closing event,

Join us for 

Closing Exhibition Panel Discussion
Thursday, October 16 2025
10:00 am – 12:00 pm, Room 309
The Fields Institute for Research in Mathematical Sciences (or online)

Artist Ann Piché will be in conversation with
Andrew Fiss, Jacqueline Wernimont, Amenda Chow, Ellen Abrams, Michael Barany and JP Ascher

RSVP here

October 21, 2025

The second event mentioned in the October 12, 2025 ArtSci Salon announcement, Note 1: A link has been removed, Note 2: This event is part of a larger series,

Marco Donnarumma 
Monsters of Grace: bodies, sounds, and machines

Tuesday, October 21, 2025
3:30-4:30 PM
Sensorium Research Loft 
4th floor
Goldfarb Centre for Fine Arts
York University

About the talk
What is sound to those who do not hear it? How does one listen to something that cannot be heard? What kind of sensory gaps are created by aiding technologies such as prostheses and artificial intelligence (AI)? As a matter of fact, the majority of non-deaf people hear only partially due to age and personal experience. Still, sound is most often considered through the normalizing viewpoint of the non-deaf. If I become your body, what does sound become for me? Join us to welcome Marco Donnarumma ahead of his new installation/performance at Paul Cadario Conference Room (Oct 22, 8-10 PM, University College [University of Toronto] – 15 King’s College Circle). His talk will focus on this latest work in the context of a larger body of work titled “I Am Your Body,” an ongoing project investigating how normative power is enforced through the technological mediation of the senses.

About the artist:
Marco Donnarumma is an artist, inventor and theorist. His oeuvre confronts normative body politics with uncompromising counter-narratives, where bodies are in tension between control and agency, presence and absence, grace and monstrosity. He is best known for using sound, AI, biosensors, and robotics to turn the body into a site of resistance and transformation. He has presented his work in thirty-seven countries across Asia, Europe, North and South America and is the recipient of numerous accolades, most notably the German Federal Ministry of Research and Education’s Artist of the Science Year 2018, and the Prix Ars Electronica’s Award of Distinction in Sound Art 2017. Donnarumma received a ZER01NE Creator grant in 2024 and was named a pioneer of performing arts with advanced technologies by the major national newspaper Der Standard, Austria. His writings are published in Frontiers in Computer Science, Computer Music Journal and Performance Research, among others, and his newest book chapter, co-authored with Elizabeth Jochum, will appear in Robot Theaters by Routledge. Together with Margherita Pevere he runs the performance group Fronte Vacuo.


I wonder if Donnarumma’s “Monsters of Grace: bodies, sounds, and machines” received any inspiration from “Monsters of Grace” (Wikipedia entry) or if it’s just happenstance, Note: Links have been removed,

Monsters of Grace is a multimedia chamber opera in 13 short acts directed by Robert Wilson, with music by Philip Glass and libretto from the works of 13th-century Sufi mystic Jalaluddin Rumi. The title is said to be a reference to Wilson’s corruption of a line from Hamlet: “Angels and ministers of grace defend us!” (1.4.39).

So, the October 21, 2025 event is a talk at York University taking place before the “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence” (more below).

“Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference and arts festival at the University of Toronto

The conference (October 23 – 24, 2025) is concurrent with the arts festival (October 19 – 25, 2025) at the University of Toronto. Here’s more from the event homepage on the https://bmolab.artsci.utoronto.ca/ website, Note 1: BMO stands for Bank of Montreal, Note 2: No mention of Edward Albee and “Who’s Afraid of Virginia Woolf?”,

2025 marks an inflection point in our technological landscape, driven by seismic shifts in AI innovation.

Who’s Afraid of AI? Arts, Science, and the Futures of Intelligence is a week-long inquiry into the implications and future directions of AI for our creative and collective imaginings, and the many possible futures of intelligence. The complexity of this immediate future calls for interdisciplinary dialogue, bringing together artists, AI researchers, and humanities scholars.

In this volatile domain, the question of who envisions our futures is vital. Artists explore with complexity and humanity, while the humanities reveal the histories of intelligence and the often-overlooked ways knowledge and decision-making have been shaped. By placing these voices in dialogue with AI researchers and technologists, Who’s Afraid of AI? examines the social dimensions of technology, questions tech solutionism from a social-impact perspective, and challenges profit-driven AI with innovation guided by public values.

The two-day conference at the University of Toronto’s University College anchors the week and features panels and debates with leading figures in these disciplines, including a keynote by 2024 Nobel Laureate in Physics Geoffrey Hinton, the “Godfather of AI,” and 2025 Neil Graham Lecturer in Science, Fei-Fei Li, an AI pioneer.

Throughout the week, the conversation continues across the city with:

  • AI-themed and AI powered art shows and exhibitions
  • Film screenings
  • Innovative theatre
  • Experimental music

Who’s Afraid of AI? demonstrates that Toronto has not only shaped the history of AI but continues to prepare its future. Step into this changing landscape and be part of this transformative dialogue — register today!

Organizing Committee:

Pia Kleber, Professor Emerita, Comparative Literature and Drama, U of T
Dirk Bernhardt-Walther, Department of Psychology, Program Director, Cognitive Science, U of T
David Rokeby, Director, BMO Lab, Centre for Drama, Theatre and Performance Studies, U of T
Rayyan Dabbous, PhD candidate, Centre for Comparative Literature, U of T

This looks like a pretty interesting programme (if you’re mainly focused on AI and the creative arts), from the event homepage on the https://bmolab.artsci.utoronto.ca/ website, Note 1: All times are ET, Note 2: I have not included speakers’ photos,

The conference will explore core questions about AI such as its capabilities, possibilities and challenges, bringing their unique research, creative practice, scholarship and experience to the discussion. Speakers will also engage in an interdisciplinary conversation on topics including AI’s implications for theories of mind and embodiment, its influence on creation, innovation, and discovery, its recognition of diverse perspectives, and its transformation of artistic, cultural, political and everyday practices.

Thursday, October 23, 2025

Mind the World

9 AM | Clark Reading Room, University College – 15 King’s College Circle

What are the merits and limits of artificial intelligence within the larger debate on embodiment? This session brings together an artist who has given AI a physical dimension, a neuroscientist who reckons with the biological neural networks inspiring AI, and a humanist knowledgeable of the longer history in which the human has tried to decouple itself from its bodily needs and wants.

Suzanne Kite
Director, The Wihanble S’a Center for Indigenous AI

James DiCarlo
Director, MIT Quest for Intelligence

N. Katherine Hayles
James B. Duke Distinguished Professor Emerita of Literature

Staging AI

11 AM | Clark Reading Room, University College – 15 King’s College Circle

How is AI changing the arts? To answer this question, we bring together theatre directors and artists who have made AI the main driving plot of their stories and those who opted to keep technology secondary in their productions.

Kay Voges
Artistic Director, Schauspiel Köln

Roland Schimmelpfennig
Playwright and Director, Berlin

Hito Steyerl
Artist, Filmmaker and Writer, Berlin

Recognizing ‘Noise’

2 PM | Clark Reading Room, University College – 15 King’s College Circle

How can we design a more inclusive AI? This session brings together an artist who has worked with AI and has been sensitive to groups who may be excluded by its practice, an inclusive design scholar who has grappled with AI’s potential for personalized accessibility, and a humanist who understands the longer history on pattern and recognition from which emerged AI.

Marco Donnarumma
Artist, Inventor, Theorist, Berlin

Jutta Treviranus
Director, OCADU [Ontario College of Art & Design University],
Inclusive Design Research Centre

Eryk Salvaggio
Media Artist and Tech Policy Press Fellow, Rochester

Art, Design, and Application are the Solution to AI’s Charlie Chaplin Problem

4 PM | Hart House Theatre – 7 Hart House Circle

Daniel Wigdor
CoFounder and Chief Executive Officer, AXL

Keynote and Neil Graham Lecture in Science

4:15 PM | Hart House Theatre – 7 Hart House Circle

Fei-Fei Li
Sequoia Professor in Computer Science, Stanford Institute for Human-Centered AI

Geoffrey Hinton
2024 Nobel Laureate in Physics, Professor Emeritus in Computer Science

Friday, October 24, 2025

Life with AI

9 AM | Clark Reading Room, University College – 15 King’s College Circle

How do machine minds relate to human minds? What can we learn from one about the other? In this session we interrogate the impact of AI on our understanding of human knowledge and tool-making, from the perspectives of philosophy, computer science, and the arts.

Jeanette Winterson
Author, Fellow of the Royal Society of Literature, Great Britain

Leif Weatherby
Professor of German and Director of the Digital Theory Lab, New York University

Jennifer Nagel
Professor, Philosophy, University of Toronto Mississauga

Discovery & In/Sight

11 AM | Clark Reading Room, University College – 15 King’s College Circle

This session explores creative practice through the lens of innovation and cultural/scientific advancement. An artist who creates with critical inspiration from AI joins forces with an innovation scholar who investigates the effects of AI on our decision-making and a philosopher of science who understands scientific discovery and inference, as well as their limits.

Vladan Joler
Visual Artist and Professor of New Media, University of Novi Sad [Serbia]

Alán Aspuru-Guzik
Professor of Chemistry and Computer Science, University of Toronto

Brian Baigrie
Professor, Institute for the History and Philosophy of Science & Technology, University of Toronto

Social History & Possible Futures

2 PM | Clark Reading Room, University College – 15 King’s College Circle

How do AI ownership and its private uses coexist within a framework of public good? This session brings together an artist who has created AI tools to be used by others, an AI ethics researcher who has turned algorithmic bias into collective insight, and a philosopher who understands the connection between AI and the longer history of automation and work from which it emerged.

Memo Akten
Artist working with Code, Data and AI, UC San Diego

Beth Coleman
Professor, Institute of Communication, Culture, Information and Technology, University of Toronto

Matteo Pasquinelli
Professor, Philosophy and Cultural Heritage, Università Ca’ Foscari Venezia [Italy]

A Theory of Latent Spaces | Conclusion: Where do we go from here?

4 PM | Clark Reading Room, University College – 15 King’s College Circle

Antonio Somaini, curator of the remarkable ‘World through AI’ exhibition at the Musée du Jeu de Paume in Paris, will discuss the way in which ‘latent spaces’, a core characteristic of current AI models, act as “meta-archives” that profoundly shape our relation with the past.

Following this, we will engage in a larger discussion amongst the various conference speakers and attendees on how we can, as artists, humanities scholars, scientists and the general public, collectively imagine and cultivate a future where AI serves the public good and enhances our individual and collective lives.”

Antonio Somaini
Curator and Professor, Sorbonne Nouvelle [Université Sorbonne Nouvelle]

You can register here for this free conference, although there’s now a waitlist for in-person attendance. Do not despair, there’s access by Zoom,

In case you can’t make it in person, join us by Zoom:

Link: https://utoronto.zoom.us/j/82603012955

Webinar ID: 826 0301 2955

Passcode: 512183

I have not forgotten the festival. From the event homepage on the https://bmolab.artsci.utoronto.ca/ website,

Events Also Happening

October 22 | 2 PM | Student Forum and AI Commentary Contest Award | Paul Cadario Conference Room, University College – 15 King’s College Circle

October 22 | 8 – 10 PM | Marco Donnarumma, world première of a new performance installation | Paul Cadario Conference Room, University College – 15 King’s College Circle

October 23 | 2 PM | Jeanette Winterson: Arts & AI Talk | Paul Cadario Conference Room, University College – 15 King’s College Circle

October 24 | 7 PM | The Kiss by Roland Schimmelpfennig | The BMO Lab, University College – 15 King’s College Circle (Note: we are scheduling more performances. Check back for more info soon!)

October 25 | 8 PM | AI Cabaret featuring Jason Sherman, Rick Miller, Cole Lewis, BMO Lab projects and more | Crow’s Theatre, Nada Ristich Studio-Gallery – 345 Carlaw Avenue.

Get tickets for the AI Cabaret

(Use promo code AICAB for 100% discount)

Enjoy!

Call for submissions for two Electronic Literature Organization (ELO) prizes

Nothing is more heartbreaking than to be late for a submission, so here’s the deadline for the Electronic Literature prizes: May 10, 2014. The Electronic Literature Organization gives more details on its call for prize submissions webpage,

The ELO is proud to announce “The N. Katherine Hayles Award for Criticism of Electronic Literature” and “The Robert Coover Award for a Work of Electronic Literature.” Below is information, including guidelines for submissions, for each.

“The N. Katherine Hayles Award for Criticism of Electronic Literature”

“The N. Katherine Hayles Award for Criticism of Electronic Literature” is an award given for the best work of criticism, of any length, on the topic of electronic literature. Bestowed by the Electronic Literature Organization and funded through a generous donation from N. Katherine Hayles and others, this $1000 annual prize aims to recognize excellence in the field. The prize comes with a plaque showing the name of the winner and an acknowledgement of the achievement, and a one-year membership in the Electronic Literature Organization at the Associate Level.

We invite critical works of any length. Submissions must follow these guidelines:

1. This is an open submission. Self-nominations and nominations by others are both welcome. Membership in the Electronic Literature Organization is not required.
2. There is no cost involved in nominations. This is a free and open award aimed at rewarding excellence.
3. ELO Board Members serving their term of office on the Board are ineligible for nomination for the award. Members of the Jury are also not allowed to be nominated for the award.
4. Three finalists for the award will be selected by a jury of specialists in electronic literature; N. Katherine Hayles will choose the winner from among the finalists.
5. Because of the nature of online publishing, it is not possible to conduct a blind review of the submissions; the jury will be responsible for fair assessment of the work.
6. Those nominated may only have one work considered for the prize. In the event that several works are identified for a nominee, the nominee will choose the work that he or she wishes to be juried.
7. All works must have been published or made available to the public within the previous 18 months, that is, no earlier than December 2012.
8. All print articles must be submitted in .pdf format. Books can be sent either in .pdf format or in print format. Online articles should be submitted as a link to an online site.
9. Nominations by self or others must include a 250-word explanation of the work’s impact in the field. The winner selected for the prize must also include a professional bio and a headshot or avatar.
10. All digital materials should be emailed to elo.hayles.award@gmail.com by May 15, 2014; three copies of the book should be mailed to Dr. Dene Grigar, Creative Media & Digital Culture, Washington State University Vancouver, 14204 NE Salmon Creek Ave., Vancouver, WA 98686 by May 15, 2014. [emphasis mine] Those making the nomination or the nominees themselves are responsible for mailing materials for jurying. Print materials will be returned via a self-addressed mailer.
11. Nominees and the winner retain all rights to their works. If copyright allows, ELO will be given permission to share the work or portions of it on the award webpage. Journals and presses that have published the winning work will be acknowledged on the award webpage.
12. The winner is not expected to attend the ELO conference banquet. The award will be mailed to the winner.

Timeline
Call for Nominations: April 15-May 10
Jury Deliberations: May 15-June 10
Award Announcement: ELO Conference Banquet

For more information, contact Dr. Dene Grigar, President, Electronic Literature Organization.

“The Robert Coover Award for a Work of Electronic Literature”

“The Robert Coover Award for a Work of Electronic Literature” is an award given for the best work of electronic literature of any length or genre. Bestowed by the Electronic Literature Organization and funded through a generous donation from supporters and members of the ELO, this $1000 annual prize aims to recognize creative excellence. The prize comes with a plaque showing the name of the winner and an acknowledgement of the achievement, and a one-year membership in the Electronic Literature Organization at the Associate Level.

We invite creative works of any length and genre. Submissions must follow these guidelines:

1. This is an open submission. Self-nominations and nominations by others are both welcome. Membership in the Electronic Literature Organization is not required.
2. There is no cost involved in nominations. This is a free and open award aimed at rewarding excellence.
3. ELO Board Members serving their term of office on the Board are ineligible for nomination for the award. Members of the Jury are also not allowed to be nominated for the award.
4. Three finalists for the award will be selected by a jury of specialists in electronic literature; Robert Coover or his representative will choose the winner from among the finalists.
5. Because of the nature of online publishing, it is not possible to conduct a blind review of the submissions; the jury will be responsible for fair assessment of the work.
6. Those nominated may only have one work considered for the prize. In the event that several works are identified for a nominee, the nominee will choose the work that he or she wishes to be juried.
7. All works must have been published or made available to the public within the previous 18 months, that is, no earlier than December 2012.
8. Works should be submitted either as a link to an online site or in the case of non-web work, available via Dropbox or sent as a CD/DVD or flash drive.
9. Nominations by self or others must include a 250-word explanation of the work’s impact in the field. The winner selected for the prize must also include a professional bio and a headshot or avatar.
10. Links to the digital materials or to Dropbox should be emailed to elo.coover.award@gmail.com by May 15, 2014; three copies of the CD/DVDs and flash drives should be mailed to Dr. Dene Grigar, Creative Media & Digital Culture, Washington State University Vancouver, 14204 NE Salmon Creek Ave., Vancouver, WA 98686 by May 15, 2014. [emphasis mine] Those making the nomination or the nominees themselves are responsible for mailing materials for jurying. Physical materials will be returned via a self-addressed mailer.
11. Nominees and the winner retain all rights to their works. If copyright allows, ELO will be given permission to share the work or portions of it on the award webpage. Journals and presses that have published the winning work will be acknowledged on the award webpage.
12. The winner is not expected to attend the ELO conference banquet. The award will be mailed to the winner.

Timeline
Call for Nominations: April 19-May 10
Jury Deliberations: May 15-June 10
Award Announcement: ELO Conference Banquet

For more information, contact Dr. Dene Grigar, President, Electronic Literature Organization.

Good luck and please note that the mailing address in the submission guidelines is for Vancouver, Washington, US, not Vancouver, Canada. Finally, thank you to Christine Wilks of crissxross for the heads-up via LinkedIn.