Neural (brain) implants and hype (long read)

There was a big splash a few weeks ago when it was announced that Neuralink (an Elon Musk company) had surgically implanted its brain device in its first human patient.

Getting approval

David Tuffley, senior lecturer in Applied Ethics & CyberSecurity at Griffith University (Australia), provides a good overview of the road Neuralink took to getting FDA (US Food and Drug Administration) approval for human clinical trials in his May 29, 2023 essay for The Conversation, Note: Links have been removed,

Since its founding in 2016, Elon Musk’s neurotechnology company Neuralink has had the ambitious mission to build a next-generation brain implant with at least 100 times more brain connections than devices currently approved by the US Food and Drug Administration (FDA).

The company has now reached a significant milestone, having received FDA approval to begin human trials. So what were the issues keeping the technology in the pre-clinical trial phase for as long as it was? And have these concerns been addressed?

Neuralink is making a Class III medical device known as a brain-computer interface (BCI). The device connects the brain to an external computer via a Bluetooth signal, enabling continuous communication back and forth.

The device itself is a coin-sized unit called a Link. It’s implanted within a small disk-shaped cutout in the skull using a precision surgical robot. The robot splices a thousand tiny threads from the Link to certain neurons in the brain. [emphasis mine] Each thread is about a quarter the diameter of a human hair.

The company says the device could enable precise control of prosthetic limbs, giving amputees natural motor skills. It could revolutionise treatment for conditions such as Parkinson’s disease, epilepsy and spinal cord injuries. It also shows some promise for potential treatment of obesity, autism, depression, schizophrenia and tinnitus.

Several other neurotechnology companies and researchers have already developed BCI technologies that have helped people with limited mobility regain movement and complete daily tasks.

In February 2021, Musk said Neuralink was working with the FDA to secure permission to start initial human trials later that year. But human trials didn’t commence in 2021.

Then, in March 2022, Neuralink made a further application to the FDA to establish its readiness to begin human trials.

One year and three months later, on May 25 2023, Neuralink finally received FDA approval for its first human clinical trial. Given how hard Neuralink has pushed for permission to begin, we can assume it will begin very soon. [emphasis mine]

The approval has come less than six months after the US Office of the Inspector General launched an investigation into Neuralink over potential animal welfare violations. [emphasis mine]

In accessible language, Tuffley goes on to discuss the FDA’s specific technical issues with implants and how they were addressed in his May 29, 2023 essay.

More about how Neuralink’s implant works and some concerns

Canadian Broadcasting Corporation (CBC) journalist Andrew Chang offers an almost 13-minute video, “Neuralink brain chip’s first human patient. How does it work?” Chang is a little overenthused for my taste, but he provides some good information about neural implants, along with informative graphics in his presentation.

So, as you can guess from the title of Chang’s CBC video, Tuffley was right about Neuralink getting ready quickly for human clinical trials.

Jennifer Korn announced that recruitment had started in her September 20, 2023 article for CNN (Cable News Network), Note: Links have been removed,

Elon Musk’s controversial biotechnology startup Neuralink opened up recruitment for its first human clinical trial Tuesday, according to a company blog.

After receiving approval from an independent review board, Neuralink is set to begin offering brain implants to paralysis patients as part of the PRIME Study, the company said. PRIME, short for Precise Robotically Implanted Brain-Computer Interface, is being carried out to evaluate both the safety and functionality of the implant.

Trial patients will have a chip surgically placed in the part of the brain that controls the intention to move. The chip, installed by a robot, will then record and send brain signals to an app, with the initial goal being “to grant people the ability to control a computer cursor or keyboard using their thoughts alone,” the company wrote.

Those with quadriplegia [sometimes known as tetraplegia] due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS) may qualify for the six-year-long study – 18 months of at-home and clinic visits followed by follow-up visits over five years. Interested people can sign up in the patient registry on Neuralink’s website.

Musk has been working on Neuralink’s goal of using implants to connect the human brain to a computer for five years, but the company so far has only tested on animals. The company also faced scrutiny after a monkey died in project testing in 2022 as part of efforts to get the animal to play Pong, one of the first video games.
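
For a sense of what “control a computer cursor or keyboard using their thoughts alone” involves on the software side, here’s a minimal sketch of the kind of linear decoder long used in published BCI research (BrainGate-style studies). It is illustrative only: the channel count is arbitrary, random numbers stand in for real recordings, and nothing here should be read as Neuralink’s actual algorithm,

    import numpy as np

    # Illustrative linear decoder: map binned spike counts to cursor velocity.
    # This mirrors decoders in the published BCI literature, not Neuralink's
    # (unpublished) pipeline. All data below are synthetic stand-ins.

    rng = np.random.default_rng(0)

    n_channels = 64    # hypothetical number of recording channels
    n_bins = 5000      # calibration time bins

    # Calibration data: spike counts per channel per bin, and the cursor
    # velocity (vx, vy) the participant intended during each bin.
    spike_counts = rng.poisson(lam=3.0, size=(n_bins, n_channels))
    intended_velocity = rng.normal(size=(n_bins, 2))

    # Fit a least-squares linear map from neural activity to velocity.
    weights, *_ = np.linalg.lstsq(spike_counts, intended_velocity, rcond=None)

    def decode_cursor_velocity(counts):
        """Turn one bin of spike counts into a 2D cursor velocity."""
        return counts @ weights

    print(decode_cursor_velocity(spike_counts[0]))

In a real system, the calibration data come from recordings made while the participant imagines moving; the fitted map then runs continuously, nudging the cursor bin by bin.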

I mentioned three Reuters investigative journalists who were reporting on Neuralink’s animal abuse allegations (emphasized in Tuffley’s essay) in a July 7, 2023 posting, “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” Later that year, Neuralink was cleared by the US Department of Agriculture (see the September 24, 2023 article by Mahnoor Jehangir for BNN Breaking).

Plus, Neuralink was being investigated over further allegations, this time regarding hazardous pathogens, according to a February 9, 2023 article by Rachel Levy for Reuters,

The U.S. Department of Transportation said on Thursday it is investigating Elon Musk’s brain-implant company Neuralink over the potentially illegal movement of hazardous pathogens.

A Department of Transportation spokesperson told Reuters about the probe after the Physicians Committee of Responsible Medicine (PCRM), an animal-welfare advocacy group, wrote to Secretary of Transportation Pete Buttigieg earlier on Thursday to alert it of records it obtained on the matter.

PCRM said it obtained emails and other documents that suggest unsafe packaging and movement of implants removed from the brains of monkeys. These implants may have carried infectious diseases in violation of federal law, PCRM said.

There’s an update about the hazardous materials in the next section. Spoiler alert: the company got fined.

Neuralink’s first human implant

A January 30, 2024 article (Associated Press with files from Reuters) on the Canadian Broadcasting Corporation’s (CBC) online news webspace heralded the latest about Neuralink’s human clinical trials,

The first human patient received an implant from Elon Musk’s computer-brain interface company Neuralink over the weekend, the billionaire says.

In a post Monday [January 29, 2024] on X, the platform formerly known as Twitter, Musk said that the patient received the implant the day prior and was “recovering well.” He added that “initial results show promising neuron spike detection.”

Spikes are activity by neurons, which the National Institutes of Health describe as cells that use electrical and chemical signals to send information around the brain and to the body.

The billionaire, who owns X and co-founded Neuralink, did not provide additional details about the patient.

When Neuralink announced in September [2023] that it would begin recruiting people, the company said it was searching for individuals with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis, commonly known as ALS or Lou Gehrig’s disease.

Neuralink reposted Musk’s Monday [January 29, 2024] post on X, but did not publish any additional statements acknowledging the human implant. The company did not immediately respond to requests for comment from The Associated Press or Reuters on Tuesday [January 30, 2024].

In a separate Monday [January 29, 2024] post on X, Musk said that the first Neuralink product is called “Telepathy” — which, he said, will enable users to control their phones or computers “just by thinking.” He said initial users would be those who have lost use of their limbs.

The startup’s PRIME Study is a trial for its wireless brain-computer interface to evaluate the safety of the implant and surgical robot.
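
Since Musk’s post highlights “neuron spike detection,” it may help to see what that usually means in practice. Neuralink has not published its signal-processing pipeline, but the textbook approach is threshold-crossing detection on a voltage trace: estimate the noise level, set a threshold at a few multiples of it, and count downward crossings as spikes. Here’s a minimal sketch using synthetic data,

    import numpy as np

    # Illustrative threshold-crossing spike detection, a standard technique
    # in electrophysiology. Synthetic data; not Neuralink's actual method.

    rng = np.random.default_rng(1)
    fs = 20_000                          # assumed sampling rate in Hz
    voltage = rng.normal(0, 5e-6, fs)    # one second of noise, in volts

    # Inject a few fake spikes (brief negative deflections) for the demo.
    for t in (2000, 9000, 15000):
        voltage[t:t + 20] -= 40e-6

    # Robust noise estimate, then a threshold at ~4.5x the noise s.d.
    noise_sd = np.median(np.abs(voltage)) / 0.6745
    threshold = -4.5 * noise_sd

    # A spike is counted wherever the trace first drops below threshold.
    below = voltage < threshold
    crossings = np.flatnonzero(below[1:] & ~below[:-1]) + 1
    print(f"Detected {len(crossings)} spikes at samples {crossings}")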

Now for the hazardous materials, from the same January 30, 2024 article, Note: A link has been removed,

Earlier this month [January 2024], a Reuters investigation found that Neuralink was fined for violating U.S. Department of Transportation (DOT) rules regarding the movement of hazardous materials. During inspections of the company’s facilities in Texas and California in February 2023, DOT investigators found the company had failed to register itself as a transporter of hazardous material.

They also found improper packaging of hazardous waste, including the flammable liquid Xylene. Xylene can cause headaches, dizziness, confusion, loss of muscle co-ordination and even death, according to the U.S. Centers for Disease Control and Prevention.

The records do not say why Neuralink would need to transport hazardous materials or whether any harm resulted from the violations.

Skeptical thoughts about Elon Musk and Neuralink

Earlier this month (February 2024), the British Broadcasting Corporation (BBC) published an article by health reporters, Jim Reed and Joe McFadden, that highlights the history of brain implants, the possibilities, and notes some of Elon Musk’s more outrageous claims for Neuralink’s brain implants,

Elon Musk is no stranger to bold claims – from his plans to colonise Mars to his dreams of building transport links underneath our biggest cities. This week the world’s richest man said his Neuralink division had successfully implanted its first wireless brain chip into a human.

Is he right when he says this technology could – in the long term – save the human race itself?

Sticking electrodes into brain tissue is really nothing new.

In the 1960s and 70s electrical stimulation was used to trigger or suppress aggressive behaviour in cats. By the early 2000s monkeys were being trained to move a cursor around a computer screen using just their thoughts.

“It’s nothing novel, but implantable technology takes a long time to mature, and reach a stage where companies have all the pieces of the puzzle, and can really start to put them together,” says Anne Vanhoestenberghe, professor of active implantable medical devices, at King’s College London.

Neuralink is one of a growing number of companies and university departments attempting to refine and ultimately commercialise this technology. The focus, at least to start with, is on paralysis and the treatment of complex neurological conditions.

Reed and McFadden’s February 2024 BBC article describes a few of the other brain implant efforts, Note: Links have been removed,

One of its [Neuralink’s] main rivals, a start-up called Synchron backed by funding from investment firms controlled by Bill Gates and Jeff Bezos, has already implanted its stent-like device into 10 patients.

Back in December 2021, Philip O’Keefe, a 62-year-old Australian who lives with a form of motor neurone disease, composed the first tweet using just his thoughts to control a cursor.

And researchers at Lausanne University in Switzerland have shown it is possible for a paralysed man to walk again by implanting multiple devices to bypass damage caused by a cycling accident.

In a research paper published this year, they demonstrated a signal could be beamed down from a device in his brain to a second device implanted at the base of his spine, which could then trigger his limbs to move.

Some people living with spinal injuries are sceptical about the sudden interest in this new kind of technology.

“These breakthroughs get announced time and time again and don’t seem to be getting any further along,” says Glyn Hayes, who was paralysed in a motorbike accident in 2017, and now runs public affairs for the Spinal Injuries Association.

“If I could have anything back, it wouldn’t be the ability to walk. It would be putting more money into a way of removing nerve pain, for example, or ways to improve bowel, bladder and sexual function.” [emphasis mine]

Musk, however, is focused on something far more grand for Neuralink implants, from Reed and McFadden’s February 2024 BBC article, Note: A link has been removed,

But for Elon Musk, “solving” brain and spinal injuries is just the first step for Neuralink.

The longer-term goal is “human/AI symbiosis” [emphasis mine], something he describes as “species-level important”.

Musk himself has already talked about a future where his device could allow people to communicate with a phone or computer “faster than a speed typist or auctioneer”.

In the past, he has even said saving and replaying memories may be possible, although he recognised “this is sounding increasingly like a Black Mirror episode.”

One of the experts quoted in Reed and McFadden’s February 2024 BBC article asks a pointed question,

… “At the moment, I’m struggling to see an application that a consumer would benefit from, where they would take the risk of invasive surgery,” says Prof Vanhoestenberghe.

“You’ve got to ask yourself, would you risk brain surgery just to be able to order a pizza on your phone?”

Rae Hodge’s February 11, 2024 article about Elon Musk and his hyped-up Neuralink implant for Salon is worth reading in its entirety, but for those who don’t have the time or need a little persuading, here are a few excerpts, Note 1: This is a warning; Hodge provides more detail about the animal cruelty allegations; Note 2: Links have been removed,

Elon Musk’s controversial brain-computer interface (BCI) tech, Neuralink, has supposedly been implanted in its first recipient — and as much as I want to see progress for treatment of paralysis and neurodegenerative disease, I’m not celebrating. I bet the neuroscientists he reportedly drove out of the company aren’t either, especially not after seeing the gruesome torture of test monkeys and apparent cover-up that paved the way for this moment. 

All of which is an ethics horror show on its own. But the timing of Musk’s overhyped implant announcement gives it an additional insulting subtext. Football players are currently in a battle for their lives against concussion-based brain diseases that plague autopsy reports of former NFL players. And Musk’s boast of false hope came just two weeks before living players take the field in the biggest and most brutal game of the year. [2024 Super Bowl LVIII]

ESPN’s Kevin Seifert reports neuro-damage is up this year as “players suffered a total of 52 concussions from the start of training camp to the beginning of the regular season. The combined total of 213 preseason and regular season concussions was 14% higher than 2021 but within range of the three-year average from 2018 to 2020 (203).”

I’m a big fan of body-tech: pacemakers, 3D-printed hips and prosthetic limbs that allow you to wear your wedding ring again after 17 years. Same for brain chips. But BCI is the slow-moving front of body-tech development for good reason. The brain is too understudied. Consequences of the wrong move are dire. Overpromising marketable results on profit-driven timelines — on the backs of such a small community of researchers in a relatively new field — would be either idiotic or fiendish. 

Brown University’s research in the sector goes back to the 1990s. Since the emergence of a floodgate-opening 2002 study and the first implant in 2004 by med-tech company BrainGate, more promising results have inspired broader investment into careful research. But BrainGate’s clinical trials started back in 2009, and as noted by Business Insider’s Hilary Brueck, are expected to continue until 2038 — with only 15 participants who have devices installed. 

Anne Vanhoestenberghe is a professor of active implantable medical devices at King’s College London. In a recent release, she cautioned against the kind of hype peddled by Musk.

“Whilst there are a few other companies already using their devices in humans and the neuroscience community have made remarkable achievements with those devices, the potential benefits are still significantly limited by technology,” she said. “Developing and validating core technology for long term use in humans takes time and we need more investments to ensure we do the work that will underpin the next generation of BCIs.” 

Neuralink is a metal coin in your head that connects to something as flimsy as an app. And we’ve seen how Elon treats those. We’ve also seen corporate goons steal a veteran’s prosthetic legs — and companies turn brain surgeons and dentists into repo-men by having them yank anti-epilepsy chips out of people’s skulls, and dentures out of their mouths. 

“I think we have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury,” Musk said at a 2023 tech summit, adding that the chip could possibly “make up for whatever lost capacity somebody has.”

Maybe BCI can. But only in the careful hands of scientists who don’t have Musk squawking “go faster!” over their shoulders. His greedy frustration with the speed of BCI science is telling, as is the animal cruelty it reportedly prompted.

There have been other examples of Musk’s grandiosity. Notably, David Lee expressed skepticism about hyperloop in his August 13, 2013 article for BBC news online,

Is Elon Musk’s Hyperloop just a pipe dream?

Much like the pun in the headline, the bright idea of transporting people using some kind of vacuum-like tube is neither new nor imaginative.

There was Robert Goddard, considered the “father of modern rocket propulsion”, who claimed in 1909 that his vacuum system could suck passengers from Boston to New York at 1,200mph.

And then there were Soviet plans for an amphibious monorail – mooted in 1934 – in which two long pods would start their journey attached to a metal track before flying off the end and slipping into the water like a two-fingered Kit Kat dropped into some tea.

So ever since inventor and entrepreneur Elon Musk hit the world’s media with his plans for the Hyperloop, a healthy dose of scepticism has been in the air.

“This is by no means a new idea,” says Rod Muttram, formerly of Bombardier Transportation and Railtrack.

“It has been previously suggested as a possible transatlantic transport system. The only novel feature I see is the proposal to put the tubes above existing roads.”

Here’s the latest I’ve found on hyperloop, from the Hyperloop Wikipedia entry,

As of 2024, some companies continued to pursue technology development under the hyperloop moniker, however, one of the biggest, well funded players, Hyperloop One, declared bankruptcy and ceased operations in 2023.[15]

Musk is impatient and impulsive as noted in a September 12, 2023 posting by Mike Masnick on Techdirt, Note: A link has been removed,

The Batshit Crazy Story Of The Day Elon Musk Decided To Personally Rip Servers Out Of A Sacramento Data Center

Back on Christmas Eve [December 24, 2022] of last year there were some reports that Elon Musk was in the process of shutting down Twitter’s Sacramento data center. In that article, a number of ex-Twitter employees were quoted about how much work it would be to do that cleanly, noting that there’s a ton of stuff hardcoded in Twitter code referring to that data center (hold that thought).

That same day, Elon tweeted out that he had “disconnected one of the more sensitive server racks.”

Masnick follows with a story of reckless behaviour from someone who should have known better.

Ethics of implants—where to look for more information

While Musk doesn’t use the term when he describes a “human/AI symbiosis” (presumably by way of a neural implant), he’s talking about a cyborg. Here’s a 2018 paper, which looks at some of the implications,

Do you want to be a cyborg? The moderating effect of ethics on neural implant acceptance by Eva Reinares-Lara, Cristina Olarte-Pascual, and Jorge Pelegrín-Borondo. Computers in Human Behavior, Volume 85, August 2018, Pages 43-53. DOI: https://doi.org/10.1016/j.chb.2018.03.032

This paper is open access.

Getting back to Neuralink, I have two blog posts that discuss the company and the ethics of brain implants from way back in 2021.

First, there’s Jazzy Benes’ March 1, 2021 posting on the Santa Clara University’s Markkula Center for Applied Ethics blog. It stands out as it includes a discussion of the disabled community’s issues, Note: Links have been removed,

In the heart of Silicon Valley we are constantly enticed by the newest technological advances. With the big influencers Grimes [a Canadian musician and the mother of three children with Elon Musk] and Lil Uzi Vert publicly announcing their willingness to become experimental subjects for Elon Musk’s Neuralink brain implantation device, we are left wondering if future technology will actually give us “the knowledge of the Gods.” Is it part of the natural order for humans to become omniscient beings? Who will have access to the devices? What other ethical considerations must be discussed before releasing such technology to the public?

A significant issue that arises from developing technologies for the disabled community is the assumption that disabled persons desire the abilities of what some abled individuals may define as “normal.” Individuals with disabilities may object to technologies intended to make them fit an able-bodied norm. “Normal” is relative to each individual, and it could be potentially harmful to use a deficit view of disability, which means judging a disability as a deficiency. However, this is not to say that all disabled individuals will reject a technology that may enhance their abilities. Instead, I believe it is a consideration that must be recognized when developing technologies for the disabled community, and it can only be addressed through communication with disabled persons. As a result, I believe this is a conversation that must be had with the community for whom the technology is developed–disabled persons.

With technologies that aim to address disabilities, we walk a fine line between therapeutics and enhancement. Though not the first neural implant medical device, the Link may have been the first BCI system openly discussed for its potential transhumanism uses, such as “enhanced cognitive abilities, memory storage and retrieval, gaming, telepathy, and even symbiosis with machines.” …

Benes also discusses transhumanism, privacy issues, and consent issues. It’s a thoughtful reading experience.

Second is a July 9, 2021 posting by anonymous on the University of California at Berkeley School of Information blog which provides more insight into privacy and other issues associated with data collection (and introduced me to the concept of decisional interference),

As the development of microchips furthers and advances in neuroscience occur, the possibility for seamless brain-machine interfaces, where a device decodes inputs from the user’s brain to perform functions, becomes more of a reality. These various forms of these technologies already exist. However, technological advances have made implantable and portable devices possible. Imagine a future where humans don’t need to talk to each other, but rather can transmit their thoughts directly to another person. This idea is the eventual goal of Elon Musk, the founder of Neuralink. Currently, Neuralink is one of the main companies involved in the advancement of this type of technology. Analysis of the Neuralink’s technology and their overall mission statement provide an interesting insight into the future of this type of human-computer interface and the potential privacy and ethical concerns with this technology.

As this technology further develops, several privacy and ethical concerns come into question. To begin, using Solove’s Taxonomy as a privacy framework, many areas of potential harm are revealed. In the realm of information collection, there is much risk. Brain-computer interfaces, depending on where they are implanted, could have access to people’s most private thoughts and emotions. This information would need to be transmitted to another device for processing. The collection of this information by companies such as advertisers would represent a major breach of privacy. Additionally, there is risk to the user from information processing. These devices must work concurrently with other devices and often wirelessly. Given the widespread importance of cloud computing in much of today’s technology, offloading information from these devices to the cloud would be likely. Having the data stored in a database puts the user at the risk of secondary use if proper privacy policies are not implemented. The trove of information stored within the information collected from the brain is vast. These datasets could be combined with existing databases such as browsing history on Google to provide third parties with unimaginable context on individuals. Lastly, there is risk for information dissemination, more specifically, exposure. The information collected and processed by these devices would need to be stored digitally. Keeping such private information, even if anonymized, would be a huge potential for harm, as the contents of the information may in itself be re-identifiable to a specific individual. Lastly there is risk for invasions such as decisional interference. Brain-machine interfaces would not only be able to read information in the brain but also write information. This would allow the device to make potential emotional changes in its users, which would be a major example of decisional interference. …

For the most recent Neuralink and brain implant ethics piece, there’s this February 14, 2024 essay on The Conversation, which, unusually for this publication, was solicited by the editors, Note: Links have been removed,

In January 2024, Musk announced that Neuralink implanted its first chip in a human subject’s brain. The Conversation reached out to two scholars at the University of Washington School of Medicine – Nancy Jecker, a bioethicist, and Andrew Ko, a neurosurgeon who implants brain chip devices – for their thoughts on the ethics of this new horizon in neuroscience.

Information about the implant, however, is scarce, aside from a brochure aimed at recruiting trial subjects. Neuralink did not register at ClinicalTrials.gov, as is customary, and required by some academic journals. [all emphases mine]

Some scientists are troubled by this lack of transparency. Sharing information about clinical trials is important because it helps other investigators learn about areas related to their research and can improve patient care. Academic journals can also be biased toward positive results, preventing researchers from learning from unsuccessful experiments.

Fellows at the Hastings Center, a bioethics think tank, have warned that Musk’s brand of “science by press release, while increasingly common, is not science. [emphases mine]” They advise against relying on someone with a huge financial stake in a research outcome to function as the sole source of information.

When scientific research is funded by government agencies or philanthropic groups, its aim is to promote the public good. Neuralink, on the other hand, embodies a private equity model [emphasis mine], which is becoming more common in science. Firms pooling funds from private investors to back science breakthroughs may strive to do good, but they also strive to maximize profits, which can conflict with patients’ best interests.

In 2022, the U.S. Department of Agriculture investigated animal cruelty at Neuralink, according to a Reuters report, after employees accused the company of rushing tests and botching procedures on test animals in a race for results. The agency’s inspection found no breaches, according to a letter from the USDA secretary to lawmakers, which Reuters reviewed. However, the secretary did note an “adverse surgical event” in 2019 that Neuralink had self-reported.

In a separate incident also reported by Reuters, the Department of Transportation fined Neuralink for violating rules about transporting hazardous materials, including a flammable liquid.

…the possibility that the device could be increasingly shown to be helpful for people with disabilities, but become unavailable due to loss of research funding. For patients whose access to a device is tied to a research study, the prospect of losing access after the study ends can be devastating. [emphasis mine] This raises thorny questions about whether it is ever ethical to provide early access to breakthrough medical interventions prior to their receiving full FDA approval.

Not registering a clinical trial would seem to suggest there won’t be much oversight. As for Musk’s “science by press release” activities, I hope those will be treated with more skepticism by mainstream media although that seems unlikely given the current situation with journalism (more about that in a future post).

As for the issues associated with private equity models for science research and the problem of losing access to devices after a clinical trial is ended, my April 5, 2022 posting, “Going blind when your neural implant company flirts with bankruptcy (long read)” offers some cautionary tales, in addition to being the most comprehensive piece I’ve published on ethics and brain implants.

My July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” offers a brief overview of the international scene.

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s). *What follows the ‘non-human authors’ section is essentially a survey of the situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s precursor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society and founder of the ASU Future of Being Human initiative and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.
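
For anyone curious what “predict what comes next, over and over, based purely on statistics” looks like at its absolute simplest, here’s a toy word-bigram sketch. Real LLMs do the same next-word job over tokens with billions of learned parameters instead of a lookup table of counts, which is precisely why their emergent abilities are so surprising,

    import random
    from collections import Counter, defaultdict

    # Toy bigram "language model": predict the next word from counts alone.

    corpus = "the cat sat on the mat and the cat ran"
    words = corpus.split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def generate(start, length=8, seed=0):
        random.seed(seed)
        out = [start]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break  # no observed continuation
            choices, counts = zip(*options.items())
            out.append(random.choices(choices, weights=counts)[0])
        return " ".join(out)

    print(generate("the"))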

If you have time, I recommend reading Ornes’s March 6, 2023 article.

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.
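
For readers who want the “analyzing data” part made concrete, the core mechanism can be boiled down to a few lines: adjust weights, a little at a time, to shrink the gap between predictions and examples. This sketch trains a single linear neuron by gradient descent; it is far simpler than the deep, many-layered networks Hinton pioneered, but the learning rule is the same in spirit,

    import numpy as np

    # A single linear neuron learning from data by gradient descent.

    rng = np.random.default_rng(42)

    # Toy data: targets follow y = 2*x1 - 3*x2 (unknown to the model).
    X = rng.normal(size=(200, 2))
    y = 2 * X[:, 0] - 3 * X[:, 1]

    w = np.zeros(2)   # learnable weights
    lr = 0.1          # learning rate

    for step in range(100):
        pred = X @ w                            # forward pass
        grad = 2 * X.T @ (pred - y) / len(y)    # gradient of mean squared error
        w -= lr * grad                          # learn from the data

    print(w)  # converges toward [2, -3]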

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10-minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”
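
As an aside, the market figures Pittis quotes are worth a quick back-of-the-envelope check: growing from $515.31 billion US in 2023 to more than $2 trillion by 2030 implies a compound annual growth rate of roughly 21 per cent,

    # Implied compound annual growth rate (CAGR) from the quoted figures.
    start, end, years = 515.31, 2000.0, 7    # $B in 2023, $B in 2030, elapsed years
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")       # about 21.4%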

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles. The first, published May 5, 2023, is “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts,” Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for the Study of Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture, “Digital versus biological intelligence: Reasons for concern about AI,” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge; see his Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s Borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI: while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist, Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer Susan Baxter, in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Clichés are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EVs have these days that needs replacing.)

Should you live in Vancouver (Canada) and be planning to attend a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499, depending on when you make your purchase. From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and the likes of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. This was far briefer than the previous March 2023 warning, from the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.

Canadian copyright quietly extended

As of December 30, 2022, Canadian copyright (one of the three elements of intellectual property; the other two: patents and trademarks) will be extended by another 20 years.

Mike Masnick in his November 29, 2022 posting on Techdirt explains why this is contrary to the intentions for establishing copyright in the first place, Note: Links have been removed,

… it cannot make sense to extend copyright terms retroactively. The entire point of copyright law is to provide a limited monopoly on making copies of the work as an incentive to get the work produced. Assuming the work was produced, that says that the bargain that was struck was clearly enough of an incentive for the creator. They were told they’d receive that period of exclusivity and thus they created the work.

Going back and retroactively extending copyright then serves no purpose. Creators need no incentive for works already created. The only thing it does is steal from the public. That’s because the “deal” setup by governments creating copyright terms is between the public (who is temporarily stripped of their right to share knowledge freely) and the creator. But if we extend copyright term retroactively, the public then has their end of the bargain (“you will be free to share these works freely after such-and-such a date”) changed, with no recourse or compensation.

Canada has quietly done it: extending copyrights on literary, dramatic or musical works and engravings from life of the author plus 50 years to life of the author plus 70 years. [emphasis mine]

Masnick pointed to a November 23, 2022 posting by Andrea on the Internet Archive Canada blog for how this will affect the Canadian public,

… we now know that this date has been fixed as December 30, 2022, meaning that no new works will enter the Canadian public domain for the next 20 years.

A whole generation of creative works will remain under copyright. This might seem like a win for the estates of popular, internationally known authors, but what about more obscure Canadian works and creators? With circulation over time often being the indicator of ‘value’, many 20th century works are being deselected from physical library collections. …

Edward A. McCourt (1907-1972) is an example of just one of these Canadian creators. Raised in Alberta and a graduate of the University of Alberta, Edward went on to be a Rhodes Scholar in 1932. In 1980, Winnifred Bogaards wrote that:

“[H]e recorded over a period of thirty years his particular vision of the prairies, the region of Canada which had irrevocably shaped his own life. In that time he published five novels and forty-three short stories set (with some exceptions among the earliest stories) in Western Canada, three juvenile works based on the Riel Rebellion, a travel book on Saskatchewan, several radio plays adapted from his western stories, The Canadian West in Fiction (the first critical study of the literature of the prairies), and a biography of the 19th century English soldier and adventurer, Sir William F. Butler… “

In Bogaards’ analysis of his work, “Edward McCourt: A Reassessment” published in the journal Studies in Canadian Literature, she notes that while McCourt has suffered in obscurity, he is often cited along with his contemporaries Hugh MacLennan, Robertson Davies and Irving Layton; Canadian literary stars. Incidentally, we will also wait an additional 20 years for their works to enter the public domain. The work of Rebecca Giblin, Jacob Flynn, and Francois Petitjean, looking at ‘What Happens When Books Enter the Public Domain?’ is relevant here. Their study shows concretely and empirically that extending copyright has no benefit to the public at all, and only benefits a very few wealthy, well known estates and companies. This term extension will not encourage the publishers of McCourt’s works to invest in making his writing available to a new generation of readers.
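To make the term-extension arithmetic concrete, here’s a minimal sketch (my own illustration, not from the Internet Archive posting; the function is invented, and it assumes the Canadian convention that terms run to the end of the calendar year of the author’s death):

# Illustrative only: Canadian copyright runs to the end of the calendar
# year of the author's death, plus the statutory term (now 70 years).
def public_domain_year(death_year, term=70):
    """Return the January 1 on which an author's works enter the public domain."""
    return death_year + term + 1

# Edward A. McCourt died in 1972.
print(public_domain_year(1972, term=50))  # 2023 under the old life-plus-50 rule
print(public_domain_year(1972, term=70))  # 2043 under the new life-plus-70 rule

In other words, McCourt’s writing was on the verge of entering the public domain on January 1, 2023; the extension pushes that date out to 2043.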

This 20 year extension can trace its roots to the trade agreement between the US, Mexico, and Canada (USMCA) that replaced the previous North American Free Trade Agreement (NAFTA), as of July 1, 2020. This is made clear in Michael Geist’s May 2, 2022 Law Bytes podcast where he discusses with Lucie Guibault the (then proposed) Canadian extension in the context of international standards,

Lucie Guibault is an internationally renowned expert on international copyright law, a Professor of Law and Associate Dean at Schulich School of Law at Dalhousie University, and the Associate Director of the school’s Law and Technology Institute.

It’s always good to get some context and in that spirit, here’s more from Michael Geist’s May 2, 2022 Law Bytes podcast,

… Despite recommendations from its own copyright review, students, teachers, librarians, and copyright experts to include a registration requirement [emphasis mine] for the additional 20 years of protection, the government chose to extend term without including protection to mitigate against the harms.

Geist’s podcast discussion with Guibault, where she explains what a ‘registration requirement’ is and how it would work, plus more, runs for almost 27 mins. (May 2, 2022 Law Bytes podcast). One final comment: visual artists and musicians are also affected by copyright rules.

Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?

A couple of Australian academics have written a comment for the journal Nature, which bears the intriguing subtitle: “The patent system assumes that inventors are human. Inventions devised by machines require their own intellectual property law and an international treaty.” (For the curious, I’ve linked to a few of my previous posts touching on intellectual property [IP], specifically the patent’s fraternal twin, copyright at the end of this piece.)

Before linking to the comment, here’s the May 27, 2022 University of New South Wales (UNSW) press release (also on EurekAlert but published May 30, 2022) which provides an overview of their thinking on the subject, Note: Links have been removed,

It’s not surprising these days to see new inventions that either incorporate or have benefitted from artificial intelligence (AI) in some way, but what about inventions dreamt up by AI – do we award a patent to a machine?

This is the quandary facing lawmakers around the world with a live test case in the works that its supporters say is the first true example of an AI system named as the sole inventor.

In commentary published in the journal Nature, two leading academics from UNSW Sydney examine the implications of patents being awarded to an AI entity.

Intellectual Property (IP) law specialist Associate Professor Alexandra George and AI expert, Laureate Fellow and Scientia Professor Toby Walsh argue that patent law as it stands is inadequate to deal with such cases and requires legislators to amend laws around IP and patents – laws that have been operating under the same assumptions for hundreds of years.

The case in question revolves around a machine called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) created by Dr Stephen Thaler, who is president and chief executive of US-based AI firm Imagination Engines. Dr Thaler has named DABUS as the inventor of two products – a food container with a fractal surface that helps with insulation and stacking, and a flashing light for attracting attention in emergencies.

For a short time in Australia, DABUS looked like it might be recognised as the inventor because, in late July 2021, a trial judge accepted Dr Thaler’s appeal against IP Australia’s rejection of the patent application five months earlier. But after the Commissioner of Patents appealed the decision to the Full Court of the Federal Court of Australia, the five-judge panel upheld the appeal, agreeing with the Commissioner that an AI system couldn’t be named the inventor.

A/Prof. George says the attempt to have DABUS awarded a patent for the two inventions instantly creates challenges for existing laws which have only ever considered humans or entities comprised of humans as inventors and patent-holders.

“Even if we do accept that an AI system is the true inventor, the first big problem is ownership. How do you work out who the owner is? An owner needs to be a legal person, and an AI is not recognised as a legal person,” she says.

Ownership is crucial to IP law. Without it there would be little incentive for others to invest in the new inventions to make them a reality.

“Another problem with ownership when it comes to AI-conceived inventions, is even if you could transfer ownership from the AI inventor to a person: is it the original software writer of the AI? Is it a person who has bought the AI and trained it for their own purposes? Or is it the people whose copyrighted material has been fed into the AI to give it all that information?” asks A/Prof. George.

For obvious reasons

Prof. Walsh says what makes AI systems so different to humans is their capacity to learn and store so much more information than an expert ever could. One of the requirements of inventions and patents is that the product or idea is novel, not obvious and is useful.

“There are certain assumptions built into the law that an invention should not be obvious to a knowledgeable person in the field,” Prof. Walsh says.

“Well, what might be obvious to an AI won’t be obvious to a human because AI might have ingested all the human knowledge on this topic, way more than a human could, so the nature of what is obvious changes.”

Prof. Walsh says this isn’t the first time that AI has been instrumental in coming up with new inventions. In the area of drug development, a new antibiotic was created in 2019 – Halicin – that used deep learning to find a chemical compound that was effective against drug-resistant strains of bacteria.

“Halicin was originally meant to treat diabetes, but its effectiveness as an antibiotic was only discovered by AI that was directed to examine a vast catalogue of drugs that could be repurposed as antibiotics. So there’s a mixture of human and machine coming into this discovery.”

Prof. Walsh says in the case of DABUS, it’s not entirely clear whether the system is truly responsible for the inventions.

“There’s lots of involvement of Dr Thaler in these inventions, first in setting up the problem, then guiding the search for the solution to the problem, and then interpreting the result,” Prof. Walsh says.

“But it’s certainly the case that without the system, you wouldn’t have come up with the inventions.”

Change the laws

Either way, both authors argue that governing bodies around the world will need to modernise the legal structures that determine whether or not AI systems can be awarded IP protection. They recommend the introduction of a new ‘sui generis’ form of IP law – which they’ve dubbed ‘AI-IP’ – that would be specifically tailored to the circumstances of AI-generated inventiveness. This, they argue, would be more effective than trying to retrofit and shoehorn AI-inventiveness into existing patent laws.

Looking forward, after examining the legal questions around AI and patent law, the authors are currently working on answering the technical question of how AI is going to be inventing in the future.

Dr Thaler has sought ‘special leave to appeal’ the case concerning DABUS to the High Court of Australia. It remains to be seen whether the High Court will agree to hear it. Meanwhile, the case continues to be fought in multiple other jurisdictions around the world.

Here’s a link to and a citation for the paper,

Artificial intelligence is breaking patent law by Alexandra George & Toby Walsh. Nature Comment, Vol. 605, pp. 616–618, 26 May 2022. DOI: 10.1038/d41586-022-01391-x. ISSN 0028-0836 (print); ISSN 1476-4687 (online)

This paper appears to be open access.

The Journey

DABUS has been granted a patent in one jurisdiction, from an August 8, 2021 article on brandedequity.com,

The patent application listing DABUS as the inventor was filed in patent offices around the world, including the US, Europe, Australia, and South Africa. But only South Africa granted the patent (Australia followed suit a few days later after a court judgment gave the go-ahead [and rejected it several months later]).

Natural person?

This September 27, 2021 article by Miguel Bibe for Inventa covers some of the same ground, adding some discussion of the ‘natural person’ problem,

The patent is for “a food container based on fractal geometry”, and was accepted by the CIPC [Companies and Intellectual Property Commission] on June 24, 2021. The notice of issuance was published in the July 2021 “Patent Journal”.  

South Africa does not have a substantive patent examination system and, instead, requires applicants to merely complete a filing for their inventions. This means that South Africa patent laws do not provide a definition for “inventor” and the office only proceeds with a formal examination in order to confirm if the paperwork was filled correctly.

… according to a press release issued by the University of Surrey: “While patent law in many jurisdictions is very specific in how it defines an inventor, the DABUS team is arguing that the status quo is not fit for purpose in the Fourth Industrial Revolution.”

On the other hand, this may not be considered as a victory for the DABUS team since several doubts and questions remain as to who should be considered the inventor of the patent. Current IP laws in many jurisdictions follow the traditional term of “inventor” as being a “natural person”, and there is no legal precedent in the world for inventions created by a machine.

August 2022 update

Mike Masnick in an August 15, 2022 posting on Techdirt provides the latest information on Stephen Thaler’s efforts to have patents and copyrights awarded to his AI entity, DABUS,

Stephen Thaler is a man on a mission. It’s not a very good mission, but it’s a mission. He created something called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) and claims that it’s creating things, for which he has tried to file for patents and copyrights around the globe, with his mission being to have DABUS named as the inventor or author. This is dumb for many reasons. The purpose of copyright and patents are to incentivize the creation of these things, by providing to the inventor or author a limited time monopoly, allowing them to, in theory, use that monopoly to make some money, thereby making the entire inventing/authoring process worthwhile. An AI doesn’t need such an incentive. And this is why patents and copyright only are given to persons and not animals or AI.

… Thaler’s somewhat quixotic quest continues to fail. The EU Patent Office rejected his application. The Australian patent office similarly rejected his request. In that case, a court sided with Thaler after he sued the Australian patent office, and said that his AI could be named as an inventor, but thankfully an appeals court set aside that ruling a few months ago. In the US, Thaler/DABUS keeps on losing as well. Last fall, he lost in court as he tried to overturn the USPTO ruling, and then earlier this year, the US Copyright Office also rejected his copyright attempt (something it has done a few times before). In June, he sued the Copyright Office over this, which seems like a long shot.

And now, he’s also lost his appeal of the ruling in the patent case. CAFC, the Court of Appeals for the Federal Circuit — the appeals court that handles all patent appeals — has rejected Thaler’s request just like basically every other patent and copyright office, and nearly all courts.

If you have the time, the August 15, 2022 posting is an interesting read.

Consciousness and ethical AI

Just to make things more fraught, an engineer at Google has claimed that one of their AI chatbots has consciousness. From a June 16, 2022 article (in Canada’s National Post [previewed on epaper]) by Patrick McGee,

Google has ignited a social media firestorm on the nature of consciousness after placing an engineer on paid leave over his belief that the tech group’s chatbot has become “sentient.”

Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, did not receive much attention when he wrote a Medium post saying he “may be fired soon for doing AI ethics work.”

But a Saturday [June 11, 2022] profile in the Washington Post characterized Lemoine as “the Google engineer who thinks the company’s AI has come to life.”

This is not the first time that Google has run into a problem with ethics and AI. Famously, Timnit Gebru, who co-led (with Margaret Mitchell) Google’s ethics and AI unit, departed in 2020. Gebru said (and maintains to this day) she was fired; Google maintained she had resigned and never did make a final statement, although after an investigation Gebru did receive an apology. You *can* read more about Gebru and the issues she brought to light in her Wikipedia entry. Coincidentally (or not), Margaret Mitchell was terminated/fired from Google in February 2021 after criticizing the company for Gebru’s ‘firing’. See a February 19, 2021 article by Megan Rose Dickey for TechCrunch for details about Margaret Mitchell’s termination, which the company has admitted was a firing.

Getting back to intellectual property and AI.

What about copyright?

There is no mention of copyright in the earliest material I have here about the ‘creative’ arts and artificial intelligence: “Writing and AI or is a robot writing this blog?” posted July 16, 2014. More recently, there’s “Beer and wine reviews, the American Chemical Society’s (ACS) AI editors, and the Turing Test” posted May 20, 2022. The type of writing featured is not literary or what’s typically considered creative writing.

On the more creative front, there’s “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)” posted on December 3, 2021. The literary/creative portion of the post can be found under the ‘AI and creativity’ subhead approximately 30% of the way down and where I mention Douglas Coupland. Again, there’s no mention of copyright.

It’s with the visual arts that copyright gets mentioned. The first one I can find here is “Robot artists—should they get copyright protection” posted on July 10, 2017.

Fun fact: Andres Guadamuz, who was mentioned in my posting, took to his own blog where he gave my blog a shout out while implying that I wasn’t thoughtful. The gist of his August 8, 2017 posting was that he was misunderstood by many people, which led to the title for his post, “Should academics try to engage the public?” Thankfully, he soldiers on trying to educate us with his TechnoLama blog.

Lastly, there’s this August 16, 2019 posting “AI (artificial intelligence) artist got a show at a New York City art gallery” where you can scroll down to the ‘What about intellectual property?’ subhead about 80% of the way.

You look like a thing …

I am recommending a book for anyone who’d like to learn a little more about how artificial intelligence (AI) works: “You look like a thing and I love you; How Artificial Intelligence Works and Why It’s Making the World a Weirder Place” by Janelle Shane (2019).

It does not require an understanding of programming/coding/algorithms/etc.; Shane makes the subject as accessible as possible and gives you insight into why the term ‘artificial stupidity’ is more applicable than you might think. You can find Shane’s website here and you can find her 10 minute TED talk here.

*’can’ added to sentence on May 12, 2023.

The Royal Bank of Canada reports ‘Humans wanted’ and some thoughts on the future of work, robots, and artificial intelligence

It seems the Royal Bank of Canada (RBC or Royal Bank) wants to weigh in on, and influence, what new technologies will bring us and how they will affect our working lives. (I will be offering my critiques of the whole thing.)

Launch yourself into the future (if you’re a youth)

“I’m not planning on being replaced by a robot.” That’s the first line of text you’ll see if you go to the Royal Bank of Canada’s new Future Launch web space and latest marketing campaign and investment.

This whole endeavour is aimed at ‘youth’ and represents a $500M investment. Of course, that money will be invested over a 10-year period which works out to $50M per year and doesn’t seem quite so munificent given how much money Canadian banks make (from a March 1, 2017 article by Don Pittis for the Canadian Broadcasting Corporation [CBC] news website),

Yesterday [February 28, 2017] the Bank of Montreal [BMO] said it had made about $1.5 billion in three months.

That may be hard to put in context until you hear that it is an increase in profit of nearly 40 per cent from the same period last year and dramatically higher than stock watchers had been expecting.

Not all the banks have done as well as BMO this time. The Royal Bank’s profits were up 24 per cent at $3 billion. [emphasis mine] CIBC [Canadian Imperial Bank of Commerce] profits were up 13 per cent. TD [Toronto Dominion] releases its numbers tomorrow.

Those numbers would put the RBC on track to a profit of roughly $12B in 2017. This means $500M represents approximately 4% of a single year’s profits and, since it will be disbursed over a 10-year period, the investment works out to approximately 0.4% of annual profits per year, less than half of one percent. Paradoxically, it’s a lot of money and it’s not that much money.
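For anyone who wants to check that arithmetic, here’s a quick back-of-the-envelope calculation (a sketch only; the ~$12B annual figure is my extrapolation from the quarterly profit reported in the CBC article, not an RBC number):

# Back-of-the-envelope check on the Future Launch numbers.
quarterly_profit = 3.0e9              # RBC quarterly profit, per the CBC article
annual_profit = quarterly_profit * 4  # rough extrapolation: ~$12B per year

commitment = 500e6                    # Future Launch: $500M over 10 years
per_year = commitment / 10            # $50M per year

print(f"Share of one year's profit: {commitment / annual_profit:.1%}")    # ~4.2%
print(f"Annual outlay vs annual profit: {per_year / annual_profit:.2%}")  # ~0.42%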

Advertising awareness

First, there was some advertising (in Vancouver at least),

[downloaded from http://flinflononline.com/local-news/356505]

You’ll notice she has what could be described as a ‘halo’. Is she an angel or, perhaps, she’s an RBC angel? After all, yellow and gold are closely associated as colours and RBC sports a partially yellow logo. As well, the model is wearing a blue denim jacket, RBC’s other logo colour.

Her ‘halo’ is intact but those bands of colour bend a bit and could be described as ‘rainbow-like’ bringing to mind ‘pots of gold’ at the end of the rainbow.  Free association is great fun and allows people to ascribe multiple and/or overlapping ideas and stories to the advertising. For example, people who might not approve of imagery that hearkens to religious art might have an easier time with rainbows and pots of gold. At any rate, none of the elements in images/ads are likely to be happy accidents or coincidence. They are intended to evoke certain associations, e.g., anyone associated with RBC will be blessed with riches.

The timing is deliberate, too, just before Easter 2018 (April 1), suggesting to some of us that even when the robots arrive destroying the past, youth will rise up (resurrection) for a new future. Or, if you prefer, Passover and its attendant themes of being spared and moving to the Promised Land.

Enough with the semiotic analysis and onto campaign details.

Humans Wanted: an RBC report

It seems the precursor to Future Launch is an RBC report, ‘Humans Wanted’, which itself is the outcome of still earlier work, such as this Brookfield Institute for Innovation + Entrepreneurship (BII+E) report, Future-proof: Preparing young Canadians for the future of work, March 2017 (authors: Creig Lamb and Sarah Doyle), which features a quote from RBC’s President and CEO (Chief Executive Officer) David McKay,

“Canada’s future prosperity and success will rely on us harnessing the innovation of our entire talent pool. A huge part of our success will depend on how well we integrate this next generation of Canadians into the workforce. Their confidence, optimism and inspiration could be the key to helping us reimagine traditional business models, products and ways of working.”  David McKay, President and CEO, RBC

There are a number of major trends that have the potential to shape the future of work, from climate change and resource scarcity to demographic shifts resulting from an aging population and immigration. This report focuses on the need to prepare Canada’s youth for a future where a great number of jobs will be rapidly created, altered or made obsolete by technology.

Successive waves of technological advancements have rocked global economies for centuries, reconfiguring the labour force and giving rise to new economic opportunities with each wave. Modern advances, including artificial intelligence and robotics, once again have the potential to transform the economy, perhaps more rapidly and more dramatically than ever before. As past pillars of Canada’s economic growth become less reliable, harnessing technology and innovation will become increasingly important in driving productivity and growth. 1, 2, 3

… (p. 2 print; p. 4 PDF)

The Brookfield Institute (at Ryerson University in Toronto, Ontario, Canada) report is worth reading if for no other reason than its Endnotes. Unlike the RBC materials, you can find the source for the information in the Brookfield report.

After Brookfield, there was the RBC Future Launch Youth Forums 2017: What We Learned document (October 13, 2017, according to ‘View Page Info’),

In this rapidly changing world, there’s a new reality when it comes to work. A degree or diploma no longer guarantees a job, and some of the positions, skills and trades of today won’t exist – or be relevant – in the future.

Through an unprecedented 10-year, $500 million commitment, RBC Future Launch™ is focused on driving real change and preparing today’s young people for the future world of work, helping them access the skills, job experience and networks that will enable their success.

At the beginning of this 10-year journey RBC® wanted to go beyond research and expert reports to better understand the regional issues facing youth across Canada and to hear directly from young people and organizations that work with them. From November 2016 to May 2017, the RBC Future Launch team held 15 youth forums across the country, bringing together over 430 partners, including young people, to uncover ideas and talk through solutions to address the workforce gaps Canada’s youth face today.

Finally,  a March 26, 2018 RBC news release announces the RBC report: ‘Humans Wanted – How Canadian youth can thrive in the age of disruption’,

Automation to impact at least 50% of Canadian jobs in the next decade: RBC research

Human intelligence and intuition critical for young people and jobs of the future

  • Being ‘human’ will ensure resiliency in an era of disruption and artificial intelligence
  • Skills mobility – the ability to move from one job to another – will become a new competitive advantage

TORONTO, March 26, 2018 – A new RBC research paper, Humans Wanted – How Canadian youth can thrive in the age of disruption, has revealed that 50% of Canadian jobs will be disrupted by automation in the next 10 years.

As a result of this disruption, Canada’s Gen Mobile – young people who are currently transitioning from education to employment – are unprepared for the rapidly changing workplace. With 4 million Canadian youth entering the workforce over the next decade, and the shift from a jobs economy to a skills economy, the research indicates young people will need a portfolio of “human skills” to remain competitive and resilient in the labour market.

“Canada is at a historic cross-roads – we have the largest generation of young people coming into the workforce at the very same time technology is starting to impact most jobs in the country,” said Dave McKay, President and CEO, RBC. “Canada is on the brink of a skills revolution and we have a responsibility to prepare young people for the opportunities and ambiguities of the future.”

“There is a changing demand for skills,” said John Stackhouse, Senior Vice-President, RBC. “According to our findings, if employers and the next generation of employees focus on foundational ‘human skills’, they’ll be better able to navigate a new age of career mobility as technology continues to reshape every aspect of the world around us.”

Key Findings:

  • Canada’s economy is on target to add 2.4 million jobs over the next four years, virtually all of which will require a different mix of skills.
  • A growing demand for “human skills” will grow across all job sectors and include: critical thinking, co-ordination, social perceptiveness, active listening and complex problem solving.
  • Rather than a nation of coders, digital literacy – the ability to understand digital items, digital technologies or the Internet fluently – will be necessary for all new jobs.
  • Canada’s education system, training programs and labour market initiatives are inadequately designed to help Canadian youth navigate the new skills economy, resulting in roughly half a million 15-29 year olds who are unemployed and another quarter of a million who are working part-time involuntarily.
  • Canadian employers are generally not prepared, through hiring, training or retraining, to recruit and develop the skills needed to ensure their organizations remain competitive in the digital economy.

“As digital and machine technology advances, the next generation of Canadians will need to be more adaptive, creative and collaborative, adding and refining skills to keep pace with a world of work undergoing profound change,” said McKay. “Canada’s future prosperity depends on getting a few big things right and that’s why we’ve introduced RBC Future Launch.”

RBC Future Launch is a decade-long commitment to help Canadian youth prepare for the jobs of tomorrow. RBC is committed to acting as a catalyst for change, bringing government, educators, public sector and not-for-profits together to co-create solutions to help young people better prepare for the future of the work through “human skills” development, networking and work experience.

Top recommendations from the report include:

  • A national review of post-secondary education programs to assess their focus on “human skills” including global competencies
  • A national target of 100% work-integrated learning, to ensure every undergraduate student has the opportunity for an apprenticeship, internship, co-op placement or other meaningful experiential placement
  • Standardization of labour market information across all provinces and regions, and a partnership with the private sector to move skills and jobs information to real-time, interactive platforms
  • The introduction of a national initiative to help employers measure foundational skills and incorporate them in recruiting, hiring and training practices

Join the conversation with Dave McKay and John Stackhouse on Wednesday, March 28 [2018] at 9:00 a.m. to 10:00 a.m. EDT at RBC Disruptors on Facebook Live.

Click here to read: Humans Wanted – How Canadian youth can thrive in the age of disruption.

About the Report
RBC Economics amassed a database of 300 occupations and drilled into the skills required to perform them now and projected into the future. The study groups the Canadian economy into six major clusters based on skillsets as opposed to traditional classifications and sectors. This cluster model is designed to illustrate the ease of transition between dissimilar jobs as well as the relevance of current skills to jobs of the future.

Six Clusters
Doers: Emphasis on basic skills
Transition: Greenhouse worker to crane operator
High Probability of Disruption

Crafters: Medium technical skills; low in management skills
Transition: Farmer to plumber
Very High Probability of Disruption

Technicians: High in technical skills
Transition: Car mechanic to electrician
Moderate Probability of Disruption

Facilitators: Emphasis on emotional intelligence
Transition: Dental assistant to graphic designer
Moderate Probability of Disruption

Providers: High in Analytical Skills
Transition: Real estate agent to police officer
Low Probability of Disruption

Solvers: Emphasis on management skills and critical thinking
Transition: Mathematician to software engineer
Minimal Probability of Disruption

About RBC
Royal Bank of Canada is a global financial institution with a purpose-driven, principles-led approach to delivering leading performance. Our success comes from the 81,000+ employees who bring our vision, values and strategy to life so we can help our clients thrive and communities prosper. As Canada’s biggest bank, and one of the largest in the world based on market capitalization, we have a diversified business model with a focus on innovation and providing exceptional experiences to our 16 million clients in Canada, the U.S. and 34 other countries. Learn more at rbc.com.‎

We are proud to support a broad range of community initiatives through donations, community investments and employee volunteer activities. See how at http://www.rbc.com/community-sustainability/.

– 30 – 

The report features a lot of bulleted points, airy text (large fonts and lots of space between the lines), inoffensive graphics, and human interest stories illustrating the points made elsewhere in the text.

There is no bibliography or any form of note telling you where to find the sources for the information in the report. The 2.4M jobs mentioned in the news release are also mentioned in the report on p. 16 (PDF), where they are credited in the main body of the text to the EDSC. I’m not up-to-date on my abbreviations but I’m pretty sure it does not stand for East Doncaster Secondary College or East Duplin Soccer Club. I’m betting it stands for Employment and Social Development Canada. All that led to visiting the EDSC website and trying (unsuccessfully) to find the report or data sheet used to supply the figures RBC quoted in their report and news release.

Also, I’m not sure who came up with or how they developed the ‘crafters’, ‘doers’, ‘technicians’, etc. categories.
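RBC doesn’t describe its method but grouping occupations by their skill profiles is a standard clustering exercise. Here’s a minimal sketch of the general idea using k-means; the occupations, skill dimensions, and scores below are invented for illustration and this is not RBC’s actual model or data:

# Illustrative sketch: clustering occupations by skill profile.
# All scores are invented; RBC's actual data and method are unpublished.
import numpy as np
from sklearn.cluster import KMeans

occupations = ["greenhouse worker", "plumber", "electrician",
               "graphic designer", "police officer", "software engineer"]
# Columns: basic, technical, management, social skill scores (0-1 scale).
skills = np.array([
    [0.9, 0.2, 0.1, 0.3],
    [0.5, 0.7, 0.2, 0.3],
    [0.4, 0.9, 0.3, 0.3],
    [0.3, 0.5, 0.3, 0.8],
    [0.4, 0.4, 0.5, 0.8],
    [0.2, 0.9, 0.7, 0.5],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(skills)
for occupation, label in zip(occupations, kmeans.labels_):
    print(f"{occupation}: cluster {label}")

On this kind of model, the distance between two occupations’ skill vectors would stand in for the ‘ease of transition’ the report describes.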

Here’s more from p. 2 of their report,

CANADA, WE HAVE A PROBLEM. [emphasis mine] We’re hurtling towards the 2020s with perfect hindsight, not seeing what’s clearly before us. The next generation is entering the workforce at a time of profound economic, social and technological change. We know it. [emphasis mine] Canada’s youth know it. And we’re not doing enough about it.

RBC wants to change the conversation, [emphasis mine] to help Canadian youth own the 2020s — and beyond. RBC Future Launch is our 10-year commitment to that cause, to help young people prepare for and navigate a new world of work that, we believe, will fundamentally reshape Canada. For the better. If we get a few big things right.

This report, based on a year-long research project, is designed to help that conversation. Our team conducted one of the biggest labour force data projects [emphasis mine] in Canada, and crisscrossed the country to speak with students and workers in their early careers, with educators and policymakers, and with employers in every sector.

We discovered a quiet crisis — of recent graduates who are overqualified for the jobs they’re in, of unemployed youth who weren’t trained for the jobs that are out there, and young Canadians everywhere who feel they aren’t ready for the future of work.

Sarcasm ahead

There’s nothing like starting your remarks with a paraphrased quote from a US movie about the Apollo 13 spacecraft crisis as in, “Houston, we have a problem.” I’ve always preferred Trudeau (senior) and his comment about ‘keeping our noses out of the nation’s bedrooms’. It’s not applicable but it’s more amusing and a Canadian quote to boot.

So, we know we’re having a crisis which we know about but RBC wants to tell us about it anyway (?) and RBC wants to ‘change the conversation’. OK. So how does presenting the RBC Future Launch change the conversation? Especially in light of the fact that the conversation has already been held: “a year-long research project … Our team conducted one of the biggest labour force data projects [emphasis mine] in Canada, and crisscrossed the country to speak with students and workers in their early careers, with educators and policymakers, and with employers in every sector.” Is the proposed change something along the lines of ‘Don’t worry, be happy; RBC has six categories (Doers, Crafters, Technicians, Facilitators, Providers, Solvers) for you’? (Yes, for those who recognized it, I’m referencing Bobby McFerrin’s hit song, Don’t Worry, Be Happy.)

Also, what data did RBC collect and how did they collect it? Could Facebook and other forms of social media have been involved? (My March 29, 2018 posting mentions the latest Facebook data scandal; scroll down about 80% of the way.)

These are the people leading the way and ‘changing the conversation’, as it were, and they can’t present logical, coherent points. What kind of conversation could they possibly have with youth (or anyone else for that matter)?

And, if part of the problem is that employers are not planning for the future, how does Future Launch ‘change that part of the conversation’?

RBC Future Launch

The Future Launch announcement itself came a year before the report, in an RBC March 28, 2017 news release,

TORONTO, March 28, 2017 – In an era of unprecedented economic and technological change, RBC is today unveiling its largest-ever commitment to Canada’s future. RBC Future Launch is a 10-year, $500-million initiative to help young people gain access and opportunity to the skills, job experience and career networks needed for the future world of work.

“Tomorrow’s prosperity will depend on today’s young people and their ability to take on a future that’s equally inspiring and unnerving,” said Dave McKay, RBC president and CEO. “We’re sitting at an intersection of history, as a massive generational shift and unprecedented technological revolution come together. And we need to ensure young Canadians are prepared to help take us forward.”

Future Launch is a core part of RBC’s celebration of Canada 150, and is the result of two years of conversations with young Canadians from coast to coast to coast.

“Young people – Canada’s future – have the confidence, optimism and inspiration to reimagine the way our country works,” McKay said. “They just need access to the capabilities and connections to make the 21st century, and their place in it, all it should be.”

Working together with young people, RBC will bring together community leaders, industry experts, governments, educators and employers to help design solutions and harness resources for young Canadians to chart a more prosperous and inclusive future.

Over 10 years, RBC Future Launch will invest in areas that help young people learn skills, experience jobs, share knowledge and build resilience. The initiative will address the following critical gaps:

  • A lack of relevant experience. Too many young Canadians miss critical early opportunities because they’re stuck in a cycle of “no experience, no job.” According to the consulting firm McKinsey & Co., 83 per cent of educators believe youth are prepared for the workforce, but only 34 per cent of employers and 44 per cent of young people agree. RBC will continue to help educators and employers develop quality work-integrated learning programs to build a more dynamic bridge between school and work.
  • A lack of relevant skills. Increasingly, young people entering the workforce require a complex set of technical, entrepreneurial and social skills that cannot be attained solely through a formal education. A 2016 report from the World Economic Forum states that by 2020, more than a third of the desired core skill-sets of most occupations will be different from today — if that job still exists. RBC will help ensure young Canadians gain the skills, from critical thinking to coding to creative design, that will help them integrate into the workplace of today, and be more competitive for the jobs of tomorrow.
  • A lack of knowledge networks. Young people are at a disadvantage in the job market if they don’t have an opportunity to learn from others and discover the realities of jobs they’re considering. Many have told RBC that there isn’t enough information on the spectrum of jobs that are available. From social networks to mentoring programs, RBC will harness the vast knowledge and goodwill of Canadians in guiding young people to the opportunities that exist and will exist, across Canada.
  • A lack of future readiness. Many young Canadians know their future will be defined by disruption. A new report, Future-proof: Preparing young Canadians for the future of work, by the Brookfield Institute for Innovation + Entrepreneurship, found that 42 per cent of the Canadian labour force is at a high risk of being affected by automation in the next 10 to 20 years. Young Canadians are okay with that: they want to be the disruptors and make the future workforce more creative and productive. RBC will help to create opportunities, through our education system, workplaces and communities at large to help young Canadians retool, rethink and rebuild as the age of disruption takes hold.

By helping young people unlock their potential and launch their careers, RBC can assist them with building a stronger future for themselves, and a more prosperous Canada for all. RBC created The Launching Careers Playbook, an interactive, digital resource focused on enabling young people to reach their full potential through three distinct modules: I am starting my career; I manage interns and I create internship programs. The Playbook shares the design principles, practices, and learnings captured from the RBC Career Launch Program over three years, as well as the research and feedback RBC has received from young people and their managers.

More information on RBC Future Launch can be found at www.rbc.com/futurelaunch.

Weirdly, this news release is the only document which gives you sources for some of RBC’s information. If you should be inclined, you can check the original reports as cited in the news release and determine if you agree with the conclusions the RBC people drew from them.

Cynicism ahead

They are planning to change the conversation, are they? I can't help wondering what return RBC is expecting to make on its investment ($500M over 10 years). RBC branding is prominently displayed not only on the launch page but also in several of the subtopics listed on the page.

There appears to be some very good and helpful information although much of it leads you to using a bank for one reason or another. For example, if you’re planning to become an entrepreneur (and there is serious pressure from the government of Canada on this generation to become precisely that), then it’s very handy that you have easy access to RBC from any of the Future Launch pages. As well, you can easily apply for a job at or get a loan from RBC after you’ve done some of the exercises on the website and possibly given RBC a lot of data about yourself.

For anyone who believes I’m being harsh about the bank, you might want to check out a March 15, 2017 article by Erica Johnson for the Canadian Broadcasting Corporation’s Go Public website. It highlights just how ruthless Canadian banks can be,

Employees from all five of Canada’s big banks have flooded Go Public with stories of how they feel pressured to upsell, trick and even lie to customers to meet unrealistic sales targets and keep their jobs.

The deluge is fuelling multiple calls for a parliamentary inquiry, even as the banks claim they’re acting in customers’ best interests.

In nearly 1,000 emails, employees from RBC, BMO, CIBC, TD and Scotiabank locations across Canada describe the pressures to hit targets that are monitored weekly, daily and in some cases hourly.

“Management is down your throat all the time,” said a Scotiabank financial adviser. “They want you to hit your numbers and it doesn’t matter how.”

CBC has agreed to protect their identities because the workers are concerned about current and future employment.

An RBC teller from Thunder Bay, Ont., said even when customers don’t need or want anything, “we need to upgrade their Visa card, increase their Visa limits or get them to open up a credit line.”

“It’s not what’s important to our clients anymore,” she said. “The bank wants more and more money. And it’s leading everyone into debt.”

A CIBC teller said, “I am expected to aggressively sell products, especially Visa. Hit those targets, who cares if it’s hurting customers.”

….

Many bank employees described pressure tactics used by managers to try to increase sales.

An RBC certified financial planner in Guelph, Ont., said she’s been threatened with pay cuts and losing her job if she doesn’t upsell enough customers.

“Managers belittle you,” she said. “We get weekly emails that highlight in red the people who are not hitting those sales targets. It’s bullying.”

Employees at several RBC branches in Calgary said there are white boards posted in the staff room that list which financial advisers are meeting their sales targets and which advisers are coming up short.

A CIBC small business associate who quit in January after nine years on the job said her district branch manager wasn’t pleased with her sales results when she was pregnant.

While working in Waterloo, Ont., she says her manager also instructed staff to tell all new international students looking to open a chequing account that they had to open a “student package,” which also included a savings account, credit card and overdraft.

“That is unfair and not the law, but we were told to do it for all of them.”

Go Public requested interviews with the CEOs of the five big banks — BMO, CIBC, RBC, Scotiabank and TD — but all declined.

If you have the time, it’s worth reading Johnson’s article in its entirety as it provides some fascinating insight into Canadian banking practices.

Final comments and an actual ‘conversation’ about the future of work

I’m torn, It’s good to see an attempt to grapple with the extraordinary changes we are likely to see in the not so distant future. It’s hard to believe that this Future Launch initiative is anything other than a self-interested means of profiting from fears about the future and a massive public relations campaign designed to engender good will. Doubly so since the very bad publicity the banks including RBC garnered last year (2017), as mentioned in the Johnson article.

Also, RBC and who knows how many other vested interests appear to have gathered data and information which they’ve used to draw any number of conclusions. First, I can’t find any information about what data RBC is gathering, who else might have access, and what plans, if any, they have to use it. Second, RBC seems to have predetermined how this ‘future of work’ conversation needs to be changed.

I suggest treading as lightly as possible and keeping in mind other ‘conversations’ are possible. For example, Mike Masnick at Techdirt has an April 3, 2018 posting about a new ‘future of work’ initiative,

For the past few years, there have been plenty of discussions about “the future of work,” but they tend to fall into one of two camps. You have the pessimists, who insist that the coming changes wrought by automation and artificial intelligence will lead to fewer and fewer jobs, as all of the jobs of today are automated out of existence. Then, there are the optimists who point to basically every single past similar prediction of doom and gloom due to innovation, which have always turned out to be incorrect. People in this camp point out that technology is more likely to augment than replace human-based work, and vaguely insist that “the jobs will come.” Whether you fall into one of those two camps — or somewhere in between or somewhere else entirely — one thing I’d hope most people can agree on is that the future of work will be… different.

Separately, we’re also living in an age where it is increasingly clear that those in and around the technology industry must take more responsibility in thinking through the possible consequences of the innovations they’re bringing to life, and exploring ways to minimize the harmful results (and hopefully maximizing the beneficial ones).

That brings us to the project we're announcing today, Working Futures, which is an attempt to explore what the future of work might really look like in the next ten to fifteen years. We're doing this project in partnership with two organizations that we've worked with multiple times in the past: Scout.ai and R Street.

….

The key point of this project: rather than just worry about the bad stuff or hand-wave around the idea of good stuff magically appearing, we want to really dig in — figure out what new jobs may actually appear, look into what benefits may accrue as well as what harms may be dished out — and see if there are ways to minimize the negative consequences, while pushing the world towards the beneficial consequences.

To do that, we’re kicking off a variation on the classic concept of scenario planning, bringing together a wide variety of individuals with different backgrounds, perspectives and ideas to run through a fun and creative exercise to imagine the future, while staying based in reality. We’re adding in some fun game-like mechanisms to push people to think about where the future might head. We’re also updating the output side of traditional scenario planning by involving science fiction authors, who obviously have a long history of thinking up the future, and who will participate in this process and help to craft short stories out of the scenarios we build, making them entertaining, readable and perhaps a little less “wonky” than the output of more traditional scenario plans.

There you have it; the Royal Bank is changing the conversation and Techdirt is inviting you to join in scenario planning and more.

Making a trademark claim memorable and fun

Usually when I write about intellectual property, it concerns technology and/or science disputes but this particular response to an alleged trademark violation amuses me greatly, swipes at a few Canadian stereotypes, and could act as a model for anyone who wants to lodge such protests. Before getting to the video, here are some details about the dispute from a July 13, 2017 posting by Mike Masnick for Techdirt,

… — a few years ago, there was a virally popular rap song and video, by Brendan “B.Rich” Richmond, called Out for a Rip, spoofing Canadian culture/stereotypes. It got over 12 million views, and has become a bit of an anthem.

So, yeah. Coca Cola is using the phrase “out for a rip” on its Coke bottles and Richmond and his lawyer Kittredge decided the best way to respond was to write a song calling out Coca Cola on this and then recording a whole video. At the end of the video there’s an actual letter (part of which is dictated in the song itself) which is also pretty damn amusing:

Dear Coke,

I represent Brendan (B.Rich) Richmond (a.k.a. Friggin’ Buddy). You jacked his catchphrase, but you already know that.

Buddy owns the registered trademark “OUT FOR A RIP” in Canada (TMA934277). The music video for buddy’s original composition “OUT FOR A RIP” has been viewed more than 12 million times. Canadians associate the phrase “OUT FOR A RIP” with him.

Personally, I’m pretty psyched about this once-in-a-career opportunity to send a demand letter in the form of a rap video. Nonetheless, unlicensed use of OUT FOR A RIP violates my client’s rights. From what I understand, you guys do fairly well for yourselves – at least in comparison to most other multinational corporations, the GDP of most countries, or, say, the average musician, right? No room in your budget to clear IP rights?

Contact me no later than August 1, 2017 to discuss settlement of this matter. If you do not wish to discuss settlement, we require that you immediately cease using the OUT FOR A RIP mark, recall all OUT FOR A RIP bottles, and take immediate steps to preserve all relevant evidence in anticipation of possible litigation.

Regards,
Rob Kittredge

….

Here’s the ‘cease and desist’ video,

Enjoy!

Radical copyright reform proposal in the European Union

It seems the impulse to maximize copyright control has overtaken European Union officials. A Sept. 14, 2016 news item on phys.org lays out a few details,

The EU will overhaul copyright law to shake up how online news and entertainment is paid for in Europe, under proposals announced by European Commission chief Jean-Claude Juncker Wednesday [Sept. 14, 2016].

Pop stars such as Coldplay and Lady Gaga will hail part of the plan as a new weapon to bring a fair fight to YouTube, the Google-owned video service that they say is sapping the music business.

But the reform plans have attracted the fury of filmmakers and start-up investors who see it as a threat to European innovation and a wrong-headed favour to powerful media groups.

A Sept. 14, 2016 European Commission press release provides the European Union’s version of why more stringent copyright is needed,

“I want journalists, publishers and authors to be paid fairly for their work, whether it is made in studios or living rooms, whether it is disseminated offline or online, whether it is published via a copying machine or commercially hyperlinked on the web.”–President Juncker, State of the Union 2016

On the occasion of President Juncker’s 2016 State of the Union address, the Commission today set out proposals on the modernisation of copyright to increase cultural diversity in Europe and content available online, while bringing clearer rules for all online players. The proposals will also bring tools for innovation to education, research and cultural heritage institutions.

Digital technologies are changing the way music, films, TV, radio, books and the press are produced, distributed and accessed. New online services such as music streaming, video-on-demand platforms and news aggregators have become very popular, while consumers increasingly expect to access cultural content on the move and across borders. The new digital landscape will create opportunities for European creators as long as the rules offer legal certainty and clarity to all players. As a key part of its Digital Single Market strategy, the Commission has adopted proposals today to allow:

  • Better choice and access to content online and across borders
  • Improved copyright rules on education, research, cultural heritage and inclusion of disabled people
  • A fairer and sustainable marketplace for creators, the creative industries and the press

Andrus Ansip, Vice-President for the Digital Single Market, said: "Europeans want cross-border access to our rich and diverse culture. Our proposal will ensure that more content will be available, transforming Europe's copyright rules in light of a new digital reality. Europe's creative content should not be locked-up, but it should also be highly protected, in particular to improve the remuneration possibilities for our creators. We said we would deliver all our initiatives to create a Digital Single Market by the end of the year and we keep our promises. Without a properly functioning Digital Single Market we will miss out on creativity, growth and jobs."

Günther H. Oettinger, Commissioner for the Digital Economy and Society, said: “Our creative industries [emphasis mine] will benefit from these reforms which tackle the challenges of the digital age successfully while offering European consumers a wider choice of content to enjoy. We are proposing a copyright environment that is stimulating, fair and rewards investment.”

Today, almost half of EU internet users listen to music, watch TV series and films or play games online; however broadcasters and other operators find it hard to clear rights for their online or digital services when they want to offer them in other EU countries. Similarly, the socio-economically important sectors of education, research and cultural heritage too often face restrictions or legal uncertainty which holds back their digital innovation when using copyright protected content, including across borders. Finally, creators, other right holders and press publishers are often unable to negotiate the conditions and also payment for the online use of their works and performances.

Altogether, today’s copyright proposals have three main priorities:

1. Better choice and access to content online and across borders

With our proposal on the portability of online content presented in December 2015, we gave consumers the right to use their online subscriptions to films, music, ebooks when they are away from their home country, for example on holidays or business trips. Today, we propose a legal mechanism for broadcasters to obtain more easily the authorisations they need from right holders to transmit programmes online in other EU Member States. This is about programmes that broadcasters transmit online at the same time as their broadcast as well as their catch-up services that they wish to make available online in other Member States, such as MyTF1 in France, ZDF Mediathek in Germany, TV3 Play in Denmark, Sweden and the Baltic States and AtresPlayer in Spain. Empowering broadcasters to make the vast majority of their content, such as news, cultural, political, documentary or entertainment programmes, shown also in other Member States will give more choice to consumers.

Today’s rules also make it easier for operators who offer packages of channels (such as Proximus TV in Belgium, Movistar+ in Spain, Deutsche Telekom’s IPTV Entertain in Germany), to get the authorisations they need: instead of having to negotiate individually with every right holder in order to offer such packages of channels originating in other EU Member States, they will be able to get the licenses from collective management organisations representing right holders. This will also increase the choice of content for their customers.

To help development of Video-on-Demand (VoD) offerings in Europe, we ask Member States to set up negotiation bodies to help reach licensing deals, including those for cross-border services, between audiovisual rightholders and VoD platforms. A dialogue with the audiovisual industry on licensing issues and the use of innovative tools like licensing hubs will complement this mechanism.

To enhance access to Europe's rich cultural heritage, the new Copyright Directive will help museums, archives and other institutions to digitise and make available across borders out-of-commerce works, such as books or films that are protected by copyright, but no longer available to the public.

In parallel the Commission will use its €1.46 billion Creative Europe MEDIA programme to further support the circulation of creative content across borders. This includes more funding for subtitling and dubbing; a new catalogue of European audiovisual works for VoD providers that they can directly use for programming; and online tools to improve the digital distribution of European audiovisual works and make them easier to find and view online.

These combined actions will encourage people to discover TV and radio programmes from other European countries, keep in touch with their home countries when living in another Member State and enhance the availability of European films, including across borders, hence highlighting Europe’s rich cultural diversity.

2. Improving copyright rules on research, education and inclusion of disable [sic] people

Students and teachers are eager to use digital materials and technologies for learning, but today almost 1 in 4 educators encounter copyright-related restrictions in their digital teaching activities every week. The Commission has proposed today a new exception to allow educational establishments to use materials to illustrate teaching through digital tools and in online courses across borders.

The proposed Directive will also make it easier for researchers across the EU to use text and data mining (TDM) technologies to analyse large sets of data. This will provide a much needed boost to innovative research considering that today nearly all scientific publications are digital and their overall volume is increasing by 8-9% every year worldwide.

The Commission also proposes a new mandatory EU exception which will allow cultural heritage institutions to preserve works digitally, crucial for the survival of cultural heritage and for citizens’ access in the long term.

Finally, the Commission is proposing legislation to implement the Marrakesh Treaty to facilitate access to published works for persons who are blind, have other visual impairments or are otherwise print disabled. These measures are important to ensure that copyright does not constitute a barrier to the full participation in society of all citizens and will allow for the exchange of accessible format copies within the EU and with third countries that are parties to the Treaty, avoiding duplication of work and waste of resources.

3. A fairer and sustainable marketplace for creators and press

The Copyright Directive aims to reinforce the position of right holders to negotiate and be remunerated for the online exploitation of their content on video-sharing platforms such as YouTube or Dailymotion. Such platforms will have an obligation to deploy effective means such as technology to automatically detect songs or audiovisual works which right holders have identified and agreed with the platforms either to authorise or remove.

Newspapers, magazines and other press publications have benefited from the shift from print to digital and online services like social media and news aggregators. It has led to broader audiences, but it has also impacted advertising revenue and made the licensing and enforcement of the rights in these publications increasingly difficult. The Commission proposes to introduce a new related right for publishers, similar to the right that already exists under EU law for film producers, record (phonogram) producers and other players in the creative industries like broadcasters.

The new right recognises the important role press publishers play in investing in and creating quality journalistic content, which is essential for citizens’ access to knowledge in our democratic societies. As they will be legally recognised as right holders for the very first time they will be in a better position when they negotiate the use of their content with online services using or enabling access to it, and better able to fight piracy. This approach will give all players a clear legal framework when licensing content for digital uses, and help the development of innovative business models for the benefit of consumers.

The draft Directive also obliges publishers and producers to be transparent and inform authors or performers about profits they made with their works. It also puts in place a mechanism to help authors and performers to obtain a fair share when negotiating remuneration with producers and publishers. This should lead to higher level of trust among all players in the digital value chain.

Towards a Digital Single Market

As part of the Digital Single Market strategy presented in May 2015, today’s proposals complement the proposed regulation on portability of legal content (December 2015), the revised Audiovisual Media and Services Directive, the Communication on online platforms (May 2016). Later this autumn the Commission will propose to improve enforcement of all types of intellectual property rights, including copyright.

Today’s EU copyright rules, presented along with initiatives to boost internet connectivity in the EU (press releasepress conference at 15.15 CET), are part of the EU strategy to create a Digital Single Market (DSM). The Commission set out 16 initiatives (press release) and is on the right track to deliver all of them the end of this year.

While Juncker mixes industry (publishers) with content creators (journalists, authors), Günther H. Oettinger, Commissioner for the Digital Economy and Society, clearly states that 'creative industries' are to be the beneficiaries. Business interests have tended to benefit disproportionately under current copyright regimes. The disruption posed by digital content has caused these businesses some agony and they have responded by lobbying vigorously to maximize copyright. For the most part, individual musicians, authors, visual artists and other content creators are highly unlikely to benefit from this latest reform.

I’m not a big fan of Google or its ‘stepchild’, YouTube but it should be noted that at least one career would not have existed without free and easy access to videos, Justin Bieber’s. He may not have made a penny from his YouTube videos but that hasn’t hurt his financial picture. Without YouTube, he would have been unlikely to get the exposure and recognition which have in turn led him to some serious financial opportunities.

I am somewhat less interested in the show business aspect than I am in the impact this could have on science as per section 2 (Improving copyright rules on research, education and inclusion of disable [sic] people) of the European Commission press release. A Sept. 14, 2016 posting about a previous ruling on copyright in Europe by Mike Masnick for Techdirt provides some insight into the possible future impacts on science research,

Last week [Sept. 8, 2016 posting], we wrote about a terrible copyright ruling from the Court of Justice of the EU, which basically says that any for-profit entity that links to infringing material can be held liable for direct infringement, as the “for-profit” nature of the work is seen as evidence that they knew or should have known the work was infringing. We discussed the problems with this standard in our post, and there’s been a lot of commentary on what this will mean for Europe — with a variety of viewpoints being expressed. One really interesting set of concerns comes from Egon Willighagen, from Maastricht University, noting what a total and complete mess this is going to be for scientists, who rarely consider the copyright status of various data as databases they rely on are built up …

This is, of course, not the first time we’ve noted the problems of intellectual property in the science world. From various journals locking up research to the rise of patents scaring off researchers from sharing data, intellectual property keeps getting in the way of science, rather than supporting it. And that’s extremely unfortunate. I mean, after all, in the US specifically, the Constitution specifically says that copyrights and patents are supposed to be about “promoting the progress of science and the useful arts.”

Over and over again, though, we see that the law has been twisted and distorted and extended and expanded in such a way that is designed to protect a very narrow set of interests, at the expense of many others, including the public who would benefit from greater sharing and collaboration and open flow of data among scientific researchers. …

Masnick has also written up a Sept. 14, 2016 posting devoted to the EU copyright proposal itself,

This is not a surprise given the earlier leaks of what the EU Commission was cooking up for a copyright reform package, but the end result is here and it’s a complete disaster for everyone. And I do mean everyone. Some will argue that it’s a gift to Hollywood and legacy copyright interests — and there’s an argument that that’s the case. But the reality is that this proposal is so bad that it will end up doing massive harm to everyone. It will clearly harm independent creators and the innovative platforms that they rely on. And, because those platforms have become so important to even the legacy entertainment industry, it will harm them too. And, worst of all, it will harm the public greatly. It’s difficult to see how this proposal will benefit anyone, other than maybe some lawyers.

So the EU Commission has taken the exact wrong approach. It’s one that’s almost entirely about looking backwards and “protecting” old ways of doing business, rather than looking forward, and looking at what benefits the public, creators and innovators the most. If this proposal actually gets traction, it will be a complete disaster for the EU innovative community. Hopefully, Europeans speak out, vocally, about what a complete disaster this would be.

So, according to Masnick not even business interests will benefit.

Robots, Dallas (US), ethics, and killing

I’ve waited a while before posting this piece in the hope that the situation would calm. Sadly, it took longer than hoped as there was an additional shooting incident of police officers in Baton Rouge on July 17, 2016. There’s more about that shooting in a July 18, 2016 news posting by Steve Visser for CNN.)

Finally, getting to the topic: in the wake of the Thursday, July 7, 2016 shooting in Dallas (Texas, US) and the subsequent use of a robot armed with a bomb to kill the suspect, a discussion about ethics has arisen.

This discussion comes at a difficult period. In the same week as the targeted shooting of white police officers in Dallas, two African-American males were shot and killed in two apparently unprovoked shootings by police. The victims were Alton Sterling in Baton Rouge, Louisiana on Tuesday, July 5, 2016 and Philando Castile in Minnesota on Wednesday, July 6, 2016. (There's more detail about the shootings prior to Dallas in a July 7, 2016 news item on CNN.) The suspect in Dallas, Micah Xavier Johnson, a 25-year-old African-American male, had served in the US Army Reserve and been deployed in Afghanistan (there's more in a July 9, 2016 news item by Emily Shapiro, Julia Jacobo, and Stephanie Wash for abcnews.go.com). All of this has taken place within the context of a movement started in 2013 in the US, Black Lives Matter.

Getting back to robots, most of the material I've seen about 'killing or killer' robots has so far involved industrial accidents (very few to date) and ethical issues for self-driving cars (see a May 31, 2016 posting by Noah J. Goodall on the IEEE [Institute of Electrical and Electronics Engineers] Spectrum website).

The incident in Dallas is apparently the first time a US police organization has used a robot to deliver a bomb, although it has been an occasional practice by US Armed Forces in combat situations. Rob Lever, in a July 8, 2016 Agence France-Presse piece on phys.org, focuses on the technology aspect,

The "bomb robot" killing of a suspected Dallas shooter may be the first lethal use of an automated device by American police, and underscores the growing role of technology in law enforcement.

Regardless of the methods in Dallas, the use of robots is expected to grow, to handle potentially dangerous missions in law enforcement and the military.


Researchers at Florida International University meanwhile have been working on a TeleBot that would allow disabled police officers to control a humanoid robot.

The robot, described in some reports as similar to the “RoboCop” in films from 1987 and 2014, was designed “to look intimidating and authoritative enough for citizens to obey the commands,” but with a “friendly appearance” that makes it “approachable to citizens of all ages,” according to a research paper.

Robot developers downplay the potential for the use of automated lethal force by the devices, but some analysts say debate on this is needed, both for policing and the military.

A July 9, 2016 Associated Press piece by Michael Liedtke and Bree Fowler on phys.org focuses more closely on ethical issues raised by the Dallas incident,

When Dallas police used a bomb-carrying robot to kill a sniper, they also kicked off an ethical debate about technology’s use as a crime-fighting weapon.

The strategy opens a new chapter in the escalating use of remote and semi-autonomous devices to fight crime and protect lives. It also raises new questions over when it’s appropriate to dispatch a robot to kill dangerous suspects instead of continuing to negotiate their surrender.

“If lethally equipped robots can be used in this situation, when else can they be used?” says Elizabeth Joh, a University of California at Davis law professor who has followed U.S. law enforcement’s use of technology. “Extreme emergencies shouldn’t define the scope of more ordinary situations where police may want to use robots that are capable of harm.”

In approaching the question of ethics, Mike Masnick's July 8, 2016 posting on Techdirt provides a surprisingly sympathetic reading of the Dallas Police Department's actions, as well as asking some provocative questions about how robots might be better employed by police organizations (Note: Links have been removed),

The Dallas Police have a long history of engaging in community policing designed to de-escalate situations, rather than encourage antagonism between police and the community, [and] have been handling all of this with astounding restraint, frankly. Many other police departments would be lashing out, and yet the Dallas Police Dept, while obviously grieving for a horrible situation, appear to be handling this tragic situation professionally. And it appears that they did everything they could in a reasonable manner. They first tried to negotiate with Johnson, but after that failed and they feared more lives would be lost, they went with the robot + bomb option. And, obviously, considering he had already shot many police officers, I don't think anyone would question the police justification if they had shot Johnson.

But, still, at the very least, the whole situation raises a lot of questions about the legality of police using a bomb offensively to blow someone up. And, it raises some serious questions about how other police departments might use this kind of technology in the future. The situation here appears to be one where people reasonably concluded that this was the most effective way to stop further bloodshed. And this is a police department with a strong track record of reasonable behavior. But what about other police departments where they don’t have that kind of history? What are the protocols for sending in a robot or drone to kill someone? Are there any rules at all?

Furthermore, it actually makes you wonder, why isn’t there a focus on using robots to de-escalate these situations? What if, instead of buying military surplus bomb robots, there were robots being designed to disarm a shooter, or detain him in a manner that would make it easier for the police to capture him alive? Why should the focus of remote robotic devices be to kill him? This isn’t faulting the Dallas Police Department for its actions last night. But, rather, if we’re going to enter the age of robocop, shouldn’t we be looking for ways to use such robotic devices in a manner that would help capture suspects alive, rather than dead?

Gordon Corera’s July 12, 2016 article on the BBC’s (British Broadcasting Corporation) news website provides an overview of the use of automation and of ‘killing/killer robots’,

Remote killing is not new in warfare. Technology has always been driven by military application, including allowing killing to be carried out at distance – prior examples might be the introduction of the longbow by the English at Crecy in 1346, then later the Nazi V1 and V2 rockets.

More recently, unmanned aerial vehicles (UAVs) or drones such as the Predator and the Reaper have been used by the US outside of traditional military battlefields.

Since 2009, the official US estimate is that about 2,500 “combatants” have been killed in 473 strikes, along with perhaps more than 100 non-combatants. Critics dispute those figures as being too low.

Back in 2008, I visited the Creech Air Force Base in the Nevada desert, where drones are flown from.

During our visit, the British pilots from the RAF deployed their weapons for the first time.

One of the pilots visibly bristled when I asked him if it ever felt like playing a video game – a question that many ask.

The military uses encrypted channels to control its ordnance disposal robots, but – as any hacker will tell you – there is almost always a flaw somewhere that a determined opponent can find and exploit.

We have already seen cars being taken control of remotely while people are driving them, and the nightmare of the future might be someone taking control of a robot and sending a weapon in the wrong direction.

The military is at the cutting edge of developing robotics, but domestic policing is also a different context in which greater separation from the community being policed risks compounding problems.

The balance between risks and benefits of robots, remote control and automation remain unclear.

But Dallas suggests that the future may be creeping up on us faster than we can debate it.

The excerpts here do not do justice to the articles. If you're interested in this topic and have the time, I encourage you to read all the articles cited here in their entirety.

*(ETA: July 25, 2016 at 1405 hours PDT: There is a July 25, 2016 essay by Carrie Sheffield for Salon.com which may provide some insight into the Black Lives Matter movement and some of the generational issues within the US African-American community as revealed by the movement.)*

Using copyright to shut down easy access to scientific research

This started out as a simple post on copyright and publishers vis-à-vis Sci-Hub but then John Dupuis wrote a think piece (with which I disagree somewhat) on the situation in a Feb. 22, 2016 posting on his blog, Confessions of a Science Librarian. More on Dupuis and my take on his piece after a description of the situation.

Sci-Hub

Before getting to the controversy and legal suit, here's a preamble about the purpose of copyright as per the US Constitution, from Mike Masnick's Feb. 17, 2016 posting on Techdirt,

Lots of people are aware of the Constitutional underpinnings of our copyright system. Article 1, Section 8, Clause 8 famously says that Congress has the following power:

To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.

We’ve argued at great length over the importance of the preamble of that section, “to promote the progress,” but many people are confused about the terms “science” and “useful arts.” In fact, many people not well-versed in the issue often get the two backwards and think that “science” refers to inventions, and thus enables a patent system, while “useful arts” refers to “artistic works” and thus enables the copyright system. The opposite is actually the case. “Science” at the time the Constitution was written was actually synonymous with “learning” and “education” (while “useful arts” was a term meaning invention and new productivity tools).

While over the centuries, many who stood to benefit from an aggressive system of copyright control have tried to rewrite, whitewash or simply ignore this history, turning the copyright system falsely into a “property” regime, the fact is that it was always intended as a system to encourage the wider dissemination of ideas for the purpose of education and learning. The (potentially misguided) intent appeared to be that by granting exclusive rights to a certain limited class of works, it would encourage the creation of those works, which would then be useful in educating the public (and within a few decades enter the public domain).

Masnick’s preamble leads to a case where Elsevier (Publishers) has attempted to halt the very successful Sci-Hub, which bills itself as “the first pirate website in the world to provide mass and public access to tens of millions of research papers.” From Masnick’s Feb. 17, 2016 posting,

Rightfully, this is being celebrated as a massive boon to science and learning, making these otherwise hidden nuggets of knowledge and science that were previously locked up and hidden away available to just about anyone. And, to be clear, this absolutely fits with the original intent of copyright law — which was to encourage such learning. In a very large number of cases, it is not the creators of this content and knowledge who want the information to be locked up. Many researchers and academics know that their research has much more of an impact the wider it is seen, read, shared and built upon. But the gatekeepers — such as Elsevier and other large academic publishers — have stepped in and demanded copyright, basically for doing very little.

They do not pay the researchers for their work. Often, in fact, that work is funded by taxpayer funds. In some cases, in certain fields, the publishers actually demand that the authors of these papers pay to submit them. The journals do not pay to review the papers either. They outsource that work to other academics for “peer review” — which again, is unpaid. Finally, these publishers profit massively, having convinced many universities that they need to subscribe, often paying many tens or even hundreds of thousands of dollars for subscriptions to journals that very few actually read.

Simon Oxenham of the Neurobonkers blog on the Big Think website wrote a Feb. 9 (?), 2016 post about Sci-Hub, its originator, and its current legal fight (Note: Links have been removed),

On September 5th, 2011, Alexandra Elbakyan, a researcher from Kazakhstan, created Sci-Hub, a website that bypasses journal paywalls, illegally providing access to nearly every scientific paper ever published immediately to anyone who wants it. …

This was a game changer. Before September 2011, there was no way for people to freely access paywalled research en masse; researchers like Elbakyan were out in the cold. Sci-Hub is the first website to offer this service and now makes the process as simple as the click of a single button.

As the number of papers in the LibGen database expands, the frequency with which Sci-Hub has to dip into publishers’ repositories falls and consequently the risk of Sci-Hub triggering its alarm bells becomes ever smaller. Elbakyan explains, “We have already downloaded most paywalled articles to the library … we have almost everything!” This may well be no exaggeration. Elsevier, one of the most prolific and controversial scientific publishers in the world, recently alleged in court that Sci-Hub is currently harvesting Elsevier content at a rate of thousands of papers per day. Elbakyan puts the number of papers downloaded from various publishers through Sci-Hub in the range of hundreds of thousands per day, delivered to a running total of over 19 million visitors.

In one fell swoop, a network has been created that likely has a greater level of access to science than any individual university, or even government for that matter, anywhere in the world. Sci-Hub represents the sum of countless different universities’ institutional access — literally a world of knowledge. This is important now more than ever in a world where even Harvard University can no longer afford to pay skyrocketing academic journal subscription fees, while Cornell axed many of its Elsevier subscriptions over a decade ago. For researchers outside the US’ and Western Europe’s richest institutions, routine piracy has long been the only way to conduct science, but increasingly the problem of unaffordable journals is coming closer to home.

… This was the experience of Elbakyan herself, who studied in Kazakhstan University and just like other students in countries where journal subscriptions are unaffordable for institutions, was forced to pirate research in order to complete her studies. Elbakyan told me, “Prices are very high, and that made it impossible to obtain papers by purchasing. You need to read many papers for research, and when each paper costs about 30 dollars, that is impossible.”

Sci-Hub is not expected to win its case in the US, where one judge has already ordered a preliminary injunction making its former domain unavailable (Sci-Hub moved). Should you be sympathetic to Elsevier, you may want to take this into account (Note: Links have been removed),

Elsevier is the world’s largest academic publisher and by far the most controversial. Over 15,000 researchers have vowed to boycott the publisher for charging “exorbitantly high prices” and bundling expensive, unwanted journals with essential journals, a practice that allegedly is bankrupting university libraries. Elsevier also supports SOPA and PIPA, which the researchers claim threatens to restrict the free exchange of information. Elsevier is perhaps most notorious for delivering takedown notices to academics, demanding them to take their own research published with Elsevier off websites like Academia.edu.

The movement against Elsevier has only gathered speed over the course of the last year with the resignation of 31 editorial board members from the Elsevier journal Lingua, who left in protest to set up their own open-access journal, Glossa. Now the battleground has moved from the comparatively niche field of linguistics to the far larger field of cognitive sciences. Last month, a petition of over 1,500 cognitive science researchers called on the editors of the Elsevier journal Cognition to demand Elsevier offer “fair open access”. Elsevier currently charges researchers $2,150 per article if researchers wish their work published in Cognition to be accessible by the public, a sum far higher than the charges that led to the Lingua mutiny.

In her letter to Sweet [New York District Court Judge Robert W. Sweet], Elbakyan made a point that will likely come as a shock to many outside the academic community: Researchers and universities don’t earn a single penny from the fees charged by publishers [emphasis mine] such as Elsevier for accepting their work, while Elsevier has an annual income over a billion U.S. dollars.

As Masnick noted, much of this research is done on the public dime (i.e., funded by taxpayers). For her part, Elbakyan has written a letter defending her actions on ethical rather than legal grounds.

I recommend reading the Oxenham article as it provides details about how the site works and includes text from the letter Elbakyan wrote. For those who don't have much time, Masnick's post offers a good précis.
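
For the technically curious, the mechanism Oxenham describes is a classic cache-first lookup: check the archive (LibGen) first and go out to a publisher only on a miss, storing whatever is fetched so that no paper ever has to be fetched twice. Here's a minimal sketch in Python with hypothetical names; it illustrates the pattern, not Sci-Hub's actual code.

    def fetch_from_publisher(doi: str) -> bytes:
        # Stand-in for the live retrieval step Oxenham describes; hypothetical.
        raise NotImplementedError("illustration only")

    def get_paper(doi: str, archive: dict) -> bytes:
        """Cache-first lookup: serve from the archive whenever possible."""
        if doi in archive:                 # hit: no publisher contact needed
            return archive[doi]
        paper = fetch_from_publisher(doi)  # miss: fetch once ...
        archive[doi] = paper               # ... then keep it for every later reader
        return paper

Every miss permanently shrinks the set of future misses, which is why the archive's coverage only grows and why Elbakyan can say "we have almost everything."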

Sci-Hub suit as a distraction from the real issues?

Getting to Dupuis’ Feb. 22, 2016 posting and his perspective on the situation,

My take? Mostly that it’s a sideshow.

One aspect that I have ranted about on Twitter which I think is worth mentioning explicitly is that I think Elsevier and all the other big publishers are actually quite happy to feed the social media rage machine with these whack-a-mole controversies. The controversies act as a sideshow, distracting from the real issues and solutions that they would prefer all of us not to think about.

By whack-a-mole controversies I mean this recurring story of some person or company or group that wants to “free” scholarly articles and then gets sued or harassed by the big publishers or their proxies to force them to shut down. This provokes wide outrage and condemnation aimed at the publishers, especially Elsevier who is reserved a special place in hell according to most advocates of openness (myself included).

In other words: Elsevier and its ilk are thrilled to be the target of all the outrage. Focusing on the whack-a-mole game distracts us from fixing the real problem: the entrenched systems of prestige, incentive and funding in academia. As long as researchers are channelled into “high impact” journals, as long as tenure committees reward publishing in closed rather than open venues, nothing will really change. Until funders get serious about mandating true open access publishing and are willing to put their money where their intentions are, nothing will change. Or at least, progress will be mostly limited to surface victories rather than systemic change.

I think Dupuis is referencing a conflict theory (I can’t remember what it’s called) which suggests that certain types of conflicts help to keep systems in place while apparently attacking those systems. His point is well made but I disagree somewhat in that I think these conflicts can also raise awareness and activate people who might otherwise ignore or mindlessly comply with those systems. So, if Elsevier and the other publishers are using these legal suits as diversionary tactics, they may find they’ve made a strategic error.

ETA April 29, 2016: Sci-Hub does seem to move around so I’ve updated the links so it can be accessed but Sci-Hub’s situation can change at any moment.

Copyright and patent protections and human rights

The United Nations (UN) and cultural rights don't immediately leap to mind when the subjects of copyright and patents are discussed. A Mar. 13, 2015 posting by Tim Cushing on Techdirt and an Oct. 14, 2015 posting by Glyn Moody, also on Techdirt, explain the connection in the person of Farida Shaheed, the UN Special Rapporteur on cultural rights and the author of two UN reports, one on copyright and one on patents.

From the Mar. 13, 2015 posting by Tim Cushing,

… Farida Shaheed, has just delivered a less-than-complimentary report on copyright to the UN’s Human Rights Council. Shaheed’s report actually examines where copyright meshes with arts and science — the two areas it’s supposed to support — and finds it runs contrary to the rosy image of incentivized creation perpetuated by the MPAAs and RIAAs of the world.

Shaheed said a “widely shared concern stems from the tendency for copyright protection to be strengthened with little consideration to human rights issues.” This is illustrated by trade negotiations conducted in secrecy, and with the participation of corporate entities, she said.

She stressed the fact that one of the key points of her report is that intellectual property rights are not human rights. “This equation is false and misleading,” she said.

The last statement fires shots over the bows of “moral rights” purveyors, as well as those who view infringement as a moral issue, rather than just a legal one.

Shaheed also points out that the protections being installed around the world at the behest of incumbent industries are not necessarily reflective of creators’ desires. …

Glyn Moody’s Oct. 14, 2015 posting features Shaheed’s latest report on patents,

… As the summary to her report puts it:

There is no human right to patent protection. The right to protection of moral and material interests cannot be used to defend patent laws that inadequately respect the right to participate in cultural life, to enjoy the benefits of scientific progress and its applications, to scientific freedoms and the right to food and health and the rights of indigenous peoples and local communities.

Patents, when properly structured, may expand the options and well-being of all people by making new possibilities available. Yet, they also give patent-holders the power to deny access to others, thereby limiting or denying the public’s right of participation to science and culture. The human rights perspective demands that patents do not extend so far as to interfere with individuals’ dignity and well-being. Where patent rights and human rights are in conflict, human rights must prevail.

The report touches on many issues previously discussed here on Techdirt. For example, how pharmaceutical patents limit access to medicines by those unable to afford the high prices monopolies allow — a particularly hot topic in the light of TPP’s rules on data exclusivity for biologics. The impact of patents on seed independence is considered, and there is a warning about corporate sovereignty chapters in trade agreements, and the chilling effects they can have on the regulatory function of states and their ability to legislate in the public interest — for example, with patent laws.

I have two Canadian examples covering data exclusivity and corporate sovereignty issues, both from Techdirt. There's an Oct. 19, 2015 posting by Glyn Moody featuring a recent Health Canada move to pressure a researcher into suppressing information from human clinical trials,

… one of the final sticking points of the TPP negotiations [Trans Pacific Partnership] was the issue of data exclusivity for the class of drugs known as biologics. We’ve pointed out that the very idea of giving any monopoly on what amounts to facts is fundamentally anti-science, but that’s a rather abstract way of looking at it. A recent case in Canada makes plain what data exclusivity means in practice. As reported by CBC [Canadian Broadcasting Corporation] News, it concerns unpublished clinical trial data about a popular morning sickness drug:

Dr. Navindra Persaud has been fighting for four years to get access to thousands of pages of drug industry documents being held by Health Canada.

He finally received the material a few weeks ago, but now he’s being prevented from revealing what he has discovered.

That’s because Health Canada required him to sign a confidentiality agreement, and has threatened him with legal action if he breaks it.

The clinical trials data is so secret that he’s been told that he must destroy the documents once he’s read them, and notify Health Canada in writing that he has done so….

For those who aren’t familiar with it, the Trans Pacific Partnership is a proposed trade agreement including 12 countries (Australia, Brunei Darussalam, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, United States, and Vietnam) from the Pacific Rim. If all the countries sign on (it looks as if they will; Canada’s new Prime Minister as of Oct. 19, 2015 seems to be in favour of the agreement although he has yet to make a definitive statement), the TPP will represent a trading block that is almost double the size of the European Union.

An Oct. 8, 2015 posting by Mike Masnick provides a description of corporate sovereignty and of the Eli Lilly suit against the Canadian government.

We’ve pointed out a few times in the past that while everyone refers to the Trans Pacific Partnership (TPP) agreement as a “free trade” agreement, the reality is that there’s very little in there that’s actually about free trade. If it were truly a free trade agreement, then there would be plenty of reasons to support it. But the details show it’s not, and yet, time and time again, we see people supporting the TPP because “well, free trade is good.” …
… it’s that “harmonizing regulatory regimes” thing where the real nastiness lies, and where you quickly discover that most of the key factors in the TPP are not at all about free trade, but the opposite. It’s about as protectionist as can be. That’s mainly because of the really nasty corprorate sovereignty clauses in the agreement (which are officially called “investor state dispute settlement” or ISDS in an attempt to make it sound so boring you’ll stop paying attention). Those clauses basically allow large incumbents to force the laws of countries to change to their will. Companies who feel that some country’s regulation somehow takes away “expected profits” can convene a tribunal, and force a country to change its laws. Yes, technically a tribunal can only issue monetary sanctions against a country, but countries who wish to avoid such monetary payments will change their laws.

Remember how Eli Lilly is demanding $500 million from Canada after Canada rejected some Eli Lilly patents, noting that the new compound didn’t actually do anything new and useful? Eli Lilly claims that using such a standard to reject patents unfairly attacks its expected future profits, and thus it can demand $500 million from Canadian taxpayers. Now, imagine that on all sorts of other systems.

Cultural rights, human rights, corporate rights. It would seem that corporate rights are going to run counter to human rights, if nothing else.