
Neural (brain) implants and hype (long read)

There was a big splash a few weeks ago when it was announced that Neuralink (an Elon Musk company) had surgically inserted its brain implant into its first human patient.

Getting approval

David Tuffley, senior lecturer in Applied Ethics & CyberSecurity at Griffith University (Australia), provides a good overview of the road Neuralink took to get FDA (US Food and Drug Administration) approval for human clinical trials in his May 29, 2023 essay for The Conversation, Note: Links have been removed,

Since its founding in 2016, Elon Musk’s neurotechnology company Neuralink has had the ambitious mission to build a next-generation brain implant with at least 100 times more brain connections than devices currently approved by the US Food and Drug Administration (FDA).

The company has now reached a significant milestone, having received FDA approval to begin human trials. So what were the issues keeping the technology in the pre-clinical trial phase for as long as it was? And have these concerns been addressed?

Neuralink is making a Class III medical device known as a brain-computer interface (BCI). The device connects the brain to an external computer via a Bluetooth signal, enabling continuous communication back and forth.

The device itself is a coin-sized unit called a Link. It’s implanted within a small disk-shaped cutout in the skull using a precision surgical robot. The robot splices a thousand tiny threads from the Link to certain neurons in the brain. [emphasis mine] Each thread is about a quarter the diameter of a human hair.

The company says the device could enable precise control of prosthetic limbs, giving amputees natural motor skills. It could revolutionise treatment for conditions such as Parkinson’s disease, epilepsy and spinal cord injuries. It also shows some promise for potential treatment of obesity, autism, depression, schizophrenia and tinnitus.

Several other neurotechnology companies and researchers have already developed BCI technologies that have helped people with limited mobility regain movement and complete daily tasks.

In February 2021, Musk said Neuralink was working with the FDA to secure permission to start initial human trials later that year. But human trials didn’t commence in 2021.

Then, in March 2022, Neuralink made a further application to the FDA to establish its readiness to begin human trials.

One year and three months later, on May 25 2023, Neuralink finally received FDA approval for its first human clinical trial. Given how hard Neuralink has pushed for permission to begin, we can assume it will begin very soon. [emphasis mine]

The approval has come less than six months after the US Office of the Inspector General launched an investigation into Neuralink over potential animal welfare violations. [emphasis mine]

In accessible language, Tuffley goes on to discuss the FDA’s specific technical issues with implants and how they were addressed in his May 29, 2023 essay.

More about how Neuralink’s implant works and some concerns

Canadian Broadcasting Corporation (CBC) journalist Andrew Chang offers an almost 13-minute video, “Neuralink brain chip’s first human patient. How does it work?” Chang is a little overenthused for my taste but he offers some good information about neural implants, along with informative graphics in his presentation.

So, as you can guess from the title of Chang’s CBC video, Tuffley was right about Neuralink moving quickly to human clinical trials.

Jennifer Korn announced that recruitment had started in her September 20, 2023 article for CNN (Cable News Network), Note: Links have been removed,

Elon Musk’s controversial biotechnology startup Neuralink opened up recruitment for its first human clinical trial Tuesday, according to a company blog.

After receiving approval from an independent review board, Neuralink is set to begin offering brain implants to paralysis patients as part of the PRIME Study, the company said. PRIME, short for Precise Robotically Implanted Brain-Computer Interface, is being carried out to evaluate both the safety and functionality of the implant.

Trial patients will have a chip surgically placed in the part of the brain that controls the intention to move. The chip, installed by a robot, will then record and send brain signals to an app, with the initial goal being “to grant people the ability to control a computer cursor or keyboard using their thoughts alone,” the company wrote.

Those with quadriplegia [sometimes known as tetraplegia] due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS) may qualify for the six-year-long study – 18 months of at-home and clinic visits followed by follow-up visits over five years. Interested people can sign up in the patient registry on Neuralink’s website.

Musk has been working on Neuralink’s goal of using implants to connect the human brain to a computer for five years, but the company so far has only tested on animals. The company also faced scrutiny after a monkey died in project testing in 2022 as part of efforts to get the animal to play Pong, one of the first video games.
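The phrase “control a computer cursor or keyboard using their thoughts alone” glosses over what the software actually has to do: translate patterns of neural activity into movement commands. Neuralink hasn’t published its decoding algorithms, so what follows is my own deliberately simplified, hypothetical sketch (in Python) of a linear decoder that maps spike counts from a handful of electrodes onto a two-dimensional cursor velocity; the weights and numbers are invented purely for illustration.

```python
import numpy as np

# Hypothetical sketch of a linear cursor decoder: recent spike counts from a
# handful of electrodes are mapped to a 2D cursor velocity. Real decoders are
# trained on calibration data and are far more sophisticated; the weights and
# spike counts here are invented for illustration only.

# Decoder weights: one row per cursor axis (x, y), one column per electrode.
# In practice these would be fit while the participant imagines movements.
decoder_weights = np.array([
    [0.8, -0.2, 0.1, 0.0, -0.5, 0.3],   # each electrode's contribution to x-velocity
    [0.1, 0.6, -0.4, 0.7, 0.0, -0.2],   # each electrode's contribution to y-velocity
])

cursor_position = np.zeros(2)
dt = 0.05  # decode every 50 milliseconds

# Simulated spike counts per electrode for a few consecutive time bins.
spike_count_bins = [
    np.array([5, 1, 0, 2, 1, 3]),
    np.array([6, 2, 1, 3, 0, 4]),
    np.array([2, 7, 0, 6, 1, 0]),
]

for counts in spike_count_bins:
    velocity = decoder_weights @ counts   # map neural activity to a velocity
    cursor_position += velocity * dt      # integrate velocity into a position
    print(f"velocity={velocity}, cursor at {np.round(cursor_position, 2)}")
```

In a real system, the decoder weights would be learned during a calibration session, with the participant imagining movements while the software works out which electrodes correlate with which direction.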

I mentioned three Reuters investigative journalists who were reporting on Neuralink’s animal abuse allegations (emphasized in Tuffley’s essay) in a July 7, 2023 posting, “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” Later that year, Neuralink was cleared by the US Department of Agriculture (see September 24, 2023 article by Mahnoor Jehangir for BNN Breaking).

Plus, Neuralink was being investigated over more allegations, this time regarding hazardous pathogens, according to a February 9, 2023 article by Rachel Levy for Reuters,

The U.S. Department of Transportation said on Thursday it is investigating Elon Musk’s brain-implant company Neuralink over the potentially illegal movement of hazardous pathogens.

A Department of Transportation spokesperson told Reuters about the probe after the Physicians Committee for Responsible Medicine (PCRM), an animal-welfare advocacy group, wrote to Secretary of Transportation Pete Buttigieg earlier on Thursday to alert it of records it obtained on the matter.

PCRM said it obtained emails and other documents that suggest unsafe packaging and movement of implants removed from the brains of monkeys. These implants may have carried infectious diseases in violation of federal law, PCRM said.

There’s an update about the hazardous materials in the next section. Spoiler alert: the company got fined.

Neuralink’s first human implant

A January 30, 2024 article (Associated Press with files from Reuters) on the Canadian Broadcasting Corporation’s (CBC) online news webspace heralded the latest about Neuralink’s human clinical trials,

The first human patient received an implant from Elon Musk’s computer-brain interface company Neuralink over the weekend, the billionaire says.

In a post Monday [January 29, 2024] on X, the platform formerly known as Twitter, Musk said that the patient received the implant the day prior and was “recovering well.” He added that “initial results show promising neuron spike detection.”

Spikes are activity by neurons, which the National Institutes of Health describe as cells that use electrical and chemical signals to send information around the brain and to the body.

The billionaire, who owns X and co-founded Neuralink, did not provide additional details about the patient.

When Neuralink announced in September [2023] that it would begin recruiting people, the company said it was searching for individuals with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis, commonly known as ALS or Lou Gehrig’s disease.

Neuralink reposted Musk’s Monday [January 29, 2024] post on X, but did not publish any additional statements acknowledging the human implant. The company did not immediately respond to requests for comment from The Associated Press or Reuters on Tuesday [January 30, 2024].

In a separate Monday [January 29, 2024] post on X, Musk said that the first Neuralink product is called “Telepathy” — which, he said, will enable users to control their phones or computers “just by thinking.” He said initial users would be those who have lost use of their limbs.

The startup’s PRIME Study is a trial for its wireless brain-computer interface to evaluate the safety of the implant and surgical robot.
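Musk’s “promising neuron spike detection” also deserves a little unpacking. A spike is a brief voltage excursion recorded by an electrode, and one long-standing way to detect spikes is to flag samples that rise several standard deviations above the baseline noise. Neuralink hasn’t published its signal-processing pipeline, so this short Python sketch is a hypothetical, stripped-down illustration of the threshold idea using simulated data.

```python
import numpy as np

# Hypothetical, simplified illustration of threshold-based spike detection.
# Neuralink has not published its signal-processing pipeline; real BCIs add
# filtering, spike sorting, and compression. All numbers here are invented.

rng = np.random.default_rng(seed=0)

# Simulate one second of a single-electrode recording at 20 kHz:
# background noise plus a few injected "spikes" (brief voltage deflections).
sampling_rate = 20_000
signal = rng.normal(0.0, 1.0, sampling_rate)   # baseline noise (arbitrary units)
true_spike_starts = [2_000, 7_500, 13_000, 18_200]
for start in true_spike_starts:
    signal[start:start + 20] += 8.0            # crude 1 ms spike waveform

# Common rule of thumb: flag excursions several standard deviations above baseline.
threshold = 5.0 * np.std(signal)
crossings = np.where(signal > threshold)[0]

# Collapse runs of consecutive crossings into single spike events.
detected = [int(crossings[0])]
for sample in crossings[1:]:
    if sample - detected[-1] > 20:
        detected.append(int(sample))

print(f"Detected {len(detected)} spikes near samples {detected}")
```

Real implants do considerably more than this (assigning spikes to individual neurons, compressing the data, and so on) before anything leaves the device.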

Now for the hazardous materials update, from the same January 30, 2024 article, Note: A link has been removed,

Earlier this month [January 2024], a Reuters investigation found that Neuralink was fined for violating U.S. Department of Transportation (DOT) rules regarding the movement of hazardous materials. During inspections of the company’s facilities in Texas and California in February 2023, DOT investigators found the company had failed to register itself as a transporter of hazardous material.

They also found improper packaging of hazardous waste, including the flammable liquid Xylene. Xylene can cause headaches, dizziness, confusion, loss of muscle co-ordination and even death, according to the U.S. Centers for Disease Control and Prevention.

The records do not say why Neuralink would need to transport hazardous materials or whether any harm resulted from the violations.

Skeptical thoughts about Elon Musk and Neuralink

Earlier this month (February 2024), the British Broadcasting Corporation (BBC) published an article by health reporters Jim Reed and Joe McFadden that highlights the history of brain implants and their possibilities, and notes some of Elon Musk’s more outrageous claims for Neuralink’s brain implants,

Elon Musk is no stranger to bold claims – from his plans to colonise Mars to his dreams of building transport links underneath our biggest cities. This week the world’s richest man said his Neuralink division had successfully implanted its first wireless brain chip into a human.

Is he right when he says this technology could – in the long term – save the human race itself?

Sticking electrodes into brain tissue is really nothing new.

In the 1960s and 70s electrical stimulation was used to trigger or suppress aggressive behaviour in cats. By the early 2000s monkeys were being trained to move a cursor around a computer screen using just their thoughts.

“It’s nothing novel, but implantable technology takes a long time to mature, and reach a stage where companies have all the pieces of the puzzle, and can really start to put them together,” says Anne Vanhoestenberghe, professor of active implantable medical devices, at King’s College London.

Neuralink is one of a growing number of companies and university departments attempting to refine and ultimately commercialise this technology. The focus, at least to start with, is on paralysis and the treatment of complex neurological conditions.

Reed and McFadden’s February 2024 BBC article describes a few of the other brain implant efforts, Note: Links have been removed,

One of its [Neuralink’s] main rivals, a start-up called Synchron backed by funding from investment firms controlled by Bill Gates and Jeff Bezos, has already implanted its stent-like device into 10 patients.

Back in December 2021, Philip O’Keefe, a 62-year-old Australian who lives with a form of motor neurone disease, composed the first tweet using just his thoughts to control a cursor.

And researchers at Lausanne University in Switzerland have shown it is possible for a paralysed man to walk again by implanting multiple devices to bypass damage caused by a cycling accident.

In a research paper published this year, they demonstrated a signal could be beamed down from a device in his brain to a second device implanted at the base of his spine, which could then trigger his limbs to move.

Some people living with spinal injuries are sceptical about the sudden interest in this new kind of technology.

“These breakthroughs get announced time and time again and don’t seem to be getting any further along,” says Glyn Hayes, who was paralysed in a motorbike accident in 2017, and now runs public affairs for the Spinal Injuries Association.

“If I could have anything back, it wouldn’t be the ability to walk. It would be putting more money into a way of removing nerve pain, for example, or ways to improve bowel, bladder and sexual function.” [emphasis mine]

Musk, however, is focused on something far more grand for Neuralink implants, from Reed and McFadden’s February 2024 BBC article, Note: A link has been removed,

But for Elon Musk, “solving” brain and spinal injuries is just the first step for Neuralink.

The longer-term goal is “human/AI symbiosis” [emphasis mine], something he describes as “species-level important”.

Musk himself has already talked about a future where his device could allow people to communicate with a phone or computer “faster than a speed typist or auctioneer”.

In the past, he has even said saving and replaying memories may be possible, although he recognised “this is sounding increasingly like a Black Mirror episode.”

One of the experts quoted in Reed and McFadden’s February 2024 BBC article asks a pointed question,

… “At the moment, I’m struggling to see an application that a consumer would benefit from, where they would take the risk of invasive surgery,” says Prof Vanhoestenberghe.

“You’ve got to ask yourself, would you risk brain surgery just to be able to order a pizza on your phone?”

Rae Hodge’s February 11, 2024 article for Salon about Elon Musk and his hyped-up Neuralink implant is worth reading in its entirety, but for those who don’t have the time or need a little persuading, here are a few excerpts, Note 1: This is a warning; Hodge provides more detail about the animal cruelty allegations; Note 2: Links have been removed,

Elon Musk’s controversial brain-computer interface (BCI) tech, Neuralink, has supposedly been implanted in its first recipient — and as much as I want to see progress for treatment of paralysis and neurodegenerative disease, I’m not celebrating. I bet the neuroscientists he reportedly drove out of the company aren’t either, especially not after seeing the gruesome torture of test monkeys and apparent cover-up that paved the way for this moment. 

All of which is an ethics horror show on its own. But the timing of Musk’s overhyped implant announcement gives it an additional insulting subtext. Football players are currently in a battle for their lives against concussion-based brain diseases that plague autopsy reports of former NFL players. And Musk’s boast of false hope came just two weeks before living players take the field in the biggest and most brutal game of the year. [2024 Super Bowl LVIII]

ESPN’s Kevin Seifert reports neuro-damage is up this year as “players suffered a total of 52 concussions from the start of training camp to the beginning of the regular season. The combined total of 213 preseason and regular season concussions was 14% higher than 2021 but within range of the three-year average from 2018 to 2020 (203).”

I’m a big fan of body-tech: pacemakers, 3D-printed hips and prosthetic limbs that allow you to wear your wedding ring again after 17 years. Same for brain chips. But BCI is the slow-moving front of body-tech development for good reason. The brain is too understudied. Consequences of the wrong move are dire. Overpromising marketable results on profit-driven timelines — on the backs of such a small community of researchers in a relatively new field — would be either idiotic or fiendish. 

Brown University’s research in the sector goes back to the 1990s. Since the emergence of a floodgate-opening 2002 study and the first implant in 2004 by med-tech company BrainGate, more promising results have inspired broader investment into careful research. But BrainGate’s clinical trials started back in 2009, and as noted by Business Insider’s Hilary Brueck, are expected to continue until 2038 — with only 15 participants who have devices installed. 

Anne Vanhoestenberghe is a professor of active implantable medical devices at King’s College London. In a recent release, she cautioned against the kind of hype peddled by Musk.

“Whilst there are a few other companies already using their devices in humans and the neuroscience community have made remarkable achievements with those devices, the potential benefits are still significantly limited by technology,” she said. “Developing and validating core technology for long term use in humans takes time and we need more investments to ensure we do the work that will underpin the next generation of BCIs.” 

Neuralink is a metal coin in your head that connects to something as flimsy as an app. And we’ve seen how Elon treats those. We’ve also seen corporate goons steal a veteran’s prosthetic legs — and companies turn brain surgeons and dentists into repo-men by having them yank anti-epilepsy chips out of people’s skulls, and dentures out of their mouths. 

“I think we have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury,” Musk said at a 2023 tech summit, adding that the chip could possibly “make up for whatever lost capacity somebody has.”

Maybe BCI can. But only in the careful hands of scientists who don’t have Musk squawking “go faster!” over their shoulders. His greedy frustration with the speed of BCI science is telling, as is the animal cruelty it reportedly prompted.

There have been other examples of Musk’s grandiosity. Notably, David Lee expressed skepticism about hyperloop in his August 13, 2013 article for BBC news online,

Is Elon Musk’s Hyperloop just a pipe dream?

Much like the pun in the headline, the bright idea of transporting people using some kind of vacuum-like tube is neither new nor imaginative.

There was Robert Goddard, considered the “father of modern rocket propulsion”, who claimed in 1909 that his vacuum system could suck passengers from Boston to New York at 1,200mph.

And then there were Soviet plans for an amphibious monorail – mooted in 1934 – in which two long pods would start their journey attached to a metal track before flying off the end and slipping into the water like a two-fingered Kit Kat dropped into some tea.

So ever since inventor and entrepreneur Elon Musk hit the world’s media with his plans for the Hyperloop, a healthy dose of scepticism has been in the air.

“This is by no means a new idea,” says Rod Muttram, formerly of Bombardier Transportation and Railtrack.

“It has been previously suggested as a possible transatlantic transport system. The only novel feature I see is the proposal to put the tubes above existing roads.”

Here’s the latest I’ve found on hyperloop, from the Hyperloop Wikipedia entry,

As of 2024, some companies continued to pursue technology development under the hyperloop moniker; however, one of the biggest, well-funded players, Hyperloop One, declared bankruptcy and ceased operations in 2023.[15]

Musk is impatient and impulsive as noted in a September 12, 2023 posting by Mike Masnick on Techdirt, Note: A link has been removed,

The Batshit Crazy Story Of The Day Elon Musk Decided To Personally Rip Servers Out Of A Sacramento Data Center

Back on Christmas Eve [December 24, 2022] of last year there were some reports that Elon Musk was in the process of shutting down Twitter’s Sacramento data center. In that article, a number of ex-Twitter employees were quoted about how much work it would be to do that cleanly, noting that there’s a ton of stuff hardcoded in Twitter code referring to that data center (hold that thought).

That same day, Elon tweeted out that he had “disconnected one of the more sensitive server racks.”

Masnick follows with a story of reckless behaviour from someone who should have known better.

Ethics of implants—where to look for more information

While Musk doesn’t use the term when he describes a “human/AI symbiosis” (presumably by way of a neural implant), he’s talking about a cyborg. Here’s a 2018 paper, which looks at some of the implications,

Do you want to be a cyborg? The moderating effect of ethics on neural implant acceptance by Eva Reinares-Lara, Cristina Olarte-Pascual, and Jorge Pelegrín-Borondo. Computers in Human Behavior, Volume 85, August 2018, Pages 43-53. DOI: https://doi.org/10.1016/j.chb.2018.03.032

This paper is open access.

Getting back to Neuralink, I have two blog posts that discuss the company and the ethics of brain implants from way back in 2021.

First, there’s Jazzy Benes’ March 1, 2021 posting on Santa Clara University’s Markkula Center for Applied Ethics blog. It stands out as it includes a discussion of the disabled community’s issues, Note: Links have been removed,

In the heart of Silicon Valley we are constantly enticed by the newest technological advances. With the big influencers Grimes [a Canadian musician and the mother of three children with Elon Musk] and Lil Uzi Vert publicly announcing their willingness to become experimental subjects for Elon Musk’s Neuralink brain implantation device, we are left wondering if future technology will actually give us “the knowledge of the Gods.” Is it part of the natural order for humans to become omniscient beings? Who will have access to the devices? What other ethical considerations must be discussed before releasing such technology to the public?

A significant issue that arises from developing technologies for the disabled community is the assumption that disabled persons desire the abilities of what some abled individuals may define as “normal.” Individuals with disabilities may object to technologies intended to make them fit an able-bodied norm. “Normal” is relative to each individual, and it could be potentially harmful to use a deficit view of disability, which means judging a disability as a deficiency. However, this is not to say that all disabled individuals will reject a technology that may enhance their abilities. Instead, I believe it is a consideration that must be recognized when developing technologies for the disabled community, and it can only be addressed through communication with disabled persons. As a result, I believe this is a conversation that must be had with the community for whom the technology is developed–disabled persons.

With technologies that aim to address disabilities, we walk a fine line between therapeutics and enhancement. Though not the first neural implant medical device, the Link may have been the first BCI system openly discussed for its potential transhumanism uses, such as “enhanced cognitive abilities, memory storage and retrieval, gaming, telepathy, and even symbiosis with machines.” …

Benes also discusses transhumanism, privacy issues, and consent issues. It’s a thoughtful reading experience.

Second is a July 9, 2021 posting by anonymous on the University of California at Berkeley School of Information blog which provides more insight into privacy and other issues associated with data collection (and introduced me to the concept of decisional interference),

As the development of microchips furthers and advances in neuroscience occur, the possibility for seamless brain-machine interfaces, where a device decodes inputs from the user’s brain to perform functions, becomes more of a reality. These various forms of these technologies already exist. However, technological advances have made implantable and portable devices possible. Imagine a future where humans don’t need to talk to each other, but rather can transmit their thoughts directly to another person. This idea is the eventual goal of Elon Musk, the founder of Neuralink. Currently, Neuralink is one of the main companies involved in the advancement of this type of technology. Analysis of the Neuralink’s technology and their overall mission statement provide an interesting insight into the future of this type of human-computer interface and the potential privacy and ethical concerns with this technology.

As this technology further develops, several privacy and ethical concerns come into question. To begin, using Solove’s Taxonomy as a privacy framework, many areas of potential harm are revealed. In the realm of information collection, there is much risk. Brain-computer interfaces, depending on where they are implanted, could have access to people’s most private thoughts and emotions. This information would need to be transmitted to another device for processing. The collection of this information by companies such as advertisers would represent a major breach of privacy. Additionally, there is risk to the user from information processing. These devices must work concurrently with other devices and often wirelessly. Given the widespread importance of cloud computing in much of today’s technology, offloading information from these devices to the cloud would be likely. Having the data stored in a database puts the user at the risk of secondary use if proper privacy policies are not implemented. The trove of information stored within the information collected from the brain is vast. These datasets could be combined with existing databases such as browsing history on Google to provide third parties with unimaginable context on individuals. Lastly, there is risk for information dissemination, more specifically, exposure. The information collected and processed by these devices would need to be stored digitally. Keeping such private information, even if anonymized, would be a huge potential for harm, as the contents of the information may in itself be re-identifiable to a specific individual. Lastly, there is risk for invasions such as decisional interference. Brain-machine interfaces would not only be able to read information in the brain but also write information. This would allow the device to make potential emotional changes in its users, which would be a major example of decisional interference. …

For the most recent Neuralink and brain implant ethics piece, there’s this February 14, 2024 essay on The Conversation, which, unusually for this publication, was solicited by the editors, Note: Links have been removed,

In January 2024, Musk announced that Neuralink implanted its first chip in a human subject’s brain. The Conversation reached out to two scholars at the University of Washington School of Medicine – Nancy Jecker, a bioethicist, and Andrew Ko, a neurosurgeon who implants brain chip devices – for their thoughts on the ethics of this new horizon in neuroscience.

Information about the implant, however, is scarce, aside from a brochure aimed at recruiting trial subjects. Neuralink did not register at ClinicalTrials.gov, as is customary, and required by some academic journals. [all emphases mine]

Some scientists are troubled by this lack of transparency. Sharing information about clinical trials is important because it helps other investigators learn about areas related to their research and can improve patient care. Academic journals can also be biased toward positive results, preventing researchers from learning from unsuccessful experiments.

Fellows at the Hastings Center, a bioethics think tank, have warned that Musk’s brand of “science by press release, while increasingly common, is not science. [emphases mine]” They advise against relying on someone with a huge financial stake in a research outcome to function as the sole source of information.

When scientific research is funded by government agencies or philanthropic groups, its aim is to promote the public good. Neuralink, on the other hand, embodies a private equity model [emphasis mine], which is becoming more common in science. Firms pooling funds from private investors to back science breakthroughs may strive to do good, but they also strive to maximize profits, which can conflict with patients’ best interests.

In 2022, the U.S. Department of Agriculture investigated animal cruelty at Neuralink, according to a Reuters report, after employees accused the company of rushing tests and botching procedures on test animals in a race for results. The agency’s inspection found no breaches, according to a letter from the USDA secretary to lawmakers, which Reuters reviewed. However, the secretary did note an “adverse surgical event” in 2019 that Neuralink had self-reported.

In a separate incident also reported by Reuters, the Department of Transportation fined Neuralink for violating rules about transporting hazardous materials, including a flammable liquid.

…the possibility that the device could be increasingly shown to be helpful for people with disabilities, but become unavailable due to loss of research funding. For patients whose access to a device is tied to a research study, the prospect of losing access after the study ends can be devastating. [emphasis mine] This raises thorny questions about whether it is ever ethical to provide early access to breakthrough medical interventions prior to their receiving full FDA approval.

Not registering a clinical trial would seem to suggest there won’t be much oversight. As for Musk’s “science by press release” activities, I hope those will be treated with more skepticism by mainstream media, although that seems unlikely given the current situation with journalism (more about that in a future post).

As for the issues associated with private equity models for science research and the problem of losing access to devices after a clinical trial is ended, my April 5, 2022 posting, “Going blind when your neural implant company flirts with bankruptcy (long read)” offers some cautionary tales, in addition to being the most comprehensive piece I’ve published on ethics and brain implants.

My July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” offers a brief overview of the international scene.

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s). *What follows the ‘non-human authors’ section is essentially a survey of the situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s predecessor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society, founder of the ASU Future of Being Human initiative, and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.
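The “zero-shot” and “few-shot” learning Ganguli mentions is easier to grasp with an example. Ornes’s article doesn’t include code, so the prompts below are my own hypothetical illustration; the point is simply that a zero-shot prompt poses the task cold, while a few-shot prompt supplies a handful of worked examples before the new case.

```python
# Hypothetical illustration of "zero-shot" vs. "few-shot" prompting.
# No real model is called here; the difference lies entirely in the text
# that would be sent to a large language model.

task = "Classify the sentiment of the review as positive or negative."

# Zero-shot: the model gets only the instruction and the new input.
zero_shot_prompt = f"""{task}

Review: "The implant surgery went smoothly and recovery was quick."
Sentiment:"""

# Few-shot: the same instruction, preceded by a handful of worked examples.
few_shot_prompt = f"""{task}

Review: "The battery died after two days."
Sentiment: negative

Review: "Setup took five minutes and everything just worked."
Sentiment: positive

Review: "The implant surgery went smoothly and recovery was quick."
Sentiment:"""

print("--- zero-shot ---")
print(zero_shot_prompt)
print("\n--- few-shot ---")
print(few_shot_prompt)
```

What surprised researchers, as Ornes describes, is that sufficiently large models handle many such tasks in the zero-shot setting, without any worked examples at all.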

If you have time, I recommend reading Ornes’s March 6, 2023 article.

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.
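Alberga’s one-sentence description of neural networks compresses a lot. The core idea is that a network adjusts its internal weights to reduce its error on example data. Here’s a tiny, hypothetical Python sketch of that loop using a single artificial neuron learning the logical OR function; the systems Hinton pioneered stack millions or billions of such units, but the adjust-weights-to-reduce-error principle is the same in spirit.

```python
import numpy as np

# Minimal sketch of "learning by analyzing data": a single artificial neuron
# learns the logical OR function by nudging its weights to reduce its error.
# Real deep networks stack millions of such units; the principle is similar.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([0, 1, 1, 1], dtype=float)                      # targets (logical OR)

rng = np.random.default_rng(seed=1)
weights = rng.normal(0.0, 0.1, size=2)
bias = 0.0
learning_rate = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    predictions = sigmoid(X @ weights + bias)          # forward pass
    error = predictions - y                            # how wrong is the neuron?
    weights -= learning_rate * (X.T @ error) / len(y)  # gradient step on weights
    bias -= learning_rate * error.mean()               # gradient step on bias

print("learned predictions:", np.round(sigmoid(X @ weights + bias), 2))
# Expected: values close to [0, 1, 1, 1]
```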

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10-minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis, in his May 4, 2023 business analysis for CBC news online, offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”

Hinton has attracted some criticism himself. Wilfred Chan, writing for Fast Company, has two articles; the first, “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts,” was published on May 5, 2023, Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?”; scroll down to the ‘Consciousness and ethical AI’ subhead.

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for the Study of Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture, “Digital versus biological intelligence: Reasons for concern about AI,” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer Susan Baxter, in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

If you live in Vancouver (Canada) and are attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 sponsored content piece by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme that produces images from text prompts. According to the company’s entry on Wikipedia, it is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context; his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023; it was far briefer than the March 2023 letter. From the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.

Kempner Institute for the Study of Natural and Artificial Intelligence launched at Harvard University and University of Manchester pushes the boundaries of smart robotics and AI

Before getting to the two news items, it might be a good idea to note that ‘artificial intelligence (AI)’ and ‘robot’ are not synonyms although they are often used that way, even by people who should know better. (sigh … I do it too)

A robot may or may not be animated with artificial intelligence while artificial intelligence algorithms may be installed on a variety of devices such as a phone or a computer or a thermostat or a … .

It’s something to bear in mind when reading about the two new institutions being launched. Now, on to Harvard University.

Kempner Institute for the Study of Natural and Artificial Intelligence

A September 23, 2022 Chan Zuckerberg Initiative (CZI) news release (also on EurekAlert) announces a symposium to launch a new institute close to Mark Zuckerberg’s heart,

On Thursday [September 22, 2022], leadership from the Chan Zuckerberg Initiative (CZI) and Harvard University celebrated the launch of the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University with a symposium on Harvard’s campus. Speakers included CZI Head of Science Stephen Quake, President of Harvard University Lawrence Bacow, Provost of Harvard University Alan Garber, and Kempner Institute co-directors Bernardo Sabatini and Sham Kakade. The event also included remarks and panels from industry leaders in science, technology, and artificial intelligence, including Bill Gates, Eric Schmidt, Andy Jassy, Daniel Huttenlocher, Sam Altman, Joelle Pineau, Sangeeta Bhatia, and Yann LeCun, among many others.

The Kempner Institute will seek to better understand the basis of intelligence in natural and artificial systems. Its bold premise is that the two fields are intimately interconnected; the next generation of AI will require the same principles that our brains use for fast, flexible natural reasoning, and understanding how our brains compute and reason requires theories developed for AI. The Kempner Institute will study AI systems, including artificial neural networks, to develop both principled theories [emphasis mine] and a practical understanding of how these systems operate and learn. It will also focus on research topics such as learning and memory, perception and sensation, brain function, and metaplasticity. The Institute will recruit and train future generations of researchers from undergraduates and graduate students to post-docs and faculty — actively recruiting from underrepresented groups at every stage of the pipeline — to study intelligence from biological, cognitive, engineering, and computational perspectives.

CZI Co-Founder and Co-CEO Mark Zuckerberg [chairman and chief executive officer of Meta/Facebook] said: “The Kempner Institute will be a one-of-a-kind institute for studying intelligence and hopefully one that helps us discover what intelligent systems really are, how they work, how they break and how to repair them. There’s a lot of exciting implications because once you understand how something is supposed to work and how to repair it once it breaks, you can apply that to the broader mission the Chan Zuckerberg Initiative has to empower scientists to help cure, prevent or manage all diseases.”

CZI Co-Founder and Co-CEO Priscilla Chan said: “Just attending this school meant the world to me. But to stand on this stage and to be able to give something back is truly a dream come true … All of this progress starts with building one fundamental thing: a Kempner community that’s diverse, multi-disciplinary and multi-generational, because incredible ideas can come from anyone. If you bring together people from all different disciplines to look at a problem and give them permission to articulate their perspective, you might start seeing insights or solutions in a whole different light. And those new perspectives lead to new insights and discoveries and generate new questions that can lead an entire field to blossom. So often, that momentum is what breaks the dam and tears down old orthodoxies, unleashing new floods of new ideas that allow us to progress together as a society.”

CZI Head of Science Stephen Quake said: “It’s an honor to partner with Harvard in building this extraordinary new resource for students and science. This is a once-in-a-generation moment for life sciences and medicine. We are living in such an extraordinary and exciting time for science. Many breakthrough discoveries are going to happen not only broadly but right here on this campus and at this institute.”

CZI’s 10-year vision is to advance research and develop technologies to observe, measure, and analyze any biological process within the human body — across spatial scales and in real time. CZI’s goal is to accelerate scientific progress by funding scientific research to advance entire fields; working closely with scientists and engineers at partner institutions like the Chan Zuckerberg Biohub and Chan Zuckerberg Institute for Advanced Biological Imaging to do the research that can’t be done in conventional environments; and building and democratizing next-generation software and hardware tools to drive biological insights and generate more accurate and biologically important sources of data.

President of Harvard University Lawrence Bacow said: “Here we are with this incredible opportunity that Priscilla Chan and Mark Zuckerberg have given us to imagine taking what we know about the brain, neuroscience and how to model intelligence and putting them together in ways that can inform both, and can truly advance our understanding of intelligence from multiple perspectives.”

Kempner Institute Co-Director and Gordon McKay Professor of Computer Science and of Statistics at the Harvard John A. Paulson School of Engineering and Applied Sciences Sham Kakade said: “Now we begin assembling a world-leading research and educational program at Harvard that collectively tries to understand the fundamental mechanisms of intelligence and seeks to apply these new technologies for the benefit of humanity … We hope to create a vibrant environment for all of us to engage in broader research questions … We want to train the next generation of leaders because those leaders will go on to do the next set of great things.”

Kempner Institute Co-Director and the Alice and Rodman W. Moorhead III Professor of Neurobiology at Harvard Medical School Bernardo Sabatini said: “We’re blending research, education and computation to nurture, raise up and enable any scientist who is interested in unraveling the mysteries of the brain. This field is a nascent and interdisciplinary one, so we’re going to have to teach neuroscience to computational biologists, who are going to have to teach machine learning to cognitive scientists and math to biologists. We’re going to do whatever is necessary to help each individual thrive and push the field forward … Success means we develop mathematical theories that explain how our brains compute and learn, and these theories should be specific enough to be testable and useful enough to start to explain diseases like schizophrenia, dyslexia or autism.”

About the Chan Zuckerberg Initiative

The Chan Zuckerberg Initiative was founded in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education, to addressing the needs of our communities. Through collaboration, providing resources and building technology, our mission is to help build a more inclusive, just and healthy future for everyone. For more information, please visit chanzuckerberg.com.

Principled theories, eh. I don’t see a single mention of ethicists or anyone from the social sciences, the humanities, or the arts. How are scientists and engineers who have no training or education in, or even an introduction to, ethics, social impacts, or psychology going to manage this?

Mark Zuckerberg’s approach to these issues was something along the lines of “it’s easier to ask for forgiveness than to ask for permission.” I understand there have been changes but it took far too long to recognize the damage, let alone attempt to address it.

If you want to gain a little more insight into the Kempner Institute, there’s a December 7, 2021 article by Alvin Powell announcing the institute for the Harvard Gazette,

The institute will be funded by a $500 million gift from Priscilla Chan and Mark Zuckerberg, which was announced Tuesday [December 7, 2021] by the Chan Zuckerberg Initiative. The gift will support 10 new faculty appointments, significant new computing infrastructure, and resources to allow students to flow between labs in pursuit of ideas and knowledge. The institute’s name honors Zuckerberg’s mother, Karen Kempner Zuckerberg, and her parents — Zuckerberg’s grandparents — Sidney and Gertrude Kempner. Chan and Zuckerberg have given generously to Harvard in the past, supporting students, faculty, and researchers in a range of areas, including around public service, literacy, and cures.

“The Kempner Institute at Harvard represents a remarkable opportunity to bring together approaches and expertise in biological and cognitive science with machine learning, statistics, and computer science to make real progress in understanding how the human brain works to improve how we address disease, create new therapies, and advance our understanding of the human body and the world more broadly,” said President Larry Bacow.

Q&A

Bernardo Sabatini and Sham Kakade [Institute co-directors]

GAZETTE: Tell me about the new institute. What is its main reason for being?

SABATINI: The institute is designed to take from two fields and bring them together, hopefully to create something that’s essentially new, though it’s been tried in a couple of places. Imagine that you have over here cognitive scientists and neurobiologists who study the human brain, including the basic biological mechanisms of intelligence and decision-making. And then over there, you have people from computer science, from mathematics and statistics, who study artificial intelligence systems. Those groups don’t talk to each other very much.

We want to recruit from both populations to fill in the middle and to create a new population, through education, through graduate programs, through funding programs — to grow from academic infancy — those equally versed in neuroscience and in AI systems, who can be leaders for the next generation.

Over the millions of years that vertebrates have been evolving, the human brain has developed specializations that are fundamental for learning and intelligence. We need to know what those are to understand their benefits and to ask whether they can make AI systems better. At the same time, as people who study AI and machine learning (ML) develop mathematical theories as to how those systems work and can say that a network of the following structure with the following properties learns by calculating the following function, then we can take those theories and ask, “Is that actually how the human brain works?”

KAKADE: There’s a question of why now? In the technological space, the advancements are remarkable even to me, as a researcher who knows how these things are being made. I think there’s a long way to go, but many of us feel that this is the right time to study intelligence more broadly. You might also ask: Why is this mission unique and why is this institute different from what’s being done in academia and in industry? Academia is good at putting out ideas. Industry is good at turning ideas into reality. We’re in a bit of a sweet spot. We have the scale to study approaches at a very different level: It’s not going to be just individual labs pursuing their own ideas. We may not be as big as the biggest companies, but we can work on the types of problems that they work on, such as having the compute resources to work on large language models. Industry has exciting research, but the spectrum of ideas produced is very different, because they have different objectives.

For the die-hards, there’s a September 23, 2022 article by Clea Simon in the Harvard Gazette, which updates the 2021 story.

Next, Manchester, England.

Manchester Centre for Robotics and AI

Robotots take a break at a lab at The University of Manchester – picture courtesy of Marketing Manchester [downloaded from https://www.manchester.ac.uk/discover/news/manchester-ai-summit-aims-to-attract-experts-in-advanced-engineering-and-robotics/]

A November 22, 2022 University of Manchester press release (also on EurekAlert) announces both a meeting and a new centre, Note: Links to the Centre have been retained; all others have been removed,

How humans and super smart robots will live and work together in the future will be among the key issues being scrutinised by experts at a new centre of excellence for AI and autonomous machines based at The University of Manchester.

The Manchester Centre for Robotics and AI will be a new specialist multi-disciplinary centre to explore developments in smart robotics through the lens of artificial intelligence (AI) and autonomous machinery.

The University of Manchester has built a modern reputation of excellence in AI and robotics, partly based on the legacy of pioneering thought leadership begun in this field in Manchester by legendary codebreaker Alan Turing.

Manchester’s new multi-disciplinary centre is home to world-leading research from across the academic disciplines – and this group will hold its first conference on Wednesday, Nov 23, at the University’s new engineering and materials facilities.

A  highlight will be a joint talk by robotics expert Dr Andy Weightman and theologian Dr Scott Midson which is expected to put a spotlight on ‘posthumanism’, a future world where humans won’t be the only highly intelligent decision-makers.

Dr Weightman, who researches home-based rehabilitation robotics for people with neurological impairment, and Dr Midson, who researches theological and philosophical critiques of posthumanism, will discuss how interdisciplinary research can help with the special challenges of rehabilitation robotics – and, ultimately, what it means to be human “in the face of the promises and challenges of human enhancement through robotic and autonomous machines”.

Other topics that the centre will have a focus on will include applications of robotics in extreme environments.

For the past decade, a specialist Manchester team led by Professor Barry Lennox has designed robots to work safely in nuclear decommissioning sites in the UK. A ground-breaking robot called Lyra that has been developed by Professor Lennox’s team – and recently deployed at the Dounreay site in Scotland, the “world’s deepest nuclear clean up site” – has been listed in Time Magazine’s Top 200 innovations of 2022.

Angelo Cangelosi, Professor of Machine Learning and Robotics at Manchester, said the University offers a world-leading position in the field of autonomous systems – a technology that will be an integral part of our future world. 

Professor Cangelosi, co-Director of Manchester’s Centre for Robotics and AI, said: “We are delighted to host our inaugural conference which will provide a special showcase for our diverse academic expertise to design robotics for a variety of real world applications.

“Our research and innovation team are at the interface between robotics, autonomy and AI – and their knowledge is drawn from across the University’s disciplines, including biological and medical sciences – as well the humanities and even theology. [emphases mine]

“This rich diversity offers Manchester a distinctive approach to designing robots and autonomous systems for real world applications, especially when combined with our novel use of AI-based knowledge.”

Delegates will have a chance to observe a series of robots and autonomous machines being demoed at the new conference.

The University of Manchester’s Centre for Robotics and AI will aim to: 

  • design control systems with a focus on bio-inspired solutions to mechatronics, eg the use of biomimetic sensors, actuators and robot platforms; 
  • develop new software engineering and AI methodologies for verification in autonomous systems, with the aim to design trustworthy autonomous systems; 
  • research human-robot interaction, with a pioneering focus on the use of brain-inspired approaches [emphasis mine] to robot control, learning and interaction; and 
  • research the ethics and human-centred robotics issues, for the understanding of the impact of the use of robots and autonomous systems with individuals and society. 

In some ways, the Kempner Institute and the Manchester Centre for Robotics and AI have very similar interests, especially where the brain is concerned. What fascinates me is the Manchester Centre’s inclusion of theologian Dr Scott Midson and the discussion (at its inaugural conference) of ‘posthumanism’. The difference is between actual engagement at an event (the centre) and a passing mention in a news release (the institute).

I wish the best for both institutions.

Overview of fusion energy scene

It’s funny how you think you know something and then realize you don’t. I’ve been hearing about cold fusion/fusion energy for years but never really understood what the term meant. So, this post includes an explanation, as well as an overview and a Cold Fusion Rap to ‘wrap’ it all up. (Sometimes I cannot resist a pun.)

Fusion energy explanation (1)

The Massachusetts Institute of Technology (MIT) has a Climate Portal where fusion energy is explained,

Fusion energy is the source of energy at the center of stars, including our own sun. Stars, like most of the universe, are made up of hydrogen, the simplest and most abundant element in the universe, created during the big bang. The center of a star is so hot and so dense that the immense pressure forces hydrogen atoms together. These atoms are forced together so strongly that they create new atoms entirely—helium atoms—and release a staggering amount of energy in the process. This energy is called fusion energy.

More energy than chemical energy

Fusion energy, like fossil fuels, is a form of stored energy. But fusion can create 20 to 100 million times more energy than the chemical reaction of a fossil fuel. Most of the mass of an atom, 99.9 percent, is contained at an atom’s center—inside of its nucleus. The ratio of this matter to the empty space in an atom is almost exactly the same ratio of how much energy you release when you manipulate the nucleus. In contrast, a chemical reaction, such as burning coal, rearranges the atoms through heat, but doesn’t alter the atoms themselves, so we don’t get as much energy.

Making fusion energy

For scientists, making fusion energy means recreating the conditions of stars, starting with plasma. Plasma is the fourth state of matter, after solids, liquids and gases. Ice is an example of a solid. When heated up, it becomes a liquid. Place that liquid in a pot on the stove, and it becomes a gas (steam). If you take that gas and continue to make it hotter, at around 10,000 degrees Fahrenheit (~6,000 Kelvin), it will change from a gas to the next phase of matter: plasma. Ninety-nine percent of the mass in the universe is in the plasma state, since almost the entire mass of the universe is in super hot stars that exist as plasma.

To make fusion energy, scientists must first build a steel chamber and create a vacuum, like in outer space. The next step is to add hydrogen gas. The gas particles are charged to produce an electric current and then surrounded and contained with an electromagnetic force; the hydrogen is now a plasma. This plasma is then heated to about 100 million degrees and fusion energy is released.
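For readers who like to check numbers such as the “20 to 100 million times” figure above, here is a rough, back-of-the-envelope sketch of my own (not MIT’s) comparing the energy released per kilogram of fuel by deuterium-tritium fusion with that from burning coal. The values are standard textbook figures (about 17.6 MeV released per D-T reaction, roughly 29 megajoules per kilogram for coal) and the result should be read as an order-of-magnitude estimate only.

# Back-of-the-envelope comparison: energy per kilogram of fuel,
# deuterium-tritium (D-T) fusion vs. burning coal.
# All values are assumed textbook figures; order-of-magnitude only.

MEV_TO_JOULES = 1.602e-13      # 1 MeV expressed in joules
AMU_TO_KG = 1.661e-27          # 1 atomic mass unit expressed in kilograms

ENERGY_PER_DT_REACTION_MEV = 17.6   # energy released by one D-T fusion reaction
FUEL_MASS_PER_REACTION_AMU = 5.0    # one deuteron (~2 amu) plus one triton (~3 amu)
COAL_ENERGY_J_PER_KG = 2.9e7        # ~29 MJ/kg, a typical value for coal combustion

# Energy released per kilogram of D-T fuel
fusion_j_per_kg = (ENERGY_PER_DT_REACTION_MEV * MEV_TO_JOULES) / (
    FUEL_MASS_PER_REACTION_AMU * AMU_TO_KG
)

ratio = fusion_j_per_kg / COAL_ENERGY_J_PER_KG

print(f"D-T fusion: ~{fusion_j_per_kg:.1e} J/kg")
print(f"Coal:       ~{COAL_ENERGY_J_PER_KG:.1e} J/kg")
print(f"Ratio:      ~{ratio:.1e}")

The sketch lands at a factor of roughly ten million, the same order of magnitude as MIT’s “20 to 100 million times” figure; the exact multiple depends on which fusion fuel and which fossil fuel you compare.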

Fusion energy explanation (2)

A Vancouver-based company, General Fusion, offers an explanation of how they have approached making fusion energy a reality,

How It Works: Plasma Injector Technology at General Fusion from General Fusion on Vimeo.

Following the announcement that a General Fusion demonstration plant would be built in the UK (see the June 17, 2021 General Fusion news release), there’s a more recent announcement about an agreement with the UK Atomic Energy Authority (UKAEA) to commercialize the technology. From an October 17, 2022 General Fusion news release,

Today [October 17, 2022], General Fusion and the UKAEA kick off projects to advance the commercialization of magnetized target fusion energy as part of an important collaborative agreement. With these unique projects, General Fusion will benefit from the vast experience of the UKAEA’s team. The results will hone the design of General Fusion’s demonstration machine being built at the Culham Campus, part of the thriving UK fusion cluster. Ultimately, the company expects the projects will support its efforts to provide low-cost and low-carbon energy to the electricity grid.

General Fusion’s approach to fusion maximizes the reapplication of existing industrialized technologies, bypassing the need for expensive superconducting magnets, significant new materials, or high-power lasers. The demonstration machine will create fusion conditions in a power-plant-relevant environment, confirming the performance and economics of the company’s technology.

“The leading-edge fusion researchers at UKAEA have proven experience building, commissioning, and successfully operating large fusion machines,” said Greg Twinney, Chief Executive Officer, General Fusion. “Partnering with UKAEA’s incredible team will fast-track work to advance our technology and achieve our mission of delivering affordable commercial fusion power to the world.”

“Fusion energy is one of the greatest scientific and engineering quests of our time,” said Ian Chapman, UKAEA CEO. “This collaboration will enable General Fusion to benefit from the ground-breaking research being done in the UK and supports our shared aims of making fusion part of the world’s future energy mix for generations to come.”

I last wrote about General Fusion in a November 3, 2021 posting about the company’s move (?) to Sea Island, Richmond,

I first wrote about General Fusion in a December 2, 2011 posting titled: Burnaby-based company (Canada) challenges fossil fuel consumption with nuclear fusion. (For those unfamiliar with the Vancouver area, there’s the city of Vancouver and there’s Metro Vancouver, which includes the city of Vancouver and other municipalities in the region. Burnaby is part of Metro Vancouver; General Fusion is moving to Sea Island (near Vancouver Airport), in Richmond, which is also in Metro Vancouver.) Kenneth Chan’s October 20, 2021 article for the Daily Hive gives more detail about General Fusion’s new facilities (Note: A link has been removed),

The new facility will span two buildings at 6020 and 6082 Russ Baker Way, near YVR’s [Vancouver Airport] South Terminal. This includes a larger building previously used for aircraft engine maintenance and repair.

The relocation process could start before the end of 2021, allowing the company to more than quadruple its workforce over the coming years. Currently, it employs about 140 people.

The Sea Island [in Richmond] facility will house its corporate offices, primary fusion technology development division, and many of its engineering laboratories. This new facility provides General Fusion with the ability to build a new demonstration prototype to support the commercialization of its magnetized target fusion technology.

As of the date of this posting, I have not been able to confirm the move. The company’s Contact webpage lists an address in Burnaby, BC for its headquarters.

The overview

Alex **Pasternack**, in an August 17, 2022 article (The frontrunners in the trillion-dollar race for limitless fusion power) **in Fast Company,** provides an overview of the international race with a very, very strong emphasis on the US scene (Note: Links have been removed),

With energy prices on the rise, along with demands for energy independence and an urgent need for carbon-free power, plans to walk away from nuclear energy are now being revised in Japan, South Korea, and even Germany. Last month, Europe announced green bonds for nuclear, and the U.S., thanks to the Inflation Reduction Act, will soon devote millions to new nuclear designs, incentives for nuclear production and domestic uranium mining, and, after years of paucity in funding, cash for fusion.

The new investment comes as fusion—long considered a pipe dream—has attracted real money from big venture capital and big companies, who are increasingly betting that abundant, cheap, clean nuclear will be a multi-trillion dollar industry. Last year, investors like Bill Gates and Jeff Bezos injected a record $3.4 billion into firms working on the technology, according to Pitchbook. One fusion firm, Seattle-based Helion, raised a record $500 million from Sam Altman and Peter Thiel. That money has certainly supercharged the nuclear sector: The Fusion Industry Association says that at least 33 different companies were now pursuing nuclear fusion, and predicted that fusion would be connected to the energy grid sometime in the 2030s.

… What’s not a joke is that we have about zero years to stop powering our civilization with earth-warming energy. The challenge with fusion is to achieve net energy gain, where the energy produced by a fusion reaction exceeds the energy used to make it. One milestone came quietly this month, when a team of researchers at the National Ignition Facility at Lawrence Livermore National Lab in California announced that an experiment last year had yielded over 1.3 megajoules (MJ) of energy, setting a new world record for energy yield for a nuclear fusion experiment. The experiment also achieved scientific ignition for the first time in history: after applying enough heat using an arsenal of lasers, the plasma became self-heating. (Researchers have since been trying to replicate the result, so far without success.)

On a growing campus an hour outside of Boston, the MIT spinoff Commonwealth Fusion Systems is building their first machine, SPARC, with a goal of producing power by 2025. “You’ll push a button,” CEO and cofounder Bob Mumgaard told the Khosla Ventures CEO Summit this summer, “and for the first time on earth you will make more power out than in from a fusion plasma. That’s about 200 million degrees—you know, cooling towers will have a bunch of steam go out of them—and you let your finger off the button and it will stop, and you push the button again and it will go.” With an explosion in funding from investors including Khosla, Bill Gates, George Soros, Emerson Collective and Google to name a few—they raised $1.8 billion last year alone—CFS hopes to start operating a prototype in 2025.

Like the three-decade-old ITER project in France, set for operation in 2025, Commonwealth and many other companies will try to reach net energy gain using a machine called a tokamak, a bagel-shaped device filled with super-hot plasma, heated to about 150 million degrees, within which hydrogen atoms can fuse and release energy. To control that hot plasma, you need to build a very powerful magnetic field. Commonwealth’s breakthrough was tape—specifically, a high-temperature-superconducting steel tape coated with a compound called yttrium-barium-copper oxide. When a prototype was first made commercially available in 2009, Dennis Whyte, director of MIT’s Plasma Science and Fusion Center, ordered as much as he could. With Mumgaard and a team of students, his lab used coils of the stuff to build a new kind of superconducting magnet, and a prototype reactor named ARC, after Tony Stark’s energy source. Commonwealth was born in 2015.

Southern California-based TAE Technologies has raised a whopping $1.2 billion since it was founded in 1998, and $250 million in its latest round. The round, announced in July, was led by Chevron’s venture arm, Google, and Sumitomo, a Tokyo-based holding company that aims to deploy fusion power in the Asia-Pacific market. TAE’s approach, which involves creating a fusion reaction at incredibly high heat, has a key advantage. Whereas ITER uses the hydrogen isotopes deuterium and tritium, an extremely rare element that must be specially created from lithium—and that produces as a byproduct radioactive-free neutrons—TAE’s linear reactor is completely non-radioactive, because it relies on hydrogen and boron, two abundant, naturally-occurring elements that react to produce only helium.

General Atomics, of San Diego, California, has the largest tokamak in the U.S. Its powerful magnetic chamber, called the DIII-D National Fusion Facility, or just “D-three-D,” now features a Toroidal Field Reversing Switch, which allows for the redirection of 120,000 amps of the current that power the primary magnetic field. It’s the only tokamak in the world that allows researchers to switch directions of the magnetic fields in minutes rather than hours. Another new upgrade, a traveling-wave antenna, allows physicists to inject high-powered “helicon” radio waves into DIII-D plasmas so fusion reactions occur much more powerfully and efficiently.

“We’ve got new tools for flexibility and new tools to help us figure out how to make that fusion plasma just keep going,” Richard Buttery, director of the project, told the San Diego Union-Tribune in January. The company is also behind eight of the magnet modules at the heart of the ITER facility, including its wild Central Solenoid — the world’s most powerful magnet — in a kind of scaled up version of the California machine.

But like an awful lot in fusion, ITER has been hampered by cost overruns and delays, with “first plasma” not expected to occur in 2025 as previously expected due to global pandemic-related disruptions. Some have complained that the money going to ITER has distracted from other more practical energy projects—the latest price tag is $22 billion—and others doubt if the project can ever produce net energy gain.

Based in Canada, General Fusion is backed by Jeff Bezos and building on technology originally developed by the U.S. Navy and explored by Russian scientists for potential use in weapons. Inside the machine, molten metal is spun to create a cavity, and pumped with pistons that push the metal inward to form a sphere. Hydrogen, heated to super-hot temperatures and held in place by a magnetic field, fills the sphere to create the reaction. Heat transferred to the metal can be turned into steam to drive a turbine and generate electricity. As former CEO Christofer Mowry told Fast Company last year, “to re-create a piece of the sun on Earth, as you can imagine, is very, very challenging.” Like many fusion companies, GF depends on modern supercomputers and advanced modeling and computational techniques to understand the science of plasma physics, as well as modern manufacturing technologies and materials.

“That’s really opened the door not just to being able to make fusion work but to make it work in a practical way,” Mowry said. This has been difficult to make work, but with a demonstration center it announced last year in Culham, England, GF isn’t aiming to generate electricity but to gather the data needed to later build a commercial pilot plant that could—and to generate more interest in fusion.

Magneto-Intertial Fusion Technologies, or MIFTI, of Tustin, Calif., founded by researchers from the University of California, Irvine, is developing a reactor that uses what’s known as a Staged Z-Pinch approach. A Z-Pinch design heats, confines, and compresses plasma using an intense, pulsed electrical current to generate a magnetic field that could reduce instabilities in the plasma, allowing fusion to persist for longer periods of time. But only recently have MIFTI’s scientists been able to overcome the instability problems, the company says, thanks to software made available to them at UC-Irvine by the U.S. Air Force. …

Princeton Fusion Systems of Plainsboro, New Jersey, is a small business focused on developing small, clean fusion reactors for both terrestrial and space applications. A spinoff of Princeton Satellite Systems, which specializes in spacecraft control, the company’s Princeton FRC reactor is built upon 15 years of research at the Princeton Plasma Physics Laboratory, funded primarily by the U.S. DOE and NASA, and is designed to eventually provide between 1 and 10 megawatts of power in off-grid locations and in modular power plants, “from remote industrial applications to emergency power after natural disasters to off-world bases on the moon or Mars.” The concept uses radio-frequency electromagnetic fields to generate and sustain a plasma formation called a Field-Reversed Configuration (FRC) inside a strong magnetic bottle. …

Tokamak Energy, a U.K.-based company named after the popular fusion device, announced in July that its ST-40 tokamak reactor had reached the 100 million Celsius threshold for commercially viable nuclear fusion. The achievement was made possible by a proprietary design built on a spherical, rather than donut, shape. This means that the magnets are closer to the plasma stream, allowing for smaller and cheaper magnets to create even stronger magnetic fields. …

Based in Pasadena, California, Helicity Space is developing a propulsion and power technology based on a specialized magneto inertial fusion concept. The system, a spin on what fellow fusion engineer, Seattle-based Helion is doing, appears to use twisted compression coils, like a braided rope, to achieve a known phenomenon called the Magnetic Helicity. … According to ZoomInfo and Linkedin, Helicity has over $4 million in funding and up to 10 employees, all aimed, the company says, at “enabling humanity’s access to the solar system, with a Helicity Drive-powered flight to Mars expected to take two months, without planetary alignment.”
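A quick note on the “net energy gain” mentioned earlier in the excerpt: it is usually expressed as a fusion gain factor Q, the ratio of fusion energy released to the energy delivered to the fuel, with Q greater than 1 marking break-even. As a rough sketch of my own (not Pasternack’s), the National Ignition Facility’s laser is widely reported to deliver roughly 1.9 megajoules to its target, so the record shot of over 1.3 megajoules described above works out to a target gain of about 0.7, which is why achieving “ignition” (a self-heating plasma) is not the same thing as achieving net energy gain.

# Rough fusion gain (Q) estimate for the NIF shot described in the excerpt.
# Q = fusion energy out / energy delivered to the target; Q > 1 is break-even.
# The input energy is an assumed, widely reported approximate figure for NIF's laser.

fusion_energy_out_mj = 1.3   # reported yield of the record shot, in megajoules
laser_energy_in_mj = 1.9     # approximate laser energy delivered to the target

q = fusion_energy_out_mj / laser_energy_in_mj
print(f"Target gain Q is roughly {q:.2f} (below 1, so not yet net energy gain)")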

ITER (International Thermonuclear Experimental Reactor), meaning “the way” or “the path” in Latin and mentioned in Pasternack’s article, dates its history with *fusion back to about 1978 when cold fusion was the ‘hot’ topic*. (You can read more here in the ITER Wikipedia entry.)

For more about the various approaches to fusion energy, Pasternack’s August 17, 2022 article (The frontrunners in the trillion-dollar race for limitless fusion power) provides details. I wish there had been a little more about efforts in Japan, South Korea, and other parts of the world. Pasternack’s singular focus on the US, with a little Canada and UK content seemingly thrown into the mix to provide an international flavour, seems a little myopic.

Fusion rap

In an August 30, 2022 Baba Brinkman announcement (received via email), which gave an extensive update on Brinkman’s activities, there was this,

And the other new topic, which was surprisingly fun to explore, is cold fusion also known as “Low Energy Nuclear Reactions” which you may or may not have a strong opinion about, but if you do I imagine you probably think the technology is either bunk or destined to save the world.

That makes for an interesting topic to explore in rap songs! And fortunately last month I had the pleasure of performing for the cream of the LENR crop at the 24th International Conference on Cold Fusion, including rap ups and two new songs about the field, one very celebratory (for the insiders), and one cautiously optimistic (as an outreach tool).

You can watch “Cold Fusion Renaissance” and “You Must LENR” [Low Energy Nuclear Reactions, or sometimes Lattice Enabled Nanoscale Reactions, or Cold Fusion, or CANR (Chemically Assisted Nuclear Reactions)] for yourself to determine which video is which, and also enjoy this article in Infinite Energy Magazine which chronicles my whole cold fusion rap saga.

Here’s one of the rap videos mentioned in Brinkman’s email,

Enjoy!

*December 13, 2022: Sentence changed from “ITER (International Thermonuclear Experimental Reactor), meaning “the way” or “the path” in Latin and mentioned in Pasternak’s article, dates its history with fusion back to about 1978 when cold fusion was the ‘hot’ topic.” to “ITER (International Thermonuclear Experimental Reactor), meaning “the way” or “the path” in Latin and mentioned in Pasternak’s article, dates its history with fusion back to about 1978 when cold fusion was the ‘hot’ topic.”

** ‘Pasternak’ corrected to ‘Pasternack’ and ‘in Fast Company’ added on December 29, 2022

Could CRISPR (clustered regularly interspaced short palindromic repeats) be weaponized?

On the occasion of an American team’s recent publication of research where they edited the germline (embryos), I produced a three-part series about CRISPR (clustered regularly interspaced short palindromic repeats), sometimes referred to as CRISPR/Cas9 (links offered at the end of this post).

Somewhere in my series, there’s a quote about how CRISPR could be used as a ‘weapon of mass destruction,’ and it seems this has been a hot topic for the last year or so, as James Revill, research fellow at the University of Sussex, notes in his August 31, 2017 essay on theconversation.com (h/t phys.org August 31, 2017 news item), Note: Links have been removed,

The gene editing technique CRISPR has been in the limelight after scientists reported they had used it to safely remove disease in human embryos for the first time. This follows a “CRISPR craze” over the last couple of years, with the number of academic publications on the topic growing steadily.

There are good reasons for the widespread attention to CRISPR. The technique allows scientists to “cut and paste” DNA more easily than in the past. It is being applied to a number of different peaceful areas, ranging from cancer therapies to the control of disease carrying insects.

Some of these applications – such as the engineering of mosquitoes to resist the parasite that causes malaria – effectively involve tinkering with ecosystems. CRISPR has therefore generated a number of ethical and safety concerns. Some also worry that applications being explored by defence organisations that involve “responsible innovation in gene editing” may send worrying signals to other states.

Concerns are also mounting that gene editing could be used in the development of biological weapons. In 2016, Bill Gates remarked that “the next epidemic could originate on the computer screen of a terrorist intent on using genetic engineering to create a synthetic version of the smallpox virus”. More recently, in July 2017, John Sotos, of Intel Health & Life Sciences, stated that gene editing research could “open up the potential for bioweapons of unimaginable destructive potential”.

An annual worldwide threat assessment report of the US intelligence community in February 2016 argued that the broad availability and low cost of the basic ingredients of technologies like CRISPR makes it particularly concerning.

A Feb. 11, 2016 news item on sciencemagazine.org offers a précis of some of the reactions while a February 9, 2016 article by Antonio Regalado for the Massachusetts Institute of Technology’s MIT Technology Review delves into the matter more deeply,

Genome editing is a weapon of mass destruction.

That’s according to James Clapper, [former] U.S. director of national intelligence, who on Tuesday, in the annual worldwide threat assessment report of the U.S. intelligence community, added gene editing to a list of threats posed by “weapons of mass destruction and proliferation.”

Gene editing refers to several novel ways to alter the DNA inside living cells. The most popular method, CRISPR, has been revolutionizing scientific research, leading to novel animals and crops, and is likely to power a new generation of gene treatments for serious diseases (see “Everything You Need to Know About CRISPR’s Monster Year”).

It is gene editing’s relative ease of use that worries the U.S. intelligence community, according to the assessment. “Given the broad distribution, low cost, and accelerated pace of development of this dual-use technology, its deliberate or unintentional misuse might lead to far-reaching economic and national security implications,” the report said.

The choice by the U.S. spy chief to call out gene editing as a potential weapon of mass destruction, or WMD, surprised some experts. It was the only biotechnology appearing in a tally of six more conventional threats, like North Korea’s suspected nuclear detonation on January 6 [2016], Syria’s undeclared chemical weapons, and new Russian cruise missiles that might violate an international treaty.

The report is an unclassified version of the “collective insights” of the Central Intelligence Agency, the National Security Agency, and half a dozen other U.S. spy and fact-gathering operations.

Although the report doesn’t mention CRISPR by name, Clapper clearly had the newest and the most versatile of the gene-editing systems in mind. The CRISPR technique’s low cost and relative ease of use—the basic ingredients can be bought online for $60—seems to have spooked intelligence agencies.

….

However, one has to be careful with the hype surrounding new technologies and, at present, the security implications of CRISPR are probably modest. There are easier, cruder methods of creating terror. CRISPR would only get aspiring biological terrorists so far. Other steps, such as growing and disseminating biological weapons agents, would typically be required for it to become an effective weapon. This would require additional skills and places CRISPR-based biological weapons beyond the reach of most terrorist groups. At least for the time being.

A July 5, 2016 opinion piece by Malcolm Dando for Nature argues for greater safeguards,

In Geneva next month [August 2016], officials will discuss updates to the global treaty that outlaws the use of biological weapons. The 1972 Biological Weapons Convention (BWC) was the first agreement to ban an entire class of weapons, and it remains a crucial instrument to stop scientific research on viruses, bacteria and toxins from being diverted into military programmes.

The BWC is the best route to ensure that nations take the biological-weapons threat seriously. Most countries have struggled to develop and introduce strong and effective national programmes — witness the difficulty the United States had in agreeing what oversight system should be applied to gain-of-function experiments that created more-dangerous lab-grown versions of common pathogens.

As scientific work advances — the CRISPR gene-editing system has been flagged as the latest example of possible dual-use technology — this treaty needs to be regularly updated. This is especially important because it has no formal verification system. Proposals for declarations, monitoring visits and inspections were vetoed by the United States in 2001, on the grounds that such verification threatened national security and confidential business information.

Even so, issues such as the possible dual-use threat from gene-editing systems will not be easily resolved. But we have to try. Without the involvement of the BWC, codes of conduct and oversight systems set up at national level are unlikely to be effective. The stakes are high, and after years of fumbling, we need strong international action to monitor and assess the threats from the new age of biological techniques.

Revill notes the latest BWC agreement and suggests future directions,

This convention is imperfect and lacks a way to ensure that states are compliant. Moreover, it has not been adequately “tended to” by its member states recently, with the last major meeting unable to agree a further programme of work. Yet it remains the cornerstone of an international regime against the hostile use of biology. All 178 state parties declared in December of 2016 their continued determination “to exclude completely the possibility of the use of (biological) weapons, and their conviction that such use would be repugnant to the conscience of humankind”.

These states therefore need to address the hostile potential of CRISPR. Moreover, they need to do so collectively. Unilateral national measures, such as reasonable biological security procedures, are important. However, preventing the hostile exploitation of CRISPR is not something that can be achieved by any single state acting alone.

As such, when states party to the convention meet later this year, it will be important to agree to a more systematic and regular review of science and technology. Such reviews can help with identifying and managing the security risks of technologies such as CRISPR, as well as allowing an international exchange of information on some of the potential benefits of such technologies.

Most states supported the principle of enhanced reviews of science and technology under the convention at the last major meeting. But they now need to seize the opportunity and agree on the practicalities of such reviews in order to prevent the convention being left behind by developments in science and technology.

Experts (military, intelligence, medical, etc.) are not the only ones concerned about CRISPR, according to a February 11, 2016 article by Sharon Begley for statnews.com (Note: A link has been removed),

Most Americans oppose using powerful new technology to alter the genes of unborn babies, according to a new poll — even to prevent serious inherited diseases.

They expressed the strongest disapproval for editing genes to create “designer babies” with enhanced intelligence or looks.

But the poll, conducted by STAT and Harvard T.H. Chan School of Public Health, found that people have mixed, and apparently not firm, views on emerging genetic techniques. US adults are almost evenly split on whether the federal government should fund research on editing genes before birth to keep children from developing diseases such as cystic fibrosis or Huntington’s disease.

“They’re not against scientists trying to improve [genome-editing] technologies,” said Robert Blendon, professor of health policy and political analysis at Harvard’s Chan School, perhaps because they recognize that one day there might be a compelling reason to use such technologies. An unexpected event, such as scientists “eliminating a terrible disease” that a child would have otherwise inherited, “could change people’s views in the years ahead,” Blendon said.

But for now, he added, “people are concerned about editing the genes of those who are yet unborn.”

A majority, however, wants government regulators to approve gene therapy to treat diseases in children and adults.

The STAT-Harvard poll comes as scientists and policy makers confront the ethical, social, and legal implications of these revolutionary tools for changing DNA. Thanks to a technique called CRISPR-Cas9, scientists can easily, and with increasing precision, modify genes through the genetic analog of a computer’s “find and replace” function.
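As a playful aside, that "find and replace" analogy can be made concrete with a toy bit of code. This is purely a minimal sketch of the analogy, not of how CRISPR-Cas9 actually works; the DNA, guide, and replacement sequences below are invented for illustration, and real gene editing involves guide RNAs, the Cas9 protein, and cellular repair machinery rather than string substitution.

```python
# Toy illustration of the "find and replace" analogy for CRISPR-Cas9.
# All sequences are made up for demonstration purposes only.

genome = "ATGGTACCTTGACGTACGGATCCTTGA"   # invented stretch of DNA
guide = "GACGTACG"                       # invented target ("find") sequence
edit = "GACTTACG"                        # invented replacement sequence

if guide in genome:
    edited_genome = genome.replace(guide, edit)
    print("Before:", genome)
    print("After: ", edited_genome)
else:
    print("Target sequence not found; no edit made.")
```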

I find it surprising that there's resistance to removing diseases found in the germline (embryos). When they were doing public consultations on nanotechnology, the one area where people tended to be quite open to research was health and medicine. Where food was concerned, however, people had far more concerns.

If you’re interested in the STAT-Harvard poll, you can find it here. As for James Revill, he has written a more substantive version of this essay as a paper, which is available here.

On a semi-related note, I found STAT (statnews.com) to be quite an interesting and accessibly written online health science journal. Here's more from the About Us page (Note: A link has been removed),

What’s STAT all about?
STAT is a national publication focused on finding and telling compelling stories about health, medicine, and scientific discovery. We produce daily news, investigative articles, and narrative projects in addition to multimedia features. We tell our stories from the places that matter to our readers — research labs, hospitals, executive suites, and political campaigns.

Why did you call it STAT?
In medical parlance, “stat” means important and urgent, and that’s what we’re all about — quickly and smartly delivering good stories. Read more about the origins of our name here.

Who’s behind the new publication?
STAT is produced by Boston Globe Media. Our headquarters is located in Boston but we have bureaus in Washington, New York, Cleveland, Atlanta, San Francisco, and Los Angeles. It was started by John Henry, the owner of Boston Globe Media and the principal owner of the Boston Red Sox. Rick Berke is executive editor.

So is STAT part of The Boston Globe?
They’re distinct properties but the two share content and complement one another.

Is it free?
Much of STAT is free. We also offer STAT Plus, a premium subscription plan that includes exclusive reporting about the pharmaceutical and biotech industries as well as other benefits. Learn more about it here.

Who’s working for STAT?
Some of the best-sourced science, health, and biotech journalists in the country, as well as motion graphics artists and data visualization specialists. Our team includes talented writers, editors, and producers capable of the kind of explanatory journalism that complicated science issues sometimes demand.

Who’s your audience?
You. Even if you don’t work in science, have never stepped foot in a hospital, or hated high school biology, we’ve got something for you. And for the lab scientists, health professionals, business leaders, and policy makers, we think you’ll find coverage here that interests you, too. The world of health, science, and medicine is booming and yielding fascinating stories. We explore how they affect us all.

….

As promised, here are the links to my three-part series on CRISPR,

Part 1 opens the series with a basic description of CRISPR and the germline research that occasioned the series along with some of the other (non-weapon) ethical issues and patent disputes that are arising from this new technology. CRISPR and editing the germline in the US (part 1 of 3): In the beginning

Part 2 covers three critical responses to the reporting, which between them describe the technology in more detail and the possibility of 'designer babies'. CRISPR and editing the germline in the US (part 2 of 3): 'designer babies'?

Part 3 is all about public discussion or, rather, the lack of it and the need for it, according to a couple of social scientists. Informally, there is some discussion via pop culture, as Joelle Renstrom notes, although she focuses on the larger issues touched on by the television series Orphan Black, and as I touch on in my final comments. CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

Finally, I hope to stumble across studies from other countries about how they are responding to the possibilities presented by CRISPR/Cas9 so that I can offer a more global perspective than this largely US perspective. At the very least, it would be interesting to find out if there are differences.

Canada and its Vancouver tech scene get a boost

Prime Minister Justin Trudeau has been running around attending tech events both in the Vancouver area (Canada) and in Seattle these last few days (May 17 and May 18, 2017). First he attended the Microsoft CEO Summit as noted in a May 11, 2017 news release from the Prime Minister’s Office (Note: I have a few comments about this performance and the Canadian tech scene at the end of this post),

The Prime Minister, Justin Trudeau, today [May 11, 2017] announced that he will participate in the Microsoft CEO Summit in Seattle, Washington, on May 17 and 18 [2017], to promote the Cascadia Innovation Corridor, encourage investment in the Canadian technology sector, and draw global talent to Canada.

This year’s summit, under the theme “The CEO Agenda: Navigating Change,” will bring together more than 150 chief executive officers. While at the Summit, Prime Minister Trudeau will showcase Budget 2017’s Innovation and Skills Plan and demonstrate how Canada is making it easier for Canadian entrepreneurs and innovators to turn their ideas into thriving businesses.

Prime Minister Trudeau will also meet with Washington Governor Jay Inslee.

Quote

“Canada’s greatest strength is its skilled, hard-working, creative, and diverse workforce. Canada is recognized as a world leader in research and development in many areas like artificial intelligence, quantum computing, and 3D programming. Our government will continue to help Canadian businesses grow and create good, well-paying middle class jobs in today’s high-tech economy.”
— Rt. Honourable Justin Trudeau, Prime Minister of Canada

Quick Facts

  • Canada-U.S. bilateral trade in goods and services reached approximately $882 billion in 2016.
  • Nearly 400,000 people and over $2 billion-worth of goods and services cross the Canada-U.S. border every day.
  • Canada-Washington bilateral trade was $19.8 billion in 2016. Some 223,300 jobs in the State of Washington depend on trade and investment with Canada. Canada is among Washington’s top export destinations.

Here’s a little more about the Microsoft meeting from a May 17, 2017 article by Alan Boyle for GeekWire.com (Note: Links have been removed),

So far, this year’s Microsoft CEO Summit has been all about Canadian Prime Minister Justin Trudeau’s talk today, but there’s been precious little information available about who else is attending – and Trudeau may be one of the big reasons why.

Microsoft co-founder Bill Gates created the annual summit back in 1997, to give global business leaders an opportunity to share their experiences and learn about new technologies that will have an impact on business in the future. The event’s attendee list is kept largely confidential, as is the substance of the discussions.

This year, Microsoft says the summit’s two themes are “trust in technology” (as in cybersecurity, international hacking, privacy and the flow of data) and “the race to space” (as in privately funded space efforts such as Amazon billionaire Jeff Bezos’ Blue Origin rocket venture).

Usually, Microsoft lists a few folks who are attending the summit on the company's Redmond campus, just to give a sense of the event's cachet. For example, last year's headliners included Berkshire Hathaway CEO Warren Buffett and Exxon Mobil CEO Rex Tillerson (who is now the Trump administration's secretary of state).

This year, however, the spotlight has fallen almost exclusively on the hunky 45-year-old Trudeau, the first sitting head of government or state to address the summit. Microsoft isn’t saying anything about the other 140-plus VIPs attending the discussions. “Out of respect for the privacy of our guests, we are not providing any additional information,” a Microsoft spokesperson told GeekWire via email.

Even Trudeau’s remarks at the summit are hush-hush, although officials say he’s talking up Canada’s tech sector.  …

Laura Kane’s May 18, 2017 article for therecord.com provides a little more information about Trudeau’s May 18, 2017 activities in Washington state,

Prime Minister Justin Trudeau continued his efforts to promote Canada’s technology sector to officials in Washington state on Thursday [May 18, 2017], meeting with Gov. Jay Inslee a day after attending the secretive Microsoft CEO Summit.

Trudeau and Inslee discussed, among other issues, the development of the Cascadia Innovation Corridor, an initiative that aims to strengthen technology industry ties between British Columbia and Washington.

The pair also spoke about trade and investment opportunities and innovation in the energy sector, said Trudeau’s office. In brief remarks before the meeting, the prime minister said Washington and Canada share a lot in common.

But protesters clad in yellow hazardous material suits that read “Keystone XL Toxic Cleanup Crew” gathered outside the hotel to criticize Trudeau’s environmental record, arguing his support of pipelines is at odds with any global warming promises he has made.

Later that afternoon, Trudeau visited Electronic Arts (a US games company with offices in the Vancouver area) for more tech talk as Stephanie Ip notes in her May 18, 2017 article for The Vancouver Sun,

Prime Minister Justin Trudeau was in Metro Vancouver Thursday [May 18, 2017] to learn from local tech and business leaders how the federal government can boost B.C.'s tech sector.

The roundtable discussion was organized by the Vancouver Economic Commission and hosted in Burnaby at Electronic Arts’ Capture Lab, where the video game company behind the popular FIFA, Madden and NHL franchises records human movement to add more realism to its digital characters. Representatives from Amazon, Launch Academy, Sony Pictures, Darkhorse 101 Pictures and Front Fundr were also there.

While the roundtable was not open to media, Trudeau met beforehand with media.

“We’re going to talk about how the government can be a better partner or better get out of your way in some cases to allow you to continue to grow, to succeed, to create great opportunities to allow innovation to advance success in Canada and to create good jobs for Canadians and draw in people from around the world and continue to lead the way in the world,” he said.

“Everything from clean tech, to bio-medical advances, to innovation in digital economy — there’s a lot of very, very exciting things going on”

Comments on the US tech sector and the supposed Canadian tech sector

I wonder at all the secrecy. As for the companies mentioned as being at the roundtable, you'll notice a preponderance of US companies, with Launch Academy and Front Fundr (which is not a tech company but an equity crowdfunding company) supplying the Canadian content. As for Darkhorse 101 Pictures, I strongly suspect (after an online search) it is part of Darkhorse Comics (a US company), which has an entertainment division.

Perhaps it didn’t seem worthwhile to mention the Canadian companies? In that case, that’s a sad reflection on how poorly we and our media support our tech sector.

In fact, it seems Trudeau's version of the Canadian technology sector is for us to continue in our role as a branch plant, remaining forever in service of the US economy, or at least the US tech sector, which may be experiencing some concerns with the Trump administration and its apparently increasingly isolationist perspective on trade and immigration. It's a perspective that the tech sector, especially the entertainment component, can ill afford.

As for the Cascadia Innovation Corridor mentioned in the Prime Minister’s news release and in Kane’s article, I have more about that in a Feb. 28, 2017 posting about the Cascadia Data Analytics Cooperative.

I noticed he mentioned clean tech as an area of excitement. Well, we just lost a significant player, not to the US this time but to the EU (European Union) or, more specifically, Germany. (There'll be more about that in an upcoming post.)

I’m glad to see that Trudeau remains interested in Canadian science and technology but perhaps he could concentrate on new ways of promoting sectoral health rather than relying on the same old thing.

New Wave and its non-shrimp shrimp

I received a news release from a start-up company, New Wave Foods, which specializes in creating plant-based seafood. The concept looks very interesting and sci-fi (Lois McMaster Bujold, and I'm sure others, has featured vat-grown meat and fish in her novels). Apparently, Google has already started using some of the New Wave product in its employee cafeteria. Here's more from the July 19, 2016 New Wave Foods news release,

New Wave Foods announced today that it has successfully opened a seed round aimed at developing seafood that is healthier for humans and the planet. Efficient Capacity kicked off the round and New Crop Capital provided additional funding.

New Wave Foods uses plant-based ingredients, such as red algae, to engineer new edible materials that replicate the taste and texture of fish and shellfish while improving their nutritional profiles. Its first product, which has already been served in Google’s cafeterias, will be a truly sustainable shrimp. Shrimp is the nation’s most popular seafood, currently representing more than a quarter of the four billion pounds of fish and shellfish consumed by Americans annually. For each pound of shrimp caught, up to 15 pounds of other animals, including endangered dolphins, turtles, and sharks, die.

The market for meat analogs is expected to surpass $5 billion by 2020, and savvy investors are increasingly taking notice. In recent years, millions in venture capital has flowed into plant-based alternatives to animal foods from large food processors and investors like Bill Gates and Li Ka-shing, Asia’s richest businessman.

“The astounding scale of our consumption of sea animals is decimating ocean ecosystems through overfishing, massive death through bycatch, water pollution, carbon emissions, derelict fishing gear, mangrove deforestation, and more,” said New Wave Foods co-founder and CEO Dominique Barnes. “Shrimping is also fraught with human rights abuses and slave labor, so we’re pleased to introduce a product that is better for people, the planet, and animals.”

Efficient Capacity is an investment fund that advises and invests in companies worldwide. Efficient Capacity partners have founded or co-founded more than ten companies and served as advisors or directors to dozens of others.

New Crop Capital is a specialized private venture capital fund that provides early-stage investments to companies that develop “clean,” (i.e., cultured) and plant-based meat, dairy, and egg products or facilitate the promotion and sale of such products.

The current round of investments follows investments from SOS Ventures via IndieBio, an accelerator group funding and building biotech startups. IndieBio companies use technology to solve our culture’s most challenging problems, such as feeding a growing population sustainably. Along with investment, IndieBio offers its startups resources such as lab space and mentorship to help take an idea to a product.

Along with its funding round, New Wave Foods announced the appointment of John Wiest as COO. Wiest brings more than 15 years of senior management experience in food and consumer products, including animal-based seafood companies, to the company. As an executive and consultant, Wiest has helped dozens of food ventures develop new products, expand distribution channels, and create strategic partnerships.

New Wave Foods, founded in 2015, is a leader in plant-based seafood that is healthier and better for the environment. New Wave products are high in clean nutrients and deliver a culinary experience consumers expect without the devastating environmental impact of commercial fishing. Co-founder and CEO Dominique Barnes holds a master’s in marine biodiversity and conservation from Scripps Institution of Oceanography, and co-founder and CTO Michelle Wolf holds a bachelor’s in materials science and engineering and a master’s in biomedical engineering. New Wave Foods’ first products will reach consumers as early as Q4 2016.

I found a February 5, 2016 review article about the plant-based shrimp written by Ariel Schwartz for Tech Insider (Note: A link has been removed),

… after trying a lab-made "shrimp" made of plant proteins and algae, I'd consider giving up the real thing. Maybe others will too.

The shrimp I ate came from New Wave Foods, a startup that just graduated from biotech startup accelerator IndieBio. When I first met New Wave’s founders in the fall of 2015, they had been working for eight weeks at IndieBio’s San Francisco lab. …

Barnes and Wolf [marine conservationist Dominique Barnes and materials scientist Michelle Wolf] ultimately figured out a way to use plant proteins, along with the same algae that shrimp eat — the stuff that helps give the crustaceans their color and flavor — to come up with a substitute that has a similar texture, taste, color, and nutritional value.

The fact that New Wave’s product has the same high protein, low fat content as real shrimp is a big source of differentiation from other shrimp substitutes, according to Barnes.

In early February, I finally tried a breaded version of New Wave’s shrimp. Here’s what it looked like:

[Image of the breaded New Wave Foods shrimp. Photo credit: Ariel Schwartz/Tech Insider]

It was a little hard to judge the taste because of the breading, but the texture was almost perfect. The lab-made shrimp had that springiness and mixture of crunch and chew that you’d expect from the real thing. I could see myself replacing real shrimp with this in some situations.

Whether it could replace shrimp all the time depends on how the product tastes without the breading. “Our ultimate goal is to get to the cocktail shrimp level,” says Barnes.

I’m glad to have stumbled across Ariel Schwartz again as I’ve always enjoyed her writing and it has been a few years.

For the curious, you can check out more of Ariel Schwartz's work here and find out more about Efficient Capacity in a listing on CrunchBase, New Crop Capital here, SOS Ventures here, IndieBio here, and, of course, New Wave Foods here.

One final comment, I am not endorsing this company or its products. This is presented as interesting information and, hopefully, I will be hearing more about the company and its products in the future.