The term they’re using in the Weizmann Institute of Science’s (Israel) announcement is “a genuinely accurate human embryo model.” This is in contrast to previous announcements, including the one from the University of Cambridge team highlighted in Part 1.
A research team headed by Prof. Jacob Hanna at the Weizmann Institute of Science has created complete models of human embryos from stem cells cultured in the lab—and managed to grow them outside the womb up to day 14. As reported today [September 6, 2023] in Nature, these synthetic embryo models had all the structures and compartments characteristic of this stage, including the placenta, yolk sac, chorionic sac and other external tissues that ensure the models’ dynamic and adequate growth.
Cellular aggregates derived from human stem cells in previous studies could not be considered genuinely accurate human embryo models, because they lacked nearly all the defining hallmarks of a post-implantation embryo. In particular, they failed to contain several cell types that are essential to the embryo’s development, such as those that form the placenta and the chorionic sac. In addition, they did not have the structural organization characteristic of the embryo and revealed no dynamic ability to progress to the next developmental stage.
Given their authentic complexity, the human embryo models obtained by Hanna’s group may provide an unprecedented opportunity to shed new light on the embryo’s mysterious beginnings. Little is known about the early embryo because it is so difficult to study, for both ethical and technical reasons, yet its initial stages are crucial to its future development. During these stages, the clump of cells that implants itself in the womb on the seventh day of its existence becomes, within three to four weeks, a well-structured embryo that already contains all the body organs.
“The drama is in the first month, the remaining eight months of pregnancy are mainly lots of growth,” Hanna says. “But that first month is still largely a black box. Our stem cell–derived human embryo model offers an ethical and accessible way of peering into this box. It closely mimics the development of a real human embryo, particularly the emergence of its exquisitely fine architecture.”
…
A stem cell–derived human embryo model at a developmental stage equivalent to that of a day 14 embryo. The model has all the compartments that define this stage: the yolk sac (yellow) and the part that will become the embryo itself, topped by the amnion (blue) – all enveloped by cells that will become the placenta (pink). Courtesy: Weizmann Institute of Science
Hanna’s team built on their previous experience in creating synthetic stem cell–based models of mouse embryos. As in that research, the scientists made no use of fertilized eggs or a womb. Rather, they started out with human cells known as pluripotent stem cells, which have the potential to differentiate into many, though not all, cell types. Some were derived from adult skin cells that had been reverted to “stemness.” Others were the progeny of human stem cell lines that had been cultured for years in the lab.
The researchers then used Hanna’s recently developed method to reprogram pluripotent stem cells so as to turn the clock further back: to revert these cells to an even earlier state – known as the naïve state – in which they are capable of becoming anything, that is, specializing into any type of cell. This stage corresponds to day 7 of the natural human embryo, around the time it implants itself in the womb. Hanna’s team had in fact been the first to start describing methods to generate human naïve stem cells, back in 2013; they continued to improve these methods, which stand at the heart of the current project, over the years.
The scientists divided the cells into three groups. The cells intended to develop into the embryo were left as is. The cells in each of the other groups were treated only with chemicals, without any need for genetic modification, so as to turn on certain genes, which was intended to cause these cells to differentiate toward one of three tissue types needed to sustain the embryo: placenta, yolk sac or the extraembryonic mesoderm membrane that ultimately creates the chorionic sac.
Soon after being mixed together under optimized, specifically developed conditions, the cells formed clumps, about 1 percent of which self-organized into complete embryo-like structures. “An embryo is self-driven by definition; we don’t need to tell it what to do – we must only unleash its internally encoded potential,” Hanna says. “It’s critical to mix in the right kinds of cells at the beginning, which can only be derived from naïve stem cells that have no developmental restrictions. Once you do that, the embryo-like model itself says, ‘Go!’”
The stem cell–based embryo-like structures (termed SEMs) developed normally outside the womb for 8 days, reaching a developmental stage equivalent to day 14 in human embryonic development. That’s the point at which natural embryos acquire the internal structures that enable them to proceed to the next stage: developing the progenitors of body organs.
Complete human embryo models match classic diagrams in terms of structure and cell identity
When the researchers compared the inner organization of their stem cell–derived embryo models with illustrations and microscopic anatomy sections in classical embryology atlases from the 1960s, they found an uncanny structural resemblance between the models and the natural human embryos at the corresponding stage. Every compartment and supporting structure was not only there, but in the right place, size and shape. Even the cells that make the hormone used in pregnancy testing were there and active: When the scientists applied secretions from these cells to a commercial pregnancy test, it came out positive.
In fact, the study has already produced a finding that may open a new direction of research into early pregnancy failure. The researchers discovered that if the embryo is not enveloped by placenta-forming cells in the right manner at day 3 of the protocol (corresponding to day 10 in natural embryonic development), its internal structures, such as the yolk sac, fail to properly develop.
“An embryo is not static. It must have the right cells in the right organization, and it must be able to progress – it’s about being and becoming,” Hanna says. “Our complete embryo models will help researchers address the most basic questions about what determines its proper growth.”
This ethical approach to unlocking the mysteries of the very first stages of embryonic development could open numerous research paths. It might help reveal the causes of many birth defects and types of infertility. It could also lead to new technologies for growing transplant tissues and organs. And it could offer a way around experiments that cannot be performed on live embryos – for example, determining the effects of exposure to drugs or other substances on fetal development.
Complete human day 14 post-implantation embryo models from naïve ES cells by Bernardo Oldak, Emilie Wildschutz, Vladyslav Bondarenko, Mehmet-Yunus Comar, Cheng Zhao, Alejandro Aguilera-Castrejon, Shadi Tarazi, Sergey Viukov, Thi Xuan Ai Pham, Shahd Ashouokhi, Dmitry Lokshtanov, Francesco Roncato, Eitan Ariel, Max Rose, Nir Livnat, Tom Shani, Carine Joubran, Roni Cohen, Yoseph Addadi, Muriel Chemla, Merav Kedmi, Hadas Keren-Shaul, Vincent Pasque, Sophie Petropoulos, Fredrik Lanner, Noa Novershtern & Jacob H. Hanna. Nature (2023) DOI: https://doi.org/10.1038/s41586-023-06604-5 Published: 06 September 2023
This paper is behind a paywall.
As for the question I asked in the headline, “what now?”, I have absolutely no idea.
Usually, there’s a rough chronological order to how I introduce the research, but this time I’m looking at the term used to describe it, following up with the various news releases and commentaries about the research, and finishing with a Canadian perspective.
After I wrote this post (but before it was published), the Weizmann Institute of Science (Israel) made its September 6, 2023 announcement and things changed a bit. That’s in Part two.
Say what you really mean (a terminology issue)
First, it might be useful to investigate the term ‘synthetic human embryos,’ as Julian Hitchcock does in his June 29, 2023 article on the Bristows website (h/t Mondaq’s July 5, 2023 news item), Note: Links have been removed,
“Synthetic Embryos” are neither Synthetic nor Embryos. So why are editors giving that name to stem cell-based models of human development?
One of the less convincing aspects of the last fortnight’s flurry of announcements about advances in simulating early human development (see here) concerned their name. Headlines galore (in newspapers and scientific journals) referred to “synthetic embryos”.
But embryo models, however impressive, are not embryos. To claim that the fundamental stages of embryo development that we learnt at school – fertilisation, cleavage and compaction – could now be bypassed to achieve the same result would be wrong. Nor are these objects “synthesised”: indeed, their interest to us lies in the ways in which they organise themselves. The researchers merely place the stem cells in a matrix in appropriate conditions, then stand back and watch them do it. Scientists were therefore unhappy about this use of the term in news media, and relieved when the International Society for Stem Cell Research (ISSCR) stepped in with a press release:
“Unlike some recent media reports describing this research, the ISSCR advises against using the term “synthetic embryo” to describe embryo models, because it is inaccurate and can create confusion. Integrated embryo models are neither synthetic nor embryos. While these models can replicate aspects of the early-stage development of human embryos, they cannot and will not develop to the equivalent of postnatal stage humans. Further, the ISSCR Guidelines prohibit the transfer of any embryo model to the uterus of a human or an animal.”
Although this was the ISSCR’s first attempt to put that position to the public, it had already made that recommendation to the research community two years previously. Its 2021 Guidelines for Stem Cell Research and Clinical Translation had recommended researchers to “promote accurate, current, balanced, and responsive public representations of stem cell research”. In particular:
“While organoids, chimeras, embryo models, and other stem cell-based models are useful research tools offering possibilities for further scientific progress, limitations on the current state of scientific knowledge and regulatory constraints must be clearly explained in any communications with the public or media. Suggestions that any of the current in vitro models can recapitulate an intact embryo, human sentience or integrated brain function are unfounded overstatements that should be avoided and contradicted with more precise characterizations of current understanding.”
Here, for context, is Julian Hitchcock’s background:
Diploma Medical School, University of Birmingham (1975-78)
LLB, University of Wolverhampton
Diploma in Intellectual Property Law & Practice, University of Bristol
Qualified 1998
Following an education in medicine at the University of Birmingham and a career as a BBC science producer, Julian has focused on the law and regulation of life science technologies since 1997, practising in England and Australia. He joined Bristows with Alex Denoon in 2018.
I have a lot of sympathy with the position of the science writers and editors incurring the scientists’ ire. First, why should journalists have known of the ISSCR’s recommendations on the use of the term “synthetic embryo”? A journalist who found Recommendation 4.1 of the ISSCR Guidelines would probably not have found them specific enough to address the point, and the academic introduction containing the missing detail is hard to find. …
My second reason for being sympathetic to the use of the terrible term is that no suitable alternative has been provided, other than in the Stem Cell Reports paper, which recommends the umbrella terms “embryo models” or “stem cell based embryo models”. …
When asked why she had used the term “synthetic embryo”, the journalist I contacted remarked that, “We’re still working out the right language and it’s something we’re discussing and will no doubt evolve along with the science”.
It is absolutely in the public’s interest (and in the interest of science), that scientific research is explained in terms that the public understands. There is, therefore, a need, I think, for the scientific community to supply a name to the media or endure the penalties of misinformation …
In such an intensely competitive field of research, disagreement among researchers, even as to names, is inevitable. In consequence, however, journalists and their audiences are confronted by a slew of terms which may or may not be synonymous or overlapping, with no agreed term [emphasis mine] for the overall class of stem cell based embryo models. We cannot blame them if they make up snappy titles of their own [emphasis mine]. …
The announcement
The earliest coverage I can find of the announcement at the International Society for Stem Cell Research meeting is Hannah Devlin’s June 14, 2023 article in The Guardian newspaper, Note: A link has been removed,
Scientists have created synthetic human embryos using stem cells, in a groundbreaking advance that sidesteps the need for eggs or sperm.
Scientists say these model embryos, which resemble those in the earliest stages of human development, could provide a crucial window on the impact of genetic disorders and the biological causes of recurrent miscarriage.
However, the work also raises serious ethical and legal issues as the lab-grown entities fall outside current legislation in the UK and most other countries.
The structures do not have a beating heart or the beginnings of a brain, but include cells that would typically go on to form the placenta, yolk sac and the embryo itself.
Prof Magdalena Żernicka-Goetz, of the University of Cambridge and the California Institute of Technology, described the work in a plenary address on Wednesday [June 14, 2023] at the International Society for Stem Cell Research’s annual meeting in Boston.
Two days later, this June 16, 2023 essay by Kathryn MacKay, Senior Lecturer in Bioethics, University of Sydney (Australia), appeared on The Conversation (h/t June 16, 2023 news item on phys.org), Note: Links have been removed,
Researchers have created synthetic human embryos using stem cells, according to media reports. Remarkably, these embryos have reportedly been created from embryonic stem cells, meaning they do not require sperm and ova.
This development, widely described as a breakthrough that could help scientists learn more about human development and genetic disorders, was revealed this week in Boston at the annual meeting of the International Society for Stem Cell Research.
The research, announced by Professor Magdalena Żernicka-Goetz of the University of Cambridge and the California Institute of Technology, has not yet been published in a peer-reviewed journal. But Żernicka-Goetz told the meeting these human-like embryos had been made by reprogramming human embryonic stem cells.
So what does all this mean for science, and what ethical issues does it present?
MacKay goes on to answer her own questions, from the June 16, 2023 essay, Note: A link has been removed,
…
One of these quandaries arises around whether their creation really gets us away from the use of human embryos.
Robin Lovell-Badge, the head of stem cell biology and developmental genetics at the Francis Crick Institute in London UK, reportedly said that if these human-like embryos can really model human development in the early stages of pregnancy, then we will not have to use human embryos for research.
At the moment, it is unclear if this is the case for two reasons.
First, the embryos were created from human embryonic stem cells, so it seems they do still need human embryos for their creation. Perhaps more light will be shed on this when Żernicka-Goetz’s research is published.
Second, there are questions about the extent to which these human-like embryos really can model human development.
…
Professor Magdalena Żernicka-Goetz’s research is published
Almost two weeks later, the research from the Cambridge team (there are other teams and countries also racing; see Part two for the news from Sept. 6, 2023) was published, from a June 27, 2023 news item on ScienceDaily,
Cambridge scientists have created a stem cell-derived model of the human embryo in the lab by reprogramming human stem cells. The breakthrough could help research into genetic disorders and in understanding why and how pregnancies fail.
Published today [Tuesday, June 27, 2023] in the journal Nature, this embryo model is an organised three-dimensional structure derived from pluripotent stem cells that replicate some developmental processes that occur in early human embryos.
Use of such models allows experimental modelling of embryonic development during the second week of pregnancy. They can help researchers gain basic knowledge of the developmental origins of organs and specialised cells such as sperm and eggs, and facilitate understanding of early pregnancy loss.
“Our human embryo-like model, created entirely from human stem cells, gives us access to the developing structure at a stage that is normally hidden from us due to the implantation of the tiny embryo into the mother’s womb,” said Professor Magdalena Zernicka-Goetz in the University of Cambridge’s Department of Physiology, Development and Neuroscience, who led the work.
She added: “This exciting development allows us to manipulate genes to understand their developmental roles in a model system. This will let us test the function of specific factors, which is difficult to do in the natural embryo.”
In natural human development, the second week of development is an important time when the embryo implants into the uterus. This is the time when many pregnancies are lost.
The new advance enables scientists to peer into the mysterious ‘black box’ period of human development – usually following implantation of the embryo in the uterus – to observe processes never directly observed before.
Understanding these early developmental processes holds the potential to reveal some of the causes of human birth defects and diseases, and to develop tests for these in pregnant women.
Until now, the processes could only be observed in animal models, using cells from zebrafish and mice, for example.
Legal restrictions in the UK currently prevent the culture of natural human embryos in the lab beyond day 14 of development: this time limit was set to correspond to the stage where the embryo can no longer form a twin. [emphasis mine]
Until now, scientists have only been able to study this period of human development using donated human embryos. This advance could reduce the need for donated human embryos in research.
Zernicka-Goetz says that while these models can mimic aspects of the development of human embryos, they cannot and will not develop to the equivalent of postnatal stage humans.
Over the past decade, Zernicka-Goetz’s group in Cambridge has been studying the earliest stages of pregnancy, in order to understand why some pregnancies fail and some succeed.
In 2021 and then in 2022 her team announced in Developmental Cell, Nature and Cell Stem Cell journals that they had finally created model embryos from mouse stem cells that can develop to form a brain-like structure, a beating heart, and the foundations of all other organs of the body.
The new models derived from human stem cells do not have a brain or beating heart, but they include cells that would typically go on to form the embryo, placenta and yolk sac, and develop to form the precursors of germ cells (that will form sperm and eggs).
Many pregnancies fail at the point when these three types of cells, which orchestrate implantation into the uterus, begin to send mechanical and chemical signals to each other, which tell the embryo how to develop properly.
There are clear regulations governing stem cell-based models of human embryos and all researchers doing embryo modelling work must first be approved by ethics committees. Journals require proof of this ethics review before they accept scientific papers for publication. Zernicka-Goetz’s laboratory holds these approvals.
“It is against the law and FDA regulations to transfer any embryo-like models into a woman for reproductive aims. These are highly manipulated human cells and their attempted reproductive use would be extremely dangerous,” said Dr Insoo Hyun, Director of the Center for Life Sciences and Public Learning at Boston’s Museum of Science and a member of Harvard Medical School’s Center for Bioethics.
Zernicka-Goetz also holds a position at the California Institute of Technology and is a NOMIS Distinguished Scientist and Scholar Awardee.
The research was funded by the Wellcome Trust and Open Philanthropy.
(There’s more about legal concerns further down in this post.)
Here’s a link to and a citation for the paper,
Pluripotent stem cell-derived model of the post-implantation human embryo by Bailey A. T. Weatherbee, Carlos W. Gantner, Lisa K. Iwamoto-Stohl, Riza M. Daza, Nobuhiko Hamazaki, Jay Shendure & Magdalena Zernicka-Goetz. Nature (2023) DOI: https://doi.org/10.1038/s41586-023-06368-y Published: 27 June 2023
This paper is open access.
Published the same day (June 27, 2023) is a paper (citation and link follow) also focused on studying human embryonic development using stem cells. First, there’s this from the Abstract,
Investigating human development is a substantial scientific challenge due to the technical and ethical limitations of working with embryonic samples. In the face of these difficulties, stem cells have provided an alternative to experimentally model inaccessible stages of human development in vitro …
This time the work is from a US/German team,
Self-patterning of human stem cells into post-implantation lineages by Monique Pedroza, Seher Ipek Gassaloglu, Nicolas Dias, Liangwen Zhong, Tien-Chi Jason Hou, Helene Kretzmer, Zachary D. Smith & Berna Sozen. Nature (2023) DOI: https://doi.org/10.1038/s41586-023-06354-4 Published: 27 June 2023
The paper is open access.
Legal concerns and a Canadian focus
A July 25, 2023 essay by Françoise Baylis and Jocelyn Downie of Dalhousie University (Nova Scotia, Canada) for The Conversation (h/t July 25, 2023 article on phys.org) covers the advantages of doing this work before launching into a discussion of legislation and limits in the UK and, more extensively, in Canada, Note: Links have been removed,
…
This research could increase our understanding of human development and genetic disorders, help us learn how to prevent early miscarriages, lead to improvements in fertility treatment, and — perhaps — eventually allow for reproduction without using sperm and eggs.
Synthetic human embryos — also called embryoid bodies, embryo-like structures or embryo models — mimic the development of “natural human embryos,” those created by fertilization. Synthetic human embryos include the “cells that would typically go on to form the embryo, placenta and yolk sac, and develop to form the precursors of germ cells (that will form sperm and eggs).”
Though research involving natural human embryos is legal in many jurisdictions, it remains controversial. For some people, research involving synthetic human embryos is less controversial because these embryos cannot “develop to the equivalent of postnatal stage humans.” In other words, these embryos are non-viable and cannot result in live births.
…
Now, for a closer look at the legalities in the UK and in Canada, from the July 25, 2023 essay, Note: Links have been removed,
The research presented by Żernicka-Goetz at the ISSCR meeting took place in the United Kingdom. It was conducted in accordance with the Human Fertilization and Embryology Act, 1990, with the approval of the U.K. Stem Cell Bank Steering Committee.
U.K. law limits the research use of human embryos to 14 days of development. An embryo is defined as “a live human embryo where fertilisation is complete, and references to an embryo include an egg in the process of fertilisation.”
Synthetic embryos are not created by fertilization and therefore, by definition, the 14-day limit on human embryo research does not apply to them. This means that synthetic human embryo research beyond 14 days can proceed in the U.K.
The door to the touted potential benefits — and ethical controversies — seems wide open in the U.K.
…
While the law in the U.K. does not apply to synthetic human embryos, the law in Canada clearly does. This is because the legal definition of an embryo in Canada is not limited to embryos created by fertilization [emphasis mine].
The Assisted Human Reproduction Act (the AHR Act) defines an embryo as “a human organism during the first 56 days of its development following fertilization or creation, excluding any time during which its development has been suspended.”
Based on this definition, the AHR Act applies to embryos created by reprogramming human embryonic stem cells — in other words, synthetic human embryos — provided such embryos qualify as human organisms.
A synthetic human embryo is a human organism. It is of the species Homo sapiens, and is thus human. It also qualifies as an organism — a life form — alongside other organisms created by means of fertilization, asexual reproduction, parthenogenesis or cloning.
…
Given that the AHR Act applies to synthetic human embryos, there are legal limits on their creation and use in Canada.
First, human embryos — including synthetic human embryos – can only be created for the purposes of “creating a human being, improving or providing instruction in assisted reproduction procedures.”
Given the state of the science, it follows that synthetic human embryos could legally be created for the purpose of improving assisted reproduction procedures.
Second, “spare” or “excess” human embryos — including synthetic human embryos — originally created for one of the permitted purposes, but no longer wanted for this purpose, can be used for research. This research must be done in accordance with the consent regulations which specify that consent must be for a “specific research project.”
Finally, all research involving human embryos — including synthetic human embryos — is subject to the 14-day rule. The law stipulates that: “No person shall knowingly… maintain an embryo outside the body of a female person after the fourteenth day of its development following fertilization or creation, excluding any time during which its development has been suspended.”
Putting this all together, the creation of synthetic embryos for improving assisted human reproduction procedures is permitted, as is research using “spare” or “excess” synthetic embryos originally created for this purpose — provided there is specific consent and the research does not exceed 14 days.
This means that while synthetic human embryos may be useful for limited research on pre-implantation embryo development, they are not available in Canada for research on post-implantation embryo development beyond 14 days.
The authors close with this comment about the prospects for expanding Canada’s 14-day limit, from the July 25, 2023 essay,
… any argument will have to overcome the political reality that the federal government is unlikely to open up the Pandora’s box of amending the AHR Act.
It therefore seems likely that synthetic human embryo research will remain limited in Canada for the foreseeable future.
As mentioned, in September 2023 there was a new development. See: Part two.
This paper on ethics (aside: I have a few comments after the news release and citation) comes from the US Pacific Northwest National Laboratory (PNNL) according to a July 12, 2023 news item on phys.org,
Prosthetics moved by thoughts. Targeted treatments for aggressive brain cancer. Soldiers with enhanced vision or bionic ears. These powerful technologies sound like science fiction, but they’re becoming possible thanks to nanoparticles.
“In medicine and other biological settings, nanotechnology is amazing and helpful, but it could be harmful if used improperly,” said Pacific Northwest National Laboratory (PNNL) chemist Ashley Bradley, part of a team of researchers who conducted a comprehensive survey of nanobiotechnology applications and policies.
Their research, available in Health Security, works to sum up the very large, active field of nanotechnology in biology applications, draw attention to regulatory gaps, and offer areas for further consideration.
“In our research, we learned there aren’t many global regulations yet,” said Bradley. “And we need to create a common set of rules to figure out the ethical boundaries.”
Nanoparticles, big differences
Nanoparticles are clusters of molecules with different properties than large amounts of the same substances. In medicine and other biology applications, these properties allow nanoparticles to act as the packaging that delivers treatments through cell walls and the difficult to cross blood-brain barrier.
“You can think of the nanoparticles a little bit like the plastic around shredded cheese,” said PNNL chemist Kristin Omberg. “It makes it possible to get something perishable directly where you want it, but afterwards you’ve got to deal with a whole lot of substance where it wasn’t before.”
Unfortunately, dealing with nanoparticles in new places isn’t straightforward. Carbon is pencil lead, nano carbon conducts electricity. The same material may have different properties at the nanoscale, but most countries still regulate it the same as bulk material, if the material is regulated at all.
For example, zinc oxide, a material that was stable and unreactive as a pigment in white paint, is now accumulating in oceans when used as nanoparticles in sunscreen, warranting a call to create alternative reef-safe sunscreens. And although fats and lipids aren’t regulated, the researchers suggest which agencies could weigh in on regulations were fats to become after-treatment byproducts.
The article also inventories national and international agencies, organizations, and governing bodies with an interest in understanding how nanoparticles break down or react in a living organism and the environmental life cycle of a nanoparticle. Because nanobiotechnology spans materials science, biology, medicine, environmental science, and tech, these disparate research and regulatory disciplines must come together, often for the first time—to fully understand the impact on humans and the environment.
Dual use: Good for us, bad for us
Like other quickly growing fields, there’s a time lag between the promise of new advances and the possibilities of unintended uses.
“There were so many more applications than we thought there were,” said Bradley, who collected exciting nanobio examples such as Alzheimer’s treatment, permanent contact lenses, organ replacement, and enhanced muscle recovery, among others.
The article also highlights concerns about crossing the blood-brain barrier, thought-initiated control of computers, and nano-enabled DNA editing where the researchers suggest more caution, questioning, and attention could be warranted. This attention spans everything from deep fundamental research and regulations all the way to what Omberg called “the equivalent of tattoo removal” if home-DNA splicing attempts go south.
The researchers draw parallels to more established fields such as synthetic bio and pharmacology, which offer lessons to be learned from current concerns such as the unintended consequences of fentanyl and opioids. They believe these fields also offer examples of innovative coordination between science and ethics, such as synthetic bio’s iGEM [The International Genetically Engineered Machine] student competition, to think about not just how to create, but also to shape the use and control of new technologies.
Omberg said unusually enthusiastic early reviewers of the article contributed even more potential uses and concerns, demonstrating that experts in many fields recognize ethical nanobiotechnology is an issue to get in front of. “This is a train that’s going. It will be sad if 10 years from now, we haven’t figured how to talk about it.”
Funding for the team’s research was supported by PNNL’s Biorisk Beyond the List National Security Directorate Objective.
It seems a little odd that the news release (“Prosthetics moved by thoughts …”) and the paper both reference neurotechnology without ever mentioning it by name. Here’s the reference from the paper, Note: Links have been removed,
Nanoparticles May Be Developed to Facilitate Cognitive Enhancements
The development and implementation of NPs that enhance cognitive function has yet to be realized. However, recent advances on the micro- and macro-level with neural–machine interfacing provide the building blocks necessary to develop this technology on the nanoscale. A noninvasive brain–computer interface to control a robotic arm was developed by teams at 2 universities.157 A US-based company, Neuralink, [emphasis mine] is at the forefront of implementing implantable, intracortical microelectrodes that provide an interface between the human brain and technology.158,159 Utilization of intracortical microelectrodes may ultimately provide thought-initiated access and control of computers and mobile devices, and possibly expand cognitive function by accessing underutilized areas of the brain.158
Neuralink (founded by Elon Musk) is controversial for its animal testing practices. You can find out more in Björn Ólafsson’s May 30, 2023 article for Sentient Media.
The focus on nanoparticles as the key factor in the various technologies and applications mentioned seems narrow but necessary given the breadth of topics covered in the paper, as the authors themselves note in the paper’s abstract,
… In this article, while not comprehensive, we attempt to illustrate the breadth and promise of bionanotechnology developments, and how they may present future safety and security challenges. Specifically, we address current advancements to streamline the development of engineered NPs for in vivo applications and provide discussion on nano–bio interactions, NP in vivo delivery, nanoenhancement of human performance, nanomedicine, and the impacts of NPs on human health and the environment.
…
They have a good overview of the history and discussions about nanotechnology risks and regulation. It’s international in scope with a heavy emphasis on US efforts, as one would expect.
For anyone who’s interested in the neurotechnology end of things, I’ve got a July 17, 2023 commentary “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report.” The report was launched July 13, 2023 during UNESCO’s Global dialogue on the ethics of neurotechnology (see my July 7, 2023 posting about the then upcoming dialogue for links to more UNESCO information). Both the July 17 and July 7, 2023 postings included additional information about Neuralink.
I have two items on ChatGPT and academic cheating. The first (from April 2023) deals with the economic impact on people who make their living by writing the papers for the cheaters and the second (from May 2023) deals with unintended consequences for the cheaters (the students not the contract writers).
Making a living in Kenya
Martin K.N Siele’s April 21, 2023 article for restofworld.org (a website where you can find “Reporting [on] Global Tech Stories”) provides a perspective that’s unfamiliar to me, Note: Links have been removed,
For the past nine years, Collins, a 27-year-old freelance writer, has been making money by writing assignments for students in the U.S. — over 13,500 kilometers away from Nanyuki in central Kenya, where he lives. He is part of the “contract cheating” industry, known locally as simply “academic writing.” Collins writes college essays on topics including psychology, sociology, and economics. Occasionally, he is even granted direct access to college portals, allowing him to submit tests and assignments, participate in group discussions, and talk to professors using students’ identities. In 2022, he made between $900 and $1,200 a month from this work.
Lately, however, his earnings have dropped to $500–$800 a month. Collins links this to the meteoric rise of ChatGPT and other generative artificial intelligence tools.
“Last year at a time like this, I was getting, on average, 50 to 70 assignments, including discussions which are shorter, around 150 words each, and don’t require much research,” Collins told Rest of World. “Right now, on average, I get around 30 to 40-something assignments.” He requested to be identified only by his first name to avoid jeopardizing his accounts on platforms where he finds clients.
In January 2023, online learning platform Study surveyed more than 1,000 American students and over 100 educators. More than 89% of the students said they had used ChatGPT for help with a homework assignment. Nearly half admitted to using ChatGPT for an at-home test or quiz, 53% had used it to write an essay, and 22% had used it for outlining one.
Collins now fears that the rise of AI could significantly reduce students’ reliance on freelancers like him in the long term, affecting their income. Meanwhile, he depends on ChatGPT to generate the content he used to outsource to other freelance writers.
While 17 states in the U.S. have banned contract cheating, it has not been a problem for freelancers in Kenya, concerned about providing for themselves and their families. Despite being the largest economy in East Africa, Kenya has the region’s highest unemployment rate, with 5.7% of the labor force out of work in 2021. Around 25.8% of the population is estimated to live in extreme poverty. This situation makes the country a potent hub for freelance workers. According to the Online Labour Index (OLI), an economic indicator that measures the global online gig economy, Kenya accounts for 1% of the world’s online freelance workforce, ranking 15th overall and second only to Egypt in Africa. About 70% of online freelancers in Kenya offer writing and translation services.
…
Not everyone agrees with Collins about the impact that AI such as ChatGPT is having on ghostwriters’ bottom lines, but everyone agrees there’s an impact. If you have time, do read Siele’s April 21, 2023 article in its entirety.
The dark side of using contract writing services
This May 10, 2023 essay on The Conversation by Nathalie Wierdak (Teaching Fellow) and Lynnaire Sheridan (Senior lecturer), both at the University of Otago, takes a more standard perspective, initially (Note: Links have been removed; h/t phys.org May 11, 2023 news item),
Since the launch of ChatGPT in late 2022, academics have expressed concern over the impact the artificial intelligence service could have on student work.
But educational institutions trying to safeguard academic integrity could be looking in the wrong direction. Yes, ChatGPT raises questions about how to assess students’ learning. However, it should be less of a concern than the persistent and pervasive use of ghostwriting services.
Essentially, academic ghostwriting is when a student submits a piece of work as their own which is, in fact, written by someone else. Often dubbed “contract cheating,” the outsourcing of assessment to ghostwriters undermines student learning.
…
But contract cheating is increasingly commonplace as time-poor students juggle jobs to meet the soaring costs of education. And the internet creates the perfect breeding ground for willing ghostwriting entrepreneurs.
In New Zealand, 70-80% of tertiary students engage in some form of cheating. While most of this academic misconduct was collusion with peers or plagiarism, the emergence of artificial intelligence has been described as a battle academia will inevitably lose.
It is time a new approach is taken by universities.
Allowing the use of ChatGPT by students could help reduce the use of contract cheating by doing the heavy lifting of academic work while still giving students the opportunity to learn.
…
This essay seems to have been written as a counterpoint to Siele’s article. Here’s where the May 10, 2023 essay gets interesting,
Universities have been cracking down on ghost writing to ensure quality education, to protect their students from blackmail and to even prevent international espionage [emphasis mine].
Contract cheating websites store personal data making students unwittingly vulnerable to extortion to avoid exposure and potential expulsion from their institution, or the loss of their qualification.
Some researchers are warning there is an even greater risk – that private student data will fall into the hands of foreign state actors.
Preventing student engagement with contract cheating sites, or at least detecting students who use them, avoids the likelihood of graduates in critical job roles being targeted for nationally sensitive data.
…
Given the underworld associated with ghostwriting, artificial intelligence has the potential to bust the contract cheating economy. This would keep students safer by providing them with free, instant and accessible resources.
…
If you have time to read it in its entirety, there are other advantages to AI-enhanced learning mentioned in the May 10, 2023 essay.
Philosophers and legal scholars have explored significant aspects of the moral and legal status of robots, with some advocating for giving robots rights. As robots assume more roles in the world, a new analysis reviewed research on robot rights, concluding that granting rights to robots is a bad idea. Instead, the article looks to Confucianism to offer an alternative.
The analysis, by a researcher at Carnegie Mellon University (CMU), appears in Communications of the ACM, published by the Association for Computing Machinery.
“People are worried about the risks of granting rights to robots,” notes Tae Wan Kim, Associate Professor of Business Ethics at CMU’s Tepper School of Business, who conducted the analysis. “Granting rights is not the only way to address the moral status of robots: Envisioning robots as rites bearers—not a rights bearers—could work better.”
Although many believe that respecting robots should lead to granting them rights, Kim argues for a different approach. Confucianism, an ancient Chinese belief system, focuses on the social value of achieving harmony; individuals are made distinctively human by their ability to conceive of interests not purely in terms of personal self-interest, but in terms that include a relational and a communal self. This, in turn, requires a unique perspective on rites, with people enhancing themselves morally by participating in proper rituals.
When considering robots, Kim suggests that the Confucian alternative of assigning rites—or what he calls role obligations—to robots is more appropriate than giving robots rights. The concept of rights is often adversarial and competitive, and potential conflict between humans and robots is concerning.
“Assigning role obligations to robots encourages teamwork, which triggers an understanding that fulfilling those obligations should be done harmoniously,” explains Kim. “Artificial intelligence (AI) imitates human intelligence, so for robots to develop as rites bearers, they must be powered by a type of AI that can imitate humans’ capacity to recognize and execute team activities—and a machine can learn that ability in various ways.”
Kim acknowledges that some will question why robots should be treated respectfully in the first place. “To the extent that we make robots in our image, if we don’t treat them well, as entities capable of participating in rites, we degrade ourselves,” he suggests.
Various non-natural entities—such as corporations—are considered people and even assume some Constitutional rights. In addition, humans are not the only species with moral and legal status; in most developed societies, moral and legal considerations preclude researchers from gratuitously using animals for lab experiments.
Here’s a link to and a citation for the paper,
Should Robots Have Rights or Rites? by Tae Wan Kim, Alan Strudler. Communications of the ACM, June 2023, Vol. 66 No. 6, Pages 78-85 DOI: 10.1145/3571721
The paper is quite readable, as academic papers go (Note: Links have been removed),
Boston Dynamics recently released a video introducing Atlas, a six-foot bipedal humanoid robot capable of search and rescue missions. Part of the video contained employees apparently abusing Atlas (for example, kicking, hitting it with a hockey stick, pushing it with a heavy ball). The video quickly raised a public and academic debate regarding how humans should treat robots. A robot, in some sense, is nothing more than software embedded in hardware, much like a laptop computer. If it is your property and kicking it harms no one nor infringes on anyone’s rights, it’s okay to kick it, although that would be a stupid thing to do. Likewise, there seems to be no significant reason that kicking a robot should be deemed as a moral or legal wrong. However, the question—”What do we owe to robots?”—is not that simple. Philosophers and legal scholars have seriously explored and defended some significant aspects of the moral and legal status of robots—and their rights.3,6,15,16,24,29,36 In fact, various non-natural entities—for example, corporations—are treated as persons and even enjoy some constitutional rights.a In addition, humans are not the only species that get moral and legal status. In most developed societies, for example, moral and legal considerations preclude researchers from gratuitously using animals for lab experiments. The fact that corporations are treated as persons and animals are recognized as having some rights does not entail that robots should be treated analogously.
Launched on Thursday, July 13, 2023 during UNESCO’s (United Nations Educational, Scientific, and Cultural Organization) “Global dialogue on the ethics of neurotechnology,” is a report tying together the usual measures of national scientific supremacy (number of papers published and number of patents filed) with information on corporate investment in the field. Consequently, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends” by Daniel S. Hain, Roman Jurowetzki, Mariagrazia Squicciarini, and Lihui Xu provides better insight into the international neurotechnology scene than is sometimes found in these kinds of reports. By the way, the report is open access.
Here’s what I mean, from the report‘s short summary,
…
Since 2013, government investments in this field have exceeded $6 billion. Private investment has also seen significant growth, with annual funding experiencing a 22-fold increase from 2010 to 2020, reaching $7.3 billion and totaling $33.2 billion.
This investment has translated into a 35-fold growth in neuroscience publications between 2000-2021 and 20-fold growth in innovations between 2000-2020, as proxied by patents. However, not all are poised to benefit from such developments, as big divides emerge.
Over 80% of high-impact neuroscience publications are produced by only ten countries, while 70% of countries contributed fewer than 10 such papers over the period considered. Similarly, five countries only hold 87% of IP5 neurotech patents.
This report sheds light on the neurotechnology ecosystem, that is, what is being developed, where and by whom, and informs about how neurotechnology interacts with other technological trajectories, especially Artificial Intelligence [emphasis mine]. [p. 2]
…
The money aspect is eye-opening even when you already have your suspicions. Also, it’s not entirely unexpected to learn that only ten countries produce over 80% of the high impact neurotech papers and that only five countries hold 87% of the IP5 neurotech patents but it is stunning to see it in context. (If you’re not familiar with the term ‘IP5 patents’, scroll down in this post to the relevant subhead. Hint: It means the patent was filed in one of the top five jurisdictions; I’ll leave you to guess which ones those might be.)
“Since 2013 …” isn’t quite as informative as the authors may have hoped. I wish they had given a time frame for government investments similar to what they did for corporate investments (e.g., 2010 – 2020). Also, is the $6B (likely in USD) government investment cumulative or an estimated annual number? To sum up, I would have appreciated parallel structure and specificity.
Nitpicks aside, there’s some very good material intended for policy makers. On that note, some of the analysis is beyond me. I haven’t used anything even somewhat close to their analytical tools in years and years. This commentary reflects my interests and a very rapid reading. One last thing: this is being written from a Canadian perspective. With those caveats in mind, here’s some of what I found.
A definition, social issues, country statistics, and more
There’s a definition for neurotechnology and a second mention of artificial intelligence being used in concert with neurotechnology. From the report‘s executive summary,
Neurotechnology consists of devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of animals or human beings. It is poised to revolutionize our understanding of the brain and to unlock innovative solutions to treat a wide range of diseases and disorders.
…
Similarly to Artificial Intelligence (AI), and also due to its convergence with AI, neurotechnology may have profound societal and economic impact, beyond the medical realm. As neurotechnology directly relates to the brain, it triggers ethical considerations about fundamental aspects of human existence, including mental integrity, human dignity, personal identity, freedom of thought, autonomy, and privacy [emphases mine]. Its potential for enhancement purposes and its accessibility further amplifies its prospective social and societal implications.
…
The recent discussions held at UNESCO’s Executive Board further shows Member States’ desire to address the ethics and governance of neurotechnology through the elaboration of a new standard-setting instrument on the ethics of neurotechnology, to be adopted in 2025. To this end, it is important to explore the neurotechnology landscape, delineate its boundaries, key players, and trends, and shed light on neurotech’s scientific and technological developments. [p. 7]
The present report addresses such a need for evidence in support of policy making in relation to neurotechnology by devising and implementing a novel methodology on data from scientific articles and patents:
● We detect topics over time and extract relevant keywords using transformer-based language models fine-tuned for scientific text. Publication data for the period 2000-2021 are sourced from the Scopus database and encompass journal articles and conference proceedings in English. The 2,000 most cited publications per year are further used in in-depth content analysis.
● Keywords are identified through Named Entity Recognition and used to generate search queries for conducting a semantic search on patents’ titles and abstracts, using another language model developed for patent text. This allows us to identify patents associated with the identified neuroscience publications and their topics. The patent data used in the present analysis are sourced from the European Patent Office’s Worldwide Patent Statistical Database (PATSTAT). We consider IP5 patents filed between 2000-2020 having an English language abstract and exclude patents solely related to pharmaceuticals.
This approach allows mapping the advancements detailed in scientific literature to the technological applications contained in patent applications, allowing for an analysis of the linkages between science and technology. This almost fully automated novel approach allows repeating the analysis as neurotechnology evolves. [pp. 8-9]
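The patent-matching step the authors describe, pairing keywords extracted from highly cited neuroscience papers with a semantic search over patent titles and abstracts, is easier to picture with a toy example. Here’s a minimal sketch of that kind of matching; the off-the-shelf model name, the sample texts, and the similarity threshold are all my own assumptions standing in for the fine-tuned scientific and patent language models the report actually uses.

```python
# Minimal sketch (not the report's code): match neuroscience topic keywords
# to patent abstracts with embedding-based semantic search.
# Assumptions: the general-purpose "all-MiniLM-L6-v2" model stands in for the
# fine-tuned scientific/patent models the report describes; the sample texts
# and the 0.4 similarity threshold are illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Keywords that Named Entity Recognition might pull from highly cited papers
topic_keywords = [
    "brain-computer interface",
    "deep brain stimulation",
    "seizure prediction",
]

# Patent titles/abstracts (toy stand-ins for PATSTAT records)
patent_abstracts = [
    "An implantable electrode array for decoding motor cortex signals "
    "to control an external robotic limb.",
    "A topical sunscreen formulation containing zinc oxide nanoparticles.",
    "A wearable EEG device that forecasts epileptic seizures using a "
    "recurrent neural network.",
]

keyword_emb = model.encode(topic_keywords, convert_to_tensor=True)
patent_emb = model.encode(patent_abstracts, convert_to_tensor=True)

# Cosine similarity between every keyword and every patent abstract
similarity = util.cos_sim(keyword_emb, patent_emb)

THRESHOLD = 0.4  # arbitrary cut-off for this sketch
for i, keyword in enumerate(topic_keywords):
    for j, _abstract in enumerate(patent_abstracts):
        score = similarity[i][j].item()
        if score >= THRESHOLD:
            print(f"{keyword!r} matches patent {j} (score {score:.2f})")
```

In the report’s actual pipeline, the publication side comes from Scopus (2000-2021) and the patent side from IP5 filings in PATSTAT (2000-2020), with separate language models fine-tuned on scientific and patent text.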
Findings in bullet points,
Key stylized facts are:
● The field of neuroscience has witnessed a remarkable surge in the overall number of publications since 2000, exhibiting a nearly 35-fold increase over the period considered, reaching 1.2 million in 2021. The annual number of publications in neuroscience has nearly tripled since 2000, exceeding 90,000 publications a year in 2021. This increase became even more pronounced since 2019.
● The United States leads in terms of neuroscience publication output (40%), followed by the United Kingdom (9%), Germany (7%), China (5%), Canada (4%), Japan (4%), Italy (4%), France (4%), the Netherlands (3%), and Australia (3%). These countries account for over 80% of neuroscience publications from 2000 to 2021.
● Big divides emerge, with 70% of countries in the world having less than 10 high-impact neuroscience publications between 2000 to 2021.
● Specific neurotechnology-related research trends between 2000 and 2021 include:
○ An increase in Brain-Computer Interface (BCI) research around 2010, maintaining a consistent presence ever since.
○ A significant surge in Epilepsy Detection research in 2017 and 2018, reflecting the increased use of AI and machine learning in healthcare.
○ Consistent interest in Neuroimaging Analysis, which peaks around 2004, likely because of its importance in brain activity and language comprehension studies.
○ While peaking in 2016 and 2017, Deep Brain Stimulation (DBS) remains a persistent area of research, underlining its potential in treating conditions like Parkinson’s disease and essential tremor.
● Between 2000 and 2020, the total number of patent applications in this field increased significantly, experiencing a 20-fold increase from less than 500 to over 12,000. In terms of annual figures, a consistent upward trend in neurotechnology-related patent applications emerges, with a notable doubling observed between 2015 and 2020.
● The United States account for nearly half of all worldwide patent applications (47%). Other major contributors include South Korea (11%), China (10%), Japan (7%), Germany (7%), and France (5%). These five countries together account for 87% of IP5 neurotech patents applied between 2000 and 2020.
○ The United States has historically led the field, with a peak around 2010, a decline towards 2015, and a recovery up to 2020.
○ South Korea emerged as a significant contributor after 1990, overtaking Germany in the late 2000s to become the second-largest developer of neurotechnology. By the late 2010s, South Korea’s annual neurotechnology patent applications approximated those of the United States.
○ China exhibits a sharp increase in neurotechnology patent applications in the mid-2010s, bringing it on par with the United States in terms of application numbers.
● The United States ranks highest in both scientific publications and patents, indicating their strong ability to transform knowledge into marketable inventions. China, France, and Korea excel in leveraging knowledge to develop patented innovations. Conversely, countries such as the United Kingdom, Germany, Italy, Canada, Brazil, and Australia lag behind in effectively translating neurotech knowledge into patentable innovations.
● In terms of patent quality measured by forward citations, the leading countries are Germany, US, China, Japan, and Korea.
● A breakdown of patents by technology field reveals that Computer Technology is the most important field in neurotechnology, exceeding Medical Technology, Biotechnology, and Pharmaceuticals.
The growing importance of algorithmic applications, including neural computing techniques, also emerges by looking at the increase in patent applications in these fields between 2015-2020. Compared to the reference year, computer technologies-related patents in neurotech increased by 355% and by 92% in medical technology.
● An analysis of the specialization patterns of the top-5 countries developing neurotechnologies reveals that Germany has been specializing in chemistry-related technology fields, whereas Asian countries, particularly South Korea and China, focus on computer science and electrical engineering-related fields. The United States exhibits a balanced configuration with specializations in both chemistry and computer science-related fields.
● The entities – i.e. both companies and other institutions – leading worldwide innovation in the neurotech space are: IBM (126 IP5 patents, US), Ping An Technology (105 IP5 patents, CH), Fujitsu (78 IP5 patents, JP), Microsoft (76 IP5 patents, US)1, Samsung (72 IP5 patents, KR), Sony (69 IP5 patents, JP) and Intel (64 IP5 patents, US)
This report further proposes a pioneering taxonomy of neurotechnologies based on International Patent Classification (IPC) codes.
• 67 distinct patent clusters in neurotechnology are identified, which mirror the diverse research and development landscape of the field. The 20 most prominent neurotechnology groups, particularly in areas like multimodal neuromodulation, seizure prediction, neuromorphic computing [emphasis mine], and brain-computer interfaces, point to potential strategic areas for research and commercialization.
• The variety of patent clusters identified mirrors the breadth of neurotechnology’s potential applications, from medical imaging and limb rehabilitation to sleep optimization and assistive exoskeletons.
• The development of a baseline IPC-based taxonomy for neurotechnology offers a structured framework that enriches our understanding of this technological space and can facilitate research, development and analysis. The identified key groups mirror the interdisciplinary nature of neurotechnology and underscore its potential impact, not only in healthcare but also in areas like information technology and biomaterials, with non-negligible effects on societies and economies.
1 If we consider Microsoft Technology Licensing LLC and Microsoft Corporation as being under the same umbrella, Microsoft leads worldwide developments with 127 IP5 patents. Similarly, if we were to consider that Siemens AG and Siemens Healthcare GmbH belong to the same conglomerate, Siemens would appear much higher in the ranking, in third position, with 84 IP5 patents. The distribution of intellectual property assets across companies belonging to the same conglomerate is common and mirrors strategic as well as operational needs, among other factors. [pp. 9-11]
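To put those patent growth figures in perspective, here is a quick back-of-the-envelope calculation of my own (it is not in the report): a rise from roughly 500 to roughly 12,000 applications over 2000-2020, and a doubling over 2015-2020, translate into the following compound annual growth rates.

```python
# Back-of-the-envelope compound annual growth rates (CAGR) for the patent
# figures quoted above. The ~500 and ~12,000 endpoints are the report's
# approximate numbers; everything else is simple arithmetic.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end / start) ** (1 / years) - 1

# roughly 20-fold increase in neurotech patent applications, 2000-2020
print(f"2000-2020: {cagr(500, 12_000, 20):.1%} per year")  # about 17% per year

# a doubling between 2015 and 2020
print(f"2015-2020: {cagr(1, 2, 5):.1%} per year")          # about 15% per year
```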
Surprises and comments
Interesting and helpful to learn that “neurotechnology interacts with other technological trajectories, especially Artificial Intelligence;” this has changed and improved my understanding of neurotechnology.
It was unexpected to find Canada in the top ten countries producing neuroscience papers. However, finding out that the country lags in translating its ‘neuro’ knowledge into patentable innovation is not entirely a surprise.
It can’t be an accident that countries with major ‘electronics and computing’ companies lead in patents. These companies do have researchers but they also buy startups to acquire patents. They (and ‘patent trolls’) will also file patents preemptively. For the patent trolls, it’s a moneymaking proposition and for the large companies, it’s a way of protecting their own interests and/or (I imagine) forcing a sale.
The mention of neuromorphic (brainlike) computing in the taxonomy section was surprising and puzzling. Up to this point, I’d thought of neuromorphic computing as an alternative or addition to standard computing, but the authors have blurred the lines, in keeping with UNESCO’s definition of neurotechnology (specifically, “… emulate the structure and function of the neural systems of animals or human beings”). Again, this report is broadening my understanding of neurotechnology. Of course, it took two instances (the definition and the taxonomy) before I quite grasped it.
What’s puzzling is that neuromorphic engineering, a broader term that includes neuromorphic computing, isn’t used or mentioned. (For an explanation of the terms neuromorphic computing and neuromorphic engineering, there’s my June 23, 2023 posting, “Neuromorphic engineering: an overview.” )
The report
I won’t have time for everything. Here are some of the highlights from my admittedly personal perspective.
Neurotechnology’s applications however extend well beyond medicine [emphasis mine], and span from research, to education, to the workplace, and even people’s everyday life. Neurotechnology-based solutions may enhance learning and skill acquisition and boost focus through brain stimulation techniques. For instance, early research finds that brain-zapping caps appear to boost memory for at least one month (Berkeley, 2022). This could one day be used at home to enhance memory functions [emphasis mine]. They can further enable new ways to interact with the many digital devices we use in everyday life, transforming the way we work, live and interact. One example is the Sound Awareness wristband developed by a Stanford team (Neosensory, 2022) which enables individuals to “hear” by converting sound into tactile feedback, so that sound impaired individuals can perceive spoken words through their skin. Takagi and Nishimoto (2023) analyzed the brain scans taken through Magnetic Resonance Imaging (MRI) as individuals were shown thousands of images. They then trained a generative AI tool called Stable Diffusion on the brain scan data of the study’s participants, thus creating images that roughly corresponded to the real images shown. While this does not correspond to reading the mind of people, at least not yet, and some limitations of the study have been highlighted (Parshall, 2023), it nevertheless represents an important step towards developing the capability to interface human thoughts with computers [emphasis mine], via brain data interpretation.
While the above examples may sound somewhat like science fiction, the recent uptake of generative Artificial Intelligence applications and of large language models such as ChatGPT or Bard, demonstrates that the seemingly impossible can quickly become an everyday reality. At present, anyone can purchase online electroencephalogram (EEG) devices for a few hundred dollars [emphasis mine], to measure the electrical activity of their brain for meditation, gaming, or other purposes. [pp. 14-15]
This is a very impressive achievement. Some of the research cited was published earlier this year (2023). The extraordinary speed is a testament to the efforts of the authors and their teams. It’s also a testament to how quickly the field is moving.
I’m glad to see the mention of and focus on consumer neurotechnology. (While the authors don’t speculate, I am free to do so.) Consumer neurotechnology could be viewed as one of the steps toward normalizing a cyborg future for all of us. Yes, we have books, television programmes, movies, and video games, which all normalize the idea, but the people depicted have been severely injured and require the augmentation. With consumer neurotechnology, you have easily accessible devices being used to enhance people who aren’t injured; they just want to be ‘better’.
This phrase seemed particularly striking “… an important step towards developing the capability to interface human thoughts with computers” in light of some claims made by the Australian military in my June 13, 2023 posting “Mind-controlled robots based on graphene: an Australian research story.” (My posting has an embedded video demonstrating the Brain Robotic Interface (BRI) in action. Also, see the paragraph below the video for my ‘measured’ response.)
There’s no mention of the military in the report, which seems like a deliberate rather than an inadvertent omission given the importance of military innovation where technology is concerned.
This section gives a good overview of government initiatives (in the report it’s followed by a table of the programmes),
Thanks to the promises it holds, neurotechnology has garnered significant attention from both governments and the private sector and is considered by many as an investment priority. According to the International Brain Initiative (IBI), brain research funding has become increasingly important over the past ten years, leading to a rise in large-scale state-led programs aimed at advancing brain intervention technologies (International Brain Initiative, 2021). Since 2013, initiatives such as the United States’ Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the European Union’s Human Brain Project (HBP), as well as major national initiatives in China, Japan and South Korea have been launched with significant funding support from the respective governments. The Canadian Brain Research Strategy, initially operated as a multi-stakeholder coalition on brain research, is also actively seeking funding support from the government to transform itself into a national research initiative (Canadian Brain Research Strategy, 2022). A similar proposal is also seen in the case of the Australian Brain Alliance, calling for the establishment of an Australian Brain Initiative (Australian Academy of Science, n.d.). [pp. 15-16]
Privacy
There are some concerns such as these,
Beyond the medical realm, research suggests that emotional responses of consumers related to preferences and risks can be concurrently tracked by neurotechnology, such as neuroimaging, and that neural data can better predict market-level outcomes than traditional behavioral data (Karmarkar and Yoon, 2016). As such, neural data is increasingly sought after in the consumer market for purposes such as digital phenotyping, neurogaming, and neuromarketing (UNESCO, 2021). This surge in demand gives rise to risks like hacking, unauthorized data reuse, extraction of privacy-sensitive information, digital surveillance, criminal exploitation of data, and other forms of abuse. These risks prompt the question of whether neural data needs distinct definition and safeguarding measures.
These issues are particularly relevant today as a wide range of electroencephalogram (EEG) headsets that can be used at home are now available in consumer markets for purposes that range from meditation assistance to controlling electronic devices through the mind. Imagine an individual is using one of these devices to play a neurofeedback game, which records the person’s brain waves during the game. Without the person being aware, the system can also identify the patterns associated with an undiagnosed mental health condition, such as anxiety. If the game company sells this data to third parties, e.g. health insurance providers, this may lead to an increase of insurance fees based on undisclosed information. This hypothetical situation would represent a clear violation of mental privacy and an unethical use of neural data.
Another example is in the field of advertising, where companies are increasingly interested in using neuroimaging to better understand consumers’ responses to their products or advertisements, a practice known as neuromarketing. For instance, a company might use neural data to determine which advertisements elicit the most positive emotional responses in consumers. While this can help companies improve their marketing strategies, it raises significant concerns about mental privacy. Questions arise as to whether consumers are aware that their neural data is being used, and as to the extent to which this can lead to manipulative advertising practices that unfairly exploit unconscious preferences. Such potential abuses underscore the need for explicit consent and rigorous data protection measures in the use of neurotechnology for neuromarketing purposes. [pp. 21-22]
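For anyone wondering what the ‘brain waves’ such a headset records actually look like to software, here is a toy sketch of my own (not from the report) of the kind of alpha-band power calculation a neurofeedback game might run on an EEG signal. The synthetic signal, the sampling rate and the 8–12 Hz band are assumptions made for the example.

```python
# Toy neurofeedback-style calculation: estimate alpha-band (8-12 Hz) power
# from a single EEG channel. The signal here is synthetic; a real consumer
# headset would stream similar voltage samples to an app.
import numpy as np
from scipy.signal import welch

fs = 256                                 # sampling rate in Hz (typical of consumer EEG)
t = np.arange(0, 10, 1 / fs)             # 10 seconds of samples
# synthetic "EEG": a 10 Hz alpha rhythm buried in noise (amplitudes in volts)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)         # power spectral density
band = (freqs >= 8) & (freqs <= 12)                    # alpha band
alpha_power = psd[band].sum() * (freqs[1] - freqs[0])  # integrate PSD over the band

print(f"Alpha-band power: {alpha_power:.2e} V^2")
```

A game would simply repeat a calculation like this on a rolling window and reward the player when the number goes up; the privacy question is what else gets inferred, and stored, from the same data stream.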
Legalities
Some countries already have laws and regulations regarding neurotechnology data,
At the national level, only a few countries have enacted laws and regulations to protect mental integrity or have included neuro-data in personal data protection laws (UNESCO, University of Milan-Bicocca (Italy) and State University of New York – Downstate Health Sciences University, 2023). Examples are the constitutional reform undertaken by Chile (Republic of Chile, 2021), the Charter for the responsible development of neurotechnologies of the Government of France (Government of France, 2022), and the Digital Rights Charter of the Government of Spain (Government of Spain, 2021). They propose different approaches to the regulation and protection of human rights in relation to neurotechnology. Countries such as the UK are also examining under which circumstances neural data may be considered as a special category of data under the general data protection framework (i.e. UK’s GDPR) (UK’s Information Commissioner’s Office, 2023) [p. 24]
As you can see, these are recent laws. There doesn’t seem to be any attempt here in Canada even though there is an act being reviewed in Parliament that could conceivably include neural data. This is from my May 1, 2023 posting,
Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). [emphasis added July 11, 2023] You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.
My focus at the time was artificial intelligence. Now, after reading this UNESCO report and briefly looking at the Innovation, Science and Economic Development (ISED) Canada summary and the detailed descriptions of the act on ISED’s Canada’s Digital Charter webpage, I don’t see anything that specifies neural data, but it’s not excluded either.
IP5 patents
Here’s the explanation (the footnote is included at the end of the excerpt),
IP5 patents represent a subset of overall patents filed worldwide, which have the characteristic of having been filed in at least one of the top intellectual property offices (IPOs) worldwide (the so-called IP5, namely the Chinese National Intellectual Property Administration, CNIPA (formerly SIPO); the European Patent Office, EPO; the Japan Patent Office, JPO; the Korean Intellectual Property Office, KIPO; and the United States Patent and Trademark Office, USPTO) as well as in another country, which may or may not be an IP5. This signals their potential applicability worldwide, as their inventiveness and industrial viability have been validated by at least two leading IPOs. This gives these patents a sort of “quality” check, also since patenting inventions is costly; if applicants try to protect the same invention in several parts of the world, this normally indicates that the applicant has expectations about its importance and expected value. If we were to conduct the same analysis using information about individually considered patents applied for worldwide, i.e. without filtering for quality or considering patent families, we would risk conducting a biased analysis based on duplicated data. Also, as patentability standards vary across countries and IPOs, and what matters for patentability is the existence (or not) of prior art in the IPO considered, we would risk mixing real innovations with patents related to catching-up phenomena in countries that are not at the forefront of the technology considered.
9 The five IP offices (IP5) is a forum of the five largest intellectual property offices in the world that was set up to improve the efficiency of the examination process for patents worldwide. The IP5 Offices together handle about 80% of the world’s patent applications, and 95% of all work carried out under the Patent Cooperation Treaty (PCT), see http://www.fiveipoffices.org. (Dernis et al., 2015) [p. 31]
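Out of curiosity, here is a rough sketch of how such a filter might look in code. It is entirely my own illustration; the office codes and the data structure are hypothetical and simplified (a real analysis would work with patent families from a database such as PATSTAT).

```python
# Rough illustration of the IP5 filter described above: keep a patent family
# only if it was filed at one of the five big offices (CNIPA, EPO, JPO, KIPO,
# USPTO) *and* at at least one other office. Office codes and the example
# families below are hypothetical, for illustration only.

IP5_OFFICES = {"CN", "EP", "JP", "KR", "US"}

def is_ip5_family(filing_offices: set) -> bool:
    """True if the family was filed at an IP5 office plus at least one other office."""
    return bool(filing_offices & IP5_OFFICES) and len(filing_offices) >= 2

families = {
    "family_A": {"US", "EP", "CA"},  # two IP5 offices plus Canada: qualifies
    "family_B": {"US"},              # filed only at the USPTO: does not qualify
    "family_C": {"CA", "AU"},        # no IP5 office: does not qualify
}

for name, offices in families.items():
    print(name, "->", is_ip5_family(offices))
```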
AI assistance on this report
As noted earlier, I have next to no experience with the analytical tools, having not attempted this kind of work in several years. Here’s an example of what they were doing,
We utilize a combination of text embeddings based on Bidirectional Encoder Representations from Transformers (BERT), dimensionality reduction, and hierarchical clustering inspired by the BERTopic methodology to identify latent themes within research literature. Latent themes or topics in the context of topic modeling represent clusters of words that frequently appear together within a collection of documents (Blei, 2012). These groupings are not explicitly labeled but are inferred through computational analysis examining patterns in word usage. These themes are ‘hidden’ within the text, only to be revealed through this analysis. …
…
We further utilize OpenAI’s GPT-4 model to enrich our understanding of topics’ keywords and to generate topic labels (OpenAI, 2023), thus supplementing expert review of the broad interdisciplinary corpus. Recently, GPT-4 has shown impressive results in medical contexts across various evaluations (Nori et al., 2023), making it a useful tool to enhance the information obtained from prior analysis stages, and to complement them. The automated process enhances the evaluation workflow, effectively emphasizing neuroscience themes pertinent to potential neurotechnology patents. Notwithstanding existing concerns about hallucinations (Lee, Bubeck and Petro, 2023) and errors in generative AI models, this methodology employs the GPT-4 model for summarization and interpretation tasks, which significantly mitigates the likelihood of hallucinations. Since the model is constrained to the context provided by the keyword collections, it limits the potential for fabricating information outside of the specified boundaries, thereby enhancing the accuracy and reliability of the output. [pp. 33-34]
I couldn’t resist adding the ChatGPT paragraph given all of the recent hoopla about it.
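For readers curious about what that pipeline looks like in practice, here is a minimal sketch. It only assumes the kind of open-source stack the report’s description evokes (BERT-family sentence embeddings, dimensionality reduction, hierarchical clustering); the specific components, model name and placeholder abstracts are my choices, not the report’s published configuration, and the GPT-4 labelling step is omitted.

```python
# Sketch of the described pipeline: BERT-based embeddings, dimensionality
# reduction, then hierarchical clustering to surface latent themes.
# Component choices (MiniLM encoder, PCA, agglomerative clustering) are
# stand-ins, not the report's exact configuration.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

abstracts = [
    "EEG-based brain-computer interface for cursor control.",
    "Brain-computer interface decoding of imagined movement.",
    "Deep brain stimulation outcomes in Parkinson's disease.",
    "Closed-loop deep brain stimulation for essential tremor.",
    "Seizure prediction from intracranial EEG with machine learning.",
    "Deep learning models for automated epilepsy detection.",
]  # placeholders; the report works with a vastly larger publication corpus

embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(abstracts)  # BERT-family encoder
reduced = PCA(n_components=3).fit_transform(embeddings)                 # dimensionality reduction
labels = AgglomerativeClustering(n_clusters=3).fit_predict(reduced)     # hierarchical clustering

for label, abstract in sorted(zip(labels, abstracts)):
    print(label, abstract)  # a labelling step (the report uses GPT-4) would then name each cluster
```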
Multimodal neuromodulation and neuromorphic computing patents
I think this gives a pretty good indication of the activity on the patent front,
The largest, coherent topic, termed “multimodal neuromodulation,” comprises 535 patents detailing methodologies for deep or superficial brain stimulation designed to address neurological and psychiatric ailments. These patented technologies interact with various points in neural circuits to induce either Long-Term Potentiation (LTP) or Long-Term Depression (LTD), offering treatment for conditions such as obsession, compulsion, anxiety, depression, Parkinson’s disease, and other movement disorders. The modalities encompass implanted deep-brain stimulators (DBS), Transcranial Magnetic Stimulation (TMS), and transcranial Direct Current Stimulation (tDCS). Among the most representative documents for this cluster are patents with titles: Electrical stimulation of structures within the brain or Systems and methods for enhancing or optimizing neural stimulation therapy for treating symptoms of Parkinson’s disease and or other movement disorders. [p.65]
Given my longstanding interest in memristors, which (I believe) have to a large extent helped to stimulate research into neuromorphic computing, this had to be included. Then, there was the brain-computer interfaces cluster,
A cluster identified as “Neuromorphic Computing” consists of 366 patents primarily focused on devices designed to mimic human neural networks for efficient and adaptable computation. The principal elements of these inventions are resistive memory cells and artificial synapses. They exhibit properties similar to the neurons and synapses in biological brains, thus granting these devices the ability to learn and modulate responses based on rewards, akin to the adaptive cognitive capabilities of the human brain.
The primary technology classes associated with these patents fall under specific IPC codes, representing the fields of neural network models, analog computers, and static storage structures. Essentially, these classifications correspond to technologies that are key to the construction of computers and exhibit cognitive functions similar to human brain processes.
Examples for this cluster include neuromorphic processing devices that leverage variations in resistance to store and process information, artificial synapses exhibiting spike-timing dependent plasticity, and systems that allow event-driven learning and reward modulation within neuromorphic computers.
In relation to neurotechnology as a whole, the “neuromorphic computing” cluster holds significant importance. It embodies the fusion of neuroscience and technology, thereby laying the basis for the development of adaptive and cognitive computational systems. Understanding this specific cluster provides a valuable insight into the progressing domain of neurotechnology, promising potential advancements across diverse fields, including artificial intelligence and healthcare.
The “Brain-Computer Interfaces” cluster, consisting of 146 patents, embodies a key aspect of neurotechnology that focuses on improving the interface between the brain and external devices. The technology classification codes associated with these patents primarily refer to methods or devices for treatment or protection of eyes and ears, devices for introducing media into, or onto, the body, and electric communication techniques, which are foundational elements of brain-computer interface (BCI) technologies.
Key patents within this cluster include a brain-computer interface apparatus adaptable to use environment and method of operating thereof, a double closed circuit brain-machine interface system, and an apparatus and method of brain-computer interface for device controlling based on brain signal. These inventions mainly revolve around the concept of using brain signals to control external devices, such as robotic arms, and improving the classification performance of these interfaces, even after long periods of non-use.
The inventions described in these patents improve the accuracy of device control, maintain performance over time, and accommodate multiple commands, thus significantly enhancing the functionality of BCIs.
Other identified technologies include systems for medical image analysis, limb rehabilitation, tinnitus treatment, sleep optimization, assistive exoskeletons, and advanced imaging techniques, among others. [pp. 66-67]
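Since the excerpt mentions artificial synapses with spike-timing dependent plasticity (STDP), here is a toy numerical sketch of the basic rule, entirely my own illustration (real neuromorphic patents implement this in resistive memory hardware, not Python, and the parameter values below are made up): a synapse strengthens when the input neuron fires just before the output neuron, and weakens when it fires just after.

```python
# Toy spike-timing dependent plasticity (STDP): the weight change depends on
# the time difference between pre- and post-synaptic spikes. Parameter values
# are illustrative only; in neuromorphic hardware the "weight" is typically
# the conductance of a resistive memory cell.
import math

A_PLUS, A_MINUS = 0.05, 0.055  # learning rates for potentiation / depression
TAU = 20.0                     # time constant in milliseconds

def stdp_delta_w(dt_ms: float) -> float:
    """Weight change for a spike pair with dt = t_post - t_pre (milliseconds)."""
    if dt_ms > 0:   # pre fired before post: potentiate (LTP)
        return A_PLUS * math.exp(-dt_ms / TAU)
    else:           # pre fired after post: depress (LTD)
        return -A_MINUS * math.exp(dt_ms / TAU)

weight = 0.5
for dt in (5.0, 15.0, -5.0, -15.0):  # a few example spike pairings
    weight = min(max(weight + stdp_delta_w(dt), 0.0), 1.0)  # keep weight in [0, 1]
    print(f"dt = {dt:+5.1f} ms -> weight = {weight:.3f}")
```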
Having sections on neuromorphic computing and brain-computer interface patents in immediate proximity led to more speculation on my part. Imagine how much easier it would be to initiate a BCI connection if it’s powered with a neuromorphic (brainlike) computer/device. [ETA July 21, 2023: Following on from that thought, it might be more than just easier to initiate a BCI connection. Could a brainlike computer become part of your brain? Why not? It’s been successfully argued that a robotic wheelchair was part of someone’s body; see my January 30, 2013 posting and scroll down about 40% of the way.]

The report’s conclusion sums up what’s at stake,
Neurotechnology is a complex and rapidly evolving technological paradigm whose trajectories have the power to shape people’s identity, autonomy, privacy, sentiments, behaviors and overall well-being, i.e. the very essence of what it means to be human.
Designing and implementing careful and effective norms and regulations ensuring that neurotechnology is developed and deployed in an ethical manner, for the good of individuals and for society as a whole, call for a careful identification and characterization of the issues at stake. This entails shedding light on the whole neurotechnology ecosystem, that is what is being developed, where and by whom, and also understanding how neurotechnology interacts with other developments and technological trajectories, especially AI. Failing to do so may result in ineffective (at best) or distorted policies and policy decisions, which may harm human rights and human dignity.
…
Addressing the need for evidence in support of policy making, the present report offers first-time robust data and analysis shedding light on the neurotechnology landscape worldwide. To this end, it proposes and implements an innovative approach that leverages artificial intelligence and deep learning on data from scientific publications and paten[t]s to identify scientific and technological developments in the neurotech space. The methodology proposed represents a scientific advance in itself, as it constitutes a quasi-automated replicable strategy for the detection and documentation of neurotechnology-related breakthroughs in science and innovation, to be repeated over time to account for the evolution of the sector. Leveraging this approach, the report further proposes an IPC-based taxonomy for neurotechnology which provides a structured framework for the exploration of neurotechnology, to enable future research, development and analysis. The innovative methodology proposed is very flexible and can in fact be leveraged to investigate different emerging technologies, as they arise.
…
In terms of technological trajectories, we uncover a shift in the neurotechnology industry, with greater emphasis being put on computer and medical technologies in recent years, compared to traditionally dominant trajectories related to biotechnology and pharmaceuticals. This shift warrants close attention from policymakers, and calls for attention in relation to the latest (converging) developments in the field, especially AI and related methods and applications and neurotechnology.
This is all the more important as the observed growth and specialization patterns are unfolding in the context of regulatory environments that, generally, are either non-existent or not fit for purpose. Given the sheer implications and impact of neurotechnology on the very essence of human beings, this lack of regulation poses key challenges related to the possible infringement of mental integrity, human dignity, personal identity, privacy, freedom of thought, and autonomy, among others. Furthermore, issues surrounding accessibility and the potential for neurotech enhancement applications trigger significant concerns, with far-reaching implications for individuals and societies. [pp. 72-73]
Last words about the report
Informative, readable, and thought-provoking. And, it helped broaden my understanding of neurotechnology.
Future endeavours?
I’m hopeful that one of these days one of these groups (UNESCO, the Canadian Science Policy Centre, or ???) will tackle the issue of business bankruptcy in the neurotechnology sector. It has already occurred, as noted in my “Going blind when your neural implant company flirts with bankruptcy [long read]” April 5, 2022 posting. That story opens with a woman going blind in a New York subway when her neural implant fails. It’s how she found out that the company which supplied her implant was going out of business.
In my July 7, 2023 posting about the UNESCO July 2023 dialogue on neurotechnology, I’ve included information on Neuralink (one of Elon Musk’s companies) and its approval (despite some investigations) by the US Food and Drug Administration to start human clinical trials. Scroll down about 75% of the way to the “Food for thought” subhead where you will find stories about allegations made against Neuralink.
The end
If you want to know more about the field, the report offers a seven-page bibliography, and there’s a lot of material on this blog; you could start with my December 3, 2019 posting, “Neural and technological inequalities,” which features an article mentioning a discussion between two scientists. Surprisingly (to me), the source article is in Fast Company (“a leading progressive business media brand,” according to their tagline).
I have two categories you may want to check: Human Enhancement and Neuromorphic Engineering. There are also a number of tags: neuromorphic computing, machine/flesh, brainlike computing, cyborgs, neural implants, neuroprosthetics, memristors, and more.
Should you have any observations or corrections, please feel free to leave them in the Comments section of this posting.
While there’s a great deal of attention and hyperbole attached to artificial intelligence (AI) these days, it seems that neurotechnology may be quietly gaining much needed attention. (For those who are interested, at the end of this posting, there’ll be a bit more information to round out what you’re seeing in the UNESCO material.)
Now, here’s news of an upcoming UNESCO (United Nations Educational, Scientific, and Cultural Organization) meeting on neurotechnology, from a June 6, 2023 UNESCO press release (also received via email), Note: Links have been removed,
The Member States of the Executive Board of UNESCO have approved the proposal of the Director General to hold a global dialogue to develop an ethical framework for the growing and largely unregulated Neurotechnology sector, which may threaten human rights and fundamental freedoms. A first international conference will be held at UNESCO Headquarters on 13 July 2023.
“Neurotechnology could help solve many health issues, but it could also access and manipulate people’s brains, and produce information about our identities, and our emotions. It could threaten our rights to human dignity, freedom of thought and privacy. There is an urgent need to establish a common ethical framework at the international level, as UNESCO has done for artificial intelligence,” said UNESCO Director-General Audrey Azoulay.
UNESCO’s international conference, taking place on 13 July [2023], will start exploring the immense potential of neurotechnology to solve neurological problems and mental disorders, while identifying the actions needed to address the threats it poses to human rights and fundamental freedoms. The dialogue will involve senior officials, policymakers, civil society organizations, academics and representatives of the private sector from all regions of the world.
Lay the foundations for a global ethical framework
The dialogue will also be informed by a report by UNESCO’s International Bioethics Committee (IBC) on the “Ethical Issues of Neurotechnology”, and a UNESCO study proposing first time evidence on the neurotechnology landscape, innovations, key actors worldwide and major trends.
The ultimate goal of the dialogue is to advance a better understanding of the ethical issues related to the governance of neurotechnology, informing the development of the ethical framework to be approved by 193 member states of UNESCO – similar to the way in which UNESCO established the global ethical frameworks on the human genome (1997), human genetic data (2003) and artificial intelligence (2021).
UNESCO’s global standard on the Ethics of Artificial Intelligence has been particularly effective and timely, given the latest developments related to Generative AI, the pervasiveness of AI technologies and the risks they pose to people, democracies, and jobs. The convergence of neural data and artificial intelligence poses particular challenges, as already recognized in UNESCO’s AI standard.
Neurotech could reduce the burden of disease…
Neurotechnology covers any kind of device or procedure which is designed to “access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of neural systems”. [1] Neurotechnological devices range from “wearables”, to non-invasive brain computer interfaces such as robotic limbs, to brain implants currently being developed [2] with the goal of treating disabilities such as paralysis.
One in eight people worldwide live with a mental or neurological disorder, triggering care-related costs that account for up to a third of total health expenses in developed countries. These burdens are growing in low- and middle-income countries too. Globally these expenses are expected to grow – the number of people aged over 60 is projected to double by 2050 to 2.1 billion (WHO 2022). Neurotechnology has the vast potential to reduce the number of deaths and disabilities caused by neurological disorders, such as Epilepsy, Alzheimer’s, Parkinson’s and Stroke.
… but also threaten Human Rights
Without ethical guardrails, these technologies can pose serious risks, as brain information can be accessed and manipulated, threatening fundamental rights and fundamental freedoms, which are central to the notion of human identity, freedom of thought, privacy, and memory. In its report published in 2021 [3], UNESCO’s IBC documents these risks and proposes concrete actions to address them.
Neural data – which capture the individual’s reactions and basic emotions – is in high demand in consumer markets. Unlike the data gathered on us by social media platforms, most neural data is generated unconsciously, therefore we cannot give our consent for its use. If sensitive data is extracted, and then falls into the wrong hands, the individual may suffer harmful consequences.
Brain-Computer-Interfaces (BCIs) implanted at a time during which a child or teenager is still undergoing neurodevelopment may disrupt the ‘normal’ maturation of the brain. It may be able to transform young minds, shaping their future identity with long-lasting, perhaps permanent, effects.
Memory modification techniques (MMT) may enable scientists to alter the content of a memory, reconstructing past events. For now, MMT relies on the use of drugs, but in the future it may be possible to insert chips into the brain. While this could be beneficial in the case of traumatised people, such practices can also distort an individual’s sense of personal identity.
Risk of exacerbating global inequalities and generating new ones
Currently 50% of Neurotech Companies are in the US, and 35% in Europe and the UK. Because neurotechnology could usher in a new generation of ‘super-humans’, this would further widen the education, skills, wealth and opportunities’ gap within and between countries, giving those with the most advanced technology an unfair advantage.
UNESCO will organize an International Conference on the Ethics of Neurotechnology on the theme “Building a framework to protect and promote human rights and fundamental freedoms” at UNESCO Headquarters in Paris, on 13 July 2023, from 9:00 [CET; Central European Time] in Room I.
The Conference will explore the immense potential of neurotechnology and address the ethical challenges it poses to human rights and fundamental freedoms. It will bring together policymakers and experts, representatives of civil society and UN organizations, academia, media, and private sector companies, to prepare a solid foundation for an ethical framework on the governance of neurotechnology.
UNESCO International Conference on Ethics of Neurotechnology: Building a framework to protect and promote human rights and fundamental freedoms
13 July 2023, 9:30 am – 6:30 pm [CET; Central European Time]
Location: UNESCO Headquarters, Paris, France
Room: Room I
Type: Cat II – Intergovernmental meeting, other than international conference of States
Arrangement type: Hybrid
Language(s): French, Spanish, English, Arabic
Contact: Rajarajeswari Pajany
A high-level session with ministers and policy makers focusing on policy actions and international cooperation will be featured in the Conference. Renowned experts will also be invited to discuss technological advancements in Neurotechnology and ethical challenges and human rights Implications. Two fireside chats will be organized to enrich the discussions focusing on the private sector, public awareness raising and public engagement. The Conference will also feature a new study of UNESCO’s Social and Human Sciences Sector shedding light on innovations in neurotechnology, key actors worldwide and key areas of development.
As one of the most promising technologies of our time, neurotechnology is providing new treatments and improving preventative and therapeutic options for millions of individuals suffering from neurological and mental illness. Neurotechnology is also transforming other aspects of our lives, from student learning and cognition to virtual and augmented reality systems and entertainment. While we celebrate these unprecedented opportunities, we must be vigilant against new challenges arising from the rapid and unregulated development and deployment of this innovative technology, including among others the risks to mental integrity, human dignity, personal identity, autonomy, fairness and equity, and mental privacy.
UNESCO has been at the forefront of promoting an ethical approach to neurotechnology. UNESCO’s International Bioethics Committee (IBC) has examined the benefits and drawbacks from an ethical perspective in a report published in December 2021. The Organization has also led UN-wide efforts on this topic, collaborating with other agencies and academic institutions to organize expert roundtables, raise public awareness and produce publications. With a global mandate on bioethics and ethics of science and technology, UNESCO has been asked by the IBC, its expert advisory body, to consider developing a global standard on this topic.
A July 13, 2023 agenda and a little Canadian content
I have a link to the ‘provisional programme‘ for “Towards an Ethical Framework in the Protection and Promotion of Human Rights and Fundamental Freedoms,” the July 13, 2023 UNESCO International Conference on Ethics of Neurotechnology. Keeping in mind that this could (and likely will) change,
13 July 2023, Room I, UNESCO HQ Paris, France,
9:00 – 9:15 Welcoming Remarks (TBC) • António Guterres, Secretary-General of the United Nations • Audrey Azoulay, Director-General of UNESCO
9:15 – 10:00 Keynote Addresses (TBC) • Gabriel Boric, President of Chile • Narendra Modi, Prime Minister of India • Pedro Sánchez Pérez-Castejón, Prime Minister of Spain • Volker Türk, UN High Commissioner for Human Rights • Amandeep Singh Gill, UN Secretary-General’s Envoy on Technology
…
10:15 – 11:00 Scene-Setting Address …
11:00 – 13:00 High-Level Session: Regulations and policy actions …
14:30 – 15:30 Expert Session: Technological advancement and opportunities …
15:45 – 16:30 Fireside Chat: Launch of the UNESCO publication “Unveiling the neurotechnology landscape: scientific advancements, innovations and major trends” …
16:30 – 17:30 Expert Session: Ethical challenges and human rights implications …
17:30 – 18:15 Fireside Chat: “Why neurotechnology matters for all” …
18:15 – 18:30 Closing Remarks …
While I haven’t included the speakers’ names (for the most part), I do want to note some Canadian participation in the person of Dr. Judy Illes from the University of British Columbia. She’s a Professor of Neurology, Distinguished University Scholar in Neuroethics, and Director of Neuroethics Canada, as well as President of the International Brain Initiative (IBI).
Illes is in the “Expert Session: Ethical challenges and human rights implications.”
If you have time, do look at the provisional programme just to get a sense of the range of speakers and their involvement in an astonishing array of organizations. E.g., there’s the IBI (in Judy Illes’s bio), which at this point is largely (and surprisingly) supported by (from About Us) “Fonds de recherche du Québec, and the Institute of Neuroscience, Mental Health and Addiction of the Canadian Institutes of Health Research. Operational support for the IBI is also provided by the Japan Brain/MINDS Beyond and WorldView Studios”.
More food for thought
Neither the UNESCO July 2023 meeting, which tilts, understandably, toward social justice issues vis-à-vis neurotechnology, nor the Canadian Science Policy Centre (CSPC) May 2023 meeting (see my May 12, 2023 posting: Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023) seems, based on the publicly available agendas, to mention practical matters such as an implant company going out of business. Still, it’s possible it will be mentioned at the UNESCO conference. Unfortunately, the May 2023 CSPC panel has not been posted online.
Taking a look at business practices seems particularly urgent given this news from a May 25, 2023 article by Rachael Levy, Marisa Taylor, and Akriti Sharma for Reuters, Note: A link has been removed,
Elon Musk’s Neuralink received U.S. Food and Drug Administration (FDA) clearance for its first-in-human clinical trial, a critical milestone for the brain-implant startup as it faces U.S. probes over its handling of animal experiments.
The FDA approval “represents an important first step that will one day allow our technology to help many people,” Neuralink said in a tweet on Thursday, without disclosing details of the planned study. It added it is not recruiting for the trial yet and said more details would be available soon.
The FDA acknowledged in a statement that the agency cleared Neuralink to use its brain implant and surgical robot for trials on patients but declined to provide more details.
Neuralink and Musk did not respond to Reuters requests for comment.
The critical milestone comes as Neuralink faces federal scrutiny [emphasis mine] following Reuters reports about the company’s animal experiments.
Neuralink employees told Reuters last year that the company was rushing and botching surgeries on monkeys, pigs and sheep, resulting in more animal deaths [emphasis mine] than necessary, as Musk pressured staff to receive FDA approval. The animal experiments produced data intended to support the company’s application for human trials, the sources said.
…
If you have time, it’s well worth reading the article in its entirety. Neuralink is being investigated for a number of alleged violations.

The Associated Press (AP) also covered the announcement; here’s an excerpt,
Elon Musk’s brain implant company, Neuralink, says it’s gotten permission from U.S. regulators to begin testing its device in people.
The company made the announcement on Twitter Thursday evening but has provided no details about a potential study, which was not listed on the U.S. government database of clinical trials.
Officials with the Food and Drug Administration (FDA) wouldn’t confirm or deny whether it had granted the approval, but press officer Carly Kempler said in an email that the agency “acknowledges and understands” that Musk’s company made the announcement. [emphases mine]
…
The AP article offers additional context on the international race to develop brain-computer interfaces.
Update: It seems the FDA gave its approval later on May 26, 2023. (See the May 26, 2023 updated Reuters article by Rachael Levy, Marisa Taylor and Akriti Sharma and/or Paul Tuffley’s (lecturer at Griffith University) May 29, 2023 essay on The Conversation.)
For anyone who’s curious about previous efforts to examine ethics and social implications with regard to implants, prosthetics (Note: Increasingly, prosthetics include a neural component), and the brain, I have a couple of older posts: “Prosthetics and the human brain,” a March 8, 2013 posting, and “The ultimate DIY: ‘How to build a robotic man’ on BBC 4,” a January 30, 2013 posting.
It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that it’s not new. First, the ‘non-human authors’ and then the panic(s). *What follows the ‘non-human authors’ section is essentially a survey of the situation/panic.*
How to handle non-human authors (ChatGPT and other AI agents)—the medical edition
The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,
Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1
In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.
Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11
…
This is a link to and a citation for the JAMA editorial,
Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society, founder of the ASU Future of Being Human initiative, and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,
Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.
…
We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.
To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.
Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,
…
ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.
…
Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.
Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.
Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …
…
Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.
More than writing: emergent behaviour
The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,
What movie do these emojis describe?
That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.
“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.
…
“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.
Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.
…
Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.
…
But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.
As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”
…
There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.
Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,
Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI
…
Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”
Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing.
…
Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.
He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.
Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.
…
There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,
There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.
Nowadays, he’s not so sure.
“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”
…
For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.
Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”
But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes.
…
Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good.
“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.
“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”
…
Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”
“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.
He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.
…
“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.
Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,
As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.
Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.
…
Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.
“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms.
“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”
“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”
So when is all this happening?
“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].
While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.
But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.
The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.
…
As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.
Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.
“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.” The estimate for 2030 is more than $2 trillion.
…
This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.
And that was just this week.
…
“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”
Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”
Not everyone was impressed. From a May 5, 2023 Fast Company article by Chan (more from Chan below), Note: Links have been removed,
Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.
But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.
“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)
…
Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.
“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”
Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …
…
… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them.
Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]
…
Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).
Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”
The last two existential AI panics
The term “autumn-years redemption tour” is striking, and while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for Existential Risk at the University of Cambridge.
Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,
Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]
The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,
Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”
Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.
Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.
…
Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.
Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence,] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.
To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,
Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.
Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.
The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.
…
Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.
According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.
The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.
Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.
The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.
…
The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,
The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”
It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.
In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.
IEEE members have expressed a similar diversity of opinions.
…
There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,
In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.
…
As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.
You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project, and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.
Finally (but not quite)
Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.
Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,
The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.
Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.
It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.
Questioning doesn’t mean rejecting
Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life
…
In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.
The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.
Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.
…
In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.
In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.
In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”
…
Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.
I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.
…
Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist, Alyssa Bereznak.
I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.
In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”
…
The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.
…
All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.
The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)
…
Should you live in Vancouver (Canada) and be attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,
…
If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.
On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.
The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.
Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.
Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts.
…
This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499, depending on when you make your purchase. From the Multiplatform AI Conference homepage,
Event Speakers
Max Sills General Counsel at Midjourney
From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.
…
So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme for producing images. According to the company’s Wikipedia entry, it is being sued, Note: Links have been removed,
…
On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]
…
My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.
As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and the likes of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context; his larger issue is about proposals for legislation; Note 2: Links have been removed),
…
Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.
…
For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”
Addendum (June 1, 2023)
Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. This was far briefer than the previous March 2023 warning, from the Center for AI Safety’s “Statement on AI Risk” webpage,
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …
Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,
The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.
But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.
TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.
“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.
“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.
…
The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.
“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”
…
Fear, after all, is a powerful sales tool.
Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.
*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.
It seems chimeras are of more interest these days. In all likelihood that has something to do with the fellow who received a transplant of a pig’s heart in January 2022 (he died in March 2022).
For those who aren’t familiar with the term, a chimera is an entity with two different DNA (deoxyribonucleic acid) identities. In short, if you get a DNA sample from the heart, it’s different from a DNA sample obtained from a cheek swab. This contrasts with a hybrid such as a mule (donkey/horse), whose DNA samples show a consistent identity throughout its body.
A new report on the ethics of crossing species boundaries by inserting human cells into nonhuman animals – research surrounded by debate – makes recommendations clarifying the ethical issues and calling for improved oversight of this work.
The report, “Creating Chimeric Animals — Seeking Clarity On Ethics and Oversight,” was developed by an interdisciplinary team, with funding from the National Institutes of Health. Principal investigators are Josephine Johnston and Karen Maschke, research scholars at The Hastings Center, and Insoo Hyun, director of the Center for Life Sciences and Public Learning at the Museum of Science in Boston, formerly of Case Western Reserve University.
Advances in human stem cell science and gene editing enable scientists to insert human cells more extensively and precisely into nonhuman animals, creating “chimeric” animals, embryos, and other organisms that contain a mix of human and nonhuman cells.
Many people hope that this research will yield enormous benefits, including better models of human disease, inexpensive sources of human eggs and embryos for research, and sources of tissues and organs suitable for transplantation into humans.
But there are ethical concerns about this type of research, which raise questions such as whether the moral status of nonhuman animals is altered by the insertion of human stem cells, whether these studies should be subject to additional prohibitions or oversight, and whether this kind of research should be done at all.
The report found that:
Animal welfare is a primary ethical issue and should be a focus of ethical and policy analysis as well as the governance and oversight of chimeric research.
Chimeric studies raise the possibility of unique or novel harms resulting from the insertion and development of human stem cells in nonhuman animals, particularly when those cells develop in the brain or central nervous system.
Oversight and governance of chimeric research are siloed, and public communication is minimal. Public communication should be improved, communication between the different committees involved in oversight at each institution should be enhanced, and a national mechanism created for those involved in oversight of these studies.
Scientists, journalists, bioethicists, and others writing about chimeric research should use precise and accessible language that clarifies rather than obscures the ethical issues at stake. The terms “chimera,” which in Greek mythology refers to a fire-breathing monster, and “humanization” are examples of ethically laden, or overly broad language to be avoided.
The Research Team
The Hastings Center
• Josephine Johnston • Karen J. Maschke • Carolyn P. Neuhaus • Margaret M. Matthews • Isabel Bolo
Case Western Reserve University • Insoo Hyun (now at Museum of Science, Boston) • Patricia Marshall • Kaitlynn P. Craig
The Work Group
• Kara Drolet, Oregon Health & Science University • Henry T. Greely, Stanford University • Lori R. Hill, MD Anderson Cancer Center • Amy Hinterberger, King’s College London • Elisa A. Hurley, Public Responsibility in Medicine and Research • Robert Kesterson, University of Alabama at Birmingham • Jonathan Kimmelman, McGill University • Nancy M. P. King, Wake Forest University School of Medicine • Geoffrey Lomax, California Institute for Regenerative Medicine • Melissa J. Lopes, Harvard University Embryonic Stem Cell Research Oversight Committee • P. Pearl O’Rourke, Harvard Medical School • Brendan Parent, NYU Grossman School of Medicine • Steven Peckman, University of California, Los Angeles • Monika Piotrowska, State University of New York at Albany • May Schwarz, The Salk Institute for Biological Studies • Jeff Sebo, New York University • Chris Stodgell, University of Rochester • Robert Streiffer, University of Wisconsin-Madison • Lorenz Studer, Memorial Sloan Kettering Cancer Center • Amy Wilkerson, The Rockefeller University
Here’s a link to and a citation for the report,
Creating Chimeric Animals: Seeking Clarity on Ethics and Oversight, edited by Karen J. Maschke, Margaret M. Matthews, Kaitlynn P. Craig, Carolyn P. Neuhaus, Insoo Hyun, and Josephine Johnston, The Hastings Center Report, Volume 52, Issue S2 (Special Report), November-December 2022. First published: 09 December 2022.
Eighteen cartoons have been selected as finalists in the 2023 Ethics Cartooning Competition, an annual contest sponsored by the Morgridge Institute for Research.
Participants from the University of Wisconsin-Madison and affiliated biomedical centers or institutes submitted their work, then a panel of judges selected the final cartoons for display to the public, who is invited to vote and help determine the 2023 winners.
This year’s cartoons depict a variety of research ethics topics, such as the ethics of scientific publishing, research funding and environments, questionable research practices, drug pricing, the ethics of experimenting on animals, social impacts of scientific research, and scientists as responsible members of society.
The Morgridge Ethics Cartooning Competition, developed by Morgridge Bioethics Scholar in Residence Pilar Ossorio, encourages scientists to shed light on timely or recurring issues that arise in scientific research.
“Ethical issues are all around us,” says Ossorio. “An event like the competition encourages people to identify some of those issues, perhaps talk about them with friends and colleagues, and think about how to communicate about those issues with a broader community of people.”
The one above hit home as I commented on a local (Vancouver, Canada) billionaire’s (Chip Wilson of Lululemon) announcement that he was spending $100M on research to treat a rare disease (facio-scapulo-humeral muscular dystrophy [FSHD]) he has. (See my April 5, 2022 posting, scroll down about 80% of the way to the subhead, Money makes the world go around.)
And this too caught my eye,
It reminds me that I’ve been meaning to do a piece on science and racism for the last few years. Maybe this year, eh?