
Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report

Launched on Thursday, July 13, 2023 during UNESCO’s (United Nations Educational, Scientific and Cultural Organization) “Global dialogue on the ethics of neurotechnology,” this report ties together the usual measures of national scientific supremacy (number of papers published and number of patents filed) with information on corporate investment in the field. Consequently, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends” by Daniel S. Hain, Roman Jurowetzki, Mariagrazia Squicciarini, and Lihui Xu provides better insight into the international neurotechnology scene than is sometimes found in these kinds of reports. By the way, the report is open access.

Here’s what I mean, from the report’s short summary,

Since 2013, government investments in this field have exceeded $6 billion. Private investment has also seen significant growth, with annual funding experiencing a 22-fold increase from 2010 to 2020, reaching $7.3 billion and totaling $33.2 billion.

This investment has translated into a 35-fold growth in neuroscience publications between 2000-2021 and 20-fold growth in innovations between 2000-2020, as proxied by patents. However, not all are poised to benefit from such developments, as big divides emerge.

Over 80% of high-impact neuroscience publications are produced by only ten countries, while 70% of countries contributed fewer than 10 such papers over the period considered. Similarly, five countries only hold 87% of IP5 neurotech patents.

This report sheds light on the neurotechnology ecosystem, that is, what is being developed, where and by whom, and informs about how neurotechnology interacts with other technological trajectories, especially Artificial Intelligence [emphasis mine]. [p. 2]

The money aspect is eye-opening even when you already have your suspicions. Also, it’s not entirely unexpected to learn that only ten countries produce over 80% of the high impact neurotech papers and that only five countries hold 87% of the IP5 neurotech patents but it is stunning to see it in context. (If you’re not familiar with the term ‘IP5 patents’, scroll down in this post to the relevant subhead. Hint: It means the patent was filed in one of the top five jurisdictions; I’ll leave you to guess which ones those might be.)

“Since 2013 …” isn’t quite as informative as the authors may have hoped. I wish they had given a time frame for government investments similar to what they did for corporate investments (e.g., 2010 – 2020). Also, is the $6B (likely in USD) government investment cumulative or an estimated annual number? To sum up, I would have appreciated parallel structure and specificity.

Nitpicks aside, there’s some very good material intended for policy makers. That said, some of the analysis is beyond me; I haven’t used anything even somewhat close to their analytical tools in years and years. This commentary reflects my interests and a very rapid reading. One last thing: this is being written from a Canadian perspective. With those caveats in mind, here’s some of what I found.

A definition, social issues, country statistics, and more

There’s a definition for neurotechnology and a second mention of artificial intelligence being used in concert with neurotechnology. From the report’s executive summary,

Neurotechnology consists of devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of animals or human beings. It is poised to revolutionize our understanding of the brain and to unlock innovative solutions to treat a wide range of diseases and disorders.

Similarly to Artificial Intelligence (AI), and also due to its convergence with AI, neurotechnology may have profound societal and economic impact, beyond the medical realm. As neurotechnology directly relates to the brain, it triggers ethical considerations about fundamental aspects of human existence, including mental integrity, human dignity, personal identity, freedom of thought, autonomy, and privacy [emphases mine]. Its potential for enhancement purposes and its accessibility further amplifies its prospect social and societal implications.

The recent discussions held at UNESCO’s Executive Board further shows Member States’ desire to address the ethics and governance of neurotechnology through the elaboration of a new standard-setting instrument on the ethics of neurotechnology, to be adopted in 2025. To this end, it is important to explore the neurotechnology landscape, delineate its boundaries, key players, and trends, and shed light on neurotech’s scientific and technological developments. [p. 7]

Here’s how they sourced the data for the report,

The present report addresses such a need for evidence in support of policy making in relation to neurotechnology by devising and implementing a novel methodology on data from scientific articles and patents:

● We detect topics over time and extract relevant keywords using a transformer-based language model fine-tuned for scientific text. Publication data for the period 2000-2021 are sourced from the Scopus database and encompass journal articles and conference proceedings in English. The 2,000 most cited publications per year are further used in in-depth content analysis.
● Keywords are identified through Named Entity Recognition and used to generate search queries for conducting a semantic search on patents’ titles and abstracts, using another language model developed for patent text. This allows us to identify patents associated with the identified neuroscience publications and their topics. The patent data used in the present analysis are sourced from the European Patent Office’s Worldwide Patent Statistical Database (PATSTAT). We consider IP5 patents filed between 2000-2020 having an English language abstract and exclude patents solely related to pharmaceuticals.

This approach allows mapping the advancements detailed in scientific literature to the technological applications contained in patent applications, allowing for an analysis of the linkages between science and technology. This almost fully automated novel approach allows repeating the analysis as neurotechnology evolves. [pp. 8-9]
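
The report doesn’t include code, but the second step (keyword-driven semantic search over patent titles and abstracts) can be sketched roughly as follows. Everything in the sketch is an illustrative assumption on my part: the embedding model, the similarity threshold, and the toy records stand in for the patent-specific language model and PATSTAT data the authors actually used.

```python
# Rough sketch of the report's second step: turning NER keywords into semantic
# queries against patent titles and abstracts. The model name, threshold, and
# toy records below are placeholders, not the authors' actual pipeline.
from sentence_transformers import SentenceTransformer, util

# Stand-in for the "language model developed for patent text" the report mentions.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Keywords extracted (via Named Entity Recognition) from highly cited publications.
topic_keywords = {
    "brain-computer interfaces": ["EEG decoding", "motor imagery", "neural signal classification"],
    "deep brain stimulation": ["implanted electrode", "Parkinson tremor stimulation"],
}

# Patent records as (id, title + abstract); in the report these come from PATSTAT.
patents = [
    ("EP0001", "Apparatus for decoding electroencephalography signals to control a prosthesis"),
    ("US0002", "Resistive memory cell implementing an artificial synapse for neural networks"),
]

patent_embeddings = model.encode([text for _, text in patents], convert_to_tensor=True)

for topic, keywords in topic_keywords.items():
    query_embedding = model.encode("; ".join(keywords), convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, patent_embeddings)[0]
    # Keep patents whose text sits close to the topic's keywords in embedding space.
    matches = [patents[i][0] for i, score in enumerate(scores) if float(score) > 0.35]
    print(topic, "->", matches)
```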

Findings in bullet points,

Key stylized facts are:
● The field of neuroscience has witnessed a remarkable surge in the overall number of publications since 2000, exhibiting a nearly 35-fold increase over the period considered, reaching 1.2 million in 2021. The annual number of publications in neuroscience has nearly tripled since 2000, exceeding 90,000 publications a year in 2021. This increase became even more pronounced since 2019.
● The United States leads in terms of neuroscience publication output (40%), followed by the United Kingdom (9%), Germany (7%), China (5%), Canada (4%), Japan (4%), Italy (4%), France (4%), the Netherlands (3%), and Australia (3%). These countries account for over 80% of neuroscience publications from 2000 to 2021.
● Big divides emerge, with 70% of countries in the world having less than 10 high-impact neuroscience publications between 2000 to 2021.
● Specific neurotechnology-related research trends between 2000 and 2021 include:
○ An increase in Brain-Computer Interface (BCI) research around 2010, maintaining a consistent presence ever since.
○ A significant surge in Epilepsy Detection research in 2017 and 2018, reflecting the increased use of AI and machine learning in healthcare.
○ Consistent interest in Neuroimaging Analysis, which peaks around 2004, likely because of its importance in brain activity and language comprehension studies.
○ While peaking in 2016 and 2017, Deep Brain Stimulation (DBS) remains a persistent area of research, underlining its potential in treating conditions like Parkinson’s disease and essential tremor.
● Between 2000 and 2020, the total number of patent applications in this field increased significantly, experiencing a 20-fold increase from less than 500 to over 12,000. In terms of annual figures, a consistent upward trend in neurotechnology-related patent applications emerges, with a notable doubling observed between 2015 and 2020.
● The United States account for nearly half of all worldwide patent applications (47%). Other major contributors include South Korea (11%), China (10%), Japan (7%), Germany (7%), and France (5%). These five countries together account for 87% of IP5 neurotech patents applied between 2000 and 2020.
○ The United States has historically led the field, with a peak around 2010, a decline towards 2015, and a recovery up to 2020.
○ South Korea emerged as a significant contributor after 1990, overtaking Germany in the late 2000s to become the second-largest developer of neurotechnology. By the late 2010s, South Korea’s annual neurotechnology patent applications approximated those of the United States.
○ China exhibits a sharp increase in neurotechnology patent applications in the mid-2010s, bringing it on par with the United States in terms of application numbers.
● The United States ranks highest in both scientific publications and patents, indicating their strong ability to transform knowledge into marketable inventions. China, France, and Korea excel in leveraging knowledge to develop patented innovations. Conversely, countries such as the United Kingdom, Germany, Italy, Canada, Brazil, and Australia lag behind in effectively translating neurotech knowledge into patentable innovations.
● In terms of patent quality measured by forward citations, the leading countries are Germany, US, China, Japan, and Korea.
● A breakdown of patents by technology field reveals that Computer Technology is the most important field in neurotechnology, exceeding Medical Technology, Biotechnology, and Pharmaceuticals. The growing importance of algorithmic applications, including neural computing techniques, also emerges by looking at the increase in patent applications in these fields between 2015-2020. Compared to the reference year, computer technologies-related patents in neurotech increased by 355% and by 92% in medical technology.
● An analysis of the specialization patterns of the top-5 countries developing neurotechnologies reveals that Germany has been specializing in chemistry-related technology fields, whereas Asian countries, particularly South Korea and China, focus on computer science and electrical engineering-related fields. The United States exhibits a balanced configuration with specializations in both chemistry and computer science-related fields.
● The entities – i.e. both companies and other institutions – leading worldwide innovation in the neurotech space are: IBM (126 IP5 patents, US), Ping An Technology (105 IP5 patents, CH), Fujitsu (78 IP5 patents, JP), Microsoft (76 IP5 patents, US)1, Samsung (72 IP5 patents, KR), Sony (69 IP5 patents, JP) and Intel (64 IP5 patents, US).

This report further proposes a pioneering taxonomy of neurotechnologies based on International Patent Classification (IPC) codes.

• 67 distinct patent clusters in neurotechnology are identified, which mirror the diverse research and development landscape of the field. The 20 most prominent neurotechnology groups, particularly in areas like multimodal neuromodulation, seizure prediction, neuromorphic computing [emphasis mine], and brain-computer interfaces, point to potential strategic areas for research and commercialization.
• The variety of patent clusters identified mirrors the breadth of neurotechnology’s potential applications, from medical imaging and limb rehabilitation to sleep optimization and assistive exoskeletons.
• The development of a baseline IPC-based taxonomy for neurotechnology offers a structured framework that enriches our understanding of this technological space, and can facilitate research, development and analysis. The identified key groups mirror the interdisciplinary nature of neurotechnology and underscores the potential impact of neurotechnology, not only in healthcare but also in areas like information technology and biomaterials, with non-negligible effects over societies and economies.

1 If we consider Microsoft Technology Licensing LLC and Microsoft Corporation as being under the same umbrella, Microsoft leads worldwide developments with 127 IP5 patents. Similarly, if we were to consider that Siemens AG and Siemens Healthcare GmbH belong to the same conglomerate, Siemens would appear much higher in the ranking, in third position, with 84 IP5 patents. The distribution of intellectual property assets across companies belonging to the same conglomerate is frequent and mirrors strategic as well as operational needs and features, among others. [pp. 9-11]

Surprises and comments

Interesting and helpful to learn that “neurotechnology interacts with other technological trajectories, especially Artificial Intelligence;” this has changed and improved my understanding of neurotechnology.

It was unexpected to find Canada in the top ten countries producing neuroscience papers. However, finding out that the country lags in translating its ‘neuro’ knowledge into patentable innovation is not entirely a surprise.

It can’t be an accident that countries with major ‘electronics and computing’ companies lead in patents. These companies do have researchers but they also buy startups to acquire patents. They (and ‘patent trolls’) will also file patents preemptively. For the patent trolls, it’s a moneymaking proposition and for the large companies, it’s a way of protecting their own interests and/or (I imagine) forcing a sale.

The mention of neuromorphic (brainlike) computing in the taxonomy section was surprising and puzzling. Up to this point, I’d thought of neuromorphic computing as a kind of alternative or addition to standard computing, but the authors have blurred the lines as per UNESCO’s definition of neurotechnology (specifically, “… emulate the structure and function of the neural systems of animals or human beings”). Again, this report is broadening my understanding of neurotechnology. Of course, it required two instances (the definition and the taxonomy) before I quite grasped it.

What’s puzzling is that neuromorphic engineering, a broader term that includes neuromorphic computing, isn’t used or mentioned. (For an explanation of the terms neuromorphic computing and neuromorphic engineering, there’s my June 23, 2023 posting, “Neuromorphic engineering: an overview.”)

The report

I won’t have time for everything. Here are some of the highlights from my admittedly personal perspective.

It’s not only about curing disease

From the report,

Neurotechnology’s applications however extend well beyond medicine [emphasis mine], and span from research, to education, to the workplace, and even people’s everyday life. Neurotechnology-based solutions may enhance learning and skill acquisition and boost focus through brain stimulation techniques. For instance, early research finds that brain-zapping caps appear to boost memory for at least one month (Berkeley, 2022). This could one day be used at home to enhance memory functions [emphasis mine]. They can further enable new ways to interact with the many digital devices we use in everyday life, transforming the way we work, live and interact. One example is the Sound Awareness wristband developed by a Stanford team (Neosensory, 2022) which enables individuals to “hear” by converting sound into tactile feedback, so that sound impaired individuals can perceive spoken words through their skin. Takagi and Nishimoto (2023) analyzed the brain scans taken through Magnetic Resonance Imaging (MRI) as individuals were shown thousands of images. They then trained a generative AI tool called Stable Diffusion on the brain scan data of the study’s participants, thus creating images that roughly corresponded to the real images shown. While this does not correspond to reading the mind of people, at least not yet, and some limitations of the study have been highlighted (Parshall, 2023), it nevertheless represents an important step towards developing the capability to interface human thoughts with computers [emphasis mine], via brain data interpretation.

While the above examples may sound somewhat like science fiction, the recent uptake of generative Artificial Intelligence applications and of large language models such as ChatGPT or Bard, demonstrates that the seemingly impossible can quickly become an everyday reality. At present, anyone can purchase online electroencephalogram (EEG) devices for a few hundred dollars [emphasis mine], to measure the electrical activity of their brain for meditation, gaming, or other purposes. [pp. 14-15]

This is a very impressive achievement. Some of the research cited was published earlier this year (2023). The extraordinary speed is a testament to the efforts of the authors and their teams. It’s also a testament to how quickly the field is moving.

I’m glad to see the mention of and focus on consumer neurotechnology. (While the authors don’t speculate, I am free to do so.) Consumer neurotechnology could be viewed as one of the steps toward normalizing a cyborg future for all of us. Yes, we have books, television programmes, movies, and video games, which all normalize the idea, but the people depicted have been severely injured and require the augmentation. With consumer neurotechnology, you have easily accessible devices being used to enhance people who aren’t injured; they just want to be ‘better’.

This phrase seemed particularly striking “… an important step towards developing the capability to interface human thoughts with computers” in light of some claims made by the Australian military in my June 13, 2023 posting “Mind-controlled robots based on graphene: an Australian research story.” (My posting has an embedded video demonstrating the Brain Robotic Interface (BRI) in action. Also, see the paragraph below the video for my ‘measured’ response.)

There’s no mention of the military in the report, which seems more like a deliberate than an inadvertent omission given the importance of military innovation where technology is concerned.

This section gives a good overview of government initiatives (in the report it’s followed by a table of the programmes),

Thanks to the promises it holds, neurotechnology has garnered significant attention from both governments and the private sector and is considered by many as an investment priority. According to the International Brain Initiative (IBI), brain research funding has become increasingly important over the past ten years, leading to a rise in large-scale state-led programs aimed at advancing brain intervention technologies (International Brain Initiative, 2021). Since 2013, initiatives such as the United States’ Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the European Union’s Human Brain Project (HBP), as well as major national initiatives in China, Japan and South Korea have been launched with significant funding support from the respective governments. The Canadian Brain Research Strategy, initially operated as a multi-stakeholder coalition on brain research, is also actively seeking funding support from the government to transform itself into a national research initiative (Canadian Brain Research Strategy, 2022). A similar proposal is also seen in the case of the Australian Brain Alliance, calling for the establishment of an Australian Brain Initiative (Australian Academy of Science, n.d.). [pp. 15-16]

Privacy

There are some concerns such as these,

Beyond the medical realm, research suggests that emotional responses of consumers related to preferences and risks can be concurrently tracked by neurotechnology, such as neuroimaging, and that neural data can better predict market-level outcomes than traditional behavioral data (Karmarkar and Yoon, 2016). As such, neural data is increasingly sought after in the consumer market for purposes such as digital phenotyping, neurogaming, and neuromarketing (UNESCO, 2021). This surge in demand gives rise to risks like hacking, unauthorized data reuse, extraction of privacy-sensitive information, digital surveillance, criminal exploitation of data, and other forms of abuse. These risks prompt the question of whether neural data needs distinct definition and safeguarding measures.

These issues are particularly relevant today as a wide range of electroencephalogram (EEG) headsets that can be used at home are now available in consumer markets for purposes that range from meditation assistance to controlling electronic devices through the mind. Imagine an individual is using one of these devices to play a neurofeedback game, which records the person’s brain waves during the game. Without the person being aware, the system can also identify the patterns associated with an undiagnosed mental health condition, such as anxiety. If the game company sells this data to third parties, e.g. health insurance providers, this may lead to an increase of insurance fees based on undisclosed information. This hypothetical situation would represent a clear violation of mental privacy and of unethical use of neural data.

Another example is in the field of advertising, where companies are increasingly interested in using neuroimaging to better understand consumers’ responses to their products or advertisements, a practice known as neuromarketing. For instance, a company might use neural data to determine which advertisements elicit the most positive emotional responses in consumers. While this can help companies improve their marketing strategies, it raises significant concerns about mental privacy. Questions arise in relation to consumers being aware or not that their neural data is being used, and in the extent to which this can lead to manipulative advertising practices that unfairly exploit unconscious preferences. Such potential abuses underscore the need for explicit consent and rigorous data protection measures in the use of neurotechnology for neuromarketing purposes. [pp. 21-22]

Legalities

Some countries already have laws and regulations regarding neurotechnology data,

At the national level, only a few countries have enacted laws and regulations to protect mental integrity or have included neuro-data in personal data protection laws (UNESCO, University of Milan-Bicocca (Italy) and State University of New York – Downstate Health Sciences University, 2023). Examples are the constitutional reform undertaken by Chile (Republic of Chile, 2021), the Charter for the responsible development of neurotechnologies of the Government of France (Government of France, 2022), and the Digital Rights Charter of the Government of Spain (Government of Spain, 2021). They propose different approaches to the regulation and protection of human rights in relation to neurotechnology. Countries such as the UK are also examining under which circumstances neural data may be considered as a special category of data under the general data protection framework (i.e. UK’s GDPR) (UK’s Information Commissioner’s Office, 2023) [p. 24]

As you can see, these are recent laws. There doesn’t seem to be any attempt to do the same here in Canada, even though there is an act being reviewed in Parliament that could conceivably include neural data. This is from my May 1, 2023 posting,

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). [emphasis added July 11, 2023] You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

My focus at the time was artificial intelligence and, now, after reading this UNESCO report and briefly looking at the Innovation, Science and Economic Development (ISED) Canada summary and a detailed series of descriptions of the act on ISED’s Canada’s Digital Charter webpage, I don’t see anything that specifies neural data but it’s not excluded either.

IP5 patents

Here’s the explanation (the footnote is included at the end of the excerpt),

IP5 patents represent a subset of overall patents filed worldwide, which have the characteristic of having been filed in at least one top intellectual property office (IPO) worldwide (the so-called IP5, namely the Chinese National Intellectual Property Administration, CNIPA (formerly SIPO); the European Patent Office, EPO; the Japan Patent Office, JPO; the Korean Intellectual Property Office, KIPO; and the United States Patent and Trademark Office, USPTO) as well as another country, which may or may not be an IP5. This signals their potential applicability worldwide, as their inventiveness and industrial viability have been validated by at least two leading IPOs. This gives these patents a sort of “quality” check, also since patenting inventions is costly and if applicants try to protect the same invention in several parts of the world, this normally mirrors that the applicant has expectations about their importance and expected value. If we were to conduct the same analysis using information about individually considered patents applied worldwide, i.e. without filtering for quality nor considering patent families, we would risk conducting a biased analysis based on duplicated data. Also, as patentability standards vary across countries and IPOs, and what matters for patentability is the existence (or not) of prior art in the IPO considered, we would risk mixing real innovations with patents related to catching up phenomena in countries that are not at the forefront of the technology considered.

9 The five IP offices (IP5) is a forum of the five largest intellectual property offices in the world that was set up to improve the efficiency of the examination process for patents worldwide. The IP5 Offices together handle about 80% of the world’s patent applications, and 95% of all work carried out under the Patent Cooperation Treaty (PCT), see http://www.fiveipoffices.org. (Dernis et al., 2015) [p. 31]
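
In data terms, this filter is simple: keep only patents (or patent families) with at least one filing at an IP5 office and at least one filing somewhere else. Here is a hedged toy sketch of that criterion, with column names I invented for clarity rather than PATSTAT’s real schema:

```python
# Toy illustration of the IP5 filter described above. Column names are invented
# for clarity and do not match PATSTAT's real schema.
import pandas as pd

IP5_OFFICES = {"CNIPA", "EPO", "JPO", "KIPO", "USPTO"}

filings = pd.DataFrame({
    "family_id": [1, 1, 2, 3, 3],
    "office":    ["USPTO", "CIPO", "USPTO", "EPO", "JPO"],  # CIPO = Canada, not an IP5 office
})

def qualifies_as_ip5_patent(offices: set) -> bool:
    """Filed at one or more IP5 offices AND at one or more additional offices."""
    return len(offices & IP5_OFFICES) >= 1 and len(offices) >= 2

keep = filings.groupby("family_id")["office"].apply(lambda s: qualifies_as_ip5_patent(set(s)))
print(keep[keep].index.tolist())  # families 1 and 3 qualify; family 2 (single filing) does not
```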

AI assistance on this report

As noted earlier, I have next to no experience with these analytical tools, having not attempted this kind of work in several years. Here’s an example of what they were doing,

We utilize a combination of text embeddings based on Bidirectional Encoder Representations from Transformer (BERT), dimensionality reduction, and hierarchical clustering inspired by the BERTopic methodology to identify latent themes within research literature. Latent themes or topics in the context of topic modeling represent clusters of words that frequently appear together within a collection of documents (Blei, 2012). These groupings are not explicitly labeled but are inferred through computational analysis examining patterns in word usage. These themes are ‘hidden’ within the text, only to be revealed through this analysis. …

We further utilize OpenAI’s GPT-4 model to enrich our understanding of topics’ keywords and to generate topic labels (OpenAI, 2023), thus supplementing expert review of the broad interdisciplinary corpus. Recently, GPT-4 has shown impressive results in medical contexts across various evaluations (Nori et al., 2023), making it a useful tool to enhance the information obtained from prior analysis stages, and to complement them. The automated process enhances the evaluation workflow, effectively emphasizing neuroscience themes pertinent to potential neurotechnology patents. Notwithstanding existing concerns about hallucinations (Lee, Bubeck and Petro, 2023) and errors in generative AI models, this methodology employs the GPT-4 model for summarization and interpretation tasks, which significantly mitigates the likelihood of hallucinations. Since the model is constrained to the context provided by the keyword collections, it limits the potential for fabricating information outside of the specified boundaries, thereby enhancing the accuracy and reliability of the output. [pp. 33-34]
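
For readers who want a concrete picture of what that “embeddings, dimensionality reduction, and hierarchical clustering” pipeline looks like in practice, here is a minimal sketch. It uses small off-the-shelf components and toy abstracts of my own; the report’s actual corpus runs to roughly 1.2 million publications, and its models and parameters are certainly tuned differently.

```python
# Minimal sketch of the embed -> reduce -> cluster idea behind a BERTopic-style
# analysis. The model, parameters, and sample abstracts are illustrative only;
# the report's corpus is roughly 1.2 million publications.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

abstracts = [
    "EEG-based brain-computer interface decodes motor imagery in real time",
    "A brain-computer interface controls a robotic arm from cortical signals",
    "Deep brain stimulation reduces tremor in Parkinson's disease patients",
    "Electrode targeting strategies for deep brain stimulation therapy",
]

embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(abstracts)  # BERT-style text embeddings
reduced = PCA(n_components=2).fit_transform(embeddings)                 # dimensionality reduction
labels = AgglomerativeClustering(n_clusters=2).fit_predict(reduced)     # hierarchical clustering

for cluster_id in np.unique(labels):
    members = [text for text, label in zip(abstracts, labels) if label == cluster_id]
    print(f"latent theme {cluster_id}: {members}")
```

In the report’s workflow, labeling each resulting cluster (the step my print loop only gestures at) is where GPT-4 comes in, turning a cluster’s top keywords into a readable topic name.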

I couldn’t resist adding the ChatGPT paragraph given all of the recent hoopla about it.

Multimodal neuromodulation and neuromorphic computing patents

I think this gives a pretty good indication of the activity on the patent front,

The largest, coherent topic, termed “multimodal neuromodulation,” comprises 535 patents detailing methodologies for deep or superficial brain stimulation designed to address neurological and psychiatric ailments. These patented technologies interact with various points in neural circuits to induce either Long-Term Potentiation (LTP) or Long-Term Depression (LTD), offering treatment for conditions such as obsession, compulsion, anxiety, depression, Parkinson’s disease, and other movement disorders. The modalities encompass implanted deep-brain stimulators (DBS), Transcranial Magnetic Stimulation (TMS), and transcranial Direct Current Stimulation (tDCS). Among the most representative documents for this cluster are patents with titles: Electrical stimulation of structures within the brain or Systems and methods for enhancing or optimizing neural stimulation therapy for treating symptoms of Parkinson’s disease and or other movement disorders. [p. 65]

Given my longstanding interest in memristors, which (I believe) have to a large extent helped to stimulate research into neuromorphic computing, this had to be included. Then, there was the brain-computer interfaces cluster,

A cluster identified as “Neuromorphic Computing” consists of 366 patents primarily focused on devices designed to mimic human neural networks for efficient and adaptable computation. The principal elements of these inventions are resistive memory cells and artificial synapses. They exhibit properties similar to the neurons and synapses in biological brains, thus granting these devices the ability to learn and modulate responses based on rewards, akin to the adaptive cognitive capabilities of the human brain.

The primary technology classes associated with these patents fall under specific IPC codes, representing the fields of neural network models, analog computers, and static storage structures. Essentially, these classifications correspond to technologies that are key to the construction of computers and exhibit cognitive functions similar to human brain processes.

Examples for this cluster include neuromorphic processing devices that leverage variations in resistance to store and process information, artificial synapses exhibiting spike-timing dependent plasticity, and systems that allow event-driven learning and reward modulation within neuromorphic computers.

In relation to neurotechnology as a whole, the “neuromorphic computing” cluster holds significant importance. It embodies the fusion of neuroscience and technology, thereby laying the basis for the development of adaptive and cognitive computational systems. Understanding this specific cluster provides a valuable insight into the progressing domain of neurotechnology, promising potential advancements across diverse fields, including artificial intelligence and healthcare.

The “Brain-Computer Interfaces” cluster, consisting of 146 patents, embodies a key aspect of neurotechnology that focuses on improving the interface between the brain and external devices. The technology classification codes associated with these patents primarily refer to methods or devices for treatment or protection of eyes and ears, devices for introducing media into, or onto, the body, and electric communication techniques, which are foundational elements of brain-computer interface (BCI) technologies.

Key patents within this cluster include a brain-computer interface apparatus adaptable to use environment and method of operating thereof, a double closed circuit brain-machine interface system, and an apparatus and method of brain-computer interface for device controlling based on brain signal. These inventions mainly revolve around the concept of using brain signals to control external devices, such as robotic arms, and improving the classification performance of these interfaces, even after long periods of non-use.

The inventions described in these patents improve the accuracy of device control, maintain performance over time, and accommodate multiple commands, thus significantly enhancing the functionality of BCIs.

Other identified technologies include systems for medical image analysis, limb rehabilitation, tinnitus treatment, sleep optimization, assistive exoskeletons, and advanced imaging techniques, among others. [pp. 66-67]
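
The “artificial synapses exhibiting spike-timing dependent plasticity” mentioned in the neuromorphic computing cluster come down to a simple local learning rule: strengthen a connection when the input spike arrives just before the output spike, weaken it when it arrives just after. A toy version, with constants I’ve made up purely for illustration:

```python
# Toy pair-based spike-timing dependent plasticity (STDP) rule: a synapse is
# strengthened when the presynaptic spike precedes the postsynaptic spike (LTP)
# and weakened otherwise (LTD). All constants are arbitrary illustrative values.
import math

A_PLUS, A_MINUS = 0.05, 0.055  # potentiation / depression amplitudes
TAU_MS = 20.0                  # decay time constant in milliseconds

def stdp_weight_change(t_pre_ms: float, t_post_ms: float) -> float:
    """Weight update for a single pre/post spike pair."""
    dt = t_post_ms - t_pre_ms
    if dt > 0:   # pre fires before post -> long-term potentiation
        return A_PLUS * math.exp(-dt / TAU_MS)
    return -A_MINUS * math.exp(dt / TAU_MS)  # post fires first (or together) -> depression

weight = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (60.0, 61.0)]:
    weight += stdp_weight_change(t_pre, t_post)
    print(f"pre={t_pre:5.1f} ms, post={t_post:5.1f} ms -> weight={weight:.3f}")
```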

Having sections on neuromorphic computing and brain-computer interface patents in immediate proximity led to more speculation on my part. Imagine how much easier it would be to initiate a BCI connection if it’s powered with a neuromorphic (brainlike) computer/device. [ETA July 21, 2023: Following on from that thought, it might be more than just easier to initiate a BCI connection. Could a brainlike computer become part of your brain? Why not? It’s been successfully argued that a robotic wheelchair was part of someone’s body; see my January 30, 2013 posting and scroll down about 40% of the way.]

Neurotech policy debates

The report concludes with this,

Neurotechnology is a complex and rapidly evolving technological paradigm whose trajectories have the power to shape people’s identity, autonomy, privacy, sentiments, behaviors and overall well-being, i.e. the very essence of what it means to be human.

Designing and implementing careful and effective norms and regulations ensuring that neurotechnology is developed and deployed in an ethical manner, for the good of individuals and for society as a whole, call for a careful identification and characterization of the issues at stake. This entails shedding light on the whole neurotechnology ecosystem, that is, what is being developed, where and by whom, and also understanding how neurotechnology interacts with other developments and technological trajectories, especially AI. Failing to do so may result in ineffective (at best) or distorted policies and policy decisions, which may harm human rights and human dignity.

Addressing the need for evidence in support of policy making, the present report offers first time robust data and analysis shedding light on the neurotechnology landscape worldwide. To this end, it proposes and implements an innovative approach that leverages artificial intelligence and deep learning on data from scientific publications and paten[t]s to identify scientific and technological developments in the neurotech space. The methodology proposed represents a scientific advance in itself, as it constitutes a quasi-automated replicable strategy for the detection and documentation of neurotechnology-related breakthroughs in science and innovation, to be repeated over time to account for the evolution of the sector. Leveraging this approach, the report further proposes an IPC-based taxonomy for neurotechnology which allows for a structured framework to the exploration of neurotechnology, to enable future research, development and analysis. The innovative methodology proposed is very flexible and can in fact be leveraged to investigate different emerging technologies, as they arise.

In terms of technological trajectories, we uncover a shift in the neurotechnology industry, with greater emphasis being put on computer and medical technologies in recent years, compared to traditionally dominant trajectories related to biotechnology and pharmaceuticals. This shift warrants close attention from policymakers, and calls for attention in relation to the latest (converging) developments in the field, especially AI and related methods and applications and neurotechnology.

This is all the more important as the observed growth and specialization patterns are unfolding in the context of regulatory environments that, generally, are either not existent or not fit for purpose. Given the sheer implications and impact of neurotechnology on the very essence of human beings, this lack of regulation poses key challenges related to the possible infringement of mental integrity, human dignity, personal identity, privacy, freedom of thought, and autonomy, among others. Furthermore, issues surrounding accessibility and the potential for neurotech enhancement applications triggers significant concerns, with far-reaching implications for individuals and societies. [pp. 72-73]

Last words about the report

Informative, readable, and thought-provoking. And, it helped broaden my understanding of neurotechnology.

Future endeavours?

I’m hopeful that one of these days one of these groups (UNESCO, Canadian Science Policy Centre, or ???) will tackle the issue of business bankruptcy in the neurotechnology sector. It has already occurred, as noted in my “Going blind when your neural implant company flirts with bankruptcy [long read]” April 5, 2022 posting. That story opens with a woman going blind in a New York subway when her neural implant fails. It’s how she found out that the company which supplied her implant was going out of business.

In my July 7, 2023 posting about the UNESCO July 2023 dialogue on neurotechnology, I’ve included information on Neuralink (one of Elon Musk’s companies) and its approval (despite some investigations) by the US Food and Drug Administration to start human clinical trials. Scroll down about 75% of the way to the “Food for thought” subhead where you will find stories about allegations made against Neuralink.

The end

If you want to know more about the field, the report offers a seven-page bibliography and there’s a lot of material on this blog; you could start with the December 3, 2019 posting “Neural and technological inequalities,” which features an article mentioning a discussion between two scientists. Surprisingly (to me), the source article is in Fast Company (“a leading progressive business media brand,” according to their tagline).

I have two categories you may want to check: Human Enhancement and Neuromorphic Engineering. There are also a number of tags: neuromorphic computing, machine/flesh, brainlike computing, cyborgs, neural implants, neuroprosthetics, memristors, and more.

Should you have any observations or corrections, please feel free to leave them in the Comments section of this posting.

Your cyborg future (brain-computer interface) is closer than you think

Researchers at Imperial College London (ICL) are warning that brain-computer interfaces (BCIs) may pose a number of quandaries. (At the end of this post, I have a little look into some of the BCI ethical issues previously explored on this blog.)

Here’s more from a July 20, 2021 American Institute of Physics (AIP) news release (also on EurekAlert),

Surpassing the biological limitations of the brain and using one’s mind to interact with and control external electronic devices may sound like the distant cyborg future, but it could come sooner than we think.

Researchers from Imperial College London conducted a review of modern commercial brain-computer interface (BCI) devices, and they discuss the primary technological limitations and humanitarian concerns of these devices in APL Bioengineering, from AIP Publishing.

The most promising method to achieve real-world BCI applications is through electroencephalography (EEG), a method of monitoring the brain noninvasively through its electrical activity. EEG-based BCIs, or eBCIs, will require a number of technological advances prior to widespread use, but more importantly, they will raise a variety of social, ethical, and legal concerns.

Though it is difficult to understand exactly what a user experiences when operating an external device with an eBCI, a few things are certain. For one, eBCIs can communicate both ways. This allows a person to control electronics, which is particularly useful for medical patients that need help controlling wheelchairs, for example, but also potentially changes the way the brain functions.

“For some of these patients, these devices become such an integrated part of themselves that they refuse to have them removed at the end of the clinical trial,” said Rylie Green, one of the authors. “It has become increasingly evident that neurotechnologies have the potential to profoundly shape our own human experience and sense of self.”

Aside from these potentially bleak mental and physiological side effects, intellectual property concerns are also an issue and may allow private companies that develop eBCI technologies to own users’ neural data.

“This is particularly worrisome, since neural data is often considered to be the most intimate and private information that could be associated with any given user,” said Roberto Portillo-Lara, another author. “This is mainly because, apart from its diagnostic value, EEG data could be used to infer emotional and cognitive states, which would provide unparalleled insight into user intentions, preferences, and emotions.”

As the availability of these platforms increases past medical treatment, disparities in access to these technologies may exacerbate existing social inequalities. For example, eBCIs can be used for cognitive enhancement and cause extreme imbalances in academic or professional successes and educational advancements.

“This bleak panorama brings forth an interesting dilemma about the role of policymakers in BCI commercialization,” Green said. “Should regulatory bodies intervene to prevent misuse and unequal access to neurotech? Should society follow instead the path taken by previous innovations, such as the internet or the smartphone, which originally targeted niche markets but are now commercialized on a global scale?”

She calls on global policymakers, neuroscientists, manufacturers, and potential users of these technologies to begin having these conversations early and collaborate to produce answers to these difficult moral questions.

“Despite the potential risks, the ability to integrate the sophistication of the human mind with the capabilities of modern technology constitutes an unprecedented scientific achievement, which is beginning to challenge our own preconceptions of what it is to be human,” [emphasis mine] Green said.

Caption: A schematic demonstrates the steps required for eBCI operation. EEG sensors acquire electrical signals from the brain, which are processed and outputted to control external devices. Credit: Portillo-Lara et al.
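
The caption compresses a lot: acquire EEG, process it, output a control signal. As a very rough illustration of the “process” step (and nothing like the decoding used in real eBCIs), one might band-pass filter the signal, compute a band-power feature, and threshold it. The sampling rate, filter band, and threshold below are placeholder values of my own, not anything from the paper.

```python
# Bare-bones sketch of the pipeline in the caption: acquire EEG, band-pass
# filter it, compute a band-power feature, and emit a control command.
# The sampling rate, filter band, and threshold are placeholder values.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # sampling rate in Hz, typical of consumer EEG headsets

def band_power(eeg: np.ndarray, low_hz: float, high_hz: float) -> float:
    """Mean power of the signal within a frequency band (e.g. alpha, 8-12 Hz)."""
    b, a = butter(4, [low_hz / (FS / 2), high_hz / (FS / 2)], btype="band")
    return float(np.mean(filtfilt(b, a, eeg) ** 2))

# Fake one-second trace: background noise plus a 10 Hz (alpha-band) oscillation.
t = np.arange(0, 1, 1 / FS)
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)

alpha_power = band_power(eeg, 8.0, 12.0)
command = "toggle_device" if alpha_power > 0.05 else "idle"  # trivial stand-in for a real decoder
print(f"alpha power = {alpha_power:.3f} -> {command}")
```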

Here’s a link to and a citation for the paper,

Mind the gap: State-of-the-art technologies and applications for EEG-based brain-computer interfaces by Roberto Portillo-Lara, Bogachan Tahirbegi, Christopher A.R. Chapman, Josef A. Goding, and Rylie A. Green. APL Bioengineering, Volume 5, Issue 3, 031507 (2021) DOI: https://doi.org/10.1063/5.0047237 Published Online: 20 July 2021

This paper appears to be open access.

Back on September 17, 2020, I published a post about a brain implant that included some material I’d dug up on ethics and brain-computer interfaces; I was most struck by one of the stories. Here’s the excerpt (which can be found under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead), from a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

This is from another part of the September 17, 2020 posting,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

It wasn’t my first thought when the topic of ethics and BCIs came up but, as Gilbert’s research highlights, the question is unavoidable: what happens if the company that made your implant, and monitors it, goes bankrupt?

If you have the time, do take a look at the entire entry under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead of the September 17, 2020 posting or read the July 24, 2019 article by Liam Drew.

Should you have a problem finding the July 20, 2021 American Institute of Physics news release at either of the two links I have previously supplied, there’s a July 20, 2021 copy at SciTechDaily.com.

Turning brain-controlled wireless electronic prostheses into reality plus some ethical points

Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting) according to an August 5, 2020 news item on ScienceDaily,

The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.

Caption: Photo of a current neural implant, that uses wires to transmit information and receive power. New research suggests how to one day cut the wires. Credit: Sergey Stavisky

An August 3, 2020 Stanford University news release (also on EurekAlert but published August 4, 2020) by Tom Abate, which originated the news item, details the problem and the proposed solution,

Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.

The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.

Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.

Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.

The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.

To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a (BrainGate) clinical trial.

As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.

The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.
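
The power saving is, at heart, a data-rate saving: the radio is a major consumer of an implant’s power budget, and it burns roughly in proportion to the bits it must push through the skull. A back-of-envelope comparison, using numbers I’ve assumed for illustration rather than figures from the paper, shows why transmitting only action-specific features is so much cheaper than streaming raw broadband signals:

```python
# Back-of-envelope data-rate comparison behind the "cut the wires" idea.
# All numbers are assumptions for illustration, not figures from the paper.
N_ELECTRODES = 96        # a typical intracortical array size
RAW_RATE_HZ = 30_000     # broadband samples per second per electrode
RAW_BITS = 12            # bits per raw sample

FEATURE_RATE_HZ = 50     # action-specific features (e.g. binned threshold crossings) per second
FEATURE_BITS = 8         # bits per feature per electrode

raw_bps = N_ELECTRODES * RAW_RATE_HZ * RAW_BITS
feature_bps = N_ELECTRODES * FEATURE_RATE_HZ * FEATURE_BITS

print(f"raw broadband stream:     {raw_bps / 1e6:.1f} Mbit/s")
print(f"action-specific features: {feature_bps / 1e3:.1f} kbit/s")
print(f"data to transmit shrinks by a factor of ~{raw_bps / feature_bps:.0f}")
```

Total implant power wouldn’t fall by the full ratio, since amplification and on-chip processing still cost energy, which is presumably why the news release claims a tenth of the power rather than a hundredth or better.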

Here’s a link to and a citation for the paper,

Power-saving design opportunities for wireless intracortical brain–computer interfaces by Nir Even-Chen, Dante G. Muratore, Sergey D. Stavisky, Leigh R. Hochberg, Jaimie M. Henderson, Boris Murmann & Krishna V. Shenoy. Nature Biomedical Engineering (2020) DOI: https://doi.org/10.1038/s41551-020-0595-9 Published: 03 August 2020

This paper is behind a paywall.

Comments about ethical issues

As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.

My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.

I can’t find anything more recent on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on the ethical issues of human enhancement technology, in that case gene-editing. Regardless, he makes points that are applicable to brain-computer interfaces (human enhancement) (Note: Links have been removed),

Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.

A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?

Which abilities are seen as more important than others?

The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.

And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.

One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.

Ethics of clinical trials for testing brain implants

This October 31, 2017 article by Emily Underwood for Science was revelatory,

In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.

… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.

There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”

Brain-computer interfaces, symbiosis, and ethical issues

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.

Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.

Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.

To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.

If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses.[emphasis mine]

But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.
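Since the article can only gesture at what that decoding step involves, here is a minimal, purely illustrative sketch (in Python, using NumPy and scikit-learn) of the kind of model that maps recorded firing rates to intended prosthetic-arm velocities. The simulated data, the variable names, and the choice of a simple ridge regression are my own assumptions for demonstration purposes; real brain-computer interface decoders are far more sophisticated and are not described in Drew’s article.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Simulate a calibration session: firing rates from 96 hypothetical electrodes,
# recorded while the user imagines moving an arm with some (vx, vy) velocity.
rng = np.random.default_rng(0)
n_samples, n_channels = 6000, 96
intended_velocity = rng.standard_normal((n_samples, 2))   # what the user intends
tuning = rng.standard_normal((2, n_channels))              # each channel's made-up tuning
firing_rates = intended_velocity @ tuning + 0.5 * rng.standard_normal((n_samples, n_channels))

# Fit a decoder on part of the session, then test it on held-out data.
X_train, X_test, y_train, y_test = train_test_split(
    firing_rates, intended_velocity, test_size=0.25, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out decoding accuracy (R^2):", round(decoder.score(X_test, y_test), 3))

# In a real system, decoder.predict(new_firing_rates) would produce the velocity
# commands sent to the robotic arm; the model's output, not the raw signal, acts.

Even in this toy version, it is the model’s output rather than the person’s raw neural activity that ends up acting on the world, which is exactly the gap between thought and action that Kellmeyer is pointing to.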

Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.

Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research papers, which can be found here. I did not see any research papers concerning safety issues.

Getting back to Drew’s July 24, 2019 article and Patient 6,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

I strongly recommend reading Drew’s July 24, 2019 article in its entirety.

Finally

It’s easy to forget, in all the excitement over technologies ‘making our lives better’, that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues or old issues in new ways.

What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg‘, where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.

Interestingly, no one seems to care much when insects are turned into cyborgs (I can’t remember who pointed this out), yet it is a popular area of research, especially for military and search-and-rescue applications.

I’ve sometimes used the terms ‘machine/flesh’ and/or ‘augmentation’ to describe technologies integrated with bodies, human or otherwise. You can find lots on the topic here, however I’ve tagged or categorized it.

Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human‘, featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful Paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.

Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements, and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected, with a trickle-down effect on the rest of us.

Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions, with a few experts and commercial interests deciding how the rest of us (however you define ‘us’, as per Gregor Wolbring’s essay) will live.

This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis is that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained in and read the same materials, or from entrepreneurs who too often seem to put profit over thoughtful implementation of new technologies. (See the May 2020 PDF edition [you’ll find me under Policy Development] or see my May 15, 2020 posting here, with all the sources listed.)

As for this new research at Stanford, it’s exciting news that raises questions even as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).

Human-machine interfaces and ultra-small nanoprobes

We’re back on the cyborg trail or what I sometimes refer to as machine/flesh. A July 3, 2019 news item on ScienceDaily describes the latest attempts to join machine with flesh,

Machine enhanced humans — or cyborgs as they are known in science fiction — could be one step closer to becoming a reality, thanks to new research from the Lieber Group at Harvard University, as well as scientists from the University of Surrey and Yonsei University.

Researchers have conquered the monumental task of manufacturing scalable nanoprobe arrays small enough to record the inner workings of human cardiac cells and primary neurons.

The ability to read electrical activities from cells is the foundation of many biomedical procedures, such as brain activity mapping and neural prosthetics. Developing new tools for intracellular electrophysiology (the electric current running within cells) that push the limits of what is physically possible (spatiotemporal resolution) while reducing invasiveness could provide a deeper understanding of electrogenic cells and their networks in tissues, as well as new directions for human-machine interfaces.

The Lieber Group at Harvard University provided this image illustrating the work,

U-shaped nanowires can record electrical chatter inside a brain or heart cell without causing any damage. The devices are 100 times smaller than their biggest competitors, which kill a cell after recording. Courtesy: University of Surrey

A July 3, 2019 University of Surrey press release (also on EurekAlert), which originated the news item, provides more details about this UK/US/China collaboration,

In a paper published by Nature Nanotechnology, scientists from Surrey’s Advanced Technology Institute (ATI) and Harvard University detail how they produced an array of the ultra-small U-shaped nanowire field-effect transistor probes for intracellular recording. This incredibly small structure was used to record, with great clarity, the inner activity of primary neurons and other electrogenic cells, and the device has the capacity for multi-channel recordings.

Dr Yunlong Zhao from the ATI at the University of Surrey said: “If our medical professionals are to continue to understand our physical condition better and help us live longer, it is important that we continue to push the boundaries of modern science in order to give them the best possible tools to do their jobs. For this to be possible, an intersection between humans and machines is inevitable.

“Our ultra-small, flexible, nanowire probes could be a very powerful tool as they can measure intracellular signals with amplitudes comparable with those measured with patch clamp techniques; with the advantage of the device being scalable, it causes less discomfort and no fatal damage to the cell (cytosol dilation). Through this work, we found clear evidence for how both size and curvature affect device internalisation and intracellular recording signal.”

Professor Charles Lieber from the Department of Chemistry and Chemical Biology at Harvard University said: “This work represents a major step towards tackling the general problem of integrating ‘synthesised’ nanoscale building blocks into chip and wafer scale arrays, and thereby allowing us to address the long-standing challenge of scalable intracellular recording.

“The beauty of science to many, ourselves included, is having such challenges to drive hypotheses and future work. In the longer term, we see these probe developments adding to our capabilities that ultimately drive advanced high-resolution brain-machine interfaces and perhaps eventually bringing cyborgs to reality.”

Professor Ravi Silva, Director of the ATI at the University of Surrey, said: “This incredibly exciting and ambitious piece of work illustrates the value of academic collaboration. Along with the possibility of upgrading the tools we use to monitor cells, this work has laid the foundations for machine and human interfaces that could improve lives across the world.”

Dr Yunlong Zhao and his team are currently working on novel energy storage devices, electrochemical probing, bioelectronic devices, sensors and 3D soft electronic systems. Undergraduate, graduate and postdoc students with backgrounds in energy storage, electrochemistry, nanofabrication, bioelectronics, tissue engineering are very welcome to contact Dr Zhao to explore the opportunities further.

Here’s a link to and a citation for the paper,

Scalable ultrasmall three-dimensional nanowire transistor probes for intracellular recording by Yunlong Zhao, Siheng Sean You, Anqi Zhang, Jae-Hyun Lee, Jinlin Huang & Charles M. Lieber. Nature Nanotechnology (2019) DOI: https://doi.org/10.1038/s41565-019-0478-y Published 01 July 2019

The link I’ve provided leads to a paywall. However, I found a freely accessible version of the paper (this may not be the final published version) here.

October 2019 science and art/science events in Vancouver and other parts of Canada

This is a scattering of events, which I’m sure will be augmented as we properly start the month of October 2019.

October 2, 2019 in Waterloo, Canada (Perimeter Institute)

If you want to be close enough to press the sacred flesh (Sir Martin Rees), you’re out of luck. However, there are still options ranging from watching a live webcast from the comfort of your home to watching the lecture via closed circuit television with other devoted fans at a licensed bistro located on site at the Perimeter Institute (PI) to catching the lecture at a later date via YouTube.

That said, here’s why you might be interested, from a September 11, 2019 Perimeter Institute (PI) announcement received via email,

Surviving the Century
MOVING TOWARD A POST-HUMAN FUTURE
Martin Rees, UK Astronomer Royal
Wednesday, Oct. 2 at 7:00 PM ET

Advances in technology and space exploration could, if applied wisely, allow a bright future for the 10 billion people living on earth by the end of the century.

But there are dystopian risks we ignore at our peril: our collective “footprint” on our home planet, as well as the creation and use of technologies so powerful that even small groups could cause a global catastrophe.

Martin Rees, the UK Astronomer Royal, will explore this unprecedented moment in human history during his lecture on October 2, 2019. A former president of the Royal Society and master of Trinity College, Cambridge, Rees is a cosmologist whose work also explores the interfaces between science, ethics, and politics. Read More.

Mark your calendar! Tickets will be available on Monday, Sept. 16 at 9 AM ET

Didn’t get tickets for the lecture? We’ve got more ways to watch.
Join us at Perimeter on lecture night to watch live in the Black Hole Bistro.
Catch the live stream on Inside the Perimeter or watch it on Youtube the next day
Become a member of our donor thank you program! Learn more.

It took me a while to locate an address for the PI venue since I expected that information to be part of the announcement. (insert cranky emoticon here) Here’s the address: Perimeter Institute, Mike Lazaridis Theatre of Ideas, 31 Caroline St. N., Waterloo, ON

Before moving onto the next event, I’m including a paragraph from the event description that was not included in the announcement (from the PI Outreach Surviving the Century webpage),

In his October 2 [2019] talk – which kicks off the 2019/20 season of the Perimeter Institute Public Lecture Series – Rees will discuss the outlook for humans (or their robotic envoys) venturing to other planets. Humans, Rees argues, will be ill-adapted to new habitats beyond Earth, and will use genetic and cyborg technology to transform into a “post-human” species.

I first covered Sir Martin Rees and his concerns about technology (robots and cyborgs run amok) in this November 26, 2012 posting about existential risk. He and his colleagues at Cambridge University, UK, proposed a Centre for the Study of Existential Risk, which opened in 2015.

Straddling Sept. and Oct. at the movies in Vancouver

The Vancouver International Film Festival (VIFF) opened today, September 26, 2019. During its run to October 11, 2019 there’ll be a number of documentaries that touch on science. Here are the three documentaries that most closely adhere to the topics I’m most likely to address on this blog. There is a fourth documentary included here as it touches on ecology in a more hopeful fashion than is the current trend.

Human Nature

From the VIFF 2019 film description and ticket page,

One of the most significant scientific breakthroughs in history, the discovery of CRISPR has made it possible to manipulate human DNA, paving the path to a future of great possibilities.

The implications of this could mean the eradication of disease or, more controversially, the possibility of genetically pre-programmed children.

Breaking away from scientific jargon, Human Nature pieces together a complex account of bio-research for the layperson as compelling as a work of science-fiction. But whether the gene-editing powers of CRISPR (described as “a word processor for DNA”) are used for good or evil, they’re reshaping the world as we know it. As we push past the boundaries of what it means to be human, Adam Bolt’s stunning work of science journalism reaches out to scientists, engineers, and people whose lives could benefit from CRISPR technology, and offers a wide-ranging look at the pros and cons of designing our futures.

Tickets
Friday, September 27, 2019 at 11:45 AM
Vancity Theatre

Saturday, September 28, 2019 at 11:15 AM
International Village 10

Thursday, October 10, 2019 at 6:45 PM
SFU Goldcorp

According to VIFF, the tickets for the Sept. 27, 2019 show are going fast.

Resistance Fighters

From the VIFF 2019 film description and ticket page,

Since mass-production in the 1940s, antibiotics have been nothing less than miraculous, saving countless lives and revolutionizing modern medicine. It’s virtually impossible to imagine hospitals or healthcare without them. But after years of abuse and mismanagement by the medical and agricultural communities, superbugs resistant to antibiotics are reaching apocalyptic proportions. The ongoing rise in multi-resistant bacteria – unvanquishable microbes, currently responsible for 700,000 deaths per year and projected to kill 10 million yearly by 2050 if nothing changes – and the people who fight them are the subjects of Michael Wech’s stunning “science-thriller.”

Peeling back the carefully constructed veneer of the medical corporate establishment’s greed and complacency to reveal the world on the cusp of a potential crisis, Resistance Fighters sounds a clarion call of urgency. It’s an all-out war, one which most of us never knew we were fighting, to avoid “Pharmageddon.” Doctors, researchers, patients, and diplomats testify about shortsighted medical and economic practices, while Wech offers refreshingly original perspectives on environment, ecology, and (animal) life in general. As alarming as it is informative, this is a wake-up call the world needs to hear.

Sunday, October 6, 2019 at 5:45 PM
International Village 8

Thursday, October 10, 2019 at 2:15 PM
SFU Goldcorp

According to VIFF, the tickets for the Oct. 6, 2019 show are going fast.

Trust Machine: The Story of Blockchain

Strictly speaking, this is more of a technology story than a science story, but I have written about blockchain and cryptocurrencies before, so I’m including it. From the VIFF 2019 film description and ticket page,

For anyone who has questions about cryptocurrencies like Bitcoin (and who doesn’t?), Alex Winter’s thorough documentary is an excellent introduction to the blockchain phenomenon. Trust Machine offers a wide range of expert testimony and a variety of perspectives that explicate the promises and the risks inherent in this new manifestation of high-tech wizardry. And it’s not just money that blockchains threaten to disrupt: innovators as diverse as UNICEF and Imogen Heap make spirited arguments that the industries of energy, music, humanitarianism, and more are headed for revolutionary change.

A propulsive and subversive overview of this little-understood phenomenon, Trust Machine crafts a powerful and accessible case that a technologically decentralized economy is more than just a fad. As the aforementioned experts – tech wizards, underground activists, and even some establishment figures – argue persuasively for an embrace of the possibilities offered by blockchains, others criticize its bubble-like markets and inefficiencies. Either way, Winter’s film suggests a whole new epoch may be just around the corner, whether the powers that be like it or not.

Tuesday, October 1, 2019 at 11:00 AM
Vancity Theatre

Thursday, October 3, 2019 at 9:00 PM
Vancity Theatre

Monday, October 7, 2019 at 1:15 PM
International Village 8

According to VIFF, tickets for all three shows are going fast.

The Great Green Wall

For a little bit of hope, from the VIFF 2019 film description and ticket page,

“We must dare to invent the future.” In 2007, the African Union officially began a massively ambitious environmental project planned since the 1970s. Stretching through 11 countries and 8,000 km across the desertified Sahel region, on the southern edges of the Sahara, The Great Green Wall – once completed, a mosaic of restored, fertile land – would be the largest living structure on Earth.

Malian musician-activist Inna Modja embarks on an expedition through Senegal, Mali, Nigeria, Niger, and Ethiopia, gathering an ensemble of musicians and artists to celebrate the pan-African dream of realizing The Great Green Wall. Her journey is accompanied by a dazzling array of musical diversity, celebrating local cultures and traditions as they come together into a community to stand against the challenges of desertification, drought, migration, and violent conflict.

An unforgettable, beautiful exploration of a modern marvel of ecological restoration, and so much more than a passive source of information, The Great Green Wall is a powerful call to take action and help reshape the world.

Sunday, September 29, 2019 at 11:15 AM
International Village 10

Wednesday, October 2, 2019 at 6:00 PM
International Village 8
Standby – advance tickets are sold out but a limited number are likely to be released at the door

Wednesday, October 9, 2019 at 11:00 AM
International Village 9

As you can see, one show is already offering standby tickets only and the other two are selling quickly.

For venue locations, information about what ‘standby’ means, and much more, go here and click on the Festival tab. As for more information about the individual films, you’ll find links to trailers, running times, and more on the pages for which I’ve supplied links.

Brain Talks on October 16, 2019 in Vancouver

From time to time I get notices about a series titled Brain Talks from the Dept. of Psychiatry at the University of British Columbia. A September 11, 2019 announcement (received via email) focuses attention on the ‘guts of the matter’,

YOU ARE INVITED TO ATTEND:

BRAINTALKS: THE BRAIN AND THE GUT

WEDNESDAY, OCTOBER 16TH, 2019 FROM 6:00 PM – 8:00 PM

Join us on Wednesday October 16th [2019] for a series of talks exploring the relationship between the brain, microbes, mental health, diet and the gut. We are honored to host three phenomenal presenters for the evening: Dr. Brett Finlay, Dr. Leslie Wicholas, and Thara Vayali, ND.

DR. BRETT FINLAY is a Professor in the Michael Smith Laboratories at the University of British Columbia. Dr. Finlay’s research interests are focused on host-microbe interactions at the molecular level, specializing in Cellular Microbiology. He has published over 500 papers and has been inducted into the Canadian Medical Hall of Fame. He is the co-author of the books Let Them Eat Dirt and The Whole Body Microbiome.

DR. LESLIE WICHOLAS is a psychiatrist with expertise in the clinical understanding of the gut-brain axis. She has become increasingly involved in the emerging field of Nutritional Psychiatry, exploring connections between diet, nutrition, and mental health. Currently, Dr. Wicholas is the director of the Food as Medicine program at the Mood Disorder Association of BC.

THARA VAYALI, ND holds a BSc in Nutritional Sciences and an MA in Education and Communications. She has trained in naturopathic medicine and advocates for awareness about women’s physiology and body literacy. Ms. Vayali is a frequent speaker and columnist who prioritizes engagement, understanding, and community as pivotal pillars for change.

Our event on Wednesday, October 16th [2019] will start with presentations from each of the three speakers, and end with a panel discussion inspired by audience questions. After the talks, at 7:30 pm, we host a social gathering with a rich spread of catered healthy food and non-alcoholic drinks. We look forward to seeing you there!

Paetzhold Theater

Vancouver General Hospital; Jim Pattison Pavilion, Vancouver, BC

Attend Event

That’s it for now.

Biohybrid cyborgs

Cyborgs are usually thought of as people who’ve been enhanced with some sort of technology. In contemporary real life that technology might be a pacemaker or hip replacement, but in science fiction it’s technology such as artificial retinas that expand the range of visible light for an enhanced human.

Rarely does the topic of a microscopic life form come up in discussions about cyborgs and yet that’s exactly what an April 3, 2019 Nanowerk spotlight article by Michael Berger describes, in relation to its use in water remediation efforts (Note: links have been removed),

Researchers often use living systems as inspiration for the design and engineering of micro- and nanoscale propulsion systems, actuators, sensors, and robots. …

“Although microrobots have recently proved successful for remediating contaminated water at the laboratory scale, the major challenge in the field is to scale up these applications to actual environmental settings,” Professor Joseph Wang, Chair of Nanoengineering and Director, Center of Wearable Sensors at the University of California San Diego, tells Nanowerk. “In order to do this, we need to overcome the toxicity of their chemical fuels, the short time span of biocompatible magnesium-based micromotors and the small domain operation of externally actuated microrobots.”

In their recent work on self-propelled biohybrid microrobots, Wang and his team were inspired by recent developments of biohybrid cyborgs that integrate self-propelling bacteria with functionalized synthetic nanostructures to transport materials.

“These tiny cyborgs are incredibly efficient for transporting materials, but the limitation that we observed is that they do not provide large-scale fluid mixing,” notes Wang. “We wanted to combine the best properties of both worlds. So, we searched for the best candidate to create a more robust biohybrid for mixing and we decided on using rotifers (Brachionus) as the engine of the cyborg.”

These marine microorganisms, which measure between 100 and 300 micrometers, are amazing creatures: they already possess sensing ability and energetic autonomy, and they provide large-scale fluid mixing capability. They are also very resilient, can survive in very harsh environments, and are even one of the few organisms that have survived via asexual reproduction.

“Taking inspiration from the science fiction concept of a cybernetic organism, or cyborg – where an organism has enhanced abilities due to the integration of some artificial component – we developed a self-propelled biohybrid microrobot, that we named rotibot, employing rotifers as their engine,” says Fernando Soto, first author of a paper on this work (Advanced Functional Materials, “Rotibot: Use of Rotifers as Self-Propelling Biohybrid Microcleaners”).

This is the first demonstration of a biohybrid cyborg used for the removal and degradation of pollutants from solution. The technical breakthrough that allowed the team to achieve this task is a novel fabrication mechanism based on the selective accumulation of functionalized microbeads in the microorganism’s mouth: the rotifer serves not only as a transport vessel for active material or cargo but also as a powerful biological pump, as it creates fluid flows directed towards its mouth.

Nanowerk has made this video demonstrating a rotifer available along with a description,

“The rotibot is a rotifer (a marine microorganism) that has plastic microbeads attached to the mouth, which are functionalized with pollutant-degrading enzymes. This video illustrates a free swimming rotibot mixing tracer particles in solution. “

Here’s a link to and a citation for the paper,

Rotibot: Use of Rotifers as Self‐Propelling Biohybrid Microcleaners by Fernando Soto, Miguel Angel Lopez‐Ramirez, Itthipon Jeerapan, Berta Esteban‐Fernandez de Avila, Rupesh Kumar Mishra, Xiaolong Lu, Ingrid Chai, Chuanrui Chen, Daniel Kupor. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201900658 First published: 28 March 2019

This paper is behind a paywall.

Berger’s April 3, 2019 Nanowerk spotlight article includes some useful images if you are interested in figuring out how these rotibots function.

I found it at the movies: a commentary on/review of “Films from the Future”

Kudos to anyone who recognized the reference to Pauline Kael (she changed film criticism forever) and her book “I Lost It at the Movies.” Of course, her book title was a bit of sexual innuendo, quite risqué for an important film critic in 1965 but appropriate for a period (the 1960s) associated with a sexual revolution. (There’s more about the 1960s sexual revolution in the US, along with mention of a prior sexual revolution in the 1920s, in this Wikipedia entry.)

The title for this commentary is based on an anecdote from Dr. Andrew Maynard’s (director of the Arizona State University [ASU] Risk Innovation Lab) popular science and technology book, “Films from the Future: The Technology and Morality of Sci-Fi Movies.”

The ‘title-inspiring’ anecdote concerns Maynard’s first viewing of ‘2001: A Space Odyssey’, when, as a rather “bratty” 16-year-old who preferred to read science fiction, he discovered new ways of seeing and imagining the world. Maynard isn’t explicit about when he became a ‘techno nerd’ or how movies gave him an experience books couldn’t, but presumably at 16 he was already gearing up for a career in the sciences. That ‘movie’ revelation, received in front of a black and white television on January 1, 1982, eventually led him to write “Films from the Future.” (He has a PhD in physics, which he is now applying to the field of risk innovation. For a more detailed description of Dr. Maynard and his work, there’s his ASU profile webpage and, of course, the introduction to his book.)

The book is quite timely. I don’t know how many people have noticed, but science and scientific innovation are being covered more frequently in the media than they have been in many years. Science fairs and festivals are being founded on what seems to be a daily basis and you can now find science in art galleries. (Not to mention the movies and television, where science topics are covered in comic book adaptations, in comedy, and in standard science fiction style.) Much of this activity is centered on what are called ’emerging technologies’. These technologies are why people argue for what’s known as ‘blue sky’ or ‘basic’ or ‘fundamental’ science, for without that science there would be no emerging technology.

Films from the Future

Isn’t reading the Table of Contents (ToC) the best way to approach a book? (From Films from the Future; Note: The formatting has been altered),

Table of Contents
Chapter One
In the Beginning 14
Beginnings 14
Welcome to the Future 16
The Power of Convergence 18
Socially Responsible Innovation 21
A Common Point of Focus 25
Spoiler Alert 26
Chapter Two
Jurassic Park: The Rise of Resurrection Biology 27
When Dinosaurs Ruled the World 27
De-Extinction 31
Could We, Should We? 36
The Butterfly Effect 39
Visions of Power 43
Chapter Three
Never Let Me Go: A Cautionary Tale of Human Cloning 46
Sins of Futures Past 46
Cloning 51
Genuinely Human? 56
Too Valuable to Fail? 62
Chapter Four
Minority Report: Predicting Criminal Intent 64
Criminal Intent 64
The “Science” of Predicting Bad Behavior 69
Criminal Brain Scans 74
Machine Learning-Based Precognition 77
Big Brother, Meet Big Data 79
Chapter Five
Limitless: Pharmaceutically-enhanced Intelligence 86
A Pill for Everything 86
The Seduction of Self-Enhancement 89
Nootropics 91
If You Could, Would You? 97
Privileged Technology 101
Our Obsession with Intelligence 105
Chapter Six
Elysium: Social Inequity in an Age of Technological Extremes 110
The Poor Shall Inherit the Earth 110
Bioprinting Our Future Bodies 115
The Disposable Workforce 119
Living in an Automated Future 124
Chapter Seven
Ghost in the Shell: Being Human in an Augmented Future 129
Through a Glass Darkly 129
Body Hacking 135
More than “Human”? 137
Plugged In, Hacked Out 142
Your Corporate Body 147
Chapter Eight
Ex Machina: AI and the Art of Manipulation 154
Plato’s Cave 154
The Lure of Permissionless Innovation 160
Technologies of Hubris 164
Superintelligence 169
Defining Artificial Intelligence 172
Artificial Manipulation 175
Chapter Nine
Transcendence: Welcome to the Singularity 180
Visions of the Future 180
Technological Convergence 184
Enter the Neo-Luddites 190
Techno-Terrorism 194
Exponential Extrapolation 200
Make-Believe in the Age of the Singularity 203
Chapter Ten
The Man in the White Suit: Living in a Material World 208
There’s Plenty of Room at the Bottom 208
Mastering the Material World 213
Myopically Benevolent Science 220
Never Underestimate the Status Quo 224
It’s Good to Talk 227
Chapter Eleven
Inferno: Immoral Logic in an Age of Genetic Manipulation 231
Decoding Make-Believe 231
Weaponizing the Genome 234
Immoral Logic? 238
The Honest Broker 242
Dictating the Future 248
Chapter Twelve
The Day After Tomorrow: Riding the Wave of Climate Change 251
Our Changing Climate 251
Fragile States 255
A Planetary “Microbiome” 258
The Rise of the Anthropocene 260
Building Resiliency 262
Geoengineering the Future 266
Chapter Thirteen
Contact: Living by More than Science Alone 272
An Awful Waste of Space 272
More than Science Alone 277
Occam’s Razor 280
What If We’re Not Alone? 283
Chapter Fourteen
Looking to the Future 288
Acknowledgments 293

The ToC gives the reader a pretty good clue as to where the author is going with his book, and Maynard explains how he chose his movies in his introductory chapter (from Films from the Future),

“There are some quite wonderful science fiction movies that didn’t make the cut because they didn’t fit the overarching narrative (Blade Runner and its sequel Blade Runner 2049, for instance, and the first of the Matrix trilogy). There are also movies that bombed with the critics, but were included because they ably fill a gap in the bigger story around emerging and converging technologies. Ultimately, the movies that made the cut were chosen because, together, they create an overarching narrative around emerging trends in biotechnologies, cybertechnologies, and materials-based technologies, and they illuminate a broader landscape around our evolving relationship with science and technology. And, to be honest, they are all movies that I get a kick out of watching.” (p. 17)

Jurassic Park (Chapter Two)

Dinosaurs do not interest me—they never have. Despite my profound indifference I did see the movie, Jurassic Park, when it was first released (someone talked me into going). And, I am still profoundly indifferent. Thankfully, Dr. Maynard finds meaning and a connection to current trends in biotechnology,

Jurassic Park is unabashedly a movie about dinosaurs. But it’s also a movie about greed, ambition, genetic engineering, and human folly—all rich pickings for thinking about the future, and what could possibly go wrong. (p. 28)

What really stands out with Jurassic Park, over twenty-five years later, is how it reveals a very human side of science and technology. This comes out in questions around when we should tinker with technology and when we should leave well enough alone. But there is also a narrative here that appears time and time again with the movies in this book, and that is how we get our heads around the sometimes oversized roles mega-entrepreneurs play in dictating how new tech is used, and possibly abused. These are all issues that are just as relevant now as they were in 1993, and are front and center of ensuring that the technology-enabled future we’re building is one where we want to live, and not one where we’re constantly fighting for our lives. (pp. 30-1)

He goes on to describe that connection to current trends in biotechnology in more detail,

De-Extinction

In a far corner of Siberia, two Russians—Sergey Zimov and his son Nikita—are attempting to recreate the Ice Age. More precisely, their vision is to reconstruct the landscape and ecosystem of northern Siberia in the Pleistocene, a period in Earth’s history that stretches from around two and a half million years ago to eleven thousand years ago. This was a time when the environment was much colder than now, with huge glaciers and ice sheets flowing over much of the Earth’s northern hemisphere. It was also a time when humans coexisted with animals that are long extinct, including saber-tooth cats, giant ground sloths, and woolly mammoths.

The Zimovs’ ambitions are an extreme example of “Pleistocene rewilding,” a movement to reintroduce relatively recently extinct large animals, or their close modern-day equivalents, to regions where they were once common. In the case of the Zimovs, the father-and-son team believe that, by reconstructing the Pleistocene ecosystem in the Siberian steppes and elsewhere, they can slow down the impacts of climate change on these regions. These areas are dominated by permafrost, ground that never thaws through the year. Permafrost ecosystems have developed and survived over millennia, but a warming global climate (a theme we’ll come back to in chapter twelve and the movie The Day After Tomorrow) threatens to catastrophically disrupt them, and as this happens, the impacts on biodiversity could be devastating. But what gets climate scientists even more worried is potentially massive releases of trapped methane as the permafrost disappears.

Methane is a powerful greenhouse gas—some eighty times more effective at exacerbating global warming than carbon dioxide—and large-scale releases from warming permafrost could trigger catastrophic changes in climate. As a result, finding ways to keep it in the ground is important. And here the Zimovs came up with a rather unusual idea: maintaining the stability of the environment by reintroducing long-extinct species that could help prevent its destruction, even in a warmer world. It’s a wild idea, but one that has some merit. As a proof of concept, though, the Zimovs needed somewhere to start. And so they set out to create a park for de-extinct Siberian animals: Pleistocene Park.

Pleistocene Park is by no stretch of the imagination a modern-day Jurassic Park. The dinosaurs in Hammond’s park date back to the Mesozoic period, from around 250 million years ago to sixty-five million years ago. By comparison, the Pleistocene is relatively modern history, ending a mere eleven and a half thousand years ago. And the vision behind Pleistocene Park is not thrills, spills, and profit, but the serious use of science and technology to stabilize an increasingly unstable environment. Yet there is one thread that ties them together, and that’s using genetic engineering to reintroduce extinct species. In this case, the species in question is warm-blooded and furry: the woolly mammoth.

The idea of de-extinction, or bringing back species from extinction (it’s even called “resurrection biology” in some circles), has been around for a while. It’s a controversial idea, and it raises a lot of tough ethical questions. But proponents of de-extinction argue that we’re losing species and ecosystems at such a rate that we can’t afford not to explore technological interventions to help stem the flow.

Early approaches to bringing species back from the dead have involved selective breeding. The idea was simple—if you have modern ancestors of a recently extinct species, selectively breeding specimens that have a higher genetic similarity to their forebears can potentially help reconstruct their genome in living animals. This approach is being used in attempts to bring back the aurochs, an ancestor of modern cattle. But it’s slow, and it depends on the fragmented genome of the extinct species still surviving in its modern-day equivalents.

An alternative to selective breeding is cloning. This involves finding a viable cell, or cell nucleus, in an extinct but well-preserved animal and growing a new living clone from it. It’s definitely a more appealing route for impatient resurrection biologists, but it does mean getting your hands on intact cells from long-dead animals and devising ways to “resurrect” these, which is no mean feat. Cloning has potential when it comes to recently extinct species whose cells have been well preserved—for instance, where the whole animal has become frozen in ice. But it’s still a slow and extremely limited option.

Which is where advances in genetic engineering come in.

The technological premise of Jurassic Park is that scientists can reconstruct the genome of long-dead animals from preserved DNA fragments. It’s a compelling idea, if you think of DNA as a massively long and complex instruction set that tells a group of biological molecules how to build an animal. In principle, if we could reconstruct the genome of an extinct species, we would have the basic instruction set—the biological software—to reconstruct individual members of it.

The bad news is that DNA-reconstruction-based de-extinction is far more complex than this. First you need intact fragments of DNA, which is not easy, as DNA degrades easily (and is pretty much impossible to obtain, as far as we know, for dinosaurs). Then you need to be able to stitch all of your fragments together, which is akin to completing a billion-piece jigsaw puzzle without knowing what the final picture looks like. This is a Herculean task, although with breakthroughs in data manipulation and machine learning, scientists are getting better at it. But even when you have your reconstructed genome, you need the biological “wetware”—all the stuff that’s needed to create, incubate, and nurture a new living thing, like eggs, nutrients, a safe space to grow and mature, and so on. Within all this complexity, it turns out that getting your DNA sequence right is just the beginning of translating that genetic code into a living, breathing entity. But in some cases, it might be possible.

In 2013, Sergey Zimov was introduced to the geneticist George Church at a conference on de-extinction. Church is an accomplished scientist in the field of DNA analysis and reconstruction, and a thought leader in the field of synthetic biology (which we’ll come back to in chapter nine). It was a match made in resurrection biology heaven. Zimov wanted to populate his Pleistocene Park with mammoths, and Church thought he could see a way of achieving this.

What resulted was an ambitious project to de-extinct the woolly mammoth. Church and others who are working on this have faced plenty of hurdles. But the technology has been advancing so fast that, as of 2017, scientists were predicting they would be able to reproduce the woolly mammoth within the next two years.

One of those hurdles was the lack of solid DNA sequences to work from. Frustratingly, although there are many instances of well preserved woolly mammoths, their DNA rarely survives being frozen for tens of thousands of years. To overcome this, Church and others have taken a different tack: Take a modern, living relative of the mammoth, and engineer into it traits that would allow it to live on the Siberian tundra, just like its woolly ancestors.

Church’s team’s starting point has been the Asian elephant. This is their source of base DNA for their “woolly mammoth 2.0”—their starting source code, if you like. So far, they’ve identified fifty plus gene sequences they think they can play with to give their modern-day woolly mammoth the traits it would need to thrive in Pleistocene Park, including a coat of hair, smaller ears, and a constitution adapted to cold.

The next hurdle they face is how to translate the code embedded in their new woolly mammoth genome into a living, breathing animal. The most obvious route would be to impregnate a female Asian elephant with a fertilized egg containing the new code. But Asian elephants are endangered, and no one’s likely to allow such cutting edge experimentation on the precious few that are still around, so scientists are working on an artificial womb for their reinvented woolly mammoth. They’re making progress with mice and hope to crack the motherless mammoth challenge relatively soon.

It’s perhaps a stretch to call this creative approach to recreating a species (or “reanimation” as Church refers to it) “de-extinction,” as what is being formed is a new species. … (pp. 31-4)

This selection illustrates what Maynard does so well throughout the book: he uses each film as a launching pad for a clear, readable description of the relevant bits of science, so you understand why the premise was likely, unlikely, or pure fantasy, while linking it to contemporary practices, efforts, and issues. In the context of Jurassic Park, Maynard goes on to raise some fascinating questions, such as: should we revive animals rendered extinct (due to obsolescence or inability to adapt to new conditions) when we could develop new animals?
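As an aside, the ‘billion-piece jigsaw puzzle’ Maynard describes is, at its core, a sequence-assembly problem. Here is a toy, purely illustrative Python sketch of a greedy overlap assembler. It has none of the scale or sophistication of real genome reconstruction, and the tiny ‘reads’ are invented for the example, but it shows the basic idea of stitching fragments back together by their overlaps.

def overlap(a, b):
    """Length of the longest suffix of a that matches a prefix of b."""
    best = 0
    for k in range(1, min(len(a), len(b)) + 1):
        if a[-k:] == b[:k]:
            best = k
    return best

def greedy_assemble(fragments):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best_k, best_i, best_j = 0, 0, 1
        for i in range(len(frags)):
            for j in range(len(frags)):
                if i != j:
                    k = overlap(frags[i], frags[j])
                    if k > best_k:
                        best_k, best_i, best_j = k, i, j
        merged = frags[best_i] + frags[best_j][best_k:]  # join, dropping the duplicated overlap
        frags = [f for idx, f in enumerate(frags) if idx not in (best_i, best_j)] + [merged]
    return frags[0]

# Toy example: overlapping fragments of a short made-up sequence.
reads = ["GATTACAGG", "ACAGGTTCA", "GTTCACCTG"]
print(greedy_assemble(reads))   # prints GATTACAGGTTCACCTG

Real assemblers work with millions of error-prone reads and far cleverer data structures, which is why Maynard’s point about data manipulation and machine learning being needed to make the puzzle tractable rings true.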

General thoughts

‘Films from the Future’ offers science that’s readable for non-scientific types, lively writing, and the occasional memoir-ish anecdote. As well, Dr. Maynard raises the curtain on aspects of the scientific enterprise that most of us do not get to see. For example, the meeting between Sergey Zimov and George Church and how it led to new ‘de-extinction’ work. He also describes the problems that the scientists encountered and are encountering. This is in direct contrast to how scientific work is usually presented in the news media: as one glorious breakthrough after the next.

Maynard does discuss the issues of social inequality, power, and ownership. For example, who owns your transplant or data? Puzzlingly, he doesn’t touch on the current environment where scientists in the US and elsewhere are encouraged/pressured to start up companies commercializing their work.

Nor is there any mention of how universities are participating in this grand business experiment often called ‘innovation’. (My March 15, 2017 posting describes an outcome for the CRISPR [gene editing system] patent fight between Harvard University’s and MIT’s [Massachusetts Institute of Technology] Broad Institute and the University of California at Berkeley, and my Sept. 11, 2018 posting about an art/science exhibit in Vancouver [Canada] provides an update for round 2 of the Broad Institute vs. UC Berkeley patent fight [scroll down about 65% of the way].) *To read about how my ‘cultural blindness’ shows up here, scroll down to the single asterisk at the end.*

There’s a foray through machine learning and big data as applied to predictive policing in Maynard’s ‘Minority Report’ chapter (my November 23, 2017 posting describes Vancouver’s predictive policing initiative [no psychics involved], the first such in Canada). There’s no mention of surveillance technology, which, if I recall properly, was part of the film’s future environment, with surveillance by both the state and corporations. (Mia Armstrong’s November 15, 2018 article for Slate on Chinese surveillance being exported to Venezuela provides interesting insight.)

The gaps are interesting and various. This of course points to a problem all science writers have when attempting an overview of science. (Carl Zimmer’s latest, ‘She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity’, a doorstopping 574 pages, also has some gaps despite his focus on heredity.)

Maynard has worked hard to give a comprehensive overview in a remarkably compact 279 pages while developing his theme about science and the human element. In other words, science is not monolithic; it’s created by human beings and subject to all the flaws and benefits that humanity’s efforts are always subject to—scientists are people too.

The readership for ‘Films from the Future’ spans from the mildly interested science reader to someone like me who’s been writing/blogging about these topics (more or less) for about 10 years. I learned a lot reading this book.

Next time (I’m hopeful there’ll be a next time), Maynard might want to describe the parameters he’s set for his book in more detail than is possible in his chapter headings. He could have mentioned that he’s not a cinéaste, so his descriptions of the movies are very much focused on the story as conveyed through words. He doesn’t mention colour palettes, camera angles, or even cultural lenses.

Take, for example, his chapter on ‘Ghost in the Shell’. Focused on the Japanese animation film and not the live-action Hollywood version, he talks about human enhancement and cyborgs. The Japanese have a different take on robots, inanimate objects, and, I assume, cyborgs than is found in Canada or the US or Great Britain, for that matter (according to a colleague of mine, an Englishwoman who lived in Japan for ten or more years). There’s also the chapter on the Ealing comedy, The Man in The White Suit, an English film from the 1950s. That too has a cultural (as well as historical) flavour, but since Maynard is from England, he may take that cultural flavour for granted. ‘Never Let Me Go’ (Chapter Three) was also a UK production, albeit far more recent than the Ealing comedy, and it’s interesting to consider how a UK production about cloning might differ from a US or Chinese or … production on the topic. I am hearkening back to Maynard’s anecdote about movies giving him new ways of seeing and imagining the world.

There’s an easy corrective: a couple of sentences in Maynard’s introductory chapter cautioning that an in-depth exploration of ‘cultural lenses’ was not possible without expanding the book to an unreadable size, followed by a sentence in each of those two chapters noting that there are cultural differences.

One area where I had a significant problem was with regard to being “programmed” and having  “instinctual” behaviour,

As a species, we are embarrassingly programmed to see “different” as “threatening,” and to take instinctive action against it. It’s a trait that’s exploited in many science fiction novels and movies, including those in this book. If we want to see the rise of increasingly augmented individuals, we need to be prepared for some social strife. (p. 136)

These concepts are much debated in the social sciences and there are arguments for and against ‘instincts regarding strangers and their possible differences’. I gather Dr. Maynard hews to the ‘instinct to defend/attack’ school of thought.

One final quandary: there was no sex, and I was expecting it in the Ex Machina chapter, especially now that sexbots are about to take over the world (I exaggerate). Certainly, if you’re talking about “social strife,” then sexbots would seem to be a fruitful line of inquiry, especially when there’s talk of how they could benefit families (my August 29, 2018 posting). Again, there could have been a sentence explaining why Maynard focused almost exclusively in this chapter on the discussions about artificial intelligence and superintelligence.

Taken in the context of the book, these are trifling issues and shouldn’t stop you from reading Films from the Future. What Maynard has accomplished here is impressive and I hope it’s just the beginning.

Final note

Bravo Andrew! (Note: We’ve been ‘internet acquaintances/friends’ since the first year I started blogging. When I’m referring to him in his professional capacity, he’s Dr. Maynard, and when it’s not strictly in his professional capacity, it’s Andrew. For this commentary/review I wanted to emphasize his professional status.)

If you need to see a few more samples of Andrew’s writing, there’s a Nov. 15, 2018 essay on The Conversation, Sci-fi movies are the secret weapon that could help Silicon Valley grow up and a Nov. 21, 2018 article on slate.com, The True Cost of Stain-Resistant Pants; The 1951 British comedy The Man in the White Suit anticipated our fears about nanotechnology. Enjoy.

****Added at 1700 hours on Nov. 22, 2018: You can purchase Films from the Future here.

*Nov. 23, 2018: I should have been more specific and said ‘academic scientists’. In Canada, the greater percentage of scientists are academics. It’s to the point where the OECD (Organization for Economic Cooperation and Development) has noted that, amongst industrialized countries, Canada has very few industrial scientists in comparison to the others.

Xenotransplantation—organs for transplantation in human patients—it’s a business and a science

The last time (June 18, 2018 post) I mentioned xenotransplantation (transplanting organs from one species into another species; see more here), it was in the context of an art/sci (or sciart) event coming to Vancouver (Canada),

Patricia Piccinini’s Curious Imaginings Courtesy: Vancouver Biennale [downloaded from http://dailyhive.com/vancouver/vancouver-biennale-unsual-public-art-2018/]

The latest edition of the Vancouver Biennale was featured in a June 6, 2018 news item on the Daily Hive (Vancouver),

Melbourne artist Patricia Piccinini’s Curious Imaginings is expected to be one of the most talked about installations of the exhibit. Her style of “oddly captivating, somewhat grotesque, human-animal hybrid creature” is meant to be shocking and thought-provoking.

Piccinini’s interactive [emphasis mine] experience will “challenge us to explore the social impacts of emerging biotechnology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.”

Piccinini’s work will be displayed in the 105-year-old Patricia Hotel in Vancouver’s Strathcona neighbourhood. The 90-day ticketed exhibition [emphasis mine] is scheduled to open this September [2018].

(The show opens on Sept. 14, 2018.)

At the time, I had yet to stumble across Ingfei Chen’s thoughtful dive into the topic in her May 9, 2018 article for Slate.com,

In the United States, the clock is ticking for more than 114,700 adults and children waiting for a donated kidney or other lifesaving organ, and each day, nearly 20 of them die. Researchers are devising a new way to grow human organs inside other animals, but the method raises potentially thorny ethical issues. Other conceivable futuristic techniques sound like dystopian science fiction. As we envision an era of regenerative medicine decades from now, how far is society willing to go to solve the organ shortage crisis?

I found myself pondering this question after a discussion about the promises of stem cell technologies veered from the intriguing into the bizarre. I was interviewing bioengineer Zev Gartner, co-director and research coordinator of the Center for Cellular Construction at the University of California, San Francisco, about so-called organoids, tiny clumps of organlike tissue that can self-assemble from human stem cells in a Petri dish. These tissue bits are lending new insights into how our organs form and diseases take root. Some researchers even hope they can nurture organoids into full-size human kidneys, pancreases, and other organs for transplantation.

Certain organoid experiments have recently set off alarm bells, but when I asked Gartner about it, his radar for moral concerns was focused elsewhere. For him, the “really, really thought-provoking” scenarios involve other emerging stem cell–based techniques for engineering replacement organs for people, he told me. “Like blastocyst complementation,” he said.

Never heard of it? Neither had I. Turns out it’s a powerful new genetic engineering trick that researchers hope to use for growing human organs inside pigs or sheep—organs that could be genetically personalized for transplant patients, in theory avoiding immune-system rejection problems. The science still has many years to go, but if it pans out, it could be one solution to the organ shortage crisis. However, the prospect of creating hybrid animals with human parts and killing them to harvest organs has already raised a slew of ethical questions. In 2015, the National Institutes of Health placed a moratorium on federal funding of this nascent research area while it evaluated and discussed the issues.

As Gartner sees it, the debate over blastocyst complementation research—work that he finds promising—is just one of many conversations that society needs to have about the ethical and social costs and benefits of future technologies for making lifesaving transplant organs. “There’s all these weird ways that we could go about doing this,” he said, with a spectrum of imaginable approaches that includes organoids, interspecies organ farming, and building organs from scratch using 3D bioprinters. But even if it turns out we can produce human organs in these novel ways, the bigger issue, in each technological instance, may be whether we should.

Gartner crystallized things with a downright creepy example: “We know that the best bioreactor for tissues and organs for humans are human beings,” he said. Hypothetically, “the best way to get you a new heart would be to clone you, grow up a copy of yourself, and take the heart out.” [emphasis mine] Scientists could probably produce a cloned person with the technologies we already have, if money and ethics were of no concern. “But we don’t want to go there, right?” he added in the next breath. “The ethics involved in doing it are not compatible with who we want to be as a society.”

This sounds like Gartner may have been reading some science fiction, specifically, Lois McMaster Bujold and her Barrayar series where she often explored the ethics and possibilities of bioengineering. At this point, some of her work seems eerily prescient.

As for Chen’s article, I strongly encourage you to read it in its entirety if you have the time.

Medicine, healing, and big money

At about the same time, there was a May 31, 2018 news item on phys.org offering a perspective from some of the leaders in the science and the business (Note: Links have been removed),

Over the past few years, researchers led by George Church have made important strides toward engineering the genomes of pigs to make their cells compatible with the human body. So many think that it’s possible that, with the help of CRISPR technology, a healthy heart for a patient in desperate need might one day come from a pig.

“It’s relatively feasible to change one gene in a pig, but to change many dozens—which is quite clear is the minimum here—benefits from CRISPR,” an acronym for clustered regularly interspaced short palindromic repeats, said Church, the Robert Winthrop Professor of Genetics at Harvard Medical School (HMS) and a core faculty member of Harvard’s Wyss Institute for Biologically Inspired Engineering. Xenotransplantation is “one of few” big challenges (along with gene drives and de-extinction, he said) “that really requires the ‘oomph’ of CRISPR.”

To facilitate the development of safe and effective cells, tissues, and organs for future medical transplantation into human patients, Harvard’s Office of Technology Development has granted a technology license to the Cambridge biotech startup eGenesis.

Co-founded by Church and former HMS doctoral student Luhan Yang in 2015, eGenesis announced last year that it had raised $38 million to advance its research and development work. At least eight former members of the Church lab—interns, doctoral students, postdocs, and visiting researchers—have continued their scientific careers as employees there.

“The Church Lab is well known for its relentless pursuit of scientific achievements so ambitious they seem improbable—and, indeed, [for] its track record of success,” said Isaac Kohlberg, Harvard’s chief technology development officer and senior associate provost. “George deserves recognition too for his ability to inspire passion and cultivate a strong entrepreneurial drive among his talented research team.”

The license from Harvard OTD covers a powerful set of genome-engineering technologies developed at HMS and the Wyss Institute, including access to foundational intellectual property relating to the Church Lab’s 2012 breakthrough use of CRISPR, led by Yang and Prashant Mali, to edit the genome of human cells. Subsequent innovations that enabled efficient and accurate editing of numerous genes simultaneously are also included. The license is exclusive to eGenesis but limited to the field of xenotransplantation.

A May 30, 2018 Harvard University news release by Caroline Petty, which originated the news item, explores some of the issues associated with incubating human organs in other species,

The prospect of using living, nonhuman organs, and concerns over the infectiousness of pathogens either present in the tissues or possibly formed in combination with human genetic material, have prompted the Food and Drug Administration to issue detailed guidance on xenotransplantation research and development since the mid-1990s. In pigs, a primary concern has been that porcine endogenous retroviruses (PERVs), strands of potentially pathogenic DNA in the animals’ genomes, might infect human patients and eventually cause disease. [emphases mine]

That’s where the Church lab’s CRISPR expertise has enabled significant advances. In 2015, the lab published important results in the journal Science, successfully demonstrating the use of genome engineering to eliminate all 62 PERVs in porcine cells. Science later called it “the most widespread CRISPR editing feat to date.”

In 2017, with collaborators at Harvard, other universities, and eGenesis, Church and Yang went further. Publishing again in Science, they first confirmed earlier researchers’ fears: Porcine cells can, in fact, transmit PERVs into human cells, and those human cells can pass them on to other, unexposed human cells. (It is still unknown under what circumstances those PERVs might cause disease.) In the same paper, they corrected the problem, announcing the embryogenesis and birth of 37 PERV-free pigs. [Note: My July 17, 2018 post features research which suggests CRISPR-Cas9 gene editing may cause greater genetic damage than had been thought.]

“Taken together, those innovations were stunning,” said Vivian Berlin, director of business development in OTD, who manages the commercialization strategy for much of Harvard’s intellectual property in the life sciences. “That was the foundation they needed, to convince both the scientific community and the investment community that xenotransplantation might become a reality.”

“After hundreds of tests, this was a critical milestone for eGenesis — and the entire field — and represented a key step toward safe organ transplantation from pigs,” said Julie Sunderland, interim CEO of eGenesis. “Building on this study, we hope to continue to advance the science and potential of making xenotransplantation a safe and routine medical procedure.”

Genetic engineering may undercut human diseases, but also could help restore extinct species, researcher says. [Shades of the Jurassic Park movies!]

It’s not, however, the end of the story: An immunological challenge remains, which eGenesis will need to address. The potential for a patient’s body to outright reject transplanted tissue has stymied many previous attempts at xenotransplantation. Church said numerous genetic changes must be achieved to make porcine organs fully compatible with human patients. Among these are edits to several immune functions, coagulation functions, complements, and sugars, as well as the PERVs.

“Trying the straight transplant failed almost immediately, within hours, because there’s a huge mismatch in the carbohydrates on the surface of the cells, in particular alpha-1-3-galactose, and so that was a showstopper,” Church explained. “When you delete that gene, which you can do with conventional methods, you still get pretty fast rejection, because there are a lot of other aspects that are incompatible. You have to take care of each of them, and not all of them are just about removing things — some of them you have to humanize. There’s a great deal of subtlety involved so that you get normal pig embryogenesis but not rejection.

“Putting it all together into one package is challenging,” he concluded.

In short, it’s the next big challenge for CRISPR.

Not unexpectedly, there is no mention of the CRISPR patent fight between Harvard/MIT's (Massachusetts Institute of Technology) Broad Institute and the University of California at Berkeley (UC Berkeley). My March 15, 2017 posting featured an outcome where the Broad Institute won the first round of the fight. As I recall, it was a decision based on the principles associated with King Solomon, i.e., the US Patent Office divided the baby and UC Berkeley got the less important part. As you might expect, the decision has been appealed. In an April 30, 2018 piece, Scientific American reprinted an article about the latest round in the fight written by Sharon Begley for STAT (Note: Links have been removed),

All You Need to Know for Round 2 of the CRISPR Patent Fight

It’s baaaaack, that reputation-shredding, stock-moving fight to the death over key CRISPR patents. On Monday morning in Washington, D.C., the U.S. Court of Appeals for the Federal Circuit will hear oral arguments in University of California v. Broad Institute. Questions?

How did we get here? The patent office ruled in February 2017 that the Broad’s 2014 CRISPR patent on using CRISPR-Cas9 to edit genomes, based on discoveries by Feng Zhang, did not “interfere” with a patent application by UC based on the work of UC Berkeley’s Jennifer Doudna. In plain English, that meant the Broad’s patent, on using CRISPR-Cas9 to edit genomes in eukaryotic cells (all animals and plants, but not bacteria), was different from UC’s, which described Doudna’s experiments using CRISPR-Cas9 to edit DNA in a test tube—and it was therefore valid. The Patent Trial and Appeal Board concluded that when Zhang got CRISPR-Cas9 to work in human and mouse cells in 2012, it was not an obvious extension of Doudna’s earlier research, and that he had no “reasonable expectation of success.” UC appealed, and here we are.

For anyone who may not realize what the stakes are for these institutions, Linda Williams in a March 16, 1999 article for the LA Times had this to say about universities, patents, and money,

The University of Florida made about $2 million last year in royalties on a patent for Gatorade Thirst Quencher, a sports drink that generates some $500 million to $600 million a year in revenue for Quaker Oats Co.

The payments place the university among the top five in the nation in income from patent royalties.

Oh, but if some people on the Gainesville, Fla., campus could just turn back the clock. “If we had done Gatorade right, we would be getting $5 or $6 million (a year),” laments Donald Price, director of the university’s office of corporate programs. “It is a classic example of how not to handle a patent idea,” he added.

Gatorade was developed in 1965 when many universities were ill equipped to judge the commercial potential of ideas emerging from their research labs. Officials blew the university’s chance to control the Gatorade royalties when they declined to develop a professor’s idea.

The Gatorade story does not stop there and, even though it’s almost 20 years old, this article stands the test of time. I strongly encourage you to read it if the business end of patents and academia interest you or if you would like to develop more insight into the Broad Institute/UC Berkeley situation.
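As a rough aside, the article's own figures imply the university's effective royalty rate was well under one percent. Here's the back-of-the-envelope arithmetic, with the midpoints of the quoted ranges assumed for simplicity:

```python
# Back-of-the-envelope arithmetic using the LA Times figures (midpoints assumed).
royalties = 2_000_000        # roughly what the University of Florida received per year
revenue = 550_000_000        # midpoint of Gatorade's $500M-$600M annual revenue
hoped_for = 5_500_000        # midpoint of the $5M-$6M Price says the university should get

print(f"Actual effective royalty rate:       {royalties / revenue:.2%}")   # about 0.36%
print(f"'Done right' royalty rate (approx.): {hoped_for / revenue:.2%}")   # about 1.00%
```

In other words, even the 'done right' scenario Price laments would have amounted to only about one percent of Gatorade's annual revenue.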

Getting back to the science, there is that pesky matter of diseases crossing over from one species to another. While Harvard and eGenesis claim a victory in this area, it seems more work needs to be done.

Infections from pigs

An August 29, 2018 University of Alabama at Birmingham news release (also on EurekAlert) by Jeff Hansen describes the latest chapter in the quest to provide more organs for transplantation,

A shortage of organs for transplantation — including kidneys and hearts — means that many patients die while still on waiting lists. So, research at the University of Alabama at Birmingham and other sites has turned to pig organs as an alternative. [emphasis mine]

Using gene-editing, researchers have modified such organs to prevent rejection, and research with primates shows the modified pig organs are well-tolerated.

An added step is needed to ensure the safety of these inter-species transplants — sensitive, quantitative assays for viruses and other infectious microorganisms in donor pigs that potentially could gain access to humans during transplantation.

The U.S. Food and Drug Administration requires such testing, prior to implantation, of tissues used for xenotransplantation from animals to humans. It is possible — though very unlikely — that an infectious agent in transplanted tissues could become an emerging infectious disease in humans.

In a paper published in Xenotransplantation, Mark Prichard, Ph.D., and colleagues at UAB have described the development and testing of 30 quantitative assays for pig infectious agents. These assays had sensitivities similar to clinical lab assays for viral loads in human patients. After validation, the UAB team also used the assays on nine sows and 22 piglets delivered from the sows through caesarian section.

“Going forward, ensuring the safety of these organs is of paramount importance,” Prichard said. “The use of highly sensitive techniques to detect potential pathogens will help to minimize adverse events in xenotransplantation.”

“The assays hold promise as part of the screening program to identify suitable donor animals, validate and release transplantable organs for research purposes, and monitor transplant recipients,” said Prichard, a professor in the UAB Department of Pediatrics and director of the Department of Pediatrics Molecular Diagnostics Laboratory.

The UAB researchers developed quantitative polymerase chain reaction, or qPCR, assays for 28 viruses sometimes found in pigs and two groups of mycoplasmas. They established reproducibility, sensitivity, specificity and lower limit of detection for each assay. All but three showed features of good quantitative assays, and the lower limit of detection values ranged between one and 16 copies of the viral or bacterial genetic material.

Also, the pig virus assays did not give false positives for some closely related human viruses.

As a start to understanding the infectious disease load in normal healthy animals and ensuring the safety of pig tissues used in xenotransplantation research, the researchers then screened blood, nasal swab and stool specimens from nine adult sows and 22 of their piglets delivered by caesarian section.

Mycoplasma species and two distinct herpesviruses were the most commonly detected microorganisms. Yet 14 piglets that were delivered from three sows infected with either or both herpesviruses were not infected with the herpesviruses, showing that transmission of these viruses from sow to the caesarian-delivery piglet was inefficient.

Prichard says the assays promise to enhance the safety of pig tissues for xenotransplantation, and they will also aid evaluation of human specimens after xenotransplantation.

The UAB researchers say they subsequently have evaluated more than 300 additional specimens, and that resulted in the detection of most of the targets. “The detection of these targets in pig specimens provides reassurance that the analytical methods are functioning as designed,” said Prichard, “and there is no a priori reason some targets might be more difficult to detect than others with the methods described here.”

As is my custom, here’s a link to and a citation for the paper,

Xenotransplantation panel for the detection of infectious agents in pigs by Caroll B. Hartline, Ra'Shun L. Conner, Scott H. James, Jennifer Potter, Edward Gray, Jose Estrada, Mathew Tector, A. Joseph Tector, Mark N. Prichard. Xenotransplantation, Volume 25, Issue 4, July/August 2018, e12427. DOI: https://doi.org/10.1111/xen.12427 First published: 18 August 2018

This paper is open access.
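For readers wondering what validation terms like sensitivity, specificity, and 'lower limit of detection' mean in practice, here is a minimal, hypothetical sketch (in Python) of how such metrics might be tallied for a single assay. The dilution series, replicate counts, panel sizes, and the 95% hit-rate threshold below are all invented for illustration; none of them come from the paper.

```python
# Hypothetical sketch of summarizing validation metrics for one qPCR assay.
# All numbers are invented for illustration; they are NOT from the UAB study.

# Dilution series: (input copies per reaction, replicates detected, total replicates)
dilution_series = [
    (100, 20, 20),
    (50, 20, 20),
    (16, 20, 20),
    (8, 19, 20),
    (4, 15, 20),
    (1, 6, 20),
]

def lower_limit_of_detection(series, hit_rate=0.95):
    """Lowest input copy number at which the detection rate meets the target hit rate."""
    detectable = [copies for copies, hits, total in series if hits / total >= hit_rate]
    return min(detectable) if detectable else None

# Specificity: a known-negative panel (e.g., closely related human viruses) should test negative.
true_negatives, false_positives = 40, 0
specificity = true_negatives / (true_negatives + false_positives)

# Sensitivity: a known-positive panel should test positive.
true_positives, false_negatives = 38, 2
sensitivity = true_positives / (true_positives + false_negatives)

print(f"Estimated lower limit of detection: {lower_limit_of_detection(dilution_series)} copies/reaction")
print(f"Sensitivity: {sensitivity:.1%}, Specificity: {specificity:.1%}")
```

In this toy example the estimated limit of detection works out to eight copies per reaction, which happens to fall inside the one-to-16-copy range the UAB team reports for their real assays.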

All this leads to questions about chimeras. If a pig is incubating organs with human cells, it's a chimera, but that then means the human receiving the organ becomes a chimera too. (For an example, see my Dec. 22, 2013 posting where there's mention of a woman who received a trachea from a pig. Scroll down about 30% of the way.)

What is it to be human?

A question much beloved of philosophers and others, it seems particularly timely given xenotransplantation and other developments such as neuroprosthetics (cyborgs) and neuromorphic (brainlike) computing.

As I've noted before, although not recently, popular culture offers a discourse on these issues. Take a look at the superhero movies and the way in which enhanced humans and aliens are presented. For example, X-Men comics and movies present mutants (humans with enhanced abilities) as despised and rejected. Video games (not really my thing, but there is the Deus Ex series, which has a cyborg as its hero) also offer insight into these issues.

Other than in popular culture and the 'bleeding edge' arts community, I can't recall any public discussion of these matters arising from the extraordinary set of technologies being deployed or prepared for deployment in the foreseeable future.

(If you're in Vancouver (Canada) from September 14 – December 15, 2018, you may want to check out Piccinini's work. Also, from my Sept. 6, 2018 posting: "NCSU [North Carolina State University] Libraries, NC State's Genetic Engineering and Society (GES) Center, and the Gregg Museum of Art & Design have issued a public call for art for the upcoming exhibition Art's Work in the Age of Biotechnology: Shaping our Genetic Futures." Deadline: Oct. 1, 2018.)

At a guess, there will be pushback from people who have no interest in debating what it is to be human because they feel they already know, and who will find these developments, when they learn about them, horrifying and unnatural.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don't know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don't believe me, take a look at the speaker list for Day 2 or the 'Canadian Stakeholder' meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference.  The first of two days coincides with IROS 2017 – one of the premiere robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: "Understanding the Canadian robotics ecosystem." Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event's closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known 'robot' drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn't whether or not the robot should learn the skills and assist Frank in his thieving ways, although that's touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he's going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot's fate being made explicit.

In a way, I find the ethics query (was the robot Frank's friend or just a machine?) posed in the film more interesting than the one in Vikander's story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn't there already be established guidelines and practices which could be adapted for robots? Or is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier, there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don't relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong's influential 1982 book, 'Orality and Literacy' (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises concerns accident data. He compares what's happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone's best interests and shared their accident data in a scheme similar to the aviation industry's.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) News online, another ethical issue is raised by Suzanne Gildert, a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here (Note: Links have been removed),

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?
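Gildert's worry is easier to see with a toy model of how teleoperated demonstrations get turned into a robot policy. The sketch below is a generic, invented illustration of learning from demonstration (sometimes called behavioural cloning), not Kindred's actual system; every function, name, and data structure in it is hypothetical.

```python
# Generic sketch of learning from teleoperated demonstrations ("behavioural cloning").
# This is an invented illustration, not Kindred's system.
import random
from collections import Counter, defaultdict

def human_pilot_policy(observation):
    """Stand-in for the human pilot taking over: maps what the robot senses to an action."""
    return "grasp" if observation["object_in_view"] else "search"

def collect_demonstrations(num_steps=100):
    """Record (observation, action) pairs while the human pilot controls the robot."""
    demos = []
    for _ in range(num_steps):
        obs = {"object_in_view": random.random() > 0.5}
        demos.append((obs, human_pilot_policy(obs)))
    return demos

def train_policy(demonstrations):
    """Toy 'training': memorize the pilot's most common action for each observation."""
    counts = defaultdict(Counter)
    for obs, action in demonstrations:
        counts[obs["object_in_view"]][action] += 1
    return {key: counter.most_common(1)[0][0] for key, counter in counts.items()}

policy = train_policy(collect_demonstrations())
print(policy)  # e.g. {True: 'grasp', False: 'search'}
```

The point relevant to Gildert's comment is that the learned policy is nothing more than a distillation of whatever the human pilot did, mannerisms, habits, and biases included.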

I notice Gildert distinguishes her robots as "intelligent robots" and then focuses on AI and the issues with bias which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). (Note: If you're in Vancouver on Oct. 26, 2017 and are interested in algorithms and bias, there's a talk being given by Dr. Cathy O'Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It's not free but tickets are here.)

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it's easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release (h/t ScienceDaily March 28, 2017 news item),

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an 'earlyish' stage (although we're already pretty far down the 'automation road'); otherwise, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?