Category Archives: social implications

Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report

Launched on Thursday, July 13, 2023, during UNESCO’s (United Nations Educational, Scientific, and Cultural Organization) “Global dialogue on the ethics of neurotechnology,” the report ties together the usual measures of national scientific supremacy (number of papers published and number of patents filed) with information on corporate investment in the field. Consequently, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends” by Daniel S. Hain, Roman Jurowetzki, Mariagrazia Squicciarini, and Lihui Xu provides better insight into the international neurotechnology scene than is sometimes found in these kinds of reports. By the way, the report is open access.

Here’s what I mean, from the report’s short summary,

Since 2013, government investments in this field have exceeded $6 billion. Private investment has also seen significant growth, with annual funding experiencing a 22-fold increase from 2010 to 2020, reaching $7.3 billion and totaling $33.2 billion.

This investment has translated into a 35-fold growth in neuroscience publications between 2000-2021 and 20-fold growth in innovations between 2000-2020, as proxied by patents. However, not all are poised to benefit from such developments, as big divides emerge.

Over 80% of high-impact neuroscience publications are produced by only ten countries, while 70% of countries contributed fewer than 10 such papers over the period considered. Similarly, only five countries hold 87% of IP5 neurotech patents.

This report sheds light on the neurotechnology ecosystem, that is, what is being developed, where and by whom, and informs about how neurotechnology interacts with other technological trajectories, especially Artificial Intelligence [emphasis mine]. [p. 2]

The money aspect is eye-opening even when you already have your suspicions. Also, it’s not entirely unexpected to learn that only ten countries produce over 80% of the high impact neurotech papers and that only five countries hold 87% of the IP5 neurotech patents but it is stunning to see it in context. (If you’re not familiar with the term ‘IP5 patents’, scroll down in this post to the relevant subhead. Hint: It means the patent was filed in one of the top five jurisdictions; I’ll leave you to guess which ones those might be.)

“Since 2013 …” isn’t quite as informative as the authors may have hoped. I wish they had given a time frame for government investments similar to what they did for corporate investments (e.g., 2010 – 2020). Also, is the $6B (likely in USD) government investment cumulative or an estimated annual number? To sum up, I would have appreciated parallel structure and specificity.

Nitpicks aside, there’s some very good material intended for policy makers. On that note, some of the analysis is beyond me. I haven’t used anything even somewhat close to their analytical tools in years and years. This commentary reflects my interests and a very rapid reading. One last thing, this is being written from a Canadian perspective. With those caveats in mind, here’s some of what I found.

A definition, social issues, country statistics, and more

There’s a definition for neurotechnology and a second mention of artificial intelligence being used in concert with neurotechnology. From the report’s executive summary,

Neurotechnology consists of devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of animals or human beings. It is poised to revolutionize our understanding of the brain and to unlock innovative solutions to treat a wide range of diseases and disorders.

Similarly to Artificial Intelligence (AI), and also due to its convergence with AI, neurotechnology may have profound societal and economic impact, beyond the medical realm. As neurotechnology directly relates to the brain, it triggers ethical considerations about fundamental aspects of human existence, including mental integrity, human dignity, personal identity, freedom of thought, autonomy, and privacy [emphases mine]. Its potential for enhancement purposes and its accessibility further amplify its prospective social and societal implications.

The recent discussions held at UNESCO’s Executive Board further shows Member States’ desire to address the ethics and governance of neurotechnology through the elaboration of a new standard-setting instrument on the ethics of neurotechnology, to be adopted in 2025. To this end, it is important to explore the neurotechnology landscape, delineate its boundaries, key players, and trends, and shed light on neurotech’s scientific and technological developments. [p. 7]

Here’s how they sourced the data for the report,

The present report addresses such a need for evidence in support of policy making in relation to neurotechnology by devising and implementing a novel methodology on data from scientific articles and patents:

● We detect topics over time and extract relevant keywords using a transformer-based language model fine-tuned for scientific text. Publication data for the period 2000-2021 are sourced from the Scopus database and encompass journal articles and conference proceedings in English. The 2,000 most cited publications per year are further used in in-depth content analysis.
● Keywords are identified through Named Entity Recognition and used to generate search queries for conducting a semantic search on patents’ titles and abstracts, using another language model developed for patent text. This allows us to identify patents associated with the identified neuroscience publications and their topics. The patent data used in the present analysis are sourced from the European Patent Office’s Worldwide Patent Statistical Database (PATSTAT). We consider IP5 patents filed between 2000-2020 having an English language abstract and exclude patents solely related to pharmaceuticals.

This approach allows mapping the advancements detailed in scientific literature to the technological applications contained in patent applications, allowing for an analysis of the linkages between science and technology. This almost fully automated novel approach allows repeating the analysis as neurotechnology evolves. [pp. 8-9]
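To make the patent-matching step a little more concrete, here’s a toy sketch of my own; it is not the report’s code (which uses transformer embeddings over Scopus and PATSTAT data). Bag-of-words count vectors stand in for the embeddings, and cosine similarity ranks patent abstracts against a query built from publication keywords.

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words vector for a text (toy stand-in for a transformer embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query_keywords, patent_abstracts, threshold=0.1):
    """Rank patent abstracts against a query built from publication keywords."""
    q = bow(" ".join(query_keywords))
    scored = [(cosine(q, bow(a)), a) for a in patent_abstracts]
    return sorted((s, a) for s, a in scored if s >= threshold)[::-1]

# Hypothetical keywords extracted (by NER, in the report's pipeline) from a publication:
keywords = ["brain", "computer", "interface", "neural", "signal"]
abstracts = [
    "A brain computer interface using neural signal decoding",
    "A chemical compound for treating hypertension",
]
ranked = semantic_search(keywords, abstracts)
print(ranked[0][1])  # the BCI abstract ranks first; the pharma one is filtered out
```

The real pipeline does this at scale with learned embeddings, but the shape of the computation (vectorize, score, threshold, rank) is the same.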

Findings in bullet points,

Key stylized facts are:

● The field of neuroscience has witnessed a remarkable surge in the overall number of publications since 2000, exhibiting a nearly 35-fold increase over the period considered, reaching 1.2 million in 2021. The annual number of publications in neuroscience has nearly tripled since 2000, exceeding 90,000 publications a year in 2021. This increase became even more pronounced since 2019.
● The United States leads in terms of neuroscience publication output (40%), followed by the United Kingdom (9%), Germany (7%), China (5%), Canada (4%), Japan (4%), Italy (4%), France (4%), the Netherlands (3%), and Australia (3%). These countries account for over 80% of neuroscience publications from 2000 to 2021.
● Big divides emerge, with 70% of countries in the world having less than 10 high-impact neuroscience publications between 2000 to 2021.
● Specific neurotechnology-related research trends between 2000 and 2021 include:
○ An increase in Brain-Computer Interface (BCI) research around 2010, maintaining a consistent presence ever since.
○ A significant surge in Epilepsy Detection research in 2017 and 2018, reflecting the increased use of AI and machine learning in healthcare.
○ Consistent interest in Neuroimaging Analysis, which peaks around 2004, likely because of its importance in brain activity and language comprehension studies.
○ While peaking in 2016 and 2017, Deep Brain Stimulation (DBS) remains a persistent area of research, underlining its potential in treating conditions like Parkinson’s disease and essential tremor.
● Between 2000 and 2020, the total number of patent applications in this field increased significantly, experiencing a 20-fold increase from less than 500 to over 12,000. In terms of annual figures, a consistent upward trend in neurotechnology-related patent applications emerges, with a notable doubling observed between 2015 and 2020.
● The United States accounts for nearly half of all worldwide patent applications (47%). Other major contributors include South Korea (11%), China (10%), Japan (7%), Germany (7%), and France (5%). These five countries together account for 87% of IP5 neurotech patents applied between 2000 and 2020.
○ The United States has historically led the field, with a peak around 2010, a decline towards 2015, and a recovery up to 2020.
○ South Korea emerged as a significant contributor after 1990, overtaking Germany in the late 2000s to become the second-largest developer of neurotechnology. By the late 2010s, South Korea’s annual neurotechnology patent applications approximated those of the United States.
○ China exhibits a sharp increase in neurotechnology patent applications in the mid-2010s, bringing it on par with the United States in terms of application numbers.
● The United States ranks highest in both scientific publications and patents, indicating their strong ability to transform knowledge into marketable inventions. China, France, and Korea excel in leveraging knowledge to develop patented innovations. Conversely, countries such as the United Kingdom, Germany, Italy, Canada, Brazil, and Australia lag behind in effectively translating neurotech knowledge into patentable innovations.
● In terms of patent quality measured by forward citations, the leading countries are Germany, US, China, Japan, and Korea.
● A breakdown of patents by technology field reveals that Computer Technology is the most important field in neurotechnology, exceeding Medical Technology, Biotechnology, and Pharmaceuticals. The growing importance of algorithmic applications, including neural computing techniques, also emerges by looking at the increase in patent applications in these fields between 2015-2020. Compared to the reference year, computer technologies-related patents in neurotech increased by 355% and by 92% in medical technology.
● An analysis of the specialization patterns of the top-5 countries developing neurotechnologies reveals that Germany has been specializing in chemistry-related technology fields, whereas Asian countries, particularly South Korea and China, focus on computer science and electrical engineering-related fields. The United States exhibits a balanced configuration with specializations in both chemistry and computer science-related fields.
● The entities – i.e. both companies and other institutions – leading worldwide innovation in the neurotech space are: IBM (126 IP5 patents, US), Ping An Technology (105 IP5 patents, CH), Fujitsu (78 IP5 patents, JP), Microsoft (76 IP5 patents, US)1, Samsung (72 IP5 patents, KR), Sony (69 IP5 patents, JP) and Intel (64 IP5 patents, US).

This report further proposes a pioneering taxonomy of neurotechnologies based on International Patent Classification (IPC) codes.

● 67 distinct patent clusters in neurotechnology are identified, which mirror the diverse research and development landscape of the field. The 20 most prominent neurotechnology groups, particularly in areas like multimodal neuromodulation, seizure prediction, neuromorphic computing [emphasis mine], and brain-computer interfaces, point to potential strategic areas for research and commercialization.
● The variety of patent clusters identified mirrors the breadth of neurotechnology’s potential applications, from medical imaging and limb rehabilitation to sleep optimization and assistive exoskeletons.
● The development of a baseline IPC-based taxonomy for neurotechnology offers a structured framework that enriches our understanding of this technological space, and can facilitate research, development and analysis. The identified key groups mirror the interdisciplinary nature of neurotechnology and underscore the potential impact of neurotechnology, not only in healthcare but also in areas like information technology and biomaterials, with non-negligible effects over societies and economies.

1 If we consider Microsoft Technology Licensing LLC and Microsoft Corporation as being under the same umbrella, Microsoft leads worldwide developments with 127 IP5 patents. Similarly, if we were to consider that Siemens AG and Siemens Healthcare GmbH belong to the same conglomerate, Siemens would appear much higher in the ranking, in third position, with 84 IP5 patents. The distribution of intellectual property assets across companies belonging to the same conglomerate is frequent and mirrors strategic as well as operational needs and features, among others. [pp. 9-11]

Surprises and comments

Interesting and helpful to learn that “neurotechnology interacts with other technological trajectories, especially Artificial Intelligence;” this has changed and improved my understanding of neurotechnology.

It was unexpected to find Canada in the top ten countries producing neuroscience papers. However, finding out that the country lags in translating its ‘neuro’ knowledge into patentable innovation is not entirely a surprise.

It can’t be an accident that countries with major ‘electronics and computing’ companies lead in patents. These companies do have researchers but they also buy startups to acquire patents. They (and ‘patent trolls’) will also file patents preemptively. For the patent trolls, it’s a moneymaking proposition and for the large companies, it’s a way of protecting their own interests and/or (I imagine) forcing a sale.

The mention of neuromorphic (brainlike) computing in the taxonomy section was surprising and puzzling. Up to this point, I’ve thought of neuromorphic computing as a kind of alternative or addition to standard computing but the authors have blurred the lines as per UNESCO’s definition of neurotechnology (specifically, “… emulate the structure and function of the neural systems of animals or human beings”). Again, this report is broadening my understanding of neurotechnology. Of course, it took two instances, the definition and the taxonomy, before I quite grasped it.

What’s puzzling is that neuromorphic engineering, a broader term that includes neuromorphic computing, isn’t used or mentioned. (For an explanation of the terms neuromorphic computing and neuromorphic engineering, there’s my June 23, 2023 posting, “Neuromorphic engineering: an overview.”)

The report

I won’t have time for everything. Here are some of the highlights from my admittedly personal perspective.

It’s not only about curing disease

From the report,

Neurotechnology’s applications however extend well beyond medicine [emphasis mine], and span from research, to education, to the workplace, and even people’s everyday life. Neurotechnology-based solutions may enhance learning and skill acquisition and boost focus through brain stimulation techniques. For instance, early research finds that brain-zapping caps appear to boost memory for at least one month (Berkeley, 2022). This could one day be used at home to enhance memory functions [emphasis mine]. They can further enable new ways to interact with the many digital devices we use in everyday life, transforming the way we work, live and interact. One example is the Sound Awareness wristband developed by a Stanford team (Neosensory, 2022) which enables individuals to “hear” by converting sound into tactile feedback, so that sound impaired individuals can perceive spoken words through their skin. Takagi and Nishimoto (2023) analyzed the brain scans taken through Magnetic Resonance Imaging (MRI) as individuals were shown thousands of images. They then trained a generative AI tool called Stable Diffusion on the brain scan data of the study’s participants, thus creating images that roughly corresponded to the real images shown. While this does not correspond to reading the mind of people, at least not yet, and some limitations of the study have been highlighted (Parshall, 2023), it nevertheless represents an important step towards developing the capability to interface human thoughts with computers [emphasis mine], via brain data interpretation.

While the above examples may sound somewhat like science fiction, the recent uptake of generative Artificial Intelligence applications and of large language models such as ChatGPT or Bard, demonstrates that the seemingly impossible can quickly become an everyday reality. At present, anyone can purchase online electroencephalogram (EEG) devices for a few hundred dollars [emphasis mine], to measure the electrical activity of their brain for meditation, gaming, or other purposes. [pp. 14-15]

This is a very impressive achievement. Some of the research cited was published earlier this year (2023). The extraordinary speed is a testament to the efforts by the authors and their teams. It’s also a testament to how quickly the field is moving.

I’m glad to see the mention of and focus on consumer neurotechnology. (While the authors don’t speculate, I am free to do so.) Consumer neurotechnology could be viewed as one of the steps toward normalizing a cyborg future for all of us. Yes, we have books, television programmes, movies, and video games, which all normalize the idea but the people depicted have been severely injured and require the augmentation. With consumer neurotechnology, you have easily accessible devices being used to enhance people who aren’t injured; they just want to be ‘better’.

This phrase seemed particularly striking “… an important step towards developing the capability to interface human thoughts with computers” in light of some claims made by the Australian military in my June 13, 2023 posting “Mind-controlled robots based on graphene: an Australian research story.” (My posting has an embedded video demonstrating the Brain Robotic Interface (BRI) in action. Also, see the paragraph below the video for my ‘measured’ response.)

There’s no mention of the military in the report, which seems like a deliberate rather than an inadvertent omission given the importance of military innovation where technology is concerned.

This section gives a good overview of government initiatives (in the report it’s followed by a table of the programmes),

Thanks to the promises it holds, neurotechnology has garnered significant attention from both governments and the private sector and is considered by many as an investment priority. According to the International Brain Initiative (IBI), brain research funding has become increasingly important over the past ten years, leading to a rise in large-scale state-led programs aimed at advancing brain intervention technologies (International Brain Initiative, 2021). Since 2013, initiatives such as the United States’ Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the European Union’s Human Brain Project (HBP), as well as major national initiatives in China, Japan and South Korea have been launched with significant funding support from the respective governments. The Canadian Brain Research Strategy, initially operated as a multi-stakeholder coalition on brain research, is also actively seeking funding support from the government to transform itself into a national research initiative (Canadian Brain Research Strategy, 2022). A similar proposal is also seen in the case of the Australian Brain Alliance, calling for the establishment of an Australian Brain Initiative (Australian Academy of Science, n.d.). [pp. 15-16]


There are some concerns such as these,

Beyond the medical realm, research suggests that emotional responses of consumers related to preferences and risks can be concurrently tracked by neurotechnology, such as neuroimaging, and that neural data can better predict market-level outcomes than traditional behavioral data (Karmarkar and Yoon, 2016). As such, neural data is increasingly sought after in the consumer market for purposes such as digital phenotyping4, neurogaming5, and neuromarketing6 (UNESCO, 2021). This surge in demand gives rise to risks like hacking, unauthorized data reuse, extraction of privacy-sensitive information, digital surveillance, criminal exploitation of data, and other forms of abuse. These risks prompt the question of whether neural data needs distinct definition and safeguarding measures.

These issues are particularly relevant today as a wide range of electroencephalogram (EEG) headsets that can be used at home are now available in consumer markets for purposes that range from meditation assistance to controlling electronic devices through the mind. Imagine an individual is using one of these devices to play a neurofeedback game, which records the person’s brain waves during the game. Without the person being aware, the system can also identify the patterns associated with an undiagnosed mental health condition, such as anxiety. If the game company sells this data to third parties, e.g. health insurance providers, this may lead to an increase of insurance fees based on undisclosed information. This hypothetical situation would represent a clear violation of mental privacy and of unethical use of neural data.

Another example is in the field of advertising, where companies are increasingly interested in using neuroimaging to better understand consumers’ responses to their products or advertisements, a practice known as neuromarketing. For instance, a company might use neural data to determine which advertisements elicit the most positive emotional responses in consumers. While this can help companies improve their marketing strategies, it raises significant concerns about mental privacy. Questions arise in relation to consumers being aware or not that their neural data is being used, and in the extent to which this can lead to manipulative advertising practices that unfairly exploit unconscious preferences. Such potential abuses underscore the need for explicit consent and rigorous data protection measures in the use of neurotechnology for neuromarketing purposes. [pp. 21-22]


Some countries already have laws and regulations regarding neurotechnology data,

At the national level, only a few countries have enacted laws and regulations to protect mental integrity or have included neuro-data in personal data protection laws (UNESCO, University of Milan-Bicocca (Italy) and State University of New York – Downstate Health Sciences University, 2023). Examples are the constitutional reform undertaken by Chile (Republic of Chile, 2021), the Charter for the responsible development of neurotechnologies of the Government of France (Government of France, 2022), and the Digital Rights Charter of the Government of Spain (Government of Spain, 2021). They propose different approaches to the regulation and protection of human rights in relation to neurotechnology. Countries such as the UK are also examining under which circumstances neural data may be considered as a special category of data under the general data protection framework (i.e. UK’s GDPR) (UK’s Information Commissioner’s Office, 2023). [p. 24]

As you can see, these are recent laws. There doesn’t seem to be any attempt here in Canada even though there is an act being reviewed in Parliament that could conceivably include neural data. This is from my May 1, 2023 posting,

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). [emphasis added July 11, 2023] You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

My focus at the time was artificial intelligence. Now, after reading this UNESCO report and briefly looking at the Innovation, Science and Economic Development (ISED) Canada summary and the detailed descriptions of the act on ISED’s Canada’s Digital Charter webpage, I don’t see anything that specifies neural data, but it’s not excluded either.

IP5 patents

Here’s the explanation (the footnote is included at the end of the excerpt),

IP5 patents represent a subset of overall patents filed worldwide, which have the characteristic of having been filed in at least one of the top intellectual property offices (IPOs) worldwide (the so-called IP5, namely the Chinese National Intellectual Property Administration, CNIPA (formerly SIPO); the European Patent Office, EPO; the Japan Patent Office, JPO; the Korean Intellectual Property Office, KIPO; and the United States Patent and Trademark Office, USPTO) as well as another country, which may or may not be an IP5. This signals their potential applicability worldwide, as their inventiveness and industrial viability have been validated by at least two leading IPOs. This gives these patents a sort of “quality” check, also since patenting inventions is costly and if applicants try to protect the same invention in several parts of the world, this normally mirrors that the applicant has expectations about their importance and expected value. If we were to conduct the same analysis using information about individually considered patents applied worldwide, i.e. without filtering for quality nor considering patent families, we would risk conducting a biased analysis based on duplicated data. Also, as patentability standards vary across countries and IPOs, and what matters for patentability is the existence (or not) of prior art in the IPO considered, we would risk mixing real innovations with patents related to catching-up phenomena in countries that are not at the forefront of the technology considered.

9 The five IP offices (IP5) is a forum of the five largest intellectual property offices in the world that was set up to improve the efficiency of the examination process for patents worldwide. The IP5 Offices together handle about 80% of the world’s patent applications, and 95% of all work carried out under the Patent Cooperation Treaty (PCT), see (Dernis et al., 2015) [p. 31]
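In code terms, the IP5 filtering rule described above boils down to a simple predicate. This is my own illustration of the definition, not anything from the report; the office names are the standard abbreviations it lists.

```python
# The five largest intellectual property offices (the "IP5").
IP5_OFFICES = {"CNIPA", "EPO", "JPO", "KIPO", "USPTO"}

def is_ip5_patent(filing_offices):
    """True if a patent family was filed at one or more IP5 offices plus
    at least one other jurisdiction (which may itself be an IP5 office),
    per the report's definition of an 'IP5 patent'."""
    offices = set(filing_offices)
    return bool(offices & IP5_OFFICES) and len(offices) >= 2

print(is_ip5_patent(["USPTO", "EPO"]))          # True: two IP5 offices
print(is_ip5_patent(["USPTO", "CIPO"]))         # True: one IP5 office plus Canada
print(is_ip5_patent(["USPTO"]))                 # False: a single jurisdiction
print(is_ip5_patent(["CIPO", "IP Australia"]))  # False: no IP5 office at all
```

The point of the filter, as the report explains, is that a family filed at two or more leading offices has passed at least two examinations, which acts as a rough quality check and avoids double-counting duplicated filings.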

AI assistance on this report

As noted earlier, I have next to no experience with the analytical tools, not having attempted this kind of work in several years. Here’s an example of what they were doing,

We utilize a combination of text embeddings based on Bidirectional Encoder Representations from Transformers (BERT), dimensionality reduction, and hierarchical clustering inspired by the BERTopic methodology12 to identify latent themes within research literature. Latent themes or topics in the context of topic modeling represent clusters of words that frequently appear together within a collection of documents (Blei, 2012). These groupings are not explicitly labeled but are inferred through computational analysis examining patterns in word usage. These themes are ‘hidden’ within the text, only to be revealed through this analysis. …

We further utilize OpenAI’s GPT-4 model to enrich our understanding of topics’ keywords and to generate topic labels (OpenAI, 2023), thus supplementing expert review of the broad interdisciplinary corpus. Recently, GPT-4 has shown impressive results in medical contexts across various evaluations (Nori et al., 2023), making it a useful tool to enhance the information obtained from prior analysis stages, and to complement them. The automated process enhances the evaluation workflow, effectively emphasizing neuroscience themes pertinent to potential neurotechnology patents. Notwithstanding existing concerns about hallucinations (Lee, Bubeck and Petro, 2023) and errors in generative AI models, this methodology employs the GPT-4 model for summarization and interpretation tasks, which significantly mitigates the likelihood of hallucinations. Since the model is constrained to the context provided by the keyword collections, it limits the potential for fabricating information outside of the specified boundaries, thereby enhancing the accuracy and reliability of the output. [pp. 33-34]

I couldn’t resist adding the ChatGPT paragraph given all of the recent hoopla about it.
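For readers as rusty as I am, the shape of that embed-reduce-cluster pipeline can be shown with a toy example. This is my own sketch, not the report’s implementation: real pipelines cluster dimensionality-reduced BERT embeddings, whereas here a few hand-picked 2-D points stand in for the reduced embeddings, and a naive single-linkage agglomerative clustering plays the role of the hierarchical clustering stage.

```python
from math import dist

def single_linkage(points, k):
    """Toy agglomerative (single-linkage) clustering: repeatedly merge the
    two closest clusters until only k remain. Cluster distance is the
    minimum pairwise distance between members."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None  # (distance, i, j) of the closest cluster pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

# 2-D points standing in for dimensionality-reduced document embeddings:
# two tight groups ("topics") far apart from each other.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
topics = single_linkage(points, k=2)
print(sorted(len(c) for c in topics))  # [2, 3]: the two groups are recovered
```

The report’s pipeline then hands the keywords of each recovered cluster to GPT-4 for labeling; in this sketch you would simply inspect the member documents yourself.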

Multimodal neuromodulation and neuromorphic computing patents

I think this gives a pretty good indication of the activity on the patent front,

The largest, coherent topic, termed “multimodal neuromodulation,” comprises 535 patents detailing methodologies for deep or superficial brain stimulation designed to address neurological and psychiatric ailments. These patented technologies interact with various points in neural circuits to induce either Long-Term Potentiation (LTP) or Long-Term Depression (LTD), offering treatment for conditions such as obsession, compulsion, anxiety, depression, Parkinson’s disease, and other movement disorders. The modalities encompass implanted deep-brain stimulators (DBS), Transcranial Magnetic Stimulation (TMS), and transcranial Direct Current Stimulation (tDCS). Among the most representative documents for this cluster are patents with titles: Electrical stimulation of structures within the brain or Systems and methods for enhancing or optimizing neural stimulation therapy for treating symptoms of Parkinson’s disease and or other movement disorders. [p. 65]

Given my longstanding interest in memristors, which (I believe) have to a large extent helped to stimulate research into neuromorphic computing, this had to be included. Then, there was the brain-computer interfaces cluster,

A cluster identified as “Neuromorphic Computing” consists of 366 patents primarily focused on devices designed to mimic human neural networks for efficient and adaptable computation. The principal elements of these inventions are resistive memory cells and artificial synapses. They exhibit properties similar to the neurons and synapses in biological brains, thus granting these devices the ability to learn and modulate responses based on rewards, akin to the adaptive cognitive capabilities of the human brain.

The primary technology classes associated with these patents fall under specific IPC codes, representing the fields of neural network models, analog computers, and static storage structures. Essentially, these classifications correspond to technologies that are key to the construction of computers and exhibit cognitive functions similar to human brain processes.

Examples for this cluster include neuromorphic processing devices that leverage variations in resistance to store and process information, artificial synapses exhibiting spike-timing dependent plasticity, and systems that allow event-driven learning and reward modulation within neuromorphic computers.

In relation to neurotechnology as a whole, the “neuromorphic computing” cluster holds significant importance. It embodies the fusion of neuroscience and technology, thereby laying the basis for the development of adaptive and cognitive computational systems. Understanding this specific cluster provides a valuable insight into the progressing domain of neurotechnology, promising potential advancements across diverse fields, including artificial intelligence and healthcare.

The “Brain-Computer Interfaces” cluster, consisting of 146 patents, embodies a key aspect of neurotechnology that focuses on improving the interface between the brain and external devices. The technology classification codes associated with these patents primarily refer to methods or devices for treatment or protection of eyes and ears, devices for introducing media into, or onto, the body, and electric communication techniques, which are foundational elements of brain-computer interface (BCI) technologies.

Key patents within this cluster include a brain-computer interface apparatus adaptable to use environment and method of operating thereof, a double closed circuit brain-machine interface system, and an apparatus and method of brain-computer interface for device controlling based on brain signal. These inventions mainly revolve around the concept of using brain signals to control external devices, such as robotic arms, and improving the classification performance of these interfaces, even after long periods of non-use.

The inventions described in these patents improve the accuracy of device control, maintain performance over time, and accommodate multiple commands, thus significantly enhancing the functionality of BCIs.

Other identified technologies include systems for medical image analysis, limb rehabilitation, tinnitus treatment, sleep optimization, assistive exoskeletons, and advanced imaging techniques, among others. [pp. 66-67]

Having sections on neuromorphic computing and brain-computer interface patents in immediate proximity led to more speculation on my part. Imagine how much easier it would be to initiate a BCI connection if it's powered with a neuromorphic (brainlike) computer/device. [ETA July 21, 2023: Following on from that thought, it might be more than just easier to initiate a BCI connection. Could a brainlike computer become part of your brain? Why not? It's been successfully argued that a robotic wheelchair was part of someone's body (see my January 30, 2013 posting and scroll down about 40% of the way).]
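For technically minded readers, the "spike-timing dependent plasticity" mentioned in the patent cluster above has a simple textbook form: a synapse is strengthened when the presynaptic neuron fires just before the postsynaptic one, and weakened when the order is reversed. Here's a minimal illustrative sketch of the classic pair-based rule; the amplitudes and time constant are arbitrary assumptions, not values from any patent in the report:

```python
import math

def stdp_weight_change(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Classic pair-based STDP rule (illustrative parameters only).

    t_pre, t_post: spike times in milliseconds.
    Pre-before-post (dt > 0) potentiates the synapse;
    post-before-pre (dt < 0) depresses it; the effect decays
    exponentially with the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # potentiation
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)  # depression
    return 0.0

# A pre-spike 10 ms before the post-spike strengthens the synapse:
print(stdp_weight_change(t_pre=0.0, t_post=10.0) > 0)  # True
# A pre-spike 10 ms after the post-spike weakens it:
print(stdp_weight_change(t_pre=10.0, t_post=0.0) < 0)  # True
```

The appeal for neuromorphic hardware is that this update depends only on locally observable spike times, which is what lets memristive "artificial synapses" implement learning without a central processor.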

Neurotech policy debates

The report concludes with this,

Neurotechnology is a complex and rapidly evolving technological paradigm whose
trajectories have the power to shape people’s identity, autonomy, privacy, sentiments,
behaviors and overall well-being, i.e. the very essence of what it means to be human.

Designing and implementing careful and effective norms and regulations ensuring that neurotechnology is developed and deployed in an ethical manner, for the good of
individuals and for society as a whole, call for a careful identification and characterization of the issues at stake. This entails shedding light on the whole neurotechnology ecosystem, that is what is being developed, where and by whom, and also understanding how neurotechnology interacts with other developments and technological trajectories, especially AI. Failing to do so may result in ineffective (at best) or distorted policies and policy decisions, which may harm human rights and human dignity.

Addressing the need for evidence in support of policy making, the present report offers first time robust data and analysis shedding light on the neurotechnology landscape worldwide. To this end, it proposes and implements an innovative approach that leverages artificial intelligence and deep learning on data from scientific publications and paten[t]s to identify scientific and technological developments in the neurotech space. The methodology proposed represents a scientific advance in itself, as it constitutes a quasi-automated replicable strategy for the detection and documentation of neurotechnology-related breakthroughs in science and innovation, to be repeated over time to account for the evolution of the sector. Leveraging this approach, the report further proposes an IPC-based taxonomy for neurotechnology which allows for a structured framework to the exploration of neurotechnology, to enable future research, development and analysis. The innovative methodology proposed is very flexible and can in fact be leveraged to investigate different emerging technologies, as they arise.

In terms of technological trajectories, we uncover a shift in the neurotechnology industry, with greater emphasis being put on computer and medical technologies in recent years, compared to traditionally dominant trajectories related to biotechnology and pharmaceuticals. This shift warrants close attention from policymakers, and calls for attention in relation to the latest (converging) developments in the field, especially AI and related methods and applications and neurotechnology.

This is all the more important as the observed growth and specialization patterns are unfolding in the context of regulatory environments that, generally, are either not existent or not fit for purpose. Given the sheer implications and impact of neurotechnology on the very essence of human beings, this lack of regulation poses key challenges related to the possible infringement of mental integrity, human dignity, personal identity, privacy, freedom of thought, and autonomy, among others. Furthermore, issues surrounding accessibility and the potential for neurotech enhancement applications trigger significant concerns, with far-reaching implications for individuals and societies. [pp. 72-73]
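As an aside, the report's "quasi-automated" detection strategy (machine learning over publication and patent text) boils down to grouping documents by textual similarity. The report doesn't publish its code, and it uses deep learning embeddings; as a deliberately tiny stand-in, here's a stdlib-only sketch that groups patent abstracts by word overlap (Jaccard similarity). The abstracts and threshold are invented for illustration:

```python
def tokens(text):
    """Lowercased word set for a crude bag-of-words comparison."""
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity of two token sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def cluster(abstracts, threshold=0.2):
    """Greedy single-link grouping: an abstract joins the first
    cluster containing a sufficiently similar member."""
    clusters = []
    for text in abstracts:
        t = tokens(text)
        for c in clusters:
            if any(jaccard(t, tokens(member)) >= threshold for member in c):
                c.append(text)
                break
        else:
            clusters.append([text])
    return clusters

patents = [
    "neuromorphic processor with memristive artificial synapses",
    "artificial synapses for neuromorphic spike learning",
    "brain computer interface decoding neural signals for device control",
]
groups = cluster(patents)
print(len(groups))  # 2: a neuromorphic group and a BCI group
```

The real pipeline replaces word overlap with learned embeddings and maps the resulting clusters onto IPC codes, but the overall shape (represent, compare, group, label) is the same.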

Last words about the report

Informative, readable, and thought-provoking. And, it helped broaden my understanding of neurotechnology.

Future endeavours?

I’m hopeful that one of these days one of these groups (UNESCO, Canadian Science Policy Centre, or ???) will tackle the issue of business bankruptcy in the neurotechnology sector. It has already occurred as noted in my “Going blind when your neural implant company flirts with bankruptcy [long read]” April 5, 2022 posting. That story opens with a woman going blind in a New York subway when her neural implant fails. It’s how she found out the company that supplied her implant was going out of business.

In my July 7, 2023 posting about the UNESCO July 2023 dialogue on neurotechnology, I’ve included information on Neuralink (one of Elon Musk’s companies) and its approval (despite some investigations) by the US Food and Drug Administration to start human clinical trials. Scroll down about 75% of the way to the “Food for thought” subhead where you will find stories about allegations made against Neuralink.

The end

If you want to know more about the field, the report offers a seven-page bibliography, and there’s a lot of material here on this blog; you could start with the December 3, 2019 posting “Neural and technological inequalities,” which features an article mentioning a discussion between two scientists. Surprisingly (to me), the source article is in Fast Company (“a leading progressive business media brand,” according to their tagline).

I have two categories you may want to check: Human Enhancement and Neuromorphic Engineering. There are also a number of tags: neuromorphic computing, machine/flesh, brainlike computing, cyborgs, neural implants, neuroprosthetics, memristors, and more.

Should you have any observations or corrections, please feel free to leave them in the Comments section of this posting.

Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO

While there’s a great deal of attention and hyperbole attached to artificial intelligence (AI) these days, it seems that neurotechnology may be quietly gaining much needed attention. (For those who are interested, at the end of this posting, there’ll be a bit more information to round out what you’re seeing in the UNESCO material.)

Now, here’s news of an upcoming UNESCO (United Nations Educational, Scientific, and Cultural Organization) meeting on neurotechnology, from a June 6, 2023 UNESCO press release (also received via email), Note: Links have been removed,

The Member States of the Executive Board of UNESCO
have approved the proposal of the Director General to hold a global
dialogue to develop an ethical framework for the growing and largely
unregulated Neurotechnology sector, which may threaten human rights and
fundamental freedoms. A first international conference will be held at
UNESCO Headquarters on 13 July 2023.

“Neurotechnology could help solve many health issues, but it could
also access and manipulate people’s brains, and produce information
about our identities, and our emotions. It could threaten our rights to
human dignity, freedom of thought and privacy. There is an urgent need
to establish a common ethical framework at the international level, as
UNESCO has done for artificial intelligence,” said UNESCO
Director-General Audrey Azoulay.

UNESCO’s international conference, taking place on 13 July [2023], will start
exploring the immense potential of neurotechnology to solve neurological
problems and mental disorders, while identifying the actions needed to
address the threats it poses to human rights and fundamental freedoms.
The dialogue will involve senior officials, policymakers, civil society
organizations, academics and representatives of the private sector from
all regions of the world.

Lay the foundations for a global ethical framework

The dialogue will also be informed by a report by UNESCO’s
International Bioethics Committee (IBC) on the “Ethical Issues of
Neurotechnology”, and a UNESCO study proposing first time evidence on
the neurotechnology landscape, innovations, key actors worldwide and
major trends.

The ultimate goal of the dialogue is to advance a better understanding
of the ethical issues related to the governance of neurotechnology,
informing the development of the ethical framework to be approved by 193
member states of UNESCO – similar to the way in which UNESCO
established the global ethical frameworks on the human genome (1997),
human genetic data (2003) and artificial intelligence (2021).

UNESCO’s global standard on the Ethics of Artificial Intelligence has
been particularly effective and timely, given the latest developments
related to Generative AI, the pervasiveness of AI technologies and the
risks they pose to people, democracies, and jobs. The convergence of
neural data and artificial intelligence poses particular challenges, as
already recognized in UNESCO’s AI standard.

Neurotech could reduce the burden of disease…

Neurotechnology covers any kind of device or procedure which is designed
to “access, monitor, investigate, assess, manipulate, and/or emulate
the structure and function of neural systems”. [1] Neurotechnological
devices range from “wearables”, to non-invasive brain computer
interfaces such as robotic limbs, to brain implants currently being
developed [2] with the goal of treating disabilities such as paralysis.

One in eight people worldwide live with a mental or neurological
disorder, triggering care-related costs that account for up to a third
of total health expenses in developed countries. These burdens are
growing in low- and middle-income countries too. Globally these expenses
are expected to grow – the number of people aged over 60 is projected
to double by 2050 to 2.1 billion (WHO 2022). Neurotechnology has the
vast potential to reduce the number of deaths and disabilities caused by
neurological disorders, such as Epilepsy, Alzheimer’s, Parkinson’s
and Stroke.

… but also threaten Human Rights

Without ethical guardrails, these technologies can pose serious risks, as
brain information can be accessed and manipulated, threatening
fundamental rights and fundamental freedoms, which are central to the
notion of human identity, freedom of thought, privacy, and memory. In
its report published in 2021 [3], UNESCO’s IBC documents these risks
and proposes concrete actions to address them.

Neural data – which capture the individual’s reactions and basic
emotions – is in high demand in consumer markets. Unlike the data
gathered on us by social media platforms, most neural data is generated
unconsciously, therefore we cannot give our consent for its use. If
sensitive data is extracted, and then falls into the wrong hands, the
individual may suffer harmful consequences.

Brain-Computer-Interfaces (BCIs) implanted at a time during which a
child or teenager is still undergoing neurodevelopment may disrupt the
‘normal’ maturation of the brain. It may be able to transform young
minds, shaping their future identity with long-lasting, perhaps
permanent, effects.

Memory modification techniques (MMT) may enable scientists to alter the
content of a memory, reconstructing past events. For now, MMT relies on
the use of drugs, but in the future it may be possible to insert chips
into the brain. While this could be beneficial in the case of
traumatised people, such practices can also distort an individual’s
sense of personal identity.

Risk of exacerbating global inequalities and generating new ones

Currently 50% of Neurotech Companies are in the US, and 35% in Europe
and the UK. Because neurotechnology could usher in a new generation of
‘super-humans’, this would further widen the education, skills, wealth
and opportunities’ gap within and between countries, giving those with
the most advanced technology an unfair advantage.

UNESCO’s Ethics of neurotechnology webpage can be found here. As for the July 13, 2023 dialogue/conference, here are some of the details from UNESCO’s International Conference on the Ethics of Neurotechnology webpage,

UNESCO will organize an International Conference on the Ethics of Neurotechnology on the theme “Building a framework to protect and promote human rights and fundamental freedoms” at UNESCO Headquarters in Paris, on 13 July 2023, from 9:00 [CET; Central European Time] in Room I.

The Conference will explore the immense potential of neurotechnology and address the ethical challenges it poses to human rights and fundamental freedoms. It will bring together policymakers and experts, representatives of civil society and UN organizations, academia, media, and private sector companies, to prepare a solid foundation for an ethical framework on the governance of neurotechnology.

UNESCO International Conference on Ethics of Neurotechnology: Building a framework to protect and promote human rights and fundamental freedoms
13 July 2023 – 9:30 am – 13 July 2023 – 6:30 pm [CET; Central European Time]
Location UNESCO Headquarters, Paris, France
Rooms: Room I
Type: Cat II – Intergovernmental meeting, other than international conference of States
Arrangement type : Hybrid
Language(s) : French Spanish English Arabic
Contact : Rajarajeswari Pajany


Click here to register

A high-level session with ministers and policy makers focusing on policy actions and international cooperation will be featured in the Conference. Renowned experts will also be invited to discuss technological advancements in neurotechnology and the ethical challenges and human rights implications. Two fireside chats will be organized to enrich the discussions, focusing on the private sector, public awareness raising, and public engagement. The Conference will also feature a new study by UNESCO’s Social and Human Sciences Sector shedding light on innovations in neurotechnology, key actors worldwide, and key areas of development.

As one of the most promising technologies of our time, neurotechnology is providing new treatments and improving preventative and therapeutic options for millions of individuals suffering from neurological and mental illness. Neurotechnology is also transforming other aspects of our lives, from student learning and cognition to virtual and augmented reality systems and entertainment. While we celebrate these unprecedented opportunities, we must be vigilant against new challenges arising from the rapid and unregulated development and deployment of this innovative technology, including among others the risks to mental integrity, human dignity, personal identity, autonomy, fairness and equity, and mental privacy. 

UNESCO has been at the forefront of promoting an ethical approach to neurotechnology. UNESCO’s International Bioethics Committee (IBC) has examined the benefits and drawbacks from an ethical perspective in a report published in December 2021. The Organization has also led UN-wide efforts on this topic, collaborating with other agencies and academic institutions to organize expert roundtables, raise public awareness and produce publications. With a global mandate on bioethics and ethics of science and technology, UNESCO has been asked by the IBC, its expert advisory body, to consider developing a global standard on this topic.

A July 13, 2023 agenda and a little Canadian content

I have a link to the ‘provisional programme‘ for “Towards an Ethical Framework in the Protection and Promotion of Human Rights and Fundamental Freedoms,” the July 13, 2023 UNESCO International Conference on Ethics of Neurotechnology. Keeping in mind that this could (and likely will) change,

13 July 2023, Room I,
UNESCO HQ Paris, France,

9:00 – 9:15 Welcoming Remarks (TBC)
• António Guterres, Secretary-General of the United Nations
• Audrey Azoulay, Director-General of UNESCO

9:15 – 10:00 Keynote Addresses (TBC)
• Gabriel Boric, President of Chile
• Narendra Modi, Prime Minister of India
• Pedro Sánchez Pérez-Castejón, Prime Minister of Spain
• Volker Türk, UN High Commissioner for Human Rights
• Amandeep Singh Gill, UN Secretary-General’s Envoy on Technology

10:15 – 11:00 Scene-Setting Address

11:00 – 13:00 High-Level Session: Regulations and policy actions

14:30 – 15:30 Expert Session: Technological advancement and opportunities

15:45 – 16:30 Fireside Chat: Launch of the UNESCO publication “Unveiling the neurotechnology landscape: scientific advancements, innovations and major trends”

16:30 – 17:30 Expert Session: Ethical challenges and human rights implications

17:30 – 18:15 Fireside Chat: “Why neurotechnology matters for all”

18:15 – 18:30 Closing Remarks

While I haven’t included the speakers’ names (for the most part), I do want to note some Canadian participation in the person of Dr. Judy Illes from the University of British Columbia. She’s a Professor of Neurology, Distinguished University Scholar in Neuroethics, Director of Neuroethics Canada, and President of the International Brain Initiative (IBI).

Illes is in the “Expert Session: Ethical challenges and human rights implications.”

If you have time, do look at the provisional programme just to get a sense of the range of speakers and their involvement in an astonishing array of organizations. E.g., there’s the IBI (in Judy Illes’s bio), which at this point is largely (and surprisingly) supported by (from About Us) “Fonds de recherche du Québec, and the Institute of Neuroscience, Mental Health and Addiction of the Canadian Institutes of Health Research. Operational support for the IBI is also provided by the Japan Brain/MINDS Beyond and WorldView Studios“.

More food for thought

Neither the UNESCO July 2023 meeting, which tilts, understandably, to social justice issues vis-à-vis neurotechnology, nor the Canadian Science Policy Centre (CSPC) May 2023 meeting (see my May 12, 2023 posting: Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023) seems, based on the publicly available agendas, to mention practical matters such as an implant company going out of business. Still, it’s possible it will be mentioned at the UNESCO conference. Unfortunately, the May 2023 CSPC panel has not been posted online.

(See my April 5, 2022 posting “Going blind when your neural implant company flirts with bankruptcy [long read].” Even skimming it will give you some pause.) The 2019 OECD Recommendation on Responsible Innovation in Neurotechnology doesn’t cover/mention the issue of business bankruptcy either.

Taking a look at business practices seems particularly urgent given this news from a May 25, 2023 article by Rachael Levy, Marisa Taylor, and Akriti Sharma for Reuters, Note: A link has been removed,

Elon Musk’s Neuralink received U.S. Food and Drug Administration (FDA) clearance for its first-in-human clinical trial, a critical milestone for the brain-implant startup as it faces U.S. probes over its handling of animal experiments.

The FDA approval “represents an important first step that will one day allow our technology to help many people,” Neuralink said in a tweet on Thursday, without disclosing details of the planned study. It added it is not recruiting for the trial yet and said more details would be available soon.

The FDA acknowledged in a statement that the agency cleared Neuralink to use its brain implant and surgical robot for trials on patients but declined to provide more details.

Neuralink and Musk did not respond to Reuters requests for comment.

The critical milestone comes as Neuralink faces federal scrutiny [emphasis mine] following Reuters reports about the company’s animal experiments.

Neuralink employees told Reuters last year that the company was rushing and botching surgeries on monkeys, pigs and sheep, resulting in more animal deaths [emphasis mine] than necessary, as Musk pressured staff to receive FDA approval. The animal experiments produced data intended to support the company’s application for human trials, the sources said.

If you have time, it’s well worth reading the article in its entirety. Neuralink is being investigated for a number of alleged violations.

Slightly more detail has been added by a May 26, 2023 Associated Press (AP) article on the Canadian Broadcasting Corporation’s news online website,

Elon Musk’s brain implant company, Neuralink, says it’s gotten permission from U.S. regulators to begin testing its device in people.

The company made the announcement on Twitter Thursday evening but has provided no details about a potential study, which was not listed on the U.S. government database of clinical trials.

Officials with the Food and Drug Administration (FDA) wouldn’t confirm or deny whether it had granted the approval, but press officer Carly Kempler said in an email that the agency “acknowledges and understands” that Musk’s company made the announcement. [emphases mine]

The AP article offers additional context on the international race to develop brain-computer interfaces.

Update: It seems the FDA gave its approval later on May 26, 2023. (See the May 26, 2023 updated Reuters article by Rachael Levy, Marisa Taylor and Akriti Sharma and/or David Tuffley’s (lecturer at Griffith University) May 29, 2023 essay on The Conversation.)

For anyone who’s curious about previous efforts to examine ethics and social implications with regard to implants, prosthetics (Note: Increasingly, prosthetics include a neural component), and the brain, I have a couple of older postings: “Prosthetics and the human brain,” a March 8, 2013 posting, and “The ultimate DIY: ‘How to build a robotic man’ on BBC 4,” a January 30, 2013 posting.

The physics of biology: “Nano comes to Life” by Sonia Contera

Louis Minion provides an overview of a newly published book, “Nano Comes to Life: How Nanotechnology is Transforming Medicine and the Future of Biology” by Sonia Contera, in a December 5, 2022 article for Physics World and notes this in his final paragraph,

Nano Comes to Life is aimed at both the general reader as well as scientists [emphasis mine], emphasizing and encouraging the democratization of science and its relationship to human culture. Ending on an inspiring note, Contera encourages us to throw off our fear of technology and use science to make a fairer and more prosperous future.

Minion notes elsewhere in his article (Note: Links have been removed),

Part showcase, part manifesto, Sonia Contera’s Nano Comes to Life makes the ambitious attempt to convey the wonder of recent advances in biology and nanoscience while at the same time also arguing for a new approach to biological and medical research.

Contera – a biological physicist at the University of Oxford – covers huge ground, describing with clarity a range of pioneering experiments, including building nanoscale robots and engines from self-assembled DNA strands, and the incremental but fascinating work towards artificially grown organs.

But throughout this interesting survey of nanoscience in biology, Contera weaves a complex argument for the future of biology and medicine. For me, it is here the book truly excels. In arguing for the importance of physics and engineering in biology, the author critiques the way in which the biomedical industry has typically carried out research, instead arguing that we need an approach to biology that respects its properties at all scales, not just the molecular.

This book was published in hardcover in 2019 and in paperback in 2021 (according to Sonia Contera’s University of Oxford Department of Physics profile page), so I’m not sure why there’s an article about it in December 2022, but I’m glad to learn of the book’s existence.

Princeton University Press, which published Contera’s book, features a November 1, 2019 interview (from the Sonia Contera on Nano Comes to Life webpage),

What is the significance of the title of the book? What is the relationship between biology and nanotechnology?

SC: Nanotechnology—the capacity to visualize, manipulate, and interact with matter at the nanometer scale—has been engaged with and inspired by biology from its inception in the 1980s. This is primarily because the molecular players in biology, and the main drug and treatment targets in medicine—proteins and DNA—are nanosize. Since the early days of the field, a main mission of nanotechnologists has been to create tools that allow us to interact with key biological molecules one at a time, directly in their natural medium. They strive to understand and even mimic in their artificial nanostructures the mechanisms that underpin the function of biological nanomachines (proteins). In the last thirty years nanomicroscopies (primarily, the atomic force microscope) have unveiled the complex dynamic nature of proteins and the vast numbers of tasks that they perform. Far from being the static shapes featured in traditional biochemistry books, proteins rotate to work as nanomotors; they  literally perform walks to transport cargo around the cell. This enables an understanding of molecular biology that departs quite radically from traditional biochemical methods developed in the last fifty years. Since the main tools of nanotechnology were born in physics labs, the scientists who use them to study biomolecules interrogate those molecules within the framework of physics. Everyone should have the experience of viewing atomic force microscopy movies of proteins in action. It really changes the way we think about ourselves, as I try to convey in my book.

And how does physics change the study of biology at the nanoscale?

SC: In its widest sense the physics of life seeks to understand how the rules that govern the whole universe led to the emergence of life on Earth and underlie biological behaviour. Central to this study are the molecules (proteins, DNA, etc.) that underpin biological processes. Nanotechnology enables the investigation of the most basic mechanisms of their functions, their engineering principles, and ultimately mathematical models that describe them. Life on Earth probably evolved from nanosize molecules that became complex enough to enable replication, and evolution on Earth over billions of years has created the incredibly sophisticated nanomachines whose complex interactions constitute the fabric of the actions, perceptions, and senses of all living creatures. Combining the tools of nanotech with physics to study the mechanisms of biology is also inspiring the development of new materials, electronic devices, and applications in engineering and medicine.

What consequences will this have for the future of biology?

SC: The incorporation of biology (including intelligence) into the realm of physics facilitates a profound and potentially groundbreaking cultural shift, because it places the study of life within the widest possible context: the study of the rules that govern the cosmos. Nano Comes to Life seeks to reveal this new context for studying life and the potential for human advancement that it enables. The most powerful message of this book is that in the twenty-first century life can no longer be considered just the biochemical product of an algorithm written in genes (one that can potentially be modified at someone’s convenience); it must be understood as a complex and magnificent (and meaningful) realization of the laws that created the universe itself. The biochemical/genetic paradigm that dominated most of the twentieth century has been useful for understanding many biological processes, but it is insufficient to explain life in all its complexity, and to unblock existing medical bottlenecks. More broadly, as physics, engineering, computer science, and materials science merge with biology, they are actually helping to reconnect science and technology with the deep questions that humans have asked themselves from the beginning of civilization: What is life? What does it mean to be human when we can manipulate and even exploit our own biology? We have reached a point in history where these questions naturally arise from the practice of science, and this necessarily changes the sciences’ relationship with society.

We are entering a historic period of scientific convergence, feeling an urge to turn our heads to the past even as we walk toward the future, seeking to find, in the origin of the ideas that brought us here, the inspiration that will allow us to move forward. Nano Comes to Life focuses on the science but attempts to call attention to the potential for a new intellectual framework to emerge at the convergence of the sciences, one that scientists, engineers, artists, and thinkers should tap to create narratives and visions of the future that midwife our coming of age as a technological species. This might be the most important role of the physics of life that emerges from our labs: to contribute to the collective construction of a path to the preservation of human life on Earth.

You can find out more about Contera’s work and writing on her University of Oxford Department of Physics profile page, which she seems to have written herself. I found this section particularly striking,

I am also interested in the relation of physics with power, imperialism/nationalism, politics and social identities in the XIX, XX and XXI centuries, and I am starting to write about it, like in this piece for Nature Review Materials : “Communication is central to the mission of science”  which explores science comms in the context of the pandemic and global warming. In a recent talk at Fundacion Telefonica, I explored the relation of national, “East-West”, and gender identity and physics, from colonialism to the Manhattan Project and the tech companies of the Silicon Valley of today, can be watched in Spanish and English (from min 17). Here I explore the future of Spanish science and world politics at Fundacion Rafael del Pino (Spanish).

The woman has some big ideas! Good, we need them.

BTW, I’ve posted a few items that might be of interest with regard to some of her ideas.

  1. “Perimeter Institute (PI) presents: The Jazz of Physics with Stephon Alexander,” this April 5, 2023 posting features physicist Stephon Alexander’s upcoming April 14, 2023 presentation (you can get on the waiting list or find a link to the livestream) and mentions his 2021 book “Fear of a Black Universe; An Outsider’s Guide to the Future of Physics.”
  2. There’s also “Scientists gain from communication with public” posted on April 6, 2023.

Night of ideas/Nuit des idées 2022: (Re)building Together on January 27, 2022 (7th edition in Canada)

Vancouver and other Canadian cities are participating in an international culture event, Night of ideas/Nuit des idées, organized by the Institut français, the French government’s agency for international cultural action.

Before getting to the Canadian event, here’s more about the Night of Ideas from the event’s About Us page,

Initiated in 2016 during an exceptional evening that brought together in Paris foremost French and international thinkers invited to discuss the major issues of our time, the Night of Ideas has quickly become a fixture of the French and international agenda. Every year, on the last Thursday of January, the French Institute invites all cultural and educational institutions in France and on all five continents to celebrate the free flow of ideas and knowledge by offering, on the same evening, conferences, meetings, forums and round tables, as well as screenings, artistic performances and workshops, around a theme each one of them revisits in its own fashion.

“(Re)building together

For the 7th Night of Ideas, which will take place on 27 January 2022, the theme “(Re)building together” has been chosen to explore the resilience and reconstruction of societies faced with singular challenges, solidarity and cooperation between individuals, groups and states, the mobilisation of civil societies and the challenges of building and making our objects. This Nuit des Idées will also be marked by the beginning of the French Presidency of the Council of the European Union.

According to the About Us page, the 2021 event counted participation from 104 countries, 190 cities, and over 200 events.

The French embassy in Canada (Ambassade de France au Canada) has a Night of Ideas/Nuit des idées 2022 webpage listing the Canadian events (Note: The times are local, e.g., 5 pm in Ottawa),

Ottawa: (Re)building through the arts, together

Moncton: (Re)building Together: How should we (re)think and (re)habilitate the post-COVID world?

Halifax: (Re)building together: Climate change — Building bridges between the present and future

Toronto: A World in Common

Edmonton: Introduction of the neutral pronoun “iel” — Can language influence the construction of identity?

Vancouver: (Re)building together with NFTs

Victoria: Committing in a time of uncertainty

Here’s a little more about the Vancouver event, from the Night of Ideas/Nuit des idées 2022 webpage,

Vancouver: (Re)building together with NFTs [non-fungible tokens]

NFTs, or non-fungible tokens, can be used as blockchain-based proofs of ownership. The new NFT “phenomenon” can be applied to any digital object: photos, videos, music, video game elements, and even tweets or highlights from sporting events.

Millions of dollars can be on the line when it comes to NFTs granting ownership rights to “crypto arts.” In addition to showing the signs of being a new speculative bubble, the market for NFTs could also lead to new experiences in online video gaming or in museums, and could revolutionize the creation and dissemination of works of art.

This evening will be an opportunity to hear from artists and professionals in the arts, technology and academia and to gain a better understanding of the opportunities that NFTs present for access to and the creation and dissemination of art and culture. Jesse McKee, Head of Strategy at 221A, Philippe Pasquier, Professor at School of Interactive Arts & Technology (SFU) and Rhea Myers, artist, hacker and writer will share their experiences in a session moderated by Dorothy Woodend, cultural editor for The Tyee.

- 7 p.m. on Zoom (registration here). Event broadcast online on France Canada Culture’s Facebook page. In English.

Not all of the events are in both languages.
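As background to the Vancouver event, the “blockchain-based proof of ownership” idea can be illustrated with a toy ledger. The sketch below is plain Python with invented class and method names (real NFTs follow standards such as Ethereum’s ERC-721 and live on an actual blockchain); it shows the core idea that the digital object itself is not stored on-chain, only a hash fingerprint of it and a record of who currently owns the token.

```python
import hashlib

class ToyNFTLedger:
    """Toy model of NFT-style ownership records (not a real blockchain)."""

    def __init__(self):
        self.tokens = {}  # token_id -> current owner

    def mint(self, digital_object: bytes, owner: str) -> str:
        # The token ID is a hash of the object; the object itself
        # stays off-chain -- only the fingerprint is recorded.
        token_id = hashlib.sha256(digital_object).hexdigest()
        self.tokens[token_id] = owner
        return token_id

    def transfer(self, token_id: str, seller: str, buyer: str):
        if self.tokens.get(token_id) != seller:
            raise ValueError("seller does not own this token")
        self.tokens[token_id] = buyer

    def owner_of(self, token_id: str) -> str:
        return self.tokens[token_id]

ledger = ToyNFTLedger()
tid = ledger.mint(b"<bytes of an artwork, tweet, or video clip>", "artist")
ledger.transfer(tid, "artist", "collector")
print(ledger.owner_of(tid))  # prints "collector"
```

Note what the sketch makes plain: anyone can still copy the underlying file; what an NFT sale transfers is only the ledger entry.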

One last thing: if you have some French and find puppets interesting, the event in Victoria, British Columbia features both, “Catherine Léger, linguist and professor at the University of Victoria, with whom we will discover and come to accept the diversity of French with the help of marionnettes [puppets]; … .”

Congratulations! Noēma magazine’s first year anniversary

Apparently, I am an idiot—if the folks at Expunct and other organizations passionately devoted to their own viewpoints are to be believed.

To be specific, the Berggruen Institute (which publishes Noēma magazine) has attracted remarkably sharp criticism and, by implication, that seems to extend to anyone examining, listening to, or reading the institute’s various communication efforts.

Perhaps you’d like to judge the quality of the ideas for yourself?

About the Institute and about the magazine

The institute is a think tank founded by Nicolas Berggruen, US-based billionaire investor and philanthropist, and Nathan Gardels, journalist and editor-in-chief of Noēma magazine, in 2010. Before moving on to the magazine’s first anniversary, here’s more about the Institute from its About webpage,

Ideas for a Changing World

We live in a time of great transformations. From capitalism, to democracy, to the global order, our institutions are faltering. The very meaning of the human is fragmenting.

The Berggruen Institute was established in 2010 to develop foundational ideas about how to reshape political and social institutions in the face of these great transformations. We work across cultures, disciplines and political boundaries, engaging great thinkers to develop and promote long-term answers to the biggest challenges of the 21st Century.

As for the magazine, here’s more from the About Us webpage (Note: I have rearranged the paragraph order),

In ancient Greek, noēma means “thinking” or the “object of thought.” And that is our intention: to delve deeply into the critical issues transforming the world today, at length and with historical context, in order to illuminate new pathways of thought in a way not possible through the immediacy of daily media. In this era of accelerated social change, there is a dire need for new ideas and paradigms to frame the world we are moving into.

Noema is a magazine exploring the transformations sweeping our world. We publish essays, interviews, reportage, videos and art on the overlapping realms of philosophy, governance, geopolitics, economics, technology and culture. In doing so, our unique approach is to get out of the usual lanes and cross disciplines, social silos and cultural boundaries. From artificial intelligence and the climate crisis to the future of democracy and capitalism, Noema Magazine seeks a deeper understanding of the most pressing challenges of the 21st century.

Published online and in print by the Berggruen Institute, Noema grew out of a previous publication called The WorldPost, which was first a partnership with HuffPost and later with The Washington Post. Noema publishes thoughtful, rigorous, adventurous pieces by voices from both inside and outside the institute. While committed to using journalism to help build a more sustainable and equitable world, we do not promote any particular set of national, economic or partisan interests.

First anniversary

Noēma’s anniversary is being marked by its second paper publication (the first was produced for the magazine’s launch). From a July 1, 2021 announcement received via email,

June 2021 marked one year since the launch of Noema Magazine, a crucial milestone for the new publication focused on exploring and amplifying transformative ideas. Noema is working to attract audiences through longform perspectives and contemporary artwork that weave together threads in philosophy, governance, geopolitics, economics, technology, and culture.

“What began more than seven years ago as a news-driven global voices platform for The Huffington Post known as The WorldPost, and later in partnership with The Washington Post, has been reimagined,” said Nathan Gardels, editor-in-chief of Noema. “It has evolved into a platform for expansive ideas through a visual lens, and a timely and provocative portal to plumb the deeper issues behind present events.”

The magazine’s editorial board, involved in the genesis and as content drivers of the magazine, includes Orhan Pamuk, Arianna Huffington, Fareed Zakaria, Reid Hoffman, Dambisa Moyo, Walter Isaacson, Pico Iyer, and Elif Shafak. Pieces by thinkers cracking the calcifications of intellectual domains include, among many others:

·      Francis Fukuyama on the future of the nation-state

·      A collage of commentary on COVID with Yuval Harari and Jared Diamond 

·      An interview with economist Mariana Mazzucato on “mission-oriented government”

·      Taiwan’s Digital Minister Audrey Tang on digital democracy

·      Hedge-fund giant Ray Dalio in conversation with Nobel laureate Joe Stiglitz

·      Shannon Vallor on how AI is making us less intelligent and more artificial

·      Former Governor Jerry Brown in conversation with Stewart Brand 

·      Ecologist Suzanne Simard on the intelligence of forest ecosystems

·      A discussion on protecting the biosphere with Bill Gates’s guru Vaclav Smil 

·      An original story by Chinese science-fiction writer Hao Jingfang

Noema seeks to highlight how the great transformations of the 21st century are reflected in the work of today’s artistic innovators. Most articles are accompanied by an original illustration, melding together an aesthetic experience with ideas in social science and public policy. Among others, in the past year, the magazine has featured work from multimedia artist Pierre Huyghe, illustrator Daniel Martin Diaz, painter Scott Listfield, graphic designer and NFT artist Jonathan Zawada, 3D motion graphics artist Kyle Szostek, illustrator Moonassi, collage artist Lauren Lakin, and aerial photographer Brooke Holm. Additional contributions from artists include Berggruen Fellows Agnieszka Kurant and Anicka Yi discussing how their work explores the myth of the self.

Noema is available online and annually in print; the magazine’s second print issue will be released on July 13, 2021. The theme of this issue is “planetary realism,” which proposes to go beyond the exhausted notions of globalization and geopolitical competition among nation-states to a new “Gaiapolitik.” It addresses the existential challenge of climate change across all borders and recognizes that human civilization is but one part of the ecology of being that encompasses multiple intelligences from microbes to forests to the emergent global exoskeleton of AI and internet connectivity (more on this in the letter from the editors below).

Published by the Berggruen Institute, Noema is an incubator for the Institute’s core ideas, such as “participation without populism,” “pre-distribution” and universal basic capital (vs. income), and the need for dialogue between the U.S. and China to avoid an AI arms race or inadvertent war.

“The world needs divergent thinking on big questions if we’re going to meet the challenges of the 21st century; Noema publishes bold and experimental ideas,” said Kathleen Miles, executive editor of Noema. “The magazine cross-fertilizes ideas across boundaries and explores correspondences among them in order to map out the terrain of the great transformations underway.”  

I notice Suzanne Simard (from the University of British Columbia and author of “Finding the Mother Tree: Discovering the Wisdom of the Forest”) on the list of essayists along with a story by Chinese science fiction writer, Hao Jingfang.

Simard was mentioned here in a May 12, 2021 posting (scroll down to the “UBC forestry professor, Suzanne Simard’s memoir going to the movies?” subhead) when it was announced that her then unpublished memoir would be made into a film starring Amy Adams (or so they hope).

Hao Jingfang was mentioned here in a November 16, 2020 posting titled: “Telling stories about artificial intelligence (AI) and Chinese science fiction; a Nov. 17, 2020 virtual event” (co-hosted by the Berggruen Institute and University of Cambridge’s Leverhulme Centre for the Future of Intelligence [CFI]).

A month after Noēma’s second paper issue appeared on July 13, 2021, its theme and topics seem especially timely in light of the extensive news coverage in Canada and many other parts of the world given to the Monday, August 9, 2021 release of the sixth UN climate report raising alarms over irreversible impacts. (Emily Chung’s August 12, 2021 analysis for the Canadian Broadcasting Corporation [CBC] offers a little good news for those severely alarmed by the report.) Note: The Intergovernmental Panel on Climate Change (IPCC) is the UN body tasked with assessing the science related to climate change.

New US regulations exempt many gene-edited crops from government oversight

A June 1, 2020 essay by Maywa Montenegro (Postdoctoral Fellow, University of California at Davis) for The Conversation posits that new regulations (which in fact result in deregulation) are likely to create problems,

In May [2020], federal regulators finalized a new biotechnology policy that will bring sweeping changes to the U.S. food system. Dubbed “SECURE,” the rule revises U.S. Department of Agriculture regulations over genetically engineered plants, automatically exempting many gene-edited crops from government oversight. Companies and labs will be allowed to “self-determine” whether or not a crop should undergo regulatory review or environmental risk assessment.

Initial responses to this new policy have followed familiar fault lines in the food community. Seed industry trade groups and biotech firms hailed the rule as “important to support continuing innovation.” Environmental and small farmer NGOs called the USDA’s decision “shameful” and less attentive to public well-being than to agribusiness’s bottom line.

But the gene-editing tool CRISPR was supposed to break the impasse in old GM wars by making biotechnology more widely affordable, accessible and thus democratic.

In my research, I study how biotechnology affects transitions to sustainable food systems. It’s clear that since 2012 the swelling R&D pipeline of gene-edited grains, fruits and vegetables, fish and livestock has forced U.S. agencies to respond to the so-called CRISPR revolution.

Yet this rule change has a number of people in the food and scientific communities concerned. To me, it reflects the lack of accountability and trust between the public and government agencies setting policies.

Is there a better way?

… I have developed a set of principles and practices for governing CRISPR based on dialogue with front-line communities who are most affected by the technologies others usher in. Communities don’t just have to adopt or refuse technology – they can co-create [emphasis mine] it.

One way to move forward in the U.S. is to take advantage of common ground between sustainable agriculture movements and CRISPR scientists. The struggle over USDA rules suggests that few outside of industry believe self-regulation is fair, wise or scientific.

h/t: June 1, 2020 news item on

If you have the time and the inclination, do read the essay in its entirety.

Anyone who has read my COVID-19 op-ed for the Canadian Science Policy Centre may see some similarity between Montenegro’s “co-create” and this from my May 15, 2020 posting (which included my reference materials) or this version on the Canadian Science Policy Centre website (where you can find many other COVID-19 op-eds),

In addition to engaging experts as we navigate our way into the future, we can look to artists, writers, citizen scientists, elders, indigenous communities, rural and urban communities, politicians, philosophers, ethicists, religious leaders, and bureaucrats of all stripes for more insight into the potential for collateral and unintended consequences.

To be clear, I think times of crises are when a lot of people call for more co-creation and input. Here’s more about Montenegro’s work on her profile page (which includes her academic credentials, research interests and publications) on the University of California at Berkeley’s Department of Environmental Science, Policy, and Management webspace. She seems to have been making the call for years.

I am a US-Dutch-Peruvian citizen who grew up in Appalachia, studied molecular biology in the Northeast, worked as a journalist in New York City, and then migrated to the left coast to pursue a PhD. My indigenous ancestry, smallholder family history, and the colonizing/decolonizing experiences of both the Netherlands and Peru informs my personal and professional interests in seeds and agrobiodiversity. My background engenders a strong desire to explore synergies between western science and the indigenous/traditional knowledge systems that have historically been devalued and marginalized.

Trained in molecular biology, science writing, and now, a range of critical social and ecological theory, I incorporate these perspectives into research on seeds.

I am particularly interested in the relationship between formal seed systems – characterized by professional breeding, certification, intellectual property – and commercial sale and informal seed systems through which farmers traditionally save, exchange, and sell seeds. …

You can find more on her Twitter feed, which is where I discovered a call for papers for a “Special Feature: Gene Editing the Food System” in the journal, Elementa: Science of the Anthropocene. They have a rolling deadline, which started in February 2020. At this time, there is one paper in the series,

Democratizing CRISPR? Stories, practices, and politics of science and governance on the agricultural gene editing frontier by Maywa Montenegro de Wit. Elem Sci Anth, 8(1), p.9. DOI: Published February 25, 2020

The paper is open access. Interestingly, the guest editor is Elizabeth Fitting of Dalhousie University in Nova Scotia, Canada.

Ghosts, mechanical turks, and pseudo-AI (artificial intelligence)—Is it all a con game?

There’s been more than one artificial intelligence (AI) story featured here on this blog but the ones featured in this posting are the first I’ve stumbled across that suggest the hype is more exaggerated than even the most cynical might have thought. (BTW, the 2019 material comes later as I have taken a chronological approach to this posting.)

It seems a lot of companies touting their AI algorithms and capabilities are relying on human beings to do the work, from a July 6, 2018 article by Olivia Solon for the Guardian (Note: A link has been removed),

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. …

The Turk

Fooling people with machines that seem intelligent is not new according to a Sept. 10, 2018 article by Seth Stevenson for (Note: Links have been removed),

It’s 1783, and Paris is gripped by the prospect of a chess match. One of the contestants is François-André Philidor, who is considered the greatest chess player in Paris, and possibly the world. Everyone is so excited because Philidor is about to go head-to-head with the other biggest sensation in the chess world at the time.

But his opponent isn’t a man. And it’s not a woman, either. It’s a machine.

This story may sound a lot like Garry Kasparov taking on Deep Blue, IBM’s chess-playing supercomputer. But that was only a couple of decades ago, and this chess match in Paris happened more than 200 years ago. It doesn’t seem like a robot that can play chess would even be possible in the 1780s. This machine playing against Philidor was making an incredible technological leap—playing chess, and not only that, but beating humans at chess.

In the end, it didn’t quite beat Philidor, but the chess master called it one of his toughest matches ever. It was so hard for Philidor to get a read on his opponent, which was a carved wooden figure—slightly larger than life—wearing elaborate garments and offering a cold, mean stare.

It seems like the minds of the era would have been completely blown by a robot that could nearly beat a human chess champion. Some people back then worried that it was black magic, but many folks took the development in stride. …

Debates about the hottest topic in technology today—artificial intelligence—didn’t start in the 1940s, with people like Alan Turing and the first computers. It turns out that the arguments about AI go back much further than you might imagine. The story of the 18th-century chess machine turns out to be one of those curious tales from history that can help us understand technology today, and where it might go tomorrow.

[In future episodes of our podcast, Secret History of the Future] we’re going to look at the first cyberattack, which happened in the 1830s, and find out how the Victorians invented virtual reality.

Philidor’s opponent was known as The Turk or Mechanical Turk and that ‘machine’ was in fact a masterful hoax: The Turk held a hidden compartment from which a human being directed its moves.

People pretending to be AI agents

It seems that today’s AI has something in common with the 18th century Mechanical Turk: there are often humans lurking in the background making things work. From a Sept. 4, 2018 article by Janelle Shane for (Note: Links have been removed),

Every day, people are paid to pretend to be bots.

In a strange twist on “robots are coming for my job,” some tech companies that boast about their artificial intelligence have found that at small scales, humans are a cheaper, easier, and more competent alternative to building an A.I. that can do the task.

Sometimes there is no A.I. at all. The “A.I.” is a mockup powered entirely by humans, in a “fake it till you make it” approach used to gauge investor interest or customer behavior. Other times, a real A.I. is combined with human employees ready to step in if the bot shows signs of struggling. These approaches are called “pseudo-A.I.” or sometimes, more optimistically, “hybrid A.I.”

Although some companies see the use of humans for “A.I.” tasks as a temporary bridge, others are embracing pseudo-A.I. as a customer service strategy that combines A.I. scalability with human competence. They’re advertising these as “hybrid A.I.” chatbots, and if they work as planned, you will never know if you were talking to a computer or a human. Every remote interaction could turn into a form of the Turing test. So how can you tell if you’re dealing with a bot pretending to be a human or a human pretending to be a bot?

One of the ways you can’t tell anymore is by looking for human imperfections like grammar mistakes or hesitations. In the past, chatbots had prewritten bits of dialogue that they could mix and match according to built-in rules. Bot speech was synonymous with precise formality. In early Turing tests, spelling mistakes were often a giveaway that the hidden speaker was a human. Today, however, many chatbots are powered by machine learning. Instead of using a programmer’s rules, these algorithms learn by example. And many training data sets come from services like Amazon’s Mechanical Turk, which lets programmers hire humans from around the world to generate examples of tasks like asking and answering questions. These data sets are usually full of casual speech, regionalisms, or other irregularities, so that’s what the algorithms learn. It’s not uncommon these days to get algorithmically generated image captions that read like text messages. And sometimes programmers deliberately add these things in, since most people don’t expect imperfections of an algorithm. In May, Google’s A.I. assistant made headlines for its ability to convincingly imitate the “ums” and “uhs” of a human speaker.

Limited computing power is the main reason that bots are usually good at just one thing at a time. Whenever programmers try to train machine learning algorithms to handle additional tasks, they usually get algorithms that can do many tasks rather badly. In other words, today’s algorithms are artificial narrow intelligence, or A.N.I., rather than artificial general intelligence, or A.G.I. For now, and for many years in the future, any algorithm or chatbot that claims A.G.I-level performance—the ability to deal sensibly with a wide range of topics—is likely to have humans behind the curtain.

Another bot giveaway is a very poor memory. …
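The handoff logic behind such “hybrid A.I.” chatbots can be sketched in a few lines: the bot answers on its own when its confidence is high and silently routes the query to a human otherwise. Everything below (the toy model, the 0.8 threshold, the function names) is invented for illustration, not drawn from any particular product.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class HybridAgent:
    """Bot answers when confident; otherwise the query goes to a human."""
    model: Callable[[str], Tuple[str, float]]  # query -> (reply, confidence)
    ask_human: Callable[[str], str]            # fallback human operator
    threshold: float = 0.8

    def respond(self, query: str) -> str:
        reply, confidence = self.model(query)
        if confidence >= self.threshold:
            return reply                 # the customer talks to the bot...
        return self.ask_human(query)     # ...or, unknowingly, to a person

# Toy model: confident only about greetings.
def toy_model(query: str) -> Tuple[str, float]:
    if "hello" in query.lower():
        return "Hi! How can I help?", 0.95
    return "", 0.2

agent = HybridAgent(model=toy_model,
                    ask_human=lambda q: f"[human agent] re: {q}")
print(agent.respond("Hello there"))      # handled by the bot
print(agent.respond("Cancel my order"))  # quietly escalated to a human
```

Either way the customer receives one seamless reply, which is why, as the article puts it, every remote interaction could turn into a form of the Turing test.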

Bringing AI to life: ghosts

Sidney Fussell’s April 15, 2019 article for The Atlantic provides more detail about the human/AI interface as found in some Amazon products such as Alexa (a voice-control system),

… Alexa-enabled speakers can and do interpret speech, but Amazon relies on human guidance to make Alexa, well, more human—to help the software understand different accents, recognize celebrity names, and respond to more complex commands. This is true of many artificial intelligence–enabled products. They’re prototypes. They can only approximate their promised functions while humans help with what Harvard researchers have called “the paradox of automation’s last mile.” Advancements in AI, the researchers write, create temporary jobs such as tagging images or annotating clips, even as the technology is meant to supplant human labor. In the case of the Echo, gig workers are paid to improve its voice-recognition software—but then, when it’s advanced enough, it will be used to replace the hostess in a hotel lobby.

A 2016 paper by researchers at Stanford University used a computer vision system to infer, with 88 percent accuracy, the political affiliation of 22 million people based on what car they drive and where they live. Traditional polling would require a full staff, a hefty budget, and months of work. The system completed the task in two weeks. But first, it had to know what a car was. The researchers paid workers through Amazon’s Mechanical Turk [emphasis mine] platform to manually tag thousands of images of cars, so the system would learn to differentiate between shapes, styles, and colors.

It may be a rude awakening for Amazon Echo owners, but AI systems require enormous amounts of categorized data, before, during, and after product launch. …
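The labeling pipeline described in the Stanford example (many Mechanical Turk workers tagging the same images before a model is trained) normally includes an aggregation step, since individual workers make mistakes. A common, simplified approach is majority voting; the filenames and labels below are invented for illustration.

```python
from collections import Counter

def aggregate_labels(worker_labels: dict) -> dict:
    """Collapse several workers' tags per image into one training label."""
    return {
        image: Counter(labels).most_common(1)[0][0]  # most frequent tag wins
        for image, labels in worker_labels.items()
    }

# Three workers tag each image; disagreements are resolved by majority.
raw = {
    "img_001.jpg": ["sedan", "sedan", "pickup"],
    "img_002.jpg": ["pickup", "pickup", "pickup"],
}
training_set = aggregate_labels(raw)
print(training_set)  # {'img_001.jpg': 'sedan', 'img_002.jpg': 'pickup'}
```

Only after this human-powered cleanup does the “computer vision system” get data it can learn from, which is the point the article is making.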

Isn’t it interesting that Amazon also has a crowdsourcing marketplace for its own products? Calling it ‘Mechanical Turk’ after a famous 18th century hoax suggests a dark sense of humour somewhere in the corporation. (You can find out more about the Amazon Mechanical Turk on this Amazon website and in its Wikipedia entry.)

Anthropologist Mary L. Gray has coined the phrase ‘ghost work’ for the work that humans perform but for which AI gets the credit. Angela Chan’s May 13, 2019 article for The Verge features Gray as she promotes her latest book, written with Siddharth Suri, ‘Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass’ (Note: A link has been removed),

“Ghost work” is anthropologist Mary L. Gray’s term for the invisible labor that powers our technology platforms. When Gray, a senior researcher at Microsoft Research, first arrived at the company, she learned that building artificial intelligence requires people to manage and clean up data to feed to the training algorithms. “I basically started asking the engineers and computer scientists around me, ‘Who are the people you pay to do this task work of labeling images and classification tasks and cleaning up databases?’” says Gray. Some people said they didn’t know. Others said they didn’t want to know and were concerned that if they looked too closely they might find unsavory working conditions.

So Gray decided to find out for herself. Who are the people, often invisible, who pick up the tasks necessary for these platforms to run? Why do they do this work, and why do they leave? What are their working conditions?

The interview that follows is interesting although it doesn’t seem to me that the question about working conditions is answered in any great detail. However, there is this rather interesting policy suggestion,

If companies want to happily use contract work because they need to constantly churn through new ideas and new aptitudes, the only way to make that a good thing for both sides of that enterprise is for people to be able to jump into that pool. And people do that when they have health care and other provisions. This is the business case for universal health care, for universal education as a public good. It’s going to benefit all enterprise.

I want to get across to people that, in a lot of ways, we’re describing work conditions. We’re not describing a particular type of work. We’re describing today’s conditions for project-based task-driven work. This can happen to everybody’s jobs, and I hate that that might be the motivation because we should have cared all along, as this has been happening to plenty of people. For me, the message of this book is: let’s make this not just manageable, but sustainable and enjoyable. Stop making our lives wrap around work, and start making work serve our lives.

Puts a different spin on AI and work, doesn’t it?

S.NET (Society for the Study of New and Emerging Technologies) 2019 conference in Quito, Ecuador: call for abstracts

Why isn’t the S.NET abbreviation SSNET? That’s what it should be, given the organization’s full name: Society for the Study of New and Emerging Technologies. S.NET smacks of a compromise or consensus decision of some kind. Also, the ‘New’ in its name was ‘Nanoscience’ at one time (see my Oct. 22, 2013 posting).

Now on to 2019 and the conference, which, for the first time ever, is being held in Latin America. Here’s more from a February 4, 2019 S.NET email about the call for abstracts,

2019 Annual S.NET Meeting
Contrasting Visions of Technological Change

The 11th Annual S.NET meeting will take place November 18-20, 2019, at the Latin American Faculty of Social Sciences in Quito, Ecuador.

This year’s meeting will provide rich opportunities to reflect on technological change by establishing a dialogue between contrasting visions on how technology becomes closely intertwined with social orders.  We aim to open the black box of technological change by exploring the sociotechnical agreements that help to explain why societies follow certain technological trajectories. Contributors are invited to explore the ramifications of technological change, reflect on the policy process of technology, and debate whether or why technological innovation is a matter for democracy.

Following the transnational nature of S.NET, the meeting will highlight the diverse geographical and cultural approaches to technological innovation, the forces driving sociotechnical change, and social innovation.  It is of paramount importance to question the role of technology in the shaping of society and the outcomes of these configurations.  What happens when these arrangements come into being, are transformed or fall apart?  Does technology create contestation?  Why and how should we engage with contested visions of technology change?

This is the first time that the S.NET Meeting will take place in Latin America and we encourage panels and presentations with contrasting voices from both the Global North and the Global South. 

Topics of interest include, but are not limited to:

Sociotechnical imaginaries of innovation
The role of technology on shaping nationhood and nation identities
Decision-making processes on science and technology public policies
Co-creation approaches to promote public innovation
Grassroots innovation, sustainability and democracy
Visions and cultural imaginaries
Role of social sciences and humanities in processes of technological change
In addition, we welcome contributions on:
Research dynamics and organization
Innovation and use
Governance and regulation
Politics and ethics
Roles of publics and stakeholders

Keynote Speakers
TBA (check the conference website for updates!)

Deadlines & Submission Instructions
The program committee invites contributions from scholars, technology developers and practitioners, and welcome presentations from a range of disciplines spanning the humanities, social and natural sciences.  We invite individual paper submissions, open panel and closed session proposals, student posters, and special format sessions, including events that are innovative in form and content. 

The deadline for abstract submissions is *April 18, 2019* [extended to May 12, 2019].  Abstracts should be approximately 250 words in length, emailed in PDF format to  Notifications of acceptance can be expected by May 30, 2019.

Junior scholars and those with limited resources are strongly encouraged to apply, as the organizing committee is actively investigating potential sources of financial support.

Details on the conference can be found here:

Local Organizing Committee
María Belén Albornoz, Isarelis Pérez, Javier Jiménez, Mónica Bustamante, Jorge Núñez, Maka Suárez.

FLACSO Ecuador is located in the heart of Quito.  Most hotels, museums, shopping centers and other cultural hotspots in the city are located near the campus and are easily accessible by public or private transportation.  Thanks to this proximity and easy access, Meeting participants will be able to enjoy Quito’s rich cultural life during their stay.

About S.NET
S.NET is an international association that promotes intellectual exchange and critical inquiry about the advancement of new and emerging technologies in society.  The aim of the association is to advance critical reflection from various perspectives on developments in a broad range of new and emerging fields, including, but not limited to, nanoscale science and engineering, biotechnology, synthetic biology, cognitive science, ICT and Big Data, and geo-engineering.  Current S.NET board members are: Michael Bennett (President), Maria Belen Albornoz, Claire Shelley-Egan, Ana Delgado, Ana Viseu, Nora Vaage, Chris Toumey, Poonam Pandey, Sylvester Johnson, Lotte Krabbenborg, and Maria Joao Ferreira Maia.

Don’t forget, the deadline for your abstract is *April 18, 2019* [extended to May 12, 2019].

For anyone curious about what Quito might look like, there’s this from Quito’s Wikipedia entry,

Clockwise from top: Calle La Ronda, Iglesia de la Compañía de Jesús, El Panecillo as seen from Northern Quito, Carondelet Palace, Central-Northern Quito, Parque La Carolina and Iglesia y Monasterio de San Francisco. Credit: various authors – montage of various important landmarks of the City of Quito, Ecuador taken from files found in Wikimedia Commons. CC BY-SA 3.0 File:Montaje Quito.png Created: 24 December 2012

Good luck to everyone submitting an abstract.

*Date for abstract submissions changed from April 18, 2019 to May 12, 2019 on April 24, 2019.

Summer (2019) Institute on AI (artificial intelligence) Societal Impacts, Governance, and Ethics in Alberta, Canada

The deadline for applications is April 7, 2019. As for whether or not you might like to attend, here’s more from a joint March 11, 2019 Alberta Machine Intelligence Institute (Amii)/Canadian Institute for Advanced Research (CIFAR)/University of California at Los Angeles (UCLA) School of Law news release,

What will Artificial Intelligence (AI) mean for society? That’s the question scholars from a variety of disciplines will explore during the inaugural Summer Institute on AI Societal Impacts, Governance, and Ethics. Summer Institute, co-hosted by the Alberta Machine Intelligence Institute (Amii) and CIFAR, with support from UCLA School of Law, takes place July 22-24, 2019 in Edmonton, Canada.

“Recent advances in AI have brought a surge of attention to the field – both excitement and concern,” says co-organizer and UCLA professor, Edward Parson. “From algorithmic bias to autonomous vehicles, personal privacy to automation replacing jobs. Summer Institute will bring together exceptional people to talk about how humanity can receive the benefits and not get the worst harms from these rapid changes.”

Summer Institute brings together experts, grad students and researchers from multiple backgrounds to explore the societal, governmental, and ethical implications of AI. A combination of lectures, panels, and participatory problem-solving, this comprehensive interdisciplinary event aims to build understanding and action around these high-stakes issues.

“Machine intelligence is opening transformative opportunities across the world,” says John Shillington, CEO of Amii, “and Amii is excited to bring together our own world-leading researchers with experts from areas such as law, philosophy and ethics for this important discussion. Interdisciplinary perspectives will be essential to the ongoing development of machine intelligence and for ensuring these opportunities have the broadest reach possible.”

Over the three-day program, 30 graduate-level students and early-career researchers will engage with leading experts and researchers including event co-organizers: Western University’s Daniel Lizotte, Amii’s Alona Fyshe and UCLA’s Edward Parson. Participants will also have a chance to shape the curriculum throughout this uniquely interactive event.

Summer Institute takes place prior to Deep Learning and Reinforcement Learning Summer School, and includes a combined event on July 24th [2019] for both Summer Institute and Summer School participants.

Visit to apply; applications close April 7, 2019.

View our Summer Institute Biographies & Boilerplates for more information on confirmed faculty members and co-hosting organizations. Follow the conversation through social media channels using the hashtag #SI2019.

Media Contact: Spencer Murray, Director of Communications & Public Relations, Amii
t: 587.415.6100 | c: 780.991.7136 | e:

There’s a bit more information on The Summer Institute on AI and Society webpage (on the Deep Learning and Reinforcement Learning Summer School 2019 website) such as this more complete list of speakers,

Confirmed speakers at Summer Institute include:

Alona Fyshe, University of Alberta/Amii (SI co-organizer)
Edward Parson, UCLA (SI co-organizer)
Daniel Lizotte, Western University (SI co-organizer)
Geoffrey Rockwell, University of Alberta
Graham Taylor, University of Guelph/Vector Institute
Rob Lempert, Rand Corporation
Gary Marchant, Arizona State University
Richard Re, UCLA
Evan Selinger, Rochester Institute of Technology
Elana Zeide, UCLA

Two questions: why are all the Summer Institute faculty either Canada- or US-based, and what about South American, Asian, Middle Eastern, and other thinkers?

One last thought, I wonder if this ‘AI & ethics summer institute’ has anything to do with the Pan-Canadian Artificial Intelligence Strategy, which CIFAR administers and where both the University of Alberta and Vector Institute are members.