Category Archives: neuromorphic engineering

Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report

Launched on Thursday, July 13, 2023 during UNESCO’s (United Nations Educational, Scientific, and Cultural Organization) “Global dialogue on the ethics of neurotechnology,” this report ties together the usual measures of national scientific supremacy (number of papers published and number of patents filed) with information on corporate investment in the field. Consequently, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends” by Daniel S. Hain, Roman Jurowetzki, Mariagrazia Squicciarini, and Lihui Xu provides better insight into the international neurotechnology scene than is sometimes found in these kinds of reports. By the way, the report is open access.

Here’s what I mean, from the report’s short summary,

Since 2013, government investments in this field have exceeded $6 billion. Private investment has also seen significant growth, with annual funding experiencing a 22-fold increase from 2010 to 2020, reaching $7.3 billion and totaling $33.2 billion.

This investment has translated into a 35-fold growth in neuroscience publications between 2000-2021 and 20-fold growth in innovations between 2000-2020, as proxied by patents. However, not all are poised to benefit from such developments, as big divides emerge.

Over 80% of high-impact neuroscience publications are produced by only ten countries, while 70% of countries contributed fewer than 10 such papers over the period considered. Similarly, five countries only hold 87% of IP5 neurotech patents.

This report sheds light on the neurotechnology ecosystem, that is, what is being developed, where and by whom, and informs about how neurotechnology interacts with other technological trajectories, especially Artificial Intelligence [emphasis mine]. [p. 2]

The money aspect is eye-opening even when you already have your suspicions. Also, it’s not entirely unexpected to learn that only ten countries produce over 80% of the high-impact neurotech papers and that only five countries hold 87% of the IP5 neurotech patents, but it is stunning to see it in context. (If you’re not familiar with the term ‘IP5 patents’, scroll down in this post to the relevant subhead. Hint: It means the patent was filed in one of the top five jurisdictions; I’ll leave you to guess which ones those might be.)

“Since 2013 …” isn’t quite as informative as the authors may have hoped. I wish they had given a time frame for government investments similar to what they did for corporate investments (e.g., 2010 – 2020). Also, is the $6B (likely in USD) government investment cumulative or an estimated annual number? To sum up, I would have appreciated parallel structure and specificity.

Nitpicks aside, there’s some very good material intended for policy makers. That said, some of the analysis is beyond me; I haven’t used anything even somewhat close to their analytical tools in years and years. This commentary reflects my interests and a very rapid reading. One last thing: this is being written from a Canadian perspective. With those caveats in mind, here’s some of what I found.

A definition, social issues, country statistics, and more

There’s a definition for neurotechnology and a second mention of artificial intelligence being used in concert with neurotechnology. From the report’s executive summary,

Neurotechnology consists of devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of animals or human beings. It is poised to revolutionize our understanding of the brain and to unlock innovative solutions to treat a wide range of diseases and disorders.

Similarly to Artificial Intelligence (AI), and also due to its convergence with AI, neurotechnology may have profound societal and economic impact, beyond the medical realm. As neurotechnology directly relates to the brain, it triggers ethical considerations about fundamental aspects of human existence, including mental integrity, human dignity, personal identity, freedom of thought, autonomy, and privacy [emphases mine]. Its potential for enhancement purposes and its accessibility further amplifies its prospective social and societal implications.

The recent discussions held at UNESCO’s Executive Board further shows Member States’ desire to address the ethics and governance of neurotechnology through the elaboration of a new standard-setting instrument on the ethics of neurotechnology, to be adopted in 2025. To this end, it is important to explore the neurotechnology landscape, delineate its boundaries, key players, and trends, and shed light on neurotech’s scientific and technological developments. [p. 7]

Here’s how they sourced the data for the report,

The present report addresses such a need for evidence in support of policy making in relation to neurotechnology by devising and implementing a novel methodology on data from scientific articles and patents:

● We detect topics over time and extract relevant keywords using transformer-based language models fine-tuned for scientific text. Publication data for the period 2000-2021 are sourced from the Scopus database and encompass journal articles and conference proceedings in English. The 2,000 most cited publications per year are further used in in-depth content analysis.
● Keywords are identified through Named Entity Recognition and used to generate search queries for conducting a semantic search on patents’ titles and abstracts, using another language model developed for patent text. This allows us to identify patents associated with the identified neuroscience publications and their topics. The patent data used in the present analysis are sourced from the European Patent Office’s Worldwide Patent Statistical Database (PATSTAT). We consider IP5 patents filed between 2000-2020 having an English language abstract and exclude patents solely related to pharmaceuticals.

This approach allows mapping the advancements detailed in scientific literature to the technological applications contained in patent applications, allowing for an analysis of the linkages between science and technology. This almost fully automated novel approach allows repeating the analysis as neurotechnology evolves. [pp. 8-9]
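For readers who want a concrete sense of what a ‘semantic search’ over patent titles and abstracts looks like, here’s a rough sketch in Python. To be clear, this is not the authors’ code: it uses the open source sentence-transformers library with a generic embedding model (the report used a model developed specifically for patent text), and the keywords, abstracts, and threshold below are invented purely for illustration.

```python
# Minimal sketch of keyword-to-patent semantic search (not the report authors' code).
# Assumes the open-source sentence-transformers library; the model name is a generic
# placeholder, not the patent-specific model used in the UNESCO report.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical keywords extracted from neuroscience publications via NER
keywords = ["brain-computer interface", "deep brain stimulation", "seizure prediction"]

# Hypothetical patent titles/abstracts
patent_abstracts = [
    "An implantable electrode array for stimulating deep brain structures.",
    "A headset that decodes EEG signals to control a robotic arm.",
    "A battery housing for portable electronic devices.",
]

key_emb = model.encode(keywords, convert_to_tensor=True)
pat_emb = model.encode(patent_abstracts, convert_to_tensor=True)

# Cosine similarity between every keyword and every patent abstract
scores = util.cos_sim(key_emb, pat_emb)

# Flag patents whose similarity to any neuro keyword exceeds a chosen threshold
THRESHOLD = 0.4  # arbitrary cut-off for this toy example
for j, abstract in enumerate(patent_abstracts):
    if scores[:, j].max() > THRESHOLD:
        print(f"Likely neurotech-related patent: {abstract}")
```

In a setup like this, the battery-housing abstract would score low against every neuroscience keyword and drop out, which is the basic idea behind filtering millions of patents down to a neurotech subset.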

Findings in bullet points,

Key stylized facts are:

● The field of neuroscience has witnessed a remarkable surge in the overall number of publications since 2000, exhibiting a nearly 35-fold increase over the period considered, reaching 1.2 million in 2021. The annual number of publications in neuroscience has nearly tripled since 2000, exceeding 90,000 publications a year in 2021. This increase became even more pronounced since 2019.
● The United States leads in terms of neuroscience publication output (40%), followed by the United Kingdom (9%), Germany (7%), China (5%), Canada (4%), Japan (4%), Italy (4%), France (4%), the Netherlands (3%), and Australia (3%). These countries account for over 80% of neuroscience publications from 2000 to 2021.
● Big divides emerge, with 70% of countries in the world having less than 10 high-impact neuroscience publications between 2000 to 2021.
● Specific neurotechnology-related research trends between 2000 and 2021 include:
○ An increase in Brain-Computer Interface (BCI) research around 2010, maintaining a consistent presence ever since.
○ A significant surge in Epilepsy Detection research in 2017 and 2018, reflecting the increased use of AI and machine learning in healthcare.
○ Consistent interest in Neuroimaging Analysis, which peaks around 2004, likely because of its importance in brain activity and language comprehension studies.
○ While peaking in 2016 and 2017, Deep Brain Stimulation (DBS) remains a persistent area of research, underlining its potential in treating conditions like Parkinson’s disease and essential tremor.
● Between 2000 and 2020, the total number of patent applications in this field increased significantly, experiencing a 20-fold increase from less than 500 to over 12,000. In terms of annual figures, a consistent upward trend in neurotechnology-related patent applications emerges, with a notable doubling observed between 2015 and 2020.
● The United States account for nearly half of all worldwide patent applications (47%). Other major contributors include South Korea (11%), China (10%), Japan (7%), Germany (7%), and France (5%). These five countries together account for 87% of IP5 neurotech patents applied between 2000 and 2020.
○ The United States has historically led the field, with a peak around 2010, a decline towards 2015, and a recovery up to 2020.
○ South Korea emerged as a significant contributor after 1990, overtaking Germany in the late 2000s to become the second-largest developer of neurotechnology. By the late 2010s, South Korea’s annual neurotechnology patent applications approximated those of the United States.
○ China exhibits a sharp increase in neurotechnology patent applications in the mid-2010s, bringing it on par with the United States in terms of application numbers.
● The United States ranks highest in both scientific publications and patents, indicating their strong ability to transform knowledge into marketable inventions. China, France, and Korea excel in leveraging knowledge to develop patented innovations. Conversely, countries such as the United Kingdom, Germany, Italy, Canada, Brazil, and Australia lag behind in effectively translating neurotech knowledge into patentable innovations.
● In terms of patent quality measured by forward citations, the leading countries are Germany, US, China, Japan, and Korea.
● A breakdown of patents by technology field reveals that Computer Technology is the most important field in neurotechnology, exceeding Medical Technology, Biotechnology, and Pharmaceuticals. The growing importance of algorithmic applications, including neural computing techniques, also emerges by looking at the increase in patent applications in these fields between 2015-2020. Compared to the reference year, computer technologies-related patents in neurotech increased by 355% and by 92% in medical technology.
● An analysis of the specialization patterns of the top-5 countries developing neurotechnologies reveals that Germany has been specializing in chemistry-related technology fields, whereas Asian countries, particularly South Korea and China, focus on computer science and electrical engineering-related fields. The United States exhibits a balanced configuration with specializations in both chemistry and computer science-related fields.
● The entities – i.e. both companies and other institutions – leading worldwide innovation in the neurotech space are: IBM (126 IP5 patents, US), Ping An Technology (105 IP5 patents, CH), Fujitsu (78 IP5 patents, JP), Microsoft (76 IP5 patents, US)1, Samsung (72 IP5 patents, KR), Sony (69 IP5 patents, JP) and Intel (64 IP5 patents, US).

This report further proposes a pioneering taxonomy of neurotechnologies based on International Patent Classification (IPC) codes.

• 67 distinct patent clusters in neurotechnology are identified, which mirror the diverse research and development landscape of the field. The 20 most prominent neurotechnology groups, particularly in areas like multimodal neuromodulation, seizure prediction, neuromorphic computing [emphasis mine], and brain-computer interfaces, point to potential strategic areas for research and commercialization.
• The variety of patent clusters identified mirrors the breadth of neurotechnology’s potential applications, from medical imaging and limb rehabilitation to sleep optimization and assistive exoskeletons.
• The development of a baseline IPC-based taxonomy for neurotechnology offers a structured framework that enriches our understanding of this technological space, and can facilitate research, development and analysis. The identified key groups mirror the interdisciplinary nature of neurotechnology and underscores the potential impact of neurotechnology, not only in healthcare but also in areas like information technology and biomaterials, with non-negligible effects over societies and economies.

1 If we consider Microsoft Technology Licensing LLC and Microsoft Corporation as being under the same umbrella, Microsoft leads worldwide developments with 127 IP5 patents. Similarly, if we were to consider that Siemens AG and Siemens Healthcare GmbH belong to the same conglomerate, Siemens would appear much higher in the ranking, in third position, with 84 IP5 patents. The distribution of intellectual property assets across companies belonging to the same conglomerate is frequent and mirrors strategic as well as operational needs and features, among others. [pp. 9-11]

Surprises and comments

Interesting and helpful to learn that “neurotechnology interacts with other technological trajectories, especially Artificial Intelligence;” this has changed and improved my understanding of neurotechnology.

It was unexpected to find Canada in the top ten countries producing neuroscience papers. However, finding out that the country lags in translating its ‘neuro’ knowledge into patentable innovation is not entirely a surprise.

It can’t be an accident that countries with major ‘electronics and computing’ companies lead in patents. These companies do have researchers but they also buy startups to acquire patents. They (and ‘patent trolls’) will also file patents preemptively. For the patent trolls, it’s a moneymaking proposition and for the large companies, it’s a way of protecting their own interests and/or (I imagine) forcing a sale.

The mention of neuromorphic (brainlike) computing in the taxonomy section was surprising and puzzling. Up to this point, I’ve thought of neuromorphic computing as a kind of alternative or addition to standard computing but the authors have blurred the lines as per UNESCO’s definition of neurotechnology (specifically, “… emulate the structure and function of the neural systems of animals or human beings”). Again, this report is broadening my understanding of neurotechnology. Of course, it took two instances, the definition and the taxonomy, before I quite grasped it.

What’s puzzling is that neuromorphic engineering, a broader term that includes neuromorphic computing, isn’t used or mentioned. (For an explanation of the terms neuromorphic computing and neuromorphic engineering, there’s my June 23, 2023 posting, “Neuromorphic engineering: an overview.”)

The report

I won’t have time for everything. Here are some of the highlights from my admittedly personal perspective.

It’s not only about curing disease

From the report,

Neurotechnology’s applications however extend well beyond medicine [emphasis mine], and span from research, to education, to the workplace, and even people’s everyday life. Neurotechnology-based solutions may enhance learning and skill acquisition and boost focus through brain stimulation techniques. For instance, early research finds that brain-zapping caps appear to boost memory for at least one month (Berkeley, 2022). This could one day be used at home to enhance memory functions [emphasis mine]. They can further enable new ways to interact with the many digital devices we use in everyday life, transforming the way we work, live and interact. One example is the Sound Awareness wristband developed by a Stanford team (Neosensory, 2022) which enables individuals to “hear” by converting sound into tactile feedback, so that sound impaired individuals can perceive spoken words through their skin. Takagi and Nishimoto (2023) analyzed the brain scans taken through Magnetic Resonance Imaging (MRI) as individuals were shown thousands of images. They then trained a generative AI tool called Stable Diffusion on the brain scan data of the study’s participants, thus creating images that roughly corresponded to the real images shown. While this does not correspond to reading the mind of people, at least not yet, and some limitations of the study have been highlighted (Parshall, 2023), it nevertheless represents an important step towards developing the capability to interface human thoughts with computers [emphasis mine], via brain data interpretation.

While the above examples may sound somewhat like science fiction, the recent uptake of generative Artificial Intelligence applications and of large language models such as ChatGPT or Bard, demonstrates that the seemingly impossible can quickly become an everyday reality. At present, anyone can purchase online electroencephalogram (EEG) devices for a few hundred dollars [emphasis mine], to measure the electrical activity of their brain for meditation, gaming, or other purposes. [pp. 14-15]

This is a very impressive achievement. Some of the research cited was published earlier this year (2023). The extraordinary speed is a testament to the efforts by the authors and their teams. It’s also a testament to how quickly the field is moving.

I’m glad to see the mention of and focus on consumer neurotechnology. (While the authors don’t speculate, I am free to do so.) Consumer neurotechnology could be viewed as one of the steps toward normalizing a cyborg future for all of us. Yes, we have books, television programmes, movies, and video games, which all normalize the idea, but the people depicted have been severely injured and require the augmentation. With consumer neurotechnology, you have easily accessible devices being used to enhance people who aren’t injured; they just want to be ‘better’.

This phrase seemed particularly striking “… an important step towards developing the capability to interface human thoughts with computers” in light of some claims made by the Australian military in my June 13, 2023 posting “Mind-controlled robots based on graphene: an Australian research story.” (My posting has an embedded video demonstrating the Brain Robotic Interface (BRI) in action. Also, see the paragraph below the video for my ‘measured’ response.)

There’s no mention of the military in the report, which seems more like a deliberate than an inadvertent omission, given the importance of military innovation where technology is concerned.

This section gives a good overview of government initiatives (in the report it’s followed by a table of the programmes),

Thanks to the promises it holds, neurotechnology has garnered significant attention from both governments and the private sector and is considered by many as an investment priority. According to the International Brain Initiative (IBI), brain research funding has become increasingly important over the past ten years, leading to a rise in large-scale state-led programs aimed at advancing brain intervention technologies (International Brain Initiative, 2021). Since 2013, initiatives such as the United States’ Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the European Union’s Human Brain Project (HBP), as well as major national initiatives in China, Japan and South Korea have been launched with significant funding support from the respective governments. The Canadian Brain Research Strategy, initially operated as a multi-stakeholder coalition on brain research, is also actively seeking funding support from the government to transform itself into a national research initiative (Canadian Brain Research Strategy, 2022). A similar proposal is also seen in the case of the Australian Brain Alliance, calling for the establishment of an Australian Brain Initiative (Australian Academy of Science, n.d.). [pp. 15-16]

Privacy

There are some concerns such as these,

Beyond the medical realm, research suggests that emotional responses of consumers related to preferences and risks can be concurrently tracked by neurotechnology, such as neuroimaging, and that neural data can better predict market-level outcomes than traditional behavioral data (Karmarkar and Yoon, 2016). As such, neural data is increasingly sought after in the consumer market for purposes such as digital phenotyping, neurogaming, and neuromarketing (UNESCO, 2021). This surge in demand gives rise to risks like hacking, unauthorized data reuse, extraction of privacy-sensitive information, digital surveillance, criminal exploitation of data, and other forms of abuse. These risks prompt the question of whether neural data needs distinct definition and safeguarding measures.

These issues are particularly relevant today as a wide range of electroencephalogram (EEG) headsets that can be used at home are now available in consumer markets for purposes that range from meditation assistance to controlling electronic devices through the mind. Imagine an individual is using one of these devices to play a neurofeedback game, which records the person’s brain waves during the game. Without the person being aware, the system can also identify the patterns associated with an undiagnosed mental health condition, such as anxiety. If the game company sells this data to third parties, e.g. health insurance providers, this may lead to an increase of insurance fees based on undisclosed information. This hypothetical situation would represent a clear violation of mental privacy and of unethical use of neural data.

Another example is in the field of advertising, where companies are increasingly interested in using neuroimaging to better understand consumers’ responses to their products or advertisements, a practice known as neuromarketing. For instance, a company might use neural data to determine which advertisements elicit the most positive emotional responses in consumers. While this can help companies improve their marketing strategies, it raises significant concerns about mental privacy. Questions arise in relation to consumers being aware or not that their neural data is being used, and in the extent to which this can lead to manipulative advertising practices that unfairly exploit unconscious preferences. Such potential abuses underscore the need for explicit consent and rigorous data protection measures in the use of neurotechnology for neuromarketing purposes. [pp. 21-22]

Legalities

Some countries already have laws and regulations regarding neurotechnology data,

At the national level, only a few countries have enacted laws and regulations to protect mental integrity or have included neuro-data in personal data protection laws (UNESCO, University of Milan-Bicocca (Italy) and State University of New York – Downstate Health Sciences University, 2023). Examples are the constitutional reform undertaken by Chile (Republic of Chile, 2021), the Charter for the responsible development of neurotechnologies of the Government of France (Government of France, 2022), and the Digital Rights Charter of the Government of Spain (Government of Spain, 2021). They propose different approaches to the regulation and protection of human rights in relation to neurotechnology. Countries such as the UK are also examining under which circumstances neural data may be considered as a special category of data under the general data protection framework (i.e. UK’s GDPR) (UK’s Information Commissioner’s Office, 2023) [p. 24]

As you can see, these are recent laws. There doesn’t seem to be any attempt here in Canada even though there is an act being reviewed in Parliament that could conceivably include neural data. This is from my May 1, 2023 posting,

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). [emphasis added July 11, 2023] You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

My focus at the time was artificial intelligence and, now, after reading this UNESCO report and briefly looking at the Innovation, Science and Economic Development (ISED) Canada summary and a detailed series of descriptions of the act on ISED’s Canada’s Digital Charter webpage, I don’t see anything that specifies neural data but it’s not excluded either.

IP5 patents

Here’s the explanation (the footnote is included at the end of the excerpt),

IP5 patents represent a subset of overall patents filed worldwide, which have the characteristic of having been filed in at least one top intellectual property offices (IPO) worldwide (the so called IP5, namely the Chinese National Intellectual Property Administration, CNIPA (formerly SIPO); the European Patent Office, EPO; the Japan Patent Office, JPO; the Korean Intellectual Property Office, KIPO; and the United States Patent and Trademark Office, USPTO) as well as another country, which may or may not be an IP5. This signals their potential applicability worldwide, as their inventiveness and industrial viability have been validated by at least two leading IPOs. This gives these patents a sort of “quality” check, also since patenting inventions is costly and if applicants try to protect the same invention in several parts of the world, this normally mirrors that the applicant has expectations about their importance and expected value. If we were to conduct the same analysis using information about individually considered patent applied worldwide, i.e. without filtering for quality nor considering patent families, we would risk conducting a biased analysis based on duplicated data. Also, as patentability standards vary across countries and IPOs, and what matters for patentability is the existence (or not) of prior art in the IPO considered, we would risk mixing real innovations with patents related to catching up phenomena in countries that are not at the forefront of the technology considered.

9 The five IP offices (IP5) is a forum of the five largest intellectual property offices in the world that was set up to improve the efficiency of the examination process for patents worldwide. The IP5 Offices together handle about 80% of the world’s patent applications, and 95% of all work carried out under the Patent Cooperation Treaty (PCT), see http://www.fiveipoffices.org. (Dernis et al., 2015) [p. 31]

AI assistance on this report

As noted earlier, I have next to no experience with the analytical tools, having not attempted this kind of work in several years. Here’s an example of what they were doing,

We utilize a combination of text embeddings based on Bidirectional Encoder Representations from Transformer (BERT), dimensionality reduction, and hierarchical clustering inspired by the BERTopic methodology to identify latent themes within research literature. Latent themes or topics in the context of topic modeling represent clusters of words that frequently appear together within a collection of documents (Blei, 2012). These groupings are not explicitly labeled but are inferred through computational analysis examining patterns in word usage. These themes are ‘hidden’ within the text, only to be revealed through this analysis. …

We further utilize OpenAI’s GPT-4 model to enrich our understanding of topics’ keywords and to generate topic labels (OpenAI, 2023), thus supplementing expert review of the broad interdisciplinary corpus. Recently, GPT-4 has shown impressive results in medical contexts across various evaluations (Nori et al., 2023), making it a useful tool to enhance the information obtained from prior analysis stages, and to complement them. The automated process enhances the evaluation workflow, effectively emphasizing neuroscience themes pertinent to potential neurotechnology patents. Notwithstanding existing concerns about hallucinations (Lee, Bubeck and Petro, 2023) and errors in generative AI models, this methodology employs the GPT-4 model for summarization and interpretation tasks, which significantly mitigates the likelihood of hallucinations. Since the model is constrained to the context provided by the keyword collections, it limits the potential for fabricating information outside of the specified boundaries, thereby enhancing the accuracy and reliability of the output. [pp. 33-34]
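I can’t reproduce their pipeline but, for the curious, here’s roughly what “BERT embeddings, dimensionality reduction, and hierarchical clustering” looks like when you lean on the open source BERTopic library that inspired their methodology. This is only a sketch with made-up documents; the real analysis ran over millions of Scopus records and added GPT-4 topic labelling, which isn’t shown here.

```python
# Minimal topic-modelling sketch in the spirit of the report's methodology
# (BERT embeddings -> dimensionality reduction -> clustering). This is NOT the
# authors' code; the documents below are invented placeholders.
from bertopic import BERTopic

docs = [
    "Deep brain stimulation for Parkinson's disease symptom control",
    "EEG-based seizure prediction with convolutional neural networks",
    "Resting-state fMRI analysis of language comprehension",
    "Brain-computer interface for cursor control using motor imagery",
    "Memristive synapses for spiking neuromorphic processors",
] * 20  # BERTopic needs a reasonably sized corpus before clusters emerge

topic_model = BERTopic(verbose=False)   # defaults: sentence embeddings + UMAP + HDBSCAN
topics, probs = topic_model.fit_transform(docs)

# Inspect the keywords that characterize each latent theme
print(topic_model.get_topic_info().head())
for topic_id in set(topics):
    if topic_id != -1:                  # -1 is BERTopic's outlier bucket
        print(topic_id, topic_model.get_topic(topic_id)[:5])
```

The keywords printed for each cluster are the “latent themes” the quoted passage describes; in the report, GPT-4 was then asked to turn keyword lists like these into human-readable topic labels.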

I couldn’t resist adding the ChatGPT paragraph given all of the recent hoopla about it.

Multimodal neuromodulation and neuromorphic computing patents

I think this gives a pretty good indication of the activity on the patent front,

The largest, coherent topic, termed “multimodal neuromodulation,” comprises 535 patents detailing methodologies for deep or superficial brain stimulation designed to address neurological and psychiatric ailments. These patented technologies interact with various points in neural circuits to induce either Long-Term Potentiation (LTP) or Long-Term Depression (LTD), offering treatment for conditions such as obsession, compulsion, anxiety, depression, Parkinson’s disease, and other movement disorders. The modalities encompass implanted deep-brain stimulators (DBS), Transcranial Magnetic Stimulation (TMS), and transcranial Direct Current Stimulation (tDCS). Among the most representative documents for this cluster are patents with titles: Electrical stimulation of structures within the brain or Systems and methods for enhancing or optimizing neural stimulation therapy for treating symptoms of Parkinson’s disease and or other movement disorders. [p. 65]

Given my longstanding interest in memristors, which (I believe) have to a large extent helped to stimulate research into neuromorphic computing, this had to be included. Then, there was the brain-computer interfaces cluster,

A cluster identified as “Neuromorphic Computing” consists of 366 patents primarily focused on devices designed to mimic human neural networks for efficient and adaptable computation. The principal elements of these inventions are resistive memory cells and artificial synapses. They exhibit properties similar to the neurons and synapses in biological brains, thus granting these devices the ability to learn and modulate responses based on rewards, akin to the adaptive cognitive capabilities of the human brain.

The primary technology classes associated with these patents fall under specific IPC codes, representing the fields of neural network models, analog computers, and static storage structures. Essentially, these classifications correspond to technologies that are key to the construction of computers and exhibit cognitive functions similar to human brain processes.

Examples for this cluster include neuromorphic processing devices that leverage variations in resistance to store and process information, artificial synapses exhibiting spike-timing dependent plasticity, and systems that allow event-driven learning and reward modulation within neuromorphic computers.

In relation to neurotechnology as a whole, the “neuromorphic computing” cluster holds significant importance. It embodies the fusion of neuroscience and technology, thereby laying the basis for the development of adaptive and cognitive computational systems. Understanding this specific cluster provides a valuable insight into the progressing domain of neurotechnology, promising potential advancements across diverse fields, including artificial intelligence and healthcare.

The “Brain-Computer Interfaces” cluster, consisting of 146 patents, embodies a key aspect of neurotechnology that focuses on improving the interface between the brain and external devices. The technology classification codes associated with these patents primarily refer to methods or devices for treatment or protection of eyes and ears, devices for introducing media into, or onto, the body, and electric communication techniques, which are foundational elements of brain-computer interface (BCI) technologies.

Key patents within this cluster include a brain-computer interface apparatus adaptable to use environment and method of operating thereof, a double closed circuit brain-machine interface system, and an apparatus and method of brain-computer interface for device controlling based on brain signal. These inventions mainly revolve around the concept of using brain signals to control external devices, such as robotic arms, and improving the classification performance of these interfaces, even after long periods of non-use.

The inventions described in these patents improve the accuracy of device control, maintain performance over time, and accommodate multiple commands, thus significantly enhancing the functionality of BCIs.

Other identified technologies include systems for medical image analysis, limb rehabilitation, tinnitus treatment, sleep optimization, assistive exoskeletons, and advanced imaging techniques, among others. [pp. 66-67]
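Since spike-timing dependent plasticity comes up in the “Neuromorphic Computing” cluster above, here’s a toy Python version of the rule for anyone unfamiliar with it: a synapse is strengthened when the presynaptic spike arrives just before the postsynaptic one and weakened when the order is reversed. The parameters are illustrative only and are not taken from any of the patents in the report.

```python
# Toy spike-timing-dependent plasticity (STDP) rule; parameters are illustrative,
# not drawn from any patent discussed in the UNESCO report.
import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # time constant in milliseconds

def stdp_update(weight, t_pre, t_post):
    """Strengthen the synapse if the presynaptic spike precedes the postsynaptic one
    (causal pairing -> LTP), weaken it otherwise (LTD)."""
    dt = t_post - t_pre
    if dt > 0:
        weight += A_PLUS * math.exp(-dt / TAU)
    else:
        weight -= A_MINUS * math.exp(dt / TAU)
    return min(max(weight, 0.0), 1.0)   # keep the weight in a bounded range

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)   # pre before post: weight increases
w = stdp_update(w, t_pre=30.0, t_post=22.0)   # post before pre: weight decreases
print(round(w, 4))
```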

Having sections on neuromorphic computing and brain-computer interface patents in immediate proximity led to more speculation on my part. Imagine how much easier it would be to initiate a BCI connection if it’s powered with a neuromorphic (brainlike) computer/device. [ETA July 21, 2023: Following on from that thought, it might be more than just easier to initiate a BCI connection. Could a brainlike computer become part of your brain? Why not? It’s been successfully argued that a robotic wheelchair was part of someone’s body; see my January 30, 2013 posting and scroll down about 40% of the way.]

Neurotech policy debates

The report concludes with this,

Neurotechnology is a complex and rapidly evolving technological paradigm whose trajectories have the power to shape people’s identity, autonomy, privacy, sentiments, behaviors and overall well-being, i.e. the very essence of what it means to be human.

Designing and implementing careful and effective norms and regulations ensuring that neurotechnology is developed and deployed in an ethical manner, for the good of individuals and for society as a whole, call for a careful identification and characterization of the issues at stake. This entails shedding light on the whole neurotechnology ecosystem, that is what is being developed, where and by whom, and also understanding how neurotechnology interacts with other developments and technological trajectories, especially AI. Failing to do so may result in ineffective (at best) or distorted policies and policy decisions, which may harm human rights and human dignity.

Addressing the need for evidence in support of policy making, the present report offers first time robust data and analysis shedding light on the neurotechnology landscape worldwide. To this end, it proposes and implements an innovative approach that leverages artificial intelligence and deep learning on data from scientific publications and paten[t]s to identify scientific and technological developments in the neurotech space. The methodology proposed represents a scientific advance in itself, as it constitutes a quasi-automated replicable strategy for the detection and documentation of neurotechnology-related breakthroughs in science and innovation, to be repeated over time to account for the evolution of the sector. Leveraging this approach, the report further proposes an IPC-based taxonomy for neurotechnology which allows for a structured framework to the exploration of neurotechnology, to enable future research, development and analysis. The innovative methodology proposed is very flexible and can in fact be leveraged to investigate different emerging technologies, as they arise.

In terms of technological trajectories, we uncover a shift in the neurotechnology industry, with greater emphasis being put on computer and medical technologies in recent years, compared to traditionally dominant trajectories related to biotechnology and pharmaceuticals. This shift warrants close attention from policymakers, and calls for attention in relation to the latest (converging) developments in the field, especially AI and related methods and applications and neurotechnology.

This is all the more important and the observed growth and specialization patterns are unfolding in the context of regulatory environments that, generally, are either not existent or not fit for purpose. Given the sheer implications and impact of neurotechnology on the very essence of human beings, this lack of regulation poses key challenges related to the possible infringement of mental integrity, human dignity, personal identity, privacy, freedom of thought, and autonomy, among others. Furthermore, issues surrounding accessibility and the potential for neurotech enhancement applications triggers significant concerns, with far-reaching implications for individuals and societies. [pp. 72-73]

Last words about the report

Informative, readable, and thought-provoking. And, it helped broaden my understanding of neurotechnology.

Future endeavours?

I’m hopeful that one of these days one of these groups (UNESCO, Canadian Science Policy Centre, or ???) will tackle the issue of business bankruptcy in the neurotechnology sector. It has already occurred, as noted in my “Going blind when your neural implant company flirts with bankruptcy [long read]” April 5, 2022 posting. That story opens with a woman going blind in a New York subway when her neural implant fails. It’s how she found out that the company which supplied her implant was going out of business.

In my July 7, 2023 posting about the UNESCO July 2023 dialogue on neurotechnology, I’ve included information on Neuralink (one of Elon Musk’s companies) and its approval (despite some investigations) by the US Food and Drug Administration to start human clinical trials. Scroll down about 75% of the way to the “Food for thought” subhead where you will find stories about allegations made against Neuralink.

The end

If you want to know more about the field, the report offers a seven-page bibliography and there’s a lot of material here; you could start with this December 3, 2019 posting, “Neural and technological inequalities,” which features an article mentioning a discussion between two scientists. Surprisingly (to me), the source article is in Fast Company (“a leading progressive business media brand,” according to their tagline).

I have two categories you may want to check: Human Enhancement and Neuromorphic Engineering. There are also a number of tags: neuromorphic computing, machine/flesh, brainlike computing, cyborgs, neural implants, neuroprosthetics, memristors, and more.

Should you have any observations or corrections, please feel free to leave them in the Comments section of this posting.

A nonvolatile photo-memristor

Credit: by Xiao Fu, Tangxin Li, Bin Caid, Jinshui Miao, Gennady N. Panin, Xinyu Ma, Jinjin Wang, Xiaoyong Jiang, Qing Lia, Yi Dong, Chunhui Hao, Juyi Sun, Hangyu Xu, Qixiao Zhao, Mengjia Xia, Bo Song, Fansheng Chen, Xiaoshuang Chen, Wei Lu, Weida Hu

It took a while to get there, but the February 13, 2023 news item on phys.org announced research into extending memristors from tunable conductance to reconfigurable photo-response,

In traditional vision systems, the optical information is captured by a frame-based digital camera, and then the digital signal is processed afterwards using machine-learning algorithms. In this scenario, a large amount of data (mostly redundant) has to be transferred from standalone sensing elements to the processing units, which leads to high latency and power consumption.

To address this problem, much effort has been devoted to developing an efficient approach, where some of the memory and computational tasks are offloaded to sensor elements that can perceive and process the optical signal simultaneously.

In a new paper published in Light: Science & Applications, a team of scientists, led by Professor Weida Hu from School of Physics and Optoelectronic Engineering, Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou, China, State Key Laboratory of Infrared Physics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, China, and co-workers have developed a non-volatile photo-memristor, in which the reconfigurable responsivity can be modulated by the charge and/or photon flux through it and further stored in the device.

A February 13, 2023 Chinese Academy of Sciences press release, which originated the news item, provided more technical detail about the work,

The non-volatile photo-memristor has a simple two-terminal architecture, in which photoexcited carriers and oxygen-related ions are coupled, leading to a displaced and pinched hysteresis in the current-voltage characteristics. For the first time, non-volatile photo-memristors implement computationally complete logic with photoresponse-stateful operations, for which the same photo-memristor serves as both a logic gate and memory, using photoresponse as a physical state variable instead of light, voltage and memresistance. Polarity reversal of photo-memristors shows great potential for in-memory sensing and computing with feature extraction and image recognition for neuromorphic vision.

The photo-memristor demonstrates tunable short-circuit current in a non-volatile mode under illumination. By mimicking the biological functionalities of the human retina and designing specific device structures, the devices can act as neural network for neuromorphic visual processing and implementation of completely photoresponse-stateful logic operations triggered by electrical and light stimuli together. It can support various kinds of sensing tasks with all-in-one sensing-memory-computing approaches. These scientists summarize the operational principle and feature of their device:

“We design[ed] a two-terminal device with MoS2-xOx and specific graphene for three purposes in one: (1) to provide low barrier energy for the migration of oxygen ions; (2) to perform as geometry-asymmetric metal–semiconductor–metal van der Waals heterostructures with multi-photoresponse states; and (3) as an extension of a memristor, this device not only provides tunable conductance, but also demonstrates reconfigurable photoresponse for reading at zero bias voltage.”

“Moreover, the tunable short-circuit photocurrent and photoresponse can be increased to 889.8 nA and 98.8 mA/W, respectively, which are much higher than that of other reconfigurable phototransistors based on 2D materials. To reverse the channel polarity and obtain a gate-tunable short-circuit photocurrent, the channel semiconductor must be thin enough. Thus, it is difficult to use the thick film needed to absorb enough light to get a large signal. In our case, the mechanism of the two-terminal device rearrangement is based on ion migration, which is not limited by the thickness. We can increase the thickness of the film to absorb more photons and get a large short-circuit photocurrent,” they added.

“This new concept of a two-terminal photo-memristor not only enables all-in-one sensing-memory-computing approaches for neuromorphic vision hardware, but also brings great convenience for high-density integration,” the scientists forecast.

Here’s a link to and a citation for the paper,

Graphene/MoS2−xOx/graphene photomemristor with tunable non-volatile responsivities for neuromorphic vision processing by Xiao Fu, Tangxin Li, Bin Caid, Jinshui Miao, Gennady N. Panin, Xinyu Ma, Jinjin Wang, Xiaoyong Jiang, Qing Lia, Yi Dong, Chunhui Hao, Juyi Sun, Hangyu Xu, Qixiao Zhao, Mengjia Xia, Bo Song, Fansheng Chen, Xiaoshuang Chen, Wei Lu, Weida Hu. Light: Science & Applications volume 12, Article number: 39 (2023) DOI: https://doi.org/10.1038/s41377-023-01079-5 Published: 07 February 2023

This paper is open access.

A brainlike (neuromorphic) camera can go beyond diffraction limit of light

Just when I think I’m getting caught up with my backlog along comes something like this. A February 21, 2023 news item on Nanowerk announces research that combines neuromorphic (brainlike) engineering and nanotechnology, Note: A link has been removed,

In a new study, researchers at the Indian Institute of Science (IISc) show how a brain-inspired image sensor can go beyond the diffraction limit of light to detect miniscule objects such as cellular components or nanoparticles invisible to current microscopes. Their novel technique, which combines optical microscopy with a neuromorphic camera and machine learning algorithms, presents a major step forward in pinpointing objects smaller than 50 nanometers in size.

A February 21, 2023 (?) Indian Institute of Science (IISc) press release (also on EurekAlert), which originated the news item, describes the nature of the task and provides some technical details,

Since the invention of optical microscopes, scientists have strived to surpass a barrier called the diffraction limit, which means that the microscope cannot distinguish between two objects if they are smaller than a certain size (typically 200-300 nanometers). Their efforts have largely focused on either modifying the molecules being imaged, or developing better illumination strategies – some of which led to the 2014 Nobel Prize in Chemistry. “But very few have actually tried to use the detector itself to try and surpass this detection limit,” says Deepak Nair, Associate Professor at the Centre for Neuroscience (CNS), IISc, and corresponding author of the study.  

Measuring roughly 40 mm (height) by 60 mm (width) by 25 mm (diameter), and weighing about 100 grams, the neuromorphic camera used in the study mimics the way the human retina converts light into electrical impulses, and has several advantages over conventional cameras. In a typical camera, each pixel captures the intensity of light falling on it for the entire exposure time that the camera focuses on the object, and all these pixels are pooled together to reconstruct an image of the object. In neuromorphic cameras, each pixel operates independently and asynchronously, generating events or spikes only when there is a change in the intensity of light falling on that pixel. This generates sparse and lower amount of data compared to traditional cameras, which capture every pixel value at a fixed rate, regardless of whether there is any change in the scene. This functioning of a neuromorphic camera is similar to how the human retina works, and allows the camera to “sample” the environment with much higher temporal resolution – because it is not limited by a frame rate like normal cameras – and also perform background suppression.  

“Such neuromorphic cameras have a very high dynamic range (>120 dB), which means that you can go from a very low-light environment to very high-light conditions. The combination of the asynchronous nature, high dynamic range, sparse data, and high temporal resolution of neuromorphic cameras make them well-suited for use in neuromorphic microscopy,” explains Chetan Singh Thakur, Assistant Professor at the Department of Electronic Systems Engineering (DESE), IISc, and co-author. 

In the current study, the group used their neuromorphic camera to pinpoint individual fluorescent beads smaller than the limit of diffraction, by shining laser pulses at both high and low intensities, and measuring the variation in the fluorescence levels. As the intensity increases, the camera captures the signal as an “ON” event, while an “OFF” event is reported when the light intensity decreases. The data from these events were pooled together to reconstruct frames. 

To accurately locate the fluorescent particles within the frames, the team used two methods. The first was a deep learning algorithm, trained on about one and a half million image simulations that closely represented the experimental data, to predict where the centroid of the object could be, explains Rohit Mangalwedhekar, former research intern at CNS and first author of the study. A wavelet segmentation algorithm was also used to determine the centroids of the particles separately for the ON and the OFF events. Combining the predictions from both allowed the team to zero in on the object’s precise location with greater accuracy than existing techniques.  

“In biological processes like self-organisation, you have molecules that are alternating between random or directed movement, or that are immobilised,” explains Nair. “Therefore, you need to have the ability to locate the centre of this molecule with the highest precision possible so that we can understand the thumb rules that allow the self-organisation.” The team was able to closely track the movement of a fluorescent bead moving freely in an aqueous solution using this technique. This approach can, therefore, have widespread applications in precisely tracking and understanding stochastic processes in biology, chemistry and physics.  

Caption: Transformation of cumulative probability density of ON and OFF processes allows localisation below the limit of classical single particle detection. Credit: Mangalwedhekar et al
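For anyone wondering how an event-based (neuromorphic) camera differs from a frame-based one in practice, here’s a toy Python sketch of the principle described in the press release: each pixel compares the incoming intensity with the last value it reported and emits an ON or OFF event only when the change crosses a threshold. The numbers are invented, and real event cameras work on log intensity in continuous time, so treat this as illustration only, not the IISc setup.

```python
# Toy illustration of event-based (neuromorphic) pixel behaviour; not the IISc hardware.
import numpy as np

rng = np.random.default_rng(0)
threshold = 0.2                      # per-pixel contrast threshold (arbitrary)
last_reported = rng.random((4, 4))   # intensity each pixel last signalled

def generate_events(new_frame, last_reported, threshold):
    """Return sparse ON/OFF events; only pixels whose intensity changed produce output."""
    diff = new_frame - last_reported
    on_idx = np.argwhere(diff > threshold)       # brightness increased -> ON event
    off_idx = np.argwhere(diff < -threshold)     # brightness decreased -> OFF event
    changed = np.abs(diff) > threshold
    last_reported = np.where(changed, new_frame, last_reported)
    return on_idx, off_idx, last_reported

new_frame = last_reported + rng.normal(0, 0.3, size=(4, 4))
on_events, off_events, last_reported = generate_events(new_frame, last_reported, threshold)
print(f"{len(on_events)} ON events, {len(off_events)} OFF events "
      f"out of {new_frame.size} pixels")   # most pixels stay silent -> sparse data
```

The sparsity is the point: because unchanged pixels stay silent, the sensor produces far less data than a frame camera and can, in the study above, report intensity fluctuations around individual fluorescent beads with very high temporal resolution.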

Here’s a link to and a citation for the paper,

Achieving nanoscale precision using neuromorphic localization microscopy by Rohit Mangalwedhekar, Nivedita Singh, Chetan Singh Thakur, Chandra Sekhar Seelamantula, Mini Jose & Deepak Nair. Nature Nanotechnology volume 18, pages 380–389 (2023) DOI: https://doi.org/10.1038/s41565-022-01291-1 Published online: 23 January 2023 Issue Date: April 2023

This paper is behind a paywall.

Neuromorphic engineering: an overview

In a February 13, 2023 essay, Michael Berger who runs the Nanowerk website provides an overview of brainlike (neuromorphic) engineering.

This essay is the most extensive piece I’ve seen on Berger’s website and it covers everything from the reasons why scientists are so interested in mimicking the human brain to specifics about memristors. Here are a few excerpts (Note: Links have been removed),

Neuromorphic engineering is a cutting-edge field that focuses on developing computer hardware and software systems inspired by the structure, function, and behavior of the human brain. The ultimate goal is to create computing systems that are significantly more energy-efficient, scalable, and adaptive than conventional computer systems, capable of solving complex problems in a manner reminiscent of the brain’s approach.

This interdisciplinary field draws upon expertise from various domains, including neuroscience, computer science, electronics, nanotechnology, and materials science. Neuromorphic engineers strive to develop computer chips and systems incorporating artificial neurons and synapses, designed to process information in a parallel and distributed manner, akin to the brain’s functionality.

Key challenges in neuromorphic engineering encompass developing algorithms and hardware capable of performing intricate computations with minimal energy consumption, creating systems that can learn and adapt over time, and devising methods to control the behavior of artificial neurons and synapses in real-time.

Neuromorphic engineering has numerous applications in diverse areas such as robotics, computer vision, speech recognition, and artificial intelligence. The aspiration is that brain-like computing systems will give rise to machines better equipped to tackle complex and uncertain tasks, which currently remain beyond the reach of conventional computers.

It is essential to distinguish between neuromorphic engineering and neuromorphic computing, two related but distinct concepts. Neuromorphic computing represents a specific application of neuromorphic engineering, involving the utilization of hardware and software systems designed to process information in a manner akin to human brain function.

One of the major obstacles in creating brain-inspired computing systems is the vast complexity of the human brain. Unlike traditional computers, the brain operates as a nonlinear dynamic system that can handle massive amounts of data through various input channels, filter information, store key information in short- and long-term memory, learn by analyzing incoming and stored data, make decisions in a constantly changing environment, and do all of this while consuming very little power.

The Human Brain Project [emphasis mine], a large-scale research project launched in 2013, aims to create a comprehensive, detailed, and biologically realistic simulation of the human brain, known as the Virtual Brain. One of the goals of the project is to develop new brain-inspired computing technologies, such as neuromorphic computing.

The Human Brain Project has been funded by the European Union (1B Euros over 10 years starting in 2013 and sunsetting in 2023). From the Human Brain Project Media Invite,

The final Human Brain Project Summit 2023 will take place in Marseille, France, from March 28-31, 2023.

As the ten-year European Flagship Human Brain Project (HBP) approaches its conclusion in September 2023, the final HBP Summit will highlight the scientific achievements of the project at the interface of neuroscience and technology and the legacy that it will leave for the brain research community. …

One last excerpt from the essay,

Neuromorphic computing is a radical reimagining of computer architecture at the transistor level, modeled after the structure and function of biological neural networks in the brain. This computing paradigm aims to build electronic systems that attempt to emulate the distributed and parallel computation of the brain by combining processing and memory in the same physical location.

This is unlike traditional computing, which is based on von Neumann systems consisting of three different units: processing unit, I/O unit, and storage unit. This stored program architecture is a model for designing computers that uses a single memory to store both data and instructions, and a central processing unit to execute those instructions. This design, first proposed by mathematician and computer scientist John von Neumann, is widely used in modern computers and is considered to be the standard architecture for computer systems and relies on a clear distinction between memory and processing.

I found the diagram Berger included, contrasting von Neumann’s design with a neuromorphic design, illuminating,

A graphical comparison of the von Neumann and Neuromorphic architecture. Left: The von Neumann architecture used in traditional computers. The red lines depict the data communication bottleneck in the von Neumann architecture. Right: A graphical representation of a general neuromorphic architecture. In this architecture, the processing and memory is decentralized across different neuronal units (the yellow nodes) and synapses (the black lines connecting the nodes), creating a naturally parallel computing environment via the mesh-like structure. (Source: DOI: 10.1109/IS.2016.7737434) [downloaded from https://www.nanowerk.com/spotlight/spotid=62353.php]
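To make “processing and memory in the same physical location” a little more concrete, here’s a toy numpy sketch of how a memristor crossbar performs a matrix-vector multiplication: the weights sit at the crosspoints as conductances, input voltages drive the rows, and Ohm’s and Kirchhoff’s laws deliver the weighted sums as column currents without any separate memory fetch. The values are arbitrary and the sketch is mine, not Berger’s.

```python
# Conceptual sketch of in-memory computing on a memristor crossbar (values arbitrary).
import numpy as np

# Conductance matrix G: each crosspoint memristor stores one synaptic weight (in siemens)
G = np.array([[1.0e-6, 0.4e-6, 0.9e-6],
              [0.2e-6, 1.1e-6, 0.3e-6]])

# Input voltages applied to the crossbar rows (in volts)
V = np.array([0.5, 0.8])

# Ohm's law at each crosspoint plus Kirchhoff's current law on each column:
# the column currents ARE the matrix-vector product, computed where the weights live.
I = V @ G            # amperes flowing out of each column
print(I)

# A von Neumann machine would instead fetch G from a separate memory, multiply in the
# CPU, and write the result back -- the data movement the red "bottleneck" lines depict.
```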

Berger offers a very good overview and I recommend reading his February 13, 2023 essay on neuromorphic engineering with one proviso, Note: A link has been removed,

Many researchers in this field see memristors as a key device component for neuromorphic engineering. Memristor – or memory resistor – devices are non-volatile nanoelectronic memory devices that were first theorized [emphasis mine] by Leon Chua in the 1970’s. However, it was some thirty years later that the first practical device was fabricated in 2008 by a group led by Stanley Williams [sometimes cited as R. Stanley Williams] at HP Research Labs.

Chua wasn’t the first, as he himself has noted. Chua arrived at his theory independently in the 1970s, but Bernard Widrow theorized what he called a ‘memistor’ in the 1960s. In fact, “Memristors: they are older than you think” is a May 22, 2012 posting which featured the article “Two centuries of memristors” by Themistoklis Prodromakis, Christofer Toumazou and Leon Chua, published in Nature Materials.

Most of us try to get it right but we don’t always succeed. It’s always good practice to read everyone (including me) with a little skepticism.

Learning and remembering like a human brain: nanowire networks

It’s all about memory in this April 21, 2023 news item on Nanowerk, Note: A link has been removed,

An international team led by scientists at the University of Sydney has demonstrated nanowire networks can exhibit both short- and long-term memory like the human brain.

The research has been published today in the journal Science Advances (“Neuromorphic learning, working memory, and metaplasticity in nanowire networks”), led by Dr Alon Loeffler, who received his PhD in the School of Physics, with collaborators in Japan.

An April 24, 2023 University of Sydney (Australia) press release (also on EurekAlert but published April 21, 2023), which originated the news item, offers more detail about the research,

“In this research we found higher-order cognitive function, which we normally associate with the human brain, can be emulated in non-biological hardware,” Dr Loeffler said.

“This work builds on our previous research in which we showed how nanotechnology could be used to build a brain-inspired electrical device with neural network-like circuitry and synapse-like signalling.

“Our current work paves the way towards replicating brain-like learning and memory in non-biological hardware systems and suggests that the underlying nature of brain-like intelligence may be physical.”

Nanowire networks are a type of nanotechnology typically made from tiny, highly conductive silver wires that are invisible to the naked eye, covered in a plastic material, which are scattered across each other like a mesh. The wires mimic aspects of the networked physical structure of a human brain.

Advances in nanowire networks could herald many real-world applications, such as improving robotics or sensor devices that need to make quick decisions in unpredictable environments.

“This nanowire network is like a synthetic neural network because the nanowires act like neurons, and the places where they connect with each other are analogous to synapses,” senior author Professor Zdenka Kuncic, from the School of Physics, said.

“Instead of implementing some kind of machine learning task, in this study Dr Loeffler has actually taken it one step further and tried to demonstrate that nanowire networks exhibit some kind of cognitive function.”

To test the capabilities of the nanowire network, the researchers gave it a test similar to a common memory task used in human psychology experiments, called the N-Back task.

For a person, the N-Back task might involve remembering a specific picture of a cat from a series of feline images presented in a sequence. An N-Back score of 7, the average for people, indicates the person can recognise the same image that appeared seven steps back.

When applied to the nanowire network, the researchers found it could ‘remember’ a desired endpoint in an electric circuit seven steps back, meaning a score of 7 in an N-Back test.

“What we did here is manipulate the voltages of the end electrodes to force the pathways to change, rather than letting the network just do its own thing. We forced the pathways to go where we wanted them to go,” Dr Loeffler said.

“When we implement that, its memory had much higher accuracy and didn’t really decrease over time, suggesting that we’ve found a way to strengthen the pathways to push them towards where we want them, and then the network remembers it.

“Neuroscientists think this is how the brain works, certain synaptic connections strengthen while others weaken, and that’s thought to be how we preferentially remember some things, how we learn and so on.”

The researchers said when the nanowire network is constantly reinforced, it reaches a point where that reinforcement is no longer needed because the information is consolidated into memory.

“It’s kind of like the difference between long-term memory and short-term memory in our brains,” Professor Kuncic said.

“If we want to remember something for a long period of time, we really need to keep training our brains to consolidate that, otherwise it just kind of fades away over time.

“One task showed that the nanowire network can store up to seven items in memory at substantially higher than chance levels without reinforcement training and near-perfect accuracy with reinforcement training.”

COI [Conflict of Interest] Statement

Professor Zdenka Kuncic is with Emergentia [can be found here], Inc. The authors declare that they have no other competing interests.

Caption: Neural network (left) nanowire network (right) Credit: Loeffler et al.

Here's a link to and a citation for the paper in Science Advances (another link and citation follows),

Neuromorphic learning, working memory, and metaplasticity in nanowire networks by Alon Loeffler, Adrian Diaz-Alvarez, Ruomin Zhu, Natesh Ganesh, James M. Shine, Tomonobu Nakayama, and Zdenka Kuncic. Science Advances 21 Apr 2023 Vol 9, Issue 16 DOI: 10.1126/sciadv.adg3289

This paper is open access.
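
For readers unfamiliar with the N-Back task mentioned in the press release, here's a minimal sketch of the psychology version of the test; it's my own illustration, not the electrical protocol the Sydney team applied to their nanowire networks.

```python
# A minimal sketch of the N-Back task as used in psychology experiments.
# This is an illustration of the test itself, not the electrical protocol
# the Sydney team applied to their nanowire networks.
import random

def n_back_hits(stimuli, n):
    """Positions where the current item matches the one shown n steps back."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

def score_responses(stimuli, responses, n):
    """Compare a participant's (or device's) 'match' calls with the truth."""
    truth = set(n_back_hits(stimuli, n))
    hits = len(truth & set(responses))
    false_alarms = len(set(responses) - truth)
    return hits, len(truth), false_alarms

random.seed(1)
items = [random.choice("ABCD") for _ in range(30)]   # e.g., a sequence of cat pictures
n = 7                                                # the 7-back level mentioned above
truth = n_back_hits(items, n)
print("sequence:", "".join(items))
print("true 7-back matches at positions:", truth)

responses = truth[:-1]            # a 'participant' that misses only the final match
hits, total, false_alarms = score_responses(items, responses, n)
print(f"hits: {hits}/{total}, false alarms: {false_alarms}")
```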

Never having seen this organization's (Zenodo.org) setup before, I'm a little confused by it,

Neuromorphic Learning, Working Memory and Metaplasticity in Nanowire Networks by Loeffler, Alon; Diaz-Alvarez, Adrian; Zhu, Ruomin; Ganesh, Natesh; Shine, James. M; Nakayama, Tomonobu; Kuncic, Zdenka, https://zenodo.org/record/7633958#.ZEv_2EnMKpo Published: February 12, 2023

I'm not sure if they're including an early version of the article (I don't think so), but they do have other files, which are open access, and they reference the Science Advances study published in April 2023.

It seems their focus is data, from the About Zenodo webpage,

Every last detail

To fully understand and reproduce research performed by others, it is necessary to have all the details. In the digital age, that means all the digital artefacts, which are all welcomed in Zenodo.

To be an effective catch-all, that eliminates barriers to adopting data sharing practices, Zenodo does not impose any requirements on format, size, access restrictions or licence. Quite literally we wish there to be no reason for researchers not to share!

Data, software and other artefacts in support of publications may be the core, but equally welcome are the materials associated with the conferences, projects or the institutions themselves, all of which are necessary to understand the scholarly process.

Don’t wait until the publication date!

Publication may happen months or years after completion of the research, so collecting together all the research artefacts at that stage to publish openly is often challenging. Zenodo therefore offers the possibility to house closed and restricted content, so that artefacts can be captured and stored safely whilst the research is ongoing, such that nothing is missing when they are openly shared later in the research workflow.

Additionally, to help publishing, research materials for the review process can be safely uploaded to Zenodo in restricted records and then protected links can be shared with the reviewers. Content can also be embargoed and automatically opened when the associated paper is published.

To support all these use cases, the simple web interface is supplemented by a rich API which allows third-party tools and services to use Zenodo as a backend in their workflow.
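
Since the Zenodo text mentions 'a rich API', here's a minimal sketch of what fetching a record's metadata might look like. The endpoint and field names follow Zenodo's developer documentation as I understand it, so treat them as assumptions and check https://developers.zenodo.org before relying on them.

```python
# A minimal sketch of pulling a record's metadata from Zenodo's public REST API.
# The endpoint and field names follow Zenodo's developer documentation as I
# understand it; treat them as assumptions and verify before relying on them.
import requests

RECORD_ID = "7633958"  # the nanowire-network record mentioned above

resp = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}", timeout=30)
resp.raise_for_status()
metadata = resp.json().get("metadata", {})

print("Title:    ", metadata.get("title"))
print("Published:", metadata.get("publication_date"))
creators = metadata.get("creators", [])
# creator entries have carried a plain "name" field; newer records may nest it
names = [c.get("name") or c.get("person_or_org", {}).get("name", "") for c in creators]
print("Creators: ", "; ".join(n for n in names if n))
```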

Combining silicon with metal oxide memristors to create powerful, low-energy intensive chips enabling AI in portable devices

This week, I'm publishing my first stories (see also the June 13, 2023 posting “ChatGPT and a neuromorphic [brainlike] synapse”) where artificial intelligence (AI) software is combined with a memristor (a hardware component) for brainlike (neuromorphic) computing.

Here’s more about some of the latest research from a March 30, 2023 news item on ScienceDaily,

Everyone is talking about the newest AI and the power of neural networks, forgetting that software is limited by the hardware on which it runs. But it is hardware, says USC [University of Southern California] Professor of Electrical and Computer Engineering Joshua Yang, that has become “the bottleneck.” Now, Yang’s new research with collaborators might change that. They believe that they have developed a new type of chip with the best memory of any chip thus far for edge AI (AI in portable devices).

A March 29, 2023 University of Southern California (USC) news release (also on EurekAlert), which originated the news item, contextualizes the research and delves further into the topic of neuromorphic hardware,

For approximately the past 30 years, while the size of the neural networks needed for AI and data science applications doubled every 3.5 months, the hardware capability needed to process them doubled only every 3.5 years. According to Yang, hardware presents a more and more severe problem for which few have patience. 

Governments, industry, and academia are trying to address this hardware challenge worldwide. Some continue to work on hardware solutions with silicon chips, while others are experimenting with new types of materials and devices.  Yang’s work falls into the middle—focusing on exploiting and combining the advantages of the new materials and traditional silicon technology that could support heavy AI and data science computation. 

Their new paper in Nature focuses on the understanding of fundamental physics that leads to a drastic increase in memory capacity needed for AI hardware. The team led by Yang, with researchers from USC (including Han Wang’s group), MIT [Massachusetts Institute of Technology], and the University of Massachusetts, developed a protocol for devices to reduce “noise” and demonstrated the practicality of using this protocol in integrated chips. This demonstration was made at TetraMem, a startup company co-founded by Yang and his co-authors  (Miao Hu, Qiangfei Xia, and Glenn Ge), to commercialize AI acceleration technology. According to Yang, this new memory chip has the highest information density per device (11 bits) among all types of known memory technologies thus far. Such small but powerful devices could play a critical role in bringing incredible power to the devices in our pockets. The chips are not just for memory but also for the processor. And millions of them in a small chip, working in parallel to rapidly run your AI tasks, could only require a small battery to power it. 

The chips that Yang and his colleagues are creating combine silicon with metal oxide memristors in order to create powerful but low-energy intensive chips. The technique focuses on using the positions of atoms to represent information rather than the number of electrons (which is the current technique involved in computations on chips). The positions of the atoms offer a compact and stable way to store more information in an analog, instead of digital fashion. Moreover, the information can be processed where it is stored instead of being sent to one of the few dedicated ‘processors,’ eliminating the so-called ‘von Neumann bottleneck’ existing in current computing systems.  In this way, says Yang, computing for AI is “more energy efficient with a higher throughput.”

How it works: 

Yang explains that electrons, which are manipulated in traditional chips, are “light.” And this lightness makes them prone to moving around and being more volatile.  Instead of storing memory through electrons, Yang and collaborators are storing memory in full atoms. Here is why this memory matters. Normally, says Yang, when one turns off a computer, the information memory is gone—but if you need that memory to run a new computation and your computer needs the information all over again, you have lost both time and energy.  This new method, focusing on activating atoms rather than electrons, does not require battery power to maintain stored information. Similar scenarios happen in AI computations, where a stable memory capable of high information density is crucial. Yang imagines that this new tech may enable powerful AI capability in edge devices, such as Google Glasses, which he says previously suffered from a frequent recharging issue.

Further, by converting chips to rely on atoms as opposed to electrons, chips become smaller.  Yang adds that with this new method, there is more computing capacity at a smaller scale. And this method, he says, could offer “many more levels of memory to help increase information density.” 

To put it in context, right now, ChatGPT is running on a cloud. The new innovation, followed by some further development, could put the power of a mini version of ChatGPT in everyone’s personal device. It could make such high-powered tech more affordable and accessible for all sorts of applications. 
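
Two bits of back-of-the-envelope arithmetic help put the press release's claims in perspective. The second calculation simply takes the quoted doubling rates at face value over the quoted 30-year span, so treat it as an illustration of the gap rather than a precise figure.

```python
# Back-of-the-envelope arithmetic for two claims in the press release above.
import math

# 1) "11 bits" of information per memristor implies 2**11 distinct conductance levels.
levels = 2 ** 11
print(f"11 bits per device -> {levels} distinguishable conductance levels")

# 2) Models doubling every 3.5 months vs. hardware every 3.5 years, taken at
#    face value over the quoted ~30 years, gives an enormous gap; illustrative only.
years = 30
model_doublings = years * 12 / 3.5      # one doubling every 3.5 months
hardware_doublings = years / 3.5        # one doubling every 3.5 years
gap = 2 ** (model_doublings - hardware_doublings)
print(f"models: ~2^{model_doublings:.0f}x growth, hardware: ~2^{hardware_doublings:.0f}x, "
      f"gap: ~10^{math.log10(gap):.0f}")
```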

Here’s a link to and a citation for the paper,

Thousands of conductance levels in memristors integrated on CMOS by Mingyi Rao, Hao Tang, Jiangbin Wu, Wenhao Song, Max Zhang, Wenbo Yin, Ye Zhuo, Fatemeh Kiani, Benjamin Chen, Xiangqi Jiang, Hefei Liu, Hung-Yu Chen, Rivu Midya, Fan Ye, Hao Jiang, Zhongrui Wang, Mingche Wu, Miao Hu, Han Wang, Qiangfei Xia, Ning Ge, Ju Li & J. Joshua Yang. Nature volume 615, pages 823–829 (2023) DOI: https://doi.org/10.1038/s41586-023-05759-5 Issue Date: 30 March 2023 Published: 29 March 2023

This paper is behind a paywall.

ChatGPT and a neuromorphic (brainlike) synapse

I was teaching an introductory course about nanotechnology back in 2014 and, at the end of a session, stated (more or less) that the full potential for artificial intelligence (software) wasn't going to be realized until the hardware (memristors) was part of the package. (It's interesting to revisit that in light of the recent uproar around AI, covered in my May 25, 2023 posting, which offered a survey of the situation.)

One of the major problems with artificial intelligence is its memory. The other is energy consumption. Both problems could be addressed by the integration of memristors into the hardware, giving rise to neuromorphic (brainlike) computing. (For those who don't know, the human brain, in addition to its capacity for memory, is remarkably energy efficient.)

This is the first time I’ve seen research into memristors where software has been included. Disclaimer: There may be a lot more research of this type; I just haven’t seen it before. A March 24, 2023 news item on ScienceDaily announces research from Korea,

ChatGPT’s impact extends beyond the education sector and is causing significant changes in other areas. The AI language model is recognized for its ability to perform various tasks, including paper writing, translation, coding, and more, all through question-and-answer-based interactions. The AI system relies on deep learning, which requires extensive training to minimize errors, resulting in frequent data transfers between memory and processors. However, traditional digital computer systems’ von Neumann architecture separates the storage and computation of information, resulting in increased power consumption and significant delays in AI computations. Researchers have developed semiconductor technologies suitable for AI applications to address this challenge.

A March 24, 2023 Pohang University of Science & Technology (POSTECH) press release (also on EurekAlert), which originated the news item, provides more detail,

A research team at POSTECH, led by Professor Yoonyoung Chung (Department of Electrical Engineering, Department of Semiconductor Engineering), Professor Seyoung Kim (Department of Materials Science and Engineering, Department of Semiconductor Engineering), and Ph.D. candidate Seongmin Park (Department of Electrical Engineering), has developed a high-performance AI semiconductor device [emphasis mine] using indium gallium zinc oxide (IGZO), an oxide semiconductor widely used in OLED [organic light-emitting diode] displays. The new device has proven to be excellent in terms of performance and power efficiency.

Efficient AI operations, such as those of ChatGPT, require computations to occur within the memory responsible for storing information. Unfortunately, previous AI semiconductor technologies were limited in meeting all the requirements, such as linear and symmetric programming and uniformity, to improve AI accuracy.

The research team sought IGZO as a key material for AI computations that could be mass-produced and provide uniformity, durability, and computing accuracy. This compound comprises four atoms in a fixed ratio of indium, gallium, zinc, and oxygen and has excellent electron mobility and leakage current properties, which have made it a backplane of the OLED display.

Using this material, the researchers developed a novel synapse device [emphasis mine] composed of two transistors interconnected through a storage node. The precise control of this node’s charging and discharging speed has enabled the AI semiconductor to meet the diverse performance metrics required for high-level performance. Furthermore, applying synaptic devices to a large-scale AI system requires the output current of synaptic devices to be minimized. The researchers confirmed the possibility of utilizing the ultra-thin film insulators inside the transistors to control the current, making them suitable for large-scale AI.

The researchers used the newly developed synaptic device to train and classify handwritten data, achieving a high accuracy of over 98%, [emphasis mine] which verifies its potential application in high-accuracy AI systems in the future.

Professor Chung explained, “The significance of my research team’s achievement is that we overcame the limitations of conventional AI semiconductor technologies that focused solely on material development. To do this, we utilized materials already in mass production. Furthermore, linear and symmetrical programming characteristics were obtained through a new structure using two transistors as one synaptic device. Thus, our successful development and application of this new AI semiconductor technology show great potential to improve the efficiency and accuracy of AI.”

This study was published last week [March 2023] on the inside back cover of Advanced Electronic Materials [paper edition] and was supported by the Next-Generation Intelligent Semiconductor Technology Development Program through the National Research Foundation, funded by the Ministry of Science and ICT [Information and Communication Technologies] of Korea.
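
The phrase 'linear and symmetric programming' does a lot of work in that press release, so here's a toy illustration, with invented parameters, of what it means: apply identical programming pulses to an ideal linear device and to a saturating (nonlinear) one and compare the spacing of the resulting weight levels. This is not a model of the POSTECH device.

```python
# A toy illustration of "linear and symmetric programming" in an analog synapse:
# apply identical potentiating pulses to an ideal linear device and to a
# saturating (nonlinear) one and compare the spacing of the resulting weight
# levels. All parameters are invented; this is not a model of the POSTECH device.
import math

def levels(update, n_pulses=10, g0=0.0):
    g, out = g0, [g0]
    for _ in range(n_pulses):
        g = update(g)
        out.append(g)
    return out

linear = levels(lambda g: min(g + 0.1, 1.0))                         # equal steps
nonlinear = levels(lambda g: min(g + 0.25 * math.exp(-3 * g), 1.0))  # shrinking steps

print("linear   :", " ".join(f"{g:.2f}" for g in linear))
print("nonlinear:", " ".join(f"{g:.2f}" for g in nonlinear))
```

The linear device gives evenly spaced, predictable weight levels, which is what training algorithms assume; the saturating one crowds its levels near the top, so identical pulses no longer mean identical weight changes, and accuracy suffers.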

Here’s a link to and a citation for the paper,

Highly Linear and Symmetric Analog Neuromorphic Synapse Based on Metal Oxide Semiconductor Transistors with Self-Assembled Monolayer for High-Precision Neural Network Computation by Seongmin Park, Suwon Seong, Gilsu Jeon, Wonjae Ji, Kyungmi Noh, Seyoung Kim, Yoonyoung Chung. Advanced Electronic Materials Volume 9, Issue 3, March 2023, 2200554 DOI: https://doi.org/10.1002/aelm.202200554 First published online: 29 December 2022

This paper is open access.

Also, there is an alternative to using materials such as indium gallium zinc oxide (IGZO) for a memristor: biological cells, as suggested by my June 6, 2023 posting, which features work on biological neural networks (BNNs) for creating robots that can perform brainlike computing.

Fluidic memristor with neuromorphic (brainlike) functions

I think this is the first time I've had occasion to feature a fluidic memristor. From a January 13, 2023 news item on Nanowerk, Note: Links have been removed,

Neuromorphic devices have attracted increasing attention because of their potential applications in neuromorphic [brainlike] computing, intelligent sensing, brain-machine interfaces and neuroprosthetics. However, most of the neuromorphic functions realized are based on mimicking electric pulses with solid-state devices. Mimicking the functions of chemical synapses, especially neurotransmitter-related functions, is still a challenge in this research area.

In a study published in Science (“Neuromorphic functions with a polyelectrolyte-confined fluidic memristor”), the research group led by Prof. YU Ping and MAO Lanqun from the Institute of Chemistry of the Chinese Academy of Sciences developed a polyelectrolyte-confined fluidic memristor (PFM), which could emulate diverse electric pulses with ultralow energy consumption. Moreover, benefitting from the fluidic nature of PFM, chemical-regulated electric pulses and chemical-electric signal transduction could also be emulated.

A January 12, 2023 Chinese Academy of Sciences (CAS) press release, which originated the news item, offers more technical detail,

The researchers first fabricated the polyelectrolyte-confined fluidic channel by surface-initiated atomic transfer polymerization. By systematically studying the current-voltage relationship, they found that the fabricated fluidic channel satisfied the nature of a memristor, which they defined as the PFM. The ion memory originated from the relatively slow diffusion dynamics of anions into and out of the polyelectrolyte brushes.

The PFM could emulate short-term plasticity (STP) patterns, including paired-pulse facilitation and paired-pulse depression. These functions can be operated at voltages and energy consumption as low as those of biological systems, suggesting potential applications in bioinspired sensorimotor implementation, intelligent sensing and neuroprosthetics.

The PFM could also emulate chemical-regulated STP electric pulses. Based on the interaction between the polyelectrolyte and counterions, the retention time could be regulated in different electrolytes.

More importantly, in a physiological electrolyte (i.e., phosphate-buffered saline solution, pH 7.4), the PFM could emulate the regulation of memory by adenosine triphosphate (ATP), demonstrating the possibility of regulating synaptic plasticity with a neurotransmitter. Furthermore, based on the interaction between polyelectrolytes and counterions, chemical-electric signal transduction was accomplished with the PFM, which is a key step towards the fabrication of artificial chemical synapses.

With its structural emulation of ion channels, the PFM is versatile and easily interfaces with biological systems, paving a way to building neuromorphic devices with advanced functions by introducing rich chemical designs. This study provides a new way to interface chemistry with neuromorphic devices.
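
Paired-pulse facilitation, one of the short-term plasticity behaviours mentioned above, is easy to sketch with a generic phenomenological model: a second pulse arriving soon after the first rides on a leftover boost that decays with the inter-pulse interval. The time constant and boost below are invented; this is not the polyelectrolyte physics.

```python
# A minimal sketch of paired-pulse facilitation, one of the short-term
# plasticity (STP) behaviours the fluidic memristor is reported to emulate.
# This is a generic phenomenological model, not the polyelectrolyte physics.
import math

TAU = 50e-3        # facilitation decay time constant (s), assumed
DELTA = 0.4        # fractional boost left behind by a pulse, assumed
BASE = 1.0         # baseline response amplitude, arbitrary units

def paired_pulse_ratio(interval_s):
    """Response to the 2nd pulse relative to the 1st, for a given interval."""
    facilitation = DELTA * math.exp(-interval_s / TAU)   # residual boost remaining
    return (BASE * (1.0 + facilitation)) / BASE

for dt_ms in (10, 25, 50, 100, 200):
    print(f"pulse interval {dt_ms:4d} ms -> paired-pulse ratio "
          f"{paired_pulse_ratio(dt_ms / 1000):.2f}")
```

The closer together the two pulses arrive, the larger the second response, and the effect fades as the interval grows, which is the behaviour the term describes.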

Here’s a link to and a citation for the paper,

Neuromorphic functions with a polyelectrolyte-confined fluidic memristor by Tianyi Xiong, Changwei Li, Xiulan He, Boyang Xie, Jianwei Zong, Yanan Jiang, Wenjie Ma, Fei Wu, Junjie Fei, Ping Yu, and Lanqun Mao. Science 12 Jan 2023 Vol 379, Issue 6628 pp. 156-161 DOI: 10.1126/science.adc9150

This paper is behind a paywall.

Analogue memristor for next-generation brain-mimicking (neuromorphic) computing

This research into an analogue memristor comes from the Korea Institute of Science and Technology (KIST) according to a September 20, 2022 news item on Nanowerk, Note: A link has been removed,

Neuromorphic computing system technology mimicking the human brain has emerged to overcome the limitation of excessive power consumption in the existing von Neumann computing method. A high-performance, analog artificial synapse device, capable of expressing various synapse connection strengths, is required to implement a semiconductor device that uses a brain information transmission method. This method uses signals transmitted between neurons when a neuron generates a spike signal.

However, in conventional resistance-variable memory devices widely used as artificial synapses, the electric field increases as the filament grows with varying resistance, causing a feedback phenomenon that results in rapid filament growth. It is therefore challenging to implement considerable plasticity while maintaining analog (gradual) resistance variation in filament-type devices.

A Korea Institute of Science and Technology (KIST) team, led by Dr. YeonJoo Jeong at the Center for Neuromorphic Engineering, tackled the limitations in analog synaptic characteristics, plasticity, and information preservation, which are chronic obstacles for memristors (neuromorphic semiconductor devices). The team announced the development of an artificial synaptic semiconductor device capable of highly reliable neuromorphic computing (Nature Communications, “Cluster-type analogue memristor by engineering redox dynamics for high-performance neuromorphic computing”).

Caption: Concept image of the article Credit: Korea Institute of Science and Technology (KIST)

A September 20, 2022 (Korea) National Research Council of Science & Technology press release on EurekAlert, which originated the news item, delves further into the research,

The KIST research team fine-tuned the redox properties of active electrode ions to address the small synaptic plasticity that hinders the performance of existing neuromorphic semiconductor devices. Furthermore, various transition metals were doped into the synaptic device to control the reduction probability of active electrode ions. It was discovered that a high reduction probability of ions is a critical variable in the development of high-performance artificial synaptic devices.

Therefore, the research team introduced titanium, a transition metal with a high ion reduction probability, into an existing artificial synaptic device. This maintains the synapse's analog characteristics while delivering device plasticity comparable to a synapse in the biological brain, with approximately a five-fold difference between high and low resistances. Furthermore, they developed a high-performance neuromorphic semiconductor that is approximately 50 times more efficient.

Additionally, due to the high alloy formation reaction concerning the doped titanium transition metal, the information retention increased up to 63 times compared with the existing artificial synaptic device. Furthermore, brain functions, including long-term potentiation and long-term depression, could be more precisely simulated.

The team implemented an artificial neural network learning pattern using the developed artificial synaptic device and attempted artificial intelligence image recognition learning. As a result, the error rate was reduced by more than 60% compared with the existing artificial synaptic device; additionally, the handwriting image pattern (MNIST) recognition accuracy increased by more than 69%. The research team confirmed the feasibility of a high-performance neuromorphic computing system through this improved artificial synaptic device.

Dr. Jeong of KIST stated, “This study drastically improved the synaptic range of motion and information preservation, which were the greatest technical barriers of existing synaptic mimics.” “In the developed artificial synapse device, the device’s analog operation area to express the synapse’s various connection strengths has been maximized, so the performance of brain simulation-based artificial intelligence computing will be improved.” Additionally, he mentioned, “In the follow-up research, we will manufacture a neuromorphic semiconductor chip based on the developed artificial synapse device to realize a high-performance artificial intelligence system, thereby further enhancing competitiveness in the domestic system and artificial intelligence semiconductor field.”

Here’s a link to and a citation for the paper,

Cluster-type analogue memristor by engineering redox dynamics for high-performance neuromorphic computing by Jaehyun Kang, Taeyoon Kim, Suman Hu, Jaewook Kim, Joon Young Kwak, Jongkil Park, Jong Keuk Park, Inho Kim, Suyoun Lee, Sangbum Kim & YeonJoo Jeong. Nature Communications volume 13, Article number: 4040 (2022) DOI: https://doi.org/10.1038/s41467-022-31804-4 Published: 12 July 2022

This paper is open access.

Dynamic molecular switches for brainlike computing at the University of Limerick

Aren’t memristors proof that brainlike computing at the molecular and atomic levels is possible? It seems I have misunderstood memristors according to this November 21, 2022 news item on ScienceDaily,

A breakthrough discovery at University of Limerick in Ireland has revealed for the first time that unconventional brain-like computing at the tiniest scale of atoms and molecules is possible.

Researchers at University of Limerick’s Bernal Institute worked with an international team of scientists to create a new type of organic material that learns from its past behaviour.

The discovery of the ‘dynamic molecular switch’ that emulate[s] synaptic behaviour is revealed in a new study in the international journal Nature Materials.

The study was led by Damien Thompson, Professor of Molecular Modelling in UL’s Department of Physics and Director of SSPC, the UL-hosted Science Foundation Ireland Research Centre for Pharmaceuticals, together with Christian Nijhuis at the Centre for Molecules and Brain-Inspired Nano Systems in University of Twente [Netherlands] and Enrique del Barco from University of Central Florida.

A November 21, 2022 University of Limerick press release (also on EurekAlert), which originated the news item, provides more technical details about the research,

Working during lockdowns, the team developed a two-nanometre thick layer of molecules, which is 50,000 times thinner than a strand of hair and remembers its history as electrons pass through it.

Professor Thompson explained that the “switching probability and the values of the on/off states continually change in the molecular material, which provides a disruptive new alternative to conventional silicon-based digital switches that can only ever be either on or off”.

The newly discovered dynamic organic switch displays all the mathematical logic functions necessary for deep learning, successfully emulating Pavlovian ‘call and response’ synaptic brain-like behaviour.

The researchers demonstrated the new material's properties using extensive experimental characterisation and electrical measurements supported by multi-scale modelling, spanning from predictive modelling of the molecular structures at the quantum level to analytical mathematical modelling of the electrical data.

To emulate the dynamical behaviour of synapses at the molecular level, the researchers combined fast electron transfer (akin to action potentials and fast depolarization processes in biology) with slow proton coupling limited by diffusion (akin to the role of biological calcium ions or neurotransmitters).

Since the electron transfer and proton coupling steps inside the material occur at very different time scales, the transformation can emulate the plastic behaviour of synapse neuronal junctions, Pavlovian learning, and all logic gates for digital circuits, simply by changing the applied voltage and the duration of voltage pulses during the synthesis, they explained.

“This was a great lockdown project, with Chris, Enrique and I pushing each other through zoom meetings and gargantuan email threads to bring our teams’ combined skills in materials modelling, synthesis and characterisation to the point where we could demonstrate these new brain-like computing properties,” explained Professor Thompson.

“The community has long known that silicon technology works completely differently to how our brains work and so we used new types of electronic materials based on soft molecules to emulate brain-like computing networks.”

The researchers explained that the method can in the future be applied to dynamic molecular systems driven by other stimuli such as light and coupled to different types of dynamic covalent bond formation.

This breakthrough opens up a whole new range of adaptive and reconfigurable systems, creating new opportunities in sustainable and green chemistry, from more efficient flow chemistry production of drug products and other value-added chemicals to development of new organic materials for high density computing and memory storage in big data centres.

“This is just the start. We are already busy expanding this next generation of intelligent molecular materials, which is enabling development of sustainable alternative technologies to tackle grand challenges in energy, environment, and health,” explained Professor Thompson.

Professor Norelee Kennedy, Vice President Research at UL, said: “Our researchers are continuously finding new ways of making more effective, more sustainable materials. This latest finding is very exciting, demonstrating the reach and ambition of our international collaborations and showcasing our world-leading ability at UL to encode useful properties into organic materials.”
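
To see what 'Pavlovian call and response' means in computing terms, here's a toy sketch with one fast variable (the response) and one slow variable (the learned association), loosely echoing the fast electron transfer and slow proton coupling picture in the press release. It's an abstract illustration, not the molecular mechanism, and every parameter is invented.

```python
# A toy sketch of Pavlovian 'call and response': food alone always triggers a
# response (fast step); pairing the bell with food slowly strengthens a
# bell->response association (slow step) until the bell alone is enough.
# This is an abstract model, not the dynamic molecular switch itself.
THRESHOLD = 0.5
LEARN_RATE = 0.15      # the slow variable grows when bell and food coincide
FORGET_RATE = 0.02     # and decays slowly otherwise

def respond(w_bell, bell, food):
    """Fast step: does the system respond to this presentation?"""
    drive = 1.0 * food + w_bell * bell     # food alone always drives a response
    return drive >= THRESHOLD

def train_step(w_bell, bell, food):
    """Slow step: strengthen the bell->response association when bell predicts food."""
    if bell and food:
        return min(w_bell + LEARN_RATE, 1.0)
    return max(w_bell - FORGET_RATE, 0.0)

w = 0.0                                     # association strength, the slow state
print("before training, bell alone ->", respond(w, bell=1, food=0))
for _ in range(6):                          # pair bell and food six times
    w = train_step(w, bell=1, food=1)
print(f"after pairing,  bell alone -> {respond(w, bell=1, food=0)} (w = {w:.2f})")
```

The separation of time scales is the point: the fast step decides the immediate output, while the slow step quietly accumulates history, which is the same division of labour the Limerick team attributes to electrons and protons in their molecular layer.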

Here’s a link to and a citation for the paper,

Dynamic molecular switches with hysteretic negative differential conductance emulating synaptic behaviour by Yulong Wang, Qian Zhang, Hippolyte P. A. G. Astier, Cameron Nickle, Saurabh Soni, Fuad A. Alami, Alessandro Borrini, Ziyu Zhang, Christian Honnigfort, Björn Braunschweig, Andrea Leoncini, Dong-Cheng Qi, Yingmei Han, Enrique del Barco, Damien Thompson & Christian A. Nijhuis. Nature Materials volume 21, pages 1403–1411 (2022) DOI: https://doi.org/10.1038/s41563-022-01402-2 Published: 21 November 2022 Issue Date: December 2022

This paper is behind a paywall.