
Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report

Launched on Thursday, July 13, 2023, during UNESCO’s (United Nations Educational, Scientific, and Cultural Organization) “Global dialogue on the ethics of neurotechnology,” the report ties together the usual measures of national scientific supremacy (number of papers published and number of patents filed) with information on corporate investment in the field. Consequently, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends” by Daniel S. Hain, Roman Jurowetzki, Mariagrazia Squicciarini, and Lihui Xu provides better insight into the international neurotechnology scene than is sometimes found in these kinds of reports. By the way, the report is open access.

Here’s what I mean, from the report‘s short summary,

Since 2013, government investments in this field have exceeded $6 billion. Private investment has also seen significant growth, with annual funding experiencing a 22-fold increase from 2010 to 2020, reaching $7.3 billion and totaling $33.2 billion.

This investment has translated into a 35-fold growth in neuroscience publications between 2000-2021 and 20-fold growth in innovations between 2000-2020, as proxied by patents. However, not all are poised to benefit from such developments, as big divides emerge.

Over 80% of high-impact neuroscience publications are produced by only ten countries, while 70% of countries contributed fewer than 10 such papers over the period considered. Similarly, only five countries hold 87% of IP5 neurotech patents.

This report sheds light on the neurotechnology ecosystem, that is, what is being developed, where and by whom, and informs about how neurotechnology interacts with other technological trajectories, especially Artificial Intelligence [emphasis mine]. [p. 2]

The money aspect is eye-opening even when you already have your suspicions. Also, it’s not entirely unexpected to learn that only ten countries produce over 80% of the high-impact neurotech papers and that only five countries hold 87% of the IP5 neurotech patents, but it is stunning to see it in context. (If you’re not familiar with the term ‘IP5 patents’, scroll down in this post to the relevant subhead. Hint: It means the patent was filed in one of the top five jurisdictions; I’ll leave you to guess which ones those might be.)

“Since 2013 …” isn’t quite as informative as the authors may have hoped. I wish they had given a time frame for government investments similar to what they did for corporate investments (e.g., 2010 – 2020). Also, is the $6B (likely in USD) government investment cumulative or an estimated annual number? To sum up, I would have appreciated parallel structure and specificity.

Nitpicks aside, there’s some very good material intended for policy makers. That said, some of the analysis is beyond me; I haven’t used anything even somewhat close to their analytical tools in years and years. This commentary reflects my interests and a very rapid reading. One last thing: this is being written from a Canadian perspective. With those caveats in mind, here’s some of what I found.

A definition, social issues, country statistics, and more

There’s a definition for neurotechnology and a second mention of artificial intelligence being used in concert with neurotechnology. From the report‘s executive summary,

Neurotechnology consists of devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of animals or human beings. It is poised to revolutionize our understanding of the brain and to unlock innovative solutions to treat a wide range of diseases and disorders.

Similarly to Artificial Intelligence (AI), and also due to its convergence with AI, neurotechnology may have profound societal and economic impact, beyond the medical realm. As neurotechnology directly relates to the brain, it triggers ethical considerations about fundamental aspects of human existence, including mental integrity, human dignity, personal identity, freedom of thought, autonomy, and privacy [emphases mine]. Its potential for enhancement purposes and its accessibility further amplify its prospective social and societal implications.

The recent discussions held at UNESCO’s Executive Board further show Member States’ desire to address the ethics and governance of neurotechnology through the elaboration of a new standard-setting instrument on the ethics of neurotechnology, to be adopted in 2025. To this end, it is important to explore the neurotechnology landscape, delineate its boundaries, key players, and trends, and shed light on neurotech’s scientific and technological developments. [p. 7]

Here’s how they sourced the data for the report,

The present report addresses such a need for evidence in support of policy making in relation to neurotechnology by devising and implementing a novel methodology on data from scientific articles and patents:

● We detect topics over time and extract relevant keywords using transformer-based language models fine-tuned for scientific text. Publication data for the period 2000-2021 are sourced from the Scopus database and encompass journal articles and conference proceedings in English. The 2,000 most cited publications per year are further used in in-depth content analysis.
● Keywords are identified through Named Entity Recognition and used to generate search queries for conducting a semantic search on patents’ titles and abstracts, using another language model developed for patent text. This allows us to identify patents associated with the identified neuroscience publications and their topics. The patent data used in the present analysis are sourced from the European Patent Office’s Worldwide Patent Statistical Database (PATSTAT). We consider IP5 patents filed between 2000-2020 having an English language abstract and exclude patents solely related to pharmaceuticals.

This approach allows mapping the advancements detailed in scientific literature to the technological applications contained in patent applications, allowing for an analysis of the linkages between science and technology. This almost fully automated novel approach allows repeating the analysis as neurotechnology evolves. [pp. 8-9]
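To make the keyword-to-patent matching step more concrete, here is a toy sketch of what linking publication-derived keywords to patent titles and abstracts might look like. This is an illustration only: the record fields and the threshold are invented, and a simple bag-of-words similarity stands in for the transformer-based semantic search the report actually uses.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; the report uses transformer models instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_patents(keywords, patents, threshold=0.2):
    """Link publication-derived keywords to patents with similar titles/abstracts."""
    query = embed(" ".join(keywords))
    return [p["id"] for p in patents
            if cosine(query, embed(p["title"] + " " + p["abstract"])) >= threshold]

# Hypothetical records; field names and IDs are invented for illustration.
patents = [
    {"id": "EP1", "title": "Deep brain stimulation device",
     "abstract": "electrode system for deep brain stimulation therapy"},
    {"id": "US2", "title": "Beverage container lid",
     "abstract": "a resealable lid for hot beverages"},
]
keywords = ["deep", "brain", "stimulation"]
print(match_patents(keywords, patents))  # → ['EP1']
```

The point of the sketch is the pipeline shape (keywords → query → similarity over patent text), not the similarity measure itself, which in the report is a learned patent-text language model.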

Findings in bullet points,

Key stylized facts are:
● The field of neuroscience has witnessed a remarkable surge in the overall number of publications since 2000, exhibiting a nearly 35-fold increase over the period considered, reaching 1.2 million in 2021. The annual number of publications in neuroscience has nearly tripled since 2000, exceeding 90,000 publications a year in 2021. This increase became even more pronounced since 2019.
● The United States leads in terms of neuroscience publication output (40%), followed by the United Kingdom (9%), Germany (7%), China (5%), Canada (4%), Japan (4%), Italy (4%), France (4%), the Netherlands (3%), and Australia (3%). These countries account for over 80% of neuroscience publications from 2000 to 2021.
● Big divides emerge, with 70% of countries in the world having fewer than 10 high-impact neuroscience publications between 2000 and 2021.
● Specific neurotechnology-related research trends between 2000 and 2021 include:
○ An increase in Brain-Computer Interface (BCI) research around 2010, maintaining a consistent presence ever since.
○ A significant surge in Epilepsy Detection research in 2017 and 2018, reflecting the increased use of AI and machine learning in healthcare.
○ Consistent interest in Neuroimaging Analysis, which peaks around 2004, likely because of its importance in brain activity and language comprehension studies.
○ While peaking in 2016 and 2017, Deep Brain Stimulation (DBS) remains a persistent area of research, underlining its potential in treating conditions like Parkinson’s disease and essential tremor.
● Between 2000 and 2020, the total number of patent applications in this field increased significantly, experiencing a 20-fold increase from less than 500 to over 12,000. In terms of annual figures, a consistent upward trend in neurotechnology-related patent applications emerges, with a notable doubling observed between 2015 and 2020.
● The United States accounts for nearly half of all worldwide patent applications (47%). Other major contributors include South Korea (11%), China (10%), Japan (7%), Germany (7%), and France (5%). These five countries together account for 87% of IP5 neurotech patents applied for between 2000 and 2020.
○ The United States has historically led the field, with a peak around 2010, a decline towards 2015, and a recovery up to 2020.
○ South Korea emerged as a significant contributor after 1990, overtaking Germany in the late 2000s to become the second-largest developer of neurotechnology. By the late 2010s, South Korea’s annual neurotechnology patent applications approximated those of the United States.
○ China exhibits a sharp increase in neurotechnology patent applications in the mid-2010s, bringing it on par with the United States in terms of application numbers.
● The United States ranks highest in both scientific publications and patents, indicating a strong ability to transform knowledge into marketable inventions. China, France, and Korea excel in leveraging knowledge to develop patented innovations. Conversely, countries such as the United Kingdom, Germany, Italy, Canada, Brazil, and Australia lag behind in effectively translating neurotech knowledge into patentable innovations.
● In terms of patent quality measured by forward citations, the leading countries are Germany, the US, China, Japan, and Korea.
● A breakdown of patents by technology field reveals that Computer Technology is the most important field in neurotechnology, exceeding Medical Technology, Biotechnology, and Pharmaceuticals. The growing importance of algorithmic applications, including neural computing techniques, also emerges from the increase in patent applications in these fields between 2015 and 2020. Compared to the reference year, computer technology-related patents in neurotech increased by 355%, and medical technology patents by 92%.
● An analysis of the specialization patterns of the top-5 countries developing neurotechnologies reveals that Germany has been specializing in chemistry-related technology fields, whereas Asian countries, particularly South Korea and China, focus on computer science and electrical engineering-related fields. The United States exhibits a balanced configuration with specializations in both chemistry and computer science-related fields.
● The entities – i.e., both companies and other institutions – leading worldwide innovation in the neurotech space are: IBM (126 IP5 patents, US), Ping An Technology (105 IP5 patents, CH), Fujitsu (78 IP5 patents, JP), Microsoft (76 IP5 patents, US)1, Samsung (72 IP5 patents, KR), Sony (69 IP5 patents, JP), and Intel (64 IP5 patents, US).

This report further proposes a pioneering taxonomy of neurotechnologies based on International Patent Classification (IPC) codes.

● 67 distinct patent clusters in neurotechnology are identified, which mirror the diverse research and development landscape of the field. The 20 most prominent neurotechnology groups, particularly in areas like multimodal neuromodulation, seizure prediction, neuromorphic computing [emphasis mine], and brain-computer interfaces, point to potential strategic areas for research and commercialization.
● The variety of patent clusters identified mirrors the breadth of neurotechnology’s potential applications, from medical imaging and limb rehabilitation to sleep optimization and assistive exoskeletons.
● The development of a baseline IPC-based taxonomy for neurotechnology offers a structured framework that enriches our understanding of this technological space, and can facilitate research, development and analysis. The identified key groups mirror the interdisciplinary nature of neurotechnology and underscore the potential impact of neurotechnology, not only in healthcare but also in areas like information technology and biomaterials, with non-negligible effects over societies and economies.

1 If we consider Microsoft Technology Licensing LLC and Microsoft Corporation as being under the same umbrella, Microsoft leads worldwide developments with 127 IP5 patents. Similarly, if we were to consider that Siemens AG and Siemens Healthcare GmbH belong to the same conglomerate, Siemens would appear much higher in the ranking, in third position, with 84 IP5 patents. The distribution of intellectual property assets across companies belonging to the same conglomerate is frequent and mirrors strategic as well as operational needs and features, among others. [pp. 9-11]

Surprises and comments

It was interesting and helpful to learn that “neurotechnology interacts with other technological trajectories, especially Artificial Intelligence”; this has changed and improved my understanding of neurotechnology.

It was unexpected to find Canada in the top ten countries producing neuroscience papers. However, finding out that the country lags in translating its ‘neuro’ knowledge into patentable innovation is not entirely a surprise.

It can’t be an accident that countries with major ‘electronics and computing’ companies lead in patents. These companies do have researchers but they also buy startups to acquire patents. They (and ‘patent trolls’) will also file patents preemptively. For the patent trolls, it’s a moneymaking proposition and for the large companies, it’s a way of protecting their own interests and/or (I imagine) forcing a sale.

The mention of neuromorphic (brainlike) computing in the taxonomy section was surprising and puzzling. Up to this point, I’d thought of neuromorphic computing as a kind of alternative or addition to standard computing, but the authors have blurred the lines as per UNESCO’s definition of neurotechnology (specifically, “… emulate the structure and function of the neural systems of animals or human beings”). Again, this report is broadening my understanding of neurotechnology. Of course, it took two instances, the definition and the taxonomy, before I quite grasped it.

What’s puzzling is that neuromorphic engineering, a broader term that includes neuromorphic computing, isn’t used or mentioned. (For an explanation of the terms neuromorphic computing and neuromorphic engineering, there’s my June 23, 2023 posting, “Neuromorphic engineering: an overview.” )

The report

I won’t have time for everything. Here are some of the highlights from my admittedly personal perspective.

It’s not only about curing disease

From the report,

Neurotechnology’s applications however extend well beyond medicine [emphasis mine], and span from research, to education, to the workplace, and even people’s everyday life. Neurotechnology-based solutions may enhance learning and skill acquisition and boost focus through brain stimulation techniques. For instance, early research finds that brain-zapping caps appear to boost memory for at least one month (Berkeley, 2022). This could one day be used at home to enhance memory functions [emphasis mine]. They can further enable new ways to interact with the many digital devices we use in everyday life, transforming the way we work, live and interact. One example is the Sound Awareness wristband developed by a Stanford team (Neosensory, 2022) which enables individuals to “hear” by converting sound into tactile feedback, so that sound impaired individuals can perceive spoken words through their skin. Takagi and Nishimoto (2023) analyzed the brain scans taken through Magnetic Resonance Imaging (MRI) as individuals were shown thousands of images. They then trained a generative AI tool called Stable Diffusion on the brain scan data of the study’s participants, thus creating images that roughly corresponded to the real images shown. While this does not correspond to reading the mind of people, at least not yet, and some limitations of the study have been highlighted (Parshall, 2023), it nevertheless represents an important step towards developing the capability to interface human thoughts with computers [emphasis mine], via brain data interpretation.

While the above examples may sound somewhat like science fiction, the recent uptake of generative Artificial Intelligence applications and of large language models such as ChatGPT or Bard, demonstrates that the seemingly impossible can quickly become an everyday reality. At present, anyone can purchase online electroencephalogram (EEG) devices for a few hundred dollars [emphasis mine], to measure the electrical activity of their brain for meditation, gaming, or other purposes. [pp. 14-15]

This is a very impressive achievement. Some of the research cited was published earlier this year (2023). The extraordinary speed is a testament to the efforts of the authors and their teams. It’s also a testament to how quickly the field is moving.

I’m glad to see the mention of and focus on consumer neurotechnology. (While the authors don’t speculate, I am free to do so.) Consumer neurotechnology could be viewed as one of the steps toward normalizing a cyborg future for all of us. Yes, we have books, television programmes, movies, and video games, which all normalize the idea but the people depicted have been severely injured and require the augmentation. With consumer neurotechnology, you have easily accessible devices being used to enhance people who aren’t injured, they just want to be ‘better’.

This phrase seemed particularly striking “… an important step towards developing the capability to interface human thoughts with computers” in light of some claims made by the Australian military in my June 13, 2023 posting “Mind-controlled robots based on graphene: an Australian research story.” (My posting has an embedded video demonstrating the Brain Robotic Interface (BRI) in action. Also, see the paragraph below the video for my ‘measured’ response.)

There’s no mention of the military in the report, which seems like a deliberate rather than an inadvertent omission given the importance of military innovation where technology is concerned.

This section gives a good overview of government initiatives (in the report it’s followed by a table of the programmes),

Thanks to the promises it holds, neurotechnology has garnered significant attention from both governments and the private sector and is considered by many as an investment priority. According to the International Brain Initiative (IBI), brain research funding has become increasingly important over the past ten years, leading to a rise in large-scale state-led programs aimed at advancing brain intervention technologies (International Brain Initiative, 2021). Since 2013, initiatives such as the United States’ Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the European Union’s Human Brain Project (HBP), as well as major national initiatives in China, Japan and South Korea have been launched with significant funding support from the respective governments. The Canadian Brain Research Strategy, initially operated as a multi-stakeholder coalition on brain research, is also actively seeking funding support from the government to transform itself into a national research initiative (Canadian Brain Research Strategy, 2022). A similar proposal is also seen in the case of the Australian Brain Alliance, calling for the establishment of an Australian Brain Initiative (Australian Academy of Science, n.d.). [pp. 15-16]

Privacy

There are some concerns such as these,

Beyond the medical realm, research suggests that emotional responses of consumers related to preferences and risks can be concurrently tracked by neurotechnology, such as neuroimaging, and that neural data can better predict market-level outcomes than traditional behavioral data (Karmarkar and Yoon, 2016). As such, neural data is increasingly sought after in the consumer market for purposes such as digital phenotyping, neurogaming, and neuromarketing (UNESCO, 2021). This surge in demand gives rise to risks like hacking, unauthorized data reuse, extraction of privacy-sensitive information, digital surveillance, criminal exploitation of data, and other forms of abuse. These risks prompt the question of whether neural data needs distinct definition and safeguarding measures.

These issues are particularly relevant today as a wide range of electroencephalogram (EEG) headsets that can be used at home are now available in consumer markets for purposes that range from meditation assistance to controlling electronic devices through the mind. Imagine an individual is using one of these devices to play a neurofeedback game, which records the person’s brain waves during the game. Without the person being aware, the system can also identify the patterns associated with an undiagnosed mental health condition, such as anxiety. If the game company sells this data to third parties, e.g. health insurance providers, this may lead to an increase of insurance fees based on undisclosed information. This hypothetical situation would represent a clear violation of mental privacy and an unethical use of neural data.

Another example is in the field of advertising, where companies are increasingly interested in using neuroimaging to better understand consumers’ responses to their products or advertisements, a practice known as neuromarketing. For instance, a company might use neural data to determine which advertisements elicit the most positive emotional responses in consumers. While this can help companies improve their marketing strategies, it raises significant concerns about mental privacy. Questions arise in relation to consumers being aware or not that their neural data is being used, and in the extent to which this can lead to manipulative advertising practices that unfairly exploit unconscious preferences. Such potential abuses underscore the need for explicit consent and rigorous data protection measures in the use of neurotechnology for neuromarketing purposes. [pp. 21-22]

Legalities

Some countries already have laws and regulations regarding neurotechnology data,

At the national level, only a few countries have enacted laws and regulations to protect mental integrity or have included neuro-data in personal data protection laws (UNESCO, University of Milan-Bicocca (Italy) and State University of New York – Downstate Health Sciences University, 2023). Examples are the constitutional reform undertaken by Chile (Republic of Chile, 2021), the Charter for the responsible development of neurotechnologies of the Government of France (Government of France, 2022), and the Digital Rights Charter of the Government of Spain (Government of Spain, 2021). They propose different approaches to the regulation and protection of human rights in relation to neurotechnology. Countries such as the UK are also examining under which circumstances neural data may be considered as a special category of data under the general data protection framework (i.e. UK’s GDPR) (UK’s Information Commissioner’s Office, 2023) [p. 24]

As you can see, these are recent laws. There doesn’t seem to be any such attempt here in Canada, even though there is an act being reviewed in Parliament that could conceivably include neural data. This is from my May 1, 2023 posting,

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). [emphasis added July 11, 2023] You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

My focus at the time was artificial intelligence. Now, after reading this UNESCO report and briefly looking at the Innovation, Science and Economic Development (ISED) Canada summary and the detailed descriptions of the act on ISED’s Canada’s Digital Charter webpage, I don’t see anything that specifies neural data, but it’s not excluded either.

IP5 patents

Here’s the explanation (the footnote is included at the end of the excerpt),

IP5 patents represent a subset of overall patents filed worldwide, which have the characteristic of having been filed in at least one of the top intellectual property offices (IPOs) worldwide (the so-called IP5, namely the Chinese National Intellectual Property Administration, CNIPA (formerly SIPO); the European Patent Office, EPO; the Japan Patent Office, JPO; the Korean Intellectual Property Office, KIPO; and the United States Patent and Trademark Office, USPTO) as well as in another country, which may or may not be an IP5. This signals their potential applicability worldwide, as their inventiveness and industrial viability have been validated by at least two leading IPOs. This gives these patents a sort of “quality” check, also since patenting inventions is costly and if applicants try to protect the same invention in several parts of the world, this normally mirrors that the applicant has expectations about their importance and expected value. If we were to conduct the same analysis using information about individually considered patents applied for worldwide, i.e. without filtering for quality nor considering patent families, we would risk conducting a biased analysis based on duplicated data. Also, as patentability standards vary across countries and IPOs, and what matters for patentability is the existence (or not) of prior art in the IPO considered, we would risk mixing real innovations with patents related to catching-up phenomena in countries that are not at the forefront of the technology considered.

9 The five IP offices (IP5) is a forum of the five largest intellectual property offices in the world that was set up to improve the efficiency of the examination process for patents worldwide. The IP5 Offices together handle about 80% of the world’s patent applications, and 95% of all work carried out under the Patent Cooperation Treaty (PCT), see http://www.fiveipoffices.org. (Dernis et al., 2015) [p. 31]
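The filtering rule described above reduces to a simple predicate: a patent counts as an IP5 patent if it was filed with at least one IP5 office and with at least one other office. This is my own sketch of that rule; the office names are real, but the function and its inputs are hypothetical.

```python
# The five IP offices named in the report's footnote.
IP5 = {"CNIPA", "EPO", "JPO", "KIPO", "USPTO"}

def is_ip5_patent(offices):
    """An 'IP5 patent' in the report's sense: filed with at least one IP5
    office plus at least one other office (which may itself be IP5)."""
    offices = set(offices)
    return bool(offices & IP5) and len(offices) >= 2

print(is_ip5_patent({"USPTO", "EPO"}))         # True: two IP5 offices
print(is_ip5_patent({"JPO", "IP Australia"}))  # True: one IP5 plus another office
print(is_ip5_patent({"USPTO"}))                # False: filed with one office only
print(is_ip5_patent({"IP Australia"}))         # False: no IP5 office
```

The two-office requirement is what gives these patents their de facto “quality” check: someone paid to protect the same invention in at least two jurisdictions.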

AI assistance on this report

As noted earlier, I have next to no experience with the analytical tools, having not attempted this kind of work in several years. Here’s an example of what they were doing,

We utilize a combination of text embeddings based on Bidirectional Encoder Representations from Transformers (BERT), dimensionality reduction, and hierarchical clustering inspired by the BERTopic methodology to identify latent themes within research literature. Latent themes or topics in the context of topic modeling represent clusters of words that frequently appear together within a collection of documents (Blei, 2012). These groupings are not explicitly labeled but are inferred through computational analysis examining patterns in word usage. These themes are ‘hidden’ within the text, only to be revealed through this analysis. …

We further utilize OpenAI’s GPT-4 model to enrich our understanding of topics’ keywords and to generate topic labels (OpenAI, 2023), thus supplementing expert review of the broad interdisciplinary corpus. Recently, GPT-4 has shown impressive results in medical contexts across various evaluations (Nori et al., 2023), making it a useful tool to enhance the information obtained from prior analysis stages, and to complement them. The automated process enhances the evaluation workflow, effectively emphasizing neuroscience themes pertinent to potential neurotechnology patents. Notwithstanding existing concerns about hallucinations (Lee, Bubeck and Petro, 2023) and errors in generative AI models, this methodology employs the GPT-4 model for summarization and interpretation tasks, which significantly mitigates the likelihood of hallucinations. Since the model is constrained to the context provided by the keyword collections, it limits the potential for fabricating information outside of the specified boundaries, thereby enhancing the accuracy and reliability of the output. [pp. 33-34]
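For readers who, like me, haven’t touched these tools in a while: the hierarchical clustering step can be illustrated with a minimal, self-contained sketch of single-linkage agglomerative clustering on toy two-dimensional “embeddings.” This is a stand-in only; the authors’ actual pipeline runs BERT embeddings through dimensionality reduction before clustering, not hand-picked 2-D points.

```python
from math import dist  # Euclidean distance, Python 3.8+

def hierarchical_clusters(points, max_gap=1.0):
    """Single-linkage agglomerative clustering: keep merging two clusters
    while the closest pair of points across them is within max_gap."""
    clusters = [[p] for p in points]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(dist(a, b) <= max_gap
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i].extend(clusters.pop(j))
                    merged = True
                    break
            if merged:
                break
    return clusters

# Toy 2-D 'document embeddings'; real ones would come from a BERT model.
docs = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
print(sorted(len(c) for c in hierarchical_clusters(docs)))  # → [2, 2]
```

The two tight groups of points merge into two clusters; in the report’s setting, each cluster of document embeddings becomes a candidate topic, which GPT-4 is then asked to label.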

I couldn’t resist adding the ChatGPT paragraph given all of the recent hoopla about it.

Multimodal neuromodulation and neuromorphic computing patents

I think this gives a pretty good indication of the activity on the patent front,

The largest, coherent topic, termed “multimodal neuromodulation,” comprises 535 patents detailing methodologies for deep or superficial brain stimulation designed to address neurological and psychiatric ailments. These patented technologies interact with various points in neural circuits to induce either Long-Term Potentiation (LTP) or Long-Term Depression (LTD), offering treatment for conditions such as obsession, compulsion, anxiety, depression, Parkinson’s disease, and other movement disorders. The modalities encompass implanted deep-brain stimulators (DBS), Transcranial Magnetic Stimulation (TMS), and transcranial Direct Current Stimulation (tDCS). Among the most representative documents for this cluster are patents with titles: Electrical stimulation of structures within the brain or Systems and methods for enhancing or optimizing neural stimulation therapy for treating symptoms of Parkinson’s disease and or other movement disorders. [p.65]

Given my longstanding interest in memristors, which (I believe) have to a large extent helped to stimulate research into neuromorphic computing, the neuromorphic computing cluster had to be included. Then, there was the brain-computer interfaces cluster,

A cluster identified as “Neuromorphic Computing” consists of 366 patents primarily focused on devices designed to mimic human neural networks for efficient and adaptable computation. The principal elements of these inventions are resistive memory cells and artificial synapses. They exhibit properties similar to the neurons and synapses in biological brains, thus granting these devices the ability to learn and modulate responses based on rewards, akin to the adaptive cognitive capabilities of the human brain.

The primary technology classes associated with these patents fall under specific IPC codes, representing the fields of neural network models, analog computers, and static storage structures. Essentially, these classifications correspond to technologies that are key to the construction of computers and exhibit cognitive functions similar to human brain processes.

Examples for this cluster include neuromorphic processing devices that leverage variations in resistance to store and process information, artificial synapses exhibiting spike-timing dependent plasticity, and systems that allow event-driven learning and reward modulation within neuromorphic computers.

In relation to neurotechnology as a whole, the “neuromorphic computing” cluster holds significant importance. It embodies the fusion of neuroscience and technology, thereby laying the basis for the development of adaptive and cognitive computational systems. Understanding this specific cluster provides a valuable insight into the progressing domain of neurotechnology, promising potential advancements across diverse fields, including artificial intelligence and healthcare.

The “Brain-Computer Interfaces” cluster, consisting of 146 patents, embodies a key aspect of neurotechnology that focuses on improving the interface between the brain and external devices. The technology classification codes associated with these patents primarily refer to methods or devices for treatment or protection of eyes and ears, devices for introducing media into, or onto, the body, and electric communication techniques, which are foundational elements of brain-computer interface (BCI) technologies.

Key patents within this cluster include a brain-computer interface apparatus adaptable to use environment and method of operating thereof, a double closed circuit brain-machine interface system, and an apparatus and method of brain-computer interface for device controlling based on brain signal. These inventions mainly revolve around the concept of using brain signals to control external devices, such as robotic arms, and improving the classification performance of these interfaces, even after long periods of non-use.

The inventions described in these patents improve the accuracy of device control, maintain performance over time, and accommodate multiple commands, thus significantly enhancing the functionality of BCIs.

Other identified technologies include systems for medical image analysis, limb rehabilitation, tinnitus treatment, sleep optimization, assistive exoskeletons, and advanced imaging techniques, among others. [pp. 66-67]
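The spike-timing dependent plasticity mentioned in the patent examples above is, at its core, a simple learning rule: a synapse strengthens when the presynaptic neuron fires just before the postsynaptic one, and weakens in the reverse case. Here is a minimal sketch of the textbook pair-based rule; to be clear, this is my own illustration, not drawn from any patent in the report, and the parameter values are arbitrary:

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight update.

    dt = t_post - t_pre (ms): positive means the presynaptic neuron
    fired before the postsynaptic one.
    """
    if dt > 0:
        dw = a_plus * math.exp(-dt / tau)    # potentiation (pre before post)
    else:
        dw = -a_minus * math.exp(dt / tau)   # depression (post before pre)
    return min(max(w + dw, 0.0), 1.0)        # keep the weight in [0, 1]

# Pre-before-post strengthens the synapse; post-before-pre weakens it.
print(stdp_update(0.5, 5.0))   # slightly above 0.5
print(stdp_update(0.5, -5.0))  # slightly below 0.5
```

In neuromorphic hardware this exponential window is typically implemented in analog circuitry (e.g., decaying voltages on memristive devices) rather than in software, but the logic is the same.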

Having sections on neuromorphic computing and brain-computer interface patents in immediate proximity led to more speculation on my part. Imagine how much easier it would be to initiate a BCI connection if it’s powered with a neuromorphic (brainlike) computer/device. [ETA July 21, 2023: Following on from that thought, it might be more than just easier to initiate a BCI connection. Could a brainlike computer become part of your brain? Why not? It’s been successfully argued that a robotic wheelchair was part of someone’s body; see my January 30, 2013 posting and scroll down about 40% of the way.]

Neurotech policy debates

The report concludes with this,

Neurotechnology is a complex and rapidly evolving technological paradigm whose
trajectories have the power to shape people’s identity, autonomy, privacy, sentiments,
behaviors and overall well-being, i.e. the very essence of what it means to be human.

Designing and implementing careful and effective norms and regulations ensuring that neurotechnology is developed and deployed in an ethical manner, for the good of
individuals and for society as a whole, call for a careful identification and characterization of the issues at stake. This entails shedding light on the whole neurotechnology ecosystem, that is what is being developed, where and by whom, and also understanding how neurotechnology interacts with other developments and technological trajectories, especially AI. Failing to do so may result in ineffective (at best) or distorted policies and policy decisions, which may harm human rights and human dignity.

Addressing the need for evidence in support of policy making, the present report offers first time robust data and analysis shedding light on the neurotechnology landscape worldwide. To this end, its proposes and implements an innovative approach that leverages artificial intelligence and deep learning on data from scientific publications and paten[t]s to identify scientific and technological developments in the neurotech space. The methodology proposed represents a scientific advance in itself, as it constitutes a quasi- automated replicable strategy for the detection and documentation of neurotechnology- related breakthroughs in science and innovation, to be repeated over time to account for the evolution of the sector. Leveraging this approach, the report further proposes an IPC-based taxonomy for neurotechnology which allows for a structured framework to the exploration of neurotechnology, to enable future research, development and analysis. The innovative methodology proposed is very flexible and can in fact be leveraged to investigate different emerging technologies, as they arise.

In terms of technological trajectories, we uncover a shift in the neurotechnology industry, with greater emphasis being put on computer and medical technologies in recent years, compared to traditionally dominant trajectories related to biotechnology and pharmaceuticals. This shift warrants close attention from policymakers, and calls for attention in relation to the latest (converging) developments in the field, especially AI and related methods and applications and neurotechnology.

This is all the more important and the observed growth and specialization patterns are unfolding in the context of regulatory environments that, generally, are either not existent or not fit for purpose. Given the sheer implications and impact of neurotechnology on the very essence of human beings, this lack of regulation poses key challenges related to the possible infringement of mental integrity, human dignity, personal identity, privacy, freedom of thought, and autonomy, among others. Furthermore, issues surrounding accessibility and the potential for neurotech enhancement applications triggers significant concerns, with far-reaching implications for individuals and societies. [pp. 72-73]

Last words about the report

Informative, readable, and thought-provoking. And, it helped broaden my understanding of neurotechnology.

Future endeavours?

I’m hopeful that one of these days one of these groups (UNESCO, Canadian Science Policy Centre, or ???) will tackle the issue of business bankruptcy in the neurotechnology sector. It has already occurred, as noted in my “Going blind when your neural implant company flirts with bankruptcy [long read]” April 5, 2022 posting. That story opens with a woman going blind in a New York subway when her neural implant fails. It’s how she found out that the company that supplied her implant was going out of business.

In my July 7, 2023 posting about the UNESCO July 2023 dialogue on neurotechnology, I’ve included information on Neuralink (one of Elon Musk’s companies) and its approval (despite some investigations) by the US Food and Drug Administration to start human clinical trials. Scroll down about 75% of the way to the “Food for thought” subhead where you will find stories about allegations made against Neuralink.

The end

If you want to know more about the field, the report offers a seven-page bibliography, and there’s a lot of material here; you can start with this December 3, 2019 posting “Neural and technological inequalities,” which features an article mentioning a discussion between two scientists. Surprisingly (to me), the source article is in Fast Company (“a leading progressive business media brand,” according to their tagline).

I have two categories you may want to check: Human Enhancement and Neuromorphic Engineering. There are also a number of tags: neuromorphic computing, machine/flesh, brainlike computing, cyborgs, neural implants, neuroprosthetics, memristors, and more.

Should you have any observations or corrections, please feel free to leave them in the Comments section of this posting.

CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

After giving a basic explanation of the technology and some of the controversies in part 1, and offering more detail about the technology and the possibility of designer babies in part 2, this part covers public discussion: a call for one, and the suggestion that one is already taking place in popular culture.

But a discussion does need to happen

In a move that is either an exquisite coincidence or has been carefully orchestrated (I vote for the latter), researchers from the University of Wisconsin-Madison have released a study about attitudes in the US to human genome editing. From an Aug. 11, 2017 University of Wisconsin-Madison news release (also on EurekAlert),

In early August 2017, an international team of scientists announced they had successfully edited the DNA of human embryos. As people process the political, moral and regulatory issues of the technology — which nudges us closer to nonfiction than science fiction — researchers at the University of Wisconsin-Madison and Temple University show the time is now to involve the American public in discussions about human genome editing.

In a study published Aug. 11 in the journal Science, the researchers assessed what people in the United States think about the uses of human genome editing and how their attitudes may drive public discussion. They found a public divided on its uses but united in the importance of moving conversations forward.

“There are several pathways we can go down with gene editing,” says UW-Madison’s Dietram Scheufele, lead author of the study and member of a National Academy of Sciences committee that compiled a report focused on human gene editing earlier this year. “Our study takes an exhaustive look at all of those possible pathways forward and asks where the public stands on each one of them.”

Compared to previous studies on public attitudes about the technology, the new study takes a more nuanced approach, examining public opinion about the use of gene editing for disease therapy versus for human enhancement, and about editing that becomes hereditary versus editing that does not.

The research team, which included Scheufele and Dominique Brossard — both professors of life sciences communication — along with Michael Xenos, professor of communication arts, first surveyed study participants about the use of editing to treat disease (therapy) versus for enhancement (creating so-called “designer babies”). While about two-thirds of respondents expressed at least some support for therapeutic editing, only one-third expressed support for using the technology for enhancement.

Diving even deeper, researchers looked into public attitudes about gene editing on specific cell types — somatic or germline — either for therapy or enhancement. Somatic cells are non-reproductive, so edits made in those cells do not affect future generations. Germline cells, however, are heritable, and changes made in these cells would be passed on to children.

Public support of therapeutic editing was high both in cells that would be inherited and those that would not, with 65 percent of respondents supporting therapy in germline cells and 64 percent supporting therapy in somatic cells. When considering enhancement editing, however, support depended more upon whether the changes would affect future generations. Only 26 percent of people surveyed supported enhancement editing in heritable germline cells and 39 percent supported enhancement of somatic cells that would not be passed on to children.

“A majority of people are saying that germline enhancement is where the technology crosses that invisible line and becomes unacceptable,” says Scheufele. “When it comes to therapy, the public is more open, and that may partly be reflective of how severe some of those genetically inherited diseases are. The potential treatments for those diseases are something the public at least is willing to consider.”

Beyond questions of support, researchers also wanted to understand what was driving public opinions. They found that two factors were related to respondents’ attitudes toward gene editing as well as their attitudes toward the public’s role in its emergence: the level of religious guidance in their lives, and factual knowledge about the technology.

Those with a high level of religious guidance in their daily lives had lower support for human genome editing than those with low religious guidance. Additionally, those with high knowledge of the technology were more supportive of it than those with less knowledge.

While respondents with high religious guidance and those with high knowledge differed on their support for the technology, both groups highly supported public engagement in its development and use. These results suggest broad agreement that the public should be involved in questions of political, regulatory and moral aspects of human genome editing.

“The public may be split along lines of religiosity or knowledge with regard to what they think about the technology and scientific community, but they are united in the idea that this is an issue that requires public involvement,” says Scheufele. “Our findings show very nicely that the public is ready for these discussions and that the time to have the discussions is now, before the science is fully ready and while we have time to carefully think through different options regarding how we want to move forward.”

Here’s a link to and a citation for the paper,

U.S. attitudes on human genome editing by Dietram A. Scheufele, Michael A. Xenos, Emily L. Howell, Kathleen M. Rose, Dominique Brossard, and Bruce W. Hardy. Science 11 Aug 2017: Vol. 357, Issue 6351, pp. 553-554 DOI: 10.1126/science.aan3708

This paper is behind a paywall.

A couple of final comments

Briefly, I notice that there’s no mention of the ethics of patenting this technology in the news release about the study.

Moving on, it seems surprising that the first team to engage in germline editing in the US is in Oregon; I would have expected the work to come from Massachusetts, California, or Illinois, where a lot of bleeding-edge medical research is performed. However, given the dearth of financial support from federal funding institutions, it seems likely that only an outsider would dare to engage in the research. Given the timing, Mitalipov’s work was already well underway before the recent about-face from the US National Academy of Sciences (Note: Kaiser’s Feb. 14, 2017 article does note that for some the recent recommendations do not represent any change).

As for discussion on issues such as editing of the germline, I’ve often noted here that popular culture (including advertising, science fiction, and other dramas in various media) often provides an informal forum for discussion. Joelle Renstrom, in an Aug. 13, 2017 article for slate.com, writes that Orphan Black (a BBC America series featuring clones) opened up a series of questions about science and ethics in the guise of a thriller. She offers a précis of the first four seasons (Note: A link has been removed),

If you stopped watching a few seasons back, here’s a brief synopsis of how the mysteries wrap up. Neolution, an organization that seeks to control human evolution through genetic modification, began Project Leda, the cloning program, for two primary reasons: to see whether they could and to experiment with mutations that might allow people (i.e., themselves) to live longer. Neolution partnered with biotech companies such as Dyad, using its big pharma reach and deep pockets to harvest people’s genetic information and to conduct individual and germline (that is, genetic alterations passed down through generations) experiments, including infertility treatments that result in horrifying birth defects and body modification, such as tail-growing.

She then provides the article’s thesis (Note: Links have been removed),

Orphan Black demonstrates Carl Sagan’s warning of a time when “awesome technological powers are in the hands of a very few.” Neolutionists do whatever they want, pausing only to consider whether they’re missing an opportunity to exploit. Their hubris is straight out of Victor Frankenstein’s playbook. Frankenstein wonders whether he ought to first reanimate something “of simpler organisation” than a human, but starting small means waiting for glory. Orphan Black’s evil scientists embody this belief: if they’re going to play God, then they’ll control not just their own destinies, but the clones’ and, ultimately, all of humanity’s. Any sacrifices along the way are for the greater good—reasoning that culminates in Westmoreland’s eugenics fantasy to genetically sterilize 99 percent of the population he doesn’t enhance.

Orphan Black uses sci-fi tropes to explore real-world plausibility. Neolution shares similarities with transhumanism, the belief that humans should use science and technology to take control of their own evolution. While some transhumanists dabble in body modifications, such as microchip implants or night-vision eye drops, others seek to end suffering by curing human illness and aging. But even these goals can be seen as selfish, as access to disease-eradicating or life-extending technologies would be limited to the wealthy. Westmoreland’s goal to “sell Neolution to the 1 percent” seems frighteningly plausible—transhumanists, who statistically tend to be white, well-educated, and male, and their associated organizations raise and spend massive sums of money to help fulfill their goals. …

On Orphan Black, denial of choice is tantamount to imprisonment. That the clones have to earn autonomy underscores the need for ethics in science, especially when it comes to genetics. The show’s message here is timely given the rise of gene-editing techniques such as CRISPR. Recently, the National Academy of Sciences gave germline gene editing the green light, just one year after academy scientists from around the world argued it would be “irresponsible to proceed” without further exploring the implications. Scientists in the United Kingdom and China have already begun human genetic engineering and American scientists recently genetically engineered a human embryo for the first time. The possibility of Project Leda isn’t farfetched. Orphan Black warns us that money, power, and fear of death can corrupt both people and science. Once that happens, loss of humanity—of both the scientists and the subjects—is inevitable.

In Carl Sagan’s dark vision of the future, “people have lost the ability to set their own agendas or knowledgeably question those in authority.” This describes the plight of the clones at the outset of Orphan Black, but as the series continues, they challenge this paradigm by approaching science and scientists with skepticism, ingenuity, and grit. …

I hope there are discussions such as those Scheufele and Brossard are advocating but it might be worth considering that there is already some discussion underway, as informal as it is.

-30-

Part 1: CRISPR and editing the germline in the US (part 1 of 3): In the beginning

Part 2: CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Having included an explanation of CRISPR-CAS9 technology along with the news about the first US team to edit the germline, and bits and pieces about ethics and a patent fight (part 1), this part homes in on the details of the work and worries about ‘designer babies’.

The interest flurry

I found three articles addressing the research and all three concur that despite some of the early reporting, this is not the beginning of a ‘designer baby’ generation.

First up was Nick Thieme in a July 28, 2017 article for Slate,

MIT Technology Review reported Thursday that a team of researchers from Portland, Oregon were the first team of U.S.-based scientists to successfully create a genetically modified human embryo. The researchers, led by Shoukhrat Mitalipov of Oregon Health and Science University, changed the DNA of—in MIT Technology Review’s words—“many tens” of genetically-diseased embryos by injecting the host egg with CRISPR, a DNA-based gene editing tool first discovered in bacteria, at the time of fertilization. CRISPR-Cas9, as the full editing system is called, allows scientists to change genes accurately and efficiently. As has happened with research elsewhere, the CRISPR-edited embryos weren’t implanted—they were kept sustained for only a couple of days.

In addition to being the first American team to complete this feat, the researchers also improved upon the work of the three Chinese research teams that beat them to editing embryos with CRISPR: Mitalipov’s team increased the proportion of embryonic cells that received the intended genetic changes, addressing an issue called “mosaicism,” which is when an embryo is comprised of cells with different genetic makeups. Increasing that proportion is essential to CRISPR work in eliminating inherited diseases, to ensure that the CRISPR therapy has the intended result. The Oregon team also reduced the number of genetic errors introduced by CRISPR, reducing the likelihood that a patient would develop cancer elsewhere in the body.

Separate from the scientific advancements, it’s a big deal that this work happened in a country with such intense politicization of embryo research. …

But there are a great number of obstacles between the current research and the future of genetically editing all children to be 12-foot-tall Einsteins.

Ed Yong in an Aug. 2, 2017 article for The Atlantic offered a comprehensive overview of the research and its implications (unusually for Yong, there seems to be a mildly condescending note, but it’s worth ignoring for the wealth of information in the article; Note: Links have been removed),

… the full details of the experiment, which are released today, show that the study is scientifically important but much less of a social inflection point than has been suggested. “This has been widely reported as the dawn of the era of the designer baby, making it probably the fifth or sixth time people have reported that dawn,” says Alta Charo, an expert on law and bioethics at the University of Wisconsin-Madison. “And it’s not.”

Given the persistent confusion around CRISPR and its implications, I’ve laid out exactly what the team did, and what it means.

Who did the experiments?

Shoukhrat Mitalipov is a Kazakhstani-born cell biologist with a history of breakthroughs—and controversy—in the stem cell field. He was the first scientist to clone monkeys. He was the first to create human embryos by cloning adult cells—a move that could provide patients with an easy supply of personalized stem cells. He also pioneered a technique for creating embryos with genetic material from three biological parents, as a way of preventing a group of debilitating inherited diseases.

Although MIT Tech Review name-checked Mitalipov alone, the paper splits credit for the research between five collaborating teams—four based in the United States, and one in South Korea.

What did they actually do?

The project effectively began with an elevator conversation between Mitalipov and his colleague Sanjiv Kaul. Mitalipov explained that he wanted to use CRISPR to correct a disease-causing gene in human embryos, and was trying to figure out which disease to focus on. Kaul, a cardiologist, told him about hypertrophic cardiomyopathy (HCM)—an inherited heart disease that’s commonly caused by mutations in a gene called MYBPC3. HCM is surprisingly common, affecting 1 in 500 adults. Many of them lead normal lives, but in some, the walls of their hearts can thicken and suddenly fail. For that reason, HCM is the commonest cause of sudden death in athletes. “There really is no treatment,” says Kaul. “A number of drugs are being evaluated but they are all experimental,” and they merely treat the symptoms. The team wanted to prevent HCM entirely by removing the underlying mutation.

They collected sperm from a man with HCM and used CRISPR to change his mutant gene into its normal healthy version, while simultaneously using the sperm to fertilize eggs that had been donated by female volunteers. In this way, they created embryos that were completely free of the mutation. The procedure was effective, and avoided some of the critical problems that have plagued past attempts to use CRISPR in human embryos.

Wait, other human embryos have been edited before?

There have been three attempts in China. The first two—in 2015 and 2016—used non-viable embryos that could never have resulted in a live birth. The third—announced this March—was the first to use viable embryos that could theoretically have been implanted in a womb. All of these studies showed that CRISPR gene-editing, for all its hype, is still in its infancy.

The editing was imprecise. CRISPR is heralded for its precision, allowing scientists to edit particular genes of choice. But in practice, some of the Chinese researchers found worrying levels of off-target mutations, where CRISPR mistakenly cut other parts of the genome.

The editing was inefficient. The first Chinese team only managed to successfully edit a disease gene in 4 out of 86 embryos, and the second team fared even worse.

The editing was incomplete. Even in the successful cases, each embryo had a mix of modified and unmodified cells. This pattern, known as mosaicism, poses serious safety problems if gene-editing were ever to be used in practice. Doctors could end up implanting women with embryos that they thought were free of a disease-causing mutation, but were only partially free. The resulting person would still have many tissues and organs that carry those mutations, and might go on to develop symptoms.

What did the American team do differently?

The Chinese teams all used CRISPR to edit embryos at early stages of their development. By contrast, the Oregon researchers delivered the CRISPR components at the earliest possible point—minutes before fertilization. That neatly avoids the problem of mosaicism by ensuring that an embryo is edited from the very moment it is created. The team did this with 54 embryos and successfully edited the mutant MYBPC3 gene in 72 percent of them. In the other 28 percent, the editing didn’t work—a high failure rate, but far lower than in previous attempts. Better still, the team found no evidence of off-target mutations.

This is a big deal. Many scientists assumed that they’d have to do something more convoluted to avoid mosaicism. They’d have to collect a patient’s cells, which they’d revert into stem cells, which they’d use to make sperm or eggs, which they’d edit using CRISPR. “That’s a lot of extra steps, with more risks,” says Alta Charo. “If it’s possible to edit the embryo itself, that’s a real advance.” Perhaps for that reason, this is the first study to edit human embryos that was published in a top-tier scientific journal—Nature, which rejected some of the earlier Chinese papers.

Is this kind of research even legal?

Yes. In Western Europe, 15 countries out of 22 ban any attempts to change the human germ line—a term referring to sperm, eggs, and other cells that can transmit genetic information to future generations. No such stance exists in the United States but Congress has banned the Food and Drug Administration from considering research applications that make such modifications. Separately, federal agencies like the National Institutes of Health are banned from funding research that ultimately destroys human embryos. But the Oregon team used non-federal money from their institutions, and donations from several small non-profits. No taxpayer money went into their work. [emphasis mine]

Why would you want to edit embryos at all?

Partly to learn more about ourselves. By using CRISPR to manipulate the genes of embryos, scientists can learn more about the earliest stages of human development, and about problems like infertility and miscarriages. That’s why biologist Kathy Niakan from the Crick Institute in London recently secured a license from a British regulator to use CRISPR on human embryos.

Isn’t this a slippery slope toward making designer babies?

In terms of avoiding genetic diseases, it’s not conceptually different from PGD, which is already widely used. The bigger worry is that gene-editing could be used to make people stronger, smarter, or taller, paving the way for a new eugenics, and widening the already substantial gaps between the wealthy and poor. But many geneticists believe that such a future is fundamentally unlikely because complex traits like height and intelligence are the work of hundreds or thousands of genes, each of which have a tiny effect. The prospect of editing them all is implausible. And since genes are so thoroughly interconnected, it may be impossible to edit one particular trait without also affecting many others.

“There’s the worry that this could be used for enhancement, so society has to draw a line,” says Mitalipov. “But this is pretty complex technology and it wouldn’t be hard to regulate it.”

Does this discovery have any social importance at all?

“It’s not so much about designer babies as it is about geographical location,” says Charo. “It’s happening in the United States, and everything here around embryo research has high sensitivity.” She and others worry that the early report about the study, before the actual details were available for scrutiny, could lead to unnecessary panic. “Panic reactions often lead to panic-driven policy … which is usually bad policy,” wrote Greely [bioethicist Hank Greely].

As I understand it, despite the change in stance, there is no federal funding available for the research performed by Mitalipov and his team.

Finally, University College London (UCL) scientists Joyce Harper and Helen O’Neill wrote about CRISPR, the Oregon team’s work, and the possibilities in an Aug. 3, 2017 essay for The Conversation (Note: Links have been removed),

The genome editing tool used, CRISPR-Cas9, has transformed the field of biology in the short time since its discovery in that it not only promises, but delivers. CRISPR has surpassed all previous efforts to engineer cells and alter genomes at a fraction of the time and cost.

The technology, which works like molecular scissors to cut and paste DNA, is a natural defence system that bacteria use to fend off harmful infections. This system has the ability to recognise invading virus DNA, cut it and integrate this cut sequence into its own genome – allowing the bacterium to render itself immune to future infections of viruses with similar DNA. It is this ability to recognise and cut DNA that has allowed scientists to use it to target and edit specific DNA regions.

When this technology is applied to “germ cells” – the sperm and eggs – or embryos, it changes the germline. That means that any alterations made would be permanent and passed down to future generations. This makes it more ethically complex, but there are strict regulations around human germline genome editing, which is predominantly illegal. The UK received a licence in 2016 to carry out CRISPR on human embryos for research into early development. But edited embryos are not allowed to be inserted into the uterus and develop into a fetus in any country.

Germline genome editing came into the global spotlight when Chinese scientists announced in 2015 that they had used CRISPR to edit non-viable human embryos – cells that could never result in a live birth. They did this to modify the gene responsible for the blood disorder β-thalassaemia. While it was met with some success, it received a lot of criticism because of the premature use of this technology in human embryos. The results showed a high number of potentially dangerous, off-target mutations created in the procedure.

Impressive results

The new study, published in Nature, is different because it deals with viable human embryos and shows that the genome editing can be carried out safely – without creating harmful mutations. The team used CRISPR to correct a mutation in the gene MYBPC3, which accounts for approximately 40% of the myocardial disease hypertrophic cardiomyopathy. This is a dominant disease, so an affected individual only needs one abnormal copy of the gene to be affected.

The researchers used sperm from a patient carrying one copy of the MYBPC3 mutation to create 54 embryos. They edited them using CRISPR-Cas9 to correct the mutation. Without genome editing, approximately 50% of the embryos would carry the patient’s normal gene and 50% would carry his abnormal gene.

After genome editing, the aim would be for 100% of embryos to be normal. In the first round of the experiments, they found that 66.7% of embryos – 36 out of 54 – were normal after being injected with CRISPR. Of the remaining 18 embryos, five had remained unchanged, suggesting editing had not worked. In 13 embryos, only a portion of cells had been edited.

The level of efficiency is affected by the type of CRISPR machinery used and, critically, the timing in which it is put into the embryo. The researchers therefore also tried injecting the sperm and the CRISPR-Cas9 complex into the egg at the same time, which resulted in more promising results. This was done for 75 mature donated human eggs using a common IVF technique called intracytoplasmic sperm injection. This time, impressively, 72.4% of embryos were normal as a result. The approach also lowered the number of embryos containing a mixture of edited and unedited cells (these embryos are called mosaics).

Finally, the team injected a further 22 embryos, which were grown into blastocysts – a later stage of embryo development. These were sequenced and the researchers found that the editing had indeed worked. Importantly, they could show that the level of off-target mutations was low.

A brave new world?

So does this mean we finally have a cure for debilitating, heritable diseases? It’s important to remember that the study did not achieve a 100% success rate. Even the researchers themselves stress that further research is needed in order to fully understand the potential and limitations of the technique.

In our view, it is unlikely that genome editing would be used to treat the majority of inherited conditions anytime soon. We still can’t be sure how a child with a genetically altered genome will develop over a lifetime, so it seems unlikely that couples carrying a genetic disease would embark on gene editing rather than undergoing already available tests – such as preimplantation genetic diagnosis or prenatal diagnosis – in which the embryos or fetus are tested for genetic faults.

-30-

As might be expected, there is now a call for public discussion about the ethics of this kind of work. See Part 3.

For anyone who started in the middle of this series, here’s Part 1 featuring an introduction to the technology and some of the issues.

CRISPR and editing the germline in the US (part 1 of 3): In the beginning

There’s been a minor flurry of interest in CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats; also known as CRISPR-Cas9), a gene-editing technique, since a team in Oregon announced a paper describing their work editing the germline. Since I’ve been following the CRISPR-Cas9 story for a while, this seems like a good juncture for a more in-depth look at the topic. In this first part I’m including an introduction to CRISPR, some information about the latest US work, and some previous writing about ethics issues raised when Chinese scientists first announced their work editing germlines in 2015 and during the patent dispute between the University of California at Berkeley and Harvard University’s Broad Institute.

Introduction to CRISPR

I’ve been searching for a good description of CRISPR and this helped to clear up some questions for me (Thank you to MIT Review),

For anyone who’s been reading about science for a while, this upbeat approach to explaining how a particular technology will solve all sorts of problems will seem quite familiar. It’s not the most hyperbolic piece I’ve seen but it barely mentions any problems associated with the research (for some of the problems see: ‘The interest flurry’ later in part 2).

Oregon team

Steve Connor’s July 26, 2017 article for the MIT (Massachusetts Institute of Technology) Technology Review breaks the news (Note: Links have been removed),

The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon, MIT Technology Review has learned.

The effort, led by Shoukhrat Mitalipov of Oregon Health and Science University, involved changing the DNA of a large number of one-cell embryos with the gene-editing technique CRISPR, according to people familiar with the scientific results.

Until now, American scientists have watched with a combination of awe, envy, and some alarm as scientists elsewhere were first to explore the controversial practice. To date, three previous reports of editing human embryos were all published by scientists in China.

Now Mitalipov is believed to have broken new ground both in the number of embryos experimented upon and by demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.

Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.

In altering the DNA code of human embryos, the objective of scientists is to show that they can eradicate or correct genes that cause inherited disease, like the blood condition beta-thalassemia. The process is termed “germline engineering” because any genetically modified child would then pass the changes on to subsequent generations via their own germ cells—the egg and sperm.

Some critics say germline experiments could open the floodgates to a brave new world of “designer babies” engineered with genetic enhancements—a prospect bitterly opposed by a range of religious organizations, civil society groups, and biotech companies.

The U.S. intelligence community last year called CRISPR a potential “weapon of mass destruction.”

Here’s a link to a citation for the groundbreaking paper,

Correction of a pathogenic gene mutation in human embryos by Hong Ma, Nuria Marti-Gutierrez, Sang-Wook Park, Jun Wu, Yeonmi Lee, Keiichiro Suzuki, Amy Koski, Dongmei Ji, Tomonari Hayama, Riffat Ahmed, Hayley Darby, Crystal Van Dyken, Ying Li, Eunju Kang, A.-Reum Park, Daesik Kim, Sang-Tae Kim, Jianhui Gong, Ying Gu, Xun Xu, David Battaglia, Sacha A. Krieg, David M. Lee, Diana H. Wu, Don P. Wolf, Stephen B. Heitner, Juan Carlos Izpisua Belmonte, Paula Amato, Jin-Soo Kim, Sanjiv Kaul, & Shoukhrat Mitalipov. Nature (2017) doi:10.1038/nature23305 Published online 02 August 2017

This paper appears to be open access.

CRISPR Issues: ethics and patents

In my May 14, 2015 posting I mentioned a ‘moratorium’ on germline research, the Chinese research paper, and the stance taken by the US National Institutes of Health (NIH),

The CRISPR technology has reignited a discussion about ethical and moral issues of human genetic engineering some of which is reviewed in an April 7, 2015 posting about a moratorium by Sheila Jasanoff, J. Benjamin Hurlbut and Krishanu Saha for the Guardian science blogs (Note: A link has been removed),

On April 3, 2015, a group of prominent biologists and ethicists writing in Science called for a moratorium on germline gene engineering; modifications to the human genome that will be passed on to future generations. The moratorium would apply to a technology called CRISPR/Cas9, which enables the removal of undesirable genes, insertion of desirable ones, and the broad recoding of nearly any DNA sequence.

Such modifications could affect every cell in an adult human being, including germ cells, and therefore be passed down through the generations. Many organisms across the range of biological complexity have already been edited in this way to generate designer bacteria, plants and primates. There is little reason to believe the same could not be done with human eggs, sperm and embryos. Now that the technology to engineer human germlines is here, the advocates for a moratorium declared, it is time to chart a prudent path forward. They recommend four actions: a hold on clinical applications; creation of expert forums; transparent research; and a globally representative group to recommend policy approaches.

The authors go on to review precedents and reasons for the moratorium while suggesting we need better ways for citizens to engage with and debate these issues,

An effective moratorium must be grounded in the principle that the power to modify the human genome demands serious engagement not only from scientists and ethicists but from all citizens. We need a more complex architecture for public deliberation, built on the recognition that we, as citizens, have a duty to participate in shaping our biotechnological futures, just as governments have a duty to empower us to participate in that process. Decisions such as whether or not to edit human genes should not be left to elite and invisible experts, whether in universities, ad hoc commissions, or parliamentary advisory committees. Nor should public deliberation be temporally limited by the span of a moratorium or narrowed to topics that experts deem reasonable to debate.

I recommend reading the post in its entirety as there are nuances that are best appreciated in the full piece.

Shortly after this essay was published, Chinese scientists announced they had genetically modified (non-viable) human embryos. From an April 22, 2015 article by David Cyranoski and Sara Reardon in Nature, where the research and some of the ethical issues are discussed,

In a world first, Chinese scientists have reported editing the genomes of human embryos. The results are published1 in the online journal Protein & Cell and confirm widespread rumours that such experiments had been conducted — rumours that sparked a high-profile debate last month2, 3 about the ethical implications of such work.

In the paper, researchers led by Junjiu Huang, a gene-function researcher at Sun Yat-sen University in Guangzhou, tried to head off such concerns by using ‘non-viable’ embryos, which cannot result in a live birth, that were obtained from local fertility clinics. The team attempted to modify the gene responsible for β-thalassaemia, a potentially fatal blood disorder, using a gene-editing technique known as CRISPR/Cas9. The researchers say that their results reveal serious obstacles to using the method in medical applications.

“I believe this is the first report of CRISPR/Cas9 applied to human pre-implantation embryos and as such the study is a landmark, as well as a cautionary tale,” says George Daley, a stem-cell biologist at Harvard Medical School in Boston, Massachusetts. “Their study should be a stern warning to any practitioner who thinks the technology is ready for testing to eradicate disease genes.”

….

Huang says that the paper was rejected by Nature and Science, in part because of ethical objections; both journals declined to comment on the claim. (Nature’s news team is editorially independent of its research editorial team.)

He adds that critics of the paper have noted that the low efficiencies and high number of off-target mutations could be specific to the abnormal embryos used in the study. Huang acknowledges the critique, but because there are no examples of gene editing in normal embryos he says that there is no way to know if the technique operates differently in them.

Still, he maintains that the embryos allow for a more meaningful model — and one closer to a normal human embryo — than an animal model or one using adult human cells. “We wanted to show our data to the world so people know what really happened with this model, rather than just talking about what would happen without data,” he says.

This, too, is a good and thoughtful read.

There was an official response in the US to the publication of this research, from an April 29, 2015 post by David Bruggeman on his Pasco Phronesis blog (Note: Links have been removed),

In light of Chinese researchers reporting their efforts to edit the genes of ‘non-viable’ human embryos, the National Institutes of Health (NIH) Director Francis Collins issued a statement (H/T Carl Zimmer).

“NIH will not fund any use of gene-editing technologies in human embryos. The concept of altering the human germline in embryos for clinical purposes has been debated over many years from many different perspectives, and has been viewed almost universally as a line that should not be crossed. Advances in technology have given us an elegant new way of carrying out genome editing, but the strong arguments against engaging in this activity remain. These include the serious and unquantifiable safety issues, ethical issues presented by altering the germline in a way that affects the next generation without their consent, and a current lack of compelling medical applications justifying the use of CRISPR/Cas9 in embryos.” …

The US has modified its stance according to a February 14, 2017 article by Jocelyn Kaiser for Science Magazine (Note: Links have been removed),

Editing the DNA of a human embryo to prevent a disease in a baby could be ethically allowable one day—but only in rare circumstances and with safeguards in place, says a widely anticipated report released today.

The report from an international committee convened by the U.S. National Academy of Sciences (NAS) and the National Academy of Medicine in Washington, D.C., concludes that such a clinical trial “might be permitted, but only following much more research” on risks and benefits, and “only for compelling reasons and under strict oversight.” Those situations could be limited to couples who both have a serious genetic disease and for whom embryo editing is “really the last reasonable option” if they want to have a healthy biological child, says committee co-chair Alta Charo, a bioethicist at the University of Wisconsin in Madison.

Some researchers are pleased with the report, saying it is consistent with previous conclusions that safely altering the DNA of human eggs, sperm, or early embryos—known as germline editing—to create a baby could be possible eventually. “They have closed the door to the vast majority of germline applications and left it open for a very small, well-defined subset. That’s not unreasonable in my opinion,” says genome researcher Eric Lander of the Broad Institute in Cambridge, Massachusetts. Lander was among the organizers of an international summit at NAS in December 2015 who called for more discussion before proceeding with embryo editing.

But others see the report as lowering the bar for such experiments because it does not explicitly say they should be prohibited for now. “It changes the tone to an affirmative position in the absence of the broad public debate this report calls for,” says Edward Lanphier, chairman of the DNA editing company Sangamo Therapeutics in Richmond, California. Two years ago, he co-authored a Nature commentary calling for a moratorium on clinical embryo editing.

One advocacy group opposed to embryo editing goes further. “We’re very disappointed with the report. It’s really a pretty dramatic shift from the existing and widespread agreement globally that human germline editing should be prohibited,” says Marcy Darnovsky, executive director of the Center for Genetics and Society in Berkeley, California.

Interestingly, this change of stance occurred just prior to a CRISPR patent decision (from my March 15, 2017 posting),

I have written about the CRISPR patent tussle (Harvard & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley) previously in a Jan. 6, 2015 posting and in a more detailed May 14, 2015 posting. I also mentioned (in a Jan. 17, 2017 posting) CRISPR and its patent issues in the context of a posting about a Slate.com series on Frankenstein and the novel’s applicability to our own time. This patent fight is being bitterly fought as fortunes are at stake.

It seems a decision has been made regarding the CRISPR patent claims. From a Feb. 17, 2017 article by Charmaine Distor for The Science Times,

After an intense court battle, the US Patent and Trademark Office (USPTO) released its ruling on February 15 [2017]. The rights for the CRISPR-Cas9 gene editing technology was handed over to the Broad Institute of Harvard University and the Massachusetts Institute of Technology (MIT).

According to an article in Nature, the said court battle was between the Broad Institute and the University of California. The two institutions are fighting over the intellectual property right for the CRISPR patent. The case between the two started when the patent was first awarded to the Broad Institute despite having the University of California apply first for the CRISPR patent.

Heidi Ledford’s Feb. 17, 2017 article for Nature provides more insight into the situation (Note: Links have been removed),

It [USPTO] ruled that the Broad Institute of Harvard and MIT in Cambridge could keep its patents on using CRISPR–Cas9 in eukaryotic cells. That was a blow to the University of California in Berkeley, which had filed its own patents and had hoped to have the Broad’s thrown out.

The fight goes back to 2012, when Jennifer Doudna at Berkeley, Emmanuelle Charpentier, then at the University of Vienna, and their colleagues outlined how CRISPR–Cas9 could be used to precisely cut isolated DNA1. In 2013, Feng Zhang at the Broad and his colleagues — and other teams — showed2 how it could be adapted to edit DNA in eukaryotic cells such as plants, livestock and humans.

Berkeley filed for a patent earlier, but the USPTO granted the Broad’s patents first — and this week upheld them. There are high stakes involved in the ruling. The holder of key patents could make millions of dollars from CRISPR–Cas9’s applications in industry: already, the technique has sped up genetic research, and scientists are using it to develop disease-resistant livestock and treatments for human diseases.

….

I also noted this eyebrow-lifting statistic, “As for Ledford’s 3rd point, there are an estimated 763 patent families (groups of related patents) claiming CAS9 leading to the distinct possibility that the Broad Institute will be fighting many patent claims in the future.”

-30-

Part 2 covers three critical responses to the reporting, which between them describe the technology in more detail and the possibility of ‘designer babies’. CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Part 3 is all about public discussion or, rather, the lack of it and the need for it, according to a couple of social scientists. Informally, there is some discussion via pop culture, as Joelle Renstrom notes in her writing on the television series Orphan Black, although she is focused on the larger issues the series touches on, and as I touch on in my final comments. CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

CRISPR patent decision: Harvard’s and MIT’s Broad Institute victorious—for now

I have written about the CRISPR patent tussle (Harvard & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley) previously in a Jan. 6, 2015 posting and in a more detailed May 14, 2015 posting. I also mentioned (in a Jan. 17, 2017 posting) CRISPR and its patent issues in the context of a posting about a Slate.com series on Frankenstein and the novel’s applicability to our own time. This patent fight is being bitterly fought as fortunes are at stake.

It seems a decision has been made regarding the CRISPR patent claims. From a Feb. 17, 2017 article by Charmaine Distor for The Science Times,

After an intense court battle, the US Patent and Trademark Office (USPTO) released its ruling on February 15 [2017]. The rights for the CRISPR-Cas9 gene editing technology was handed over to the Broad Institute of Harvard University and the Massachusetts Institute of Technology (MIT).

According to an article in Nature, the said court battle was between the Broad Institute and the University of California. The two institutions are fighting over the intellectual property right for the CRISPR patent. The case between the two started when the patent was first awarded to the Broad Institute despite having the University of California apply first for the CRISPR patent.

Heidi Ledford’s Feb. 17, 2017 article for Nature provides more insight into the situation (Note: Links have been removed),

It [USPTO] ruled that the Broad Institute of Harvard and MIT in Cambridge could keep its patents on using CRISPR–Cas9 in eukaryotic cells. That was a blow to the University of California in Berkeley, which had filed its own patents and had hoped to have the Broad’s thrown out.

The fight goes back to 2012, when Jennifer Doudna at Berkeley, Emmanuelle Charpentier, then at the University of Vienna, and their colleagues outlined how CRISPR–Cas9 could be used to precisely cut isolated DNA1. In 2013, Feng Zhang at the Broad and his colleagues — and other teams — showed2 how it could be adapted to edit DNA in eukaryotic cells such as plants, livestock and humans.

Berkeley filed for a patent earlier, but the USPTO granted the Broad’s patents first — and this week upheld them. There are high stakes involved in the ruling. The holder of key patents could make millions of dollars from CRISPR–Cas9’s applications in industry: already, the technique has sped up genetic research, and scientists are using it to develop disease-resistant livestock and treatments for human diseases.

But the fight for patent rights to CRISPR technology is by no means over. Here are four reasons why.

1. Berkeley can appeal the ruling

2. European patents are still up for grabs

3. Other parties are also claiming patent rights on CRISPR–Cas9

4. CRISPR technology is moving beyond what the patents cover

As for Ledford’s 3rd point, there are an estimated 763 patent families (groups of related patents) claiming CAS9 leading to the distinct possibility that the Broad Institute will be fighting many patent claims in the future.

Once you’ve read Distor’s and Ledford’s articles, you may want to check out Adam Rogers’ and Eric Niiler’s Feb. 16, 2017 CRISPR patent article for Wired,

The fight over who owns the most promising technique for editing genes—cutting and pasting the stuff of life to cure disease and advance scientific knowledge—has been a rough one. A team on the West Coast, at UC Berkeley, filed patents on the method, Crispr-Cas9; a team on the East Coast, based at MIT and the Broad Institute, filed their own patents in 2014 after Berkeley’s, but got them granted first. The Berkeley group contended that this constituted “interference,” and that Berkeley deserved the patent.

At stake: millions, maybe billions of dollars in biotech money and licensing fees, the future of medicine, the future of bioscience. Not nothing. Who will benefit depends on who owns the patents.

On Wednesday [Feb. 15, 2017], the US Patent Trial and Appeal Board kind of, sort of, almost began to answer that question. Berkeley will get the patent for using the system called Crispr-Cas9 in any living cell, from bacteria to blue whales. Broad/MIT gets the patent in eukaryotic cells, which is to say, plants and animals.

It’s … confusing. “The patent that the Broad received is for the use of Crispr gene-editing technology in eukaryotic cells. The patent for the University of California is for all cells,” says Jennifer Doudna, the UC geneticist and co-founder of Caribou Biosciences who co-invented Crispr, on a conference call. Her metaphor: “They have a patent on green tennis balls; we have a patent for all tennis balls.”

Observers didn’t quite buy that topspin. If Caribou is playing tennis, it’s looking like Broad/MIT is Serena Williams.

“UC does not necessarily lose everything, but they’re no doubt spinning the story,” says Robert Cook-Deegan, an expert in genetic policy at Arizona State University’s School for the Future of Innovation in Society. “UC’s claims to eukaryotic uses of Crispr-Cas9 will not be granted in the form they sought. That’s a big deal, and UC was the big loser.”

UC officials said Wednesday [Feb. 15, 2017] that they are studying the 51-page decision and considering whether to appeal. That leaves members of the biotechnology sector wondering who they will have to pay to use Crispr as part of a business—and scientists hoping the outcome won’t somehow keep them from continuing their research.

….

Happy reading!

‘Biomimicry’ patents

The US Patent and Trademark Office (USPTO) has issued a new guidance document concerning ‘biomimicry’ patents, according to David Bruggeman’s Dec. 20, 2014 post on his Pasco Phronesis blog (Note: Links have been removed),

The United States Patent and Trademark Office (USPTO) has released another guidance memo for patents derived ‘from nature’ (H/T ScienceInsider).  The USPTO released its first memo in March [2014], and between negative public comments and additional court action, releasing new guidance makes sense to me.

The USPTO is requesting comments on the guidance by March 16, 2015 and will be holding a public forum for comments on Jan. 21, 2015. Here’s more detail about the comments from the USPTO 2014 Interim Guidance on Subject Matter Eligibility webpage,

The USPTO has prepared 2014 Interim Guidance on Patent Subject Matter Eligibility (Interim Eligibility Guidance) for USPTO personnel to use when determining subject matter eligibility under 35 U.S.C. 101 in view of recent decisions by the U.S. Supreme Court, including Alice Corp., Myriad, and Mayo.  The Interim Eligibility Guidance supplements the June 25, 2014 Preliminary Examination Instructions issued in view of Alice Corp. and supersedes the March 4, 2014 Procedure for Subject Matter Eligibility Analysis of Claims Reciting or Involving Laws of Nature/Natural Principles, Natural Phenomena, and/or Natural Products issued in view of Mayo and Myriad.  It is expected that the guidance will be updated in view of developments in the case law and in response to public feedback.

Any member of the public may submit written comments on the Interim Eligibility Guidance and claim example sets by electronic mail message over the Internet addressed to 2014_interim_guidance@uspto.gov.  Electronic comments submitted in plain text are preferred, but also may be submitted in ADOBE® portable document format or MICROSOFT WORD® format.  The comments will be available for public inspection here at this Web page.  Because comments will be available for public inspection, information that is not desired to be made public, such as an address or a phone number, should not be included in the comments.  Comments will be accepted until March 16, 2015.

And there is also this about the public forum (from the Interim Guidance page),

A public forum will be hosted at the Alexandria campus of the USPTO on Jan. 21, 2015, to receive public feedback from any interested member of the public.  The Eligibility Forum will be an opportunity for the Office to provide an overview of the Interim Eligibility Guidance and for participants to present their interpretation of the impact of Supreme Court precedent on the complex legal and technical issues involved in subject matter eligibility analysis during examination by providing oral feedback on the Interim Eligibility Guidance and claim example sets.  Individuals will be provided an opportunity to make a presentation, to the extent that time permits.

Date and Location:  The Eligibility Forum will be held on Jan. 21, 2015, from 1pm – 5pm EST, in the Madison Auditorium North (Concourse Level), Madison Building, 600 Dulany Street, Alexandria, VA 22314. The meeting will also be accessible via WebEx.

Requests for Attendance at the Eligibility Forum:  Requests for attendance to the Eligibility Forum should be submitted by electronic mail through the Internet to 2014_interim_guidance@uspto.gov by JAN. 9, 2015.  Requests for attendance must include the attendee’s name, affiliation, title, mailing address, and telephone number.  An Internet e-mail address, if available, should also be provided.

If I understand David’s description of this guidance rightly, the use of something like curcumin (a constituent of turmeric) to heal wounds cannot be patented unless substantive changes have been made to the curcumin. In short, laws of nature, natural principles, natural phenomena, natural products, and abstract ideas cannot be patented through the USPTO.

US Patent and Trademark Office invests in a public relations campaign

The Smithsonian Institution in Washington, DC has been renovating its Arts and Industries Building since 2004. It is not scheduled to reopen until 2014, but there will be a ‘soft’ launch of a new partnership between the Smithsonian and the US Patent and Trademark Office (USPTO) in June 2013, which relates to the building’s refurbishment, according to David Bruggeman’s Jan. 20, 2013 posting on his Pasco Phronesis blog,

The partnership will include developing and displaying innovation-themed exhibits in the Arts and Industries Building.  In addition, the Smithsonian and the USPTO will sponsor an Innovation Expo in June 2013 at the USPTO headquarters in Alexandria (with future expos in the Pavilion).  Placing this pavilion in the Arts and Industries Building is a sort-of homecoming, as technology and progress were themes of many exhibits when the building first opened as the National Museum in 1881.

This seven-year, $7.5 million partnership is not the first collaboration between the USPTO and the Smithsonian. …

Here’s more about the Expo from the USPTO Innovation Expo webpage where they are appealing for more exhibitors,

The United States Patent and Trademark Office (USPTO) and the Smithsonian Institution are teaming up to stage the 2013 Innovation Expo. This is your chance to join a select group of technological game-changers in a celebration of ingenuity and patented technology.

The Expo will be held June 20-22, 2013, at the USPTO’s headquarters in Alexandria, Va., just across the Potomac River from the nation’s capital. The combination of the USPTO’s soaring architecture and the Smithsonian’s world-renowned exhibition programing makes the Innovation Expo an extraordinary opportunity for both exhibitors and attendees. Under terms of an agreement signed by the USPTO and the Smithsonian, the Expo will move to the National Mall in the summer of 2014 when the historic Arts and Industries Building reopens.

For three days, exhibits at this free and open-to-the-public event will showcase the latest technological developments from America’s innovators affiliated with large corporations, small businesses, academic institutions, government agencies, and the independent inventor community.

The Expo will also demonstrate the vital role America’s intellectual property system and the USPTO play in promoting and protecting innovation, a role that contributes greatly to America’s competitiveness and prowess in the global economy. [emphases mine]

The application deadline has been extended to March 31, 2013. Exhibition slots will be awarded to qualified U.S. patent owners on a rolling basis. Space is limited, so apply now.

Applications will be reviewed by an independent committee made up of representatives from some of the most important and respected intellectual property organizations.

If that wasn’t enough, the Smithsonian Institution’s Jan. 16, 2013 news release makes the purpose for this project blindingly apparent,

The collaboration will begin this year with an Innovation Expo June 20-22 at the Patent and Trademark Office’s headquarters in Alexandria, Va., where the latest technological developments—patented technologies from American companies—will be showcased. The three-day expo will feature a narrative about how the U.S. patent system promotes innovation and technological development. [emphasis mine] The Innovation Expo, which will be organized in partnership with the Smithsonian, will serve as a template for future expos to be held in the Innovation Pavilion at the A&I Building (the Pavilion will cover around 18,000 square feet of the 40,000 square feet of public space in the building).

During 2013, the Smithsonian will also develop further designs for the new Innovation Pavilion and begin work on plans for exhibitions and programming. The Pavilion will be a center for active learning, engaging visitors using digital technology and informing them about new developments in American innovation and technology. The collaboration is described in a Memorandum of Agreement signed by the Smithsonian Secretary and the director of the U.S. Patent and Trademark Office. The USPTO anticipates supporting the Pavilion over the term of the collaboration.

“The Arts and Industries Building has always been about celebrating innovation and progress, and it has been one of my goals to reopen the building and return it to that purpose,” said Wayne Clough, Smithsonian Secretary. “Through this collaboration with the United States Patent and Trademark Office, we will create a program that not only celebrates American ingenuity, but also reflects the 21st century expectations of our visitors.”

“We look forward to working with the Smithsonian to showcase America’s rich history and bright future of innovation, providing a workshop where inventors of all ages can interact together,” said Under Secretary of Commerce for Intellectual Property and Director of the USPTO David Kappos.

The Smithsonian and the USPTO have worked together on several projects in recent years, including three exhibitions: “The Great American Hall of Wonders” and “To Build a Better Mousetrap” at the Smithsonian American Art Museum, and an exhibition about Apple Inc. founder Steve Jobs’ patents in the Smithsonian’s Ripley Center.

Spending $7.5 million of taxpayer money to promote an intellectual property system that seems to be in serious trouble, along with many other such systems around the world, is a time-honoured way of dealing with these kinds of problems. Such efforts are generally doomed to fail. As I like to say, you can put a gift bow on a pile of manure but unless you trot a pony out right quickly, it’s no gift. And, the USPTO definitely does not have a pony waiting nearby.

I have written many pieces on the problems with intellectual property systems. There’s this Nov. 23, 2012 posting about patents strangling nanotechnology developments; this Oct. 10, 2012 posting about a UN patent summit concerning smartphones and patent problems; and this June 28, 2012 posting about patent trolls and their impact on the US economy (billions of dollars lost), amongst others. For more comprehensive news, Techdirt covers the US scene and Michael Geist covers the Canadian scene. Both cover international intellectual property issues as well.

Patent bonanza in nanotechnology (sigh)

This is more of a snippet than anything else, but since it touches on patents and nanotechnology, I’ve decided to post this excerpt (from J. Steven Rutt’s Jan. 2, 2013 posting on JD Supra Law News),

The nanotechnology patent filing boom continues. In 2012, the USPTO [US Patent and Trademark Office] published 4,098 nanotechnology class 977 applications, which represents a 19.2% increase over last year. By way of comparison, in 2008, the USPTO published only 827 nanotechnology applications, and in 2009, only 1,499. Hence, the number has almost tripled in three years.

Rutt is a lawyer with Foley & Lardner LLP and he’s much happier about this news than I am. Of course, a lawyer is much more likely to profit from this trend than anyone else (except maybe a patent troll). My Nov. 23, 2012 posting (Free the nano—stop patenting publicly funded research) highlights some alternative perspectives.

Free the nano—stop patenting publicly funded research

Joshua Pearce, a professor at Michigan Technological University, has written a commentary on patents and nanotechnology for Nature magazine in which he argues that current patent regimes strangle rather than encourage innovation. From the free article, Physics: Make nanotechnology research open-source by Joshua Pearce in Nature 491, 519–521 (22 November 2012) doi:10.1038/491519a (Note: I have removed footnotes),

Any innovator wishing to work on or sell products based on single-walled carbon nanotubes in the United States must wade through more than 1,600 US patents that mention them. He or she must obtain a fistful of licences just to use this tubular form of naturally occurring graphite rolled from a one-atom-thick sheet. This is because many patents lay broad claims: one nanotube example covers “a composition of matter comprising at least about 99% by weight of single-wall carbon molecules”. Tens of others make overlapping claims.

Patent thickets occur in other high-tech fields, but the consequences for nanotechnology are dire because of the potential power and immaturity of the field. Advances are being stifled at birth because downstream innovation almost always infringes some early broad patents. By contrast, computing, lasers and software grew up without overzealous patenting at the outset.

Nanotechnology is big business. According to a 2011 report by technology consultants Cientifica, governments around the world have invested more than US$65 billion in nanotechnology in the past 11 years [my July 15, 2011 posting features an interview with Tim Harper, Cientifica CEO and founder, about the then newly released report]. The sector contributed more than $250 billion to the global economy in 2009 and is expected to reach $2.4 trillion a year by 2015, according to business analysts Lux Research. Since 2001, the United States has invested $18 billion in the National Nanotechnology Initiative; the 2013 US federal budget will add $1.8 billion more.

This investment is spurring intense patent filing by industry and academia. The number of nanotechnology patent applications to the US Patent and Trademark Office (USPTO) is rising each year and is projected to exceed 4,000 in 2012. Anyone who discovers a new and useful process, machine, manufacture or composition of matter, or any new and useful improvement thereof, may obtain a patent that prevents others from using that development unless they have the patent owner’s permission.

Pearce makes some convincing points (Note: I have removed a footnote),

Examples of patents that cover basic components include one owned by the multinational chip manufacturer Intel, which covers a method for making almost any nanostructure with a diameter less than 50 nm; another, held by nanotechnology company NanoSys of Palo Alto, California, covers composites consisting of a matrix and any form of nanostructure. And Rice University in Houston, Texas, has a patent covering “composition of matter comprising at least about 99% by weight of fullerene nanotubes”.

The vast majority of publicly announced IP licence agreements are now exclusive, meaning that only a single person or entity may use the technology or any other technology dependent on it. This cripples competition and technological development, because all other would-be innovators are shut out of the market. Exclusive licence agreements for building-block patents can restrict entire swathes of future innovation.

Pearce’s argument for open source,

This IP rush assumes that a financial incentive is necessary to innovate, and that without the market exclusivity (monopoly) offered by a patent, development of commercially viable products will be hampered. But there is another way, as decades of innovation for free and open-source software show. Large Internet-based companies such as Google and Facebook use this type of software. Others, such as Red Hat, make more than $1 billion a year from selling services for products that they give away for free, like Red Hat’s version of the computer operating system Linux.

An open-source model would leave nanotechnology companies free to use the best tools, materials and devices available. Costs would be cut because most licence fees would no longer be necessary. Without the shelter of an IP monopoly, innovation would be a necessity for a company to survive. Openness reduces the barrier for small, nimble entities entering the market.

John Timmer in his Nov. 23, 2012 article for Wired.co.uk expresses both support and criticism,

Some of Pearce’s solutions are perfectly reasonable. He argues that the National Science Foundation adopt the NIH model of making all research it funds open access after a one-year time limit. But he also calls for an end of patents derived from any publicly funded research: “Congress should alter the Bayh-Dole Act to exclude private IP lockdown of publicly funded innovations.” There are certainly some indications that Bayh-Dole hasn’t fostered as much innovation as it might (Pearce notes that his own institution brings in 100 times more money as grants than it does from licensing patents derived from past grants), but what he’s calling for is not so much a reform of Bayh-Dole as its elimination.

Pearce wants changes in patenting to extend well beyond the academic world, too. He argues that the USPTO should put a moratorium on patents for “nanotechnology-related fundamental science, materials, and concepts.” As we described above, the difference between a process innovation and the fundamental properties resulting in nanomaterial is a very difficult thing to define. The USPTO has struggled to manage far simpler distinctions; it’s unrealistic to expect it to manage a moratorium effectively.

While Pearce points to the 3-D printing sector admiringly, there are some issues even there, as per Mike Masnick’s Nov. 21, 2012 posting on Techdirt.com (Note: I have removed links),

We’ve been pointing out for a while that one of the reasons why advancements in 3D printing have been relatively slow is because of patents holding back the market. However, a bunch of key patents have started expiring, leading to new opportunities. One, in particular, that has received a fair bit of attention was the Formlabs 3D printer, which raised nearly $3 million on Kickstarter earlier this year. It got a ton of well-deserved attention for being one of the first “low end” (sub ~$3,000) 3D printers with very impressive quality levels.

Part of the reason the company said it could offer such a high quality printer at a such a low price, relative to competitors, was because some of the key patents had expired, allowing it to build key components without having to pay astronomical licensing fees. A company called 3D Systems, however, claims that Formlabs missed one patent. It holds US Patent 5,597,520 on a “Simultaneous multiple layer curing in stereolithography.” While I find it ridiculous that 3D Systems is going legal, rather than competing in the marketplace, it’s entirely possible that the patent is valid. It just highlights how the system holds back competition that drives important innovation, though.

3D Systems claims that Formlabs “took deliberate acts to avoid learning” about 3D Systems’ live patents. The lawsuit claims that Formlabs looked only for expired patents — which seems like a very odd claim. Why would they only seek expired patents? …

I strongly suggest reading both Pearce’s and Timmer’s articles as they provide some very interesting perspectives on nanotechnology IP (intellectual property) and open access issues. I also recommend Mike Masnick’s piece for its look at a rather odd, but unfortunately not uncommon, lawsuit designed to limit competition in a relatively new technology (3-D printing).

Vancouver (Canada)-based company, Lumerical Solutions, files patent on new optoelectronic simulation software

I’m not a huge *fan of patents, as per various postings (my Oct. 31, 2011 posting is probably my most overt statement), so I’m not entirely thrilled about this news from Lumerical Solutions, Inc. According to the June 14, 2012 news item on Nanowerk,

Lumerical Solutions, Inc., a global provider of optoelectronic design software, announced the filing of a provisional patent application titled, “System and Method for Transforming a Coordinate System to Simulate an Anisotropic Medium.” The patent application, filed with the US Patent and Trademark Office, describes how the optical response of dispersive, spatially varying anisotropic media can be efficiently simulated on a discretized grid like that employed by finite-difference time-domain (FDTD) or finite-element method (FEM) simulators. The invention disclosed is relevant to a wide array of applications including liquid crystal display (LCD) panels, microdisplays, spatial light modulators, integrated components using liquid crystal on silicon (LCOS) technology like LCOS optical switches, and magneto-optical elements in optical communication and sensing systems.
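For readers unfamiliar with the FDTD method mentioned in the filing: it discretizes Maxwell’s equations on a staggered grid and leapfrogs the electric and magnetic fields forward in time. Here’s a toy, hypothetical illustration of that core idea (a 1-D vacuum simulation in normalized units; it bears no resemblance to Lumerical’s actual anisotropic solver, and all the names and parameters are my own),

```python
import numpy as np

def fdtd_1d(nx=200, steps=300, source_pos=100):
    """Toy 1-D finite-difference time-domain (Yee) simulation in vacuum.

    Fields sit on a staggered grid and are leapfrogged in time;
    units are normalized so the wave moves one cell per time step.
    """
    ez = np.zeros(nx)      # electric field at integer grid points
    hy = np.zeros(nx - 1)  # magnetic field, offset half a cell
    for t in range(steps):
        hy += ez[1:] - ez[:-1]          # update H from the spatial difference of E
        ez[1:-1] += hy[1:] - hy[:-1]    # update E from the spatial difference of H
        ez[source_pos] += np.exp(-((t - 30) ** 2) / 100.0)  # soft Gaussian source
    return ez

fields = fdtd_1d()
print(fields.shape)  # (200,)
```

The point of the patent application, as I read it, is handling materials whose response varies with direction and position (anisotropic media) efficiently on exactly this kind of discretized grid, which the simple isotropic update above sidesteps entirely.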

The company’s June 14, 2012 news release includes this comment from the founder and Chief Technology Officer (CTO),

According to Dr. James Pond, the inventor and Lumerical’s Chief Technology Officer, “many next generation opto-electronic products combine complicated materials and nano-scale structure, which is beyond the capabilities of existing simulation tools. Lumerical’s enhanced framework allows designers to accurately simulate everything from liquid crystal displays to OLEDs, and silicon photonics to integrated quantum computing components.”

Lumerical’s new methodology for efficiently simulating anisotropic media is part of a larger effort to allow designers the ability to model the optical response of many different types of materials.  In addition to the disclosed invention, Lumerical has added a material plugin capability which will enable external parties to include complicated material models, such as those required for modelling semiconductor lasers or non-linear optical devices, into FDTD-based simulation projects.

…  According to Chris Koo, an engineer with Samsung, “Lumerical’s latest innovation has established them as the clear leader in the field of optoelectronic device modeling.  Their comprehensive material modeling capabilities paves the way for the development of exciting new technologies.”

I wish the company good luck. Despite my reservations about current patent regimes, I do appreciate that in some situations, it’s best to apply for a patent.

For the curious, here’s a little more (from the company’s About Lumerical page),

By empowering research and product development professionals with high performance optical design software that leverages recent advances in computing technology, Lumerical helps optical designers tackle challenging design goals and meet strict deadlines. Lumerical’s design software solutions are employed in more than 30 countries by global technology leaders like Agilent, ASML, Bosch, Canon, Harris, Northrop Grumman, Olympus, Philips, Samsung, and STMicroelectronics, and prominent research institutions including Caltech, Harvard, Max Planck Institute, MIT, NIST and the Chinese Academy of Sciences.

Our Name

Lu.min.ous (loo’me-nes) adj., full of light, illuminated

Nu.mer.i.cal (noo-mer’i-kel) adj., of or relating to a number or series of numbers

Lu.mer.i.cal (loo-mer’i-kel) – A company that delivers inventive, highly accurate and cost effective design solutions resulting in significant improvements in product development costs and speed-to-market.

* June 15, 2012: I found the error this morning (9:20 am PDT) and added the word ‘fan’.