Category Archives: regulation

Research on how the body will react to nanomedicines is inconsistent

This is a good general introductory video about nano gold, but I have two caveats: it’s very ‘hypey’ (as in hyperbolic) and, as of 2023, it’s eight years old. The information still looks pretty good (after all, it was produced by Nature magazine) but should you be watching this five years from now, the situation may have changed. (h/t January 5, 2023 news item on Nanowerk)

The video, which includes information about how nano gold can be used to deliver nanomedicines, is embedded in Morteza Mahmoudi’s (Assistant Professor of Radiology, Michigan State University) January 5, 2023 essay on The Conversation about a lack of research on how the body reacts to nanomedicines (Note: Links have been removed),

Nanomedicines took the spotlight during the COVID-19 pandemic. Researchers are using these very small and intricate materials to develop diagnostic tests and treatments. Nanomedicine is already used for various diseases, such as the COVID-19 vaccines and therapies for cardiovascular disease. The “nano” refers to the use of particles that are only a few hundred nanometers in size, which is significantly smaller than the width of a human hair.

Although researchers have developed several methods to improve the reliability of nanotechnologies, the field still faces one major roadblock: a lack of a standardized way to analyze biological identity, or how the body will react to nanomedicines. This is essential information in evaluating how effective and safe new treatments are.

I’m a researcher studying overlooked factors in nanomedicine development. In our recently published research, my colleagues and I found that analyses of biological identity are highly inconsistent across proteomics facilities that specialize in studying proteins.

Nanoparticles (white disks) can be used to deliver treatment to cells (blue). (Image: Brenda Melendez and Rita Serda, National Cancer Institute, National Institutes of Health, CC BY-NC) [downloaded from https://www.nanowerk.com/nanotechnology-news2/newsid=62097.php]

Mahmoudi’s January 5, 2023 essay describes testing a group of laboratories’ analyses of samples he and his team submitted to them (Note: Links have been removed),

We wanted to test how consistently these proteomics facilities analyzed protein corona samples. To do this, my colleagues and I sent biologically identical protein coronas to 17 different labs in the U.S. for analysis.

We had striking results: Less than 2% of the proteins the labs identified were the same.

Our results reveal an extreme lack of consistency in the analyses researchers use to understand how nanomedicines work in the body. This may pose a significant challenge not only to ensuring the accuracy of diagnostics, but also the effectiveness and safety of treatments based on nanomedicines.

… my team and I have identified several critical but often overlooked factors that can influence the performance of a nanomedicine, such as a person’s sex, prior medical conditions and disease type. …

Mahmoudi is pointing out that it’s very early days for nanomedicines and there’s a lot of work still to be done.

Here’s a link to and a citation for the paper Mahmoudi and his team had published on this topic,

Measurements of heterogeneity in proteomics analysis of the nanoparticle protein corona across core facilities by Ali Akbar Ashkarran, Hassan Gharibi, Elizabeth Voke, Markita P. Landry, Amir Ata Saei & Morteza Mahmoudi. Nature Communications volume 13, Article number: 6610 (2022) DOI: https://doi.org/10.1038/s41467-022-34438-8 Published 03 November 2022

This paper is open access.
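Out of curiosity about what “less than 2% of the proteins were the same” means in practice, here’s a minimal sketch (mine, not the authors’ method) of how inter-laboratory agreement on protein identifications can be quantified: take each facility’s list of identified proteins, then compute the fraction reported by every lab and the average pairwise overlap. The lab names and protein lists below are made up for illustration.

```python
# Toy illustration (not the paper's actual method or data): quantify how
# consistently different labs identify proteins in the same protein corona.

from itertools import combinations

# Hypothetical protein lists reported by three proteomics facilities
lab_reports = {
    "lab_A": {"albumin", "fibrinogen", "apolipoprotein_A1", "complement_C3"},
    "lab_B": {"albumin", "apolipoprotein_A1", "transferrin", "IgG_heavy_chain"},
    "lab_C": {"albumin", "fibrinogen", "haptoglobin", "transferrin"},
}

# Fraction of all reported proteins that every lab agrees on
all_proteins = set.union(*lab_reports.values())
shared_by_all = set.intersection(*lab_reports.values())
print(f"Identified by every lab: {len(shared_by_all) / len(all_proteins):.1%}")

# Average pairwise overlap (Jaccard index) between labs
jaccards = [len(a & b) / len(a | b) for a, b in combinations(lab_reports.values(), 2)]
print(f"Mean pairwise Jaccard overlap: {sum(jaccards) / len(jaccards):.1%}")
```

With 17 facilities and hundreds or thousands of proteins per corona sample, even modest differences between labs can drive the “identified by every lab” fraction down dramatically, which is consistent with the under-2% figure quoted above.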

Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27)

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

Months after the first reading in June 2022, Bill C-27 was mentioned here in a September 15, 2022 posting about a Canadian Science Policy Centre (CSPC) event featuring a panel discussion about the proposed legislation, artificial intelligence in particular. I dug down and found commentaries and additional information about the proposed bill with special attention to AIDA.

It seems discussion has been reactivated since the second reading was completed on April 24, 2023 and the bill was referred to committee for further discussion. (A report and third reading are still to come in the House of Commons and then there are three readings in the Senate before this legislation can be passed.)

Christian Paas-Lang has written an April 24, 2023 article for CBC (Canadian Broadcasting Corporation) news online that highlights concerns centred on AI from three cross-party Members of Parliament (MPs),

Once the domain of a relatively select group of tech workers, academics and science fiction enthusiasts, the debate over the future of artificial intelligence has been thrust into the mainstream. And a group of cross-party MPs say Canada isn’t yet ready to take on the challenge.

The popularization of AI as a subject of concern has been accelerated by the introduction of ChatGPT, an AI chatbot produced by OpenAI that is capable of generating a broad array of text, code and other content. ChatGPT relies on content published on the internet as well as training from its users to improve its responses.

ChatGPT has prompted such a fervour, said Katrina Ingram, founder of the group Ethically Aligned AI, because of its novelty and effectiveness. 

“I would argue that we’ve had AI enabled infrastructure or technologies around for quite a while now, but we haven’t really necessarily been confronted with them, you know, face to face,” she told CBC Radio’s The House [radio segment embedded in article] in an interview that aired Saturday [April 22, 2023].

Ingram said the technology has prompted a series of concerns: about the livelihoods of professionals like artists and writers, about privacy, data collection and surveillance and about whether chatbots like ChatGPT can be used as tools for disinformation.

With the popularization of AI as an issue has come a similar increase in concern about regulation, and Ingram says governments must act now.

“We are contending with these technologies right now. So it’s really imperative that governments are able to pick up the pace,” she told host Catherine Cullen.

That sentiment — the need for speed — is one shared by three MPs from across party lines who are watching the development of the AI issue. Conservative MP Michelle Rempel Garner, NDP MP Brian Masse and Nathaniel Erskine-Smith of the Liberals also joined The House for an interview that aired Saturday.

“This is huge. This is the new oil,” said Masse, the NDP’s industry critic, referring to how oil had fundamentally shifted economic and geopolitical relationships, leading to a great deal of good but also disasters — and AI could do the same.

Issues of both speed and substance

The three MPs are closely watching Bill C-27, a piece of legislation currently being debated in the House of Commons that includes Canada’s first federal regulations on AI.

But each MP expressed concern that the bill may not be ready in time and changes would be needed [emphasis mine].

“This legislation was tabled in June of last year [2022], six months before ChatGPT was released and it’s like it’s obsolete. It’s like putting in place a framework to regulate scribes four months after the printing press came out,” Rempel Garner said. She added that it was wrongheaded to move the discussion of AI away from Parliament and segment it off to a regulatory body.

Am I the only person who sees a problem with “the bill may not be ready in time and changes would be needed”? I don’t understand the rush (or how these people get elected). The point of a bill is to examine the ideas and make changes to it before it becomes legislation. Given how fluid the situation appears to be, a strong argument can be made for the current process, which is three readings in the House of Commons, along with a committee report, and three readings in the Senate before a bill, if successful, is passed into legislation.

Of course, the fluidity of the situation could also be an argument for starting over, as Michael Geist’s (Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa and member of the Centre for Law, Technology and Society) April 19, 2023 post on his eponymous blog suggests (Note: Links have been removed),

As anyone who has tried ChatGPT will know, at the bottom of each response is an option to ask the AI system to “regenerate response”. Despite increasing pressure on the government to move ahead with Bill C-27’s Artificial Intelligence and Data Act (AIDA), the right response would be to hit the regenerate button and start over. AIDA may be well-meaning and the issue of AI regulation critically important, but the bill is limited in principles and severely lacking in detail, leaving virtually all of the heavy lifting to a regulation-making process that will take years to unfold. While no one should doubt the importance of AI regulation, Canadians deserve better than virtue signalling on the issue with a bill that never received a full public consultation.

What prompts this post is a public letter based out of MILA that calls on the government to urgently move ahead with the bill signed by some of Canada’s leading AI experts. The letter states: …

When the signatories to the letter suggest that there is prospect of moving AIDA forward before the summer, it feels like a ChatGPT error. There are a maximum of 43 days left on the House of Commons calendar until the summer. In all likelihood, it will be less than that. Bill C-27 is really three bills in one: major privacy reform, the creation of a new privacy tribunal, and AI regulation. I’ve watched the progress of enough bills to know that this just isn’t enough time to conduct extensive hearings on the bill, conduct a full clause-by-clause review, debate and vote in the House, and then conduct another review in the Senate. At best, Bill C-27 could make some headway at committee, but getting it passed with a proper review is unrealistic.

Moreover, I am deeply concerned about a Parliamentary process that could lump together these three bills in an expedited process. …

For anyone unfamiliar with MILA, it is also known as Quebec’s Artificial Intelligence Institute. (They seem to have replaced institute with ecosystem since the last time I checked.) You can see the document and list of signatories here.

Geist has a number of posts and podcasts focused on the bill and the easiest way to find them is to use the search term ‘Bill C-27’.

Maggie Arai at the University of Toronto’s Schwartz Reisman Institute for Technology and Society provides a brief overview titled, Five things to know about Bill C-27, in her April 18, 2022 commentary,

On June 16, 2022, the Canadian federal government introduced Bill C-27, the Digital Charter Implementation Act 2022, in the House of Commons. Bill C-27 is not entirely new, following in the footsteps of Bill C-11 (the Digital Charter Implementation Act 2020). Bill C-11 failed to pass, dying on the Order Paper when the Governor General dissolved Parliament to hold the 2021 federal election. While some aspects of C-27 will likely be familiar to those who followed the progress of Bill C-11, there are several key differences.

After noting the differences, Arai had this to say, from her April 18, 2022 commentary,

The tabling of Bill C-27 represents an exciting step forward for Canada as it attempts to forge a path towards regulating AI that will promote innovation of this advanced technology, while simultaneously offering consumers assurance and protection from the unique risks this new technology poses. This second attempt towards the CPPA and PIDPTA is similarly positive, and addresses the need for updated and increased consumer protection, privacy, and data legislation.

However, as the saying goes, the devil is in the details. As we have outlined, several aspects of how Bill C-27 will be implemented are yet to be defined, and how the legislation will interact with existing social, economic, and legal dynamics also remains to be seen.

There are also sections of C-27 that could be improved, including areas where policymakers could benefit from the insights of researchers with domain expertise in areas such as data privacy, trusted computing, platform governance, and the social impacts of new technologies. In the coming weeks, the Schwartz Reisman Institute will present additional commentaries from our community that explore the implications of C-27 for Canadians when it comes to privacy, protection against harms, and technological governance.

Bryan Short’s September 14, 2022 posting (The Absolute Bare Minimum: Privacy and the New Bill C-27) on the Open Media website critiques two of the three acts included in Bill C-27 (Note: Links have been removed),

The Canadian government has taken the first step towards creating new privacy rights for people in Canada. After a failed attempt in 2020 and three years of inaction since the proposal of the digital charter, the government has tabled another piece of legislation aimed at giving people in Canada the privacy rights they deserve.

In this post, we’ll explore how Bill C-27 compares to Canada’s current privacy legislation, how it stacks up against our international peers, and what it means for you. This post considers two of the three acts being proposed in Bill C-27, the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Tribunal Act (PIDTA), and doesn’t discuss the Artificial Intelligence and Data Act [emphasis mine]. The latter Act’s engagement with very new and complex issues means we think it deserves its own consideration separate from existing privacy proposals, and will handle it as such.

If we were to give Bill C-27’s CPPA and PIDTA a grade, it’d be a D. This is legislation that does the absolute bare minimum for privacy protections in Canada, and in some cases it will make things actually worse. If they were proposed and passed a decade ago, we might have rated it higher. However, looking ahead at predictable movement in data practices over the next ten – or even twenty – years, these laws will be out of date the moment they are passed, and leave people in Canada vulnerable to a wide range of predatory data practices. For detailed analysis, read on – but if you’re ready to raise your voice, go check out our action calling for positive change before C-27 passes!

Taking this all into account, Bill C-27 isn’t yet the step forward for privacy in Canada that we need. While it’s an improvement upon the last privacy bill that the government put forward, it misses so many areas that are critical for improvement, like failing to put people in Canada above the commercial interests of companies.

If Open Media has followed up with an AIDA critique, I have not been able to find it on their website.

A final SNUR (from the US Environmental Protection Agency) for MWCNTs (multiwalled carbon nanotubes)

SNUR means ‘significant new use rules’ and it’s been a long while since I’ve stumbled across any rulings from the US Environmental Protection Agency (EPA) concerning nanomaterials. From a September 30, 2022 news item on Nanotechnology News by Lynn L. Bergeson,

On September 29, 2022, the U.S. Environmental Protection Agency (EPA) issued final significant new use rules (SNUR) under the Toxic Substances Control Act (TSCA) for certain chemical substances that were the subject of premanufacture notices (PMN), including multi-walled carbon nanotubes (MWCNT) (generic). 87 Fed. Reg. 58999. See https://www.federalregister.gov/documents/2022/09/29/2022-21042/significant-new-use-rules-on-certain-chemical-substances-21-25e The SNUR requires persons who intend to manufacture (defined by statute to include import) or process the chemical substance identified generically as MWCNTs (PMN P-20-72) for an activity that is designated as a significant new use to notify EPA at least 90 days before commencing that activity. Persons may not commence manufacture or processing for the significant new use until EPA has conducted a review of the notice, made an appropriate determination on the notice, and taken such actions as are required by that determination. The SNUR will be effective on November 28, 2022.

Hazard communication: Requirements as specified in 40 C.F.R. Section 721.72(a) through (d), (f), (g)(1), (g)(3), and (g)(5). For purposes of Section 721.72(g)(1), this substance may cause: eye irritation; respiratory sensitization; skin sensitization; carcinogenicity; and specific target organ toxicity. For purposes of Section 721.72(g)(3), this substance may cause unknown aquatic toxicity. Alternative hazard and warning statements that meet the criteria of the Globally Harmonized System of Classification and Labeling of Chemicals (GHS) and Occupational Safety and Health Administration (OSHA) Hazard Communication Standard (HCS) may be used.

The September 30, 2022 news item lists more significant new uses.

US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs)

If you’ve been longing to discover more and to engage in discussion about brain-machine interfaces (BMIs) and their legal, technical, and ethical issues, an opportunity is just a day away. From a September 20, 2022 (US) National Academies of Sciences, Engineering, and Medicine (NAS/NASEM or National Academies) notice (received via email),

Sept. 22-23 [2022] Workshop Explores Technical, Legal, Ethical Issues Raised by Brain-Machine Interfaces [official title: Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop]

Technological developments and advances in understanding of the human brain have led to the development of new Brain-Machine Interface technologies. These include technologies that “read” the brain to record brain activity and decode its meaning, and those that “write” to the brain to manipulate activity in specific brain regions. Right now, most of these interface technologies are medical devices placed inside the brain or other parts of the nervous system – for example, devices that use deep brain stimulation to modulate the tremors of Parkinson’s disease.

But tech companies are developing mass-market wearable devices that focus on understanding emotional states or intended movements, such as devices used to detect fatigue, boost alertness, or enable thoughts to control gaming and other digital-mechanical systems. Such applications raise ethical and legal issues, including risks that thoughts or mood might be accessed or manipulated by companies, governments, or others; risks to privacy; and risks related to a widening of social inequalities.

A virtual workshop [emphasis mine] hosted by the National Academies of Sciences, Engineering, and Medicine on Sept. 22-23 [2022] will explore the present and future of these technologies and the ethical, legal, and regulatory issues they raise.

The workshop will run from 12:15 p.m. to 4:25 p.m. ET on Sept. 22 and from noon to 4:30 p.m. ET on Sept. 23. View agenda and register.

For those who might want a peek at the agenda before downloading it, I have listed the titles for the sessions (from my downloaded agenda; Note: I’ve reformatted the information; there are no breaks, discussion periods, or Q&As included),

Sept. 22, 2022 Draft Agenda

12:30 pm ET Brain-Machine and Related Neural Interface Technologies: The State and Limitations of the Technology

2:30 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Movement

Sept. 23, 2022 Draft Agenda

12:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Mood and Affect

2:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Thought, Communication, and Memory

4:00 pm ET Concluding Thoughts from Workshop Planning Committee

Regarding terminology, there’s brain-machine interface (BMI), which I think is a more generic term that includes: brain-computer interface (BCI), neural interface and/or neural implant. There are other terms as well, including this one in the title of my September 17, 2020 posting, “Turning brain-controlled wireless electronic prostheses [emphasis mine] into reality plus some ethical points.” I have a more recent April 5, 2022 posting, which is a very deep dive, “Going blind when your neural implant company flirts with bankruptcy (long read).” As you can see, various social issues associated with these devices have been of interest to me.

I’m not sure quite what to make of the session titles. There doesn’t seem to be all that much emphasis on ethical and legal issues but perhaps that’s the role the various speakers will play.

Nanomaterials are now nanoforms?

There’s a proposal/recommendation/call to start calling nanomaterials ‘nanoforms’ according to an April 20, 2021 article (you can also find it here on JDSupra) by Lynn Bergeson and Carla N. Hutton in the April 27, 2021 (Volume XI, Number 117) issue of the National Law Review,

The Nanotechnology Industries Association (NIA) has published a March 2021 position paper, A changing regulatory landscape and language for the nanoscale, that examines the transition from “nanomaterial” to “nanoforms” to reflect better the differences in nanomaterial properties both in relation to bulk counterparts and to nanoforms of the same substance.  In the paper, NIA describes a transition phase where the language of nanoforms is used more widely and examples of nanoform sets can be increasingly demonstrated in the public domain, while there is still a “significant” learning curve for both industry and the European Chemicals Agency (ECHA).  NIA recommends that all stakeholders reassess the language they use where relevant, “particularly when discussing hazard, and where non-specific terminology may be misleading and result in confusion and mistrust in the safety of substances at the nanoscale.” …

I haven’t noticed the use of nanoforms yet but, going forward, I will be alert to the change in terminology.

Turning brain-controlled wireless electronic prostheses into reality plus some ethical points

Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting) according to an August 5, 2020 news item on ScienceDaily,

The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.

Caption: Photo of a current neural implant that uses wires to transmit information and receive power. New research suggests how to one day cut the wires. Credit: Sergey Stavisky

An August 3, 2020 Stanford University news release (also on EurekAlert but published August 4, 2020) by Tom Abate, which originated the news item, details the problem and the proposed solution,

Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.

The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.

Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.

Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.

The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.

To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a (BrainGate) clinical trial.

As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.

The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

Here’s a link to and a citation for the paper,

Power-saving design opportunities for wireless intracortical brain–computer interfaces by Nir Even-Chen, Dante G. Muratore, Sergey D. Stavisky, Leigh R. Hochberg, Jaimie M. Henderson, Boris Murmann & Krishna V. Shenoy. Nature Biomedical Engineering (2020) DOI: https://doi.org/10.1038/s41551-020-0595-9 Published: 03 August 2020

This paper is behind a paywall.
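The core idea in the news release — decode from a subset of action-specific signals instead of streaming raw brain data in bulk — can be illustrated with some back-of-the-envelope arithmetic. To be clear, the numbers below (channel counts, sample rates, bit depths, binned spike counts) are my own illustrative assumptions, not figures from the Nature Biomedical Engineering paper.

```python
# Rough, illustrative comparison (assumed numbers, not from the paper): raw
# broadband neural data versus a reduced feature stream of binned threshold
# crossings from a subset of informative channels.

CHANNELS = 96            # assumed electrode count
SAMPLE_RATE_HZ = 30_000  # assumed broadband sampling rate per channel
BITS_PER_SAMPLE = 12     # assumed ADC resolution

raw_bits_per_s = CHANNELS * SAMPLE_RATE_HZ * BITS_PER_SAMPLE

KEPT_CHANNELS = 48       # assumed subset of action-relevant channels
BIN_RATE_HZ = 50         # assumed 20 ms bins of spike counts
BITS_PER_BIN = 8         # assumed resolution of each binned count

reduced_bits_per_s = KEPT_CHANNELS * BIN_RATE_HZ * BITS_PER_BIN

print(f"Raw broadband stream: {raw_bits_per_s / 1e6:.2f} Mbit/s")
print(f"Reduced feature stream: {reduced_bits_per_s / 1e3:.1f} kbit/s")
print(f"Roughly {raw_bits_per_s / reduced_bits_per_s:,.0f}x less data to transmit")
```

Less data to radio out of the skull means less transmit power and less heat, which is the safety problem described above; the news release credits the team’s actual design with using about a tenth of the power required by current wire-enabled systems.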

Comments about ethical issues

As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.

My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.

I can’t find anything more recent on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on the ethical issues of technology and human enhancement, in that case gene editing. Regardless, he makes points that are applicable to brain-computer interfaces (human enhancement) (Note: Links have been removed),

Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.

A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?

Which abilities are seen as more important than others?

The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.

And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.

One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.

Ethics of clinical trials for testing brain implants

This October 31, 2017 article by Emily Underwood for Science was revelatory,

In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.

… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.

There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”

Brain-computer interfaces, symbiosis, and ethical issues

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically1. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.

Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.

Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.

To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.

If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses.[emphasis mine]

But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.

Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.

Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research papers, which can be found here. I did not see any research papers concerning safety issues.

Getting back to Drew’s July 24, 2019 article and Patient 6,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

I strongly recommend reading Drew’s July 24, 2019 article in its entirety.

Finally

It’s easy to forget, in all the excitement over technologies ‘making our lives better’, that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues or old issues in new ways.

What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg‘ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.

Interestingly, no one seems to care much when insects are turned into cyborgs (I can’t remember who pointed this out) but it is a popular area of research, especially for military and search-and-rescue applications.

I’ve sometimes used the terms ‘machine/flesh’ and/or ‘augmentation’ as descriptions of technologies integrated with bodies, human or otherwise. You can find lots on the topic here, however I’ve tagged or categorized it.

Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human’ featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful Paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.

Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.

Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions with a few experts and commercial interests deciding on how the rest of us (however you define ‘us’ as per Gregor Wolbring’s essay) will live.

This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis is that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained and read the same materials, or from entrepreneurs who too often seem to put profit over thoughtful implementations of new technologies. (See the PDF May 2020 edition [you’ll find me under Policy Development] or see my May 15, 2020 posting here, with all the sources listed.)

As for this new research at Stanford, it’s exciting news that raises questions, as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).

New US regulations exempt many gene-edited crops from government oversight

A June 1, 2020 essay by Maywa Montenegro (Postdoctoral Fellow, University of California at Davis) for The Conversation posits that new regulations (which in fact result in deregulation) are likely to create problems,

In May [2020], federal regulators finalized a new biotechnology policy that will bring sweeping changes to the U.S. food system. Dubbed “SECURE,” the rule revises U.S. Department of Agriculture regulations over genetically engineered plants, automatically exempting many gene-edited crops from government oversight. Companies and labs will be allowed to “self-determine” whether or not a crop should undergo regulatory review or environmental risk assessment.

Initial responses to this new policy have followed familiar fault lines in the food community. Seed industry trade groups and biotech firms hailed the rule as “important to support continuing innovation.” Environmental and small farmer NGOs called the USDA’s decision “shameful” and less attentive to public well-being than to agribusiness’s bottom line.

But the gene-editing tool CRISPR was supposed to break the impasse in old GM wars by making biotechnology more widely affordable, accessible and thus democratic.

In my research, I study how biotechnology affects transitions to sustainable food systems. It’s clear that since 2012 the swelling R&D pipeline of gene-edited grains, fruits and vegetables, fish and livestock has forced U.S. agencies to respond to the so-called CRISPR revolution.

Yet this rule change has a number of people in the food and scientific communities concerned. To me, it reflects the lack of accountability and trust between the public and government agencies setting policies.

Is there a better way?

… I have developed a set of principles and practices for governing CRISPR based on dialogue with front-line communities who are most affected by the technologies others usher in. Communities don’t just have to adopt or refuse technology – they can co-create [emphasis mine] it.

One way to move forward in the U.S. is to take advantage of common ground between sustainable agriculture movements and CRISPR scientists. The struggle over USDA rules suggests that few outside of industry believe self-regulation is fair, wise or scientific.

h/t: June 1, 2020 news item on phys.org

If you have the time and the inclination, do read the essay in its entirety.

Anyone who has read my COVID-19 op-ed for the Canadian Science Policy Centre may see some similarity between Montenegro’s “co-create” and this passage from my May 15, 2020 posting, which included my reference materials (or see this version on the Canadian Science Policy Centre website, where you can find many other COVID-19 op-eds),

In addition to engaging experts as we navigate our way into the future, we can look to artists, writers, citizen scientists, elders, indigenous communities, rural and urban communities, politicians, philosophers, ethicists, religious leaders, and bureaucrats of all stripes for more insight into the potential for collateral and unintended consequences.

To be clear, I think times of crises are when a lot of people call for more co-creation and input. Here’s more about Montenegro’s work on her profile page (which includes her academic credentials, research interests and publications) on the University of California at Berkeley’s Department of Environmental Science, Policy, and Management webspace. She seems to have been making the call for years.

I am a US-Dutch-Peruvian citizen who grew up in Appalachia, studied molecular biology in the Northeast, worked as a journalist in New York City, and then migrated to the left coast to pursue a PhD. My indigenous ancestry, smallholder family history, and the colonizing/decolonizing experiences of both the Netherlands and Peru informs my personal and professional interests in seeds and agrobiodiversity. My background engenders a strong desire to explore synergies between western science and the indigenous/traditional knowledge systems that have historically been devalued and marginalized.

Trained in molecular biology, science writing, and now, a range of critical social and ecological theory, I incorporate these perspectives into research on seeds.

I am particularly interested in the relationship between formal seed systems – characterized by professional breeding, certification, intellectual property – and commercial sale and informal seed systems through which farmers traditionally save, exchange, and sell seeds. …

You can find more on her Twitter feed, which is where I discovered a call for papers for a “Special Feature: Gene Editing the Food System” in the journal Elementa: Science of the Anthropocene. They have a rolling deadline, which started in February 2020. At this time, there is one paper in the series,

Democratizing CRISPR? Stories, practices, and politics of science and governance on the agricultural gene editing frontier by Maywa Montenegro de Wit. Elem Sci Anth, 8(1), p.9. DOI: http://doi.org/10.1525/elementa.405 Published February 25, 2020

The paper is open access. Interestingly, the guest editor is Elizabeth Fitting of Dalhousie University in Nova Scotia, Canada.

In depth report on European Commission’s nanotechnology definition

A February 13, 2019 news item on the (US) National Law Review blog announces a new report on nanomaterial definitions (Note: A link has been removed),

The European Commission’s (EC) Joint Research Center (JRC) published on February 13, 2019, a report entitled An overview of concepts and terms used in the European Commission’s definition of nanomaterial. … The report provides recommendations for a harmonized and coherent implementation of the nanomaterial definition in any specific regulatory context at the European Union (EU) and national level.

©2019 Bergeson & Campbell, P.C.

There’s a bit more detail about the report in a February 19, 2019 European Commission press release,

The JRC just released a report clarifying the key concepts and terms used in the European Commission’s nanomaterial definition.

This will support stakeholders for the correct implementation of legislation making reference to the definition.

Nanotechnology may well be one of the most fast-moving sectors of the last few years.
The number of products produced by nanotechnology or containing nanomaterials entering the market is increasing.

As the technology develops, nanomaterials are delivering benefits to many sectors, including: healthcare (in targeted drug delivery, regenerative medicine, and diagnostics), electronics, cosmetics, textiles, information technology and environmental protection.
As the name suggests, nanomaterials are very small – so small that they are invisible to the human eye.

In fact, nanomaterials contain particles smaller than 100 nanometres (100 millionths of a millimetre).

Nanomaterials have unique physical and chemical characteristics.
They can be used in consumer products to improve the products’ properties – for instance, to make something more resistant against breaking, stains or humidity.

Nanomaterials have undoubtedly enabled progress in many areas, but as with all innovation, we must ensure that the impacts on human health and the environment are properly considered.

The European Commission’s Recommendation on the definition of nanomaterials (2011/696/EU) provides a general basis for regulatory instruments in many areas.

This definition has been used in the EU regulations on biocidal products and medical devices, and the REACH regulation. It is also used in various national legislative texts.

However, in the context of a JRC survey, many respondents expressed difficulties with the implementation of the EC definition, in particular due to the fact that some of the key concepts and terms could be interpreted in different ways.

Therefore, the JRC just published the report “An overview of concepts and terms used in the European Commission’s definition of nanomaterial” which aims to provide a clarification of the key concepts and terms of the nanomaterial definition and discusses them in a regulatory context.

This will facilitate a common understanding and foster a harmonised and coherent implementation of the nanomaterial definition in different regulatory contexts at EU and national level.
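For anyone wondering what “implementing the definition” looks like in practice, the headline criterion in Recommendation 2011/696/EU is usually summarized as: a material is a nanomaterial if 50% or more of its particles, by number, have at least one external dimension between 1 nm and 100 nm (the threshold can be adjusted in specific cases). Here’s a minimal sketch of that check on made-up measurements; it’s an illustration of the concept only, since the JRC report exists precisely because real-world implementation (agglomerates, measurement methods, borderline cases) is messier.

```python
# Toy check of the headline criterion in the EC's nanomaterial definition
# (Recommendation 2011/696/EU): are >= 50% of particles, by number, within
# 1-100 nm in at least one external dimension? Illustration only; real
# assessments involve measurement uncertainty, agglomerates, and context.

def is_nanomaterial(particle_sizes_nm, threshold=0.5):
    """particle_sizes_nm: one measured external dimension per particle, in nm."""
    in_range = sum(1 for d in particle_sizes_nm if 1.0 <= d <= 100.0)
    return in_range / len(particle_sizes_nm) >= threshold

# Hypothetical measurement: 6 of 10 particles fall within 1-100 nm
sample = [35, 60, 80, 95, 98, 99, 120, 150, 400, 1200]  # nm
print(is_nanomaterial(sample))  # True -> 60% of particles by number are in range
```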

Not my favourite topic but definitions and their implementation are important whether I like it or not.

Gene editing and personalized medicine: Canada

Back in the fall of 2018 I came across one of those overexcited pieces about personalized medicine and gene editing that are out there. This one came from an unexpected source, an author who is a “PhD Scientist in Medical Science (Blood and Vasculature)” (from Rick Gierczak’s LinkedIn profile).

It starts out promisingly enough, although I’m beginning to dread the use of the word ‘precise’ where medicine is concerned (from a September 17, 2018 posting on the Science Borealis blog by Rick Gierczak; Note: Links have been removed),

CRISPR-Cas9 technology was accidentally discovered in the 1980s when scientists were researching how bacteria defend themselves against viral infection. While studying bacterial DNA called clustered regularly interspaced short palindromic repeats (CRISPR), they identified additional CRISPR-associated (Cas) protein molecules. Together, CRISPR and one of those protein molecules, termed Cas9, can locate and cut precise regions of bacterial DNA. By 2012, researchers understood that the technology could be modified and used more generally to edit the DNA of any plant or animal. In 2015, the American Association for the Advancement of Science chose CRISPR-Cas9 as science’s “Breakthrough of the Year”.

Today, CRISPR-Cas9 is a powerful and precise gene-editing tool [emphasis mine] made of two molecules: a protein that cuts DNA (Cas9) and a custom-made length of RNA that works like a GPS for locating the exact spot that needs to be edited (CRISPR). Once inside the target cell nucleus, these two molecules begin editing the DNA. After the desired changes are made, they use a repair mechanism to stitch the new DNA into place. Cas9 never changes, but the CRISPR molecule must be tailored for each new target — a relatively easy process in the lab. However, it’s not perfect, and occasionally the wrong DNA is altered [emphasis mine].
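To make the ‘GPS’ metaphor a little more concrete: for the commonly used SpCas9 enzyme, the guide RNA matches a roughly 20-letter stretch of DNA that must sit immediately in front of a short ‘NGG’ motif (the PAM) before Cas9 will cut there. Here’s a minimal sketch of that search step on a made-up sequence; it illustrates the targeting logic only and ignores the off-target matching behind the “wrong DNA” edits mentioned in the excerpt.

```python
# Toy illustration of how a CRISPR-Cas9 guide "finds" its target: for SpCas9,
# a ~20-nucleotide protospacer must be followed immediately by an NGG PAM.
# Made-up sequence; real guide design also scores off-target sites genome-wide,
# which this sketch ignores.

import re

def find_spcas9_sites(dna, guide_length=20):
    """Return (position, protospacer, PAM) for every NGG PAM with room for a guide."""
    sites = []
    for match in re.finditer(r"(?=([ACGT]GG))", dna):  # overlapping NGG motifs
        pam_start = match.start()
        if pam_start >= guide_length:
            protospacer = dna[pam_start - guide_length:pam_start]
            sites.append((pam_start - guide_length, protospacer, match.group(1)))
    return sites

dna = "TTACGATCGATTACCGGTTAGCATGCAAGGATCCGTACGTAGCTAGGCTA"  # hypothetical sequence
for pos, proto, pam in find_spcas9_sites(dna):
    print(f"position {pos}: guide target {proto} | PAM {pam}")
```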

Note that Gierczak makes a point of mentioning that CRISPR/Cas9 is “not perfect.” And then, he gets excited (Note: Links have been removed),

CRISPR-Cas9 has the potential to treat serious human diseases, many of which are caused by a single “letter” mutation in the genetic code (A, C, T, or G) that could be corrected by precise editing. [emphasis mine] Some companies are taking notice of the technology. A case in point is CRISPR Therapeutics, which recently developed a treatment for sickle cell disease, a blood disorder that causes a decrease in oxygen transport in the body. The therapy targets a special gene called fetal hemoglobin that’s switched off a few months after birth. Treatment involves removing stem cells from the patient’s bone marrow and editing the gene to turn it back on using CRISPR-Cas9. These new stem cells are returned to the patient ready to produce normal red blood cells. In this case, the risk of error is eliminated because the new cells are screened for the correct edit before use.

The breakthroughs shown by companies like CRISPR Therapeutics are evidence that personalized medicine has arrived. [emphasis mine] However, these discoveries will require government regulatory approval from the countries where the treatment is going to be used. In the US, the Food and Drug Administration (FDA) has developed new regulations allowing somatic (i.e., non-germ) cell editing and clinical trials to proceed. [emphasis mine]

The potential treatment for sickle cell disease is exciting but Gierczak offers no evidence that this treatment, or any unnamed others, constitutes proof that “personalized medicine has arrived.” In fact, Goldman Sachs, a US-based investment bank, makes the case that it never will.

Cost/benefit analysis

Edward Abrahams, president of the Personalized Medicine Coalition (US-based), advocates for personalized medicine while noting, in passing, market forces as represented by Goldman Sachs, in his May 23, 2018 piece for statnews.com (Note: A link has been removed),

One of every four new drugs approved by the Food and Drug Administration over the last four years was designed to become a personalized (or “targeted”) therapy that zeros in on the subset of patients likely to respond positively to it. That’s a sea change from the way drugs were developed and marketed 10 years ago.

Some of these new treatments have extraordinarily high list prices. But focusing solely on the cost of these therapies rather than on the value they provide threatens the future of personalized medicine.

… most policymakers are not asking the right questions about the benefits of these treatments for patients and society. Influenced by cost concerns, they assume that prices for personalized tests and treatments cannot be justified even if they make the health system more efficient and effective by delivering superior, longer-lasting clinical outcomes and increasing the percentage of patients who benefit from prescribed treatments.

Goldman Sachs, for example, issued a report titled “The Genome Revolution.” It argues that while “genome medicine” offers “tremendous value for patients and society,” curing patients may not be “a sustainable business model.” [emphasis mine] The analysis underlines that the health system is not set up to reap the benefits of new scientific discoveries and technologies. Just as we are on the precipice of an era in which gene therapies, gene-editing, and immunotherapies promise to address the root causes of disease, Goldman Sachs says that these therapies have a “very different outlook with regard to recurring revenue versus chronic therapies.”

Let’s just chew on this one (contemplate) for a minute: “curing patients may not be ‘a sustainable business model’”!

Coming down to earth: policy

While I find Gierczak to be over-enthused, he, like Abrahams, emphasizes the importance of new policy; in his case, the focus is Canadian policy. From Gierczak’s September 17, 2018 posting (Note: Links have been removed),

In Canada, companies need approval from Health Canada. But a 2004 law called the Assisted Human Reproduction Act (AHR Act) states that it’s a criminal offence “to alter the genome of a human cell, or in vitro embryo, that is capable of being transmitted to descendants”. The Act is so broadly written that Canadian scientists are prohibited from using the CRISPR-Cas9 technology on even somatic cells. Today, Canada is one of the few countries in the world where treating a disease with CRISPR-Cas9 is a crime.

On the other hand, some countries provide little regulatory oversight for editing either germ or somatic cells. In China, a company often only needs to satisfy the requirements of the local hospital where the treatment is being performed. And, if germ-cell editing goes wrong, there is little recourse for the future generations affected.

The AHR Act was introduced to regulate the use of reproductive technologies like in vitro fertilization and research related to cloning human embryos during the 1980s and 1990s. Today, we live in a time when medical science, and its role in Canadian society, is rapidly changing. CRISPR-Cas9 is a powerful tool, and there are aspects of the technology that aren’t well understood and could potentially put patients at risk if we move ahead too quickly. But the potential benefits are significant. Updated legislation that acknowledges both the risks and current realities of genomic engineering [emphasis mine] would relieve the current obstacles and support a path toward the introduction of safe new therapies.

Criminal ban on human gene-editing of inheritable cells (in Canada)

I had no idea there was a criminal ban on the practice until reading this January 2017 editorial by Bartha Maria Knoppers, Rosario Isasi, Timothy Caulfield, Erika Kleiderman, Patrick Bedford, Judy Illes, Ubaka Ogbogu, Vardit Ravitsky, & Michael Rudnicki for (Nature) npj Regenerative Medicine (Note: Links have been removed),

Driven by the rapid evolution of gene editing technologies, international policy is examining which regulatory models can address the ensuing scientific, socio-ethical and legal challenges for regenerative and personalised medicine.1 Emerging gene editing technologies, including the CRISPR/Cas9 2015 scientific breakthrough,2 are powerful, relatively inexpensive, accurate, and broadly accessible research tools.3 Moreover, they are being utilised throughout the world in a wide range of research initiatives with a clear eye on potential clinical applications. Considering the implications of human gene editing for selection, modification and enhancement, it is time to re-examine policy in Canada relevant to these important advances in the history of medicine and science, and the legislative and regulatory frameworks that govern them. Given the potential human reproductive applications of these technologies, careful consideration of these possibilities, as well as ethical and regulatory scrutiny must be a priority.4

With the advent of human embryonic stem cell research in 1978, the birth of Dolly (the cloned sheep) in 1996 and the Raelian cloning hoax in 2003, the environment surrounding the enactment of Canada’s 2004 Assisted Human Reproduction Act (AHRA) was the result of a decade of polarised debate,5 fuelled by dystopian and utopian visions for future applications. Rightly or not, this led to the AHRA prohibition on a wide range of activities, including the creation of embryos (s. 5(1)(b)) or chimeras (s. 5(1)(i)) for research and in vitro and in vivo germ line alterations (s. 5(1)(f)). Sanctions range from a fine (up to $500,000) to imprisonment (up to 10 years) (s. 60 AHRA).

In Canada, the criminal ban on gene editing appears clear, the Act states that “No person shall knowingly […] alter the genome of a cell of a human being or in vitro embryo such that the alteration is capable of being transmitted to descendants;” [emphases mine] (s. 5(1)(f) AHRA). This approach is not shared worldwide as other countries such as the United Kingdom, take a more regulatory approach to gene editing research.1 Indeed, as noted by the Law Reform Commission of Canada in 1982, criminal law should be ‘an instrument of last resort’ used solely for “conduct which is culpable, seriously harmful, and generally conceived of as deserving of punishment”.6 A criminal ban is a suboptimal policy tool for science as it is inflexible, stifles public debate, and hinders responsiveness to the evolving nature of science and societal attitudes.7 In contrast, a moratorium such as the self-imposed research moratorium on human germ line editing called for by scientists in December 20158 can at least allow for a time limited pause. But like bans, they may offer the illusion of finality and safety while halting research required to move forward and validate innovation.

On October 1st, 2016, Health Canada issued a Notice of Intent to develop regulations under the AHRA but this effort is limited to safety and payment issues (i.e. gamete donation). Today, there is a need for Canada to revisit the laws and policies that address the ethical, legal and social implications of human gene editing. The goal of such a critical move in Canada’s scientific and legal history would be a discussion of the right of Canadians to benefit from the advancement of science and its applications as promulgated in article 27 of the Universal Declaration of Human Rights9 and article 15(b) of the International Covenant on Economic, Social and Cultural Rights,10 which Canada has signed and ratified. Such an approach would further ensure the freedom of scientific endeavour both as a principle of a liberal democracy and as a social good, while allowing Canada to be engaged with the international scientific community.

Even though it’s a bit old, I still recommend reading the open access editorial in full, if you have the time.

One last thing about the paper: the acknowledgements,

Sponsored by Canada’s Stem Cell Network, the Centre of Genomics and Policy of McGill University convened a ‘think tank’ on the future of human gene editing in Canada with legal and ethics experts as well as representatives and observers from government in Ottawa (August 31, 2016). The experts were Patrick Bedford, Janetta Bijl, Timothy Caulfield, Judy Illes, Rosario Isasi, Jonathan Kimmelman, Erika Kleiderman, Bartha Maria Knoppers, Eric Meslin, Cate Murray, Ubaka Ogbogu, Vardit Ravitsky, Michael Rudnicki, Stephen Strauss, Philip Welford, and Susan Zimmerman. The observers were Geneviève Dubois-Flynn, Danika Goosney, Peter Monette, Kyle Norrie, and Anthony Ridgway.

Competing interests

The authors declare no competing interests.

Both McGill and the Stem Cell Network pop up again. A November 8, 2017 article about the need for new Canadian gene-editing policies by Tom Blackwell for the National Post features some familiar names (Did someone have a budget for public relations and promotion?),

It’s one of the most exciting, and controversial, areas of health science today: new technology that can alter the genetic content of cells, potentially preventing inherited disease — or creating genetically enhanced humans.

But Canada is among the few countries in the world where working with the CRISPR gene-editing system on cells whose DNA can be passed down to future generations is a criminal offence, with penalties of up to 10 years in jail.

This week, one major science group announced it wants that changed, calling on the federal government to lift the prohibition and allow researchers to alter the genome of inheritable “germ” cells and embryos.

The potential of the technology is huge and the theoretical risks like eugenics or cloning are overplayed, argued a panel of the Stem Cell Network.

The step would be a “game-changer,” said Bartha Knoppers, a health-policy expert at McGill University, in a presentation to the annual Till & McCulloch Meetings of stem-cell and regenerative-medicine researchers [These meetings were originally known as the Stem Cell Network’s Annual General Meeting {AGM}]. [emphases mine]

“I’m completely against any modification of the human genome,” said the unidentified meeting attendee. “If you open this door, you won’t ever be able to close it again.”

If the ban is kept in place, however, Canadian scientists will fall further behind colleagues in other countries, say the experts behind the statement; they argue possible abuses can be prevented with good ethical oversight.

“It’s a human-reproduction law, it was never meant to ban and slow down and restrict research,” said Vardit Ravitsky, a University of Montreal bioethicist who was part of the panel. “It’s a sort of historical accident … and now our hands are tied.”

There are fears, as well, that CRISPR could be used to create improved humans who are genetically programmed to have certain facial or other features, or that the editing could have harmful side effects. Regardless, none of it is happening in Canada, good or bad.

In fact, the Stem Cell Network panel is arguably skirting around the most contentious applications of the technology. It says it is asking the government merely to legalize research for its own sake on embryos and germ cells — those in eggs and sperm — not genetic editing of embryos used to actually get women pregnant.

The highlighted portions in the last two paragraphs of the excerpt were written one year prior to the claims by a Chinese scientist that he had run a clinical trial resulting in gene-edited twins, Lulu and Nana. (See my November 28, 2018 posting for a comprehensive overview of the original furor.) I have yet to publish a follow-up posting featuring the news that the CRISPR twins may have been ‘improved’ more extensively than originally realized. The initial reports about the twins focused on an illness-related reason (making them HIV ‘immune’) but made no mention of enhanced cognitive skills as a possible side effect of eliminating the gene that would make them HIV ‘immune’. To date, the researcher has not made the bulk of his data available for an in-depth analysis to support his claim that he successfully gene-edited the twins. As well, there were apparently seven other pregnancies coming to term as part of the researcher’s clinical trial and there has been no news about those births.

Risk analysis innovation

Before moving on to the innovation of risk analysis, I want to focus a little more on at least one of the risks that gene-editing might present. Gierczak noted that CRISPR/Cas9 is “not perfect,” which acknowledges the truth but doesn’t convey all that much information.

While the terms ‘precision’ and ‘scissors’ are used frequently when describing the CRISPR technique, scientists actually mean that the technique is significantly ‘more precise’ than other techniques; they are not referencing an engineering level of precision. As for the ‘scissors’, it’s an analogy scientists like to use, but in fact CRISPR is neither as efficient nor as precise as a pair of scissors.
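To make the ‘relative precision’ point a little more concrete, here is a minimal, purely illustrative Python sketch. It is my own toy example: the genome string, the guide sequence, and the function name are invented for illustration, and this is not a real guide-design tool. It reflects the textbook description of Cas9: the enzyme is directed by a roughly 20-letter guide sequence to a matching stretch of DNA sitting next to an ‘NGG’ motif (the PAM) and cuts about three letters upstream of that motif; stretches of DNA that differ from the guide by only a letter or two can also be bound, which is one source of off-target edits.

# Toy illustration of CRISPR-Cas9 target matching (not a real design tool).
# Assumptions: a 20-nt guide, an "NGG" PAM on the same strand, a cut site
# roughly 3 nt upstream of the PAM, and off-targets modelled simply as
# near-matches to the guide sequence.

def find_cut_sites(dna, guide, max_mismatches=0):
    """Return (cut_position, mismatches) for every guide-length window that
    is followed by an NGG PAM and differs from the guide by at most
    max_mismatches letters."""
    dna, guide = dna.upper(), guide.upper()
    hits = []
    for i in range(len(dna) - len(guide) - 2):
        window = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if pam[1:] == "GG":  # any letter followed by GG counts as an NGG PAM
            mismatches = sum(a != b for a, b in zip(window, guide))
            if mismatches <= max_mismatches:
                # Cas9 cuts roughly 3 nt upstream of the PAM
                hits.append((i + len(guide) - 3, mismatches))
    return hits

# Made-up DNA containing a perfect target and a one-letter near-match.
genome = ("ATCGGACTTACGGATCCTGAAGGTTAACCGG"
          "TTTTATCGGACTTACGGATCCTGTAGGTTAA")
guide = "ATCGGACTTACGGATCCTGA"  # invented 20-nt guide sequence

print(find_cut_sites(genome, guide, max_mismatches=0))  # the intended site only
print(find_cut_sites(genome, guide, max_mismatches=2))  # intended site plus near-match

Run as written, the first call reports only the perfect match while the second also reports the one-letter near-match, which is the point: ‘precise’ here means ‘targeted to a chosen sequence’, not ‘incapable of acting anywhere else’. Real off-target prediction, and the large deletions described in the excerpt below, involve biology this sketch doesn’t attempt to model.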

Michael Le Page in a July 16, 2018 article for New Scientist lays out some of the issues (Note: A link has been removed),

A study of CRISPR suggests we shouldn’t rush into trying out CRISPR genome editing inside people’s bodies just yet. The technique can cause big deletions or rearrangements of DNA [emphasis mine], says Allan Bradley of the Wellcome Sanger Institute in the UK, meaning some therapies based on CRISPR may not be quite as safe as we thought.

The CRISPR genome editing technique is revolutionising biology, enabling us to create new varieties of plants and animals and develop treatments for a wide range of diseases.

The CRISPR Cas9 protein works by cutting the DNA of a cell in a specific place. When the cell repairs the damage, a few DNA letters get changed at this spot – an effect that can be exploited to disable genes.

At least, that’s how it is supposed to work. But in studies of mice and human cells, Bradley’s team has found that in around a fifth of cells, CRISPR causes deletions or rearrangements more than 100 DNA letters long. These surprising changes are sometimes thousands of letters long.

“I do believe the findings are robust,” says Gaetan Burgio of the Australian National University, an expert on CRISPR who has debunked previous studies questioning the method’s safety. “This is a well-performed study and fairly significant.”

I covered the Bradley paper and the concerns in a July 17, 2018 posting, ‘The CRISPR (clustered regularly interspaced short palindromic repeats)-CAS9 gene-editing technique may cause new genetic damage kerfuffle‘. (The ‘kerfuffle’ was in reference to a report that the CRISPR market was affected by the publication of Bradley’s paper.)

Although Health Canada has not moved swiftly enough for some researchers, it has nonetheless managed to release an ‘outcome’ report about a consultation/analysis started in October 2016. Before getting to the consultation’s outcome, it’s interesting to look at how the consultation’s call for responses was described (from Health Canada’s Toward a strengthened Assisted Human Reproduction Act; A Consultation with Canadians on Key Policy Proposals webpage),

In October 2016, recognizing the need to strengthen the regulatory framework governing assisted human reproduction in Canada, Health Canada announced its intention to bring into force the dormant sections of the Assisted Human Reproduction Act and to develop the necessary supporting regulations.

This consultation document provides an overview of the key policy proposals that will help inform the development of regulations to support bringing into force Section 10, Section 12 and Sections 45-58 of the Act. Specifically, the policy proposals describe the Department’s position on the following:

Section 10: Safety of Donor Sperm and Ova

  • Scope and application
  • Regulated parties and their regulatory obligations
  • Processing requirements, including donor suitability assessment
  • Record-keeping and traceability

Section 12: Reimbursement

  • Expenditures that may be reimbursed
  • Process for reimbursement
  • Creation and maintenance of records

Sections 45-58: Administration and Enforcement

  • Scope of the administration and enforcement framework
  • Role of inspectors designated under the Act

The purpose of the document is to provide Canadians with an opportunity to review the policy proposals and to provide feedback [emphasis mine] prior to the Department finalizing policy decisions and developing the regulations. In addition to requesting stakeholders’ general feedback on the policy proposals, the Department is also seeking input on specific questions, which are included throughout the document.

It took me a while to find the relevant section (in particular, take note of ‘Federal Regulatory Oversight’),

3.2. AHR in Canada Today

Today, an increasing number of Canadians are turning to AHR technologies to grow or build their families. A 2012 Canadian study found that infertility is on the rise in Canada, with roughly 16% of heterosexual couples experiencing infertility. In addition to rising infertility, the trend of delaying marriage and parenthood, scientific advances in cryopreserving ova, and the increasing use of AHR by LGBTQ2 couples and single parents to build a family are all contributing to an increase in the use of AHR technologies.

The growing use of reproductive technologies by Canadians to help build their families underscores the need to strengthen the AHR Act. While the approach to regulating AHR varies from country to country, Health Canada has considered international best practices and the need for regulatory alignment when developing the proposed policies set out in this document. …

3.2.1 Federal Regulatory Oversight

Although the scope of the AHR Act was significantly reduced in 2012 and some of the remaining sections have not yet been brought into force, there are many important sections of the Act that are currently administered and enforced by Health Canada, as summarized generally below:

Section 5: Prohibited Scientific and Research Procedures
Section 5 prohibits certain types of scientific research and clinical procedures that are deemed unacceptable, including: human cloning, the creation of an embryo for non-reproductive purposes, maintaining an embryo outside the human body beyond the fourteenth day, sex selection for non-medical reasons, altering the genome in a way that could be transmitted to descendants, and creating a chimera or a hybrid. [emphasis mine]

….

It almost seems as if they were hiding the section that broached the human gene-editing question. It doesn’t seem to have worked as, it appears, there are some very motivated parties determined to reframe the discussion. Health Canada’s ‘outcome’ report, published March 2019, What we heard: A summary of scanning and consultations on what’s next for health product regulation reflects the success of those efforts,

1.0 Introduction and Context

Scientific and technological advances are accelerating the pace of innovation. These advances are increasingly leading to the development of health products that are better able to predict, define, treat, and even cure human diseases. Globally, many factors are driving regulators to think about how to enable health innovation. To this end, Health Canada has been expanding beyond existing partnerships and engaging both domestically and internationally. This expanding landscape of products and services comes with a range of new challenges and opportunities.

In keeping up to date with emerging technologies and working collaboratively through strategic partnerships, Health Canada seeks to position itself as a regulator at the forefront of health innovation. Following the targeted sectoral review of the Health and Biosciences Sector Regulatory Review consultation by the Treasury Board Secretariat, Health Canada held a number of targeted meetings with a broad range of stakeholders.

This report outlines the methodologies used to look ahead at the emerging health technology environment, [emphasis mine] the potential areas of focus that resulted, and the key findings from consultations.

… the Department identified the following key drivers that are expected to shape the future of health innovation:

  1. The use of “big data” to inform decision-making: Health systems are generating more data, and becoming reliant on this data. The increasing accuracy, types, and volume of data available in real time enable automation and machine learning that can forecast activity, behaviour, or trends to support decision-making.
  2. Greater demand for citizen agency: Canadians increasingly want and have access to more information, resources, options, and platforms to manage their own health (e.g., mobile apps, direct-to-consumer services, decentralization of care).
  3. Increased precision and personalization in health care delivery: Diagnostic tools and therapies are increasingly able to target individual patients with customized therapies (e.g., individual gene therapy).
  4. Increased product complexity: Increasingly complex products do not fit well within conventional product classifications and standards (e.g., 3D printing).
  5. Evolving methods for production and distribution: In some cases, manufacturers and supply chains are becoming more distributed, challenging the current framework governing production and distribution of health products.
  6. The ways in which evidence is collected and used are changing: The processes around new drug innovation, research and development, and designing clinical trials are evolving in ways that are more flexible and adaptive.

With these key drivers in mind, the Department selected the following six emerging technologies for further investigation to better understand how the health product space is evolving:

  1. Artificial intelligence, including activities such as machine learning, neural networks, natural language processing, and robotics.
  2. Advanced cell therapies, such as individualized cell therapies tailor-made to address specific patient needs.
  3. Big data, from sources such as sensors, genetic information, and social media that are increasingly used to inform patient and health care practitioner decisions.
  4. 3D printing of health products (e.g., implants, prosthetics, cells, tissues).
  5. New ways of delivering drugs that bring together different product lines and methods (e.g., nano-carriers, implantable devices).
  6. Gene editing, including individualized gene therapies that can assist in preventing and treating certain diseases.

Next, to test the drivers identified and further investigate emerging technologies, the Department consulted key organizations and thought leaders across the country with expertise in health innovation. To this end, Health Canada held seven workshops with over 140 representatives from industry associations, small-to-medium sized enterprises and start-ups, larger multinational companies, investors, researchers, and clinicians in Ottawa, Toronto, Montreal, and Vancouver. [emphases mine]

The ‘outcome’ report, ‘What we heard …’, is well worth reading in its entirety; it’s about 9 pp.

I have one comment: ‘stakeholders’ don’t seem to include anyone who isn’t “from industry associations, small-to-medium sized enterprises and start-ups, larger multinational companies, investors, researchers, and clinicians” or from “Ottawa, Toronto, Montreal, and Vancouver.” Aren’t the rest of us stakeholders?

Innovating risk analysis

This line in the report caught my eye (from Health Canada’s Toward a strengthened Assisted Human Reproduction Act; A Consultation with Canadians on Key Policy Proposals webpage),

There is increasing need to enable innovation in a flexible, risk-based way, with appropriate oversight to ensure safety, quality, and efficacy. [emphases mine]

It reminded me of the 2019 federal budget (from my March 22, 2019 posting). One comment before proceeding: regulation and risk are tightly linked, so by innovating regulation they are, by extension, also innovating risk analysis,

… Budget 2019 introduces the first three “Regulatory Roadmaps” to specifically address stakeholder issues and irritants in these sectors, informed by over 140 responses [emphasis mine] from businesses and Canadians across the country, as well as recommendations from the Economic Strategy Tables.

Introducing Regulatory Roadmaps

These Roadmaps lay out the Government’s plans to modernize regulatory frameworks, without compromising our strong health, safety, and environmental protections. They contain proposals for legislative and regulatory amendments as well as novel regulatory approaches to accommodate emerging technologies, including the use of regulatory sandboxes and pilot projects—better aligning our regulatory frameworks with industry realities.

Budget 2019 proposes the necessary funding and legislative revisions so that regulatory departments and agencies can move forward on the Roadmaps, including providing the Canadian Food Inspection Agency, Health Canada and Transport Canada with up to $219.1 million over five years, starting in 2019–20, (with $0.5 million in remaining amortization), and $3.1 million per year on an ongoing basis.

In the coming weeks, the Government will be releasing the full Regulatory Roadmaps for each of the reviews, as well as timelines for enacting specific initiatives, which can be grouped in the following three main areas:

What Is a Regulatory Sandbox? Regulatory sandboxes are controlled “safe spaces” in which innovative products, services, business models and delivery mechanisms can be tested without immediately being subject to all of the regulatory requirements.
– European Banking Authority, 2017

Establishing a regulatory sandbox for new and innovative medical products
The regulatory approval system has not kept up with new medical technologies and processes. Health Canada proposes to modernize regulations to put in place a regulatory sandbox for new and innovative products, such as tissues developed through 3D printing, artificial intelligence, and gene therapies targeted to specific individuals. [emphasis mine]

Modernizing the regulation of clinical trials
Industry and academics have expressed concerns that regulations related to clinical trials are overly prescriptive and inconsistent. Health Canada proposes to implement a risk-based approach [emphasis mine] to clinical trials to reduce costs to industry and academics by removing unnecessary requirements for low-risk drugs and trials. The regulations will also provide the agri-food industry with the ability to carry out clinical trials within Canada on products such as food for special dietary use and novel foods.

Does the government always get 140 responses from a consultation process? Moving on, I agree with finding new approaches to regulatory processes and oversight and, by extension, new approaches to risk analysis.

Earlier in this post, I asked if someone had a budget for public relations/promotion. I wasn’t joking. My March 22, 2019 posting also included these line items in the proposed 2019 budget,

Budget 2019 proposes to make additional investments in support of the following organizations:
Stem Cell Network: Stem cell research—pioneered by two Canadians in the 1960s [James Till and Ernest McCulloch]—holds great promise for new therapies and medical treatments for respiratory and heart diseases, spinal cord injury, cancer, and many other diseases and disorders. The Stem Cell Network is a national not-for-profit organization that helps translate stem cell research into clinical applications and commercial products. To support this important work and foster Canada’s leadership in stem cell research, Budget 2019 proposes to provide the Stem Cell Network with renewed funding of $18 million over three years, starting in 2019–20.

Genome Canada: The insights derived from genomics—the study of the entire genetic information of living things encoded in their DNA and related molecules and proteins—hold the potential for breakthroughs that can improve the lives of Canadians and drive innovation and economic growth. Genome Canada is a not-for-profit organization dedicated to advancing genomics science and technology in order to create economic and social benefits for Canadians. To support Genome Canada’s operations, Budget 2019 proposes to provide Genome Canada with $100.5 million over five years, starting in 2020–21. This investment will also enable Genome Canada to launch new large-scale research competitions and projects, in collaboration with external partners, ensuring that Canada’s research community continues to have access to the resources needed to make transformative scientific breakthroughs and translate these discoveries into real-world applications.

Years ago, I managed to find a webpage with all of the proposals various organizations were submitting to a government budget committee. It was eye-opening. You could tell which organizations had been able to hire someone who knew the current government buzzwords and the things a government bureaucrat would want to hear, and which organizations hadn’t.

Of course, if the government of the day is adamantly against or uninterested, no amount of persuasion will work to get your organization more money in the budget.

Finally

Reluctantly, I am inclined to explore the topic of emerging technologies such as gene-editing not only in the field of agriculture (for gene-editing of plants, fish, and animals see my November 28, 2018 posting) but also with humans. At the very least, we need to discuss whether we choose to participate or not.

If you are interested in the arguments against changing Canada’s prohibition on gene-editing of humans, there’s an October 2, 2017 posting on Impact Ethics by Françoise Baylis, Professor and Canada Research Chair in Bioethics and Philosophy at Dalhousie University, and Alana Cattapan of the Johnson Shoyama Graduate School of Public Policy at the University of Saskatchewan, which makes some compelling arguments. Of course, it was written before the CRISPR twins (my November 28, 2018 posting).

Recalling CRISPR Therapeutics (mentioned by Gierczak), the company received permission to run clinical trials in the US in October 2018 after the FDA (US Food and Drug Administration) lifted an earlier ban on their trials, according to an Oct. 10, 2018 article by Frank Vinhuan for exome,

The partners also noted that their therapy is making progress outside of the U.S. They announced that they have received regulatory clearance in “multiple countries” to begin tests of the experimental treatment in both sickle cell disease and beta thalassemia, …

It seems to me that the quotes around “multiple countries” are meant to suggest doubt of some kind. Generally speaking, company representatives make those kinds of generalizations when they’re trying to pump up their copy. E.g., a 50% increase in attendance but no whole numbers to tell you what that means. It could mean two people attended the first year and then brought a friend the next year, or that 100 people attended and the next year there were 150.

Despite attempts to declare that personalized medicine has arrived, I think everything is still in flux with no preordained outcome. The future has yet to be determined, but it will be, and I, for one, would like to have some say in the matter.