
Gene editing and personalized medicine: Canada

Back in the fall of 2018 I came across one of those overexcited pieces about personalized medicine and gene editing that are out there. This one came from an unexpected source, an author who is a “PhD Scientist in Medical Science (Blood and Vasculature)” (from Rick Gierczak’s LinkedIn profile).

It starts out promisingly enough, although I’m beginning to dread the use of the word ‘precise’ where medicine is concerned (from a September 17, 2018 posting on the Science Borealis blog by Rick Gierczak; Note: Links have been removed),

CRISPR-Cas9 technology was accidentally discovered in the 1980s when scientists were researching how bacteria defend themselves against viral infection. While studying bacterial DNA called clustered regularly interspaced short palindromic repeats (CRISPR), they identified additional CRISPR-associated (Cas) protein molecules. Together, CRISPR and one of those protein molecules, termed Cas9, can locate and cut precise regions of bacterial DNA. By 2012, researchers understood that the technology could be modified and used more generally to edit the DNA of any plant or animal. In 2015, the American Association for the Advancement of Science chose CRISPR-Cas9 as science’s “Breakthrough of the Year”.

Today, CRISPR-Cas9 is a powerful and precise gene-editing tool [emphasis mine] made of two molecules: a protein that cuts DNA (Cas9) and a custom-made length of RNA that works like a GPS for locating the exact spot that needs to be edited (CRISPR). Once inside the target cell nucleus, these two molecules begin editing the DNA. After the desired changes are made, they use a repair mechanism to stitch the new DNA into place. Cas9 never changes, but the CRISPR molecule must be tailored for each new target — a relatively easy process in the lab. However, it’s not perfect, and occasionally the wrong DNA is altered [emphasis mine].

Note that Gierczak makes a point of mentioning that CRISPR/Cas9 is “not perfect.” And then, he gets excited (Note: Links have been removed),

CRISPR-Cas9 has the potential to treat serious human diseases, many of which are caused by a single “letter” mutation in the genetic code (A, C, T, or G) that could be corrected by precise editing. [emphasis mine] Some companies are taking notice of the technology. A case in point is CRISPR Therapeutics, which recently developed a treatment for sickle cell disease, a blood disorder that causes a decrease in oxygen transport in the body. The therapy targets a special gene called fetal hemoglobin that’s switched off a few months after birth. Treatment involves removing stem cells from the patient’s bone marrow and editing the gene to turn it back on using CRISPR-Cas9. These new stem cells are returned to the patient ready to produce normal red blood cells. In this case, the risk of error is eliminated because the new cells are screened for the correct edit before use.

The breakthroughs shown by companies like CRISPR Therapeutics are evidence that personalized medicine has arrived. [emphasis mine] However, these discoveries will require government regulatory approval from the countries where the treatment is going to be used. In the US, the Food and Drug Administration (FDA) has developed new regulations allowing somatic (i.e., non-germ) cell editing and clinical trials to proceed. [emphasis mine]

The potential treatment for sickle cell disease is exciting but Gierczak offers no evidence that this treatment or any unnamed others constitute proof that “personalized medicine has arrived.” In fact, Goldman Sachs, a US-based investment bank, makes the case that it never will.

Cost/benefit analysis

Edward Abrahams, president of the Personalized Medicine Coalition (US-based), advocates for personalized medicine while noting, in passing, the market forces represented by Goldman Sachs, in his May 23, 2018 piece for statnews.com (Note: A link has been removed),

One of every four new drugs approved by the Food and Drug Administration over the last four years was designed to become a personalized (or “targeted”) therapy that zeros in on the subset of patients likely to respond positively to it. That’s a sea change from the way drugs were developed and marketed 10 years ago.

Some of these new treatments have extraordinarily high list prices. But focusing solely on the cost of these therapies rather than on the value they provide threatens the future of personalized medicine.

… most policymakers are not asking the right questions about the benefits of these treatments for patients and society. Influenced by cost concerns, they assume that prices for personalized tests and treatments cannot be justified even if they make the health system more efficient and effective by delivering superior, longer-lasting clinical outcomes and increasing the percentage of patients who benefit from prescribed treatments.

Goldman Sachs, for example, issued a report titled “The Genome Revolution.” It argues that while “genome medicine” offers “tremendous value for patients and society,” curing patients may not be “a sustainable business model.” [emphasis mine] The analysis underlines that the health system is not set up to reap the benefits of new scientific discoveries and technologies. Just as we are on the precipice of an era in which gene therapies, gene-editing, and immunotherapies promise to address the root causes of disease, Goldman Sachs says that these therapies have a “very different outlook with regard to recurring revenue versus chronic therapies.”

Let’s just chew on (contemplate) this one for a minute: “curing patients may not be ‘a sustainable business model’!”

Coming down to earth: policy

While I find Gierczak to be over-enthused, he, like Abrahams, emphasizes the importance of new policy; in his case, the focus is Canadian policy. From Gierczak’s September 17, 2018 posting (Note: Links have been removed),

In Canada, companies need approval from Health Canada. But a 2004 law called the Assisted Human Reproduction Act (AHR Act) states that it’s a criminal offence “to alter the genome of a human cell, or in vitro embryo, that is capable of being transmitted to descendants”. The Act is so broadly written that Canadian scientists are prohibited from using the CRISPR-Cas9 technology on even somatic cells. Today, Canada is one of the few countries in the world where treating a disease with CRISPR-Cas9 is a crime.

On the other hand, some countries provide little regulatory oversight for editing either germ or somatic cells. In China, a company often only needs to satisfy the requirements of the local hospital where the treatment is being performed. And, if germ-cell editing goes wrong, there is little recourse for the future generations affected.

The AHR Act was introduced to regulate the use of reproductive technologies like in vitro fertilization and research related to cloning human embryos during the 1980s and 1990s. Today, we live in a time when medical science, and its role in Canadian society, is rapidly changing. CRISPR-Cas9 is a powerful tool, and there are aspects of the technology that aren’t well understood and could potentially put patients at risk if we move ahead too quickly. But the potential benefits are significant. Updated legislation that acknowledges both the risks and current realities of genomic engineering [emphasis mine] would relieve the current obstacles and support a path toward the introduction of safe new therapies.

Criminal ban on human gene-editing of inheritable cells (in Canada)

I had no idea there was a criminal ban on the practice until reading this January 2017 editorial by Bartha Maria Knoppers, Rosario Isasi, Timothy Caulfield, Erika Kleiderman, Patrick Bedford, Judy Illes, Ubaka Ogbogu, Vardit Ravitsky, & Michael Rudnicki for (Nature) npj Regenerative Medicine (Note: Links have been removed),

Driven by the rapid evolution of gene editing technologies, international policy is examining which regulatory models can address the ensuing scientific, socio-ethical and legal challenges for regenerative and personalised medicine.1 Emerging gene editing technologies, including the CRISPR/Cas9 2015 scientific breakthrough,2 are powerful, relatively inexpensive, accurate, and broadly accessible research tools.3 Moreover, they are being utilised throughout the world in a wide range of research initiatives with a clear eye on potential clinical applications. Considering the implications of human gene editing for selection, modification and enhancement, it is time to re-examine policy in Canada relevant to these important advances in the history of medicine and science, and the legislative and regulatory frameworks that govern them. Given the potential human reproductive applications of these technologies, careful consideration of these possibilities, as well as ethical and regulatory scrutiny must be a priority.4

With the advent of human embryonic stem cell research in 1978, the birth of Dolly (the cloned sheep) in 1996 and the Raelian cloning hoax in 2003, the environment surrounding the enactment of Canada’s 2004 Assisted Human Reproduction Act (AHRA) was the result of a decade of polarised debate,5 fuelled by dystopian and utopian visions for future applications. Rightly or not, this led to the AHRA prohibition on a wide range of activities, including the creation of embryos (s. 5(1)(b)) or chimeras (s. 5(1)(i)) for research and in vitro and in vivo germ line alterations (s. 5(1)(f)). Sanctions range from a fine (up to $500,000) to imprisonment (up to 10 years) (s. 60 AHRA).

In Canada, the criminal ban on gene editing appears clear, the Act states that “No person shall knowingly […] alter the genome of a cell of a human being or in vitro embryo such that the alteration is capable of being transmitted to descendants;” [emphases mine] (s. 5(1)(f) AHRA). This approach is not shared worldwide as other countries such as the United Kingdom, take a more regulatory approach to gene editing research.1 Indeed, as noted by the Law Reform Commission of Canada in 1982, criminal law should be ‘an instrument of last resort’ used solely for “conduct which is culpable, seriously harmful, and generally conceived of as deserving of punishment”.6 A criminal ban is a suboptimal policy tool for science as it is inflexible, stifles public debate, and hinders responsiveness to the evolving nature of science and societal attitudes.7 In contrast, a moratorium such as the self-imposed research moratorium on human germ line editing called for by scientists in December 20158 can at least allow for a time limited pause. But like bans, they may offer the illusion of finality and safety while halting research required to move forward and validate innovation.

On October 1st, 2016, Health Canada issued a Notice of Intent to develop regulations under the AHRA but this effort is limited to safety and payment issues (i.e. gamete donation). Today, there is a need for Canada to revisit the laws and policies that address the ethical, legal and social implications of human gene editing. The goal of such a critical move in Canada’s scientific and legal history would be a discussion of the right of Canadians to benefit from the advancement of science and its applications as promulgated in article 27 of the Universal Declaration of Human Rights9 and article 15(b) of the International Covenant on Economic, Social and Cultural Rights,10 which Canada has signed and ratified. Such an approach would further ensure the freedom of scientific endeavour both as a principle of a liberal democracy and as a social good, while allowing Canada to be engaged with the international scientific community.

Even though it’s a bit old, I still recommend reading the open access editorial in full, if you have the time.

One last thing about the paper, the acknowledgements,

Sponsored by Canada’s Stem Cell Network, the Centre of Genomics and Policy of McGill University convened a ‘think tank’ on the future of human gene editing in Canada with legal and ethics experts as well as representatives and observers from government in Ottawa (August 31, 2016). The experts were Patrick Bedford, Janetta Bijl, Timothy Caulfield, Judy Illes, Rosario Isasi, Jonathan Kimmelman, Erika Kleiderman, Bartha Maria Knoppers, Eric Meslin, Cate Murray, Ubaka Ogbogu, Vardit Ravitsky, Michael Rudnicki, Stephen Strauss, Philip Welford, and Susan Zimmerman. The observers were Geneviève Dubois-Flynn, Danika Goosney, Peter Monette, Kyle Norrie, and Anthony Ridgway.

Competing interests

The authors declare no competing interests.

Both McGill and the Stem Cell Network pop up again. A November 8, 2017 article about the need for new Canadian gene-editing policies by Tom Blackwell for the National Post features some familiar names (Did someone have a budget for public relations and promotion?),

It’s one of the most exciting, and controversial, areas of health science today: new technology that can alter the genetic content of cells, potentially preventing inherited disease — or creating genetically enhanced humans.

But Canada is among the few countries in the world where working with the CRISPR gene-editing system on cells whose DNA can be passed down to future generations is a criminal offence, with penalties of up to 10 years in jail.

This week, one major science group announced it wants that changed, calling on the federal government to lift the prohibition and allow researchers to alter the genome of inheritable “germ” cells and embryos.

The potential of the technology is huge and the theoretical risks like eugenics or cloning are overplayed, argued a panel of the Stem Cell Network.

The step would be a “game-changer,” said Bartha Knoppers, a health-policy expert at McGill University, in a presentation to the annual Till & McCulloch Meetings of stem-cell and regenerative-medicine researchers [These meetings were originally known as the Stem Cell Network’s Annual General Meeting {AGM}]. [emphases mine]

“I’m completely against any modification of the human genome,” said the unidentified meeting attendee. “If you open this door, you won’t ever be able to close it again.”

If the ban is kept in place, however, Canadian scientists will fall further behind colleagues in other countries, say the experts behind the statement; they argue possible abuses can be prevented with good ethical oversight.

“It’s a human-reproduction law, it was never meant to ban and slow down and restrict research,” said Vardit Ravitsky, a University of Montreal bioethicist who was part of the panel. “It’s a sort of historical accident … and now our hands are tied.”

There are fears, as well, that CRISPR could be used to create improved humans who are genetically programmed to have certain facial or other features, or that the editing could have harmful side effects. Regardless, none of it is happening in Canada, good or bad.

In fact, the Stem Cell Network panel is arguably skirting around the most contentious applications of the technology. It says it is asking the government merely to legalize research for its own sake on embryos and germ cells — those in eggs and sperm — not genetic editing of embryos used to actually get women pregnant.

The highlighted portions in the last two paragraphs of the excerpt were written one year prior to the claims by a Chinese scientist that he had run a clinical trial resulting in gene-edited twins, Lulu and Nana. (See my November 28, 2018 posting for a comprehensive overview of the original furor.) I have yet to publish a followup posting featuring the news that the CRISPR twins may have been ‘improved’ more extensively than originally realized. The initial reports about the twins focused on an illness-related reason (making them HIV ‘immune’) but made no mention of enhanced cognitive skills, a side effect of eliminating the gene that would make them HIV ‘immune’. To date, the researcher has not made the bulk of his data available for an in-depth analysis to support his claim that he successfully gene-edited the twins. As well, there were apparently seven other pregnancies coming to term as part of the researcher’s clinical trial and there has been no news about those births.

Risk analysis innovation

Before moving onto the innovation of risk analysis, I want to focus a little more on at least one of the risks that gene-editing might present. Gierczak noted that CRISPR/Cas9 is “not perfect,” which acknowledges the truth but doesn’t convey all that much information.

While the terms ‘precision’ and ‘scissors’ are used frequently when describing the CRISPR technique, scientists actually mean that the technique is significantly ‘more precise’ than other techniques but they are not referencing an engineering level of precision. As for the ‘scissors’, it’s an analogy scientists like to use but in fact CRISPR is not as efficient and precise as a pair of scissors.
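To make that point about relative precision concrete, here is a toy sketch of my own (not any real guide-design tool, and the sequences are invented for illustration): Cas9 is steered by a roughly 20-letter guide sequence that must sit next to an “NGG” PAM motif in the DNA, but stretches of DNA that differ from the guide by a few letters can also be recognized, which is one route to off-target edits.

```python
# Toy illustration: a Cas9 guide matches its 20-letter target next to an
# "NGG" PAM, but near-identical sequences elsewhere can also qualify.
# This is a simplification for illustration, not a real off-target predictor.

def pam_sites(dna, guide, max_mismatches=3):
    """Return (position, mismatches) for guide-length windows followed by NGG."""
    n = len(guide)
    hits = []
    for i in range(len(dna) - n - 2):
        pam = dna[i + n:i + n + 3]
        if pam[1:] != "GG":          # Cas9 requires an NGG motif after the target
            continue
        mismatches = sum(a != b for a, b in zip(dna[i:i + n], guide))
        if mismatches <= max_mismatches:
            hits.append((i, mismatches))
    return hits

guide = "GACGTTACCGGATTACGCTA"
dna = ("TTT" + guide + "TGG" +               # intended site (0 mismatches)
       "AAAA" + guide[:18] + "CC" + "AGG")   # near-match site (2 mismatches)

for pos, mm in pam_sites(dna, guide):
    print(f"site at {pos}: {mm} mismatch(es)")
# prints:
# site at 3: 0 mismatch(es)
# site at 30: 2 mismatch(es)
```

Both sites satisfy the search criteria, even though only one is the intended target; in a three-billion-letter genome, the number of such near-matches is what makes ‘precision’ a relative term.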

Michael Le Page in a July 16, 2018 article for New Scientist lays out some of the issues (Note: A link has been removed),

A study of CRISPR suggests we shouldn’t rush into trying out CRISPR genome editing inside people’s bodies just yet. The technique can cause big deletions or rearrangements of DNA [emphasis mine], says Allan Bradley of the Wellcome Sanger Institute in the UK, meaning some therapies based on CRISPR may not be quite as safe as we thought.

The CRISPR genome editing technique is revolutionising biology, enabling us to create new varieties of plants and animals and develop treatments for a wide range of diseases.

The CRISPR Cas9 protein works by cutting the DNA of a cell in a specific place. When the cell repairs the damage, a few DNA letters get changed at this spot – an effect that can be exploited to disable genes.

At least, that’s how it is supposed to work. But in studies of mice and human cells, Bradley’s team has found that in around a fifth of cells, CRISPR causes deletions or rearrangements more than 100 DNA letters long. These surprising changes are sometimes thousands of letters long.

“I do believe the findings are robust,” says Gaetan Burgio of the Australian National University, an expert on CRISPR who has debunked previous studies questioning the method’s safety. “This is a well-performed study and fairly significant.”
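To get a rough sense of what “around a fifth of cells” means at scale, here is a back-of-the-envelope sketch of my own. It assumes a flat, independent 20% per-cell rate, which is a deliberately simplified round number, not the study’s exact figure:

```python
# Back-of-the-envelope sketch: assume each edited cell independently has a
# 20% chance of acquiring a large deletion or rearrangement (an assumed
# round number for illustration, not the study's exact value).
p_damage = 0.20

for n_cells in (10, 100, 1000):
    expected_damaged = p_damage * n_cells
    # chance that not a single one of the edited cells is damaged
    p_all_clean = (1 - p_damage) ** n_cells
    print(f"{n_cells:>4} cells: ~{expected_damaged:.0f} with large deletions, "
          f"P(none damaged) = {p_all_clean:.2e}")
```

The toy numbers make the stakes plain: at a one-in-five per-cell rate, unintended damage is a near-certainty somewhere in any clinically sized batch of edited cells, which is why screening cells before returning them to a patient (as in the ex vivo sickle cell approach Gierczak describes) matters so much.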

I covered the Bradley paper and the concerns in a July 17, 2018 posting, ‘The CRISPR (clustered regularly interspaced short palindromic repeats)-CAS9 gene-editing technique may cause new genetic damage kerfuffle’. (The ‘kerfuffle’ was in reference to a report that the CRISPR market was affected by the publication of Bradley’s paper.)

Despite Health Canada not moving swiftly enough for some researchers, it has nonetheless managed to release an ‘outcome’ report about a consultation/analysis started in October 2016. Before getting to the consultation’s outcome, it’s interesting to look at how the consultation’s call for responses was described (from Health Canada’s Toward a strengthened Assisted Human Reproduction Act: A Consultation with Canadians on Key Policy Proposals webpage),

In October 2016, recognizing the need to strengthen the regulatory framework governing assisted human reproduction in Canada, Health Canada announced its intention to bring into force the dormant sections of the Assisted Human Reproduction Act  and to develop the necessary supporting regulations.

This consultation document provides an overview of the key policy proposals that will help inform the development of regulations to support bringing into force Section 10, Section 12 and Sections 45-58 of the Act. Specifically, the policy proposals describe the Department’s position on the following:

Section 10: Safety of Donor Sperm and Ova

  • Scope and application
  • Regulated parties and their regulatory obligations
  • Processing requirements, including donor suitability assessment
  • Record-keeping and traceability

Section 12: Reimbursement

  • Expenditures that may be reimbursed
  • Process for reimbursement
  • Creation and maintenance of records

Sections 45-58: Administration and Enforcement

  • Scope of the administration and enforcement framework
  • Role of inspectors designated under the Act

The purpose of the document is to provide Canadians with an opportunity to review the policy proposals and to provide feedback [emphasis mine] prior to the Department finalizing policy decisions and developing the regulations. In addition to requesting stakeholders’ general feedback on the policy proposals, the Department is also seeking input on specific questions, which are included throughout the document.

It took me a while to find the relevant section (in particular, take note of ‘Federal Regulatory Oversight’),

3.2. AHR in Canada Today

Today, an increasing number of Canadians are turning to AHR technologies to grow or build their families. A 2012 Canadian study [footnote 1] found that infertility is on the rise in Canada, with roughly 16% of heterosexual couples experiencing infertility. In addition to rising infertility, the trend of delaying marriage and parenthood, scientific advances in cryopreserving ova, and the increasing use of AHR by LGBTQ2 couples and single parents to build a family are all contributing to an increase in the use of AHR technologies.

The growing use of reproductive technologies by Canadians to help build their families underscores the need to strengthen the AHR Act. While the approach to regulating AHR varies from country to country, Health Canada has considered international best practices and the need for regulatory alignment when developing the proposed policies set out in this document. …

3.2.1 Federal Regulatory Oversight

Although the scope of the AHR Act was significantly reduced in 2012 and some of the remaining sections have not yet been brought into force, there are many important sections of the Act that are currently administered and enforced by Health Canada, as summarized generally below:

Section 5: Prohibited Scientific and Research Procedures
Section 5 prohibits certain types of scientific research and clinical procedures that are deemed unacceptable, including: human cloning, the creation of an embryo for non-reproductive purposes, maintaining an embryo outside the human body beyond the fourteenth day, sex selection for non-medical reasons, altering the genome in a way that could be transmitted to descendants, and creating a chimera or a hybrid. [emphasis mine]

….

It almost seems as if they were hiding the section that broached the human gene-editing question. It doesn’t seem to have worked as, it appears, there are some very motivated parties determined to reframe the discussion. Health Canada’s ‘outcome’ report, published March 2019, What we heard: A summary of scanning and consultations on what’s next for health product regulation reflects the success of those efforts,

1.0 Introduction and Context

Scientific and technological advances are accelerating the pace of innovation. These advances are increasingly leading to the development of health products that are better able to predict, define, treat, and even cure human diseases. Globally, many factors are driving regulators to think about how to enable health innovation. To this end, Health Canada has been expanding beyond existing partnerships and engaging both domestically and internationally. This expanding landscape of products and services comes with a range of new challenges and opportunities.

In keeping up to date with emerging technologies and working collaboratively through strategic partnerships, Health Canada seeks to position itself as a regulator at the forefront of health innovation. Following the targeted sectoral review of the Health and Biosciences Sector Regulatory Review consultation by the Treasury Board Secretariat, Health Canada held a number of targeted meetings with a broad range of stakeholders.

This report outlines the methodologies used to look ahead at the emerging health technology environment, [emphasis mine] the potential areas of focus that resulted, and the key findings from consultations.

… the Department identified the following key drivers that are expected to shape the future of health innovation:

  1. The use of “big data” to inform decision-making: Health systems are generating more data, and becoming reliant on this data. The increasing accuracy, types, and volume of data available in real time enable automation and machine learning that can forecast activity, behaviour, or trends to support decision-making.
  2. Greater demand for citizen agency: Canadians increasingly want and have access to more information, resources, options, and platforms to manage their own health (e.g., mobile apps, direct-to-consumer services, decentralization of care).
  3. Increased precision and personalization in health care delivery: Diagnostic tools and therapies are increasingly able to target individual patients with customized therapies (e.g., individual gene therapy).
  4. Increased product complexity: Increasingly complex products do not fit well within conventional product classifications and standards (e.g., 3D printing).
  5. Evolving methods for production and distribution: In some cases, manufacturers and supply chains are becoming more distributed, challenging the current framework governing production and distribution of health products.
  6. The ways in which evidence is collected and used are changing: The processes around new drug innovation, research and development, and designing clinical trials are evolving in ways that are more flexible and adaptive.

With these key drivers in mind, the Department selected the following six emerging technologies for further investigation to better understand how the health product space is evolving:

  1. Artificial intelligence, including activities such as machine learning, neural networks, natural language processing, and robotics.
  2. Advanced cell therapies, such as individualized cell therapies tailor-made to address specific patient needs.
  3. Big data, from sources such as sensors, genetic information, and social media that are increasingly used to inform patient and health care practitioner decisions.
  4. 3D printing of health products (e.g., implants, prosthetics, cells, tissues).
  5. New ways of delivering drugs that bring together different product lines and methods (e.g., nano-carriers, implantable devices).
  6. Gene editing, including individualized gene therapies that can assist in preventing and treating certain diseases.

Next, to test the drivers identified and further investigate emerging technologies, the Department consulted key organizations and thought leaders across the country with expertise in health innovation. To this end, Health Canada held seven workshops with over 140 representatives from industry associations, small-to-medium sized enterprises and start-ups, larger multinational companies, investors, researchers, and clinicians in Ottawa, Toronto, Montreal, and Vancouver. [emphases mine]

The ‘outcome’ report, ‘What we heard …’, is well worth reading in its entirety; it’s about 9 pp.

I have one comment: ‘stakeholders’ don’t seem to include anyone who isn’t “from industry associations, small-to-medium sized enterprises and start-ups, larger multinational companies, investors, researchers, and clinicians” or from “Ottawa, Toronto, Montreal, and Vancouver.” Aren’t the rest of us stakeholders?

Innovating risk analysis

This line in the report caught my eye (from Health Canada’s Toward a strengthened Assisted Human Reproduction Act: A Consultation with Canadians on Key Policy Proposals webpage),

There is increasing need to enable innovation in a flexible, risk-based way, with appropriate oversight to ensure safety, quality, and efficacy. [emphases mine]

It reminded me of the 2019 federal budget (from my March 22, 2019 posting). One comment before proceeding: regulation and risk are tightly linked, so by innovating regulation they are, by extension, also innovating risk analysis,

… Budget 2019 introduces the first three “Regulatory Roadmaps” to specifically address stakeholder issues and irritants in these sectors, informed by over 140 responses [emphasis mine] from businesses and Canadians across the country, as well as recommendations from the Economic Strategy Tables.

Introducing Regulatory Roadmaps

These Roadmaps lay out the Government’s plans to modernize regulatory frameworks, without compromising our strong health, safety, and environmental protections. They contain proposals for legislative and regulatory amendments as well as novel regulatory approaches to accommodate emerging technologies, including the use of regulatory sandboxes and pilot projects—better aligning our regulatory frameworks with industry realities.

Budget 2019 proposes the necessary funding and legislative revisions so that regulatory departments and agencies can move forward on the Roadmaps, including providing the Canadian Food Inspection Agency, Health Canada and Transport Canada with up to $219.1 million over five years, starting in 2019–20, (with $0.5 million in remaining amortization), and $3.1 million per year on an ongoing basis.

In the coming weeks, the Government will be releasing the full Regulatory Roadmaps for each of the reviews, as well as timelines for enacting specific initiatives, which can be grouped in the following three main areas:

What Is a Regulatory Sandbox? Regulatory sandboxes are controlled “safe spaces” in which innovative products, services, business models and delivery mechanisms can be tested without immediately being subject to all of the regulatory requirements.
– European Banking Authority, 2017

Establishing a regulatory sandbox for new and innovative medical products
The regulatory approval system has not kept up with new medical technologies and processes. Health Canada proposes to modernize regulations to put in place a regulatory sandbox for new and innovative products, such as tissues developed through 3D printing, artificial intelligence, and gene therapies targeted to specific individuals. [emphasis mine]

Modernizing the regulation of clinical trials
Industry and academics have expressed concerns that regulations related to clinical trials are overly prescriptive and inconsistent. Health Canada proposes to implement a risk-based approach [emphasis mine] to clinical trials to reduce costs to industry and academics by removing unnecessary requirements for low-risk drugs and trials. The regulations will also provide the agri-food industry with the ability to carry out clinical trials within Canada on products such as food for special dietary use and novel foods.

Does the government always get 140 responses from a consultation process? Moving on, I agree with finding new approaches to regulatory processes and oversight and, by extension, new approaches to risk analysis.

Earlier in this post, I asked if someone had a budget for public relations/promotion. I wasn’t joking. My March 22, 2019 posting also included these line items in the proposed 2019 budget,

Budget 2019 proposes to make additional investments in support of the following organizations:
Stem Cell Network: Stem cell research—pioneered by two Canadians in the 1960s [James Till and Ernest McCulloch]—holds great promise for new therapies and medical treatments for respiratory and heart diseases, spinal cord injury, cancer, and many other diseases and disorders. The Stem Cell Network is a national not-for-profit organization that helps translate stem cell research into clinical applications and commercial products. To support this important work and foster Canada’s leadership in stem cell research, Budget 2019 proposes to provide the Stem Cell Network with renewed funding of $18 million over three years, starting in 2019–20.

Genome Canada: The insights derived from genomics—the study of the entire genetic information of living things encoded in their DNA and related molecules and proteins—hold the potential for breakthroughs that can improve the lives of Canadians and drive innovation and economic growth. Genome Canada is a not-for-profit organization dedicated to advancing genomics science and technology in order to create economic and social benefits for Canadians. To support Genome Canada’s operations, Budget 2019 proposes to provide Genome Canada with $100.5 million over five years, starting in 2020–21. This investment will also enable Genome Canada to launch new large-scale research competitions and projects, in collaboration with external partners, ensuring that Canada’s research community continues to have access to the resources needed to make transformative scientific breakthroughs and translate these discoveries into real-world applications.

Years ago, I managed to find a webpage with all of the proposals various organizations were submitting to a government budget committee. It was eye-opening. You could tell which organizations had hired someone who knew the current government buzzwords and what a government bureaucrat would want to hear, and which organizations hadn’t.

Of course, if the government of the day is adamantly against or uninterested, no amount of persuasion will get your organization more money in the budget.

Finally

Reluctantly, I am inclined to explore the topic of emerging technologies such as gene-editing not only in the field of agriculture (for gene-editing of plants, fish, and animals see my November 28, 2018 posting) but also with humans. At the very least, it needs to be discussed whether we choose to participate or not.

If you are interested in the arguments against changing Canada’s prohibition against gene-editing of humans, there’s an October 2, 2017 posting on Impact Ethics by Françoise Baylis, Professor and Canada Research Chair in Bioethics and Philosophy at Dalhousie University, and Alana Cattapan, Johnson Shoyama Graduate School of Public Policy at the University of Saskatchewan, which makes some compelling arguments. Of course, it was written before the CRISPR twins (my November 28, 2018 posting).

Recalling CRISPR Therapeutics (mentioned by Gierczak), the company received permission to run clinical trials in the US in October 2018 after the FDA (US Food and Drug Administration) lifted an earlier ban on its trials, according to an Oct. 10, 2018 article by Frank Vinhuan for exome,

The partners also noted that their therapy is making progress outside of the U.S. They announced that they have received regulatory clearance in “multiple countries” to begin tests of the experimental treatment in both sickle cell disease and beta thalassemia, …

It seems to me that the quotes around “multiple countries” are meant to suggest doubt of some kind. Generally speaking, company representatives make those kinds of generalizations when they’re trying to pump up their copy, e.g., a “50% increase in attendance” with no whole numbers to tell you what that means. It could mean two people attended the first year and brought a friend the next, or that 100 people attended and the next year there were 150.
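The arithmetic behind that point is easy to demonstrate; a trivial sketch (the attendance figures are, of course, invented):

```python
def percent_increase(before, after):
    """Relative change expressed as a percentage of the starting value."""
    return (after - before) / before * 100

# Two very different stories produce the same headline number:
print(percent_increase(2, 3))      # 2 attendees, one brings a friend -> 50.0
print(percent_increase(100, 150))  # 100 attendees growing to 150     -> 50.0
```

Without the absolute numbers, a “50% increase” is compatible with both scenarios, which is exactly why vague percentages in promotional copy deserve suspicion.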

Despite attempts to declare that personalized medicine has arrived, I think everything is still in flux with no preordained outcome. The future has yet to be determined, but it will be, and I, for one, would like to have some say in the matter.

Summer (2019) Institute on AI (artificial intelligence) Societal Impacts, Governance, and Ethics in Alberta, Canada

The deadline for applications is April 7, 2019. As for whether or not you might like to attend, here’s more from a joint March 11, 2019 Alberta Machine Intelligence Institute (Amii)/Canadian Institute for Advanced Research (CIFAR)/University of California at Los Angeles (UCLA) Law School news release (also on globalnewswire.com),

What will Artificial Intelligence (AI) mean for society? That’s the question scholars from a variety of disciplines will explore during the inaugural Summer Institute on AI Societal Impacts, Governance, and Ethics. Summer Institute, co-hosted by the Alberta Machine Intelligence Institute (Amii) and CIFAR, with support from UCLA School of Law, takes place July 22-24, 2019 in Edmonton, Canada.

“Recent advances in AI have brought a surge of attention to the field – both excitement and concern,” says co-organizer and UCLA professor, Edward Parson. “From algorithmic bias to autonomous vehicles, personal privacy to automation replacing jobs. Summer Institute will bring together exceptional people to talk about how humanity can receive the benefits and not get the worst harms from these rapid changes.”

Summer Institute brings together experts, grad students and researchers from multiple backgrounds to explore the societal, governmental, and ethical implications of AI. A combination of lectures, panels, and participatory problem-solving, this comprehensive interdisciplinary event aims to build understanding and action around these high-stakes issues.

“Machine intelligence is opening transformative opportunities across the world,” says John Shillington, CEO of Amii, “and Amii is excited to bring together our own world-leading researchers with experts from areas such as law, philosophy and ethics for this important discussion. Interdisciplinary perspectives will be essential to the ongoing development of machine intelligence and for ensuring these opportunities have the broadest reach possible.”

Over the three-day program, 30 graduate-level students and early-career researchers will engage with leading experts and researchers including event co-organizers: Western University’s Daniel Lizotte, Amii’s Alona Fyshe and UCLA’s Edward Parson. Participants will also have a chance to shape the curriculum throughout this uniquely interactive event.

Summer Institute takes place prior to Deep Learning and Reinforcement Learning Summer School, and includes a combined event on July 24th [2019] for both Summer Institute and Summer School participants.

Visit dlrlsummerschool.ca/the-summer-institute to apply; applications close April 7, 2019.

View our Summer Institute Biographies & Boilerplates for more information on confirmed faculty members and co-hosting organizations. Follow the conversation through social media channels using the hashtag #SI2019.

Media Contact: Spencer Murray, Director of Communications & Public Relations, Amii
t: 587.415.6100 | c: 780.991.7136 | e: spencer.murray@amii.ca

There’s a bit more information on The Summer Institute on AI and Society webpage (on the Deep Learning and Reinforcement Learning Summer School 2019 website) such as this more complete list of speakers,

Confirmed speakers at Summer Institute include:

Alona Fyshe, University of Alberta/Amii (SI co-organizer)
Edward Parson, UCLA (SI co-organizer)
Daniel Lizotte, Western University (SI co-organizer)
Geoffrey Rockwell, University of Alberta
Graham Taylor, University of Guelph/Vector Institute
Rob Lempert, Rand Corporation
Gary Marchant, Arizona State University
Richard Re, UCLA
Evan Selinger, Rochester Institute of Technology
Elana Zeide, UCLA

Two questions: why are all the summer school faculty either Canada- or US-based? And what about South American, Asian, Middle Eastern, etc. thinkers?

One last thought, I wonder if this ‘AI & ethics summer institute’ has anything to do with the Pan-Canadian Artificial Intelligence Strategy, which CIFAR administers and where both the University of Alberta and Vector Institute are members.

Media registration for United Nations 3rd AI (artificial intelligence) for Good Global Summit

This is strictly for folks who have media accreditation. First, the news about the summit and then some detail about how you might gain accreditation should you be interested in going to Switzerland. Warning: The International Telecommunications Union, which is holding this summit, is a United Nations agency, and you will note almost an entire paragraph of ‘alphabet soup’ when all the ‘sister’ agencies involved are listed.

From the March 21, 2019 International Telecommunications Union (ITU) media advisory (Note: There have been some changes to the formatting),

Geneva, 21 March 2019
Artificial Intelligence (AI) has taken giant leaps forward in recent years, inspiring growing confidence in AI’s ability to assist in solving some of humanity’s greatest challenges. Leaders in AI and humanitarian action are convening on the neutral platform offered by the United Nations to work towards AI improving the quality and sustainability of life on our planet.
The 2017 summit marked the beginning of global dialogue on the potential of AI to act as a force for good. The action-oriented 2018 summit gave rise to numerous ‘AI for Good’ projects, including an ‘AI for Health’ Focus Group, now led by ITU and the World Health Organization (WHO). The 2019 summit will continue to connect AI innovators with public and private-sector decision-makers, building collaboration to maximize the impact of ‘AI for Good’.

Organized by the International Telecommunication Union (ITU) – the United Nations specialized agency for information and communication technology (ICT) – in partnership with the XPRIZE Foundation, the Association for Computing Machinery (ACM) and close to 30 sister United Nations agencies, the 3rd annual AI for Good Global Summit in Geneva, 28-31 May, is the leading United Nations platform for inclusive dialogue on AI. The goal of the summit is to identify practical applications of AI to accelerate progress towards the United Nations Sustainable Development Goals.

►►► MEDIA REGISTRATION IS NOW OPEN ◄◄◄

Media are recommended to register in advance to receive key announcements in the run-up to the summit.

WHAT: The summit attracts a cross-section of AI experts from industry and academia, global business leaders, Heads of UN agencies, ICT ministers, non-governmental organizations, and civil society.

The summit is designed to generate ‘AI for Good’ projects able to be enacted in the near term, guided by the summit’s multi-stakeholder and inter-disciplinary audience. It also formulates supporting strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

The 2019 summit will highlight AI’s value in advancing education, healthcare and wellbeing, social and economic equality, space research, and smart and safe mobility. It will propose actions to assist high-potential AI solutions in achieving global scale. It will host debate around unintended consequences of AI as well as AI’s relationship with art and culture. A ‘learning day’ will offer potential AI adopters an audience with leading AI experts and educators.

A dynamic show floor will demonstrate innovations at the cutting edge of AI research and development, such as the IBM Watson live debater; the Fusion collaborative exoskeleton; RoboRace, the world’s first self-driving electric racing car; avatar prototypes, and the ElliQ social robot for the care of the elderly. Summit attendees can also look forward to AI-inspired performances from world-renowned musician Jojo Mayer and award-winning vocal and visual artist Reeps One.

WHEN: 28-31 May 2019
WHERE: International Conference Centre Geneva, 17 Rue de Varembé, Geneva, Switzerland

WHO: Over 100 speakers have been confirmed to date, including:

Jim Hagemann Snabe – Chairman, Siemens​​
Cédric Villani – AI advisor to the President of France, and Mathematics Fields Medal Winner
Jean-Philippe Courtois – President of Global Operations, Microsoft
Anousheh Ansari – CEO, XPRIZE Foundation, Space Ambassador
Yves Daccord – Director General, International Committee of the Red Cross
Yan Huang – Director AI Innovation, Baidu
Timnit Gebru – Head of AI Ethics, Google
Vladimir Kramnik – World Chess Champion
Vicki Hanson – CEO, ACM
Zoubin Ghahramani – Chief Scientist, Uber, and Professor of Engineering, University of Cambridge
Lucas di Grassi – Formula E World Racing Champion, CEO of Roborace

Confirmed speakers also include C-level and expert representatives of Bosch, Botnar Foundation, Byton, Cambridge Quantum Computing, the cities of Montreal and Pittsburg, Darktrace, Deloitte, EPFL, European Space Agency, Factmata, Google, IBM, IEEE, IFIP, Intel, IPSoft, Iridescent, MasterCard, Mechanica.ai, Minecraft, NASA, Nethope, NVIDIA, Ocean Protocol, Open AI, Philips, PWC, Stanford University, University of Geneva, and WWF.

Please visit the summit programme for more information on the latest speakers, breakthrough sessions and panels.

The summit is organized in partnership with the following sister United Nations agencies: CTBTO, ICAO, ILO, IOM, UNAIDS, UNCTAD, UNDESA, UNDPA, UNEP, UNESCO, UNFPA, UNGP, UNHCR, UNICEF, UNICRI, UNIDIR, UNIDO, UNISDR, UNITAR, UNODA, UNODC, UNOOSA, UNOPS, UNU, WBG, WFP, WHO, and WIPO.

The 2019 summit is kindly supported by Platinum Sponsor and Strategic Partner, Microsoft; Gold Sponsors, ACM, the Kay Family Foundation, Mind.ai and the Autonomous Driver Alliance; Silver Sponsors, Deloitte and the Zero Abuse Project; and Bronze Sponsor, Live Tiles.​

More information available at aiforgood.itu.int
Join the conversation on social media using the hashtag #AIforGood

As promised here are the media accreditation details from the ITU Media Registration and Accreditation webpage,

To gain media access, ITU must confirm your status as a bona fide member of the media. Therefore, please read ITU’s Media Accreditation Guidelines below so you are aware of the information you will be required to submit for ITU to confirm such status. ​
Media accreditation is not granted to 1) non-editorial staff working for a publishing house (e.g. management, marketing, advertising executives, etc.); 2) researchers, academics, authors or editors of directories; 3) employees of information outlets of public, non-governmental or private entities that are not first and foremost media organizations; 4) members of professional broadcasting or media associations, 5) press or communication professionals accompanying member state delegations; and 6) citizen journalists under no apparent editorial board oversight. If you have questions about your eligibility, please email us at pressreg@itu.int.​

Applications for accreditation are considered on a case-by-case basis and ITU reserves the right to request additional proof or documentation other than what is listed below. Media accreditation decisions rest with ITU and all decisions are final.

Accreditation eligibility & credentials
1. Journalists* should provide an official letter of assignment from the Editor-in-Chief (or the News Editor for radio/TV). One letter per crew/editorial team will suffice. Editors-in-Chief and Bureau Chiefs should submit a letter from their Director. Please email this to pressreg@itu.int along with the required supporting credentials, based on the type of media organization you work for:

Print and online publications should be available to the general public and published at least 6 times a year by an organization whose principal business activity is publishing and which generally carries paid advertising;
o please submit 2 copies or links to recent byline articles published within the last 4 months.

News wire services should provide news coverage to subscribers, including newspapers, periodicals and/or television networks;
o please submit 2 copies or links to recent byline articles or broadcasting material published within the last 4 months.

Broadcast media should provide news and information programmes to the general public. Independent film and video production companies can only be accredited if officially mandated by a broadcast station via a letter of assignment;
o please submit broadcasting material published within the last 4 months.

Freelance journalists and photographers must provide clear documentation that they are on assignment from a specific news organization or publication. Evidence that they regularly supply journalistic content to recognized media may be acceptable in the absence of an assignment letter and at the discretion of the ITU Corporate Communication Division.
o if possible, please submit a valid assignment letter from the news organization or publication.

2. Bloggers and community media may be granted accreditation if the content produced is deemed relevant to the industry, contains news commentary, is regularly updated and/or made publicly available. Corporate bloggers may register as normal participants (not media). Please see Guidelines for Bloggers and Community Media Accreditation below for more details:

Special guidelines for bloggers and community media accreditation

ITU is committed to working with independent and ‘new media’ reporters and columnists who reach their audiences via blogs, podcasts, video blogs, community or online radio, limited print formats which generally carry paid advertising and other online media. These are some of the guidelines we use to determine whether to accredit bloggers and community media representatives:

ITU reserves the right to request traffic data from a third party (Sitemeter, Technorati, Feedburner, iTunes or equivalent) when considering your application. While the decision to grant access is not based solely on traffic/subscriber data, we ask that applicants provide sufficient transparency into their operations to help us make a fair and timely decision. If your media outlet is new, you must have an established record of having written extensively on ICT issues and must present copies or links to two recently published videos, podcasts or articles with your byline.

Obtaining media accreditation for ITU events is an opportunity to meet and interact with key industry and political figures. While continued accreditation for ITU events is not directly contingent on producing coverage, owing to space limitations we may take this into consideration when processing future accreditation requests. Following any ITU event for which you are accredited, we therefore kindly request that you forward a link to your post/podcast/video blog to pressreg@itu.int.

Bloggers who are granted access to ITU events are expected to act professionally. Those who do not maintain the standards expected of professional media representatives run the risk of having their accreditation withdrawn.

UN-accredited media

Media already accredited and badged by the United Nations are automatically accredited and registered by ITU. In this case, you only need to send a copy of your UN badge to pressreg@itu.int to make sure you receive your event badge. Anyone joining an ITU event MUST have an event badge in order to access the premises. Please make sure you let us know in advance that you are planning to attend so your event badge is ready for printing and pick-up.

You can register and get accreditation here (scroll past the guidelines). Good luck!

Artificial intelligence (AI) brings together International Telecommunications Union (ITU) and World Health Organization (WHO) and AI outperforms animal testing

Following on from my May 11, 2018 posting about the International Telecommunications Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there’s an announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunications Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group for AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group for AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes in his July 25, 2018 essay (written for The Conversation) on phys.org the situation where chemical testing is concerned,

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially in times when the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often properties. Thomas Hartung, CC BY-SA

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far apart where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous, and even more so if many toxic substances are close and harmless substances are far. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.
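The published RASAR pipeline is far more elaborate (10 million structures, 74 characteristics, half a billion calculations per chemical), but the core read-across idea Hartung describes, predicting an untested chemical from its nearest neighbours in a structural-feature space, can be sketched as a simple nearest-neighbour vote. Everything below (the three-feature space, the feature vectors, the labels) is invented for illustration:

```python
import math
from collections import Counter

def hazard_by_read_across(query, neighbours, k=3):
    """Predict a hazard label for `query` by majority vote among its
    k nearest neighbours, where `neighbours` is a list of
    (feature_vector, label) pairs."""
    ranked = sorted(neighbours, key=lambda nb: math.dist(query, nb[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy stand-ins for a few of the 74 structural characteristics:
known_chemicals = [
    ((0.9, 0.8, 0.1), "toxic"),
    ((0.8, 0.9, 0.2), "toxic"),
    ((0.1, 0.2, 0.9), "harmless"),
    ((0.2, 0.1, 0.8), "harmless"),
]
print(hazard_by_read_across((0.85, 0.75, 0.15), known_chemicals))  # -> toxic
```

The real method also borrows evidence across related endpoints (skin irritation informing eye irritation, in Hartung’s example), which a plain nearest-neighbour vote does not capture.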

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests, which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorizations and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers would have not registered chemicals after 2013, and instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting; that skepticism came from someone I thought knew better.

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

AI (artificial intelligence) text generator, too dangerous to release?

Could this latest version of OpenAI‘s text generator be so good that it would fool you? And, following on that thought, could the concomitant reluctance to release the research be real, or is it a publicity stunt? Here’s a sample of the text from the GPT2 AI model, from a February 15, 2019 article by Mark Frauenfelder for Boing Boing,

Recycling is good for the world.
NO! YOU COULD NOT BE MORE WRONG!!
MODEL COMPLETION (MACHINE-WRITTEN, 25 TRIES)
Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources. And THAT is why we need to get back to basics and get back to basics in our recycling efforts. One of the best ways to start is to look at the process of creating a paper product. When you make a paper product, it is basically a long chain of materials. Everything from the raw materials (wood, cardboard, paper, etc.), to the reagents (dyes, solvents, etc.) to the printing equipment (chemicals, glue, paper, ink, etc.), to the packaging, to the packaging materials (mercury, chemicals, etc.) to the processing equipment (heating, cooling, etc.), to the packaging materials, to the packaging materials that are shipped overseas and to the packaging materials that are used in the United States. Each step along the way creates tons of waste that we constantly have to clean up. The process of making a paper product is a very wasteful one. But the end result is something that all of us need to consume. And if we want to keep the recycling process running efficiently, then we really need to think about each and every step that goes into making a paper product.

The first few sentences don’t work for me, but once the discussion turns to making paper products, it becomes more convincing. As to whether the company’s reluctance to release the research is genuine or a publicity stunt, I don’t know. However, there was a fair degree of interest in GPT2 after the decision.

From a February 14, 2019 article by Alex Hern for the Guardian,

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with: …

Sean Gallagher’s February 15, 2019 posting on the Ars Technica blog provides some insight that’s partially written in a style sometimes associated with gossip (Note: Links have been removed),

OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal “mafia”—Elon Musk, Peter Thiel, Jessica Livingston, and Sam Altman of YCombinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. [emphasis mine] Brockman now serves as OpenAI’s CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology—ideally moving it away from potentially harmful applications.

Given present-day concerns about how fake content has been used to both generate money for “fake news” publishers and potentially spread misinformation and undermine public debate, GPT-2’s output certainly qualifies as concerning. Unlike other text generation “bot” models, such as those based on Markov chain algorithms, the GPT-2 “bot” did not lose track of what it was writing about as it generated output, keeping everything in context.

For example: given a two-sentence entry, GPT-2 generated a fake science story on the discovery of unicorns in the Andes, a story about the economic impact of Brexit, a report about a theft of nuclear materials near Cincinnati, a story about Miley Cyrus being caught shoplifting, and a student’s report on the causes of the US Civil War.

Each matched the style of the genre from the writing prompt, including manufacturing quotes from sources. In other samples, GPT-2 generated a rant about why recycling is bad, a speech written by John F. Kennedy’s brain transplanted into a robot (complete with footnotes about the feat itself), and a rewrite of a scene from The Lord of the Rings.

While the model required multiple tries to get a good sample, GPT-2 generated “good” results based on “how familiar the model is with the context,” the researchers wrote. “When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50 percent of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.”

There were some weak spots encountered in GPT-2’s word modeling—for example, the researchers noted it sometimes “writes about fires happening under water.” But the model could be fine-tuned to specific tasks and perform much better. “We can fine-tune GPT-2 on the Amazon Reviews dataset and use this to let us write reviews conditioned on things like star rating and category,” the authors explained.

James Vincent’s February 14, 2019 article for The Verge offers a deeper dive into the world of AI text agents and what makes GPT2 so special (Note: Links have been removed),

For decades, machines have struggled with the subtleties of human language, and even the recent boom in deep learning powered by big data and improved processors has failed to crack this cognitive challenge. Algorithmic moderators still overlook abusive comments, and the world’s most talkative chatbots can barely keep a conversation alive. But new methods for analyzing text, developed by heavyweights like Google and OpenAI as well as independent researchers, are unlocking previously unheard-of talents.

OpenAI’s new algorithm, named GPT-2, is one of the most exciting examples yet. It excels at a task known as language modeling, which tests a program’s ability to predict the next word in a given sentence. Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt.

The writing it produces is usually easily identifiable as non-human. Although its grammar and spelling are generally correct, it tends to stray off topic, and the text it produces lacks overall coherence. But what’s really impressive about GPT-2 is not its fluency but its flexibility.

This algorithm was trained on the task of language modeling by ingesting huge numbers of articles, blogs, and websites. By using just this data — and with no retooling from OpenAI’s engineers — it achieved state-of-the-art scores on a number of unseen language tests, an achievement known as “zero-shot learning.” It can also perform other writing-related tasks, like translating text from one language to another, summarizing long articles, and answering trivia questions.

GPT-2 does each of these jobs less competently than a specialized system, but its flexibility is a significant achievement. Nearly all machine learning systems used today are “narrow AI,” meaning they’re able to tackle only specific tasks. DeepMind’s original AlphaGo program, for example, was able to beat the world’s champion Go player, but it couldn’t best a child at Monopoly. The prowess of GPT-2, say OpenAI, suggests there could be methods available to researchers right now that can mimic more generalized brainpower.

“What the new OpenAI work has shown is that: yes, you absolutely can build something that really seems to ‘understand’ a lot about the world, just by having it read,” says Jeremy Howard, a researcher who was not involved with OpenAI’s work but has developed similar language modeling programs …

To put this work into context, it’s important to understand how challenging the task of language modeling really is. If I asked you to predict the next word in a given sentence — say, “My trip to the beach was cut short by bad __” — your answer would draw upon a range of knowledge. You’d consider the grammar of the sentence and its tone but also your general understanding of the world. What sorts of bad things are likely to ruin a day at the beach? Would it be bad fruit, bad dogs, or bad weather? (Probably the latter.)

Despite this, programs that perform text prediction are quite common. You’ve probably encountered one today, in fact, whether that’s Google’s AutoComplete feature or the Predictive Text function in iOS. But these systems are drawing on relatively simple types of language modeling, while algorithms like GPT-2 encode the same information in more complex ways.

The difference between these two approaches is technically arcane, but it can be summed up in a single word: depth. Older methods record information about words in only their most obvious contexts, while newer methods dig deeper into their multiple meanings.

So while a system like Predictive Text only knows that the word “sunny” is used to describe the weather, newer algorithms know when “sunny” is referring to someone’s character or mood, when “Sunny” is a person, or when “Sunny” means the 1976 smash hit by Boney M.

The success of these newer, deeper language models has caused a stir in the AI community. Researcher Sebastian Ruder compares their success to advances made in computer vision in the early 2010s. At this time, deep learning helped algorithms make huge strides in their ability to identify and categorize visual data, kickstarting the current AI boom. Without these advances, a whole range of technologies — from self-driving cars to facial recognition and AI-enhanced photography — would be impossible today. This latest leap in language understanding could have similar, transformational effects.
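The gap Vincent describes between shallow predictive-text systems and models like GPT-2 is easier to see with a toy example. Below is a minimal bigram (“Markov chain”) predictor of the shallow sort Gallagher contrasts with GPT-2: it only knows each word’s most common follower and has no deeper sense of context. The corpus and function names are my own illustrative inventions, not anything from OpenAI.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = ("it was a sunny day . the weather was sunny . "
          "a sunny mood helps . the day was bright .")
model = train_bigram(corpus)
print(predict_next(model, "a"))   # → 'sunny' (seen twice after "a")
```

A model like this can only echo its training text one word at a time; GPT-2’s advance is conditioning each prediction on a much longer stretch of preceding text, which is why it can hold a topic across a whole paragraph.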

Hern’s February 14, 2019 article for the Guardian acts as a good overview, while Gallagher’s February 15, 2019 Ars Technica posting and Vincent’s February 14, 2019 article for The Verge take you progressively deeper into the world of AI text agents.

For anyone who wants to dig down even further, there’s a February 14, 2019 posting on OpenAI’s blog.

Crowdsourcing brain research at Princeton University to discover 6 new neuron types

Spritely music!

There were already a quarter-million registered players as of May 17, 2018, but I’m sure there’s room for more should you be inspired. A May 17, 2018 Princeton University news release (also on EurekAlert) reveals more about the game and about the neurons,

With the help of a quarter-million video game players, Princeton researchers have created and shared detailed maps of more than 1,000 neurons — and they’re just getting started.

“Working with Eyewirers around the world, we’ve made a digital museum that shows off the intricate beauty of the retina’s neural circuits,” said Sebastian Seung, the Evnin Professor in Neuroscience and a professor of computer science and the Princeton Neuroscience Institute (PNI). The related paper is publishing May 17 [2018] in the journal Cell.

Seung is unveiling the Eyewire Museum, an interactive archive of neurons available to the general public and neuroscientists around the world, including the hundreds of researchers involved in the federal Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative.

“This interactive viewer is a huge asset for these larger collaborations, especially among people who are not physically in the same lab,” said Amy Robinson Sterling, a crowdsourcing specialist with PNI and the executive director of Eyewire, the online gaming platform for the citizen scientists who have created this data set.

“This museum is something like a brain atlas,” said Alexander Bae, a graduate student in electrical engineering and one of four co-first authors on the paper. “Previous brain atlases didn’t have a function where you could visualize by individual cell, or a subset of cells, and interact with them. Another novelty: Not only do we have the morphology of each cell, but we also have the functional data, too.”

The neural maps were developed by Eyewirers, members of an online community of video game players who have devoted hundreds of thousands of hours to painstakingly piecing together these neural cells, using data from a mouse retina gathered in 2009.

Eyewire pairs machine learning with gamers who trace the twisting and branching paths of each neuron. Humans are better at visually identifying the patterns of neurons, so every player’s moves are recorded and checked against each other by advanced players and Eyewire staffers, as well as by software that is improving its own pattern recognition skills.

Since Eyewire’s launch in 2012, more than 265,000 people have signed onto the game, and they’ve collectively colored in more than 10 million 3-D “cubes,” resulting in the mapping of more than 3,000 neural cells, of which about a thousand are displayed in the museum.

Each cube is a tiny subset of a single cell, about 4.5 microns across, so a 10-by-10 block of cubes would be the width of a human hair. Every cell is reviewed by between 5 and 25 gamers before it is accepted into the system as complete.

“Back in the early years it took weeks to finish a single cell,” said Sterling. “Now players complete multiple neurons per day.” The Eyewire user experience stays focused on the larger mission — “For science!” is a common refrain — but it also replicates a typical gaming environment, with achievement badges, a chat feature to connect with other players and technical support, and the ability to unlock privileges with increasing skill. “Our top players are online all the time — easily 30 hours a week,” Sterling said.

Dedicated Eyewirers have also contributed in other ways, including donating the swag that gamers win during competitions and writing program extensions “to make game play more efficient and more fun,” said Sterling, including profile histories, maps of player activity, a top 100 leaderboard and ever-increasing levels of customizability.

“The community has really been the driving force behind why Eyewire has been successful,” Sterling said. “You come in, and you’re not alone. Right now, there are 43 people online. Some of them will be admins from Boston or Princeton, but most are just playing — now it’s 46.”

For science!

With 100 billion neurons linked together via trillions of connections, the brain is immeasurably complex, and neuroscientists are still assembling its “parts list,” said Nicholas Turner, a graduate student in computer science and another of the co-first authors. “If you know what parts make up the machine you’re trying to break apart, you’re set to figure out how it all works,” he said.

The researchers have started by tackling Eyewire-mapped ganglion cells from the retina of a mouse. “The retina doesn’t just sense light,” Seung said. “Neural circuits in the retina perform the first steps of visual perception.”

The retina grows from the same embryonic tissue as the brain, and while much simpler than the brain, it is still surprisingly complex, Turner said. “Hammering out these details is a really valuable effort,” he said, “showing the depth and complexity that exists in circuits that we naively believe are simple.”

The researchers’ fundamental question is identifying exactly how the retina works, said Bae. “In our case, we focus on the structural morphology of the retinal ganglion cells.”

“Why the ganglion cells of the eye?” asked Shang Mu, an associate research scholar in PNI and fellow first author. “Because they’re the connection between the retina and the brain. They’re the only cell class that go back into the brain.” Different types of ganglion cells are known to compute different types of visual features, which is one reason the museum has linked shape to functional data.

Using Eyewire-produced maps of 396 ganglion cells, the researchers in Seung’s lab successfully classified these cells more thoroughly than has ever been done before.

“The number of different cell types was a surprise,” said Mu. “Just a few years ago, people thought there were only 15 to 20 ganglion cell types, but we found more than 35 — we estimate between 35 and 50 types.”

Of those, six appear to be novel, in that the researchers could not find any matching descriptions in a literature search.

A brief scroll through the digital museum reveals just how remarkably flat the neurons are — nearly all of the branching takes place along a two-dimensional plane. Seung’s team discovered that different cells grow along different planes, with some reaching high above the nucleus before branching out, while others spread out close to the nucleus. Their resulting diagrams resemble a rainforest, with ground cover, an understory, a canopy and an emergent layer overtopping the rest.

All of these are subdivisions of the inner plexiform layer, one of the five previously recognized layers of the retina. The researchers also identified a “density conservation principle” that they used to distinguish types of neurons.

One of the biggest surprises of the research project has been the extraordinary richness of the original sample, said Seung. “There’s a little sliver of a mouse retina, and almost 10 years later, we’re still learning things from it.”

Of course, it’s a mouse’s brain that you’ll be examining and, while there are differences between a mouse brain and a human brain, mouse brains still provide valuable data, as they did in the case of some groundbreaking research published in October 2017. James Hamblin wrote about it in an Oct. 7, 2017 article for The Atlantic (Note: Links have been removed),

 

Scientists Somehow Just Discovered a New System of Vessels in Our Brains

It is unclear what they do—but they likely play a central role in aging and disease.

Caption: A transparent model of the brain with a network of vessels filled in. Credit: Daniel Reich / National Institute of Neurological Disorders and Stroke

You are now among the first people to see the brain’s lymphatic system. The vessels in the photo above transport fluid that is likely crucial to metabolic and inflammatory processes. Until now, no one knew for sure that they existed.

Doctors practicing today have been taught that there are no lymphatic vessels inside the skull. Those deep-purple vessels were seen for the first time in images published this week by researchers at the U.S. National Institute of Neurological Disorders and Stroke.

In the rest of the body, the lymphatic system collects and drains the fluid that bathes our cells, in the process exporting their waste. It also serves as a conduit for immune cells, which go out into the body looking for adversaries and learning how to distinguish self from other, and then travel back to lymph nodes and organs through lymphatic vessels.

So how was it even conceivable that this process wasn’t happening in our brains?

Reich (Daniel Reich, senior investigator) started his search in 2015, after a major study in Nature reported a similar conduit for lymph in mice. The University of Virginia team wrote at the time, “The discovery of the central-nervous-system lymphatic system may call for a reassessment of basic assumptions in neuroimmunology.” The study was regarded as a potential breakthrough in understanding how neurodegenerative disease is associated with the immune system.

Around the same time, researchers discovered fluid in the brains of mice and humans that would become known as the “glymphatic system.” [emphasis mine] It was described by a team at the University of Rochester in 2015 as not just the brain’s “waste-clearance system,” but as potentially helping fuel the brain by transporting glucose, lipids, amino acids, and neurotransmitters. Although since “the central nervous system completely lacks conventional lymphatic vessels,” the researchers wrote at the time, it remained unclear how this fluid communicated with the rest of the body.

There are occasional references to the idea of a lymphatic system in the brain in historic literature. Two centuries ago, the anatomist Paolo Mascagni made full-body models of the lymphatic system that included the brain, though this was dismissed as an error. [emphases mine]  A historical account in The Lancet in 2003 read: “Mascagni was probably so impressed with the lymphatic system that he saw lymph vessels even where they did not exist—in the brain.”

I couldn’t resist the reference to someone whose work had been dismissed summarily being proved right, eventually, and with the help of mouse brains. Do read Hamblin’s article in its entirety if you have time as these excerpts don’t do it justice.

Getting back to Princeton’s research, here’s their research paper,

“Digital museum of retinal ganglion cells with dense anatomy and physiology,” by Alexander Bae, Shang Mu, Jinseop Kim, Nicholas Turner, Ignacio Tartavull, Nico Kemnitz, Chris Jordan, Alex Norton, William Silversmith, Rachel Prentki, Marissa Sorek, Celia David, Devon Jones, Doug Bland, Amy Sterling, Jungman Park, Kevin Briggman, Sebastian Seung and the Eyewirers, was published May 17, 2018 in the journal Cell with DOI 10.1016/j.cell.2018.04.040.

The research was supported by the Gatsby Charitable Foundation, National Institutes of Health-National Institute of Neurological Disorders and Stroke (U01NS090562 and 5R01NS076467), Defense Advanced Research Projects Agency (HR0011-14-2-0004), Army Research Office (W911NF-12-1-0594), Intelligence Advanced Research Projects Activity (D16PC00005), KT Corporation, Amazon Web Services Research Grants, Korea Brain Research Institute (2231-415) and Korea National Research Foundation Brain Research Program (2017M3C7A1048086).

This paper is behind a paywall. For the players amongst us, here’s the Eyewire website. Go forth, play, and, maybe, discover new neurons!

The sound of frogs (and other amphibians) and climate change

At least once a year I highlight some work about frogs. It’s usually about a new species, but this time it’s all about frog sounds (as well as sounds from other amphibians).

Caption: The calls of the midwife toad and other amphibians have served to test the sound classifier. Credit: Jaime Bosch (MNCN-CSIC)

In any event, here’s more from an April 30, 2018 Spanish Foundation for Science and Technology (FECYT) press release (also on EurekAlert but with a May 17, 2018 publication date),

The sounds of amphibians are altered by the increase in ambient temperature, a phenomenon that, in addition to interfering with reproductive behaviour, serves as an indicator of global warming. Researchers at the University of Seville have resorted to artificial intelligence to create an automatic classifier of the thousands of frog and toad sounds that can be recorded in a natural environment.

One of the consequences of climate change is its impact on the physiological functions of animals, such as frogs and toads with their calls. Their mating call, which plays a crucial role in the sexual selection and reproduction of these amphibians, is affected by the increase in ambient temperature.

When this exceeds a certain threshold, the physiological processes associated with the sound production are restricted, and some calls are even actually inhibited. In fact, the beginning, duration and intensity of calls from the male to the female are changed, which influences reproductive activity.

Taking into account this phenomenon, the analysis and classification of the sounds produced by certain species of amphibians and other animals have turned out to be a powerful indicator of temperature fluctuations and, therefore, of the existence and evolution of global warming.

To capture the sounds of frogs, networks of audio sensors are placed and connected wirelessly in areas that can reach several hundred square kilometres. The problem is that a huge amount of bio-acoustic information is collected in environments as noisy as a jungle, and this makes it difficult to identify the species and their calls.

To solve this, engineers from the University of Seville have resorted to artificial intelligence. “We’ve segmented the sound into temporal windows or audio frames and have classified them by means of decision trees, an automatic learning technique that is used in computing”, explains Amalia Luque Sendra, co-author of the work.

To perform the classification, the researchers have based it on MPEG-7 parameters and audio descriptors, a standard way of representing audiovisual information. The details are published in Expert Systems with Applications magazine.

This technique has been put to the test with real sounds of amphibians recorded in the middle of nature and provided by the National Museum of Natural Sciences. More specifically, 868 records with 369 mating calls sung by the male and 63 release calls issued by the female natterjack toad (Epidalea calamita), along with 419 mating calls and 17 distress calls of the common midwife toad (Alytes obstetricans).

“In this case we obtained a success rate close to 90% when classifying the sounds,” observes Luque Sendra, who recalls that, in addition to the types of calls, the number of individuals of certain amphibian species that are heard in a geographical region over time can also be used as an indicator of climate change.

“A temperature increase affects the calling patterns,” she says, “but since these in most cases have a sexual calling nature, they also affect the number of individuals. With our method, we still can’t directly determine the exact number of specimens in an area, but it is possible to get a first approximation.”
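The pipeline Luque Sendra describes, segmenting a recording into audio frames, computing descriptors per frame, and classifying each frame with a decision tree, can be sketched in miniature. The real work uses MPEG-7 audio descriptors and trees learned from labeled data; the two toy features and hand-set thresholds below are my own stand-ins, purely to show the shape of the method.

```python
import math

def frames(signal, frame_len):
    """Split a signal into fixed-length, non-overlapping audio frames."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

def features(frame):
    """Two toy per-frame descriptors: energy and zero-crossing rate."""
    energy = sum(x * x for x in frame) / len(frame)
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)
    return energy, zcr

def classify(frame, energy_thr=0.1, zcr_thr=0.3):
    """A two-split decision tree: silence vs. tonal call vs. noisy call."""
    energy, zcr = features(frame)
    if energy < energy_thr:
        return "silence"
    return "noisy call" if zcr > zcr_thr else "tonal call"

# A low-frequency sine wave stands in for a tonal mating call.
tone = [math.sin(2 * math.pi * 5 * t / 100) for t in range(100)]
print([classify(f) for f in frames(tone, 50)])
```

In the published work the splits are learned automatically from the labeled museum recordings rather than set by hand, which is what makes the roughly 90% classification rate possible.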

In addition to the image of the midwife toad, the researchers included this image to illustrate their work,

Caption: This is the architecture of a wireless sensor network. Credit: J. Luque et al./Sensors

Here’s a link to and a citation for the paper,

“Non-sequential automatic classification of anuran sounds for the estimation of climate-change indicators” by Amalia Luque, Javier Romero-Lemos, Alejandro Carrasco, Julio Barbancho. Expert Systems with Applications, Volume 95, 1 April 2018, Pages 248-260. DOI: https://doi.org/10.1016/j.eswa.2017.11.016 (available online 10 November 2017)

This paper is open access.

Embedded AI (artificial intelligence) with a variant of a memristor?

I don’t entirely get how ReRAM (resistive random access memory) is a variant of a memristor, but I’m assuming Samuel K. Moore knows what he’s writing about since his May 16, 2018 posting is on the Nanoclast blog, hosted by the IEEE [Institute of Electrical and Electronics Engineers] (Note: Links have been removed),

Resistive RAM technology developer Crossbar says it has inked a deal with aerospace chip maker Microsemi allowing the latter to embed Crossbar’s nonvolatile memory on future chips. The move follows selection of Crossbar’s technology by a leading foundry for advanced manufacturing nodes. Crossbar is counting on resistive RAM (ReRAM) to enable artificial intelligence systems whose neural networks are housed within the device rather than in the cloud.

ReRAM is a variant of the memristor, a nonvolatile memory device whose resistance can be set or reset by a pulse of voltage. The variant Crossbar qualified for advanced manufacturing is called a filament device. It’s built within the layers above a chip’s silicon, where the IC’s interconnects go, and it’s made up of three layers: from top to bottom—silver, amorphous silicon, and tungsten. Voltage across the amorphous silicon causes a filament of silver atoms to cross the gap to the tungsten, making the memory cell conductive. Reversing the voltage pushes the silver back into place, cutting off conduction.

“The filament itself is only three to four nanometers wide,” says Sylvain Dubois, vice president of marketing and business development at Crossbar. “So the cell itself will be able to scale below 10-nanometers.” What’s more, the ratio between the current that flows when the device is on to when it is off is 1,000 or higher. …
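Dubois’s description maps neatly onto a toy state machine: a set pulse grows the silver filament (low resistance), a reverse pulse dissolves it (high resistance), and the state persists between pulses, which is what makes the memory nonvolatile. The voltage thresholds and resistance values below are illustrative placeholders, not Crossbar device specifications.

```python
class ReRAMCell:
    """Toy filamentary ReRAM cell: voltage pulses set or reset it.

    All numbers are invented for illustration; only the on/off ratio
    (1,000x) echoes the figure quoted in the article.
    """
    SET_V, RESET_V = 2.0, -2.0     # programming thresholds (volts)
    R_ON, R_OFF = 1e3, 1e6         # on/off resistance ratio of 1,000

    def __init__(self):
        self.filament = False      # no silver filament yet

    def pulse(self, volts):
        """Grow the filament on a set pulse; dissolve it on a reset pulse."""
        if volts >= self.SET_V:
            self.filament = True
        elif volts <= self.RESET_V:
            self.filament = False
        # sub-threshold pulses (e.g. reads) leave the state untouched

    def read(self):
        """Nonvolatile read: resistance encodes the stored bit."""
        return self.R_ON if self.filament else self.R_OFF

cell = ReRAMCell()
cell.pulse(2.5)        # set: silver filament bridges the gap
print(cell.read())     # low resistance -> logical 1
cell.pulse(-2.5)       # reset: silver pushed back into place
print(cell.read())     # high resistance -> logical 0
```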

A May 14, 2018 Crossbar news release describes some of the technical AI challenges,

“The biggest challenge facing engineers for AI today is overcoming the memory speed and power bottleneck in the current architecture to get faster data access while lowering the energy cost,” said Dubois. “By enabling a new, memory-centric non-volatile architecture like ReRAM, the entire trained model or knowledge base can be on-chip, connected directly to the neural network with the potential to achieve massive energy savings and performance improvements, resulting in a greatly improved battery life and a better user experience.”

Crossbar’s May 16, 2018 news release provides more detail about their ‘strategic collaboration’ with Microsemi Products (Note: A link has been removed),

Crossbar Inc., the ReRAM technology leader, announced an agreement with Microsemi Corporation, the largest U.S. commercial supplier of military and aerospace semiconductors, in which Microsemi will license Crossbar’s ReRAM core intellectual property. As part of the agreement, Microsemi and Crossbar will collaborate in the research, development and application of Crossbar’s proprietary ReRAM technology in next generation products from Microsemi that integrate Crossbar’s embedded ReRAM with Microsemi products manufactured at the 1x nm process node.

Military and aerospace, eh?

Being smart about using artificial intelligence in the field of medicine

Since my August 20, 2018 post featured an opinion piece about the possibly imminent replacement of radiologists with artificial intelligence systems and the latest research about employing them for diagnosing eye diseases, it seems like a good time to examine some of the mythology embedded in the discussion about AI and medicine.

Imperfections in medical AI systems

An August 15, 2018 article for Slate.com by W. Nicholson Price II (who teaches at the University of Michigan School of Law; in addition to his law degree he has a PhD in Biological Sciences from Columbia University) begins with the peppy, optimistic view before veering into more critical territory (Note: Links have been removed),

For millions of people suffering from diabetes, new technology enabled by artificial intelligence promises to make management much easier. Medtronic’s Guardian Connect system promises to alert users 10 to 60 minutes before they hit high or low blood sugar level thresholds, thanks to IBM Watson, “the same supercomputer technology that can predict global weather patterns.” Startup Beta Bionics goes even further: In May, it received Food and Drug Administration approval to start clinical trials on what it calls a “bionic pancreas system” powered by artificial intelligence, capable of “automatically and autonomously managing blood sugar levels 24/7.”

An artificial pancreas powered by artificial intelligence represents a huge step forward for the treatment of diabetes—but getting it right will be hard. Artificial intelligence (also known in various iterations as deep learning and machine learning) promises to automatically learn from patterns in medical data to help us do everything from managing diabetes to finding tumors in an MRI to predicting how long patients will live. But the artificial intelligence techniques involved are typically opaque. We often don’t know how the algorithm makes the eventual decision. And they may change and learn from new data—indeed, that’s a big part of the promise. But when the technology is complicated, opaque, changing, and absolutely vital to the health of a patient, how do we make sure it works as promised?

Price describes how a ‘closed loop’ artificial pancreas with AI would automate insulin levels for diabetic patients, flaws in the automated system, and how companies like to maintain a competitive advantage (Note: Links have been removed),

[…] a “closed loop” artificial pancreas, where software handles the whole issue, receiving and interpreting signals from the monitor, deciding when and how much insulin is needed, and directing the insulin pump to provide the right amount. The first closed-loop system was approved in late 2016. The system should take as much of the issue off the mind of the patient as possible (though, of course, that has limits). Running a closed-loop artificial pancreas is challenging. The way people respond to changing levels of carbohydrates is complicated, as is their response to insulin; it’s hard to model accurately. Making it even more complicated, each individual’s body reacts a little differently.

Here’s where artificial intelligence comes into play. Rather than trying explicitly to figure out the exact model for how bodies react to insulin and to carbohydrates, machine learning methods, given a lot of data, can find patterns and make predictions. And existing continuous glucose monitors (and insulin pumps) are excellent at generating a lot of data. The idea is to train artificial intelligence algorithms on vast amounts of data from diabetic patients, and to use the resulting trained algorithms to run a closed-loop artificial pancreas. Even more exciting, because the system will keep measuring blood glucose, it can learn from the new data and each patient’s artificial pancreas can customize itself over time as it acquires new data from that patient’s particular reactions.

Here’s the tough question: How will we know how well the system works? Diabetes software doesn’t exactly have the best track record when it comes to accuracy. A 2015 study found that among smartphone apps for calculating insulin doses, two-thirds of the apps risked giving incorrect results, often substantially so. … And companies like to keep their algorithms proprietary for a competitive advantage, which makes it hard to know how they work and what flaws might have gone unnoticed in the development process.
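What makes that two-thirds error rate so striking is that the underlying arithmetic those apps implement is simple. Here is a minimal sketch of the widely taught bolus calculation (a meal dose from a carbohydrate-to-insulin ratio plus a correction dose from a correction factor); the parameter values are hypothetical, not from any particular app or patient, and real dosing involves clinical judgment this sketch ignores:

```python
def bolus_units(carbs_g, bg_mgdl, target_mgdl=110.0,
                carb_ratio=10.0, correction_factor=40.0):
    """Standard bolus arithmetic: carbs/ratio plus (BG - target)/factor.

    carb_ratio: grams of carbohydrate covered by one unit of insulin.
    correction_factor: mg/dL of blood glucose lowered by one unit.
    The result is floored at zero (no negative doses).
    """
    meal = carbs_g / carb_ratio                       # units to cover the meal
    correction = (bg_mgdl - target_mgdl) / correction_factor
    return max(0.0, meal + correction)

# 60 g of carbs while at 190 mg/dL: 6.0 meal units + 2.0 correction units.
print(bolus_units(60, 190))  # → 8.0
```

That a formula this short was implemented incorrectly in most of the studied apps underlines Price’s point about how little scrutiny such software receives.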

There’s more,

These issues aren’t unique to diabetes care—other A.I. algorithms will also be complicated, opaque, and maybe kept secret by their developers. The potential for problems multiplies when an algorithm is learning from data from an entire hospital, or hospital system, or the collected data from an entire state or nation, not just a single patient. …

The [US Food and Drug Administration] FDA is working on this problem. The head of the agency has expressed his enthusiasm for bringing A.I. safely into medical practice, and the agency has a new Digital Health Innovation Action Plan to try to tackle some of these issues. But they’re not easy, and one thing making it harder is a general desire to keep the algorithmic sauce secret. The example of IBM Watson for Oncology has given the field a bit of a recent black eye—it turns out that the company knew the algorithm gave poor recommendations for cancer treatment but kept that secret for more than a year. …

While Price focuses on problems with algorithms and with developers and their business interests, he also hints at some of the body’s complexities.
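The predict-and-alert step that systems like Guardian Connect perform can be made concrete with a toy sketch: fit a trend to recent glucose readings and flag a predicted threshold crossing ahead of time. Everything here is illustrative (the thresholds, the 30-minute horizon, the function names); a real device uses far more sophisticated, learned models of carbohydrate and insulin response, which is exactly where the opacity Price worries about comes in.

```python
# Toy forecast-and-alert step of a closed-loop glucose controller.
# All names, thresholds, and the linear model are illustrative only.

def fit_trend(readings):
    """Least-squares slope and intercept for (minute, mg/dL) samples."""
    n = len(readings)
    mean_t = sum(t for t, _ in readings) / n
    mean_g = sum(g for _, g in readings) / n
    num = sum((t - mean_t) * (g - mean_g) for t, g in readings)
    den = sum((t - mean_t) ** 2 for t, _ in readings)
    slope = num / den
    return slope, mean_g - slope * mean_t

def predict_alert(readings, horizon=30, low=70.0, high=180.0):
    """Extrapolate `horizon` minutes past the last sample.

    Returns (predicted mg/dL, "LOW"/"HIGH"/None).
    """
    slope, intercept = fit_trend(readings)
    last_t = readings[-1][0]
    predicted = slope * (last_t + horizon) + intercept
    if predicted < low:
        return predicted, "LOW"
    if predicted > high:
        return predicted, "HIGH"
    return predicted, None

# Glucose falling 1.8 mg/dL per minute: predicts ~39 mg/dL -> "LOW" alert.
samples = [(0, 120.0), (5, 111.0), (10, 102.0), (15, 93.0)]
print(predict_alert(samples))
```

Even in this trivial sketch the design questions Price raises are visible: how far ahead to predict, where to set the thresholds, and what model to trust are all choices a vendor can make, change, and keep proprietary.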

Can AI systems be like people?

Susan Baxter, a medical writer with over 20 years’ experience, a PhD in health economics, and author of countless magazine articles and several books, offers a more person-centred approach to the discussion in her July 6, 2018 posting on susanbaxter.com,

The fascination with AI continues to irk, given that every second thing I read seems to be extolling the magic of AI and medicine and how It Will Change Everything. Which it will not, trust me. The essential issue of illness remains perennial and revolves around an individual for whom no amount of technology will solve anything without human contact. …

But in this world, or so we are told by AI proponents, radiologists will soon be obsolete. [my August 20, 2018 post] The adaptational learning capacities of AI mean that reading a scan or x-ray will soon be more ably done by machines than humans. The presupposition here is that we, the original programmers of this artificial intelligence, understand the vagaries of real life (and real disease) so wonderfully that we can deconstruct these much as we do the game of chess (where, let’s face it, Big Blue [IBM’s Deep Blue] ate our lunch) and that analyzing a two-dimensional image of a three-dimensional body, already problematic, can be reduced to a series of algorithms.

Attempting to extrapolate what some “shadow” on a scan might mean in a flesh and blood human isn’t really quite the same as bishop to knight seven. Never mind the false positive/negatives that are considered an acceptable risk or the very real human misery they create.

Moravec called it

It’s called Moravec’s paradox, the inability of humans to realize just how complex basic physical tasks are – and the corresponding inability of AI to mimic it. As you walk across the room, carrying a glass of water, talking to your spouse/friend/cat/child; place the glass on the counter and open the dishwasher door with your foot as you open a jar of pickles at the same time, take a moment to consider just how many concurrent tasks you are doing and just how enormous the computational power these ostensibly simple moves would require.

Researchers in Singapore taught industrial robots to assemble an Ikea chair. Essentially, screw in the legs. A person could probably do this in a minute. Maybe two. The preprogrammed robots took nearly half an hour. And I suspect programming those robots took considerably longer than that.

Ironically, even Elon Musk, who has had major production problems with the Tesla cars rolling out of his high tech factory, has conceded (in a tweet) that “Humans are underrated.”

I wouldn’t necessarily go that far given the political shenanigans of Trump & Co. but in the grand scheme of things I tend to agree. …

Is AI going the way of gene therapy?

Susan draws a parallel between the AI and medicine discussion with the discussion about genetics and medicine (Note: Links have been removed),

On a somewhat similar note – given the extent to which genetics discourse has that same linear, mechanistic tone [as AI and medicine] – it turns out all this fine talk of using genetics to determine health risk and whatnot is based on nothing more than clever marketing, since a lot of companies are making a lot of money off our belief in DNA. Truth is half the time we don’t even know what a gene is never mind what it actually does; geneticists still can’t agree on how many genes there are in a human genome, as this article in Nature points out.

Along the same lines, I was most amused to read about something called the Super Seniors Study, research following a group of individuals in their 80’s, 90’s and 100’s who seem to be doing really well. Launched in 2002 and headed by Angela Brooks Wilson, a geneticist at the BC [British Columbia] Cancer Agency and SFU [Simon Fraser University] Chair of biomedical physiology and kinesiology, this longitudinal work is examining possible factors involved in healthy ageing.

Turns out genes had nothing to do with it, the title of the Globe and Mail article notwithstanding. (“Could the DNA of these super seniors hold the secret to healthy aging?” The answer, a resounding “no”, well hidden at the very [end], the part most people wouldn’t even get to.) All of these individuals who were racing about exercising and working part time and living the kind of life that makes one tired just reading about it all had the same “multiple (genetic) factors linked to a high probability of disease”. You know, the gene markers they tell us are “linked” to cancer, heart disease, etc., etc. But these super seniors had all those markers but none of the diseases, demonstrating (pretty strongly) that the so-called genetic links to disease are a load of bunkum. Which (she said modestly) I have been saying for more years than I care to remember. You’re welcome.

The fundamental error in this type of linear thinking is in allowing our metaphors (genes are the “blueprint” of life) and propensity towards social ideas of determinism to overtake common sense. Biological and physiological systems are not static; they respond to and change to life in its entirety, whether it’s diet and nutrition to toxic or traumatic insults. Immunity alters, endocrinology changes, – even how we think and feel affects the efficiency and effectiveness of physiology. Which explains why as we age we become increasingly dissimilar.

If you have the time, I encourage you to read Susan’s comments in their entirety.

Scientific certainties

Following on with genetics, gene therapy dreams, and the complexity of biology, the June 19, 2018 Nature article by Cassandra Willyard (mentioned in Susan’s posting) highlights an aspect of scientific research not often mentioned in public,

One of the earliest attempts to estimate the number of genes in the human genome involved tipsy geneticists, a bar in Cold Spring Harbor, New York, and pure guesswork.

That was in 2000, when a draft human genome sequence was still in the works; geneticists were running a sweepstake on how many genes humans have, and wagers ranged from tens of thousands to hundreds of thousands. Almost two decades later, scientists armed with real data still can’t agree on the number — a knowledge gap that they say hampers efforts to spot disease-related mutations.

In 2000, with the genomics community abuzz over the question of how many human genes would be found, Ewan Birney launched the GeneSweep contest. Birney, now co-director of the European Bioinformatics Institute (EBI) in Hinxton, UK, took the first bets at a bar during an annual genetics meeting, and the contest eventually attracted more than 1,000 entries and a US$3,000 jackpot. Bets on the number of genes ranged from more than 312,000 to just under 26,000, with an average of around 40,000. These days, the span of estimates has shrunk — with most now between 19,000 and 22,000 — but there is still disagreement (See ‘Gene Tally’).

… the inconsistencies in the number of genes from database to database are problematic for researchers, Pruitt says. “People want one answer,” she [Kim Pruitt, a genome researcher at the US National Center for Biotechnology Information {NCBI} in Bethesda, Maryland] adds, “but biology is complex.”

I wanted to note that scientists do make guesses and not just with genetics. For example, Gina Mallet’s 2005 book ‘Last Chance to Eat: The Fate of Taste in a Fast Food World’ recounts the story of how good and bad levels of cholesterol were established—the experts made some guesses based on their experience. That said, Willyard’s article details the continuing effort to nail down the number of genes almost 20 years after the Human Genome Project was completed and delves into the problems the scientists have uncovered.

Final comments

In addition to opaque processes, with developers/entrepreneurs wanting to keep their secrets for competitive advantage, and in addition to our own poor understanding of the human body (how many genes are there, anyway?), there are some major gaps (reflected in AI) in our understanding of various diseases. Angela Lashbrook’s August 16, 2018 article for The Atlantic highlights some issues with skin cancer and skin tone (Note: Links have been removed),

… While fair-skinned people are at the highest risk for contracting skin cancer, the mortality rate for African Americans is considerably higher: Their five-year survival rate is 73 percent, compared with 90 percent for white Americans, according to the American Academy of Dermatology.

As the rates of melanoma for all Americans continue a 30-year climb, dermatologists have begun exploring new technologies to try to reverse this deadly trend—including artificial intelligence. There’s been a growing hope in the field that using machine-learning algorithms to diagnose skin cancers and other skin issues could make for more efficient doctor visits and increased, reliable diagnoses. The earliest results are promising—but also potentially dangerous for darker-skinned patients.

… Avery Smith, … a software engineer in Baltimore, Maryland, co-authored a paper in JAMA [Journal of the American Medical Association] Dermatology that warns of the potential racial disparities that could come from relying on machine learning for skin-cancer screenings. Smith’s co-author, Adewole Adamson of the University of Texas at Austin, has conducted multiple studies on demographic imbalances in dermatology. “African Americans have the highest mortality rate [for skin cancer], and doctors aren’t trained on that particular skin type,” Smith told me over the phone. “When I came across the machine-learning software, one of the first things I thought was how it will perform on black people.”

Recently, a study that tested machine-learning software in dermatology, conducted by a group of researchers primarily out of Germany, found that “deep-learning convolutional neural networks,” or CNN, detected potentially cancerous skin lesions better than the 58 dermatologists included in the study group. The data used for the study come from the International Skin Imaging Collaboration, or ISIC, an open-source repository of skin images to be used by machine-learning algorithms. Given the rise in melanoma cases in the United States, a machine-learning algorithm that assists dermatologists in diagnosing skin cancer earlier could conceivably save thousands of lives each year.

… Chief among the prohibitive issues, according to Smith and Adamson, is that the data the CNN relies on come from primarily fair-skinned populations in the United States, Australia, and Europe. If the algorithm is basing most of its knowledge on how skin lesions appear on fair skin, then theoretically, lesions on patients of color are less likely to be diagnosed. “If you don’t teach the algorithm with a diverse set of images, then that algorithm won’t work out in the public that is diverse,” says Adamson. “So there’s risk, then, for people with skin of color to fall through the cracks.”

As Adamson and Smith’s paper points out, racial disparities in artificial intelligence and machine learning are not a new issue. Algorithms have mistaken images of black people for gorillas, misunderstood Asians to be blinking when they weren’t, and “judged” only white people to be attractive. An even more dangerous issue, according to the paper, is that decades of clinical research have focused primarily on people with light skin, leaving out marginalized communities whose symptoms may present differently.

The reasons for this exclusion are complex. According to Andrew Alexis, a dermatologist at Mount Sinai, in New York City, and the director of the Skin of Color Center, compounding factors include a lack of medical professionals from marginalized communities, inadequate information about those communities, and socioeconomic barriers to participating in research. “In the absence of a diverse study population that reflects that of the U.S. population, potential safety or efficacy considerations could be missed,” he says.

Adamson agrees, elaborating that with inadequate data, machine learning could misdiagnose people of color with nonexistent skin cancers—or miss them entirely. But he understands why the field of dermatology would surge ahead without demographically complete data. “Part of the problem is that people are in such a rush. This happens with any new tech, whether it’s a new drug or test. Folks see how it can be useful and they go full steam ahead without thinking of potential clinical consequences. …

Improving machine-learning algorithms is far from the only method to ensure that people with darker skin tones are protected against the sun and receive diagnoses earlier, when many cancers are more survivable. According to the Skin Cancer Foundation, 63 percent of African Americans don’t wear sunscreen; both they and many dermatologists are more likely to delay diagnosis and treatment because of the belief that dark skin is adequate protection from the sun’s harmful rays. And due to racial disparities in access to health care in America, African Americans are less likely to get treatment in time.
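The sampling-bias failure mode Adamson and Smith describe is easy to reproduce in miniature. The sketch below is a deliberately crude simulation, not a model of any real dermatology system: all numbers are hypothetical, and the "classifier" is just a single threshold on one made-up feature. It trains on data drawn overwhelmingly from one group, then measures accuracy per group; the under-represented group fares worse because the decision boundary is fitted to the majority's feature distribution.

```python
import random

random.seed(0)

# Hypothetical single feature (think "lesion contrast"). Group B's benign
# baseline sits where group A's malignant cases do, so a cutoff tuned on
# group A misreads group B's benign lesions as malignant.
def sample(group, malignant, n):
    base = 0.2 if group == "A" else 0.5   # group B: higher baseline
    shift = 0.3 if malignant else 0.0
    return [(random.gauss(base + shift, 0.08), malignant) for _ in range(n)]

# Training data skewed 95% toward group A, echoing the fair-skinned
# skew of the image repositories the article describes.
train = (sample("A", False, 475) + sample("A", True, 475)
         + sample("B", False, 25) + sample("B", True, 25))

def accuracy(data, t):
    """Fraction classified correctly by the rule 'malignant if x > t'."""
    return sum((x > t) == y for x, y in data) / len(data)

def best_threshold(data):
    """Pick the cutoff that maximizes overall training accuracy."""
    return max(sorted(x for x, _ in data), key=lambda t: accuracy(data, t))

t = best_threshold(train)

test_a = sample("A", False, 200) + sample("A", True, 200)
test_b = sample("B", False, 200) + sample("B", True, 200)
print(f"cutoff={t:.2f}  group A acc={accuracy(test_a, t):.2f}  "
      f"group B acc={accuracy(test_b, t):.2f}")
```

The overall training accuracy looks excellent, which is the trap: the aggregate number hides that nearly every benign case in the minority group is flagged as malignant. A real CNN is vastly more complex, but the mechanism, a boundary fitted to whoever dominates the training set, is the same one Adamson warns about.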

Happy endings

I’ll add one thing to Price’s article, Susan’s posting, and Lashbrook’s article about the issues with AI, certainty, gene therapy, and medicine: the desire for a happy ending prefaced with an easy solution. If the easy solution isn’t possible, accommodations will be made, but that happy ending is a must. All disease will disappear and there will be peace on earth. (Nod to Susan Baxter and her many discussions with me about disease processes and happy endings.)

The solutions, for the most part, are seen as technological, despite the mountain of evidence suggesting that technology reflects our own imperfect understanding of health and disease and therefore provides, at best, an imperfect solution.

Also, we tend to underestimate just how complex humans are, not only in terms of disease and health but also with regard to our skills, our understanding, and, though it’s acknowledged far less often, our ability to respond appropriately in the moment.

There is much to celebrate in what has been accomplished: no more black death, no more smallpox, hip replacements, pacemakers, organ transplants, and much more. Yes, we should try to improve our medicine. But, maybe alongside the celebration we can welcome AI and other technologies with a lot less hype and a lot more skepticism.