Tag Archives: World Health Organization (WHO)

Dealing with mosquitos: a robot story and an engineered human tissue story

I have two ‘mosquito and disease’ stories, the first concerning dengue fever and the second, malaria.

Dengue fever in Taiwan

A June 8, 2023 news item on phys.org features robotic vehicles, dengue fever, and mosquitoes,

Unmanned ground vehicles can be used to identify and eliminate the breeding sources of mosquitos that carry dengue fever in urban areas, according to a new study published in PLOS Neglected Tropical Diseases by Wei-Liang Liu of the Taiwan National Mosquito-Borne Diseases Control Research Center, and colleagues.

It turns out sewers are a problem, according to this June 8, 2023 PLOS (Public Library of Science) news release on EurekAlert, which provides more context and detail,

Dengue fever is an infectious disease caused by the dengue virus and spread by several mosquito species in the genus Aedes, which also spread chikungunya, yellow fever and zika. Through the process of urbanization, sewers have become easy breeding grounds for Aedes mosquitos and most current mosquito monitoring programs struggle to monitor and analyze the density of mosquitos in these hidden areas.

In the new control effort, researchers combined a crawling robot, wire-controlled cable car and real-time monitoring system into an unmanned ground vehicle system (UGV) that can take high-resolution, real-time images of areas within sewers. From May to August 2018, the system was deployed in five administrative districts in Kaohsiung city, Taiwan, with covered roadside sewer ditches suspected to be hotspots for mosquitos. Mosquito gravitraps were placed above the sewers to monitor effects of the UGV intervention on adult mosquitos in the area.

In 20.7% of inspected sewers, the system found traces of Aedes mosquitos in stages from larvae to adult. In positive sewers, additional prevention control measures were carried out, using either insecticides or high-temperature water jets. Immediately after these interventions, the gravitrap index (GI), a measure of the adult mosquito density nearby, dropped significantly from 0.62 to 0.19.

“The widespread use of UGVs can potentially eliminate some of the breeding sources of vector mosquitoes, thereby reducing the annual prevalence of dengue fever in Kaohsiung city,” the authors say.
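
For readers who like numbers: the news release doesn't spell out how the gravitrap index is calculated, so this is my working assumption, not a definition from the paper or the release — roughly, the number of adult female Aedes captured per effective trap over an inspection period. A minimal sketch in Python, with hypothetical trap counts chosen only to reproduce the reported values,

```python
# Working assumption (not defined in the news release): the gravitrap index (GI)
# is the number of adult female Aedes captured divided by the number of
# effective gravitraps during an inspection period.
def gravitrap_index(females_captured: int, effective_traps: int) -> float:
    """Adult female Aedes captured per effective gravitrap."""
    return females_captured / effective_traps

# Hypothetical counts chosen only to reproduce the reported drop from 0.62 to 0.19.
print(gravitrap_index(62, 100))  # 0.62 before the sewer interventions
print(gravitrap_index(19, 100))  # 0.19 after insecticide / hot-water-jet treatment
```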

Here’s a link to and a citation for the paper,

Use of unmanned ground vehicle systems in urbanized zones: A study of vector Mosquito surveillance in Kaohsiung by Yu-Xuan Chen, Chao-Ying Pan, Bo-Yu Chen, Shu-Wen Jeng, Chun-Hong Chen, Joh-Jong Huang, Chaur-Dong Chen, Wei-Liang Liu. PLOS Neglected Tropical Diseases DOI: https://doi.org/10.1371/journal.pntd.0011346 Published: June 8, 2023

This paper is open access.

Dengue on the rise

Like many diseases, dengue varies in severity: you may have no symptoms (asymptomatic), symptoms mild enough to be handled at home, or an illness serious enough to require hospital care; in some cases, it can be fatal.

The World Health Organization (WHO) notes that dengue fever cases have increased exponentially since 2000 (from the March 17, 2023 version of the WHO’s “Dengue and severe dengue” fact sheet),

Global burden

The incidence of dengue has grown dramatically around the world in recent decades, with cases reported to WHO increased from 505 430 cases in 2000 to 5.2 million in 2019. A vast majority of cases are asymptomatic or mild and self-managed, and hence the actual numbers of dengue cases are under-reported. Many cases are also misdiagnosed as other febrile illnesses (1).

One modelling estimate indicates 390 million dengue virus infections per year of which 96 million manifest clinically (2). Another study on the prevalence of dengue estimates that 3.9 billion people are at risk of infection with dengue viruses.

The disease is now endemic in more than 100 countries in the WHO Regions of Africa, the Americas, the Eastern Mediterranean, South-East Asia and the Western Pacific. The Americas, South-East Asia and Western Pacific regions are the most seriously affected, with Asia representing around 70% of the global disease burden.

Dengue is spreading to new areas including Europe, [emphasis mine] and explosive outbreaks are occurring. Local transmission was reported for the first time in France and Croatia in 2010 [emphasis mine] and imported cases were detected in 3 other European countries.

The largest number of dengue cases ever reported globally was in 2019. All regions were affected, and dengue transmission was recorded in Afghanistan for the first time. The American Region reported 3.1 million cases, with more than 25 000 classified as severe. A high number of cases were reported in Bangladesh (101 000), Malaysia (131 000), Philippines (420 000), Vietnam (320 000) in Asia.

Dengue continues to affect Brazil, Colombia, the Cook Islands, Fiji, India, Kenya, Paraguay, Peru, the Philippines, the Reunion Islands and Vietnam as of 2021. 

There’s information from an earlier version of the fact sheet, in my July 2, 2013 posting, highlighting different aspects of the disease, e.g., “About 2.5% of those affected die.”

A July 21, 2023 United Nations press release warns that the danger from mosquitoes spreading dengue fever could increase along with the temperature,

Global warming, marked by higher average temperatures, precipitation and longer periods of drought, could prompt a record number of dengue infections worldwide, the World Health Organization (WHO) warned on Friday [July 21, 2023].

Despite the absence of mosquitoes infected with the dengue virus in Canada, the government has a Dengue fever information page. At this point, the concern is likely focused on travelers who’ve contracted the disease elsewhere. However, I am guessing that researchers are keeping a close eye on Canadian mosquitoes as these situations can change.

Malaria in Florida (US)

The researchers from the University of Central Florida (UCF) couldn’t have known, when they began their project to study mosquito bites and disease, that Florida would register its first malaria cases in 20 years this summer. Here’s more from a July 26, 2023 article by Stephanie Colombini for NPR ([US] National Public Radio), Note: Links have been removed,

First local transmission in U.S. in 20 years

Heath [Hannah Heath] is one of eight known people in recent months who have contracted malaria in the U.S., after being bitten by a local mosquito, rather than while traveling abroad. The cases comprise the nation’s first locally transmitted outbreak in 20 years. The last time this occurred was in 2003, when eight people tested positive for malaria in Palm Beach, Fla.

One of the eight cases is in Texas; the rest occurred in the northern part of Sarasota County.

The Florida Department of Health recorded the most recent case in its weekly arbovirus report for July 9-15 [2023].

For the past month, health officials have issued a mosquito-borne illness alert for residents in Sarasota and neighboring Manatee County. Mosquito management teams are working to suppress the population of the type of mosquito that carries malaria, Anopheles.

Sarasota Memorial Hospital has treated five of the county’s seven malaria patients, according to Dr. Manuel Gordillo, director of infection control.

“The cases that are coming in are classic malaria, you know they come in with fever, body aches, headaches, nausea, vomiting, diarrhea,” Gordillo said, explaining that his hospital usually treats just one or two patients a year who acquire malaria while traveling abroad in Central or South America, or Africa.

All the locally acquired cases were of Plasmodium vivax malaria, a strain that typically produces milder symptoms or can even be asymptomatic, according to the Centers for Disease Control and Prevention. But the strain can still cause death, and pregnant people and children are particularly vulnerable.

Malaria does not spread from human-to-human contact; a mosquito carrying the disease has to bite someone to transmit the parasites.

Workers with Sarasota County Mosquito Management Services have been especially busy since May 26 [2023], when the first local case was confirmed.

Like similar departments across Florida, the team is experienced in responding to small outbreaks of mosquito-borne illnesses such as West Nile virus or dengue. They have protocols for addressing travel-related cases of malaria as well, but have ramped up their efforts now that they have confirmation that transmission is occurring locally between mosquitoes and humans.

While organizations like the World Health Organization have cautioned climate change could lead to more global cases and deaths from malaria and other mosquito-borne diseases, experts say it’s too soon to tell if the local transmission seen these past two months has any connection to extreme heat or flooding.

“We don’t have any reason to think that climate change has contributed to these particular cases,” said Ben Beard, deputy director of the CDC’s [US Centers for Disease Control and Prevention] division of vector-borne diseases and deputy incident manager for this year’s local malaria response.

“In a more general sense though, milder winters, earlier springs, warmer, longer summers – all of those things sort of translate into mosquitoes coming out earlier, getting their replication cycles sooner, going through those cycles faster and being out longer,” he said. “And so we are concerned about the impact of climate change and environmental change in general on what we call vector-borne diseases.”

Beard co-authored a 2019 report that highlights a significant increase in diseases spread by ticks and mosquitoes in recent decades. Lyme disease and West Nile virus were among the top five most prevalent.

“In the big picture it’s a very significant concern that we have,” he said.

Engineered tissue and bloodthirsty mosquitoes

A June 8, 2023 University of Central Florida (UCF) news release (also on EurekAlert) by Eric Eraso describes the research into engineered human tissue; the original post also features a ‘bloodthirsty’ video of mosquitoes feeding on the tissue.

Note: A link has been removed,

A UCF research team has engineered tissue with human cells that mosquitoes love to bite and feed upon — with the goal of helping fight deadly diseases transmitted by the biting insects.

A multidisciplinary team led by College of Medicine biomedical researcher Bradley Jay Willenberg with Mollie Jewett (UCF Burnett School of Biomedical Sciences) and Andrew Dickerson (University of Tennessee) lined 3D capillary gel biomaterials with human cells to create engineered tissue and then infused it with blood. Testing showed mosquitoes readily bite and blood feed on the constructs. Scientists hope to use this new platform to study how pathogens that mosquitoes carry impact and infect human cells and tissues. Presently, researchers rely largely upon animal models and cells cultured on flat dishes for such investigations.

Further, the new system holds great promise for blood feeding mosquito species that have proven difficult to rear and maintain as colonies in the laboratory, an important practical application. The Willenberg team’s work was published Friday in the journal Insects.

Mosquitos have often been called the world’s deadliest animal, as vector-borne illnesses, including those from mosquitos, cause more than 700,000 deaths worldwide each year. Malaria, dengue, Zika virus and West Nile virus are all transmitted by mosquitos. Even for those who survive these illnesses, many are left suffering from organ failure, seizures and serious neurological impacts.

“Many people get sick with mosquito-borne illnesses every year, including in the United States. The toll of such diseases can be especially devastating for many countries around the world,” Willenberg says.

This worldwide impact of mosquito-borne disease is what drives Willenberg, whose lab employs a unique blend of biomedical engineering, biomaterials, tissue engineering, nanotechnology and vector biology to develop innovative mosquito surveillance, control and research tools. He said he hopes to adapt his new platform for application to other vectors such as ticks, which spread Lyme disease.

“We have demonstrated the initial proof-of-concept with this prototype,” he says. “I think there are many potential ways to use this technology.”

Captured on video, Willenberg observed mosquitoes enthusiastically blood feeding from the engineered tissue, much as they would from a human host. This demonstration represents the achievement of a critical milestone for the technology: ensuring the tissue constructs were appetizing to the mosquitoes.

“As one of my mentors shared with me long ago, the goal of physicians and biomedical researchers is to help reduce human suffering,” he says. “So, if we can provide something that helps us learn about mosquitoes, intervene with diseases and, in some way, keep mosquitoes away from people, I think that is a positive.”

Willenberg came up with the engineered tissue idea when he learned the National Institutes of Health (NIH) was looking for new in vitro 3D models that could help study pathogens that mosquitoes and other biting arthropods carry.

“When I read about the NIH seeking these models, it got me thinking that maybe there is a way to get the mosquitoes to bite and blood feed [on the 3D models] directly,” he says. “Then I can bring in the mosquito to do the natural delivery and create a complete vector-host-pathogen interface model to study it all together.”

As this platform is still in its early stages, Willenberg wants to incorporate additional types of cells to move the system closer to human skin. He is also developing collaborations with experts who study pathogens and work with infected vectors, and is working with mosquito control organizations to see how they can use the technology.

“I have a particular vision for this platform, and am going after it. My experience too is that other good ideas and research directions will flourish when it gets into the hands of others,” he says. “At the end of the day, the collective ideas and efforts of the various research communities propel a system like ours to its full potential. So, if we can provide them tools to enable their work, while also moving ours forward at the same time, that is really exciting.”

Willenberg received his Ph.D. in biomedical engineering from the University of Florida and continued there for his postdoctoral training and then in scientist, adjunct scientist and lecturer positions. He joined the UCF College of Medicine in 2014, where he is currently an assistant professor of medicine.

Willenberg is also a co-founder, co-owner and manager of Saisijin Biotech, LLC and has a minor ownership stake in Sustained Release Technologies, Inc. Neither entity was involved in any way with the work presented in this story. Team members may also be listed as inventors on patent/patent applications that could result in royalty payments. This technology is available for licensing. To learn more, please visit ucf.flintbox.com/technologies/44c06966-2748-4c14-87d7-fc40cbb4f2c6.

Here’s a link to and a citation for the paper,

Engineered Human Tissue as A New Platform for Mosquito Bite-Site Biology Investigations by Corey E. Seavey, Mona Doshi, Andrew P. Panarello, Michael A. Felice, Andrew K. Dickerson, Mollie W. Jewett and Bradley J. Willenberg. Insects 2023, 14(6), 514; https://doi.org/10.3390/insects14060514 Published: 2 June 2023

This paper is open access.

That final paragraph in the news release is new to me. I’ve seen news releases list companies where the researchers have financial interests, but this is the first time I’ve seen one offer a statement attempting to cover all the bases, including some future possibilities such as: “Team members may also be listed as inventors on patent/patent applications that could result in royalty payments.”

It seems pretty clear that there’s increasing concern about mosquito-borne diseases no matter where you live.

Listen in on a UNESCO (United Nations Educational, Scientific and Cultural Organization) meeting (about Open Science)

If you are intrigued* by the idea of sitting in on a UNESCO meeting, in this case, the Intergovernmental special committee meeting (Category II) related to the draft UNESCO Recommendation on Open Science, there is an opportunity.

Before getting to the opportunity, I want to comment on how smart the UNESCO communications/press office has been. Interest in relaxing COVID-19 vaccine patent rules is gaining momentum (May 6, 2021 Associated Press news item on Canadian Broadcasting Corporation [CBC]) and a decision was made in the press office (?) to capitalize on this momentum as a series of UNESCO meetings about open science are taking place. Well done!

Later in this post, I have a few comments about the intellectual property scene and open science in Canada.

UNESCO’s open meeting

According to the May 7, 2021 UNESCO press release no. 42 (received via email),

UNESCO welcomes move to lift the patent on the vaccines and pushes for Open Science

Paris, 7 May [2021] – “The decision of the United States and many other countries to call for the lifting of patent protection for coronavirus vaccines could save millions of lives and serve as a blueprint for the future of scientific cooperation. COVID-19 does not respect borders. No country will be safe until the people of every country have access to the vaccine,” said UNESCO Director-General Audrey Azoulay.

This growing momentum comes in response to the joint appeal made by UNESCO, the WHO [World Health Organization] and the UNHCR [United Nations Commission on Human Rights] to open up science and boost scientific cooperation in October 2020. Early in the pandemic last spring, UNESCO mobilized over 122 countries to promote Open Science and reinforced international cooperation.

The pandemic triggered strong support for Open Science among Member States for this agenda. Chinese scientists sequenced the genome of the new coronavirus on 11 January 2020 and posted it online, enabling German scientists to develop a screening test, which was then shared by the World Health Organization with governments everywhere.

Since the outbreak of COVID-19, the world has embarked on a new era of scientific research, forcing all countries to construct the shared rules and common norms we need to work more effectively in these changing times.

The recent announcements of countries in favor of lifting patents show the growing support for open scientific cooperation. They also coincide with the five-day meeting of UNESCO Member States to define a global standard-setting framework on Open Science, which aims to develop new models for the circulation of scientific knowledge and its benefits, including global commons.

The outcomes of the meeting will lead to a Global Recommendation on Open Science to be adopted by UNESCO’s 193 Member States at the Organization’s General Conference in November 2021. This Recommendation aims to be a driver for shared global access to data, publications, patents, software, educational resources and technological innovations and to reengage all of society in science.

More Information on UNESCO’s Open Science meeting: https://events.unesco.org/event?id=1907937890&lang=1033

After clicking on UNESCO’s events link (in the above), you’ll be sent to a page where you’ll be invited to link to a live webcast (it’s live if there’s a session taking place, and there will be on May 10, May 11, and May 12, 2021). If you’re on the West Coast of Canada or the US, add nine hours, since the meeting is taking place on Paris (France) time, where UNESCO is headquartered (so at 2 pm PT, you’re not likely to hear anything). When you get to the page hosting the live webcast, click on the tab listing the current day’s date.

I managed to listen to some of the meeting this morning (May 7, 2021) at about 8 am my time; for the participants, it was a meeting that ran late. The thrill is being able to attend or listen in. From a content perspective, you probably need to be serious about open science and the language used to define it and make recommendations about it.

Comments on open science and intellectual property in Canada

As mentioned earlier, momentum is rising for relaxing COVID-19 vaccine patent rules. I looked carefully at the May 6, 2021 Associated Press news item on CBC and couldn’t find any evidence that Canada is actively supporting the idea. However, the Canadian government has indicated a willingness to discuss relaxing the rules,

France joined the United States on Thursday [May 6, 2021] in supporting an easing of patent protections on COVID-19 vaccines that could help poorer countries get more doses and speed the end of the pandemic. While the backing from two countries with major drugmakers is important, many obstacles remain.

The United States’ support for waiving the protections marked a dramatic shift in its position. Still, even just one country voting against such a waiver would be enough to block efforts at the World Trade Organization [WTO].

With the Biden administration’s announcement on Wednesday [May 5, 2021], the U.S. became the first country in the developed world with big vaccine manufacturing to publicly support the waiver idea floated by India and South Africa last October at the WTO.

“I completely favour this opening up of the intellectual property,” French President Emmanuel Macron said Thursday [May 6, 2021] on a visit to a vaccine centre.

Many other leaders chimed in — though few expressed direct support. Italian Foreign Minister Luigi Di Maio wrote on Facebook that the U.S. announcement was “a very important signal” and that the world needs “free access” to patents for the vaccines.

Australian Prime Minister Scott Morrison called the U.S. position “great news” but did not directly respond to a question about whether his country would support a waiver.

Canada’s International Trade Minister Mary Ng told the House of Commons on Thursday that the federal government will “actively participate” in talks to waive the global rules that protect vaccine trade secrets. [emphases mine]

[Canada’s] International Development Minister Karina Gould said the U.S. support for waiving patents is “a really important step in this conversation.” [emphases mine]

Big difference between supporting something and talking about it, eh?

Open science in Canada

Back in 2016, the Montreal Neurological Institute (MNI or Montreal Neuro) in Québec, Canada was the first academic institution in the world to embrace an open science policy. Here’s the relevant excerpt from my January 22, 2016 posting (the posting describes the place that Montreal Neuro occupies historically in Canada and on the global stage),

… David Bruggeman tells the story in a Jan. 21, 2016 posting on his Pasco Phronesis blog (Note: Links have been removed),

The Montreal Neurological Institute (MNI) at McGill University announced that it will be the first academic research institute to become what it calls ‘Open Science.’  As Science is reporting, the MNI will make available all research results and research data at the time of publication.  Additionally it will not seek patents on any of the discoveries made on research at the Institute. [emphasis mine]

Will this catch on?  I have no idea if this particular combination of open access research data and results with no patents will spread to other university research institutes.  But I do believe that those elements will continue to spread.  More universities and federal agencies are pursuing open access options for research they support.  Elon Musk has opted to not pursue patent litigation for any of Tesla Motors’ patents, and has not pursued patents for SpaceX technology (though it has pursued litigation over patents in rocket technology). …

What about intellectual property (IP) and the 2021 federal budget?

Interestingly, the 2021 Canadian federal budget, released April 19, 2021, (see my May 4, 2021 posting) has announced more investments in intellectual property initiatives,

“Promoting Canadian Intellectual Property

As the most highly educated country in the OECD [Organization for Economic Cooperation and Development], Canada is full of innovative and entrepreneurial people with great ideas. Those ideas are valuable intellectual property that are the seeds of huge growth opportunities. Building on the National Intellectual Property Strategy announced in Budget 2018, the government proposes to further support Canadian innovators, start-ups, and technology-intensive businesses. Budget 2021 proposes:

  • $90 million, over two years, starting in 2022-23, to create ElevateIP, a program to help accelerators and incubators provide start-ups with access to expert intellectual property services.
  • $75 million over three years, starting in 2021-22, for the National Research Council’s Industrial Research Assistance Program to provide high-growth client firms with access to expert intellectual property services.

These direct investments would be complemented by a Strategic Intellectual Property Program Review that will be launched. It is intended as a broad assessment of intellectual property provisions in Canada’s innovation and science programming, from basic research to near-commercial projects. This work will make sure Canada and Canadians fully benefit from innovations and intellectual property.”

Now, it’s back to me and the usual formatting for an upcoming excerpt. As for Canada’s National Intellectual Property Strategy, here’s more from the April 26, 2018 Innovation, Science and Economic Development Canada news release,

Canada’s IP Strategy will help Canadian entrepreneurs better understand and protect intellectual property and also get better access to shared intellectual property. Canada is a leader in research, science, creation and invention, but it can do more when it comes to commercializing innovations.

The IP Strategy will help give businesses the information and confidence they need to grow their business and take risks.

The IP Strategy will make changes in three key areas:

LEGISLATION

The IP Strategy will amend key IP laws to ensure that we remove barriers to innovation, particularly any loopholes that allow those seeking to use IP in bad faith to stall innovation for their own gain.

The IP Strategy will create an independent body to oversee patent and trademark agents, which will ensure that professional and ethical standards are maintained, and will support the provision of quality advice from IP professionals.

LITERACY AND ADVICE

As part of the IP Strategy, the Canadian Intellectual Property Office will launch a suite of programs to help improve IP literacy among Canadians.

The IP Strategy includes support for domestic and international engagement between Indigenous people and decision makers as well as for research activities and capacity building.

The IP Strategy will also support training for federal employees who deal with IP governance.

TOOLS

The IP Strategy will provide tools to support Canadian businesses as they learn about IP and pursue their own IP strategies.

The government is creating a patent collective to bring together businesses to facilitate better IP outcomes for members. The patent collective is the coming together of firms to share in IP expertise and strategy, including gaining access to a larger collection of patents and IP. 

I’m guessing what the government wants is more patents; at the same time, it does not want to get caught up in the patent thickets and patent troll wars often seen in the US. The desire for more patents isn’t simply about ‘protection’ for Canadian businesses; it also stems from a desire to brag (from the “A few final comments” subsection in my May 4, 2021 posting on the Canadian federal budget),

The inclusion of a section on intellectual property in the budget could seem peculiar. I would have thought so years ago, before I learned that governments measure the success of their science and technology efforts, and compare it with that of other governments, by the number of patents that have been filed. [new emphasis mine] There are other measures but intellectual property is very important, as far as governments are concerned. My “Billions lost to patent trolls; US White House asks for comments on intellectual property (IP) enforcement; and more on IP” June 28, 2012 posting points to some of the shortcomings, with which we still grapple.

Not just a Canadian conundrum

IP (patents, trademarks, and copyright) has a long history and my understanding of patents and copyright (not sure about trademarks) is that they were initially intended to guarantee inventors and creators a fixed period of time in which to make money from their inventions and/or creations. IP was intended to encourage competition, not stifle it, as happens so often these days. Here’s more about patents from the Origin of Patents: Everything You Need to Know webpage on the upcounsel.com website (Note: Links have been removed),

Origins of Patent Law and the Incentive Theory

It is possible to trace the idea of patent law as far back as the 9th century B.C. in ancient Greece.  However, one of the most vital pieces of legislation in the history of patents is the English Statute of Monopolies. The Parliament passed the Statute of Monopolies to end monopolies, which stifled competition. 

However, for about a decade, the Statute issued “letters patent” to allow for limited monopolies. This measure was seen as a way of balancing the importance of providing incentives for inventions with the distaste for monopolies. [emphasis mine] While monopolies usually don’t offer any innovative benefits, inventors need to have an incentive to create innovations that benefit society.

Changes?

As you can see in the ‘Origins of Patent Law’ excerpt, there’s a tension between ensuring profitability and spurring innovation. It certainly seems that our current approach to the problem is no longer successful.

There has been an appetite for change in how science is pursued, shared, and commercialized. Listening in on UNESCO’s Open Science meeting at https://events.unesco.org/event?id=1907937890&lang=1033 (May 10-12, 2021) is an opportunity to see how this movement could develop. Sadly, I don’t think the World Trade Organization is going to afford anyone the opportunity to tune in to discussions about relaxing COVID-19 vaccine patent rules. (sigh)

As for the Canadian government’s ‘willingness to talk,’ I expect the Canadian representative at UNESCO will be very happy to adopt open science while the Canadian representative at the WTO will dance around without committing.

If you are inclined, please do share your thoughts on either of the meetings or on the move towards open science.

*’intrigues’ changed to ‘intrigued’ on May 13, 2021.

Governments need to tell us when and how they’re using AI (artificial intelligence) algorithms to make decisions

I have two items and an exploration of the Canadian scene, all three of which feature governments, artificial intelligence, and responsibility.

Special issue of Information Polity edited by Dutch academics,

A December 14, 2020 IOS Press press release (also on EurekAlert) announces a special issue of Information Polity focused on algorithmic transparency in government,

Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.

Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.

Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.

“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”

The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.

“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”

The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.

For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”

At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transport Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as tool for transparency in government decision-making,” comments Dr. Ingrams.

“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”

“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.

This image illustrates the interplay between the dynamics at the various levels,

Caption: Studying algorithms and algorithmic transparency from multiple levels of analyses. Credit: Information Polity.

Here’s a link to and a citation for the special issue,

Algorithmic Transparency in Government: Towards a Multi-Level Perspective
Guest Editors: Sarah Giest, PhD, and Stephan Grimmelikhuijsen, PhD
Information Polity, Volume 25, Issue 4 (December 2020), published by IOS Press

The issue is open access for three months, Dec. 14, 2020 – March 14, 2021.

Two articles from the special issue were featured in the press release,

“The agency of algorithms: Understanding human-algorithm interaction in administrative decision-making,” by Rik Peeters, PhD (https://doi.org/10.3233/IP-200253)

“A machine learning approach to open public comments for policymaking,” by Alex Ingrams, PhD (https://doi.org/10.3233/IP-200256)
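
Since the Ingrams article describes an unsupervised machine learning analysis that surfaces topic clusters in public comments, here's a minimal, illustrative sketch of that general approach. To be clear, this is my own toy example, not the author's actual pipeline: TF-IDF vectors plus non-negative matrix factorization from scikit-learn, applied to a handful of invented comments loosely themed on the body scanner regulation mentioned above,

```python
# Illustrative sketch only: unsupervised topic clustering of free-text public
# comments with TF-IDF + NMF (scikit-learn). The comments are invented examples
# loosely themed on the 2013 body scanner regulation mentioned in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

comments = [
    "The new body scanners are an invasion of passenger privacy.",
    "Privacy protections for scanner images must be spelled out.",
    "Scanners will slow down security lines at busy airports.",
    "Longer wait times at checkpoints hurt business travellers.",
    "I worry about health effects from repeated scanner use.",
    "Radiation exposure from imaging machines needs independent review.",
]

# Convert the comments to TF-IDF vectors, then factorize into 3 latent topics.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(comments)
nmf = NMF(n_components=3, random_state=0).fit(X)

# Show the most salient words in each topic cluster.
terms = tfidf.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top_words = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top_words)}")
```

On a toy corpus like this the clusters roughly track privacy, wait times, and health concerns; on thousands of real comments, the same general idea gives policymakers a quick map of what the public is talking about.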

An AI governance publication from the US’s Wilson Center

Within one week of the release of a special issue of Information Polity on AI and governments, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,

Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg

Abstract

In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well-specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:

  • AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
  • Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
  • The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
  • The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
  • The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
  • As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.
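
To make the abstract's central claim a little more concrete, here is my own illustration (not from the Zysman and Nitzberg paper): a model that learns input-output correlations in one well-specified context can degrade badly when the context shifts, which is the sense in which ML is context-dependent statistical inference rather than a universal technology. A minimal sketch,

```python
# Minimal illustration (my own, not from the paper): a classifier learns
# input-output correlations in one "problem domain" and degrades when the
# data distribution shifts, echoing the point that ML is context-dependent
# statistical inference rather than a universal technology.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# In-domain training data: two well-separated clusters of points.
X_train = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(4.0, 1.0, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X_train, y_train)

# Same nominal task, shifted context: the clusters have moved and now overlap.
X_shift = np.vstack([rng.normal(2.0, 1.0, (500, 2)), rng.normal(2.5, 1.0, (500, 2))])
y_shift = np.array([0] * 500 + [1] * 500)

print("in-domain accuracy:", model.score(X_train, y_train))        # close to 1.0
print("shifted-domain accuracy:", model.score(X_shift, y_shift))   # much lower
```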

Unfortunately, I haven’t been able to successfully download the working paper/report from the Wilson Center’s Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems webpage.

However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibility, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as the BRIE Working Paper 2020-5.

Canadian government and AI

The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.

There is information out there but it’s scattered across various government initiatives and ministries. Above all, it is not easy to find open communication. Whether that’s by design or due to the blindness and/or ineptitude to be found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they too have the problem. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)

Responsible use? Maybe not after 2019

First, there’s a government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page, “Date modified: 2020-07-28,” all of the information dates from 2016 up to March 2019 (which you’ll find under ‘Our Timeline’). Is nothing new happening?

For anyone interested in responsible use, there are two sections, “Our guiding principles” and “Directive on Automated Decision-Making,” that answer some questions. I found the ‘Directive’ to be more informative, with its definitions, objectives, and even consequences. Sadly, you need to keep clicking to find the consequences, and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head?

What about the government’s digital service?

You might think the Canadian Digital Service (CDS) would also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,

In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.

At the time, Simon was Director of Outreach at Code for Canada.

Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (it’s a position with virtually no governmental infrastructure as opposed to the Minister of Innovation, Science and Economic Development who is responsible for many departments and agencies). The current minister is Joyce Murray whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.

Meanwhile, the folks at CDS are friendly but they don’t offer much substantive information. From the CDS homepage,

Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.

Learn more

After clicking on Learn more, I found this,

At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.

How it works

We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.

Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.

Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.

Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.

As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)

Does the Treasury Board of Canada have charge of responsible AI use?

I think so but there are government departments/ministries that also have some responsibilities for AI and I haven’t seen any links back to the Treasury Board documentation.

For anyone not familiar with the Treasury Board, or even if you are, this December 14, 2009 article (Treasury Board of Canada: History, Organization and Issues) on Maple Leaf Web is quite informative,

The Treasury Board of Canada represents a key entity within the federal government. As an important cabinet committee and central agency, it plays an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.

It seems the Minister of Digital Government, Joyce Murray, is part of the Treasury Board and that the Treasury Board is the source for the Digital Operations Strategic Plan: 2018-2022.

I haven’t read the entire document but the table of contents doesn’t include a heading for artificial intelligence and there wasn’t any mention of it in the opening comments.

But isn’t there a Chief Information Officer for Canada?

Herein lies a tale (I doubt I’ll ever get the real story) but the answer is a qualified ‘no’. The Chief Information Officer for Canada, Alex Benay (there is an AI aspect), stepped down in September 2019 to join a startup company, according to an August 6, 2019 article by Mia Hunt for Global Government Forum,

Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.

“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.

He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.

He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]

I cannot find a current Chief Information Officer for Canada despite searching, but I did find this List of chief information officers (CIO) by institution. Where there was one, there are now many.

Since September 2019, Mr. Benay has moved again, according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),

Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a short few months stint.

The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.

Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.

Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.

Mindbridge has not completely lost what was touted as a star hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”

Via Mr. Benay, I’ve re-introduced artificial intelligence and introduced the Phoenix Pay system and now I’m linking them to government implementation of information technology in a specific case and speculating about implementation of artificial intelligence algorithms in government.

Phoenix Pay System Debacle (things are looking up), a harbinger for responsible use of artificial intelligence?

I’m happy to hear that the situation where government employees had no certainty about their paycheques is improving. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found they might get the correct amount on their paycheque, or significantly less than they were entitled to, or huge increases.

The instability alone would be distressing but adding to it with the inability to get the problem fixed must have been devastating. Almost five years later, the problems are being resolved and people are getting paid appropriately, more often.

The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,

Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.

And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.

Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.

These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.

While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PIPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.

Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.

Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?

Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.

When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.

Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.

Instead, the Phoenix Pay system currently employs about 2,300.  This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.

… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].

Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) into its operations.

I found this on a Treasury Board webpage, all 1 minute and 29 seconds of it,

The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.

As for Public Services and Procurement Canada, they have an Artificial intelligence source list,

Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).

After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:

Insights and predictive modelling

Machine interactions

Cognitive automation

PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.

I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,

Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians’ needs.

Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.

To sum up, apart from the guidelines, expectations, and consequences for non-compliance already mentioned, I could not find anything dated after March 2019 on a Canadian government website about Canada, its government, and its plans for AI, especially responsible management/governance. (Should anyone know which government agency has up-to-date information on its responsible use of AI, please let me know in the Comments.)

Canadian Institute for Advanced Research (CIFAR)

The first mention of the Pan-Canadian Artificial Intelligence Strategy is in my analysis of the Canadian federal budget in a March 24, 2017 posting. Briefly, CIFAR received a big chunk of that money. Here’s more about the strategy from the CIFAR Pan-Canadian AI Strategy homepage,

In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.

CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.

The objectives of the strategy are to:

Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.

Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.

Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.

Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.

Responsible AI at CIFAR

You can find Responsible AI in a webspace devoted to what they call AI & Society. Here’s more from the homepage,

CIFAR is leading global conversations about AI’s impact on society.

The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.

Solution Networks

AI Futures Policy Labs

AI & Society Workshops

Building an AI World

Under the category of building an AI World I found this (from CIFAR’s AI & Society homepage),

BUILDING AN AI WORLD

Explore the landscape of global AI strategies.

Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.

I skimmed through the second report and it seems more like a comparative study of various countries’ AI strategies than an overview of responsible use of AI.

Final comments about Responsible AI in Canada and the new reports

I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.

I didn’t find any details about how AI is being integrated into government departments and for what uses. I’d like to know and I’d like to have some say about how it’s used and how the inevitable mistakes will be dealt with.

The great unwashed

What I’ve found is high-minded, but, as far as I can tell, there’s absolutely no interest in talking to the ‘great unwashed’. Those of us who are not experts are being left out of these earlier-stage conversations.

I’m sure we’ll be consulted at some point but it will be long past the time when our opinions and insights could have had an impact and helped us avoid the problems that experts tend not to see. What we’ll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government’s part.

Let’s take this as an example. The Phoenix Pay System was implemented in its first phase on Feb. 24, 2016. As I recall, problems developed almost immediately. The second phase of implementation started April 21, 2016. In May 2016 the government hired consultants to fix the problems. On November 29, 2016 the government minister, Judy Foote, admitted a mistake had been made. In February 2017 the government hired consultants to establish what lessons might be learned. By February 15, 2018 the pay problems backlog amounted to 633,000. Source: James Bagnall, Feb. 23, 2018 ‘timeline’ for the Ottawa Citizen

Do take a look at the timeline, there’s more to it than what I’ve written here and I’m sure there’s more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It’s fascinating though how often a failure to listen presages far deeper problems with a project.

Both Conservative and Liberal governments contributed to the Phoenix debacle but it seems the gravest concern is with senior government bureaucrats. You might think things would have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,

The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.

Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.

In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix − for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.

Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.

Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.

Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”

Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”

The Privy Council Clerk is the top level bureaucrat (and there is only one such clerk) in the civil/public service and I think his quotes are quite telling of “pervasive cultural problems.” There’s a new Privy Council Clerk but from what I can tell he was well trained by his predecessor.

Do* we really need senior government bureaucrats?

I now have an example of bureaucratic interference, specifically with the Global Public Health Intelligence Network (GPHIN), where it would seem that not much has changed, from a December 26, 2020 article by Grant Robertson for the Globe & Mail,

When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19

As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.

With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.

“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”

Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”

It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.

Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.

By late February [2020], Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.

“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”

China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”

It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.

But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.

The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.

However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.

The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July [2020], are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.

Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.

Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.

Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.

If you have time, do take a look at Robertson’s December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands while taking trips to attend meetings (in desirable locations) for which they had no significant or useful input.

The Phoenix and GPHIN debacles bear a resemblance in that senior bureaucrats took over and in a state of blissful ignorance made a series of disastrous decisions bolstered by politicians who seem to neither understand nor care much about the outcomes.

If you think I’m being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton’s 2020 year-end interview with Prime Minister Trudeau (Note: There are some commercials), paying special attention to Trudeau’s answer to the first question,

Responsible AI, eh?

Based on the massive mishandling of the Phoenix Pay System implementation, where top bureaucrats did not follow basic and well established information services procedures, and the mismanagement of the Global Public Health Intelligence Network by top level bureaucrats, I’m not sure I have a lot of confidence in any Canadian government claims about a responsible approach to using artificial intelligence.

Unfortunately, it doesn’t matter as implementation is most likely already taking place here in Canada.

Enough with the pessimism. I feel it’s necessary to end this on a mildly positive note. Hurray to the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those people striving to live up to ‘Peace, Order, and Good Government’, the bedrock principles of the Canadian Parliament.

A lot of mistakes have been made but we also do make a lot of good decisions.

*’Doe’ changed to ‘Do’ on May 14, 2021.

Preventing warmed-up vaccines from becoming useless

One of the major problems with vaccines is that they need to be refrigerated. (The Nanopatch, which additionally wouldn’t require needles or syringes, is my favourite proposed solution and it comes from Australia.) This latest research into making vaccines more long-lasting is from the UK and takes a different approach to the problem.

From a June 8, 2020 news item on phys.org,

Vaccines are notoriously difficult to transport to remote or dangerous places, as they spoil when not refrigerated. Formulations are safe between 2°C and 8°C, but at other temperatures the proteins start to unravel, making the vaccines ineffective. As a result, millions of children around the world miss out on life-saving inoculations.

However, scientists have now found a way to prevent warmed-up vaccines from degrading. By encasing protein molecules in a silica shell, the structure remains intact even when heated to 100°C, or stored at room temperature for up to three years.

The technique for tailor-fitting a vaccine with a silica coat—known as ensilication—was developed by a Bath [University] team in collaboration with the University of Newcastle. This pioneering technology was seen to work in the lab two years ago, and now it has demonstrated its effectiveness in the real world too.

Here’s the lead researcher describing her team’s work,

Ensilication: success in animal trials from University of Bath on Vimeo.

A June 8, 2020 University of Bath press release (also on EurekAlert) fills in more details about the research,

In their latest study, published in the journal Scientific Reports, the researchers sent both ensilicated and regular samples of the tetanus vaccine from Bath to Newcastle by ordinary post (a journey time of over 300 miles, which by post takes a day or two). When doses of the ensilicated vaccine were subsequently injected into mice, an immune response was triggered, showing the vaccine to be active. No immune response was detected in mice injected with unprotected doses of the vaccine, indicating the medicine had been damaged in transit.

Dr Asel Sartbaeva, who led the project from the University of Bath’s Department of Chemistry, said: “This is really exciting data because it shows us that ensilication preserves not just the structure of the vaccine proteins but also the function – the immunogenicity.”

“This project has focused on tetanus, which is part of the DTP (diphtheria, tetanus and pertussis) vaccine given to young children in three doses. Next, we will be working on developing a thermally-stable vaccine for diphtheria, and then pertussis. Eventually we want to create a silica cage for the whole DTP trivalent vaccine, so that every child in the world can be given DTP without having to rely on cold chain distribution.”

Cold chain distribution requires a vaccine to be refrigerated from the moment of manufacturing to the endpoint destination.

Silica is an inorganic, non-toxic material, and Dr Sartbaeva estimates that ensilicated vaccines could be used for humans within five to 15 years. She hopes the technology to silica-wrap proteins will eventually be adopted to store and transport all childhood vaccines, as well as other protein-based products, such as antibodies and enzymes.

“Ultimately, we want to make important medicines stable so they can be more widely available,” she said. “The aim is to eradicate vaccine-preventable diseases in low income countries by using thermally stable vaccines and cutting out dependence on cold chain.”

Currently, up to 50% of vaccine doses are discarded before use due to exposure to suboptimal temperatures. According to the World Health Organisation (WHO), 19.4 million infants did not receive routine life-saving vaccinations in 2018.

Here’s a link to and a citation for the paper,

Ensilicated tetanus antigen retains immunogenicity: in vivo study and time-resolved SAXS characterization by A. Doekhie, R. Dattani, Y-C. Chen, Y. Yang, A. Smith, A. P. Silve, F. Koumanov, S. A. Wells, K. J. Edler, K. J. Marchbank, J. M. H. van den Elsen & A. Sartbaeva. Scientific Reports volume 10, Article number: 9243 (2020) DOI: https://doi.org/10.1038/s41598-020-65876-3 Published 08 June 2020

This paper is open access.

Nanopatch update

I tend to lose track as a science gets closer to commercialization since the science news becomes business news and I almost never scan that sector. It’s been about two-and-a-half years since I featured research that suggested Nanopatch provided more effective polio vaccination than the standard needle and syringe method in a December 20, 2017 post. The latest bits of news have an interesting timeline.

March 2020

Mark Kendall (Wikipedia entry) is the researcher behind the Nanopatch. He’s interviewed in a March 5, 2020 episode (about 20 mins.) in the Pioneers Series (bankrolled by Rolex [yes, the watch company]) on Monocle.com. Coincidentally or not, a new piece of research funded by Vaxxas (the nanopatch company founded by Mark Kendall; on the website you will find a ‘front’ page and a ‘Contact us’ page only) was announced in a March 17, 2020 news item on medical.net,

Vaxxas, a clinical-stage biotechnology company commercializing a novel vaccination platform, today announced the publication in the journal PLoS Medicine of groundbreaking clinical research indicating the broad immunological and commercial potential of Vaxxas’ novel high-density microarray patch (HD-MAP). Using influenza vaccine, the clinical study of Vaxxas’ HD-MAP demonstrated significantly enhanced immune response compared to vaccination by needle/syringe. This is the largest microarray patch clinical vaccine study ever performed.

“With vaccine coated onto Vaxxas HD-MAPs shown to be stable for up to a year at 40°C [emphasis mine], we can offer a truly differentiated platform with a global reach, particularly into low and middle income countries or in emergency use and pandemic situations,” said Angus Forster, Chief Development and Operations Officer of Vaxxas and lead author of the PLoS Medicine publication. “Vaxxas’ HD-MAP is readily fabricated by injection molding to produce a 10 x 10 mm square with more than 3,000 microprojections that are gamma-irradiated before aseptic dry application of vaccine to the HD-MAP’s tips. All elements of device design, as well as coating and QC, have been engineered to enable small, modular, aseptic lines to make millions of vaccine products per week.”

The PLoS publication reported results and analyses from a clinical study involving 210 clinical subjects [emphasis mine]. The clinical study was a two-part, randomized, partially double-blind, placebo-controlled trial conducted at a single Australian clinical site. The clinical study’s primary objective was to measure the safety and tolerability of A/Singapore/GP1908/2015 H1N1 (A/Sing) monovalent vaccine delivered by Vaxxas HD-MAP in comparison to an uncoated Vaxxas HD-MAP and IM [intramuscular] injection of a quadrivalent seasonal influenza vaccine (QIV) delivering approximately the same dose of A/Sing HA protein. Exploratory outcomes were: to evaluate the immune responses to HD-MAP application to the forearm with A/Sing at 4 dose levels in comparison to IM administration of A/Sing at the standard 15 μg HA per dose per strain, and to assess further measures of immune response through additional assays and assessment of the local skin response via punch biopsy of the HD-MAP application sites. Local skin response, serological, mucosal and cellular immune responses were assessed pre- and post-vaccination.

Here’s a link to and a citation for the latest ‘nanopatch’ paper,

Safety, tolerability, and immunogenicity of influenza vaccination with a high-density microarray patch: Results from a randomized, controlled phase I clinical trial by Angus H. Forster, Katey Witham, Alexandra C. I. Depelsenaire, Margaret Veitch, James W. Wells, Adam Wheatley, Melinda Pryor, Jason D. Lickliter, Barbara Francis, Steve Rockman, Jesse Bodle, Peter Treasure, Julian Hickling, Germain J. P. Fernando. DOI: https://doi.org/10.1371/journal.pmed.1003024 PLOS (Public Library of Science) Published: March 17, 2020

This is an open access paper.

May 2020

Two months later, Merck, an American multinational pharmaceutical company, showed some serious interest in the ‘nanopatch’. A May 28, 2020 article by Chris Newmarker for drugdeliverybusiness.com announces the news (Note: Links have been removed),

Merck has exercised its option to use Vaxxas‘ High Density Microarray Patch (HD-MAP) platform as a delivery platform for a vaccine candidate, the companies announced today [Thursday, May 28, 2020].

Also today, Vaxxas announced that German manufacturing equipment maker Harro Höfliger will help Vaxxas develop a high-throughput, aseptic manufacturing line to make vaccine products based on Vaxxas’ HD-MAP technology. Initial efforts will focus on having a pilot line operating in 2021 to support late-stage clinical studies — with a goal of single, aseptic-based lines being able to churn out 5 million vaccine products a week.

“A major challenge in commercializing microarray patches — like Vaxxas’ HD-MAP — for vaccination is the ability to manufacture at industrially-relevant scale, while meeting stringent sterility and quality standards. Our novel device design along with our innovative vaccine coating and quality verification technologies are an excellent fit for integration with Harro Höfliger’s aseptic process automation platforms. Adopting a modular approach, it will be possible to achieve output of tens-of-millions of vaccine-HD-MAP products per week,” Hoey [David L. Hoey, President and CEO of Vaxxas] said.

Vaxxas also claims that the patches can deliver vaccine more efficiently — a positive when people around the world are clamoring for a vaccine against COVID-19. The company points to a recent [March 17, 2020] clinical study in which their micropatch delivering a sixth of an influenza vaccine dose produced an immune response comparable to a full dose by intramuscular injection. A two-thirds dose by HD-MAP generated significantly faster and higher overall antibody responses.

As I noted earlier, this is an interesting timeline.

Final comment

In the end, what all of this means is that there may be more than one way to deal with vaccines and medicines that deteriorate all too quickly unless refrigerated. I wish all of these researchers the best.

COVID-19: caution and concern not panic

There’s a lot of information being pumped out about COVID-19 and not all of it is as helpful as it might be. In fact, the sheer volume can seem overwhelming despite one’s best efforts to be calm.

Here are a few things I’ve used to help relieve some of the pressure as numbers in Canada keep rising.

Inspiration from the Italians

I was thrilled to find Emily Rumball’s March 18, 2020 article titled, “Italians making the most of quarantine is just what the world needs right now (VIDEOS),” on the Daily Hive website. The couple dancing on the balcony while Ginger Rogers and Fred Astaire are shown dancing on the wall above is my favourite.

As the Italians practice social distancing and exercise caution, they are also demonstrating that “life goes on” even while struggling as one of the countries hit hardest by COVID-19.

Investigating viruses and the 1918/19 pandemic vs. COVID-19

There has been some mention of and comparison to the 1918/19 pandemic (also known as the Spanish flu) in articles by people who don’t seem to be particularly well informed about that earlier pandemic. Susan Baxter offers a concise and scathing explanation for why the 1918/19 situation deteriorated as much as it did in her February 8, 2010 posting. As for this latest pandemic (COVID-19), she explains what a virus actually is and suggests we all calm down in her March 17, 2020 posting. BTW, she has an interdisciplinary PhD for work largely focused on health economics. She is also a lecturer in the health sciences programme at Simon Fraser University (Vancouver, Canada). Full disclosure: She and I have a longstanding friendship.

Marilyn J. Roossinck, a professor of Plant Pathology and Environmental Microbiology at Pennsylvania State University, wrote a February 20, 2020 essay for The Conversation titled, “What are viruses anyway, and why do they make us so sick? 5 questions answered,”

4. SARS was a formidable foe, and then seemed to disappear. Why?

Measures to contain SARS started early, and they were very successful. The key is to stop the chain of transmission by isolating infected individuals. SARS had a short incubation period; people generally showed symptoms in two to seven days. There were no documented cases of anyone being a source of SARS without showing symptoms.

Stopping the chain of transmission is much more difficult when the incubation time is much longer, or when some people don’t get symptoms at all. This may be the case with the virus causing COVID-19, so stopping it may take more time.

1918/19 pandemic vs. COVID-19

Angela Betsaida B. Laguipo, who has a Bachelor of Nursing degree from the University of Baguio in the Philippines and is currently completing her Master’s degree, has written a March 9, 2020 article for News Medical comparing the two pandemics,

The COVID-19 is fast spreading because traveling is an everyday necessity today, with flights from one country to another accessible to most.

Some places did manage to keep the virus at bay in 1918 with traditional and effective methods, such as closing schools, banning public gatherings, and locking down villages, which has been performed in Wuhan City, in Hubei province, China, where the coronavirus outbreak started. The same method is now being implemented in Northern Italy, where COVID-19 had killed more than 400 people.

The 1918 Spanish flu has a higher mortality rate of an estimated 10 to 20 percent, compared to 2 to 3 percent in COVID-19. The global mortality rate of the Spanish flu is unknown since many cases were not reported back then. About 500 million people or one-third of the world’s population contracted the disease, while the number of deaths was estimated to be up to 50 million.

During that time, public funds are mostly diverted to military efforts, and a public health system was still a budding priority in most countries. In most places, only the middle class or the wealthy could afford to visit a doctor. Hence, the virus has [sic] killed many people in poor urban areas where there are poor nutrition and sanitation. Many people during that time had underlying health conditions, and they can’t afford to receive health services.

I recommend reading Laguipo’s article in its entirety right down to the sources she cites at the end of her article.

Ed Yong’s March 20, 2020 article for The Atlantic, “Why the Coronavirus Has Been So Successful; We’ve known about SARS-CoV-2 for only three months, but scientists can make some educated guesses about where it came from and why it’s behaving in such an extreme way,” provides more information about what is currently known about the coronavirus, SARS-CoV-2,

One of the few mercies during this crisis is that, by their nature, individual coronaviruses are easily destroyed. Each virus particle consists of a small set of genes, enclosed by a sphere of fatty lipid molecules, and because lipid shells are easily torn apart by soap, 20 seconds of thorough hand-washing can take one down. Lipid shells are also vulnerable to the elements; a recent study shows that the new coronavirus, SARS-CoV-2, survives for no more than a day on cardboard, and about two to three days on steel and plastic. These viruses don’t endure in the world. They need bodies.

But why do some people with COVID-19 get incredibly sick, while others escape with mild or nonexistent symptoms? Age is a factor. Elderly people are at risk of more severe infections possibly because their immune system can’t mount an effective initial defense, while children are less affected because their immune system is less likely to progress to a cytokine storm. But other factors—a person’s genes, the vagaries of their immune system, the amount of virus they’re exposed to, the other microbes in their bodies—might play a role too. In general, “it’s a mystery why some people have mild disease, even within the same age group,” Iwasaki [Akiko Iwasaki of the Yale School of Medicine] says.

We still have a lot to learn about this.

Going nuts and finding balance with numbers

Generally speaking, I find numbers help me to put this situation into perspective. It seems I’m not alone; Dr. Daniel Gillis’ (University of Guelph in Ontario, Canada) March 18, 2020 blog post is titled, Statistics In A Time of Crisis.

Hearkening back in history, the Wikipedia entry for Spanish flu offers a low of 17M deaths in a 2018 estimate to a high of 100M deaths in a 2005 estimate. At this writing (Friday, March 20, 2020 at 3 pm PT), the number of coronavirus cases worldwide is 272,820 with 11,313 deaths.
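For a rough sense of scale, here is the crude arithmetic behind those two figures. It is a back-of-the-envelope snapshot only; as the modelling discussion further down explains, a naive deaths-divided-by-cases ratio is a biased estimate of the real death rate. The only inputs are the numbers quoted above.

# Crude snapshot of the figures quoted above (March 20, 2020).
# This is only a ratio of reported deaths to reported cases; it is not
# a true fatality rate (see the modelling discussion later in this post).
reported_cases = 272_820
reported_deaths = 11_313

crude_ratio = reported_deaths / reported_cases
print(f"Crude deaths/cases ratio: {crude_ratio:.1%}")  # roughly 4.1%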

Articles like Michael Schulman’s March 16, 2020 piece for the New Yorker might not be as helpful as one might hope (Note: Links have been removed),

Last Wednesday night [March 11, 2020], not long after President Trump’s Oval Office address, I called my mother to check in about the, you know, unprecedented global health crisis [emphasis mine] that’s happening. She told me that she and my father were in a cab on the way home from a fun dinner at the Polo Bar, in midtown Manhattan, with another couple who were old friends.

“You went to a restaurant?!” I shrieked. This was several days after she had told me, through sniffles, that she was recovering from a cold but didn’t see any reason that she shouldn’t go to the school where she works. Also, she was still hoping to make a trip to Florida at the end of the month. My dad, a lawyer, was planning to go into the office on Thursday, but thought that he might work from home on Friday, if he could figure out how to link up his personal computer. …

… I’m thirty-eight, and my mother and father are sixty-eight and seventy-four, respectively. Neither is retired, and both are in good shape. But people sixty-five and older—more than half of the baby-boomer population—are more susceptible to COVID-19 and have a higher mortality rate, and my parents’ blithe behavior was as unsettling as the frantic warnings coming from hospitals in Italy.

Clearly, Schulman is concerned about his parents’ health and well-being but the tone of near hysteria is a bit off-putting. We’re not in a crisis (exception: the Italians and, possibly, the Spanish and the French)—yet.

Tyler Dawson’s March 20, 2020 article in The Province newspaper (in Vancouver, British Columbia) offers dire consequences from COVID-19 before pivoting,

COVID-19 will leave no Canadian untouched.

Travel plans halted. First dates postponed. School semesters interrupted. Jobs lost. Retirement savings decimated. Some of us will know someone who has gotten sick, or tragically, died from the virus.

By now we know the terminology: social distancing, flatten the curve. Across the country, each province is taking measures to prepare, to plan for care, and the federal government has introduced financial measures amounting to more than three per cent of the country’s GDP to float the economy onward.

The response, says Steven Taylor, a University of British Columbia psychiatry professor and author of The Psychology of Pandemics, is a “balancing act.” [emphasis mine] Keep people alert, but neither panicked nor tuned out.

“You need to generate some degree of anxiety that gets people’s attention,” says Taylor. “If you overstate the message it could backfire.”

Prepare for uncertainty

In the same way that experts still cannot come up with a definitive death rate for the 1918/19 pandemic, they are having trouble with this one too, although now they’re trying to model the future rather than establish what happened in the past. David Adam’s March 12, 2020 article for The Scientist provides some insight into the difficulties (Note: Links have been removed),

Like any other models, the projections of how the outbreak will unfold, how many people will become infected, and how many will die, are only as reliable as the scientific information they rest on. And most modelers’ efforts so far have focused on improving these data, rather than making premature predictions.

“Most of the work that modelers have done recently or in the first part of the epidemic hasn’t really been coming up with models and predictions, which is I think how most people think of it,” says John Edmunds, who works in the Centre for the Mathematical Modelling of Infectious Diseases at the London School of Hygiene & Tropical Medicine. “Most of the work has really been around characterizing the epidemiology, trying to estimate key parameters. I don’t really class that as modeling but it tends to be the modelers that do it.”

These variables include key numbers such as the disease incubation period, how quickly the virus spreads through the population, and, perhaps most contentiously, the case-fatality ratio. This sounds simple: it’s the proportion of infected people who die. But working it out is much trickier than it looks. “The non-specialists do this all the time and they always get it wrong,” Edmunds says. “If you just divide the total numbers of deaths by the total numbers of cases, you’re going to get the wrong answer.”

Earlier this month, Tedros Adhanom Ghebreyesus, the head of the World Health Organization, dismayed disease modelers when he said COVID-19 (the disease caused by the SARS-CoV-2 coronavirus) had killed 3.4 percent of reported cases, and that this was more severe than seasonal flu, which has a death rate of around 0.1 percent. Such a simple calculation does not account for the two to three weeks it usually takes someone who catches the virus to die, for example. And it assumes that reported cases are an accurate reflection of how many people are infected, when the true number will be much higher and the true mortality rate much lower.

Edmunds calls this kind of work “outbreak analytics” rather than true modeling, and he says the results of various specialist groups around the world are starting to converge on COVID-19’s true case-fatality ratio, which seems to be about 1 percent. [emphasis mine]

The 1% estimate in Adam’s article accords with the estimates of Jeremy Samuel Faust (an emergency medicine physician at Brigham and Women’s Hospital in Boston, faculty in its division of health policy and public health, and an instructor at Harvard Medical School) in a March 4, 2020 article (COVID-19 Isn’t As Deadly As We Think, featured in my March 9, 2020 posting).
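To make the point in Adam’s article concrete, here is a minimal sketch of why the simple deaths-divided-by-cases calculation misleads, and of the two corrections the modellers describe: deaths reported today mostly come from people infected two to three weeks earlier, and reported cases miss most mild or asymptomatic infections. All of the numbers below (the case counts, the lag, and the one-in-four reporting assumption) are made up for illustration; they are not figures from Adam’s article or from the modelling groups he cites.

# Minimal sketch of naive vs. adjusted fatality ratios; every number here
# is an illustrative assumption, not data from the article.
deaths_today = 8_700
cases_today = 214_000
cases_three_weeks_ago = 88_000   # roughly when today's deaths were infected
reporting_rate = 0.25            # assume only 1 in 4 infections is ever reported

naive_cfr = deaths_today / cases_today                   # ~4.1%: ignores the lag
lag_adjusted_cfr = deaths_today / cases_three_weeks_ago  # ~9.9%: corrects for the delay
rough_ifr = lag_adjusted_cfr * reporting_rate            # ~2.5%: corrects for undercounting

print(f"naive: {naive_cfr:.1%}, lag-adjusted: {lag_adjusted_cfr:.1%}, "
      f"rough infection fatality: {rough_ifr:.1%}")

The two corrections pull in opposite directions, which is part of why early estimates ranged so widely before converging on roughly 1 percent.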

Steven Lewis (a health policy consultant formerly based in Saskatchewan, Canada; now living in Australia) covers some of the same ground in a March 17, 2020 article for the Canadian Broadcasting Corporation’s (CBC) online news site, offering a somewhat higher projected death rate while refusing to commit to it,

Imagine you’re a chief public health officer and you’re asked the question on everyone’s mind: how deadly is the COVID-19 outbreak?

With the number of cases worldwide approaching 200,000, and 1,000 or more cases in 15 countries, you’d think there would be an answer. But the more data we see, the tougher it is to come up with a hard number.

Overall, the death rate is around four per cent — of reported cases. That’s also the death rate in China, which to date accounts for just under half the total number of global cases.

China is the only country where a) the outcome of almost all cases is known (85 per cent have recovered), and b) the spread has been stopped (numbers plateaued about a month ago). 

A four per cent death rate is pretty high — about 40 times more deadly than seasonal flu — but no experts believe that is the death rate. The latest estimate is that it is around 1.5 per cent. [emphasis mine] Other models suggest that it may be somewhat lower. 

The true rate can be known only if every case is known and confirmed by testing — including the asymptomatic or relatively benign cases, which comprise 80 per cent or more of the total — and all cases have run their course (people have either recovered or died). Aside from those in China, almost all cases identified are still active. 

Unless a jurisdiction systematically tests a large random sample of its population, we may never know the true rate of infection or the real death rate. 

Yet for all this unavoidable uncertainty, it is still odd that the rates vary so widely by country.

His description of the situation in Europe is quite interesting and worthwhile if you have the time to read it.
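Lewis’s point about testing “a large random sample” is, at bottom, a survey-sampling argument: a random sample of even a few thousand people pins down the infection rate with a quantifiable margin of error, something a much larger pile of non-random test results cannot do. Here is a minimal sketch with made-up sample numbers (and ignoring test sensitivity and specificity):

import math

# Minimal sketch of estimating an infection rate from a random sample.
# The sample size and positive count are made up for illustration.
sample_size = 5_000
positives = 60

infection_rate = positives / sample_size
# 95% margin of error for a simple random sample (normal approximation)
margin = 1.96 * math.sqrt(infection_rate * (1 - infection_rate) / sample_size)

print(f"Estimated infection rate: {infection_rate:.2%} ± {margin:.2%}")

With an estimate like that in hand, the death-rate calculation no longer depends on who happened to get tested.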

In the last article I’m including here, Murray Brewster offers some encouraging words in his March 20, 2020 piece about the preparations being made by the Canadian Armed Forces (CAF),

The Canadian military is preparing to respond to multiple waves of the COVID-19 pandemic which could stretch out over a year or more, the country’s top military commander said in his latest planning directive.

Gen. Jonathan Vance, chief of the defence staff, warned in a memo issued Thursday that requests for assistance can be expected “from all echelons of government and the private sector and they will likely come to the Department [of National Defence] through multiple points of entry.”

The directive notes the federal government has not yet directed the military to move into response mode, but if or when it does, a single government panel — likely a deputy-minister level inter-departmental task force — will “triage requests and co-ordinate federal responses.”

It also warns that members of the military will contract the novel coronavirus, “potentially threatening the integrity” of some units.

The notion that the virus caseload could recede and then return is a feature of federal government planning.

The Public Health Agency of Canada has put out a notice looking for people to staff its Centre for Emergency Preparedness and Response during the crisis and the secondment is expected to last between 12 and 24 months.

The Canadian military, unlike those in some other nations, has high-readiness units available. Vance said they are already set to reach out into communities to help when called.

Planners are also looking in more detail at possible missions — such as aiding remote communities in the Arctic where an outbreak could cripple critical infrastructure.

Defence analyst Dave Perry said this kind of military planning exercise is enormously challenging and complicated in normal times, let alone when most of the federal civil service has been sent home.

“The idea that they’re planning to be at this for year is absolutely bang on,” said Perry, a vice-president at the Canadian Global Affairs Institute.

In other words, concern and caution are called for, not panic. I realize this post has a strongly Canada-centric focus but I’m hopeful others elsewhere will find this helpful.

Genes, intelligence, Chinese CRISPR (clustered regularly interspaced short palindromic repeats) babies, and other children

This started out as an update and now it’s something else. What follows is a brief introduction to the Chinese CRISPR twins; a brief examination of parents, children, and competitiveness; and, finally, a suggestion that genes may not be what we thought. I also include a discussion about how some think scientists should respond when they know beforehand that one of their own is crossing an ethical line. Basically, this is a complex topic and I am attempting to interweave a number of competing lines of inquiry into one narrative about human nature and the latest genetics obsession.

Introduction to the Chinese CRISPR twins

Back in November 2018 I covered the story of the Chinese scientist, He Jiankui, who had used CRISPR technology to edit genes in embryos that were subsequently implanted in a waiting mother (apparently there could be as many as eight mothers), with the babies being brought to term despite an international agreement (of sorts) not to do that kind of work. At this time, we know of the twins, Lulu and Nana, but, by now, there may be more babies. (I have much more detail about the initial controversies in my November 28, 2018 posting.)

It seems the drama has yet to finish unfolding. There may be another consequence of He’s genetic tinkering.

Could the CRISPR babies, Lulu and Nana, have enhanced cognitive abilities?

Yes, according to Antonio Regalado’s February 21, 2019 article (behind a paywall) for MIT’s (Massachusetts Institute of Technology) Technology Review, those engineered babies may have enhanced abilities for learning and remembering.

For those of us who can’t get beyond the paywall, others have been successful. Josh Gabbatiss in his February 22, 2019 article for independent.co.uk provides some detail,

The world’s first gene edited babies may have had their brains unintentionally altered – and perhaps cognitively enhanced – as a result of the controversial treatment undertaken by a team of Chinese scientists.

Dr He Jiankui and his team allegedly deleted a gene from a number of human embryos before implanting them in their mothers, a move greeted with horror by the global scientific community. The only known successful birth so far is the case of twin girls Nana and Lulu.

The now disgraced scientist claimed that he removed a gene called CCR5 [emphasis mine] from their embryos in an effort to make the twins resistant to infection by HIV.

But another twist in the saga has now emerged after a new paper provided more evidence that the impact of CCR5 deletion reaches far beyond protection against dangerous viruses – people who naturally lack this gene appear to recover more quickly from strokes, and even go further in school. [emphasis mine]

Dr Alcino Silva, a neurobiologist at the University of California, Los Angeles, who helped identify this role for CCR5 said the work undertaken by Dr Jiankui likely did change the girls’ brains.

“The simplest interpretation is that those mutations will probably have an impact on cognitive function in the twins,” he told the MIT Technology Review.

The connection immediately raised concerns that the gene was targeted due to its known links with intelligence, which Dr Silva said was his immediate response when he heard the news.

… there is no evidence that this was Dr Jiankui’s goal and at a press conference organised after the initial news broke, he said he was aware of the work but was “against using genome editing for enhancement”.

…

Claire Maldarelli’s February 22, 2019 article for Popular Science provides more information about the CCR5 gene/protein (Note: Links have been removed),

CCR5 is a protein that sits on the surface of white blood cells, a major component of the human immune system. There, it allows HIV to enter and infect a cell. A chunk of the human population naturally carries a mutation that makes CCR5 nonfunctional (one study found that 10 percent of Europeans have this mutation), which often results in a smaller protein size and one that isn’t located on the outside of the cell, preventing HIV from ever entering and infecting the human immune system.

The goal of the Chinese researchers’ work, led by He Jiankui of the Southern University of Science and Technology located in Shenzhen, was to tweak the embryos’ genome to lack CCR5, ensuring the babies would be immune to HIV.

But genetics is rarely that simple.

In recent years, the CCR5 gene has been a target of ongoing research, and not just for its relationship to HIV. In an attempt to understand what influences memory formation and learning in the brain, a group of researchers at UCLA found that lowering the levels of CCR5 production enhanced both learning and memory formation. This connection led those researchers to think that CCR5 could be a good drug target for helping stroke victims recover: Relearning how to move, walk, and talk is a key component to stroke rehabilitation.

… promising research, but it begs the question: What does that mean for the babies who had their CCR5 genes edited via CRISPR prior to their birth? Researchers speculate that the alteration will have effects on the children’s cognitive functioning. …

John Loeffler’s February 22, 2019 article for interestingengineering.com notes that there are still many questions about He’s (the scientist’s surname) research, including: did he (the pronoun) do what he claimed? (Note: Links have been removed),

Considering that no one knows for sure whether He has actually done as he and his team claim, the swiftness of the condemnation of his work—unproven as it is—shows the sensitivity around this issue.

Whether He did in fact edit Lulu and Nana’s genes, it appears he didn’t intend to impact their cognitive capacities. According to MIT Technology Review, not a single researcher studying CCR5’s role in intelligence was contacted by He, even as other doctors and scientists were sought out for advice about his project.

This further adds to the alarm as there is every expectation that He should have known about the connection between CCR5 and cognition.

At a gathering of gene-editing researchers in Hong Kong two days after the birth of the potentially genetically-altered twins was announced, He was asked about the potential impact of erasing CCR5 from the twins DNA on their mental capacity.

He responded that he knew about the potential cognitive link shown in Silva’s 2016 research. “I saw that paper, it needs more independent verification,” He said, before adding that “I am against using genome editing for enhancement.”

The problem, as Silva sees it, is that He may be blazing the trail for exactly that outcome, whether He intends to or not. Silva says that after his 2016 research was published, he received an uncomfortable amount of attention from some unnamed, elite Silicon Valley leaders who seem to be expressing serious interest in using CRISPR to give their children’s brains a boost through gene editing. [emphasis mine]

As such, Silva can be forgiven for not quite believing He’s claims that he wasn’t intending to alter the human genome for enhancement. …

The idea of designer babies isn’t new. As far back as Plato, the thought of using science to “engineer” a better human has been tossed about, but other than selective breeding, there really hasn’t been a path forward.

In the late 1800s, early 1900s, Eugenics made a real push to accomplish something along these lines, and the results were horrifying, even before Nazism. After eugenics mid-wifed the Holocaust in World War II, the concept of designer children has largely been left as fodder for science fiction since few reputable scientists would openly declare their intention to dabble in something once championed and pioneered by the greatest monsters of the 20th century.

Memories have faded though, and CRISPR significantly changes this decades-old calculus. CRISPR makes it easier than ever to target specific traits in order to add or subtract them from an embryos genetic code. Embryonic research is also a diverse enough field that some scientist could see pioneering designer babies as a way to establish their star power in academia while getting their names in the history books, [emphasis mine] all while working in relative isolation. They only need to reveal their results after the fact and there is little the scientific community can do to stop them, unfortunately.

When He revealed his research and data two days after announcing the births of Lulu and Nana, the gene-scientists at the Hong Kong conference were not all that impressed with the quality of He’s work. He has not provided access for fellow researchers to either his data on Lulu, Nana, and their family’s genetic data so that others can verify that Lulu and Nana’s CCR5 genes were in fact eliminated.

This almost rudimentary verification and validation would normally accompany a major announcement such as this. Neither has He’s work undergone a peer-review process and it hasn’t been formally published in any scientific journal—possibly for good reason.

Researchers such as Eric Topol, a geneticist at the Scripps Research Institute, have been finding several troubling signs in what little data He has released. Topol says that the editing itself was not precise and show “all kinds of glitches.”

Gaetan Burgio, a geneticist at the Australian National University, is likewise unimpressed with the quality of He’s work. Speaking of the slides He showed at the conference to support his claim, Burgio calls it amateurish, “I can believe that he did it because it’s so bad.”

Worse of all, its entirely possible that He actually succeeded in editing Lulu and Nana’s genetic code in an ad hoc, unethical, and medically substandard way. Sadly, there is no shortage of families with means who would be willing to spend a lot of money to design their idea of a perfect child, so there is certainly demand for such a “service.”

It’s nice to know (sarcasm icon) that the ‘Silicon Valley elite’ are willing to volunteer their babies for scientific experimentation in a bid to enhance intelligence.

The ethics of not saying anything

Natalie Kofler, a molecular biologist, wrote a February 26, 2019 Nature opinion piece and call to action on the subject of why scientists who were ‘in the know’ remained silent about He’s work prior to his announcements,

Millions [?] were shocked to learn of the birth of gene-edited babies last year, but apparently several scientists were already in the know. Chinese researcher He Jiankui had spoken with them about his plans to genetically modify human embryos intended for pregnancy. His work was done before adequate animal studies and in direct violation of the international scientific consensus that CRISPR–Cas9 gene-editing technology is not ready or appropriate for making changes to humans that could be passed on through generations.

Scholars who have spoken publicly about their discussions with He described feeling unease. They have defended their silence by pointing to uncertainty over He’s intentions (or reassurance that he had been dissuaded), a sense of obligation to preserve confidentiality and, perhaps most consistently, the absence of a global oversight body. Others who have not come forward probably had similar rationales. But He’s experiments put human health at risk; anyone with enough knowledge and concern could have posted to blogs or reached out to their deans, the US National Institutes of Health or relevant scientific societies, such as the Association for Responsible Research and Innovation in Genome Editing (see page 440). Unfortunately, I think that few highly established scientists would have recognized an obligation to speak up.

I am convinced that this silence is a symptom of a broader scientific cultural crisis: a growing divide between the values upheld by the scientific community and the mission of science itself.

A fundamental goal of the scientific endeavour is to advance society through knowledge and innovation. As scientists, we strive to cure disease, improve environmental health and understand our place in the Universe. And yet the dominant values ingrained in scientists centre on the virtues of independence, ambition and objectivity. That is a grossly inadequate set of skills with which to support a mission of advancing society.

Editing the genes of embryos could change our species’ evolutionary trajectory. Perhaps one day, the technology will eliminate heritable diseases such as sickle-cell anaemia and cystic fibrosis. But it might also eliminate deafness or even brown eyes. In this quest to improve the human race, the strengths of our diversity could be lost, and the rights of already vulnerable populations could be jeopardized.

Decisions about how and whether this technology should be used will require an expanded set of scientific virtues: compassion to ensure its applications are designed to be just, humility to ensure its risks are heeded and altruism to ensure its benefits are equitably distributed.

Calls for improved global oversight and robust ethical frameworks are being heeded. Some researchers who apparently knew of He’s experiments are under review by their universities. Chinese investigators have said He skirted regulations and will be punished. But punishment is an imperfect motivator. We must foster researchers’ sense of societal values.

Fortunately, initiatives popping up throughout the scientific community are cultivating a scientific culture informed by a broader set of values and considerations. The Scientific Citizenship Initiative at Harvard University in Cambridge, Massachusetts, trains scientists to align their research with societal needs. The Summer Internship for Indigenous Peoples in Genomics offers genomics training that also focuses on integrating indigenous cultural perspectives into gene studies. The AI Now Institute at New York University has initiated a holistic approach to artificial-intelligence research that incorporates inclusion, bias and justice. And Editing Nature, a programme that I founded, provides platforms that integrate scientific knowledge with diverse cultural world views to foster the responsible development of environmental genetic technologies.

Initiatives such as these are proof [emphasis mine] that science is becoming more socially aware, equitable and just. …

I’m glad to see there’s work being done on introducing a broader set of values into the scientific endeavour. That said, these programmes seem to be voluntary, i.e., people self-select, and those most likely to participate in these programmes are the ones who might be inclined to integrate social values into their work in the first place.

This doesn’t address the issue of how to deal with unscrupulous governments pressuring scientists to create designer babies, or with hypercompetitive and possibly unscrupulous individuals, such as the ‘Silicon Valley insiders’ mentioned in Loeffler’s article, teaming up with scientists who will stop at nothing to get their place in the history books.

Like Kofler, I’m encouraged to see these programmes but I’m a little less convinced that they will be enough. What form it might take I don’t know, but I think something a little more punitive is also called for.

CCR5 and freedom from HIV

I’ve added this piece about the Berlin and London patients because, back in November 2018, I failed to realize how compelling the idea of eradicating susceptibility to AIDS/HIV might be. Reading about some real life remissions helped me to understand some of He’s stated motivations a bit better. Unfortunately, there’s a major drawback described here in a March 5, 2019 news item on CBC (Canadian Broadcasting Corporation) online news attributed to Reuters,

An HIV-positive man in Britain has become the second known adult worldwide to be cleared of the virus that causes AIDS after he received a bone marrow transplant from an HIV-resistant donor, his doctors said.

The therapy had an early success with a man known as “the Berlin patient,” Timothy Ray Brown, a U.S. man treated in Germany who is 12 years post-transplant and still free of HIV. Until now, Brown was the only person thought to have been cured of infection with HIV, the virus that causes AIDS.

Such transplants are dangerous and have failed in other patients. They’re also impractical to try to cure the millions already infected.

In the latest case, the man known as “the London patient” has no trace of HIV infection, almost three years after he received bone marrow stem cells from a donor with a rare genetic mutation that resists HIV infection — and more than 18 months after he came off antiretroviral drugs.

“There is no virus there that we can measure. We can’t detect anything,” said Ravindra Gupta, a professor and HIV biologist who co-led a team of doctors treating the man.

Gupta described his patient as “functionally cured” and “in remission,” but cautioned: “It’s too early to say he’s cured.”

Gupta, now at Cambridge University, treated the London patient when he was working at University College London. The man, who has asked to remain anonymous, had contracted HIV in 2003, Gupta said, and in 2012 was also diagnosed with a type of blood cancer called Hodgkin’s lymphoma.

In 2016, when he was very sick with cancer, doctors decided to seek a transplant match for him.

“This was really his last chance of survival,” Gupta told Reuters.

Doctors found a donor with a gene mutation known as CCR5 delta 32, which confers resistance to HIV. About one per cent of people descended from northern Europeans have inherited the mutation from both parents and are immune to most HIV. The donor had this double copy of the mutation.

That was “an improbable event,” Gupta said. “That’s why this has not been observed more frequently.”

Most experts say it is inconceivable such treatments could be a way of curing all patients. The procedure is expensive, complex and risky. To do this in others, exact match donors would have to be found in the tiny proportion of people who have the CCR5 mutation.

Specialists said it is also not yet clear whether the CCR5 resistance is the only key [emphasis mine] — or whether the graft-versus-host disease may have been just as important. Both the Berlin and London patients had this complication, which may have played a role in the loss of HIV-infected cells, Gupta said.
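To get a sense of why Gupta calls such a match “an improbable event,” here’s a rough back-of-the-envelope sketch (mine, not the researchers’ calculation). It takes the roughly one per cent CCR5-delta-32 homozygote frequency quoted above and pairs it with a purely illustrative probability that any given registry donor is an adequate tissue match; that second number is an assumption made up for illustration only.

```python
# Rough, illustrative arithmetic only: why matched CCR5-delta-32 donors are rare.
# The ~1% homozygote figure comes from the article above; the tissue-match
# probability below is a made-up placeholder, not a real registry statistic.

P_DELTA32_HOMOZYGOUS = 0.01  # ~1% of people of northern European descent (quoted above)
P_TISSUE_MATCH = 0.001       # assumed chance a random donor is an adequate match for a given patient

# Treating the two as independent (a simplification), the chance that a random
# donor is both an adequate tissue match and CCR5-delta-32 homozygous:
p_usable_donor = P_DELTA32_HOMOZYGOUS * P_TISSUE_MATCH

# Expected number of donors screened before finding one such candidate:
expected_screened = 1 / p_usable_donor

print(f"Probability a random donor works: {p_usable_donor:.4%}")
print(f"Donors screened, on average, to find one: {expected_screened:,.0f}")
```

With these assumed numbers you would expect to screen on the order of 100,000 donors to find a single suitable candidate, which is part of why the approach is considered impractical for the millions already living with HIV.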

Not only is there some question as to what role the CCR5 gene plays, there’s also a question as to whether or not we know what role genes play.

A big question: are genes what we thought?

Ken Richardson’s January 3, 2019 article for Nautilus (I stumbled across it on May 14, 2019 so I’m late to the party), titled “It’s the End of the Gene As We Know It: We are not nearly as determined by our genes as once thought,” makes and supports a startling statement (Note: A link has been removed),

We’ve all seen the stark headlines: “Being Rich and Successful Is in Your DNA” (Guardian, July 12); “A New Genetic Test Could Help Determine Children’s Success” (Newsweek, July 10); “Our Fortunetelling Genes” make us (Wall Street Journal, Nov. 16); and so on.

The problem is, many of these headlines are not discussing real genes at all, but a crude statistical model of them, involving dozens of unlikely assumptions. Now, slowly but surely, that whole conceptual model of the gene is being challenged.

We have reached peak gene, and passed it.

The preferred dogma started to appear in different versions in the 1920s. It was aptly summarized by renowned physicist Erwin Schrödinger in a famous lecture in Dublin in 1943. He told his audience that chromosomes “contain, in some kind of code-script, the entire pattern of the individual’s future development and of its functioning in the mature state.”

Around that image of the code a whole world order of rank and privilege soon became reinforced. These genes, we were told, come in different “strengths,” different permutations forming ranks that determine the worth of different “races” and of different classes in a class-structured society. A whole intelligence testing movement was built around that preconception, with the tests constructed accordingly.

The image fostered the eugenics and Nazi movements of the 1930s, with tragic consequences. Governments followed a famous 1938 United Kingdom education commission in decreeing that, “The facts of genetic inequality are something that we cannot escape,” and that, “different children … require types of education varying in certain important respects.”

Today, 1930s-style policy implications are being drawn once again. Proposals include gene-testing at birth for educational intervention, embryo selection for desired traits, identifying which classes or “races” are fitter than others, and so on. And clever marketizing now sees millions of people scampering to learn their genetic horoscopes in DNA self-testing kits. [emphasis mine]

So the hype now pouring out of the mass media is popularizing what has been lurking in the science all along: a gene-god as an entity with almost supernatural powers. Today it’s the gene that, in the words of the Anglican hymn, “makes us high and lowly and orders our estate.”

… at the same time, a counter-narrative is building, not from the media but from inside science itself.

So it has been dawning on us that there is no prior plan or blueprint for development: Instructions are created on the hoof, far more intelligently than is possible from dumb DNA. That is why today’s molecular biologists are reporting “cognitive resources” in cells; “bio-information intelligence”; “cell intelligence”; “metabolic memory”; and “cell knowledge”—all terms appearing in recent literature.1,2 “Do cells think?” is the title of a 2007 paper in the journal Cellular and Molecular Life Sciences.3 On the other hand the assumed developmental “program” coded in a genotype has never been described.


It is such discoveries that are turning our ideas of genetic causation inside out. We have traditionally thought of cell contents as servants to the DNA instructions. But, as the British biologist Denis Noble insists in an interview with the writer Suzan Mazur,1 “The modern synthesis has got causality in biology wrong … DNA on its own does absolutely nothing [emphasis mine] until activated by the rest of the system … DNA is not a cause in an active sense. I think it is better described as a passive data base which is used by the organism to enable it to make the proteins that it requires.”

I highly recommend reading Richardson’s article in its entirety. As well, you may want to read his book, “Genes, Brains and Human Potential: The Science and Ideology of Intelligence.”

As for “DNA on its own doing absolutely nothing,” that might be a bit of an eye-opener for the Silicon Valley elite types investigating cognitive advantages attributed to the lack of a CCR5 gene. Meanwhile, there are scientists inserting a human gene associated with brain development into monkeys.

Transgenic monkeys and human intelligence

An April 2, 2019 news item on chinadaily.com describes research into transgenic monkeys,

Researchers from China and the United States have created transgenic monkeys carrying a human gene that is important for brain development, and the monkeys showed human-like brain development.

Scientists have identified several genes that are linked to primate brain size. MCPH1 is a gene that is expressed during fetal brain development. Mutations in MCPH1 can lead to microcephaly, a developmental disorder characterized by a small brain.

In the study published in the Beijing-based National Science Review, researchers from the Kunming Institute of Zoology, Chinese Academy of Sciences, the University of North Carolina in the United States and other research institutions reported that they successfully created 11 transgenic rhesus monkeys (eight first-generation and three second-generation) carrying human copies of MCPH1.

According to the research article, brain imaging and tissue section analysis showed an altered pattern of neuron differentiation and a delayed maturation of the neural system, which is similar to the developmental delay (neoteny) in humans.

Neoteny in humans is the retention of juvenile features into adulthood. One key difference between humans and nonhuman primates is that humans require a much longer time to shape their neuro-networks during development, greatly elongating childhood, which is the so-called “neoteny.”

Here’s a link to and a citation for the paper,

Transgenic rhesus monkeys carrying the human MCPH1 gene copies show human-like neoteny of brain development by Lei Shi, Xin Luo, Jin Jiang, Yongchang Chen, Cirong Liu, Ting Hu, Min Li, Qiang Lin, Yanjiao Li, Jun Huang, Hong Wang, Yuyu Niu, Yundi Shi, Martin Styner, Jianhong Wang, Yi Lu, Xuejin Sun, Hualin Yu, Weizhi Ji, Bing Su. National Science Review, nwz043, https://doi.org/10.1093/nsr/nwz043 Published: 27 March 2019

This appears to be an open access paper.

Transgenic monkeys and an ethical uproar

Predictably, this research set off alarms, as Sharon Kirkey’s April 12, 2019 article for the National Post describes in detail (Note: A link has been removed),

Their brains may not be bigger than normal, but monkeys created with human brain genes are exhibiting cognitive changes that suggest they might be smarter — and the experiments have ethicists shuddering.

In the wake of the genetically modified human babies scandal, Chinese scientists [as well as a scientist from the US] are drawing fresh condemnation from philosophers and ethicists, this time over the announcement they’ve created transgenic monkeys with elements of a human brain.

Six of the monkeys died; however, the five survivors “exhibited better short-term memory and shorter reaction time” compared to their wild-type controls, the researchers report in the journal.

According to the researchers, the experiments represent the first attempt to study the genetic basis of human brain origin using transgenic monkeys. The findings, they insist, “have the potential to provide important — and potentially unique — insights into basic questions of what actually makes humans unique.”

For others, the work provokes a profoundly moral and visceral uneasiness. Even one of the collaborators — University of North Carolina computer scientist Martin Styner — told MIT Technology Review he considered removing his name from the paper, which he said was unable to find a publisher in the West.

“Now we have created this animal which is different than it is supposed to be,” Styner said. “When we do experiments, we have to have a good understanding of what we are trying to learn, to help society, and that is not the case here.”

In an email to the National Post, Styner said he has an expertise in medical image analysis and was approached by the researchers back in 2011. He said he had no input on the science in the project, beyond how to best do the analysis of their MRI data. “At the time, I did not think deeply enough about the ethical consideration.”

….

When it comes to the scientific use of nonhuman primates, ethicists say the moral compass is skewed in cases like this.

Given the kind of beings monkeys are, “I certainly would have thought you would have had to have a reasonable expectation of high benefit to human beings to justify the harms that you are going to have for intensely social, cognitively complex, emotional animals like monkeys,” said Letitia Meynell, an associate professor in the department of philosophy at Dalhousie University in Halifax.

“It’s not clear that this kind of research has any reasonable expectation of having any useful application for human beings,” she said.

The science itself is also highly dubious and fundamentally flawed in its logic, she said.

“If you took Einstein as a baby and you raised him in the lab he wouldn’t turn out to be Einstein,” Meynell said. “If you’re actually interested in studying the cognitive complexity of these animals, you’re not going to get a good representation of that by raising them in labs, because they can’t develop the kind of cognitive and social skills they would in their normal environment.”

The Chinese said the MCPH1 gene is one of the strongest candidates for human brain evolution. But looking at a single gene is just bad genetics, Meynell said. Multiple genes and their interactions affect the vast majority of traits.

My point is that there’s a lot of research focused on intelligence and genes when we don’t really know what role genes actually play and when there doesn’t seem to be any serious oversight.

Global plea for moratorium on heritable genome editing

A March 13, 2019 University of Otago (New Zealand) press release (also on EurekAlert) describes a global plea for a moratorium,

A University of Otago bioethicist has added his voice to a global plea for a moratorium on heritable genome editing from a group of international scientists and ethicists in the wake of the recent Chinese experiment aiming to produce HIV immune children.

In an article in the latest issue of international scientific journal Nature, Professor Jing-Bao Nie together with another 16 [17] academics from seven countries, call for a global moratorium on all clinical uses of human germline editing to make genetically modified children.

They would like an international governance framework – in which nations voluntarily commit to not approve any use of clinical germline editing unless certain conditions are met – to be created potentially for a five-year period.

Professor Nie says the scientific scandal of the experiment that led to the world’s first genetically modified babies raises many intriguing ethical, social and transcultural/transglobal issues. His main personal concerns include what he describes as the “inadequacy” of the Chinese and international responses to the experiment.

“The Chinese authorities have conducted a preliminary investigation into the scientist’s genetic misadventure and issued a draft new regulation on the related biotechnologies. These are welcome moves. Yet, by putting blame completely on the rogue scientist individually, the institutional failings are overlooked,” Professor Nie explains.

“In the international discourse, partly due to the mentality of dichotomising China and the West, a tendency exists to characterise the scandal as just a Chinese problem. As a result, the global context of the experiment and Chinese science schemes have been far from sufficiently examined.”

The group of 17 [18] scientists and bioethicists say it is imperative that extensive public discussions about the technical, scientific, medical, societal, ethical and moral issues must be considered before germline editing is permitted. A moratorium would provide time to establish broad societal consensus and an international framework.

“For germline editing to even be considered for a clinical application, its safety and efficacy must be sufficient – taking into account the unmet medical need, the risks and potential benefits and the existence of alternative approaches,” the opinion article states.

Although techniques have improved in recent years, germline editing is not yet safe or effective enough to justify any use in the clinic with the risk of failing to make the desired change or of introducing unintended mutations still unacceptably high, the scientists and ethicists say.

“No clinical application of germline editing should be considered unless its long-term biological consequences are sufficiently understood – both for individuals and for the human species.”

The proposed moratorium does not however, apply to germline editing for research uses or in human somatic (non-reproductive) cells to treat diseases.

Professor Nie considers it significant that current presidents of the UK Royal Society, the US National Academy of Medicine and the Director and Associate Director of the US National Institute of Health have expressed their strong support for such a proposed global moratorium in two correspondences published in the same issue of Nature. The editorial in the issue also argues that the right decision can be reached “only through engaging more communities in the debate”.

“The most challenging questions are whether international organisations and different countries will adopt a moratorium and if yes, whether it will be effective at all,” Professor Nie says.

A March 14, 2019 news item on phys.org provides a précis of the Comment in Nature. Or, you can access the Comment with this link,

Adopt a moratorium on heritable genome editing; Eric Lander, Françoise Baylis, Feng Zhang, Emmanuelle Charpentier, Paul Berg and specialists from seven countries call for an international governance framework. Signed by: Eric S. Lander, Françoise Baylis, Feng Zhang, Emmanuelle Charpentier, Paul Berg, Catherine Bourgain, Bärbel Friedrich, J. Keith Joung, Jinsong Li, David Liu, Luigi Naldini, Jing-Bao Nie, Renzong Qiu, Bettina Schoene-Seifert, Feng Shao, Sharon Terry, Wensheng Wei & Ernst-Ludwig Winnacker. Nature 567, 165-168 (2019) doi: 10.1038/d41586-019-00726-5

This Comment in Nature is open access.

World Health Organization (WHO) chimes in

Better late than never, eh? The World Health Organization has called heritable gene editing of humans ‘irresponsible’ and made recommendations. From a March 19, 2019 news item on the Canadian Broadcasting Corporation’s Online news webpage,

A panel convened by the World Health Organization said it would be “irresponsible” for scientists to use gene editing for reproductive purposes, but stopped short of calling for a ban.

The experts also called for the U.N. health agency to create a database of scientists working on gene editing. The recommendation was announced Tuesday after a two-day meeting in Geneva to examine the scientific, ethical, social and legal challenges of such research.

“At this time, it is irresponsible for anyone to proceed” with making gene-edited babies since DNA changes could be passed down to future generations, the experts said in a statement.

Germline editing has been on my radar since 2015 (see my May 14, 2015 posting), so the fact that someone would experiment with viable embryos and bring them to term shouldn’t be that much of a surprise.

Slow science from Canada

Canada has banned germline editing but there is pressure to lift that ban. (I touched on the specifics of the campaign in an April 26, 2019 posting.) This March 17, 2019 essay on The Conversation by Landon J Getz and Graham Dellaire, both of Dalhousie University (Nova Scotia, Canada) elucidates some of the discussion about whether research into germline editing should be slowed down.

Naughty (or Haughty, if you prefer) scientists

There was scoffing from some, if not all, members of the scientific community about the potential for ‘designer babies’, as can be seen in an excerpt from an article by Ed Yong for The Atlantic (originally published in my August 15, 2017 posting titled: CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?),

Ed Yong, in an Aug. 2, 2017 article for The Atlantic, offered a comprehensive overview of the research and its implications (unusually for Yong, there seems to be a mildly condescending note, but it’s worth ignoring for the wealth of information in the article; Note: Links have been removed),

” … the full details of the experiment, which are released today, show that the study is scientifically important but much less of a social inflection point than has been suggested. “This has been widely reported as the dawn of the era of the designer baby, making it probably the fifth or sixth time people have reported that dawn,” says Alta Charo, an expert on law and bioethics at the University of Wisconsin-Madison. “And it’s not.”

Then about 15 months later, the possibility seemed to be realized.

Interesting that scientists scoffed at the public’s concerns (you can find similar arguments about robots and artificial intelligence not being a potentially catastrophic problem), yes? Often, nonscientists’ concerns are dismissed as being founded in science fiction.

To be fair, there are times when concerns are overblown; the difficulty is that the scientific community’s default position seems to be to uniformly dismiss concerns rather than approaching them in a nuanced fashion. If the scoffers had taken the time to think about it, germline editing on viable embryos seems like an obvious and inevitable next step (as I’ve noted previously).

At this point, no one seems to know if He actually succeeded at removing CCR5 from Lulu’s and Nana’s genomes. In November 2018, scientists were guessing that at least one of the twins was a ‘mosaic’. In other words, some of her cells did not include CCR5 while others did.

Parents, children, competition

A recent college admissions scandal in the US has highlighted the intense competition to get into high profile educational institutions. (This scandal brought to mind the Silicon Valley elite who wanted to know more about gene editing that might result in improved cognitive skills.)

Since it can be easy to point the finger at people in other countries, I’d like to note that there was a Canadian parent among these wealthy US parents attempting to give their children advantages by any means, legal or not. (Note: These are alleged illegalities.) From a March 12, 2019 news article by Scott Brown, Kevin Griffin, and Keith Fraser for the Vancouver Sun,

Vancouver businessman and former CFL [Canadian Football League] player David Sidoo has been charged with conspiracy to commit mail and wire fraud in connection with a far-reaching FBI investigation into a criminal conspiracy that sought to help privileged kids with middling grades gain admission to elite U.S. universities.

In a 12-page indictment filed March 5 [2019] in the U.S. District Court of Massachusetts, Sidoo is accused of making two separate US$100,000 payments to have others take college entrance exams in place of his two sons.

Sidoo is also accused of providing documents for the purpose of creating falsified identification cards for the people taking the tests.

In what is being called the biggest college-admissions scam ever prosecuted by the U.S. Justice Department, Sidoo has been charged along with nearly 50 other people. Nine athletic coaches and 33 parents, including Hollywood actresses Felicity Huffman and Lori Loughlin, are among those charged in the investigation, dubbed Operation Varsity Blues.

According to the indictment, an unidentified person flew from Tampa, Fla., to Vancouver in 2011 to take the Scholastic Aptitude Test (SAT) in place of Sidoo’s older son and was directed not to obtain too high a score since the older son had previously taken the exam, obtaining a score of 1460 out of a possible 2400.

A copy of the resulting SAT score — 1670 out of 2400 — was mailed to Chapman University, a private university in Orange, Calif., on behalf of the older son, who was admitted to and ultimately enrolled in the university in January 2012, according to the indictment.

It’s also alleged that Sidoo arranged to have someone secretly take the older boy’s Canadian high school graduation exam, with the person posing as the boy taking the exam in June 2012.

The Vancouver businessman is also alleged to have paid another $100,000 to have someone take the SAT in place of his younger son.

Sidoo, an investment banker currently serving as CEO of Advantage Lithium, was awarded the Order of B.C. in 2016 for his philanthropic efforts.

He is a former star with the UBC [University of British Columbia] Thunderbirds football team and helped the school win its first Vanier Cup in 1982. He went on to play five seasons in the CFL with the Saskatchewan Roughriders and B.C. Lions.

Sidoo is a prominent donor to UBC and is credited with spearheading an alumni fundraising campaign, 13th Man Foundation, that resuscitated the school’s once struggling football team. He reportedly donated $2 million of his own money to support the program.

Sidoo Field at UBC’s Thunderbird Stadium is named in his honour.

In 2016, he received the B.C. [British Columbia] Sports Hall of Fame’s W.A.C. Bennett Award for his contributions to the sporting life of the province.

The question of whether people like the ‘Silicon Valley elite’ (mentioned in John Loeffler’s February 22, 2019 article) would choose to tinker with their children’s genome if it gave them an advantage is still hypothetical, but it’s easy to believe that at least some might seriously consider the possibility, especially if the researcher or doctor didn’t fully explain just how little is known about the impact of tinkering with the genome. For example, there’s a big question about whether those parents in China fully understood what they signed up for.

By the way, cheating scandals aren’t new (see Vanity Fair’s Schools For Scandal: The Inside Dramas at 16 of America’s Most Elite Campuses—Plus Oxford!, edited by Graydon Carter, published in August 2018 and covering 25 years of the magazine’s reporting). On a similar line, there’s this March 13, 2019 essay, which picks apart some of the hierarchical and power issues at play in the US higher educational system that led to this latest (but likely not last) scandal.

Scientists under pressure

While Kofler’s February 26, 2019 Nature opinion piece and call to action seems to address the concerns regarding germline editing by advocating that scientists become more conscious of how their choices impact society, as I noted earlier, the ideas expressed seem a little ungrounded in harsh realities. Perhaps it’s time to give some recognition to the various pressures put on scientists, ranging from their own governments, to an academic environment that fosters ‘success’ at any cost, to peer pressure, etc. (For more about the costs of a science culture focused on success, read this March 2, 2019 blog posting by Jon Tennant on digital-science.com for a breakdown.)

One other thing I should mention: for some scientists, getting into the history books, winning Nobel prizes, etc. is a very important goal. Scientists are people too.

Some thoughts

There seems to be a great disjunction between what Richardson presents as an alternative narrative to the ‘gene-god’ and how genetic research is being performed and reported on. What is clear to me is that no one really understands genetics, and this business of inserting and deleting genes is essentially research designed to satisfy curiosity and/or allay fears about being left behind in a great scientific race to an unknown destination.

I’d like to see some better reporting and a more agile response by the scientific community, the various governments, and international agencies. What shape or form a more agile response might take, I don’t know but I’d like to see some efforts.

Back to the regular programme

There’s a lot about CRISPR here on this blog. A simple search of ‘CRISPR’ in the blog’s search engine should get you more than enough information about the technology and the various issues ranging from intellectual property to risks and more.

The three-part series (CRISPR and editing the germline in the US …), mentioned previously, was occasioned by the publication of a study on germline editing research with nonviable embryos in the US. The 2017 research was done at the Oregon Health and Science University by Shoukhrat Mitalipov, following similar research published by Chinese scientists in 2015. The series gives relatively complete coverage of the issues along with an introduction to CRISPR and embedded video describing the technique. Here’s part 1 to get you started.

Artificial intelligence (AI) brings together International Telecommunications Union (ITU) and World Health Organization (WHO) and AI outperforms animal testing

Following on my May 11, 2018 posting about the International Telecommunications Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there’s an announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunications Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group for AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group for AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes, in his July 25, 2018 essay (written for The Conversation and published on phys.org), the situation where chemical testing is concerned,

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than a US$1 billion annually and more than 2 million animals. Especially in times where the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

[Figure caption: This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often properties. Credit: Thomas Hartung, CC BY-SA]

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous. Even more likely if many toxic substances are close, harmless substances are far. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.
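Before continuing with Hartung’s essay, here’s a minimal toy sketch of the ‘chemical neighbourhood’ idea in Python. It is mine, not the published RASAR pipeline: the feature sets, the Jaccard/Tanimoto similarity measure, and the four made-up chemicals are stand-ins for illustration only. (As an aside, the ‘50 trillion pairs’ figure follows from the 10 million structures: N(N−1)/2 is roughly 5 × 10^13.)

```python
# Toy "read-across" sketch: predict toxicity for an untested chemical from its
# nearest structural neighbours. The data and similarity measure are
# illustrative stand-ins, not the published RASAR method.

def tanimoto(a: set, b: set) -> float:
    """Jaccard/Tanimoto similarity between two sets of structural-feature IDs."""
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

# Each known chemical: a set of structural-feature IDs plus a toxicity label.
known_chemicals = {
    "chem_A": ({1, 2, 3, 7}, "toxic"),
    "chem_B": ({1, 2, 4, 7}, "toxic"),
    "chem_C": ({5, 6, 8, 9}, "non-toxic"),
    "chem_D": ({5, 6, 9, 10}, "non-toxic"),
}

def predict(query_features: set, k: int = 3) -> str:
    """Label an untested chemical by majority vote of its k most similar neighbours."""
    scored = sorted(
        ((tanimoto(query_features, feats), label) for feats, label in known_chemicals.values()),
        reverse=True,
    )
    top_labels = [label for _, label in scored[:k]]
    return max(set(top_labels), key=top_labels.count)

# An untested chemical that shares most features with the "toxic" neighbourhood:
print(predict({1, 2, 3, 4}))  # -> toxic
```

The real system works at a vastly larger scale, with millions of structures and the 74 characteristics Hartung mentions, but the nearest-neighbour logic is the same in spirit.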

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests, which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.
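Stepping outside the quoted essay once more: the validation Hartung describes amounts to hiding chemicals whose toxicity is already known, predicting them from the rest, and counting the hits. Here is a self-contained, hedged sketch of that kind of leave-one-out check, using the same four toy chemicals as above; the only figure taken from the essay is the roughly 70 per cent animal-test reproducibility baseline.

```python
# Illustrative leave-one-out check: hide each labelled chemical in turn, predict
# it from the remaining ones, and compare the hit rate with the ~70% animal-test
# reproducibility figure quoted in the essay. Data here are toy values.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

labelled = {
    "chem_A": ({1, 2, 3, 7}, "toxic"),
    "chem_B": ({1, 2, 4, 7}, "toxic"),
    "chem_C": ({5, 6, 8, 9}, "non-toxic"),
    "chem_D": ({5, 6, 9, 10}, "non-toxic"),
}

ANIMAL_TEST_BASELINE = 0.70  # reproducibility figure quoted above

def leave_one_out_hit_rate(chemicals: dict, k: int = 1) -> float:
    hits = 0
    for name, (features, true_label) in chemicals.items():
        rest = [(f, lbl) for n, (f, lbl) in chemicals.items() if n != name]
        # Rank the remaining chemicals by structural similarity to the held-out one.
        ranked = sorted(((jaccard(features, f), lbl) for f, lbl in rest), reverse=True)
        top_labels = [lbl for _, lbl in ranked[:k]]
        if max(set(top_labels), key=top_labels.count) == true_label:
            hits += 1
    return hits / len(chemicals)

hit_rate = leave_one_out_hit_rate(labelled)
print(f"Toy read-across hit rate: {hit_rate:.0%} (animal-test baseline: {ANIMAL_TEST_BASELINE:.0%})")
```

On this toy data the hit rate itself is meaningless, of course; the point of the sketch is the shape of the check, which Hartung’s team ran against roughly 48,000 well-characterized chemicals to arrive at the 89 percent figure.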

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorizations and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers would have not registered chemicals after 2013, and instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting; that skepticism came from someone I thought knew better.

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

Explaining the link between air pollution and heart disease?

An April 26, 2017 news item on Nanowerk announces research that may explain the link between heart disease and air pollution (Note: A link has been removed),

Tiny particles in air pollution have been associated with cardiovascular disease, which can lead to premature death. But how particles inhaled into the lungs can affect blood vessels and the heart has remained a mystery.

Now, scientists have found evidence in human and animal studies that inhaled nanoparticles can travel from the lungs into the bloodstream, potentially explaining the link between air pollution and cardiovascular disease. Their results appear in the journal ACS Nano (“Inhaled Nanoparticles Accumulate at Sites of Vascular Disease”).

An April 26, 2017 American Chemical Society news release on EurekAlert, which originated the news item,  expands on the theme,

The World Health Organization estimates that in 2012, about 72 percent of premature deaths related to outdoor air pollution were due to ischemic heart disease and strokes. Pulmonary disease, respiratory infections and lung cancer were linked to the other 28 percent. Many scientists have suspected that fine particles travel from the lungs into the bloodstream, but evidence supporting this assumption in humans has been challenging to collect. So Mark Miller and colleagues at the University of Edinburgh in the United Kingdom and the National Institute for Public Health and the Environment in the Netherlands used a selection of specialized techniques to track the fate of inhaled gold nanoparticles.

In the new study, 14 healthy volunteers, 12 surgical patients and several mouse models inhaled gold nanoparticles, which have been safely used in medical imaging and drug delivery. Soon after exposure, the nanoparticles were detected in blood and urine. Importantly, the nanoparticles appeared to preferentially accumulate at inflamed vascular sites, including carotid plaques in patients at risk of a stroke. The findings suggest that nanoparticles can travel from the lungs into the bloodstream and reach susceptible areas of the cardiovascular system where they could possibly increase the likelihood of a heart attack or stroke, the researchers say.

Here’s a link to and a citation for the paper,

Inhaled Nanoparticles Accumulate at Sites of Vascular Disease by Mark R. Miller, Jennifer B. Raftis, Jeremy P. Langrish, Steven G. McLean, Pawitrabhorn Samutrtai, Shea P. Connell, Simon Wilson, Alex T. Vesey, Paul H. B. Fokkens, A. John F. Boere, Petra Krystek, Colin J. Campbell, Patrick W. F. Hadoke, Ken Donaldson, Flemming R. Cassee, David E. Newby, Rodger Duffin, and Nicholas L. Mills. ACS Nano, Article ASAP DOI: 10.1021/acsnano.6b08551 Publication Date (Web): April 26, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

University of Malaya (Malaysia) and Harvard University (US) partner on nanomedicine/prevention projects

Unusually for a ‘nanomedicine’ project, the talk turned to prevention during a Jan. 10, 2016 teleconference about Malaysia’s major investment in nanomedicine treatments for lung diseases, featuring Dr. Noor Hayaty Abu Kasim of the University of Malaya, Dr. Wong Tin Wui of the Universiti Teknologi Malaysia, and Dr. Joseph Brain of Harvard University.

A Jan. 11, 2016 Malaysian Industry-Government Group for High Technology (MIGHT) news release on EurekAlert announces both the lung project (University of Malaya/Harvard University) and others under Malaysia’s NanoMITe (Malaysia Institute for Innovative Nanotechnology) banner,

Malaysian scientists are joining forces with Harvard University experts to help revolutionize the treatment of lung diseases — the delivery of nanomedicine deep into places otherwise impossible to reach.

Under a five-year memorandum of understanding between Harvard and the University of Malaya, Malaysian scientists will join a distinguished team seeking a safe, more effective way of tackling lung problems including chronic obstructive pulmonary disease (COPD), the progressive, irreversible obstruction of airways causing almost 1 in 10 deaths today.

Treatment of COPD and lung cancer commonly involves chemotherapeutics and corticosteroids misted into a fine spray and inhaled, enabling direct delivery to the lungs and quick medicinal effect. However, because the particles produced by today’s inhalers are large, most of the medicine is deposited in the upper respiratory tract.

The Harvard team, within the university’s T.H. Chan School of Public Health, is working on “smart” nanoparticles that deliver appropriate levels of diagnostic and therapeutic agents to the deepest, tiniest sacs of the lung, a process potentially assisted by the use of magnetic fields.

Malaysia’s role within the international collaboration: help ensure the safety and improve the effectiveness of nanomedicine, assessing how nanomedicine particles behave in the body, what attaches to them to form a coating, where the drug accumulates and how it interacts with target and non-target cells.

Led by Joseph Brain, the Cecil K. and Philip Drinker Professor of Environmental Physiology, the research draws on extensive expertise at Harvard in biokinetics — determining how to administer medicine to achieve the proper dosage to impact target cells and assessing the extent to which drug-loaded nanoparticles pass through biological barriers to different organs.

The studies also build on decades of experience studying the biology of macrophages — large, specialized cells that recognize, engulf and destroy target cells as part of the human immune system.

Manipulating immune cells represents an important strategy for treating lung diseases like COPD and lung cancer, as well as infectious diseases including tuberculosis and listeriosis.

Dr. Brain notes that every day humans breathe 20,000 litres of air loaded with bacteria and viruses, and that the world’s deadliest epidemic — an outbreak of airborne influenza in the 1920s — killed tens of millions.

Inhaled nanomedicine holds the promise of helping doctors prevent and treat such problems in future, reaching the target area more swiftly than if administered orally or even intravenously.

This is particularly true for lung cancer, says Dr. Brain. “Experiments have demonstrated that a drug dose administered directly to the respiratory tract achieves much higher local drug concentrations at the target site.”

COPD meanwhile affects over 235 million people worldwide and is on the rise, with 80% of cases caused by cigarette smoking. Exacerbated by poor air quality, COPD is expected to rise from 5th to 3rd place among humanity’s most lethal health problems by 2030.

“Nanotechnology is making a significant impact on healthcare by delivering improvements in disease diagnosis and monitoring, as well as enabling new approaches to regenerative medicine and drug delivery,” says Prof. Zakri Abdul Hamid, Science Advisor to the Prime Minister of Malaysia.

“Malaysia, through NanoMITe, is proud and excited to join the Harvard team and contribute to the creation of these life-giving innovations.”

While neither Dr. Abu Kasim nor Dr. Wong is named in the news release, both are key members of the Malaysian team tasked with working on nanomedicines for lung disease. Dr. Abu Kasim is a professor of restorative dentistry at the University of Malaya and familiar with nanotechnology-enabled materials and nanoparticles through her work in that field. She is also the project lead for NanoMITe’s Project 4: Consequences of Smoking among the Malaysian Population. From the project webpage,

Smoking is a prevalent problem worldwide but especially so in Asia where nearly more than half of the world population reside. Smoking kills half of its users and despite the many documented harm to health is still a major problem. Globally six million lives are lost each year because of this addiction. This number is estimated to increase to ten million within the next two decades. Apart from the mortality, smokers are at increased risk of health morbidities of smoking which is a major risk factor for many non-communicable diseases (NCD) such as heart diseases, respiratory conditions and even mental health. Together, smoking reduces life expectancy 10-15 years compared to a non-smoker. Those with mental health lose double the years, 20 -25 years of their life as a result of their smoking. The current Malaysia death toll is at 10,000 lives per year due to smoking related health complications.

Although the health impact of smoking has been reported at length, this information is limited nationally. Lung cancer for example is closely linked to smoking, however, the study of the link between the two is lacking in Malaysia. Lung cancer particularly in Malaysia is also often diagnosed late, usually at stages 3 and 4. These stages of cancer are linked with a poorer prognosis. As a result to the harms to health either directly or indirectly, the World Health Organization (WHO) has introduced a legal treaty, the first, called the Framework Convention for Tobacco Control (FCTC). This treaty currently ratified by 174 countries was introduced in 2005 and consists of 38 FCTC Articles which are evidence based policies, known to assist member countries to reduce their smoking prevalence. Malaysia is an early signatory and early adopter of the MPOWER strategy which are major articles of the FCTC. Among them are education and information dissemination informing the dangers of smoking which can be done through awareness campaigns of advocacy using civil society groups. Most campaigns have focused on health harms with little mention non-health or environmental harm as a result of smoking. Therefore there is an opportunity to further develop this idea as a strong advocacy point towards a smoke-free generation in the near future

It is difficult, if not impossible, to recall any other nanomedicine initiative that has so thoroughly embedded prevention as part of its mandate. As Dr. Brain puts it, “Malaysia’s commitment to better health for everyone—sometimes, I’m jealous.”

Getting back to nanomedicine, it’s Dr. Wong, an associate professor in the school of pharmaceutics at Universiti Teknologi Malaysia (UTM), who is developing polymeric nanoparticles designed to carry medications into the lungs, and Dr. Brain who will work on the best method of transport. From Dr. Brain’s webpage,

Dr. Brain’s research emphasizes responses to inhaled gases, particulates, and microbes. His studies extend from the deposition of inhaled particles in the respiratory tract to their clearance by respiratory defense mechanisms. Of particular interest is the role of lung macrophages; this resident cell keeps lung surfaces clean and sterile. Moreover, the lung macrophage is also a critical regulator of inflammatory and immune responses. The context of these studies on macrophages is the prevention and pathogenesis of environmental lung disease as well as respiratory infection.

His research has utilized magnetic particles in macrophages throughout the body as a non-invasive tool for measuring cell motility and the response of macrophages to various mediators and toxins. …

It was difficult to get any specifics about the proposed lung nanomedicine effort as it seems to be at a very early stage.

  • Malaysia, through the Ministry of Higher Education with matching funds from the University of Malaya, is funding this effort with 1M Ringgits ($300,000 USD) per year over five years for a total of 5M Ringgits ($1.5M USD)
  • A Malaysian researcher will be going to Harvard to collaborate directly with Dr. Brain and others on his team. The first will be Dr. Wong, who will come to Harvard in June 2016, where he will work with his polymeric nanoparticles (vehicles for medications) and where Brain will examine transport strategies (aerosol, intratracheal administration, etc.) for those nanoparticle-bearing medications.
  • There will be a series of comparative studies of smoking in Malaysia and the US and other information efforts designed to support prevention strategies.

One last tidbit about research, Dr. Brain will be testing the nanoparticle-bearing medication once it has entered the lung using the ‘precision cut lung slices’ technique, as an alternative to some, if not all, in vivo testing.

Final comments

Nanomedicine is highly competitive and the Malaysians are interested in commercializing their efforts, which, according to Dr. Abu Kasim, is one of the reasons they approached Harvard and Dr. Brain.

Should you find any errors please do let me know.