Tag Archives: COVID-19

Online symposium (April 27 – 28, 2021) on Canada’s first federal budget in two years

The Canadian federal budget is due to be announced on April 19, 2021, the first budget we’ve seen since 2019.

The Canadian Science Policy Centre (CSPC) is hosting an online symposium on April 27-28, 2021, with a main focus on science and funding. Before moving on to the symposium details, I think a quick refresher is in order.

No oversight, WE Charity scandal

While the Liberal government has done much that is laudable by supporting people and businesses through this worldwide COVID-19 pandemic, there have been at least two notable missteps with regard to fiscal responsibility. This March 24, 2020 article in The Abbotsford News outlines the problem,

Conservative Finance critic Pierre Poilievre says there’s no deal yet between the Liberal government and Opposition over a proposed emergency aid bill to spend billions of dollars to fight the COVID-19 pandemic and cushion some of its damage to the economy.

The opposition parties had said they would back the $82 billion in direct spending and deferred taxes Prime Minister Justin Trudeau promised to put up to prepare the country for mass illness and help Canadians cope with lost jobs and wages.

Yet a draft of the bill circulated Monday suggested it was going to give cabinet, not MPs, extraordinary power over taxes and spending, so ministers could act without Parliament’s approval for months.

The Conservatives will support every one of the aid measures contained in the bill with no debate, Poilievre said. The only issue is whether the government needs to be given never before seen powers to tax and spend. [emphasis mine]

When there’s a minority government like the one Trudeau leads, the chance to bring the government down on a spending bill is what gives the opposition its power.

The government did not receive that approval in Parliament, but it tried. That was in March 2020; a few weeks later, there’s this (from the WE Charity scandal entry on Wikipedia), Note: Links have been removed

On April 5, 2020 amidst the COVID-19 Pandemic, the Prime Minister of Canada, Justin Trudeau, and his then-Finance Minister Bill Morneau, held a telephone conversation discussing measures to financially assist the country’s student population.[14] The Finance Department was tasked with devising a series of measures to address these issues. This would begin a chain of events involving numerous governmental agencies.

Through a no-bid selection process [emphasis mine], WE Charity was chosen to administer the CSSG [Canada Student Service Grant], which would have created grants for students who volunteered during the COVID-19 pandemic.[15][13] The contract agreement was signed with WE Charity Foundation,[16] a corporation affiliated with WE Charity, on June 23, 2020. It was agreed that WE Charity, which had already begun incurring eligible expenses for the project on May 5 at their own risk,[17][18] would be paid $43.53 million[19] to administer the program; $30 million of which was paid to WE Charity Foundation on June 30, 2020.[18] This was later fully refunded.[17] A senior bureaucrat would note that “ESDC thinks that ‘WE’ might be able to be the volunteer matching third party … The mission of WE is congruent with national service and they have a massive following on social media.”[20]

Concurrent to these events, and prior to the announcement of the CSSG on June 25, 2020, WE Charity was simultaneously corresponding with the same government agencies ultimately responsible for choosing the administrator of the program.[8] WE Charity would submit numerous proposals in April, beginning on April 9, 2020, on the topic of youth volunteer award programs.[9] These were able to be reformed into what became the CSSG.[8]

On June 25, 2020 Justin Trudeau announced a series of relief measures for students. Among them was the Canada Student Service Grant program; whereby students would be eligible to receive $1000 for every 100 hours of volunteer activities, up to $5,000.[21]

The structure of the program, and the selection of WE Charity as its administrator, immediately triggered condemnation amongst the Official Opposition,[22] as well as numerous other groups, such as the Public Service Alliance of Canada,[7] Democracy Watch,[23] and Volunteer Canada[24] who argued that WE Charity:

  • Was not the only possible administrator as had been claimed
  • Had been the beneficiary of cronyism
  • Had experienced significant disruption due to the COVID-19 pandemic and required a bailout
  • Had illegally lobbied the government
  • Was unable to operate in French-speaking regions of Canada
  • Was potentially in violation of labour laws
  • Had created hundreds of volunteer positions with WE Charity itself as part of the program, doing work generally conducted by paid employees, representing a conflict of interests. …
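As a quick aside, the grant formula quoted above is simple enough to express in code. Here’s a minimal sketch in Python; note that the quote doesn’t say whether partial 100-hour blocks counted toward the grant, so the assumption that only completed blocks were paid is mine, not Wikipedia’s.

```python
# Hypothetical sketch of the CSSG formula: $1,000 for every 100 hours of
# volunteer activities, up to $5,000. Assumption (mine): partial 100-hour
# blocks do not count toward the grant.

def cssg_grant(hours_volunteered: int) -> int:
    """Return the CSSG grant in dollars for a given number of volunteer hours."""
    completed_blocks = hours_volunteered // 100
    return min(completed_blocks * 1000, 5000)

# Examples: 250 hours -> $2,000 (two completed blocks); 600 hours -> the $5,000 cap
print(cssg_grant(250), cssg_grant(600))  # 2000 5000
```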

In a July 13, 2020 article about the scandal on BBC (British Broadcasting Corporation) online, it’s noted that Trudeau was about to undergo his third ethics inquiry since first becoming Prime Minister in 2015. His first ethics inquiry took place in 2017, the second in 2019, and the third in 2020.

None of this has anything to do with science funding (as far as I know) but it does set the stage for questions about how science funding is determined and who will be getting it. There are already systems in place for science funding through various agencies, but the federal budget often sets special priorities, such as the 2017 Pan-Canadian Artificial Intelligence Strategy with its attendant $125M. As well, Prime Minister Justin Trudeau likes to use science as a means of enhancing his appeal. See my March 16, 2018 posting for a sample of this; scroll down to the “Sunny ways: a discussion between Justin Trudeau and Bill Nye” subhead.

Federal Budget 2021 Symposium

From the CSPC’s Federal Budget 2021 Symposium event page, Note: Minor changes have been made due to my formatting skills, or lack thereof,

Keynote talk by David Watters entitled: “Canada’s Performance in R&D and Innovation Ecosystem in the Context of Health and Economic Impact of COVID-19 and Investments in the Budget“ [sic]

Tentative Event Schedule

Tuesday April 27
12:00 – 4:30 pm EDT

12:00 – 1:00 Session I: Keynote Address: The Impact of Budget 2021 on the Performance of Canada’s National R&D/Innovation Ecosystem 

David Watters, President & CEO, Global Advantage Consulting

1:15 – 1:45 Session II: Critical Analysis 

Robert Asselin, Senior Vice President, Policy, Business Council of Canada
Irene Sterian, Founder, President & CEO, REMAP (Refined Manufacturing Acceleration Process); Director, Technology & Innovation, Celestica
David Wolfe, Professor of Political Science, UTM [University of Toronto Mississauga], Innovation Policy Lab, Munk School of Global Affairs and Public Policy

2:00 – 3:00 Session III: Superclusters 

Bill Greuel, CEO, Protein Industries Canada
Kendra MacDonald, CEO, Canada’s Ocean Supercluster
Angela Mondou, President & CEO, TECHNATION
Jayson Myers, CEO, Next Generation Manufacturing Canada (NGen)

3:30 – 4:30 Session IV: Business & Industry

Namir Anani, President & CEO, Information and Communications Technology Council [ICTC]
Karl Blackburn, President & CEO, Conseil du patronat du Québec
Tabatha Bull, President & CEO, Canadian Council for Aboriginal Business [CCAB]
Karen Churchill, President & CEO, Ag-West Bio Inc.
Karimah Es Sabar, CEO & Partner of Quark Venture LP; Chair, Health/Biosciences Economic Strategy Table

Wednesday April 28
2:00 – 4:30 pm EDT

2:00 – 3:00 Session V: Universities and Colleges

Steven Liss, Vice-President, Research and Innovation & Professor of Chemistry and Biology, Faculty of Science, Ryerson University
Madison Rilling, Project Manager, Optonique, Québec’s Optics & Photonics Cluster; Youth Council Member, Office of the Chief Science Advisor of Canada

3:30 – 4:30 Session VI: Non-Governmental Organizations 

Genesa M. Greening, President & CEO, BC Women’s Health Foundation
Maya Roy, CEO, YWCA Canada
Gisèle Yasmeen, Executive Director, Food Secure Canada
Jayson Myers, CEO, Next Generation Manufacturing Canada (NGen)

Register Here

Enjoy!

PS: I expect the guests at the Canadian Science Policy Centre’s (CSPC) April 27 – 28, 2021 Federal Budget Symposium to offer at least some commentary that boils down to ‘we love getting more money’ or ‘we’re not getting enough money’ or a bit of both.

I also expect the usual moaning over our failure to support industrial research and/or home-grown companies, e.g., Element AI (a Canadian artificial intelligence company formerly headquartered in Montréal), which was sold to a US company in November 2020 (see the Wikipedia entry). The US company doesn’t seem to have kept any of the employees, but it does seem to have acquired the intellectual property.

Health Canada advisory: Face masks that contain graphene may pose health risks

Since the start of the COVID-19 pandemic, we’ve been advised to wear face masks. It seems some of them may not be as safe as we assumed. First, the Health Canada advisory issued today, April 2, 2021, and then excerpts from an in-depth posting by Dr. Andrew Maynard (associate dean in the Arizona State University College of Global Futures) about the advisory and the use of graphene in masks.

From the Health Canada Recalls & alerts: Face masks that contain graphene may pose health risks webpage,

Summary

  • Product: Face masks labelled to contain graphene or biomass graphene.
  • Issue: There is a potential that wearers could inhale graphene particles from some masks, which may pose health risks.
  • What to do: Do not use these face masks. Report any health product adverse events or complaints to Health Canada.

Issue

Health Canada is advising Canadians not to use face masks that contain graphene because there is a potential that they could inhale graphene particles, which may pose health risks.

Graphene is a novel nanomaterial (materials made of tiny particles) reported to have antiviral and antibacterial properties. Health Canada conducted a preliminary scientific assessment after being made aware that masks containing graphene have been sold with COVID-19 claims and used by adults and children in schools and daycares. Health Canada believes they may also have been distributed for use in health care settings.

Health Canada’s preliminary assessment of available research identified that inhaled graphene particles had some potential to cause early lung toxicity in animals. However, the potential for people to inhale graphene particles from face masks and the related health risks are not yet known, and may vary based on mask design. The health risk to people of any age is not clear. Variables, such as the amount and duration of exposure, and the type and characteristics of the graphene material used, all affect the potential to inhale particles and the associated health risks. Health Canada has requested data from mask manufacturers to assess the potential health risks related to their masks that contain graphene.

Until the Department completes a thorough scientific assessment and has established the safety and effectiveness of graphene-containing face masks, it is taking the precautionary approach of removing them from the market while continuing to gather and assess information. Health Canada has directed all known distributors, importers and manufacturers to stop selling and to recall the affected products. Additionally, Health Canada has written to provinces and territories advising them to stop distribution and use of masks containing graphene. The Department will continue to take appropriate action to stop the import and sale of graphene face masks.

Products affected

Face masks labelled as containing graphene or biomass graphene.

What you should do

  • Do not use face masks labelled to contain graphene or biomass graphene.
  • Consult your health care provider if you have used graphene face masks and have health concerns, such as new or unexplained shortness of breath, discomfort or difficulty breathing.
  • Report any health product adverse events or complaints regarding graphene face masks to Health Canada.

Dr. Andrew Maynard’s Edge of Innovation series features a March 26, 2021 posting about the use of graphene in masks (Note: Links have been removed),

Face masks should protect you, not place you in greater danger. However, last Friday Radio Canada revealed that residents of Quebec and Ottawa were being advised not to use specific types of graphene-containing masks as they could potentially be harmful.

The offending material in the masks is graphene — a form of carbon that consists of nanoscopically thin flakes of hexagonally-arranged carbon atoms. It’s a material that has a number of potentially beneficial properties, including the ability to kill bacteria and viruses when they’re exposed to it.

Yet despite its many potential uses, the scientific jury is still out when it comes to how safe the material is.

As with all materials, the potential health risks associated with graphene depend on whether it can get into the body, where it goes if it can, what it does when it gets there, and how much of it is needed to cause enough damage to be of concern.

Unfortunately, even though these are pretty basic questions, there aren’t many answers forthcoming when it comes to the substance’s use in face masks.

Early concerns around graphene were sparked by previous research on another form of carbon — carbon nanotubes. It turns out that some forms of these fiber-like materials can cause serious harm if inhaled. And following on from research here, a natural next-question to ask is whether carbon nanotubes’ close cousin graphene comes with similar concerns.

Because graphene lacks many of the physical and chemical aspects of carbon nanotubes that make them harmful (such as being long, thin, and hard for the body to get rid of), the indications are that the material is safer than its nanotube cousins. But safer doesn’t mean safe. And current research indicates that this is not a material that should be used where it could potentially be inhaled, without a good amount of safety testing first.

[Image from Dr. Maynard’s Medium post, https://medium.com/edge-of-innovation/how-safe-are-graphene-based-face-masks-b88740547e8c; original source: Wikimedia]

When it comes to inhaling graphene, the current state of the science indicates that if the material can get into the lower parts of the lungs (the respirable or alveolar region) it can lead to an inflammatory response at high enough concentrations.

There is some evidence that adverse responses are relatively short-lived, and that graphene particles can be broken down and disposed of by the lungs’ defenses.

This is good news as it means that there are less likely to be long-term health impacts from inhaling the material.

There’s also evidence that graphene, unlike some forms of thin, straight carbon nanotubes, does not migrate to the outside layers of the lungs where it could potentially do a lot more damage.

Again, this is encouraging as it suggests that graphene is unlikely to lead to serious long-term health impacts like mesothelioma.

However, research also shows that this is not a benign material. Despite being made of carbon — and it’s tempting to think of carbon as being safe, just because we’re familiar with it — there is some evidence that the jagged edges of some graphene particles can harm cells, leading to local damage as the body responds to any damage the material causes.

There are also concerns, although they are less well explored in the literature, that some forms of graphene may be carriers for nanometer-sized metal particles that can be quite destructive in the lungs. This is certainly the case with some carbon nanotubes, as the metallic catalyst particles used to manufacture them become embedded in the material, and contribute to its toxicity.

The long and short of this is that, while there are still plenty of gaps in our knowledge around how much graphene it’s safe to inhale, inhaling small graphene particles probably isn’t a great idea unless there’s been comprehensive testing to show otherwise.

And this brings us to graphene-containing face masks.

….

Here, it’s important to stress that we don’t yet know if graphene particles are being released and, if they are, whether they are being released in sufficient quantities to cause health effects. And there are indications that, if there are health risks, these may be relatively short-term — simply because graphene particles may be effectively degraded by the lungs’ defenses.

At the same time, it seems highly irresponsible to include a material with unknown inhalation risks in a product that is intimately associated with inhalation. Especially when there are a growing number of face masks available that claim to use graphene.

… There are millions of graphene face masks and respirators being sold and used around the world. And while the unfolding news focuses on Quebec and one particular type of face mask, this is casting uncertainty over the safety of any graphene-containing masks that are being sold.

And this uncertainty will persist until manufacturers and regulators provide data indicating that they have tested the products for the release and subsequent inhalation of fine graphene particles, and shown the risks to be negligible.

I strongly recommend reading Dr. Maynard’s March 26, 2021 posting in its entirety; he has updated it twice since first posting the story.

In short, you may want to hold off on buying a mask with graphene until there’s more data about safety.

COVID-19 infection as a dance of molecules

What a great bit of work, publicity-wise, from either or both the Aga Khan Museum in Toronto (Canada) and artist/scientist Radha Chaddah. IAM (ee-yam): Dance of the Molecules, a virtual performance installation featuring COVID-19 and molecular dance, has been profiled in the Toronto Star, on the Canadian Broadcasting Corporation (CBC) website, and in the Globe and Mail within the last couple of weeks. From a Canadian perspective, that’s major coverage and much of it national.

Bruce DeMara’s March 11, 2021 article for the Toronto Star introduces artist/scientist Radha Chaddah, her COVID-19 dance of molecules, and her team (Note: A link has been removed),

Visual artist Radha Chaddah has always had an abiding interest in science. She has a degree in biology and has done graduate studies in stem cell research.

[…] four-act dance performance; the first part “IAM: Dance of the Molecules” premiered as a digital exhibition on the Aga Khan Museum’s website March 5 [2021] and runs for eight weeks. Subsequent acts — human, planetary and universal, all using the COVID virus as an entry point — will be unveiled over the coming months until the final instalment in December 2022.

Among Chaddah’s team were Allie Blumas and the Open Fortress dance collective — who perform as microscopic components of the virus’s proliferation, including “spike” proteins, A2 receptors and ribosomes — costumiers Call and Response (who designed for the late Prince), director of photography Henry Sansom and composer Dan Bédard (who wrote the film’s music after observing the dance rehearsals remotely).

A March 5, 2021 article by Leah Collins for CBC online offers more details (Note: Links have been removed),

This month, the Aga Khan Museum in Toronto is debuting new work from local artist Radha Chaddah. Called IAM, this digital exhibition is actually the first act in a series of four short films that she aims to produce between now and the end of 2022. It’s a “COVID story,” says Chaddah, but one that offers a perspective beyond the anniversary of its impact on life and culture and toilet-paper consumption. “I wanted to present a piece that makes people think about the coronavirus in a different way,” she explains, “one that pulls them out of the realm of fear and puts our imaginations into the realm of curiosity.”

It’s scientific curiosity that Chaddah’s talking about, and her own extra-curricular inquiries first sparked the series. For several years, Chaddah has produced work that splices art and science, a practice she began while doing grad studies in molecular neurobiology. “If I had to describe it simply, I would say that I make art about invisible realities, often using the tools of research science,” she says, and in January of last year, she was gripped by news of the novel coronavirus’ discovery. 

“I started researching: reading research papers, looking into how it was that [the virus] actually affected the human body,” she says. “How does it get into the cells? What’s its replicative life cycle?” Chaddah wanted a closer look at the structure of the various molecules associated with the progression of COVID-19 in the body, and there is, it turns out, a trove of free material online. Using animated 3-D renderings (sourced from this digital database), Chaddah began reviewing the files: blowing them up with a video projector, and using the trees in her own backyard as “a kind of green, living stage.”

Part one of IAM (the film appearing on the Aga Khan’s website) is called “Dance of the Molecules.” Recorded on Chaddah’s property in September, it features two dancers: Allie Blumas (who choreographed the piece) and Lee Gelbloom. Their bodies, along with the leafy setting, serve as a screen for Chaddah’s projections: a swirl of firecracker colour and pattern, built from found digital models. Quite literally, the viewer is looking at an illustration of how the coronavirus infects the human body and then replicates. (The very first images, for example, are close-ups of the virus’ spiky surface, she explains.) And in tandem with this molecular drama, the dancers interpret the process. 

There is a brief preview,

To watch part 1 of IAM: Dance of the Molecules, go here to the Aga Khan Museum.

Enjoy!

Being a bit curious, I looked up Radha Chaddah’s website and found this on her Bio webpage (click on the About tab for the dropdown menu from the Home page),

Radha Chaddah is a Toronto based visual artist and scientist. Born in Owen Sound, Ontario she studied Film and Art History at Queen’s University (BAH), and Human Biology at the University of Toronto, where she received a Master of Science in Cell and Molecular Neurobiology. 

Chaddah makes art about invisible realities like the cellular world, electromagnetism and wave form energy, using light as her primary medium.  Her work examines the interconnected themes of knowledge, illusion, desire and the unseen world. In her studio she designs projected light installations for public exhibition. In the laboratory, she uses the tools of research science to grow and photograph cells using embedded fluorescent light-emitting molecules. Her cell photographs and light installations have been exhibited across Canada and her photographs have appeared in numerous publications.  She has lectured on basic cell and stem cell biology for artists, art students and the public at OCADU [Ontario College of Art & Design University], the Ontario Science Centre, the University of Toronto and the Textile Museum of Canada.

I also found Call and Response here, the Open Fortress dance collective on the Centre de Création O Vertigo website, Henry Sansom here, and Dan Bédard here. Both Bédard and Sansom can be found on the Internet Movie Database (IMDb.com), as well.

Detecting COVID-19 in under five minutes with paper-based sensor made of graphene

A Dec. 7, 2020 news item on Nanowerk announced a new technology for rapid COVID-19 testing (Note: A link has been removed),

As the COVID-19 pandemic continues to spread across the world, testing remains a key strategy for tracking and containing the virus. Bioengineering graduate student, Maha Alafeef, has co-developed a rapid, ultrasensitive test using a paper-based electrochemical sensor that can detect the presence of the virus in less than five minutes.

The team led by professor Dipanjan Pan reported their findings in ACS Nano (“Rapid, Ultrasensitive, and Quantitative Detection of SARS-CoV-2 Using Antisense Oligonucleotides Directed Electrochemical Biosensor Chip”).

“Currently, we are experiencing a once-in-a-century life-changing event,” said Alafeef. “We are responding to this global need from a holistic approach by developing multidisciplinary tools for early detection and diagnosis and treatment for SARS-CoV-2.”

I wonder why they didn’t think to provide a caption identifying the graphene substrate (the square surface) underlying the gold electrode (the round thing), or one identifying the electrode itself. Maybe they assumed anyone knowledgeable about graphene would be able to identify them?

Caption: COVID-19 electrochemical sensing platform. Credit: University of Illinois

A Dec. 7, 2020 University of Illinois Grainger College of Engineering news release (also on EurekAlert) by Huan Song, which originated the news item, provides more technical detail including a description of the graphene substrate and the gold electrode, which make up the cheaper, faster COVID-19 sensing platform,

There are two broad categories of COVID-19 tests on the market. The first category uses reverse transcriptase real-time polymerase chain reaction (RT-PCR) and nucleic acid hybridization strategies to identify viral RNA. Current FDA [US Food and Drug Administration]-approved diagnostic tests use this technique. Some drawbacks include the amount of time it takes to complete the test, the need for specialized personnel and the availability of equipment and reagents.

The second category of tests focuses on the detection of antibodies. However, there could be a delay of a few days to a few weeks after a person has been exposed to the virus for them to produce detectable antibodies.

In recent years, researchers have had some success with creating point-of-care biosensors using 2D nanomaterials such as graphene to detect diseases. The main advantages of graphene-based biosensors are their sensitivity, low cost of production and rapid detection turnaround. “The discovery of graphene opened up a new era of sensor development due to its properties. Graphene exhibits unique mechanical and electrochemical properties that make it ideal for the development of sensitive electrochemical sensors,” said Alafeef. The team created a graphene-based electrochemical biosensor with an electrical read-out setup to selectively detect the presence of SARS-CoV-2 genetic material.

There are two components [emphasis mine] to this biosensor: a platform to measure an electrical read-out and probes to detect the presence of viral RNA. To create the platform, researchers first coated filter paper with a layer of graphene nanoplatelets to create a conductive film [emphasis mine]. Then, they placed a gold electrode with a predefined design on top of the graphene [emphasis mine] as a contact pad for electrical readout. Both gold and graphene have high sensitivity and conductivity which makes this platform ultrasensitive to detect changes in electrical signals.

Current RNA-based COVID-19 tests screen for the presence of the N-gene (nucleocapsid phosphoprotein) on the SARS-CoV-2 virus. In this research, the team designed antisense oligonucleotide (ASO) probes to target two regions of the N-gene. Targeting two regions ensures the reliability of the sensor in case one region undergoes gene mutation. Furthermore, gold nanoparticles (AuNP) are capped with these single-stranded nucleic acids (ssDNA), which represents an ultra-sensitive sensing probe for the SARS-CoV-2 RNA.

The researchers previously showed the sensitivity of the developed sensing probes in their earlier work published in ACS Nano. The hybridization of the viral RNA with these probes causes a change in the sensor electrical response. The AuNP caps accelerate the electron transfer and when broadcasted over the sensing platform, results in an increase in the output signal and indicates the presence of the virus.

The team tested the performance of this sensor by using COVID-19 positive and negative samples. The sensor showed a significant increase in the voltage of positive samples compared to the negative ones and confirmed the presence of viral genetic material in less than five minutes. Furthermore, the sensor was able to differentiate viral RNA loads in these samples. Viral load is an important quantitative indicator of the progress of infection and a challenge to measure using existing diagnostic methods.

This platform has far-reaching applications due to its portability and low cost. The sensor, when integrated with microcontrollers and LED screens or with a smartphone via Bluetooth or wifi, could be used at the point-of-care in a doctor’s office or even at home. Beyond COVID-19, the research team also foresees the system to be adaptable for the detection of many different diseases.

“The unlimited potential of bioengineering has always sparked my utmost interest with its innovative translational applications,” Alafeef said. “I am happy to see my research project has an impact on solving a real-world problem. Finally, I would like to thank my Ph.D. advisor professor Dipanjan Pan for his endless support, research scientist Dr. Parikshit Moitra, and research assistant Ketan Dighe for their help and contribution toward the success of this study.”
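As described in the news release, the readout boils down to comparing the sensor’s change in output voltage against a calibrated cutoff, with larger changes tracking higher viral loads. Here is a minimal sketch of that logic in Python; the baseline, threshold, and sample voltages are invented for illustration and are not taken from the paper.

```python
# A toy version (not the paper's actual method) of the readout logic:
# a positive call is a voltage change above a calibrated threshold, and a
# larger change suggests a higher viral load. All numbers are invented.

BASELINE_MV = 120.0    # assumed output voltage (mV) for a virus-free control
THRESHOLD_MV = 15.0    # assumed minimum change signalling RNA hybridization

def classify_sample(measured_mv: float) -> str:
    """Call a sample positive or negative from its voltage change."""
    delta = measured_mv - BASELINE_MV
    verdict = "positive" if delta > THRESHOLD_MV else "negative"
    return f"{verdict} (signal change {delta:.1f} mV)"

for sample_mv in (122.0, 158.0, 210.0):
    print(classify_sample(sample_mv))
# -> negative (2.0 mV), positive (38.0 mV), positive (90.0 mV); among the
#    positives, the larger change would indicate the higher viral load
```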

Here’s a link to and a citation for the paper,

Rapid, Ultrasensitive, and Quantitative Detection of SARS-CoV-2 Using Antisense Oligonucleotides Directed Electrochemical Biosensor Chip by Maha Alafeef, Ketan Dighe, Parikshit Moitra, and Dipanjan Pan. ACS Nano 2020, 14, 12, 17028–17045 DOI: https://doi.org/10.1021/acsnano.0c06392 Publication Date: October 20, 2020 Copyright © 2020 American Chemical Society

I’m not sure where I found this notice but it is most definitely from the American Chemical Society: “This paper is freely accessible, at this time, for unrestricted RESEARCH re-use and analyses in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.”

Why is Precision Nanosystems Inc. in the local (Vancouver, Canada) newspaper?

Usually when a company is featured in a news item, there’s some reason why it’s considered newsworthy. Even after reading the article twice, I still don’t see what makes Precision Nanosystems Inc. (PNI) newsworthy.

Kevin Griffin’s Jan. 17, 2021 article about Vancouver area Precision Nanosystems Inc. (PNI) for The Province is interesting for anyone who’s looking for information about members of the local biotechnology and/or nanomedicine community (Note: Links have been removed),

A Vancouver nanomedicine company is part of a team using new genetic technology to develop a COVID-19 vaccine.

Precision NanoSystems Incorporated is working on a vaccine in the same class as the ones made by Pfizer-BioNTech and Moderna, the only two COVID-19 vaccines approved by Health Canada.

PNI’s vaccine is based on a new kind of technology called mRNA which stands for messenger ribonucleic acid. The mRNA class of vaccines carry genetic instructions to make proteins that trigger the body’s immune system. Once a body has antibodies, it can fight off a real infection when it comes in contact with SARS-CoV-2, the name of the virus that causes COVID-19.

James Taylor, CEO of Precision NanoSystems, said the “revolutionary technology” is having an impact not only on the COVID-19 pandemic but also on the treatment of other diseases.

The federal government has invested $18.2 million in PNI to carry its vaccine candidate through pre-clinical studies and clinical trials.

Ottawa has also invested another $173 million in Medicago, a Quebec City-based company which is developing a virus-like particle vaccine on a plant-based platform and building a large-scale vaccine and antibody production facility. The federal government has an agreement with Medicago to buy up to 76 million doses (enough for 38 million people) of its COVID-19 vaccine.

PNI’s vaccine, which the company is developing with other collaborators, is still at an early, pre-clinical stage.

Taylor is one of the co-founders of PNI along with Euan Ramsay, the company’s chief commercial officer.

The scientific co-founders of PNI are physicist Carl Hansen [emphasis mine] and Pieter Cullis. Cullis is also board chairman and scientific adviser at Acuitas Therapeutics [emphasis mine], the UBC biotechnology company that developed the delivery system for the Pfizer-BioNTech COVID-19 vaccine.

PNI, founded in 2010 as a spin-off from UBC [University of British Columbia], focuses on developing technology and expertise in genetic medicine to treat a wide range of infectious and rare diseases and cancers.

What has been described as PNI’s flagship product is a NanoAssemblr Benchtop Instrument, which allows scientists to develop nanomedicines for testing.

It’s informational, but none of this is new if you’ve been following developments in the COVID-19 vaccine story or the local biotechnology scene. The $18.2 million federal government investment was announced in the company’s latest press release, dated October 23, 2020. Not exactly fresh news.

One possibility is that the company is trying to generate publicity prior to a big announcement. As to why a reporter would produce this profile, perhaps he was promised an exclusive?

Acuitas Therapeutics, which I highlighted in the excerpt from Griffin’s story, has been featured here before in a November 12, 2020 posting about lipid nanoparticles and their role in the development of the Pfizer-BioNTech COVID-19 vaccine.

Curiously (or not), Griffin didn’t mention Vancouver’s biggest ‘COVID-19 star’, AbCellera. You can find out more about that company in my December 30, 2020 posting titled, Avo Media, Science Telephone, and a Canadian COVID-19 billionaire scientist, which features a link to a video about AbCellera’s work (scroll down about 60% of the way to the subsection titled: Avo Media, The Tyee, and Science Telephone, second paragraph).

The Canadian COVID-19 billionaire scientist? That would be Carl Hansen, Chief Executive Officer and co-founder of AbCellera and co-founder of PNI. It’s such a small world sometimes.

Governments need to tell us when and how they’re using AI (artificial intelligence) algorithms to make decisions

I have two items and an exploration of the Canadian scene, all three of which feature governments, artificial intelligence, and responsibility.

Special issue of Information Polity edited by Dutch academics,

A December 14, 2020 IOS Press press release (also on EurekAlert) announces a special issue of Information Polity focused on algorithmic transparency in government,

Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.

Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.

Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.

“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”

The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.

“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”

The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.

For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”

At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transport Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as tool for transparency in government decision-making,” comments Dr. Ingrams.

“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”

“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.

This image illustrates the interplay between the various level dynamics,

Caption: Studying algorithms and algorithmic transparency from multiple levels of analyses. Credit: Information Polity.

Here’s a link to and a citation for the special issue,

Algorithmic Transparency in Government: Towards a Multi-Level Perspective
Guest Editors: Sarah Giest, PhD, and Stephan Grimmelikhuijsen, PhD
Information Polity, Volume 25, Issue 4 (December 2020), published by IOS Press

The issue is open access for three months, Dec. 14, 2020 – March 14, 2021.

Two articles from the special issue were featured in the press release,

“The agency of algorithms: Understanding human-algorithm interaction in administrative decision-making,” by Rik Peeters, PhD (https://doi.org/10.3233/IP-200253)

“A machine learning approach to open public comments for policymaking,” by Alex Ingrams, PhD (https://doi.org/10.3233/IP-200256)
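For readers curious about what an “unsupervised machine learning analysis” of public comments can look like in practice, here is a minimal sketch: TF-IDF vectorization followed by non-negative matrix factorization (NMF) to surface topic clusters. The press release doesn’t name the algorithm Dr. Ingrams used, so NMF is my stand-in, and the four sample comments are invented.

```python
# Toy topic clustering of public comments, in the spirit of the Ingrams
# article above. NMF is a stand-in; the study's actual algorithm isn't
# named in the press release. The comments are invented examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

comments = [
    "The scanners invade passenger privacy and store body images",
    "Full body imaging scanners improve security screening at airports",
    "Privacy concerns outweigh any security benefit from the scanners",
    "Screening lines at airports will slow down with the new machines",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(comments)  # comments -> weighted term matrix

model = NMF(n_components=2, random_state=0)  # look for two topic clusters
model.fit(tfidf)

terms = vectorizer.get_feature_names_out()
for i, component in enumerate(model.components_):
    top = [terms[j] for j in component.argsort()[::-1][:4]]
    print(f"Topic {i}: {', '.join(top)}")  # top four words per topic
```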

An AI governance publication from the US’s Wilson Center

Within one week of the release of a special issue of Information Polity on AI and governments, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,

Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg

Abstract

In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well-specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:

  • AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
  • Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
  • The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
  • The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
  • The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
  • As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.

Unfortunately, I haven’t been able to successfully download the working paper/report from the Wilson Center’s Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems webpage.

However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibility, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as the BRIE Working Paper 2020-5.

Canadian government and AI

The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.

There is information out there but it’s scattered across various government initiatives and ministries. Above all, it is not easy to find; there is little open communication. Whether that’s by design or due to the blindness and/or ineptitude to be found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they, too, have the problem. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)

Responsible use? Maybe not after 2019

First, there’s a Government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page, “Date modified: 2020-07-28,” all of the information dates from 2016 up to March 2019 (which you’ll find on ‘Our Timeline’). Is nothing new happening?

For anyone interested in responsible use, there are two sections, “Our guiding principles” and “Directive on Automated Decision-Making,” that answer some questions. I found the ‘Directive’ to be more informative, with its definitions, objectives, and even consequences. Sadly, you need to keep clicking to find the consequences, and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head?

What about the government’s digital service?

You might think the Canadian Digital Service (CDS) would also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,

In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.

At the time, Simon was Director of Outreach at Code for Canada.

Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (it’s a position with virtually no governmental infrastructure as opposed to the Minister of Innovation, Science and Economic Development who is responsible for many departments and agencies). The current minister is Joyce Murray whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.

Meanwhile, the folks at CDS are friendly, but they don’t offer much substantive information. From the CDS homepage,

Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.

Learn more

After clicking on Learn more, I found this,

At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.

How it works

We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.

Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.

Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.

Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.

As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)

Does the Treasury Board of Canada have charge of responsible AI use?

I think so, but there are government departments/ministries that also have some responsibilities for AI, and I haven’t seen any links back to the Treasury Board documentation.

For anyone not familiar with the Treasury Board (or even if you are), this December 14, 2009 article (Treasury Board of Canada: History, Organization and Issues) on Maple Leaf Web is quite informative,

The Treasury Board of Canada represents a key entity within the federal government. As an important cabinet committee and central agency, it plays an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.

It seems the Minister of Digital Government, Joyce Murray, is part of the Treasury Board, and the Treasury Board is the source for the Digital Operations Strategic Plan: 2018-2022.

I haven’t read the entire document, but the table of contents doesn’t include a heading for artificial intelligence, and there wasn’t any mention of it in the opening comments.

But isn’t there a Chief Information Officer for Canada?

Herein lies a tale (I doubt I’ll ever get the real story), but the answer is a qualified ‘no’. The Chief Information Officer for Canada, Alex Benay (there is an AI aspect), stepped down in September 2019 to join a startup company, according to an August 6, 2019 article by Mia Hunt for Global Government Forum,

Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.

“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.

He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.

He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]

I cannot find a current Chief Information Officer of Canada despite searching, but I did find this List of chief information officers (CIO) by institution. Where there was one, there are now many.

Since September 2019, Mr. Benay has moved again, according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),

Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a short few months stint.

The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.

Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.

Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.

Mindbridge has not completely lost what was touted as a star hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”

Via Mr. Benay, I’ve re-introduced artificial intelligence and introduced the Phoenix Pay system, and now I’m linking them to the government’s implementation of information technology in a specific case, and speculating about the implementation of artificial intelligence algorithms in government.

Phoenix Pay System Debacle (things are looking up), a harbinger for responsible use of artificial intelligence?

I’m happy to hear that the situation, in which government employees had no certainty about their paycheques, is getting better. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found they might get the correct amount on their paycheques, might find significantly less than they were entitled to, or might find huge increases.

The instability alone would be distressing, but compounding it with the inability to get the problems fixed must have been devastating. Almost five years later, the problems are being resolved and people are getting paid appropriately more often.

The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,

Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.

And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.

Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.

These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.

While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PIPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.

Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.

Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?

Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.

When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.

Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.

Instead, the Phoenix Pay system currently employs about 2,300.  This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.

… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].

Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) in their operations.

I found this on a Treasury Board webpage, all 1 minute and 29 seconds of it,

The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.

As for Public Services and Procurement Canada, they have an Artificial intelligence source list,

Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).

After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:

Insights and predictive modelling

Machine interactions

Cognitive automation

PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.

I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,

Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians’ needs.

Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.

To sum up, I could not find any information dated after March 2019 on a Canadian government website about the government’s plans for AI, especially its responsible management/governance, although I did find guidelines, expectations, and consequences for non-compliance. (Should anyone know which government agency has up-to-date information on its responsible use of AI, please let me know in the Comments.)

Canadian Institute for Advanced Research (CIFAR)

The first mention of the Pan-Canadian Artificial Intelligence Strategy is in my analysis of the Canadian federal budget in a March 24, 2017 posting. Briefly, CIFAR received a big chunk of that money. Here’s more about the strategy from the CIFAR Pan-Canadian AI Strategy homepage,

In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.

CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.

The objectives of the strategy are to:

Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.

Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.

Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.

Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.

Responsible AI at CIFAR

You can find Responsible AI in a webspace devoted to what they call AI & Society. Here’s more from the homepage,

CIFAR is leading global conversations about AI’s impact on society.

The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.

Solution Networks

AI Futures Policy Labs

AI & Society Workshops

Building an AI World

Under the category of building an AI World I found this (from CIFAR’s AI & Society homepage),

BUILDING AN AI WORLD

Explore the landscape of global AI strategies.

Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.

I skimmed through the second report and it seems more like a comparative study of various countries’ AI strategies than an overview of responsible use of AI.

Final comments about Responsible AI in Canada and the new reports

I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.

I didn’t find any details about how AI is being integrated into government departments and for what uses. I’d like to know, and I’d like to have some say about how it’s used and how the inevitable mistakes will be dealt with.

The great unwashed

What I’ve found is high-minded but, as far as I can tell, there’s absolutely no interest in talking to the ‘great unwashed’. Those of us who are not experts are being left out of these early-stage conversations.

I’m sure we’ll be consulted at some point, but it will be long past the time when our opinions and insights could have had an impact and helped us avoid the problems that experts tend not to see. What we’ll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government’s part.

Let’s take this for an example. The Phoenix Pay System was implemented in its first phase on Feb. 24, 2016. As I recall, problems developed almost immediately. The second phase of implementation started April 21, 2016. In May 2016, the government hired consultants to fix the problems. On November 29, 2016, the responsible minister, Judy Foote, admitted a mistake had been made. In February 2017, the government hired consultants to establish what lessons might be learned. By February 15, 2018, the backlog of pay problems amounted to 633,000. Source: James Bagnall’s Feb. 23, 2018 ‘timeline’ for the Ottawa Citizen

Do take a look at the timeline; there’s more to it than what I’ve written here, and I’m sure there’s more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It’s fascinating, though, how often a failure to listen presages far deeper problems with a project.

Canadian governments, both Conservative and Liberal, contributed to the Phoenix debacle, but the gravest concern seems to lie with senior government bureaucrats. You might think things have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,

The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.

Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.

In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix − for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.

Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.

Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.

Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”

Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”

The Privy Council Clerk is the top-level bureaucrat (and there is only one such clerk) in the civil/public service, and I think his quotes are quite telling of “pervasive cultural problems.” There’s a new Privy Council Clerk but, from what I can tell, he was well trained by his predecessor.

Do we really need senior government bureaucrats?

I now have an example of bureaucratic interference, specifically with the Global Public Health Intelligence Network (GPHIN), where it would seem that not much has changed, from a December 26, 2020 article by Grant Robertson for the Globe & Mail,

When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19

As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.

With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.

“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”

Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”

It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.

Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.

By late February [2020], Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.

“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”

China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”

It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.

But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.

The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.

However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.

The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July [2020], are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.

Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.

Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.

Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.

If you have time, do take a look at Robertson’s December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands while taking trips to attend meetings (in desirable locations) for which they had no significant or useful input.

The Phoenix and GPHIN debacles bear a resemblance in that senior bureaucrats took over and in a state of blissful ignorance made a series of disastrous decisions bolstered by politicians who seem to neither understand nor care much about the outcomes.

If you think I’m being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton’s 2020 year-end interview with Prime Minister Trudeau (Note: There are some commercials). Pay special attention to Trudeau’s answer to the first question,

Responsible AI, eh?

Based on the massive mishandling of the Phoenix Pay System implementation, where top bureaucrats did not follow basic, well-established information services procedures, and the mismanagement of the Global Public Health Intelligence Network by top-level bureaucrats, I’m not sure I have a lot of confidence in any Canadian government claims about a responsible approach to using artificial intelligence.

Unfortunately, it doesn’t matter as implementation is most likely already taking place here in Canada.

Enough with the pessimism. I feel it’s necessary to end this on a mildly positive note. Hurray to the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those people striving to make true the principles of ‘Peace, Order, and Good Government’, the bedrock principles of the Canadian Parliament.

A lot of mistakes have been made but we also do make a lot of good decisions.

A Vancouver (Canada) connection to the Pfizer COVID-19 vaccine

Canada’s NanoMedicines Innovation Network (NMIN) must have been excited over the COVID-19 vaccine news (Pfizer Nov. 9, 2020 news release) since it’s a Canadian company (Acuitas Therapeutics) that is providing the means of delivering the vaccine once it enters the body.

Here’s the company’s president and CEO [chief executive officer], Dr. Thomas Madden explaining his company’s delivery system (from Acuitas’ news and events webpage),

For anyone who might find a textual description about the vaccine helpful, I have a Nov. 9, 2020 article by Adele Peters for Fast Company,

… a handful of small biotech companies began scrambling to develop vaccines using an as-yet-unproven technology platform that relies on something called messenger RNA [ribonucleic acid], usually shortened to mRNA …

Like other vaccines, mRNA vaccines work by training the immune system to recognize a threat like a virus and begin producing antibodies to protect itself. But while traditional vaccines often use inactivated doses of the organisms that cause disease, mRNA vaccines are designed to make the body produce those proteins itself. Messenger RNA—a molecule that carries instructions for cells to make proteins—is injected into cells. In the case of COVID-19, mRNA vaccines provide instructions for cells to start producing the “spike” protein of the new coronavirus, the protein that helps the virus get into cells. On its own, the spike protein isn’t harmful. But it triggers the immune system to begin a defensive response. As Bill Gates, who has supported companies like Moderna and BioNTech through the Gates Foundation, has described it, “you essentially turn your body into its own manufacturing unit.”

Amy Judd’s Nov. 9, 2020 article for Global news online explains (or you can just take another look at the video to refresh your memory) how the Acuitas technology fits into the vaccine picture,

Vancouver-based Acuitas Therapeutics, a biotechnology company, is playing a key role through a technology known as lipid nanoparticles, which deliver messenger RNA into cells.

“The technology we provide to our partners is lipid nanoparticles and BioNTech and Pfizer are developing a vaccine that’s using a messenger RNA that tells our cells how to make a protein that’s actually found in the COVID-19 virus,” Dr. Thomas Madden, president and CEO of Acuitas Therapeutics, told Global News Monday [Nov. 9, 2020].

“But the messenger RNA can’t work by itself, it needs a delivery technology to protect this after it’s administered and then to carry it into the cells where it can be expressed and give rise to an immune response.”

Madden said they like to think of the lipid nanoparticles as protective wrapping around a fragile glass ornament [emphasis mine] being shipped to your house online. That protective wrapping would then make sure the ornament made it to your house, through your front door, then unwrap itself and leave in your hallway, ready for you to come and grab it when you came home.

Acuitas Therapeutics employs 29 people and Madden said he believes everyone is feeling very proud of their work.

“Not many people are aware of the history of this technology and the fact that it originated in Vancouver,” he added.

“Dr. Pieter Cullis was one of the key scientists who brought together a team to develop this technology many, many years ago. UBC and Vancouver and companies associated with those scientists have been at the global centre of this technology for many years now.

“I think we’ve been looking for a light at the end of the tunnel for quite some time. I think everybody has been hoping that a vaccine would be able to provide the protection we need to move out of our current situation and I think this is now a confirmation that this hope wasn’t misplaced.”

Nanomedicine in Vancouver

For anyone who’s curious about the Canadian nanomedicine scene, you can find out more about it on Canada’s NanoMedicines Innovation Network (NMIN) website. They recently held a virtual event (Vancouver Nanomedicine Day) on Sept. 17, 2020 (see my Sept. 11, 2020 posting for details), which featured a presentation about Acuitas’ technology.

Happily, the organizers have posted videos for most of the sessions. Dr. Ying Tam of Acuitas made this presentation (about 22 mins. running time) “A Novel Vaccine Approach Using Messenger RNA‐Lipid Nanoparticles: Preclinical and Clinical Perspectives.” If you’re interested in that video or any of the others go to the NanoMedicines Innovation Network’s Nanomedicine Day 2020 webpage.

Acuitas Therapeutics can be found here.

City University of Hong Kong (CityU) and its anti-bacterial graphene face masks

This looks like interesting work and I think the integration of visual images and embedded video in the news release (on the university website) is particularly well done. I won’t be including all the graphical information here as my focus is the text.

A Sept. 10, 2020 City University of Hong Kong (CityU) press release (also on EurekAlert) announces a greener, more effective face mask,

Face masks have become an important tool in fighting against the COVID-19 pandemic. However, improper use or disposal of masks may lead to “secondary transmission”. A research team from City University of Hong Kong (CityU) has successfully produced graphene masks with an anti-bacterial efficiency of 80%, which can be enhanced to almost 100% with exposure to sunlight for around 10 minutes. Initial tests also showed very promising results in the deactivation of two species of coronaviruses. The graphene masks are easily produced at low cost, and can help to resolve the problems of sourcing raw materials and disposing of non-biodegradable masks.

The research is conducted by Dr Ye Ruquan, Assistant Professor from CityU’s Department of Chemistry, in collaboration with other researchers. The findings were published in the scientific journal ACS Nano, titled “Self-Reporting and Photothermally Enhanced Rapid Bacterial Killing on a Laser-Induced Graphene Mask“.

Commonly used surgical masks are not anti-bacterial. This may lead to the risk of secondary transmission of bacterial infection when people touch the contaminated surfaces of the used masks or discard them improperly. Moreover, the melt-blown fabric used as a bacterial filter has an environmental impact as it is difficult to decompose. Therefore, scientists have been looking for alternative materials to make masks.

Converting other materials into graphene by laser

Dr Ye has been studying the use of laser-induced graphene [emphasis mine] in developing sustainable energy. While he was studying for his PhD at Rice University several years ago, the research team he participated in, led by his supervisor, discovered an easy way to produce graphene. They found that direct writing on carbon-containing polyimide films (a polymeric plastic material with high thermal stability) using a commercial CO2 infrared laser system can generate 3D porous graphene. The laser changes the structure of the raw material and hence generates graphene. That’s why it is named laser-induced graphene.

Graphene is known for its anti-bacterial properties, so as early as last September, before the outbreak of COVID-19, the idea of producing better-performing masks with laser-induced graphene had already crossed Dr Ye’s mind. He then kick-started the study in collaboration with researchers from the Hong Kong University of Science and Technology (HKUST), Nankai University, and other organisations.

Excellent anti-bacterial efficiency

The research team tested their laser-induced graphene with E. coli, and it achieved a high anti-bacterial efficiency of about 82%. In comparison, the anti-bacterial efficiencies of activated carbon fibre and melt-blown fabrics, both commonly used materials in masks, were only 2% and 9% respectively. Experiment results also showed that over 90% of the E. coli deposited on those two materials remained alive even after 8 hours, while most of the E. coli deposited on the graphene surface were dead after 8 hours. Moreover, the laser-induced graphene showed a superior anti-bacterial capacity for aerosolised bacteria.

Dr Ye said that more research on the exact mechanism of graphene’s bacteria-killing property is needed. But he believed it might be related to the damage of bacterial cell membranes by graphene’s sharp edge. And the bacteria may be killed by dehydration induced by the hydrophobic (water-repelling) property of graphene.

Previous studies suggested that COVID-19 would lose its infectivity at high temperatures. So the team carried out experiments to test if the graphene’s photothermal effect (producing heat after absorbing light) can enhance the anti-bacterial effect. The results showed that the anti-bacterial efficiency of the graphene material could be improved to 99.998% within 10 minutes under sunlight, while activated carbon fibre and melt-blown fabrics only showed an efficiency of 67% and 85% respectively.
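
[An aside from me: for readers used to thinking in disinfection terms, those kill percentages convert directly into log reductions. Here is a minimal worked sketch in Python, using only the figures quoted above:]

```python
import math

def log_reduction(kill_fraction: float) -> float:
    """Convert a kill fraction (e.g. 0.99998) into a log10 reduction."""
    return math.log10(1.0 / (1.0 - kill_fraction))

# Figures from the CityU press release, after 10 minutes of sunlight.
for material, kill in [("laser-induced graphene", 0.99998),
                       ("activated carbon fibre", 0.67),
                       ("melt-blown fabric", 0.85)]:
    print(f"{material}: {log_reduction(kill):.1f}-log reduction")
# graphene ~4.7 logs vs. ~0.5 and ~0.8 for the conventional materials
```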

The team is currently working with laboratories in mainland China to test the graphene material with two species of human coronaviruses. Initial tests showed that it inactivated over 90% of the virus in five minutes and almost 100% in 10 minutes under sunlight. The team plans to conduct tests with the COVID-19 virus later.

Their next step is to further enhance the anti-virus efficiency and develop a reusable strategy for the mask. They hope to release it to the market shortly after designing an optimal structure for the mask and obtaining the certifications.

Dr Ye described the production of laser-induced graphene as a “green technique”. All carbon-containing materials, such as cellulose or paper, can be converted into graphene using this technique. And the conversion can be carried out under ambient conditions without using chemicals other than the raw materials, nor causing pollution. And the energy consumption is low.

“Laser-induced graphene masks are reusable. If biomaterials are used for producing graphene, it can help to resolve the problem of sourcing raw material for masks. And it can lessen the environmental impact caused by the non-biodegradable disposable masks,” he added.

Dr Ye pointed out that producing laser-induced graphene is easy. Within just one and a half minutes, an area of 100 cm² can be converted into graphene as the outer or inner layer of the mask. Depending on the raw materials for producing the graphene, the price of the laser-induced graphene mask is expected to be between that of surgical mask and N95 mask. He added that by adjusting laser power, the size of the pores of the graphene material can be modified so that the breathability would be similar to surgical masks.

A new way to check the condition of the mask

To facilitate users to check whether graphene masks are still in good condition after being used for a period of time, the team fabricated a hygroelectric generator. It is powered by electricity generated from the moisture in human breath. By measuring the change in the moisture-induced voltage when the user breathes through a graphene mask, it provides an indicator of the condition of the mask. Experiment results showed that the more the bacteria and atmospheric particles accumulated on the surface of the mask, the lower the voltage resulted. “The standard of how frequently a mask should be changed is better to be decided by the professionals. Yet, this method we used may serve as a reference,” suggested Dr Ye.
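
The press release describes the self-reporting idea qualitatively (more accumulated bacteria and particles means a lower moisture-induced voltage) without publishing any thresholds. Purely as an illustration of how such a check might look in software, here is a minimal sketch; the baseline value, the millivolt units, and the 70% replacement threshold are all hypothetical:

```python
def mask_condition(baseline_mv: float, current_mv: float,
                   replace_below: float = 0.70) -> str:
    """Flag a graphene mask for replacement once the breath-driven
    voltage falls below a chosen fraction of its fresh-mask baseline."""
    ratio = current_mv / baseline_mv
    return "replace" if ratio < replace_below else "ok"

# Hypothetical readings: a fresh mask produced 120 mV; it now produces 75 mV.
print(mask_condition(baseline_mv=120.0, current_mv=75.0))  # -> "replace"
```

As Dr Ye notes, when to change a mask is a question for the professionals; the sketch only shows how a voltage ratio could serve as the indicator.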

Laser-induced graphene (LIG), Rice University, and Dr. Ye were mentioned here in a May 9, 2018 posting titled: Do you want that coffee with some graphene on toast?

Back to the latest research, read the caption carefully,

Research shows that over 90% of the E. coli deposited on activated carbon fibre (fig c and d) and melt-blown fabrics (fig e and f) remained alive even after 8 hours. In contrast, most of the E. coli deposited on the graphene surface (fig a and b) were dead. (Photo source: DOI number: 10.1021/acsnano.0c05330)

Here’s a link to and a citation for the paper,

Self-Reporting and Photothermally Enhanced Rapid Bacterial Killing on a Laser-Induced Graphene Mask by Libei Huang, Siyu Xu, Zhaoyu Wang, Ke Xue, Jianjun Su, Yun Song, Sijie Chen, Chunlei Zhu, Ben Zhong Tang, and Ruquan Ye. ACS Nano 2020, 14, 9, 12045–12053. DOI: https://doi.org/10.1021/acsnano.0c05330. Publication Date: August 11, 2020. Copyright © 2020 American Chemical Society.

This paper is behind a paywall.

D-Wave’s new Advantage quantum computer

Thanks to Bob Yirka’s September 30, 2020 article for phys.org, there’s an announcement about D-Wave Systems’ latest quantum computer and an explanation of how D-Wave’s quantum computer differs from other quantum computers. Here’s the explanation (Note: Links have been removed),

Over the past several years, several companies have dedicated resources to the development of a true quantum computer that can tackle problems conventional computers cannot handle. Progress on developing such computers has been slow, however, especially when compared with the early development of the conventional computer. As part of the research effort, companies have taken different approaches. Google and IBM, for example, are working on gate-model quantum computer technology, in which qubits are modified as an algorithm is executed. D-Wave, in sharp contrast, has been focused on developing so-called annealer technology, in which qubits are cooled during execution of an algorithm, which allows for passively changing their value.

Comparing the two is next to impossible because of their functional differences. Thus, using 5,000 qubits in the Advantage system does not necessarily mean that it is any more useful than the 100-qubit systems currently being tested by IBM or Google. Still, the announcement suggests that businesses are ready to start taking advantage of the increased capabilities of quantum systems. D-Wave notes that several customers are already using their system for a wide range of applications. Menten AI, for example, has used the system to design new proteins; grocery chain Save-On-Foods has been using it to optimize business operations; Accenture has been using it to develop business applications; Volkswagen has used the system to develop a more efficient car painting system.
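
To make the annealing approach a little more concrete: problems for D-Wave’s machines are typically written as binary quadratic models (QUBO or Ising form), and the annealer looks for low-energy assignments of the variables. Here is a minimal sketch using D-Wave’s open-source dimod library; the toy objective and its coefficients are my own invention, and the brute-force test solver stands in for the hardware samplers in the dwave-system package:

```python
import dimod  # D-Wave's open-source library for quadratic models

# A toy QUBO: minimize -x - y + 2xy over binary variables x and y.
# The linear terms reward setting each variable to 1; the coupling
# term penalizes setting both, so the minima are (1, 0) and (0, 1).
bqm = dimod.BinaryQuadraticModel(
    {'x': -1.0, 'y': -1.0},   # linear biases
    {('x', 'y'): 2.0},        # quadratic (coupling) bias
    0.0,                      # constant offset
    dimod.BINARY)

# ExactSolver enumerates every assignment -- fine for toy problems.
# A real workload would submit the same model to a quantum sampler.
sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset.first.sample, sampleset.first.energy)  # e.g. {'x': 1, 'y': 0} -1.0
```

The useful point is that the same model object can be handed to a classical test solver, a hybrid solver, or the quantum processing unit itself.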

Here’s the company’s Sept. 29, 2020 video announcement,

For those who might like some text, there’s a Sept. 29, 2020 D-Wave Systems press release (Note: Links have been removed; this is long),

D-Wave Systems Inc., the leader in quantum computing systems, software, and services, today [Sept. 29, 2020] announced the general availability of its next-generation quantum computing platform, incorporating new hardware, software, and tools to enable and accelerate the delivery of in-production quantum computing applications. Available today in the Leap™ quantum cloud service, the platform includes the Advantage™ quantum system, with more than 5000 qubits and 15-way qubit connectivity, in addition to an expanded hybrid solver service that can run problems with up to one million variables. The combination of the computing power of Advantage and the scale to address real-world problems with the hybrid solver service in Leap enables businesses to run performant, real-time, hybrid quantum applications for the first time.

As part of its commitment to enabling businesses to build in-production quantum applications, the company announced D-Wave Launch™, a jump-start program for businesses who want to get started building hybrid quantum applications today but may need additional support. Bringing together a team of applications experts and a robust partner community, the D-Wave Launch program provides support to help identify the best applications and to translate businesses’ problems into hybrid quantum applications. The extra support helps customers accelerate designing, building, and running their most important and complex applications, while delivering quantum acceleration and performance.

The company also announced a new hybrid solver. The discrete quadratic model (DQM) solver gives developers and businesses the ability to apply the benefits of hybrid quantum computing to new problem classes. Instead of accepting problems with only binary variables (0 or 1), the DQM solver uses other variable sets (e.g. integers from 1 to 500, or red, yellow, and blue), expanding the types of problems that can run on the quantum computer. The DQM solver will be generally available on October 8 [2020].

With support for new solvers and larger problem sizes backed by the Advantage system, customers and partners like Menten AI, Save-On-Foods, Accenture, and Volkswagen are building and running hybrid quantum applications that create solutions with business value today.

  • Protein design pioneer Menten AI has developed the first process using hybrid quantum programs to determine protein structure for de novo protein design with very encouraging results often outperforming classical solvers. Menten AI’s unique protein designs have been computationally validated, chemically synthesized, and are being advanced to live-virus testing against COVID-19.
  • Western Canadian grocery retailer Save-On-Foods is using hybrid quantum algorithms to bring grocery optimization solutions to their business, with pilot tests underway in-store. The company has been able to reduce the time an important optimization task takes from 25 hours to a mere 2 minutes of calculations each week. Even more important than the reduction in time is the ability to optimize performance across and between a significant number of business parameters in a way that is challenging using traditional methods.
  • Accenture, a leading global professional services company, is exploring quantum, quantum-inspired, and hybrid solutions to develop applications across industries. Accenture recently conducted a series of business experiments with a banking client to pilot quantum applications for currency arbitrage, credit scoring, and trading optimization, successfully mapping computationally challenging business problems to quantum formulations, enabling quantum readiness.
  • Volkswagen, an early adopter of D-Wave’s annealing quantum computer, has expanded its quantum use cases with the hybrid solver service to build a paint shop scheduling application. The algorithm is designed to optimize the order in which cars are being painted. By using the hybrid solver service, the number of color switches will be reduced significantly, leading to performance improvements.

The Advantage quantum computer and the Leap quantum cloud service include:

  • New Topology: The topology in Advantage makes it the most connected of any commercial quantum system in the world. In the D-Wave 2000Q™ system, qubits may connect to 6 other qubits. In the new Advantage system, each qubit may connect to 15 other qubits. With two-and-a-half times more connectivity, Advantage enables the embedding of larger problems with fewer physical qubits compared to using the D-Wave 2000Q system. The D-Wave Ocean™ software development kit (SDK) includes tools for using the new topology. Information on the topology in Advantage can be found in this white paper, and a getting started video on how to use the new topology can be found here.
  • Increased Qubit Count: With more than 5000 qubits, Advantage more than doubles the qubit count of the D-Wave 2000Q system. More qubits and richer connectivity provide quantum programmers access to a larger, denser, and more powerful graph for building commercial quantum applications.
  • Greater Performance & Problem Size: With up to one million variables, the hybrid solver service in Leap allows businesses to run large-scale, business-critical problems. This, coupled with the new topology and more than 5000 qubits in the Advantage system, expands the complexity and more than doubles the size of problems that can run directly on the quantum processing unit (QPU). In fact, the hybrid solver outperformed or matched the best of 27 classical optimization solvers on 87% of 45 application-relevant inputs tested in MQLib. Additionally, greater connectivity of the QPU allows for more compact embeddings of complex problems. Advantage can find optimal solutions 10 to 30 times faster in some cases, and can find better quality solutions up to 64 per cent of the time, when compared to the D-Wave 2000Q LN QPU.
  • Expansion of Hybrid Software & Tools in Leap: Further investments in the hybrid solver service, new solver classes, ease-of-use, automation, and new tools provide an even more powerful hybrid rapid development environment in Python for business-scale problems.
  • Flexible Access: Advantage, the expanded hybrid solver service, and the upcoming DQM solver are available in the Leap quantum cloud service. All current Leap customers get immediate access with no additional charge, and new customers will benefit from all the new and existing capabilities in Leap. This means that developers and businesses can get started today building in-production hybrid quantum applications. Flexible purchase plans allow developers and forward-thinking businesses to access the D-Wave quantum system in the way that works for them and their business. 
  • Ongoing Releases: D-Wave continues to bring innovations to market with additional hybrid solvers, QPUs, and software updates through the cloud. Interested users and customers can get started today with Advantage and the hybrid solver service, and will benefit from new components of the platform through Leap as they become available.

“Today’s general availability of Advantage delivers the first quantum system built specifically for business, and marks the expansion into production scale commercial applications and new problem types with our hybrid solver services. In combination with our new jump-start program to get customers started, this launch continues what we’ve known at D-Wave for a long time: it’s not about hype, it’s about scaling, and delivering systems that provide real business value on real business applications,” said Alan Baratz, CEO, D-Wave. “We also continue to invest in the science of building quantum systems. Advantage was completely re-engineered from the ground up. We’ll take what we’ve learned about connectivity and scale and continue to push the limits of innovation for the next generations of our quantum computers. I’m incredibly proud of the team that has brought us here and the customers and partners who have collaborated with us to build hundreds of early applications and who now are putting applications into production.”

“We are using quantum to design proteins today. Using hybrid quantum applications, we’re able to solve astronomical protein design problems that help us create new protein structures,” said Hans Melo, Co-founder and CEO, Menten AI. “We’ve seen extremely encouraging results with hybrid quantum procedures often finding better solutions than competing classical solvers for de novo protein design. This means we can create better proteins and ultimately enable new drug discoveries.”

“At Save-On-Foods, we have been committed to bringing innovation to our customers for more than 105 years. To that end, we are always looking for new and creative ways to solve problems, especially in an environment that has gotten increasingly complex,” said Andrew Donaher, Vice President, Digital & Analytics at Save-On-Foods. “We’re new to quantum computing, and in a short period of time, we have seen excellent early results. In fact, the early results we see with Advantage and the hybrid solver service from D-Wave are encouraging enough that our goal is to turn our pilot into an in-production business application. Quantum is emerging as a potential competitive edge for our business.“

“Accenture is committed to helping our clients prepare for the arrival of mainstream quantum computing by exploring relevant use cases and conducting business experiments now,” said Marc Carrel-Billiard, Senior Managing Director and Technology Innovation Lead at Accenture. “We’ve been collaborating with D-Wave for several years and with early access to the Advantage system and hybrid solver service we’ve seen performance improvements and advancements in the platform that are important steps for helping to make quantum a reality for clients across industries, creating new sources of competitive advantage.”

“Embracing quantum computing is nothing new for Volkswagen. We were the first to run a hybrid quantum application in production in Lisbon last November with our bus routing application,” said Florian Neukart, Director of Advanced Technologies at Volkswagen Group of America. “At Volkswagen, we are focusing on building up a deep understanding of meaningful applications of quantum computing in a corporate context. The D-Wave system gives us the opportunity to address optimization tasks with a large number of variables at an impressive speed. With this we are taking a step further towards quantum applications that will be suitable for everyday business use.”
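
The discrete quadratic model (DQM) solver described earlier in the press release maps naturally onto problems like graph colouring, where each variable takes one of several cases rather than just 0 or 1. As a rough, hedged illustration of what that looks like in code, here is a sketch using dimod’s DiscreteQuadraticModel; the triangle graph, the penalty value, and the brute-force test solver are my own choices, and a production run would go to the LeapHybridDQMSampler in D-Wave’s cloud service instead:

```python
import dimod

# Colour a triangle graph with 3 colours so that no edge joins
# two nodes of the same colour -- a classic discrete problem.
num_colours = 3
dqm = dimod.DiscreteQuadraticModel()
for node in ('a', 'b', 'c'):
    dqm.add_variable(num_colours, label=node)

# Penalize any edge whose two endpoints pick the same colour.
for u, v in (('a', 'b'), ('b', 'c'), ('a', 'c')):
    for colour in range(num_colours):
        dqm.set_quadratic_case(u, colour, v, colour, 1.0)

# Brute-force reference solver, adequate for this toy model; on Leap
# you would call dwave.system.LeapHybridDQMSampler().sample_dqm(dqm).
sampleset = dimod.ExactDQMSolver().sample_dqm(dqm)
print(sampleset.first.sample)  # e.g. {'a': 0, 'b': 1, 'c': 2}
```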

I found the description of D-Wave’s customers and how they’re using quantum computing to be quite interesting. For anyone curious about D-Wave Systems, you can find out more here. BTW, the company is located in metro Vancouver (Canada).

Gold nanoparticles make a new promise: a non-invasive COVID-19 breathalyser

I believe the swab they stick up your nose to test for COVID-19 is 10 inches long, so it seems to me that ‘discomfort’ or ‘unpleasant’ are not the words that best describe the testing experience.

Hopefully, no one will have to search for adequate vocabulary for this new COVID-19 test, assuming future trials are successful and the technology can be put into production. From an August 19, 2020 news item on Nanowerk,

Few people who have undergone nasopharyngeal swabs for coronavirus testing would describe it as a pleasant experience. The procedure involves sticking a long swab up the nose to collect a sample from the back of the nose and throat, which is then analyzed for SARS-CoV-2 RNA [ribonucleic acid] by the reverse-transcription polymerase chain reaction (RT-PCR).

Now, researchers reporting in [American Chemical Society] ACS Nano (“Multiplexed Nanomaterial-Based Sensor Array for Detection of COVID-19 in Exhaled Breath”) have developed a prototype device that non-invasively detected COVID-19 in the exhaled breath of infected patients.

An August 19, 2020 ACS news release (also received via email and on EurekAlert), which originated the news item, provides more technical details,

In addition to being uncomfortable, the current gold standard for COVID-19 testing requires RT-PCR, a time-consuming laboratory procedure. Because of backlogs, obtaining a result can take several days. To reduce transmission and mortality rates, healthcare systems need quick, inexpensive and easy-to-use tests. Hossam Haick, Hu Liu, Yueyin Pan and colleagues wanted to develop a nanomaterial-based sensor that could detect COVID-19 in exhaled breath, similar to a breathalyzer test for alcohol intoxication. Previous studies have shown that viruses and the cells they infect emit volatile organic compounds (VOCs) that can be exhaled in the breath.

The researchers made an array of gold nanoparticles linked to molecules that are sensitive to various VOCs. When VOCs interact with the molecules on a nanoparticle, the electrical resistance changes. The researchers trained the sensor to detect COVID-19 by using machine learning to compare the pattern of electrical resistance signals obtained from the breath of 49 confirmed COVID-19 patients with those from 58 healthy controls and 33 non-COVID lung infection patients in Wuhan, China. Each study participant blew into the device for 2-3 seconds from a distance of 1–2 cm. Once machine learning identified a potential COVID-19 signature, the team tested the accuracy of the device on a subset of participants. In the test set, the device showed 76% accuracy in distinguishing COVID-19 cases from controls and 95% accuracy in discriminating COVID-19 cases from lung infections. The sensor could also distinguish, with 88% accuracy, between sick and recovered COVID-19 patients. Although the test needs to be validated in more patients, it could be useful for screening large populations to determine which individuals need further testing, the researchers say.

The authors acknowledge funding from the Technion-Israel Institute of Technology.
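
The news release sketches the pipeline (sensor array → resistance signals → machine-learning classifier → held-out accuracy check) without naming the model used. For readers who want to see the shape of that workflow, here is a toy sketch on synthetic data; the random-forest classifier, the eight-channel feature vector, and the fabricated signal distributions are all stand-ins, not details from the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_channels = 8  # hypothetical: one feature per nanoparticle/ligand channel

# Synthetic stand-ins for resistance-change patterns, sized to match
# the study's cohorts: 49 COVID-19 patients vs. 91 controls
# (58 healthy + 33 non-COVID lung infections).
X_covid = rng.normal(0.5, 1.0, size=(49, n_channels))
X_other = rng.normal(0.0, 1.0, size=(91, n_channels))
X = np.vstack([X_covid, X_other])
y = np.array([1] * 49 + [0] * 91)

# Train on part of the cohort, then score a held-out subset,
# mirroring the train/test procedure described in the release.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```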

Here’s a link to and a citation for the paper,

Multiplexed Nanomaterial-Based Sensor Array for Detection of COVID-19 in Exhaled Breath by Benjie Shan, Yoav Y Broza, Wenjuan Li, Yong Wang, Sihan Wu, Zhengzheng Liu, Jiong Wang, Shuyu Gui, Lin Wang, Zhihong Zhang, Wei Liu, Shoubing Zhou, Wei Jin, Qianyu Zhang, Dandan Hu, Lin Lin, Qiujun Zhang, Wenyu Li, Jinquan Wang, Hu Liu, Yueyin Pan, and Hossam Haick. ACS Nano 2020, XXXX, XXX, XXX-XXX. DOI: https://doi.org/10.1021/acsnano.0c05657. Publication Date: August 18, 2020. Copyright © 2020 American Chemical Society.

This paper is behind a paywall.