
Governments need to tell us when and how they’re using AI (artificial intelligence) algorithms to make decisions

I have two items and an exploration of the Canadian scene, all three of which feature governments, artificial intelligence, and responsibility.

Special issue of Information Polity edited by Dutch academics,

A December 14, 2020 IOS Press press release (also on EurekAlert) announces a special issue of Information Polity focused on algorithmic transparency in government,

Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.

Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.

Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.

“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”

The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.

“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”

The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.

For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”

At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transportation Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as a tool for transparency in government decision-making,” comments Dr. Ingrams.
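To give a concrete sense of what an unsupervised analysis that "highlights salient topic clusters" can look like, here is a minimal sketch in Python using scikit-learn's latent Dirichlet allocation (LDA) on a few invented comments. It is only an illustration of the general technique; Dr. Ingrams' actual pipeline, data, and parameters are described in his article, not here.

# A toy topic-clustering pass over a handful of made-up public comments.
# Assumes scikit-learn is installed; the comments below are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "Full body scanners invade traveller privacy at airports",
    "Scanners improve security screening and reduce wait times",
    "Health effects of imaging radiation need independent review",
    "Privacy safeguards for stored scanner images are unclear",
]

# Bag-of-words representation of the comments
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)

# Fit a small LDA model and print the top words in each topic cluster
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top_terms)}")

In a real analysis, the topics that emerge from thousands of comments give policymakers a summary of public concerns without anyone having to read every submission, which is the sense in which the algorithm itself becomes a tool for transparency.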

“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”

“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.

This image illustrates the interplay between the dynamics at the various levels,

Caption: Studying algorithms and algorithmic transparency from multiple levels of analyses. Credit: Information Polity.

Here’s a link to and a citation for the special issue,

Algorithmic Transparency in Government: Towards a Multi-Level Perspective
Guest Editors: Sarah Giest, PhD, and Stephan Grimmelikhuijsen, PhD
Information Polity, Volume 25, Issue 4 (December 2020), published by IOS Press

The issue is open access for three months, Dec. 14, 2020 – March 14, 2021.

Two articles from the special issue were featured in the press release,

“The agency of algorithms: Understanding human-algorithm interaction in administrative decision-making,” by Rik Peeters, PhD (https://doi.org/10.3233/IP-200253)

“A machine learning approach to open public comments for policymaking,” by Alex Ingrams, PhD (https://doi.org/10.3233/IP-200256)

An AI governance publication from the US’s Wilson Center

Within one week of the release of a special issue of Information Polity on AI and governments, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,

Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg

Abstract

In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well-specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:

  • AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
  • Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
  • The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
  • The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
  • The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
  • As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.

Unfortunately, I haven’t been able to successfully download the working paper/report from the Wilson Center’s Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems webpage.

However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibility, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as the BRIE Working Paper 2020-5.

Canadian government and AI

The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.

There is information out there but it’s scattered across various government initiatives and ministries and, above all, it is not easy to find; there is little in the way of open communication. Whether that’s by design or due to the blindness and/or ineptitude found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they have the same problem. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)

Responsible use? Maybe not after 2019

First, there’s a Government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page, “Date modified: 2020-07-28,” all of the information dates from 2016 up to March 2019 (which you’ll find on ‘Our Timeline’). Is nothing new happening?

For anyone interested in responsible use, there are two sections, “Our guiding principles” and “Directive on Automated Decision-Making,” that answer some questions. I found the ‘Directive’ to be more informative with its definitions, objectives, and even consequences. Sadly, you need to keep clicking to find the consequences and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head?

What about the government’s digital service?

You might think the Canadian Digital Service (CDS) would also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,

In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.

At the time, Simon was Director of Outreach at Code for Canada.

Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (it’s a position with virtually no governmental infrastructure as opposed to the Minister of Innovation, Science and Economic Development who is responsible for many departments and agencies). The current minister is Joyce Murray whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.

Meanwhile, the folks at CDS are friendly but they don’t offer much substantive information. From the CDS homepage,

Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.

Learn more

After clicking on Learn more, I found this,

At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.

How it works

We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.

Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.

Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.

Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.

As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)

Does the Treasury Board of Canada have charge of responsible AI use?

I think so, but there are government departments/ministries that also have some responsibilities for AI, and I haven’t seen any links back to the Treasury Board documentation.

For anyone not familiar with the Treasury Board, or even if you are, this December 14, 2009 article (Treasury Board of Canada: History, Organization and Issues) on Maple Leaf Web is quite informative,

The Treasury Board of Canada represent a key entity within the federal government. As an important cabinet committee and central agency, they play an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.

It seems the Minister of Digital Government, Joyce Murray is part of the Treasury Board and the Treasury Board is the source for the Digital Operations Strategic Plan: 2018-2022,

I haven’t read the entire document but the table of contents doesn’t include a heading for artificial intelligence and there wasn’t any mention of it in the opening comments.

But isn’t there a Chief Information Officer for Canada?

Herein lies a tale (I doubt I’ll ever get the real story) but the answer is a qualified ‘no’. The Chief Information Officer for Canada, Alex Benay (there is an AI aspect) stepped down in September 2019 to join a startup company according to an August 6, 2019 article by Mia Hunt for Global Government Forum,

Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.

“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.

He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.

He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]

I cannot find a current Chief Information Officer of Canada despite searches but I did find this List of chief information officers (CIO) by institution. Where there was one, there are now many.

Since September 2019, Mr. Benay has moved again according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),

Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a short few months stint.

The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.

Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.

Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.

Mindbridge has not completely lost what was touted as a star hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”

Via Mr. Benay, I’ve re-introduced artificial intelligence and introduced the Phoenix Pay System. Now I’m linking them to the government’s implementation of information technology in a specific case and speculating about the implementation of artificial intelligence algorithms in government.

Phoenix Pay System Debacle (things are looking up), a harbinger for responsible use of artificial intelligence?

I’m happy to hear that the situation, where government employees had no certainty about their paycheques, is improving. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found they might get the correct amount on their paycheque, might get significantly less than they were entitled to, or might get huge increases.

The instability alone would be distressing but adding to it with the inability to get the problem fixed must have been devastating. Almost five years later, the problems are being resolved and people are getting paid appropriately, more often.

The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,

Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.

And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.

Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.

These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.

While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PIPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.

Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.

Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?

Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.

When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.

Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.

Instead, the Phoenix Pay system currently employs about 2,300.  This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.

… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].

Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) in their operations.

I found this on a Treasury Board webpage, all 1 minute and 29 seconds of it,

The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.

As for Public Services and Procurement Canada, they have an Artificial intelligence source list,

Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).

After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:

Insights and predictive modelling

Machine interactions

Cognitive automation

PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.

I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,

Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians’ needs.

Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.

To sum up, there is no information dated after March 2019 that I could find on a Canadian government website about Canada, its government, and plans for AI, especially responsible management/governance of AI, although I have found guidelines, expectations, and consequences for non-compliance. (Should anyone know which government agency has up-to-date information on its responsible use of AI, please let me know in the Comments.)

Canadian Institute for Advanced Research (CIFAR)

The first mention of the Pan-Canadian Artificial Intelligence Strategy is in my analysis of the Canadian federal budget in a March 24, 2017 posting. Briefly, CIFAR received a big chunk of that money. Here’s more about the strategy from the CIFAR Pan-Canadian AI Strategy homepage,

In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.

CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.

The objectives of the strategy are to:

Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.

Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.

Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.

Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.

Responsible AI at CIFAR

You can find Responsible AI in a webspace devoted to what they have called AI & Society. Here’s more from the homepage,

CIFAR is leading global conversations about AI’s impact on society.

The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.

Solution Networks

AI Futures Policy Labs

AI & Society Workshops

Building an AI World

Under the category of building an AI World I found this (from CIFAR’s AI & Society homepage),

BUILDING AN AI WORLD

Explore the landscape of global AI strategies.

Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.

I skimmed through the second report and it seems more like a comparative study of various countries’ AI strategies than an overview of responsible use of AI.

Final comments about Responsible AI in Canada and the new reports

I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.

I didn’t find any details about how AI is being integrated into government departments and for what uses. I’d like to know and I’d like to have some say about how it’s used and how the inevitable mistakes will be dealt with.

The great unwashed

What I’ve found is high minded, but, as far as I can tell, there’s absolutely no interest in talking to the ‘great unwashed’. Those of us who are not experts are being left out of these earlier stage conversations.

I’m sure we’ll be consulted at some point but it will be long past the time when our opinions and insights could have had an impact and helped us avoid the problems that experts tend not to see. What we’ll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government’s part.

Let’s take this as an example. The Phoenix Pay System was implemented in its first phase on Feb. 24, 2016. As I recall, problems developed almost immediately. The second phase of implementation started April 21, 2016. In May 2016, the government hired consultants to fix the problems. On November 29, 2016, the government minister, Judy Foote, admitted a mistake had been made. In February 2017, the government hired consultants to establish what lessons they might learn. By February 15, 2018, the pay problems backlog amounted to 633,000. Source: James Bagnall’s Feb. 23, 2018 ‘timeline‘ for the Ottawa Citizen

Do take a look at the timeline, there’s more to it than what I’ve written here and I’m sure there’s more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It’s fascinating though how often a failure to listen presages far deeper problems with a project.

The Canadian government, under both Conservative and Liberal administrations, contributed to the Phoenix Debacle but it seems the gravest concern is with senior government bureaucrats. You might think things have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,

The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.

Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.

In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix − for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.

Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.

Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.

Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”

Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”

The Privy Council Clerk is the top level bureaucrat (and there is only one such clerk) in the civil/public service and I think his quotes are quite telling of “pervasive cultural problems.” There’s a new Privy Council Clerk but from what I can tell he was well trained by his predecessor.

Do we really need senior government bureaucrats?

I now have an example of bureaucratic interference, specifically with the Global Public Health Intelligence Network (GPHIN), where it would seem that not much has changed, from a December 26, 2020 article by Grant Robertson for the Globe & Mail,

When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19

As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.

With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.

“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”

Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”

It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.

Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.

By late February [2020], Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.

“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”

China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”

It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.

But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.

The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.

However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.

The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July [2020], are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.

Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.

Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.

Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.

If you have time, do take a look at Robertson’s December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands while taking trips to attend meetings (in desirable locations) for which they had no significant or useful input.

The Phoenix and GPHIN debacles bear a resemblance in that senior bureaucrats took over and in a state of blissful ignorance made a series of disastrous decisions bolstered by politicians who seem to neither understand nor care much about the outcomes.

If you think I’m being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton’s 2020 year-end interview with Prime Minister Trudeau (Note: There are some commercials) and pay special attention to Trudeau’s answer to the first question,

Responsible AI, eh?

Based on the massive mishandling of the Phoenix Pay System implementation, where top bureaucrats did not follow basic and well established information services procedures, and the mismanagement of the Global Public Health Intelligence Network by top level bureaucrats, I’m not sure I have a lot of confidence in any Canadian government claims about a responsible approach to using artificial intelligence.

Unfortunately, it doesn’t matter as implementation is most likely already taking place here in Canada.

Enough with the pessimism. I feel it’s necessary to end this on a mildly positive note. Hurray to the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those people striving to make true the principles of ‘Peace, Order, and Good Government’, the bedrock principles of the Canadian Parliament.

A lot of mistakes have been made but we also do make a lot of good decisions.

A Vancouver (Canada) connection to the Pfizer COVID-19 vaccine

Canada’s NanoMedicines Innovation Network (NMIN) must have been excited over the COVID-19 vaccine news (Pfizer Nov. 9, 2020 news release) since it’s a Canadian company (Acuitas Therapeutics) that is providing the means of delivering the vaccine once it enters the body.

Here’s the company’s president and CEO [chief executive officer], Dr. Thomas Madden explaining his company’s delivery system (from Acuitas’ news and events webpage),

For anyone who might find a textual description about the vaccine helpful, I have a Nov. 9, 2020 article by Adele Peters for Fast Company,

… a handful of small biotech companies began scrambling to develop vaccines using an as-yet-unproven technology platform that relies on something called messenger RNA [ribonucleic acid], usually shortened to mRNA …

Like other vaccines, mRNA vaccines work by training the immune system to recognize a threat like a virus and begin producing antibodies to protect itself. But while traditional vaccines often use inactivated doses of the organisms that cause disease, mRNA vaccines are designed to make the body produce those proteins itself. Messenger RNA—a molecule that contains instructions for cells to make proteins—is injected into cells. In the case of COVID-19, mRNA vaccines provide instructions for cells to start producing the “spike” protein of the new coronavirus, the protein that helps the virus get into cells. On its own, the spike protein isn’t harmful. But it triggers the immune system to begin a defensive response. As Bill Gates, who has supported companies like Moderna and BioNTech through the Gates Foundation, has described it, “you essentially turn your body into its own manufacturing unit.”

Amy Judd’s Nov. 9, 2020 article for Global News online explains (or you can just take another look at the video to refresh your memory) how the Acuitas technology fits into the vaccine picture,

Vancouver-based Acuitas Therapeutics, a biotechnology company, is playing a key role through a technology known as lipid nanoparticles, which deliver messenger RNA into cells.

“The technology we provide to our partners is lipid nanoparticles and BioNTech and Pfizer are developing a vaccine that’s using a messenger RNA that tells our cells how to make a protein that’s actually found in the COVID-19 virus,” Dr. Thomas Madden, president and CEO of Acuitas Therapeutics, told Global News Monday [Nov. 9, 2020].

“But the messenger RNA can’t work by itself, it needs a delivery technology to protect this after it’s administered and then to carry it into the cells where it can be expressed and give rise to an immune response.”

Madden said they like to think of the lipid nanoparticles as protective wrapping around a fragile glass ornament [emphasis mine] being shipped to your house online. That protective wrapping would then make sure the ornament made it to your house, through your front door, then unwrap itself and leave in your hallway, ready for you to come and grab it when you came home.

Acuitas Therapeutics employs 29 people and Madden said he believes everyone is feeling very proud of their work.

“Not many people are aware of the history of this technology and the fact that it originated in Vancouver,” he added.

“Dr. Pieter Cullis was one of the key scientists who brought together a team to develop this technology many, many years ago. UBC and Vancouver and companies associated with those scientists have been at the global centre of this technology for many years now.

“I think we’ve been looking for a light at the end of the tunnel for quite some time. I think everybody has been hoping that a vaccine would be able to provide the protection we need to move out of our current situation and I think this is now a confirmation that this hope wasn’t misplaced.”

Nanomedicine in Vancouver

For anyone who’s curious about the Canadian nanomedicine scene, you can find out more about it on Canada’s NanoMedicines Innovation Network (NMIN) website. They recently held a virtual event (Vancouver Nanomedicine Day) on Sept. 17, 2020 (see my Sept. 11, 2020 posting for details), which featured a presentation about Acuitas’ technology.

Happily, the organizers have posted videos for most of the sessions. Dr. Ying Tam of Acuitas made this presentation (about 22 mins. running time) “A Novel Vaccine Approach Using Messenger RNA‐Lipid Nanoparticles: Preclinical and Clinical Perspectives.” If you’re interested in that video or any of the others go to the NanoMedicines Innovation Network’s Nanomedicine Day 2020 webpage.

Acuitas Therapeutics can be found here.

City University of Hong Kong (CityU) and its anti-bacterial graphene face masks

This looks like interesting work and I think the integration of visual images and embedded video in the news release (on the university website) is particularly well done. I won’t be including all the graphical information here as my focus is the text.

A Sept. 10, 2020 City University of Hong Kong (CityU) press release (also on EurekAlert) announces a greener, more effective face mask,

Face masks have become an important tool in fighting against the COVID-19 pandemic. However, improper use or disposal of masks may lead to “secondary transmission”. A research team from City University of Hong Kong (CityU) has successfully produced graphene masks with an anti-bacterial efficiency of 80%, which can be enhanced to almost 100% with exposure to sunlight for around 10 minutes. Initial tests also showed very promising results in the deactivation of two species of coronaviruses. The graphene masks are easily produced at low cost, and can help to resolve the problems of sourcing raw materials and disposing of non-biodegradable masks.

The research is conducted by Dr Ye Ruquan, Assistant Professor from CityU’s Department of Chemistry, in collaboration with other researchers. The findings were published in the scientific journal ACS Nano, titled “Self-Reporting and Photothermally Enhanced Rapid Bacterial Killing on a Laser-Induced Graphene Mask“.

Commonly used surgical masks are not anti-bacterial. This may lead to the risk of secondary transmission of bacterial infection when people touch the contaminated surfaces of the used masks or discard them improperly. Moreover, the melt-blown fabrics used as a bacterial filter poses an impact on the environment as they are difficult to decompose. Therefore, scientists have been looking for alternative materials to make masks.

Converting other materials into graphene by laser

Dr Ye has been studying the use of laser-induced graphene [emphasis mine] in developing sustainable energy. When he was studying PhD degree at Rice University several years ago, the research team he participated in and led by his supervisor discovered an easy way to produce graphene. They found that direct writing on carbon-containing polyimide films (a polymeric plastic material with high thermal stability) using a commercial CO2 infrared laser system can generate 3D porous graphene. The laser changes the structure of the raw material and hence generates graphene. That’s why it is named laser-induced graphene.

Graphene is known for its anti-bacterial properties, so as early as last September, before the outbreak of COVID-19, producing outperforming masks with laser-induced graphene already came across Dr Ye’s mind. He then kick-started the study in collaboration with researchers from the Hong Kong University of Science and Technology (HKUST), Nankai University, and other organisations.

Excellent anti-bacterial efficiency

The research team tested their laser-induced graphene with E. coli, and it achieved high anti-bacterial efficiency of about 82%. In comparison, the anti-bacterial efficiency of activated carbon fibre and melt-blown fabrics, both commonly-used materials in masks, were only 2% and 9% respectively. Experiment results also showed that over 90% of the E. coli deposited on them remained alive even after 8 hours, while most of the E. coli deposited on the graphene surface were dead after 8 hours. Moreover, the laser-induced graphene showed a superior anti-bacterial capacity for aerosolised bacteria.

Dr Ye said that more research on the exact mechanism of graphene’s bacteria-killing property is needed. But he believed it might be related to the damage of bacterial cell membranes by graphene’s sharp edge. And the bacteria may be killed by dehydration induced by the hydrophobic (water-repelling) property of graphene.

Previous studies suggested that COVID-19 would lose its infectivity at high temperatures. So the team carried out experiments to test if the graphene’s photothermal effect (producing heat after absorbing light) can enhance the anti-bacterial effect. The results showed that the anti-bacterial efficiency of the graphene material could be improved to 99.998% within 10 minutes under sunlight, while activated carbon fibre and melt-blown fabrics only showed an efficiency of 67% and 85% respectively.

The team is currently working with laboratories in mainland China to test the graphene material with two species of human coronaviruses. Initial tests showed that it inactivated over 90% of the virus in five minutes and almost 100% in 10 minutes under sunlight. The team plans to conduct testings with the COVID-19 virus later.

Their next step is to further enhance the anti-virus efficiency and develop a reusable strategy for the mask. They hope to release it to the market shortly after designing an optimal structure for the mask and obtaining the certifications.

Dr Ye described the production of laser-induced graphene as a “green technique”. All carbon-containing materials, such as cellulose or paper, can be converted into graphene using this technique. And the conversion can be carried out under ambient conditions without using chemicals other than the raw materials, nor causing pollution. And the energy consumption is low.

“Laser-induced graphene masks are reusable. If biomaterials are used for producing graphene, it can help to resolve the problem of sourcing raw material for masks. And it can lessen the environmental impact caused by the non-biodegradable disposable masks,” he added.

Dr Ye pointed out that producing laser-induced graphene is easy. Within just one and a half minutes, an area of 100 cm² can be converted into graphene as the outer or inner layer of the mask. Depending on the raw materials for producing the graphene, the price of the laser-induced graphene mask is expected to be between that of surgical mask and N95 mask. He added that by adjusting laser power, the size of the pores of the graphene material can be modified so that the breathability would be similar to surgical masks.

A new way to check the condition of the mask

To facilitate users to check whether graphene masks are still in good condition after being used for a period of time, the team fabricated a hygroelectric generator. It is powered by electricity generated from the moisture in human breath. By measuring the change in the moisture-induced voltage when the user breathes through a graphene mask, it provides an indicator of the condition of the mask. Experiment results showed that the more the bacteria and atmospheric particles accumulated on the surface of the mask, the lower the voltage resulted. “The standard of how frequently a mask should be changed is better to be decided by the professionals. Yet, this method we used may serve as a reference,” suggested Dr Ye.

Laser-induced graphene (LIG), Rice University, and Dr. Ye were mentioned here in a May 9, 2018 posting titled: Do you want that coffee with some graphene on toast?

Back to the latest research, read the caption carefully,

Research shows that over 90% of the E. coli deposited on activated carbon fibre (fig c and d) and melt-blown fabrics (fig e and f) remained alive even after 8 hours. In contrast, most of the E. coli deposited on the graphene surface (fig a and b) were dead. (Photo source: DOI number: 10.1021/acsnano.0c05330)

Here’s a link to and a citation for the paper,

Self-Reporting and Photothermally Enhanced Rapid Bacterial Killing on a Laser-Induced Graphene Mask by Libei Huang, Siyu Xu, Zhaoyu Wang, Ke Xue, Jianjun Su, Yun Song, Sijie Chen, Chunlei Zhu, Ben Zhong Tang, and Ruquan Ye. ACS Nano 2020, 14, 9, 12045–12053 DOI: https://doi.org/10.1021/acsnano.0c05330 Publication Date:August 11, 2020 Copyright © 2020 American Chemical Society

This paper is behind a paywall.

D-Wave’s new Advantage quantum computer

Thanks to Bob Yirka’s September 30, 2020 article for phys.org, there’s an announcement about D-Wave Systems’ latest quantum computer and an explanation of how D-Wave’s quantum computer differs from other quantum computers. Here’s the explanation (Note: Links have been removed),

Over the past several years, several companies have dedicated resources to the development of a true quantum computer that can tackle problems conventional computers cannot handle. Progress on developing such computers has been slow, however, especially when compared with the early development of the conventional computer. As part of the research effort, companies have taken different approaches. Google and IBM, for example, are working on gate-model quantum computer technology, in which qubits are modified as an algorithm is executed. D-Wave, in sharp contrast, has been focused on developing so-called annealer technology, in which qubits are cooled during execution of an algorithm, which allows for passively changing their value.

Comparing the two is next to impossible because of their functional differences. Thus, using 5,000 qubits in the Advantage system does not necessarily mean that it is any more useful than the 100-qubit systems currently being tested by IBM or Google. Still, the announcement suggests that businesses are ready to start taking advantage of the increased capabilities of quantum systems. D-Wave notes that several customers are already using their system for a wide range of applications. Menten AI, for example, has used the system to design new proteins; grocery chain Save-On-Foods has been using it to optimize business operations; Accenture has been using it to develop business applications; Volkswagen has used the system to develop a more efficient car painting system.

Here’s the company’s Sept. 29, 2020 video announcement,

For those who might like some text, there’s a Sept. 29, 2020 D-Wave Systems press release (Note: Links have been removed; this is long),

D-Wave Systems Inc., the leader in quantum computing systems, software, and services, today [Sept. 29, 2020] announced the general availability of its next-generation quantum computing platform, incorporating new hardware, software, and tools to enable and accelerate the delivery of in-production quantum computing applications. Available today in the Leap™ quantum cloud service, the platform includes the Advantage™ quantum system, with more than 5000 qubits and 15-way qubit connectivity, in addition to an expanded hybrid solver service that can run problems with up to one million variables. The combination of the computing power of Advantage and the scale to address real-world problems with the hybrid solver service in Leap enables businesses to run performant, real-time, hybrid quantum applications for the first time.

As part of its commitment to enabling businesses to build in-production quantum applications, the company announced D-Wave Launch™, a jump-start program for businesses who want to get started building hybrid quantum applications today but may need additional support. Bringing together a team of applications experts and a robust partner community, the D-Wave Launch program provides support to help identify the best applications and to translate businesses’ problems into hybrid quantum applications. The extra support helps customers accelerate designing, building, and running their most important and complex applications, while delivering quantum acceleration and performance.

The company also announced a new hybrid solver. The discrete quadratic model (DQM) solver gives developers and businesses the ability to apply the benefits of hybrid quantum computing to new problem classes. Instead of accepting problems with only binary variables (0 or 1), the DQM solver uses other variable sets (e.g. integers from 1 to 500, or red, yellow, and blue), expanding the types of problems that can run on the quantum computer. The DQM solver will be generally available on October 8 [2020].
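
Interrupting the press release for a moment: to make the “discrete rather than binary variables” point concrete, here is a minimal sketch using dimod, D-Wave’s open-source modelling library from the Ocean SDK. The map-colouring example, region names, and penalty weight are my own toy choices, and submitting the model to the Leap hybrid DQM solver requires a Leap account, so that call is shown only as a comment.

```python
# A toy discrete quadratic model (DQM): colour four regions with three colours
# so that neighbouring regions differ. Built with D-Wave's open-source dimod
# library; the regions, edges, and penalty weight are illustrative only.
import dimod

colours = ["red", "yellow", "blue"]
edges = [("BC", "AB"), ("AB", "SK"), ("SK", "MB")]   # toy adjacency list
regions = sorted({r for edge in edges for r in edge})

dqm = dimod.DiscreteQuadraticModel()
for region in regions:
    dqm.add_variable(len(colours), label=region)     # one 3-case variable per region

for u, v in edges:
    # Penalise neighbouring regions that choose the same colour (same case index).
    dqm.set_quadratic(u, v, {(c, c): 1.0 for c in range(len(colours))})

# Solving on D-Wave's hybrid DQM solver needs a Leap account/API token:
# from dwave.system import LeapHybridDQMSampler
# best = LeapHybridDQMSampler().sample_dqm(dqm).first.sample  # {region: colour index}
```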

With support for new solvers and larger problem sizes backed by the Advantage system, customers and partners like Menten AI, Save-On-Foods, Accenture, and Volkswagen are building and running hybrid quantum applications that create solutions with business value today.

  • Protein design pioneer Menten AI has developed the first process using hybrid quantum programs to determine protein structure for de novo protein design with very encouraging results often outperforming classical solvers. Menten AI’s unique protein designs have been computationally validated, chemically synthesized, and are being advanced to live-virus testing against COVID-19.
  • Western Canadian grocery retailer Save-On-Foods is using hybrid quantum algorithms to bring grocery optimization solutions to their business, with pilot tests underway in-store. The company has been able to reduce the time an important optimization task takes from 25 hours to a mere 2 minutes of calculations each week. Even more important than the reduction in time is the ability to optimize performance across and between a significant number of business parameters in a way that is challenging using traditional methods.
  • Accenture, a leading global professional services company, is exploring quantum, quantum-inspired, and hybrid solutions to develop applications across industries. Accenture recently conducted a series of business experiments with a banking client to pilot quantum applications for currency arbitrage, credit scoring, and trading optimization, successfully mapping computationally challenging business problems to quantum formulations, enabling quantum readiness.
  • Volkswagen, an early adopter of D-Wave’s annealing quantum computer, has expanded its quantum use cases with the hybrid solver service to build a paint shop scheduling application. The algorithm is designed to optimize the order in which cars are being painted. By using the hybrid solver service, the number of color switches will be reduced significantly, leading to performance improvements.

The Advantage quantum computer and the Leap quantum cloud service include:

  • New Topology: The topology in Advantage makes it the most connected of any commercial quantum system in the world. In the D-Wave 2000Q™ system, qubits may connect to 6 other qubits. In the new Advantage system, each qubit may connect to 15 other qubits. With two-and-a-half times more connectivity, Advantage enables the embedding of larger problems with fewer physical qubits compared to using the D-Wave 2000Q system. The D-Wave Ocean™ software development kit (SDK) includes tools for using the new topology. Information on the topology in Advantage can be found in this white paper, and a getting started video on how to use the new topology can be found here. [a rough embedding sketch using the Ocean SDK’s open-source tools appears after this list]
  • Increased Qubit Count: With more than 5000 qubits, Advantage more than doubles the qubit count of the D-Wave 2000Q system. More qubits and richer connectivity provide quantum programmers access to a larger, denser, and more powerful graph for building commercial quantum applications.
  • Greater Performance & Problem Size: With up to one million variables, the hybrid solver service in Leap allows businesses to run large-scale, business-critical problems. This, coupled with the new topology and more than 5000 qubits in the Advantage system, expands the complexity and more than doubles the size of problems that can run directly on the quantum processing unit (QPU). In fact, the hybrid solver outperformed or matched the best of 27 classical optimization solvers on 87% of 45 application-relevant inputs tested in MQLib. Additionally, greater connectivity of the QPU allows for more compact embeddings of complex problems. Advantage can find optimal solutions 10 to 30 times faster in some cases, and can find better quality solutions up to 64% of the time, when compared to the D-Wave 2000Q LN QPU.
  • Expansion of Hybrid Software & Tools in Leap: Further investments in the hybrid solver service, new solver classes, ease-of-use, automation, and new tools provide an even more powerful hybrid rapid development environment in Python for business-scale problems.
  • Flexible Access: Advantage, the expanded hybrid solver service, and the upcoming DQM solver are available in the Leap quantum cloud service. All current Leap customers get immediate access with no additional charge, and new customers will benefit from all the new and existing capabilities in Leap. This means that developers and businesses can get started today building in-production hybrid quantum applications. Flexible purchase plans allow developers and forward-thinking businesses to access the D-Wave quantum system in the way that works for them and their business. 
  • Ongoing Releases: D-Wave continues to bring innovations to market with additional hybrid solvers, QPUs, and software updates through the cloud. Interested users and customers can get started today with Advantage and the hybrid solver service, and will benefit from new components of the platform through Leap as they become available.
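
Stepping outside the press release: here is the rough, unofficial sketch promised above of what 15-way connectivity buys you in practice. It embeds a small fully connected problem into a patch of the Advantage-style working graph (named “pegasus” in the Ocean SDK) and the 2000Q-style graph (“chimera”), then compares how many physical qubits each embedding consumes. The problem size and graph sizes are arbitrary, the embedding heuristic is not deterministic, and this is not D-Wave’s own benchmark.

```python
# Compare how many physical qubits a heuristic embedding uses on the two
# topologies, using D-Wave's open-source dwave-networkx and minorminer packages.
import networkx as nx
import dwave_networkx as dnx
import minorminer

problem = nx.complete_graph(12)          # 12 fully connected logical variables
targets = {"Pegasus (Advantage-style)": dnx.pegasus_graph(4),
           "Chimera (2000Q-style)": dnx.chimera_graph(8)}

for name, graph in targets.items():
    embedding = minorminer.find_embedding(problem.edges(), graph.edges())
    used = sum(len(chain) for chain in embedding.values()) if embedding else None
    print(f"{name}: physical qubits used = {used}")
```

On most runs the denser Pegasus-style graph needs noticeably shorter chains, which is the “fewer physical qubits per logical variable” claim in the bullet above, seen from the programmer’s side.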

“Today’s general availability of Advantage delivers the first quantum system built specifically for business, and marks the expansion into production scale commercial applications and new problem types with our hybrid solver services. In combination with our new jump-start program to get customers started, this launch continues what we’ve known at D-Wave for a long time: it’s not about hype, it’s about scaling, and delivering systems that provide real business value on real business applications,” said Alan Baratz, CEO, D-Wave. “We also continue to invest in the science of building quantum systems. Advantage was completely re-engineered from the ground up. We’ll take what we’ve learned about connectivity and scale and continue to push the limits of innovation for the next generations of our quantum computers. I’m incredibly proud of the team that has brought us here and the customers and partners who have collaborated with us to build hundreds of early applications and who now are putting applications into production.”

“We are using quantum to design proteins today. Using hybrid quantum applications, we’re able to solve astronomical protein design problems that help us create new protein structures,” said Hans Melo, Co-founder and CEO, Menten AI. “We’ve seen extremely encouraging results with hybrid quantum procedures often finding better solutions than competing classical solvers for de novo protein design. This means we can create better proteins and ultimately enable new drug discoveries.”

“At Save-On-Foods, we have been committed to bringing innovation to our customers for more than 105 years. To that end, we are always looking for new and creative ways to solve problems, especially in an environment that has gotten increasingly complex,” said Andrew Donaher, Vice President, Digital & Analytics at Save-On-Foods. “We’re new to quantum computing, and in a short period of time, we have seen excellent early results. In fact, the early results we see with Advantage and the hybrid solver service from D-Wave are encouraging enough that our goal is to turn our pilot into an in-production business application. Quantum is emerging as a potential competitive edge for our business.”

“Accenture is committed to helping our clients prepare for the arrival of mainstream quantum computing by exploring relevant use cases and conducting business experiments now,” said Marc Carrel-Billiard, Senior Managing Director and Technology Innovation Lead at Accenture. “We’ve been collaborating with D-Wave for several years and with early access to the Advantage system and hybrid solver service we’ve seen performance improvements and advancements in the platform that are important steps for helping to make quantum a reality for clients across industries, creating new sources of competitive advantage.”

“Embracing quantum computing is nothing new for Volkswagen. We were the first to run a hybrid quantum application in production in Lisbon last November with our bus routing application,” said Florian Neukart, Director of Advanced Technologies at Volkswagen Group of America. “At Volkswagen, we are focusing on building up a deep understanding of meaningful applications of quantum computing in a corporate context. The D-Wave system gives us the opportunity to address optimization tasks with a large number of variables at an impressive speed. With this we are taking a step further towards quantum applications that will be suitable for everyday business use.”

I found the description of D-Wave’s customers and how they’re using quantum computing to be quite interesting. For anyone curious about D-Wave Systems, you can find out more here. BTW, the company is located in metro Vancouver (Canada).

Gold nanoparticles make a new promise: a non-invasive COVID-19 breathalyser

I believe the swab they stick up your nose to test for COVID-19 is 10 inches long, so it seems to me that ‘discomfort’ or ‘unpleasant’ are not the words that best describe the testing experience.

Hopefully, no one will have to hunt for adequate vocabulary to describe this new COVID-19 test, assuming that future trials are successful and the technology can be put into production. From an August 19, 2020 news item on Nanowerk,

Few people who have undergone nasopharyngeal swabs for coronavirus testing would describe it as a pleasant experience. The procedure involves sticking a long swab up the nose to collect a sample from the back of the nose and throat, which is then analyzed for SARS-CoV-2 RNA [ribonucleic acid] by the reverse-transcription polymerase chain reaction (RT-PCR).

Now, researchers reporting in [American Chemical Society] ACS Nano (“Multiplexed Nanomaterial-Based Sensor Array for Detection of COVID-19 in Exhaled Breath”) have developed a prototype device that non-invasively detected COVID-19 in the exhaled breath of infected patients.

An August 19, 2020 ACS news release (also received via email and on EurekAlert), which originated the news item, provides more technical details,

In addition to being uncomfortable, the current gold standard for COVID-19 testing requires RT-PCR, a time-consuming laboratory procedure. Because of backlogs, obtaining a result can take several days. To reduce transmission and mortality rates, healthcare systems need quick, inexpensive and easy-to-use tests. Hossam Haick, Hu Liu, Yueyin Pan and colleagues wanted to develop a nanomaterial-based sensor that could detect COVID-19 in exhaled breath, similar to a breathalyzer test for alcohol intoxication. Previous studies have shown that viruses and the cells they infect emit volatile organic compounds (VOCs) that can be exhaled in the breath.

The researchers made an array of gold nanoparticles linked to molecules that are sensitive to various VOCs. When VOCs interact with the molecules on a nanoparticle, the electrical resistance changes. The researchers trained the sensor to detect COVID-19 by using machine learning to compare the pattern of electrical resistance signals obtained from the breath of 49 confirmed COVID-19 patients with those from 58 healthy controls and 33 non-COVID lung infection patients in Wuhan, China. Each study participant blew into the device for 2–3 seconds from a distance of 1–2 cm. Once machine learning identified a potential COVID-19 signature, the team tested the accuracy of the device on a subset of participants. In the test set, the device showed 76% accuracy in distinguishing COVID-19 cases from controls and 95% accuracy in discriminating COVID-19 cases from lung infections. The sensor could also distinguish, with 88% accuracy, between sick and recovered COVID-19 patients. Although the test needs to be validated in more patients, it could be useful for screening large populations to determine which individuals need further testing, the researchers say.
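
Pausing the news release: for anyone curious about what “training the sensor with machine learning” looks like in practice, here is a bare-bones stand-in for that kind of workflow, not the authors’ actual pipeline. It assumes each breath sample has already been reduced to a fixed-length vector of resistance-change features from the nanoparticle array; the array size, the synthetic data, and the choice of classifier are all my own placeholders.

```python
# A generic classification sketch standing in for the workflow described above.
# The data here are synthetic, so the printed accuracy is meaningless; with real
# sensor features the same steps would yield the kind of held-out accuracy
# figures quoted in the release.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_sensors = 140, 8   # 140 = 49 + 58 + 33 participants; 8 sensors is a guess
X = rng.normal(size=(n_samples, n_sensors))   # synthetic resistance-change features
y = rng.integers(0, 3, size=n_samples)        # 0 = control, 1 = COVID-19, 2 = other lung infection

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```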

The authors acknowledge funding from the Technion-Israel Institute of Technology.

Here’s a link to and a citation for the paper,

Multiplexed Nanomaterial-Based Sensor Array for Detection of COVID-19 in Exhaled Breath by Benjie Shan, Yoav Y Broza, Wenjuan Li, Yong Wang, Sihan Wu, Zhengzheng Liu, Jiong Wang, Shuyu Gui, Lin Wang, Zhihong Zhang, Wei Liu, Shoubing Zhou, Wei Jin, Qianyu Zhang, Dandan Hu, Lin Lin, Qiujun Zhang, Wenyu Li, Jinquan Wang, Hu Liu, Yueyin Pan, and Hossam Haick. ACS Nano 2020, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acsnano.0c05657 Publication Date:August 18, 2020 Copyright © 2020 American Chemical Society

This paper is behind a paywall.

Science fiction, interconnectedness (globality), and pandemics

Mayurika Chakravorty of Carleton University’s Department of English in Ottawa (Ontario, Canada) points out that the latest pandemic (COVID-19) illustrates how everything is connected (interconnectedness or globality), and she makes that case by way of science fiction in her July 19, 2020 essay on The Conversation (h/t July 20, 2020 item on phys.org). Note: Links have been removed,

In the early days of the coronavirus outbreak, a theory widely shared on social media suggested that a science fiction text, Dean Koontz’s 1981 science fiction novel, The Eyes of Darkness, had predicted the coronavirus pandemic with uncanny precision. COVID-19 has held the entire world hostage, producing a resemblance to the post-apocalyptic world depicted in many science fiction texts. Canadian author Margaret Atwood’s classic 2003 novel Oryx and Crake refers to a time when “there was a lot of dismay out there, and not enough ambulances” — a prediction of our current predicament.

However, the connection between science fiction and pandemics runs deeper. They are linked by a perception of globality, what sociologist Roland Robertson defines as “the consciousness of the world as a whole.”

Chakravorty goes on to make a compelling case (from her July 19, 2020 essay; Note: Links have been removed),

In his 1992 survey of the history of telecommunications, How the World Was One, Arthur C. Clarke alludes to the famed historian Alfred Toynbee’s lecture entitled “The Unification of the World.” Delivered at the University of London in 1947, Toynbee envisions a “single planetary society” and notes how “despite all the linguistic, religious and cultural barriers that still sunder nations and divide them into yet smaller tribes, the unification of the world has passed the point of no return.”

Science fiction writers have, indeed, always embraced globality. In interplanetary texts, humans of all nations, races and genders have to come together as one people in the face of alien invasions. Facing an interplanetary encounter, bellicose nations have to reluctantly eschew political rivalries and collaborate on a global scale, as in Denis Villeneuve’s 2018 film, Arrival.

Globality is central to science fiction. To be identified as an Earthling, one has to transcend the local and the national, and sometimes, even the global, by embracing a larger planetary consciousness.

In The Left Hand of Darkness, Ursula K. Le Guin conceptualizes the Ekumen, which comprises 83 habitable planets. The idea of the Ekumen was borrowed from Le Guin’s father, the noted cultural anthropologist Arthur L. Kroeber. Kroeber had, in a 1945 paper, introduced the concept (from Greek oikoumene) to represent a “historic culture aggregate.” Originally, Kroeber used oikoumene to refer to the “entire inhabited world,” as he traced back human culture to one single people. Le Guin then adopted this idea of a common origin of shared humanity in her novel.

…

Regarding Canada’s response to the crisis [COVID-19], researchers have noted both the immorality and futility of a nationalistic “Canada First” approach.

If you have time, I recommend reading Chakravorty’s July 19, 2020 essay in its entirety.

Gold nanoparticles could help detect the presence of COVID-19 in ten minutes

If this works out, it would make testing for COVID-19 an infinitely easier task. From a May 29, 2020 news item on phys.org,

Scientists from the University of Maryland School of Medicine (UMSOM) developed an experimental diagnostic test for COVID-19 that can visually detect the presence of the virus in 10 minutes. It uses a simple assay containing plasmonic gold nanoparticles to detect a color change when the virus is present. The test does not require the use of any advanced laboratory techniques, such as those commonly used to amplify DNA, for analysis. The authors published their work last week [May 21, 2020] in the American Chemical Society’s nanotechnology journal ACS Nano.

“Based on our preliminary results, we believe this promising new test may detect RNA [ribonucleic acid] material from the virus as early as the first day of infection. Additional studies are needed, however, to confirm whether this is indeed the case,” said study leader Dipanjan Pan, PhD, Professor of Diagnostic Radiology and Nuclear Medicine and Pediatrics at the UMSOM.

Caption: A nasal swab containing a test sample is mixed with a simple lab test. It contains a liquid mixed with gold nanoparticles attached to a molecule that binds to the novel coronavirus. If the virus is present, the gold nanoparticles turn the solution a deep blue color (bottom of the tube) and a precipitate forms. If it is not present, the solution retains its original purple color. Credit: University of Maryland School of Medicine

A May 28, 2020 University of Maryland news release (also on EurekAlert), which originated the news item, provides more detail,

Once a nasal swab or saliva sample is obtained from a patient, the RNA is extracted from the sample via a simple process that takes about 10 minutes. The test uses a highly specific molecule attached to the gold nanoparticles to detect a particular protein. This protein is part of the genetic sequence that is unique to the novel coronavirus. When the biosensor binds to the virus’s gene sequence, the gold nanoparticles respond by turning the liquid reagent from purple to blue.

“The accuracy of any COVID-19 test is based on being able to reliably detect any virus. This means it does not give a false negative result if the virus actually is present, nor a false positive result if the virus is not present,” said Dr. Pan. “Many of the diagnostic tests currently on the market cannot detect the virus until several days after infection. For this reason, they have a significant rate of false negative results.”

Dr. Pan created a company called VitruVian Bio to develop the test for commercial application. He plans to have a pre-submission meeting with the U.S. Food and Drug Administration (FDA) within the next month to discuss requirements for getting an emergency use authorization for the test. New FDA policy allows for the marketing of COVID-19 tests without requiring them to go through the usual approval or clearance process. These tests do, however, need to meet certain validation testing requirements to ensure that they provide reliable results.

“This RNA-based test appears to be very promising in terms of detecting the virus. The innovative approach provides results without the need for a sophisticated laboratory facility,” said study co-author Matthew Frieman, PhD, Associate Professor of Microbiology and Immunology at UMSOM.

Although more clinical studies are warranted, this test could be far less expensive to produce and process than a standard COVID-19 lab test; it does not require laboratory equipment or trained personnel to run the test and analyze the results. If this new test meets FDA expectations, it could potentially be used in daycare centers, nursing homes, college campuses, and work places as a surveillance technique to monitor any resurgence of infections.

In Dr. Pan’s laboratory, research scientist Parikshit Moitra, PhD, and UMSOM research fellow Maha Alafeef conducted the studies along with research fellow Ketan Dighe from UMBC.

Dr. Pan holds a joint appointment with the College of Engineering at the University of Maryland Baltimore County and is also a faculty member of the Center for Blood Oxygen Transport and Hemostasis (CBOTH).

“This is another example of how our faculty is driving innovation to fulfill a vital need to expand the capacity of COVID-19 testing,” said Dean E. Albert Reece, MD, PhD, MBA, who is also Executive Vice President for Medical Affairs, UM Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor, University of Maryland School of Medicine. “Our nation will be relying on inexpensive, rapid tests that can be dispersed widely and used often until we have effective vaccines against this pandemic.”

Here’s a link to and a citation for the paper,

Selective Naked-Eye Detection of SARS-CoV-2 Mediated by N Gene Targeted Antisense Oligonucleotide Capped Plasmonic Nanoparticles by Parikshit Moitra, Maha Alafeef, Ketan Dighe, Matthew B. Frieman, and Dipanjan Pan. ACS Nano 2020, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acsnano.0c03822 Publication Date:May 21, 2020 Copyright © 2020 American Chemical Society

This paper appears to be open access.

I tried to find Dr. Pan’s company, VitruVian Bio, and instead found a business with an almost identical name, Vitruvian Biomedical. That company does not include Dr. Pan on its management team list, its focus is on Alzheimer’s disease, and there is no mention of the COVID-19 test anywhere on the Vitruvian Biomedical website.

US Food and Drug Administration (FDA) gives first authorization for CRISPR (clustered regularly interspaced short palindromic repeats) use in COVID-19 crisis

Clustered regularly interspaced short palindromic repeats (CRISPR) gene editing has been largely confined to laboratory use or tested in agricultural trials. I believe that is true worldwide, excepting the CRISPR twin scandal. (There are numerous postings about the CRISPR twins here, including a Nov. 28, 2018 post, a May 17, 2019 post, and a June 20, 2019 post. Update: It was reported (3rd. para.) in December 2019 that He Jiankui had been sentenced to three years in jail.)

Connie Lin, in a May 7, 2020 article for Fast Company, reports on this surprising decision by the US Food and Drug Administration (FDA) (Note: A link has been removed),

The U.S. Food and Drug Administration has granted Emergency Use Authorization to a COVID-19 test that uses controversial gene-editing technology CRISPR.

This marks the first time CRISPR has been authorized by the FDA, although only for the purpose of detecting the coronavirus, and not for its far more contentious applications. The new test kit, developed by Cambridge, Massachusetts-based Sherlock Biosciences, will be deployed in laboratories certified to carry out high-complexity procedures and is “rapid,” returning results in about an hour as opposed to those that rely on the standard polymerase chain reaction method, which typically requires six hours.

The announcement was made in the FDA’s Coronavirus (COVID-19) Update: May 7, 2020 Daily Roundup (4th item in the bulleted list). Or, you can read the May 6, 2020 letter (PDF) sent to John Vozella of Sherlock Biosciences by the FDA.

As well, there’s the May 7, 2020 Sherlock BioSciences news release (the most informative of the lot),

Sherlock Biosciences, an Engineering Biology company dedicated to making diagnostic testing better, faster and more affordable, today announced the company has received Emergency Use Authorization (EUA) from the U.S. Food and Drug Administration (FDA) for its Sherlock™ CRISPR SARS-CoV-2 kit for the detection of the virus that causes COVID-19, providing results in approximately one hour.

“While it has only been a little over a year since the launch of Sherlock Biosciences, today we have made history with the very first FDA-authorized use of CRISPR technology, which will be used to rapidly identify the virus that causes COVID-19,” said Rahul Dhanda, co-founder, president and CEO of Sherlock Biosciences. “We are committed to providing this initial wave of testing kits to physicians, laboratory experts and researchers worldwide to enable them to assist frontline workers leading the charge against this pandemic.”

The Sherlock™ CRISPR SARS-CoV-2 test kit is designed for use in laboratories certified under the Clinical Laboratory Improvement Amendments of 1988 (CLIA), 42 U.S.C. §263a, to perform high complexity tests. Based on the SHERLOCK method, which stands for Specific High-sensitivity Enzymatic Reporter unLOCKing, the kit works by programming a CRISPR molecule to detect the presence of a specific genetic signature – in this case, the genetic signature for SARS-CoV-2 – in a nasal swab, nasopharyngeal swab, oropharyngeal swab or bronchoalveolar lavage (BAL) specimen. When the signature is found, the CRISPR enzyme is activated and releases a detectable signal. In addition to SHERLOCK, the company is also developing its INSPECTR™ platform to create an instrument-free, handheld test – similar to that of an at-home pregnancy test – that utilizes Sherlock Biosciences’ Synthetic Biology platform to provide rapid detection of a genetic match of the SARS-CoV-2 virus.
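
Stepping out of the news release briefly: the “programmed to detect a specific genetic signature” step can be caricatured in a few lines of Python. The sequences below are invented for illustration; they are not the SARS-CoV-2 N-gene signature or the kit’s actual guide, and the real assay’s specificity comes from Cas enzyme biochemistry rather than string matching.

```python
# A purely illustrative stand-in for "programming" a SHERLOCK-style assay:
# pick a target signature, then report whether it occurs in a sample sequence.
# In the wet lab, a match activates the Cas enzyme, which releases the signal.
def signature_present(sample_rna: str, target_signature: str) -> bool:
    return target_signature in sample_rna.upper()

target = "GGGAGCCUUGAAUACACCAAAA"                      # invented signature, not the real one
positive_sample = "AUUAAAGGUUU" + target + "CUCUUGUAG"  # invented patient read
negative_sample = "AUUAAAGGUUUCCAUGCUCUUGUAG"

print(signature_present(positive_sample, target))  # True  -> reporter signal released
print(signature_present(negative_sample, target))  # False -> no signal
```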

“When our lab collaborated with Dr. Feng Zhang’s team to develop SHERLOCK, we believed that this CRISPR-based diagnostic method would have a significant impact on global health,” said James J. Collins, co-founder and board member of Sherlock Biosciences and Termeer Professor of Medical Engineering and Science for MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering. “During what is a major healthcare crisis across the globe, we are heartened that the first FDA-authorized use of CRISPR will aid in the fight against this global COVID-19 pandemic.”

Access to rapid diagnostics is critical for combating this pandemic and is a primary focus for Sherlock Biosciences co-founder and board member, David R. Walt, Ph.D., who co-leads the Mass [Massachusetts] General Brigham Center for COVID Innovation.

“SHERLOCK enables rapid identification of a single alteration in a DNA or RNA sequence in a single molecule,” said Dr. Walt. “That precision, coupled with its capability to be deployed to multiplex over 100 targets or as a simple point-of-care system, will make it a critical addition to the arsenal of rapid diagnostics already being used to detect COVID-19.”

This development is particularly interesting since there was a major intellectual property dispute over CRISPR between the Broad Institute (a Harvard University and Massachusetts Institute of Technology [MIT] joint initiative) and the University of California at Berkeley (UC Berkeley). The Broad Institute mostly won the first round of the patent fight, as I noted in a March 15, 2017 post, but, as far as I’m aware, UC Berkeley is still disputing that decision.

In the period before receiving authorization, it appears that Sherlock Biosciences was doing a little public relations and ‘consciousness raising’ work. Here’s a sample from a May 5, 2020 article by Sharon Begley for STAT (Note: Links have been removed),

The revolutionary genetic technique better known for its potential to cure thousands of inherited diseases could also solve the challenge of Covid-19 diagnostic testing, scientists announced on Tuesday. A team headed by biologist Feng Zhang of the McGovern Institute at MIT and the Broad Institute has repurposed the genome-editing tool CRISPR into a test able to quickly detect as few as 100 coronavirus particles in a swab or saliva sample.

Crucially, the technique, dubbed a “one pot” protocol, works in a single test tube and does not require the many specialty chemicals, or reagents, whose shortage has hampered the rollout of widespread Covid-19 testing in the U.S. It takes about an hour to get results, requires minimal handling, and in preliminary studies has been highly accurate, Zhang told STAT. He and his colleagues, led by the McGovern’s Jonathan Gootenberg and Omar Abudayyeh, released the protocol on their STOPCovid.science website.

Because the test has not been approved by the Food and Drug Administration, it is only for research purposes for now. But minutes before speaking to STAT on Monday, Zhang and his colleagues were on a conference call with FDA officials about what they needed to do to receive an “emergency use authorization” that would allow clinical use of the test. The FDA has used EUAs to fast-track Covid-19 diagnostics as well as experimental therapies, including remdesivir, after less extensive testing than usually required.

For an EUA, the agency will require the scientists to validate the test, which they call STOPCovid, on dozens to hundreds of samples. Although “it is still early in the process,” Zhang said, he and his colleagues are confident enough in its accuracy that they are conferring with potential commercial partners who could turn the test into a cartridge-like device, similar to a pregnancy test, enabling Covid-19 testing at doctor offices and other point-of-care sites.

“It could potentially even be used at home or at workplaces,” Zhang said. “It’s inexpensive, does not require a lab, and can return results within an hour using a paper strip, not unlike a pregnancy test. This helps address the urgent need for widespread, accurate, inexpensive, and accessible Covid-19 testing.” Public health experts say the availability of such a test is one of the keys to safely reopening society, which will require widespread testing, and then tracing and possibly isolating the contacts of those who test positive.

If you have time, do read Begley’s article in full.

COVID-19 editorial (in response to Canadian Science Policy Centre call for submissions)

I successfully submitted an editorial to the Canadian Science Policy Centre (CSPC). You can find it and a host of others on the CSPC Editorial Series: Response to COVID-19 Pandemic and its Impacts webpage (scroll down under Policy Development) or in the CSPC Featured Editorial Series Volume 1, Issue 2, May 2020 PDF on pp. 31-2.

What I’ve posted here is the piece followed by attribution for the artwork used to illustrate my op-ed in the PDF version of essays and by links to all of my reference materials.

It can become overwhelming as one looks at the images of coffins laid out in various venues, listens to exhausted health care professionals, and sees body bags being loaded onto vans while reading stories about the people who have been hospitalized and/or have died.

In this sea of information, it’s easy to forget that COVID-19 is one in a long history of pandemics. For the sake of brevity, here’s a mostly complete roundup of the last 100 years. The H1N1 pandemic of 1918/19 resulted in either 17 million, 50 million, or 100 million deaths, depending on the source of information. The H2N2 pandemic of 1958/59 resulted in approximately 1.1 million deaths; the H3N2 pandemic of 1968/69 resulted in somewhere from 1 to 4 million deaths; and the H1N1pdm09 pandemic of 2009 resulted in roughly 150,000 to 575,000 deaths. The HIV/AIDS global pandemic or, depending on the agency, epidemic is ongoing. The estimate for HIV/AIDS-related deaths in 2018 alone was between 500,000 and 1.1 million.

It’s now clear that the 2019/20 pandemic will take upwards of 350,000 lives and, quite possibly, many more lives before it has run its course.

On the face of it, the numbers for COVID-19 would not seem to occasion the current massive attempt at physical isolation which ranges across the globe and within entire countries. There is no record of any such previous, more or less global effort. In the past, physical isolation seems to have been practiced on a more localized level.

We are told the current policy of ‘flattening the curve’ is an attempt to constrain the numbers so as to lighten the burden on the health care system, i.e., to lessen the number of people needing care at any one time and, in turn, the number of hospitalizations and deaths.

It’s an idea that can be traced back in more recent times to the 1918/19 pandemic (and stretches back to at least the 17th century when as a student Isaac Newton was sent home from Cambridge to self-isolate from the Great Plague of London).

During the 1918/19 pandemic, Philadelphia and St. Louis, in the US, had vastly different experiences. Ignoring advice from infectious disease experts, Philadelphia held a large public parade. People began sickening within two or three days and, ultimately, 16,000 died over six months. By contrast, St. Louis adopted social and physical isolation measures, flattening the curve and suffering 2,000 deaths. (That city too suffered greatly, but more slowly.)
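
For anyone who prefers to see ‘flattening the curve’ as arithmetic rather than history, here is a minimal SIR-model sketch in Python. The parameter values are illustrative only; they are not fitted to COVID-19 or to the 1918/19 figures above.

```python
# A minimal SIR-model sketch of flattening the curve: the same population and
# recovery rate, two different transmission rates. Parameter values are
# illustrative, not fitted to any real outbreak.
def peak_infected(beta, gamma=0.1, population=1_000_000, initially_infected=10, days=365):
    s, i = population - initially_infected, initially_infected
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s, i = s - new_infections, i + new_infections - new_recoveries
        peak = max(peak, i)
    return int(peak)

print("no distancing   (beta = 0.30):", peak_infected(0.30), "infected at the peak")
print("with distancing (beta = 0.15):", peak_infected(0.15), "infected at the peak")
```

The same number of people may eventually be exposed, but the lower transmission rate spreads the cases out, which is precisely the “lighten the burden on the health care system” argument.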

In 2019/20, many governments were slow to respond and many have been harshly criticized for their tardiness. Government leaders seem to have been following an older script, something more laissez-faire, something similar to the one we have followed with past pandemics.

We are breaking new ground by following a policy that is untested at this scale.

Viewed positively, the policy hints at a shift in how we view disease and death and hopes are that this heralds a more cohesive and integrated approach to all life on this planet. Viewed more negatively, it suggests an agenda of social control being enacted and promoted to varying degrees across the planet.

Regardless of your perspective, ‘flattening the curve’ seems to have been employed without any substantive consideration of collateral damage and unintended consequences.

We are beginning to understand some of the consequences. On April 5, 2020, UN Secretary-General Antonio Guterres expressed grave concern about a global surge in domestic violence. King’s College London and the Australian National University released a report on April 9, 2020 estimating that half a billion people around the world may be pushed into poverty because of these measures.

As well, access to water, which many of us take for granted, can be highly problematic. Homeless people, incarcerated people, indigenous peoples and others note that washing with water and soap, the recommended practice for killing the virus should it land on you, is not a simple matter for them.

More crises can be anticipated: pandemics; climate change, as seen in extreme weather events, water shortages, and rising sea levels around the world; and economic downturns, occurring either singly or connected together in ways we have difficulty fully appreciating.

In addition to engaging experts as we navigate our way into the future, we can look to artists, writers, citizen scientists, elders, indigenous communities, rural and urban communities, politicians, philosophers, ethicists, religious leaders, and bureaucrats of all stripes for more insight into the potential for collateral and unintended consequences.

We have the tools; what remains is the will and the wit to use them. Brute force analysis has its uses, but it’s also important to pay attention to the outliers. “We cannot solve our problems with the same thinking we used when we created them.” (Albert Einstein)

PDF of essays (Response to COVID-19 Pandemic and its Impacts, volume 1, issue 2, May 2020)

Low-resolution detail of an art work by Samuel Monnier. Inspired by the w:Fibonacci word fractal. Original owned by Alexis Monnerot-Dumaine. CC BY-SA 3.0

This image of an art piece derived from a Fibonacci word fractal was used to illustrate my essay (pp. 31-2) as reproduced in the PDF only.

For anyone unfamiliar with Fibonacci words (from its Wikipedia entry), Note: Links have been removed,

A Fibonacci word is a specific sequence of binary digits (or symbols from any two-letter alphabet). The Fibonacci word is formed by repeated concatenation in the same way that the Fibonacci numbers are formed by repeated addition.

It is a paradigmatic example of a Sturmian word and specifically, a morphic word.

The name “Fibonacci word” has also been used to refer to the members of a formal language L consisting of strings of zeros and ones with no two repeated ones. Any prefix of the specific Fibonacci word belongs to L, but so do many other strings. L has a Fibonacci number of members of each possible length.
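
For the curious, the “repeated concatenation” rule is short enough to show directly. Here is a small sketch using one common indexing, S(0) = “0” and S(1) = “01”; other sources start the indexing differently.

```python
# Build Fibonacci words by concatenation: S(n) = S(n-1) + S(n-2),
# mirroring how each Fibonacci number is the sum of the previous two.
def fibonacci_word(n: int) -> str:
    a, b = "0", "01"        # S(0), S(1)
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, b + a     # S(k+1) = S(k) + S(k-1)
    return b

for k in range(5):
    print(k, fibonacci_word(k))
# 0 0
# 1 01
# 2 010
# 3 01001
# 4 01001010   (the lengths 1, 2, 3, 5, 8 are themselves Fibonacci numbers)
```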

References used for op-ed

That opinion piece was roughly 787 words and, as such, fit into the 600–800-word submission guideline. It’s been a long time since I’ve written something without links and supporting information. What follows are the supporting sources I used for my statements. (Note: I have also included a few pieces that were published after my op-ed was submitted on April 20, 2020, as they lend further support to some of my contentions.)

Statistics for previous pandemics

https://en.wikipedia.org/wiki/Spanish_flu For 1918/19 numbers see: Mortality; Around the globe, 2nd. para

https://en.wikipedia.org/wiki/2009_swine_flu_pandemic

https://www.mphonline.org/worst-pandemics-in-history/

https://en.wikipedia.org/wiki/List_of_epidemics

https://www.who.int/gho/hiv/epidemic_status/deaths_text/en/

https://www.avert.org/global-hiv-and-aids-statistics

https://vancouversun.com/news/local-news/how-long-is-covid-19-pandemic-going-to-last-what-will-spread-look-like/ The print version of this article by Gordon Hoekstra featured a sidebar with statistics from the Imperial College of London, US Centers for Disease Control, and The Canadian Encyclopedia. Sadly, it has not been reproduced for the online version.

Statistics supporting my projections

https://www.covid-19canada.com/ This is a Canadian site relying on information from the Canadian federal government, Johns Hopkins University (US) and the World Health Organization (WHO) as well as others.

https://www.worldometers.info/coronavirus/ The focus is international with information being supplied by WHO and by various nations.

History of physical and social isolation and ‘flattening the curve’

https://www.washingtonpost.com/history/2020/03/12/during-pandemic-isaac-newton-had-work-home-too-he-used-time-wisely/

https://history.com/news/spanish-flu-pandemic-response-cities

https://www.livescience.com/coronavirus-flatten-the-curve.html What does flattening the curve mean? The writer provides an answer.

Slow response and harsh criticism (e.g., Canada)

https://www.cbc.ca/news/politics/covid-19-government-documents-1.5528726

https://www.cbc.ca/news/politics/coronavirus-pandemic-covid-canadian-military-intelligence-wuhan-1.5528381

https://vancouversun.com/opinion/columnists/douglas-todd-weighing-deaths-from-covid-19-against-deaths-from-despair/

Laissez-faire script

https://slate.com/news-and-politics/2020/04/sweden-coronavirus-response-death-social-distancing.html Essay by ex-pat Swede returning home. Ranges from pointed criticism to strong doubt mixed with hope (at the end) about laissez-faire.

Positive and negative views of the ‘flatten the curve’ policy

https://phys.org/news/2020-04-french-philosopher-virus-exploitation.html A French philosopher, Bernard-Henri Levy, suggests the policy reflects two views “life has become sacred” and the [information system] “helps to bring hysteria to the perception of things … . “

https://business.financialpost.com/diane-francis/diane-francis-to-beat-this-coronavirus-we-must-sacrifice-our-freedoms “To beat this coronavirus, we must sacrifice our freedoms” presents arguments for more control over what people do and don’t do. I find it unpleasantly extreme but Diane Francis supports her contentions with some valid points.

Unintended consequences (the good and the bad)

https://www.voanews.com/covid-19-pandemic/cleaner-air-covid-19-lockdowns-may-save-lives Cleaner air.

https://news.itu.int/sharing-best-practices-on-digital-cooperation-during-covid19-and-beyond/ “I think what COVID has done, is actually to put the will to get the world connected right in front of us – and we rallied around that will,” said Doreen Bogdan-Martin, Director of ITU’s Telecommunication Development Bureau. “We have come together in these very difficult circumstances and we have come up with innovative practices to actually better connect people who actually weren’t connected before.”

https://en.unesco.org/news/unesco-mobilizes-122-countries-promote-open-science-and-reinforced-cooperation-face-covid-19 Open science and more cooperation

https://www.aljazeera.com/news/2020/04/global-surge-domestic-violence-coronavirus-lockdowns-200406065737864.html International rise in domestic violence

https://news.un.org/en/story/2020/04/1061052 International rise in domestic violence

https://www.anu.edu.au/news/all-news/covid-19-to-push-half-a-billion-people-into-poverty Increase in poverty worldwide Australian National University press release

https://www.kcl.ac.uk/news/half-a-billion-people-could-be-pushed-into-poverty-by-covid-19 Increase in poverty worldwide Kings College London press release

https://www.wider.unu.edu/publication/estimates-impact-covid-19-global-poverty Rise in poverty worldwide UN working paper

https://www.techdirt.com/articles/20200422/11395644353/unesco-suggests-covid-19-is-reason-to-create-eternal-copyright.shtml Taking advantage of the situation

https://phys.org/news/2020-04-climate-scientists-world-response-coronavirus.html COVID-19 might prove helpful with climate change

ETC.

You might not want to keep getting advice from your usual (expert) crew only

https://www.frogheart.ca/?p=638 Kevin Dunbar’s research into how problem-solving is improved when you get a more diverse set of experts than usual

https://www.theatlantic.com/magazine/archive/2020/04/the-perks-of-being-a-weirdo/606778/?utm_source=pocket-newtab How loners and weirdos (often found amongst writers, artists, etc.) can promote more creative problem-solving

Brute force analysis and tools for broader consultation

I came up with the term ‘brute force analysis’ after an experience in local participatory budgeting. (For those who don’t know, there’s a movement afoot for a government body [in this case, it was the City of Vancouver] to dedicate a portion of its budget to a community [in this case, it was the West End neighbourhood] for citizens to decide on how the allocation should be spent.)

In our case, volunteers had gone out and solicited ideas for how neighbourhood residents would like to see the money spent. The ideas were categorized and a call for volunteers to work on committees went out. I ended up on the ‘arts and culture’ committee and we were tasked with taking some 300 – 400 suggestions and establishing a list of 10 – 12 possibilities for more discussion and research, after which we were to present three or four to city staff, who would select a maximum of two suggestions for a community vote.

Our deadlines, many of which seemed artificially imposed, were tight and we had to be quite ruthless as we winnowed away the suggestions. It became an exercise in determining which were the most frequently made suggestions, hence, ‘brute force analysis’. (This is a condensed description of the process.)
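
In code terms, that ‘brute force analysis’ amounted to little more than a frequency tally. Here is a toy version, with made-up suggestions standing in for the real ones.

```python
# A toy version of the tallying: normalise each suggestion, count how often it
# appears, and keep the most frequent few for further discussion.
from collections import Counter

suggestions = [
    "Public art in the park", "more live music", "public art in the park",
    "community mural", "More live music", "public art in the park",
]  # stand-ins for the 300-400 real suggestions

tally = Counter(s.strip().lower() for s in suggestions)
print(tally.most_common(3))
# [('public art in the park', 3), ('more live music', 2), ('community mural', 1)]
```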

As for tools to encourage wider participation, I was thinking of something like ‘Foldit’ (scroll down to ‘Folding …’). Both a research project (University of Washington) and a video puzzle game for participants who want to try protein-folding, it’s a remarkable effort first described in my August 6, 2010 posting when the researchers had their work published in Nature with an astonishing 50,000 co-authors.

Albert Einstein

The quote, “We cannot solve our problems with the same thinking we used when we created them,” is attributed to Albert Einstein in many places but I have not been able to find any supporting references or documentation.

The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (5 of 5)

At long last, the end is in sight! This last part is mostly a collection of items that don’t fit elsewhere or could have fit elsewhere but that particular part was already overstuffed.

Podcasting science for the people

March 2009 was the birth date for a podcast, then called Skeptically Speaking and now known as Science for the People (Wikipedia entry). Here’s more from the Science for the People About webpage,

Science for the People is a long-format interview podcast that explores the connections between science, popular culture, history, and public policy, to help listeners understand the evidence and arguments behind what’s in the news and on the shelves.

Every week, our hosts sit down with science researchers, writers, authors, journalists, and experts to discuss science from the past, the science that affects our lives today, and how science might change our future.

THE TEAM

Rachelle Saunders: Producer & Host

I love to learn new things, and say the word “fascinating” way too much. I like to talk about intersections and how science and critical thinking intersect with everyday life, politics, history, and culture. By day I’m a web developer, and I definitely listen to way too many podcasts.

….

H/t to GeekWrapped’s 20 Best Science Podcasts.

Science: human contexts and cosmopolitanism

situating science: Science in Human Contexts was a seven-year project ending in 2014 and funded by the Social Sciences and Humanities Research Council of Canada (SSHRC). Here’s more from their Project Summary webpage,

Created in 2007 with the generous funding of the Social Sciences and Humanities Research Council of Canada Strategic Knowledge Cluster grant, Situating Science is a seven-year project promoting communication and collaboration among humanists and social scientists that are engaged in the study of science and technology.

You can find out more about Situating Science’s final days in my August 16, 2013 posting where I included a lot of information about one of their last events titled, “Science and Society 2013 Symposium; Emerging Agendas for Citizens and the Sciences.”

The “think-tank” will dovetail nicely with a special symposium in Ottawa on Science and Society Oct. 21-23. For this symposium, the Cluster is partnering with the Institute for Science, Society and Policy to bring together scholars from various disciplines, public servants and policy workers to discuss key issues at the intersection of science and society. [emphasis mine]  The discussions will be compiled in a document to be shared with stakeholders and the wider public.

The team will continue to seek support and partnerships for projects within the scope of its objectives. Among our top priorities are a partnership to explore sciences, technologies and their publics as well as new partnerships to build upon exchanges between scholars and institutions in India, Singapore and Canada.

The Situating Science folks did attempt to carry on the organization’s work by rebranding the organization to call it the Canadian Consortium for Situating Science and Technology (CCSST). It seems to have been a short-lived volunteer effort.

Meanwhile, the special symposium held in October 2013 appears to have been the springboard for another SSHRC funded multi-year initiative, this time focused on science collaborations between Canada, India, and Singapore, Cosmopolitanism and the Local in Science and Nature from 2014 – 2017. Despite their sunset year having been in 2017, their homepage boasts news about a 2020 Congress and their Twitter feed is still active. Harking back, here’s what the project was designed to do, from the About Us page,

Welcome to our three year project that will establish a research network on “Cosmopolitanism” in science. It closely examines the actual types of negotiations that go into the making of science and its culture within an increasingly globalized landscape. This partnership is both about “cosmopolitanism and the local” and is, at the same time, cosmopolitan and local.

Anyone who reads this blog with any frequency will know that I often comment on the fact that when organizations such as the Council of Canadian Academies bring in experts from other parts of the world, they are almost always from the US or Europe. So, I was delighted to discover the Cosmopolitanism project and featured it in a February 19, 2015 posting.

Here’s more from Cosmopolitanism’s About Us page

Specifically, the project will:

  1. Expose a hitherto largely Eurocentric scholarly community in Canada to widening international perspectives and methods,
  2. Build on past successes at border-crossings and exchanges between the participants,
  3. Facilitate a much needed nation-wide organization and exchange amongst Indian and South East Asian scholars, in concert with their Canadian counterparts, by integrating into an international network,
  4. Open up new perspectives on the genesis and place of globalized science, and thereby
  5. Offer alternative ways to conceptualize and engage globalization itself, and especially the globalization of knowledge and science.
  6. Bring the managerial team together for joint discussion, research exchange, leveraging and planning – all in the aid of laying the grounds of a sustainable partnership

Eco Art (also known as ecological art or environmental art)

I’m of two minds as to whether I should have tried to stuff this into the art/sci subsection in part 2. On balance, I decided that this merited its own section and that part 2 was already overstuffed.

Let’s start in Newfoundland and Labrador with Marlene Creates (pronounced Kreets), here’s more about her from her website’s bio webpage,

Marlene Creates (pronounced “Kreets”) is an environmental artist and poet who works with photography, video, scientific and vernacular knowledge, walking and collaborative site-specific performance in the six-acre patch of boreal forest in Portugal Cove, Newfoundland and Labrador, Canada, where she lives.

For almost 40 years her work has been an exploration of the relationship between human experience, memory, language and the land, and the impact they have on each other. …

Currently her work is focused on the six acres of boreal forest where she lives in a ‘relational aesthetic’ to the land. This oeuvre includes Water Flowing to the Sea Captured at the Speed of Light, Blast Hole Pond River, Newfoundland 2002–2003, and several ongoing projects:

Marlene Creates received a Governor General’s Award in Visual and Media Arts for “Lifetime Artistic Achievement” in 2019. …

As mentioned in her bio, Creates has a ‘forest’ project, The Boreal Poetry Garden, Portugal Cove, Newfoundland 2005– (ongoing). If you are interested in exploring it, she has created a virtual walk here. Just click on one of the index items on the right side of the screen to activate a video.

An October 1, 2018 article by Yasmin Nurming-Por for Canadian Art magazine features 10 artists who focus on environmental and/or land art themes,

As part of her 2016 master’s thesis exhibition, Fredericton [New Brunswick] artist Gillian Dykeman presented the video Dispatches from the Feminist Utopian Future within a larger installation that imagined various canonical earthworks from the perspective of the future. It’s a project that addresses the inherent sense of timelessness in these massive interventions on the natural landscape from the perspective of contemporary land politics. … she proposes a kind of interaction with the invasive and often colonial gestures of modernist Land art, one that imagines a different future for these earthworks, where they are treated as alien in a landscape and as beacons from a feminist future.

A video trailer featuring “DISPATCHES FROM THE FEMINIST UTOPIAN FUTURE” (from the archive page on Dykeman’s website featuring the show),

If you have the time, I recommend reading the article in its entirety.

Oddly, I did not expect Vancouver to have such an active eco arts focus. The City of Vancouver Parks Board maintains an Environmental Art webpage on its site listing a number of current and past projects.

I cannot find the date for when this Parks Board initiative started but I did find a document produced prior to a Spring 2006 Arts & Ecology think tank held in Vancouver under the auspices of the Canada Council for the Arts, the Canadian Commission for UNESCO, the Vancouver Foundation, and the Royal Society for the Encouragement of the Arts, Manufactures and Commerce (London UK).

In all likelihood, Vancouver Park Board’s Environmental Art webpage was produced after 2006.

I imagine the document and the think tank session helped to anchor any then current eco art projects and encouraged more projects.

The document (MAPPING THE TERRAIN OF CONTEMPORARY ECOART PRACTICE AND COLLABORATION), while almost 14 years old, offers a fascinating overview of what was happening internationally and in Canada.

While its early days were in 2008, EartHand Gleaners (Vancouver-based) wasn’t formally founded as an arts not-for-profit organization until 2013. You can find out more about them and their projects here.

Eco Art has been around for decades, according to the eco art think tank document, but it does seem to have gained momentum here in Canada over the last decade.

Photography and the Natural Sciences and Engineering Research Council of Canada (NSERC)

Exploring the jack pine tight knit family tree. Credit: Dana Harris, Brock University (2018)

Pictured are developing phloem, cambial, and xylem cells (blue), and mature xylem cells (red), in the outermost portion of a jack pine tree. This research aims to identify the influences of climate on the cellular development of the species at its northern limit in Yellowknife, NT. The differences in these cell formations are what create the annual tree ring boundary.

Science Exposed is a photography contest for scientists that has run since 2016 (assuming the Past Winners archive is a good indicator of the programme’s starting year).

The 2020 competition recently closed but public voting should start soon. It’s nice to see that NSERC is now making efforts to engage members of the general public rather than focusing solely on children. The UK’s ASPIRES project seems to support the idea that adults need to be more fully engaged with STEM (science, technology, engineering, and mathematics), as it found that children’s attitudes toward science are strongly influenced by their parents’ and relatives’ attitudes. (See my January 31, 2012 posting.)

Ingenious, the book and Ingenium, the science museums

To celebrate Canada’s 150th anniversary in 2017, then Governor General David Johnston and Tom Jenkins (Chair of the board for Open Text and former Chair of the federal committee overseeing the ‘Review of Federal Support to R&D’ [see my October 21, 2011 posting about the resulting report]) wrote a book about Canada’s inventors and inventions.

Johnston and Jenkins jaunted around the country launching their book (I have more about their June 1, 2017 Vancouver visit in a May 30, 2017 posting; scroll down about 60% of the way).

The book’s full title, “Ingenious: How Canadian Innovators Made the World Smarter, Smaller, Kinder, Safer, Healthier, Wealthier and Happier,” outlines their thesis neatly.

Not all that long after the book was launched, there was a name change (thankfully) for the Canada Science and Technology Museums Corporation (CSTMC). It is now known as Ingenium (covered in my August 10, 2017 posting).

The reason that name change was such a relief (for those who don’t know) is that the corporation included three national science museums: Canada Aviation and Space Museum, Canada Agriculture and Food Museum, and (wait for it) Canada Science and Technology Museum. On the list of confusing names, this ranks very high for me. Again, I give thanks for the change from CSTMC to Ingenium, leaving the name for the museum alone.

2017 was also the year that the newly refurbished Canada Science and Technology Museum was reopened after more than three years (see my June 23, 2017 posting about the November 2017 reopening and my June 12, 2015 posting for more information about the situation that led to the closure).

A Saskatchewan lab, Convergence, Order of Canada, Year of Science, Animated Mathematics, a graphic novel, and new media

Since this section is jam-packed, I’m using subheads.

Saskatchewan

Dr. Brian Eames hosts an artist-in-residence, Jean-Sebastien (JS) Gauthier, at the University of Saskatchewan’s College of Medicine Eames Lab. A February 16, 2018 posting here featured their first collaboration. It covered evolutionary biology, the synchrotron (Canadian Light Source [CLS]) in Saskatoon, and the ‘ins and outs’ of a collaboration between a scientist and an artist. Presumably the artist-in-residence position indicates that the first collaboration went very well.

In January 2020, Brian kindly gave me an update on their current projects. Jean-Sebastien successfully coded an interactive piece for an exhibit at the 2019 Nuit Blanche Saskatoon event using a Kinect (Xbox) sensor. More recently, he got a VR [virtual reality] helmet for an upcoming project or two.

After much clicking on the Nuit Blanche Saskatoon 2019 interactive map, I found this,

Our Glass is a work of interactive SciArt co-created by artist JS Gauthier and biologist Dr Brian F. Eames. It uses cutting-edge 3D microscopic images produced for artistic purposes at the Canadian Light Source, Canada’s only synchrotron facility. Our Glass engages viewers of all ages to peer within an hourglass showing how embryonic development compares among animals with whom we share a close genetic heritage.

Eames also mentioned they were hoping to hold an international SciArt Symposium at the University of Saskatchewan in 2021.

Convergence

Dr. Cristian Zaelzer-Perez, an instructor at Concordia University (Montreal; read this November 20, 2019 Concordia news release by Kelsey Rolfe for more about his work and awards), in 2016 founded the Convergence Initiative, a not-for-profit organization that encourages interdisciplinary neuroscience and art collaborations.

Cat Lau’s December 23, 2019 posting for the Science Borealis blog provides insight into Zaelzer-Perez’s relationship to science and art,

Cristian: I have had a relationship with art and science ever since I have had memory. As a child, I loved to do classifications, from grouping different flowers to collecting leaves by their shapes. At the same time, I really loved to draw them and for me, both things never looked different; they (art and science) have always worked together.

I started as a graphic designer, but the pursuit to learn about nature was never dead. At some point, I knew I wanted to go back to school to do research, to explore and learn new things. I started studying medical technologies, then molecular biology and then jumped into a PhD. At that point, my life as a graphic designer slipped down, because of the focus you have to give to the discipline. It seemed like every time I tried to dedicate myself to one thing, I would find myself doing the other thing a couple years later.

I came to Montreal to do my post-doc, but I had trouble publishing, which became problematic in getting a career. I was still loving what I was doing, but not seeing a future in that. Once again, art came back into my life and at the same time I saw that science was becoming really hard to understand and scientists were not doing much to bridge the gap.

The Convergence Initiative has an impressive array of programmes. Do check it out.

Order of Canada and ‘The Science Lady’

For a writer of children’s science books, an appointment to the Order of Canada is a singular honour. I cannot recall a children’s science book writer before Shar Levine being appointed a Member of the Order of Canada. Known as ‘The Science Lady‘, Levine was appointed in 2016. Here’s more from her Wikipedia entry (Note: Links have been removed),

Shar Levine (born 1953) is an award-winning, best selling Canadian children’s author, and designer.

Shar has written over 70 books and book/kits, primarily on hands-on science for children. For her work in science literacy and science promotion, Shar was appointed to the Order of Canada in 2016. In 2015, she was recognized by the University of Alberta and received their Alumni Honour Award. Levine and her co-author, Leslie Johnstone, were co-recipients of the Eve Savory Award for Science Communication from the BC Innovation Council (2006), and their book, Backyard Science, was a finalist for the Subaru Award (hands-on activity) from the American Association for the Advancement of Science, Science Books and Films (2005). The Ultimate Guide to Your Microscope was a finalist for the 2008 American Association for the Advancement of Science/Subaru Science Books and Films Prize, Hands-On Science/Activity Books.

To get a sense of what an appointment to the Order of Canada means, here’s a description from the government of Canada website,

The Order of Canada is how our country honours people who make extraordinary contributions to the nation.

Since its creation in 1967—Canada’s centennial year—more than 7 000 people from all sectors of society have been invested into the Order. The contributions of these trailblazers are varied, yet they have all enriched the lives of others and made a difference to this country. Their grit and passion inspire us, teach us and show us the way forward. They exemplify the Order’s motto: DESIDERANTES MELIOREM PATRIAM (“They desire a better country”).

Year of Science in British Columbia

In the Fall of 2010, the British Columbia provincial government announced a Year of Science (coinciding with the school year). Originally, it was supposed to be a provincial government-wide initiative, but the idea percolated through any number of processes and emerged as a year dedicated to science education for youth (according to the idea’s originator, Moira Stilwell, who was then a Member of the Legislative Assembly [MLA]; I spoke with her sometime in 2010 or 2011).

As the ‘year’ drew to a close, there was a finale ($1.1M in funding), which was featured here in a July 6, 2011 posting.

The larger portion of the money ($1M) was awarded to Science World, while $100,000 ($0.1M) was given to the Pacific Institute for the Mathematical Sciences. To my knowledge, there have been no follow-up announcements about how the money was used.

Animation and mathematics

In Toronto, mathematician Dr. Karan Singh enjoyed a flurry of interest due to his association with animator Chris Landreth and their Academy Award (Oscar)-winning 2004 animated film, Ryan. They have continued to work together as members of the Dynamic Graphics Project (DGP) Lab at the University of Toronto. Theirs is not the only Oscar-winning work to emerge from members of the lab: Jos Stam, a DGP graduate and adjunct professor, won his third in 2019.

A graphic novel and medical promise

An academic at Simon Fraser University since 2015, Coleman Nye worked with three other women to produce a graphic novel about medical dilemmas in a genre described as ‘ethno-fiction’.

Lissa: A Story about Medical Promise, Friendship, and Revolution (2017) was written by Sherine Hamdy and Coleman Nye, two anthropologists, with art by Sarula Bao and Caroline Brewer, two artists.

Here’s a description of the book from the University of Toronto Press website,

As young girls in Cairo, Anna and Layla strike up an unlikely friendship that crosses class, cultural, and religious divides. Years later, Anna learns that she may carry the hereditary cancer gene responsible for her mother’s death. Meanwhile, Layla’s family is faced with a difficult decision about kidney transplantation. Their friendship is put to the test when these medical crises reveal stark differences in their perspectives…until revolutionary unrest in Egypt changes their lives forever.

The first book in a new series [ethnoGRAPHIC; a series of graphic novels from the University of Toronto Press], Lissa brings anthropological research to life in comic form, combining scholarly insights and accessible, visually rich storytelling to foster greater understanding of global politics, inequalities, and solidarity.

I hope to write more about this graphic novel in a future posting.

New Media

I don’t know if this could be described as a movement yet but it’s certainly an interesting minor development. Two new media centres have hosted, in the last four years, art/sci projects and/or workshops. It’s unexpected given this definition from the Wikipedia entry for New Media (Note: Links have been removed),

New media are forms of media that are computational and rely on computers for redistribution. Some examples of new media are computer animations, computer games, human-computer interfaces, interactive computer installations, websites, and virtual worlds.[1][2]

In Manitoba, the Video Pool Media Arts Centre hosted a February 2016 workshop, Biology as a New Art Medium: Workshop with Marta De Menezes. De Menezes, an artist from Portugal, gave workshops and talks in both Winnipeg (Manitoba) and Toronto (Ontario). Here’s a description for the one in Winnipeg,

This workshop aims to explore the multiple possibilities of artistic approaches that can be developed in relation to Art and Microbiology in a DIY situation. A special emphasis will be placed on the development of collaborative art and microbiology projects where the artist has to learn some biological research skills in order to create the artwork. The course will consist of a series of intense experimental sessions that will give rise to discussions on the artistic, aesthetic and ethical issues raised by the art and the science involved. Handling these materials and organisms will provoke a reflection on the theoretical issues involved and the course will provide background information on the current diversity of artistic discourses centred on biological sciences, as well as a forum for debate.

VIVO Media Arts Centre in Vancouver hosted the Invasive Systems exhibition in 2019. From the exhibition page,

Picture this – a world where AI invades human creativity, bacteria invade our brains, and invisible technological signals penetrate all natural environments. Where invasive species from plants to humans transform spaces where they don’t belong, technology infiltrates every aspect of our daily lives, and the waste of human inventions ravages our natural environments.

This weekend festival includes an art-science exhibition [emphasis mine], a hands-on workshop (Sat, separate registration required), and guided discussions and tours by the curator (Sat/Sun). It will showcase collaborative works by three artist/scientist pairs, and independent works by six artists. Opening reception will be on Friday, November 8 starting at 7pm; curator’s remarks and performance by Edzi’u at 7:30pm and 9pm. 

New Westminster’s (British Columbia) New Media Gallery recently hosted an exhibition, ‘winds‘ (June 20 – September 29, 2019), that could be described as an art/sci exhibition,

Landscape and weather have long shared an intimate connection with the arts.  Each of the works here is a landscape: captured, interpreted and presented through a range of technologies. The four artists in this exhibition have taken, as their material process, the movement of wind through physical space & time. They explore how our perception and understanding of landscape can be interpreted through technology. 

These works have been created by what might be understood as a sort of scientific method or process that involves collecting data, acute observation, controlled experiments and the incorporation of measurements and technologies that control or collect motion, pressure, sound, pattern and the like. …

Council of Canadian Academies, Publishing, and Open Access

Established in 2005, the Council of Canadian Academies (CCA) (Wikipedia entry) is tasked by various departments and agencies to answer their queries about science issues that could affect the populace and/or the government. In 2014, the CCA published a report titled, Science Culture: Where Canada Stands. It was produced in response to a joint request from the Canada Science and Technology Museums Corporation (now called Ingenium), Industry Canada, and Natural Resources Canada for an in-depth, independent assessment of the state of Canada’s science culture.

I gave a pretty extensive analysis of the report, which I delivered in four parts: Part 1, Part 2 (a), Part 2 (b), and Part 3. In brief, the term ‘science culture’ seems to be specifically Canadian, i.e., it’s not used elsewhere in the world (that we know of). We have lots to be proud of. I was a little disappointed by the lack of culture (arts) producers on the expert panel and, as usual, I bemoaned the fact that the international reviewers, panel members, and points for comparison were drawn from the usual suspects (the US, the UK, or somewhere in northern Europe).

Science publishing in Canada took a bit of a turn in 2010, when the country’s largest science publisher, NRC (National Research Council) Research Press, was cut loose from the government and spun out into the private, *not-for-profit publisher*, Canadian Science Publishing (CSP). From the CSP Wikipedia entry,

Since 2010, Canadian Science Publishing has acquired five new journals and launched four new journals.

Canadian Science Publishing offers researchers options to make their published papers freely available (open access) in their standard journals and in their open access journals (from the CSP Wikipedia entry),

Arctic Science aims to provide a collaborative approach to Arctic research for a diverse group of users including government, policy makers, the general public, and researchers across all scientific fields

FACETS is Canada’s first open access multidisciplinary science journal, aiming to advance science by publishing research for the multi-faceted global research community. FACETS is the official journal of the Royal Society of Canada’s Academy of Science.

Anthropocene Coasts aims to understand and predict the effects of human activity, including climate change, on coastal regions.

In addition, Canadian Science Publishing strives to make its content accessible through the CSP blog, which includes plain language summaries of featured research. The open-access journal FACETS similarly publishes plain language summaries.

*comment removed*

CSP announced (on Twitter) a new annual contest in 2016,

Canadian Science Publishing@cdnsciencepub

New CONTEST! Announcing Visualizing Science! Share your science images & win great prizes! Full details on the blog http://cdnsciencepub.com/blog/2016-csp-image-contest-visualizing-science.aspx

1:45 PM · Sep 19, 2016 · TweetDeck

The 2016 blog posting is no longer accessible. Oddly for a contest of this type, I can’t find an image archive for previous contests. Regardless, a new competition has been announced for Summer 2020. There are some details on the VISUALIZING SCIENCE 2020 webpage but some are missing, e.g., no opening date, no deadline. They are encouraging you to sign up for notices.

Back to open access: in a January 22, 2016 posting I featured news about Montreal Neuro (Montreal Neurological Institute [MNI] in Québec, Canada), its then new policy of giving researchers worldwide access to its research, and its pledge not to seek patents for its work.

Fish, Newfoundland & Labrador, and Prince Edward Island

AquAdvantage’s genetically modified salmon was approved for consumption in Canada, according to my May 20, 2016 posting. The salmon are produced/farmed by a US company (AquaBounty), but the work of genetically modifying Atlantic salmon with genetic material from the Chinook (a Pacific Ocean salmon) was mostly undertaken at Memorial University in Newfoundland & Labrador.

The process by which work done in Newfoundland & Labrador becomes the property of a US company is one that’s well known here in Canada. The preliminary work and technology is developed here and then purchased by a US company, which files patents, markets, and profits from it. Interestingly, the fish farms for the AquAdvantage salmon are mostly (two out of three) located on Prince Edward Island.

Intriguingly, 4.5 tonnes of the modified fish were sold for consumption in Canada without consumers being informed (see my Sept. 13, 2017 posting, scroll down about 45% of the way).

It’s not all sunshine and roses where science culture in Canada is concerned. Incidents where Canadians are not informed, let alone consulted, about major changes in the food supply and other areas are not unusual. Too many times, scientists, politicians, and government policy experts want to spread news about science without expecting any response from the recipients, who are in effect viewed as a ‘tabula rasa’ or blank slate.

Tying it all up

This series has been my best attempt to document, in some fashion or another, the extraordinary range of science culture in Canada from roughly 2010-19. Thank you! What it describes represents a huge amount of work and effort to develop science culture in Canada, and I am deeply thankful that people give so much to this effort.

I have inevitably missed people and organizations and events. For that I am very sorry. (There is an addendum to the series as it’s been hard to stop but I don’t expect to add anything or anyone more.)

I want to mention, but can’t expand upon, the Pan-Canadian Artificial Intelligence Strategy, which was established in the 2017 federal budget (see a March 31, 2017 posting about the Vector Institute and Canada’s artificial intelligence sector).

Science Borealis, the Canadian science blog aggregator, owes its existence to Canadian Science Publishing for the support (programming and financial) needed to establish itself and, I believe, that support is still ongoing. I think thanks are also due to Jenny Ryan, who was working for CSP at the time and championed the initiative. Jenny now works for Canadian Blood Services. Interestingly, that agency added a new programme, a ‘Lay Science Writing Competition’, in 2018. It’s offered in partnership with two other groups, the Centre for Blood Research at the University of British Columbia and Science Borealis.

While the Royal Astronomical Society of Canada does not fit into my time frame, as its founding date is December 1, 1868 (18 months after Confederation), the organization did celebrate its 150th anniversary in 2018.

Vancouver’s Electric Company often produces theatrical experiences that cover science topics such as the one featured in my June 7, 2013 posting, You are very star—an immersive transmedia experience.

Let’s Talk Science (Wikipedia entry) has been heavily involved in offering STEM (science, technology, engineering, and mathematics) programming, both curricular and extra-curricular, across Canada since 1993.

The Royal Canadian Institute of Science (Wikipedia entry) predates Confederation, having been founded in Toronto in 1849 by Sir Sandford Fleming and Kivas Tully for surveyors, civil engineers, and architects. With almost no interruption, it has been delivering a regular series of lectures on the University of Toronto campus since 1913.

The Perimeter Institute for Theoretical Physics is a more recent beast. In 1999, Mike Lazaridis, founder of Research In Motion (now known as BlackBerry Limited), acted as both founder and major benefactor for this institute in Waterloo, Ontario. It offers substantive and imaginative outreach programmes such as Arts and Culture: “Event Horizons is a series of unique and extraordinary events that aim to stimulate and enthral. It is a showcase of innovative work of the highest international standard, an emotional, intellectual, and creative experience. And perhaps most importantly, it is a social space, where ideas collide and curious minds meet.”

While gene-editing hasn’t seemed to be top-of-mind for anyone other than those in the art/sci community, that may change. My April 26, 2019 posting focused on what appears to be a campaign to reverse Canada’s criminal ban on human gene-editing of inheritable (germline) cells. With less potential for controversy, there is a discussion about somatic gene therapies and engineered cell therapies. A report from the Council of Canadian Academies is due in the Fall of 2020. (The therapies being discussed do not involve germline editing.)

French language science media and podcasting

Agence Science-Presse is unique as it is the only press agency in Canada devoted to science news. Founded in 1978, it has been active in print, radio, television, online blogs, and podcasts (Baladodiffusion). You can find their Twitter feed here.

I recently stumbled across a ‘balado’ (podcast) titled 20%. Started in January 2019 by the magazine Québec Science, the podcast is devoted to women in science and technology. The podcast’s name refers to the statistic that women make up only 20% of the workforce in science and technology: “Dans les domaines de la science et de la technologie, les femmes ne forment que 20% de la main-d’oeuvre.” (from the podcast webpage) The podcast is a co-production between Québec Science [founded in 1962] and l’Acfas [formerly, l’Association Canadienne-Française pour l’Avancement des Sciences, now, Association francophone pour le savoir], in collaboration with the Canadian Commission for UNESCO, L’Oréal Canada, and the radio station Choq.ca (also from the podcast webpage).

Does it mean anything?

There have been many developments since I started writing this series in late December 2019. In January 2020, Iran mistakenly shot down a Ukrainian passenger plane (Ukraine International Airlines Flight PS752) shortly after takeoff from Tehran. That error killed some 176 people, many of them (136 Canadians and students) bound for Canada. The number of people aboard who were involved in the sciences, technology, and medicine was striking.

It was a shocking loss and will reverberate for quite some time. There is a memorial posting here (January 13, 2020), which includes links to another memorial posting and an essay.

As I write this, we are dealing with the COVID-19 pandemic, which has us all practicing physical and social distancing. Large gatherings are expressly forbidden. All of this is being done in a bid to slow the spread of SARS-CoV-2, the virus that causes COVID-19.

In the short term at least, it seems that much of what I’ve described in these five parts (and the addendum) will undergo significant changes or simply fade away.

As for the long term, with the last 10 years having hosted the liveliest science culture scene I can recall, I’m hopeful that science culture in Canada will not merely survive but thrive.

For anyone who missed them:

Part 1 covers science communication, science media (mainstream and others such as blogging) and arts as exemplified by music and dance: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (1 of 5).

Part 2 covers art/science (or art/sci or sciart) efforts, science festivals both national and local, international art and technology conferences held in Canada, and various bar/pub/café events: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (2 of 5).

Part 3 covers comedy, do-it-yourself (DIY) biology, chief science advisor, science policy, mathematicians, and more: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (3 of 5).

Part 4 covers citizen science, birds, climate change, indigenous knowledge (science), and the IISD Experimental Lakes Area: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (4 of 5).

*”for-profit publisher, Canadian Science Publishing (CSP)” corrected to “not-for-profit publisher, Canadian Science Publishing (CSP)” and this comment “Not bad for a for-profit business, eh?” removed on April 29, 2020 as per Twitter comments,

Canadian Science Publishing @cdnsciencepub

Hi Maryse, thank you for alerting us to your blog. To clarify, Canadian Science Publishing is a not-for-profit publisher. Thank you as well for sharing our image contest. We’ve updated the contest page to indicate that the contest opens July 2020!

10:01am · 29 Apr 2020 · Twitter Web App