
Governments need to tell us when and how they’re using AI (artificial intelligence) algorithms to make decisions

I have two items and an exploration of the Canadian scene, all three of which feature governments, artificial intelligence, and responsibility.

Special issue of Information Polity edited by Dutch academics,

A December 14, 2020 IOS Press press release (also on EurekAlert) announces a special issue of Information Polity focused on algorithmic transparency in government,

Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.

Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.

Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.

“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”

The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.

“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”

The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.

For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”

At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transport Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as a tool for transparency in government decision-making,” comments Dr. Ingrams.

“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”

“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.

This image illustrates the interplay between the dynamics at the various levels,

Caption: Studying algorithms and algorithmic transparency from multiple levels of analyses. Credit: Information Polity.

Here’s a link to and a citation for the special issue,

Algorithmic Transparency in Government: Towards a Multi-Level Perspective
Guest Editors: Sarah Giest, PhD, and Stephan Grimmelikhuijsen, PhD
Information Polity, Volume 25, Issue 4 (December 2020), published by IOS Press

The issue is open access for three months, Dec. 14, 2020 – March 14, 2021.

Two articles from the special issue were featured in the press release,

“The agency of algorithms: Understanding human-algorithm interaction in administrative decision-making,” by Rik Peeters, PhD (https://doi.org/10.3233/IP-200253)

“A machine learning approach to open public comments for policymaking,” by Alex Ingrams, PhD (https://doi.org/10.3233/IP-200256)
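For readers curious about what an unsupervised topic-clustering analysis of public comments might look like in practice, here is a minimal sketch using latent Dirichlet allocation from the scikit-learn library. The sample comments, the parameter choices, and the library itself are my own illustrative assumptions; they are not the method or data used in Dr. Ingrams’ article.

```python
# A minimal, illustrative sketch of unsupervised topic clustering on public
# comments. The sample comments and all parameters below are assumptions for
# demonstration only -- not the actual data or method from the article.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical stand-ins for the thousands of comments submitted on the
# 2013 full body scanner regulation.
comments = [
    "Full body scanners are an invasion of privacy and should be optional.",
    "I support the scanners because they make air travel safer.",
    "The health effects of scanner radiation have not been studied enough.",
    "Pat-downs are more invasive than the imaging technology itself.",
    "Security lines are already too long; scanners slow everything down.",
]

# Convert the free-text comments into a document-term matrix.
vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(comments)

# Fit a small topic model; the number of topics is a tuning choice.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

# Print the most heavily weighted words for each discovered topic cluster.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```

The appeal of this kind of analysis for transparency is that the topic clusters surface recurring themes (privacy, health, wait times, and so on) without anyone having to read every comment, which is what the article suggests could help policymakers make sense of large public commenting processes.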

An AI governance publication from the US’s Wilson Center

Within one week of the release of the special issue of Information Polity on AI and governments, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,

Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg

Abstract

In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well-specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:

  • AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
  • Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
  • The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
  • The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
  • The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
  • As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.

Unfortunately, I haven’t been able to successfully download the working paper/report from the Wilson Center’s Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems webpage.

However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibility, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as the BRIE Working Paper 2020-5.

Canadian government and AI

The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.

There is information out there but it’s scattered across various government initiatives and ministries. Above all, it is not easy to find; open communication does not seem to be a priority. Whether that’s by design or due to the blindness and/or ineptitude found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they have the same problem. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)

Responsible use? Maybe not after 2019

First there’s a government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page “Date modified: 2020-07-28,” all of the information dates from 2016 up to March 2019 (which you’ll find on ‘Our Timeline’). Is nothing new happening?

For anyone interested in responsible use, there are two sections, “Our guiding principles” and “Directive on Automated Decision-Making,” that answer some questions. I found the ‘Directive’ to be more informative with its definitions, objectives, and, even, consequences. Sadly, you need to keep clicking to find the consequences, and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head?

What about the government’s digital service?

You might think the Canadian Digital Service (CDS) would also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,

In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.

At the time, Simon was Director of Outreach at Code for Canada.

Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (it’s a position with virtually no governmental infrastructure as opposed to the Minister of Innovation, Science and Economic Development, who is responsible for many departments and agencies). The current minister is Joyce Murray, whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.

Meanwhile, the folks at CDS are friendly but they don’t offer much substantive information. From the CDS homepage,

Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.

Learn more

After clicking on Learn more, I found this,

At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.

How it works

We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.

Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.

Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.

Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.

As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)

Does the Treasury Board of Canada have charge of responsible AI use?

I think so but there are government departments/ministries that also have some responsibilities for AI and I haven’t seen any links back to the Treasury Board documentation.

For anyone not familiar with the Treasury Board (or even if you are), a December 14, 2009 article (Treasury Board of Canada: History, Organization and Issues) on Maple Leaf Web is quite informative,

The Treasury Board of Canada represent a key entity within the federal government. As an important cabinet committee and central agency, they play an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.

It seems the Minister of Digital Government, Joyce Murray, is part of the Treasury Board, and the Treasury Board is the source for the Digital Operations Strategic Plan: 2018-2022,

I haven’t read the entire document but the table of contents doesn’t include a heading for artificial intelligence and there wasn’t any mention of it in the opening comments.

But isn’t there a Chief Information Officer for Canada?

Herein lies a tale (I doubt I’ll ever get the real story) but the answer is a qualified ‘no’. The Chief Information Officer for Canada, Alex Benay (there is an AI aspect), stepped down in September 2019 to join a startup company, according to an August 6, 2019 article by Mia Hunt for Global Government Forum,

Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.

“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.

He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.

He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]

I cannot find a current Chief Information Officer for Canada despite searches, but I did find this List of chief information officers (CIO) by institution. Where there was one, there are now many.

Since September 2019, Mr. Benay has moved again, according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),

Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a short few months stint.

The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.

Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.

Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.

Mindbridge has not completely lost what was touted as a star hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”

Via Mr. Benay, I’ve re-introduced artificial intelligence and introduced the Phoenix Pay System; now I’m linking them to the government’s implementation of information technology in a specific case and speculating about the implementation of artificial intelligence algorithms in government.

Phoenix Pay System Debacle (things are looking up), a harbinger for responsible use of artificial intelligence?

I’m happy to hear that the situation where government employees had no certainty about their paycheques is becoming better. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found they might receive the correct amount on their paycheques, significantly less than they were entitled to, or huge increases.

The instability alone would be distressing but adding to it with the inability to get the problem fixed must have been devastating. Almost five years later, the problems are being resolved and people are getting paid appropriately, more often.

The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,

Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.

And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.

Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.

These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.

While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PIPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.

Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.

Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?

Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.

When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.

Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.

Instead, the Phoenix Pay system currently employs about 2,300.  This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.

… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].

Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) in their operations.

I found this on a Treasury Board webpage, all 1 minute and 29 seconds of it,

The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.

As for Public Services and Procurement Canada, they have an Artificial intelligence source list,

Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).

After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:

Insights and predictive modelling

Machine interactions

Cognitive automation

PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.

I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,

Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians’ needs.

Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.

To sum up, I could not find any information dated after March 2019 about Canada, its government, and plans for AI, especially responsible management/governance of AI, on a Canadian government website, although I did find guidelines, expectations, and consequences for non-compliance. (Should anyone know which government agency has up-to-date information on its responsible use of AI, please let me know in the Comments.)

Canadian Institute for Advanced Research (CIFAR)

The first mention of the Pan-Canadian Artificial Intelligence Strategy is in my analysis of the Canadian federal budget in a March 24, 2017 posting. Briefly, CIFAR received a big chunk of that money. Here’s more about the strategy from the CIFAR Pan-Canadian AI Strategy homepage,

In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.

CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.

The objectives of the strategy are to:

Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.

Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.

Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.

Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.

Responsible AI at CIFAR

You can find Responsible AI in a webspace devoted to what they call AI & Society. Here’s more from the homepage,

CIFAR is leading global conversations about AI’s impact on society.

The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.

Solution Networks

AI Futures Policy Labs

AI & Society Workshops

Building an AI World

Under the category of building an AI World I found this (from CIFAR’s AI & Society homepage),

BUILDING AN AI WORLD

Explore the landscape of global AI strategies.

Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.

I skimmed through the second report and it seems more like a comparative study of various countries’ AI strategies than an overview of responsible use of AI.

Final comments about Responsible AI in Canada and the new reports

I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.

I didn’t find any details about how AI is being integrated into government departments and for what uses. I’d like to know and I’d like to have some say about how it’s used and how the inevitable mistakes will be dealt with.

The great unwashed

What I’ve found is high-minded but, as far as I can tell, there’s absolutely no interest in talking to the ‘great unwashed’. Those of us who are not experts are being left out of these early-stage conversations.

I’m sure we’ll be consulted at some point but it will be long past the time when our opinions and insights could have had an impact and helped us avoid the problems that experts tend not to see. What we’ll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government’s part.

Let’s take this as an example. The Phoenix Pay System’s first phase was implemented on Feb. 24, 2016. As I recall, problems developed almost immediately. The second phase of implementation started April 21, 2016. In May 2016, the government hired consultants to fix the problems. On November 29, 2016, the government minister, Judy Foote, admitted a mistake had been made. In February 2017, the government hired consultants to establish what lessons it might learn. By February 15, 2018, the pay problems backlog amounted to 633,000. Source: James Bagnall’s Feb. 23, 2018 ‘timeline‘ for the Ottawa Citizen

Do take a look at the timeline; there’s more to it than what I’ve written here, and I’m sure there’s more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It’s fascinating, though, how often a failure to listen presages far deeper problems with a project.

Both Conservative and Liberal governments contributed to the Phoenix debacle, but it seems the gravest concern is with senior government bureaucrats. You might think things have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,

The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.

Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.

In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix − for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.

Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.

Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.

Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”

Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”

The Privy Council Clerk is the top-level bureaucrat (and there is only one such clerk) in the civil/public service, and I think his quotes are quite telling of “pervasive cultural problems.” There’s a new Privy Council Clerk but, from what I can tell, he was well trained by his predecessor.

Do* we really need senior government bureaucrats?

I now have an example of bureaucratic interference, specifically with the Global Public Health Intelligence Network (GPHIN), where it would seem that not much has changed, from a December 26, 2020 article by Grant Robertson for the Globe & Mail,

When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19

As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.

With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.

“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”

Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”

It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.

Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.

By late February [2020], Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.

“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”

China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”

It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.

But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.

The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.

However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.

The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July [2020], are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.

Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.

Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.

Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.

If you have time, do take a look at Robertson’s December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands while taking trips to attend meetings (in desirable locations) for which they had no significant or useful input.

The Phoenix and GPHIN debacles bear a resemblance in that senior bureaucrats took over and in a state of blissful ignorance made a series of disastrous decisions bolstered by politicians who seem to neither understand nor care much about the outcomes.

If you think I’m being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton’s 2020 year-end interview with Prime Minister Trudeau (Note: there are some commercials). Then, pay special attention to Trudeau’s answer to the first question,

Responsible AI, eh?

Based on the massive mishandling of the Phoenix Pay System implementation, where top bureaucrats did not follow basic and well-established information services procedures, and the mismanagement of the Global Public Health Intelligence Network by top-level bureaucrats, I’m not sure I have a lot of confidence in any Canadian government claims about a responsible approach to using artificial intelligence.

Unfortunately, it doesn’t matter as implementation is most likely already taking place here in Canada.

Enough with the pessimism. I feel it’s necessary to end this on a mildly positive note. Hurray to the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those people striving to live up to ‘Peace, Order, and Good Government’, the bedrock principles of the Canadian Parliament.

A lot of mistakes have been made but we also do make a lot of good decisions.

*’Doe’ changed to ‘Do’ on May 14, 2021.

The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (5 of 5)

At long last, the end is in sight! This last part is mostly a collection of items that don’t fit elsewhere or could have fit elsewhere, but that particular part was already overstuffed.

Podcasting science for the people

March 2009 was the birth date for a podcast, then called Skeptically Speaking and now known as Science for the People (Wikipedia entry). Here’s more from the Science for the People About webpage,

Science for the People is a long-format interview podcast that explores the connections between science, popular culture, history, and public policy, to help listeners understand the evidence and arguments behind what’s in the news and on the shelves.

Every week, our hosts sit down with science researchers, writers, authors, journalists, and experts to discuss science from the past, the science that affects our lives today, and how science might change our future.

THE TEAM

Rachelle Saunders: Producer & Host

I love to learn new things, and say the word “fascinating” way too much. I like to talk about intersections and how science and critical thinking intersect with everyday life, politics, history, and culture. By day I’m a web developer, and I definitely listen to way too many podcasts.

….

H/t to GeekWrapped’s 20 Best Science Podcasts.

Science: human contexts and cosmopolitanism

situating science: Science in Human Contexts was a seven-year project ending in 2014 and funded by the Social Sciences and Humanities Research Council of Canada (SSHRC). Here’s more from their Project Summary webpage,

Created in 2007 with the generous funding of the Social Sciences and Humanities Research Council of Canada Strategic Knowledge Cluster grant, Situating Science is a seven-year project promoting communication and collaboration among humanists and social scientists that are engaged in the study of science and technology.

You can find out more about Situating Science’s final days in my August 16, 2013 posting where I included a lot of information about one of their last events titled, “Science and Society 2013 Symposium; Emerging Agendas for Citizens and the Sciences.”

The “think-tank” will dovetail nicely with a special symposium in Ottawa on Science and Society Oct. 21-23. For this symposium, the Cluster is partnering with the Institute for Science, Society and Policy to bring together scholars from various disciplines, public servants and policy workers to discuss key issues at the intersection of science and society. [emphasis mine]  The discussions will be compiled in a document to be shared with stakeholders and the wider public.

The team will continue to seek support and partnerships for projects within the scope of its objectives. Among our top priorities are a partnership to explore sciences, technologies and their publics as well as new partnerships to build upon exchanges between scholars and institutions in India, Singapore and Canada.

The Situating Science folks did attempt to carry on the organization’s work by rebranding it as the Canadian Consortium for Situating Science and Technology (CCSST). It seems to have been a short-lived volunteer effort.

Meanwhile, the special symposium held in October 2013 appears to have been the springboard for another SSHRC funded multi-year initiative, this time focused on science collaborations between Canada, India, and Singapore, Cosmopolitanism and the Local in Science and Nature from 2014 – 2017. Despite their sunset year having been in 2017, their homepage boasts news about a 2020 Congress and their Twitter feed is still active. Harking back, here’s what the project was designed to do, from the About Us page,

Welcome to our three year project that will establish a research network on “Cosmopolitanism” in science. It closely examines the actual types of negotiations that go into the making of science and its culture within an increasingly globalized landscape. This partnership is both about “cosmopolitanism and the local” and is, at the same time, cosmopolitan and local.

Anyone who reads this blog with any frequency will know that I often comment on the fact that when organizations such as the Council of Canadian Academies bring in experts from other parts of the world, they are almost always from the US or Europe. So, I was delighted to discover the Cosmopolitanism project and featured it in a February 19, 2015 posting.

Here’s more from Cosmopolitanism’s About Us page

Specifically, the project will:

  1. Expose a hitherto largely Eurocentric scholarly community in Canada to widening international perspectives and methods,
  2. Build on past successes at border-crossings and exchanges between the participants,
  3. Facilitate a much needed nation-wide organization and exchange amongst Indian and South East Asian scholars, in concert with their Canadian counterparts, by integrating into an international network,
  4. Open up new perspectives on the genesis and place of globalized science, and thereby
  5. Offer alternative ways to conceptualize and engage globalization itself, and especially the globalization of knowledge and science.
  6. Bring the managerial team together for joint discussion, research exchange, leveraging and planning – all in the aid of laying the grounds of a sustainable partnership

Eco Art (also known as ecological art or environmental art)

I’m of two minds as to whether I should have tried to stuff this into the art/sci subsection in part 2. On balance, I decided that this merited its own section and that part 2 was already overstuffed.

Let’s start in Newfoundland and Labrador with Marlene Creates (pronounced Kreets); here’s more about her from her website’s bio webpage,

Marlene Creates (pronounced “Kreets”) is an environmental artist and poet who works with photography, video, scientific and vernacular knowledge, walking and collaborative site-specific performance in the six-acre patch of boreal forest in Portugal Cove, Newfoundland and Labrador, Canada, where she lives.

For almost 40 years her work has been an exploration of the relationship between human experience, memory, language and the land, and the impact they have on each other. …

Currently her work is focused on the six acres of boreal forest where she lives in a ‘relational aesthetic’ to the land. This oeuvre includes Water Flowing to the Sea Captured at the Speed of Light, Blast Hole Pond River, Newfoundland 2002–2003, and several ongoing projects:

Marlene Creates received a Governor General’s Award in Visual and Media Arts for “Lifetime Artistic Achievement” in 2019. …

As mentioned in her bio, Creates has a ‘forest’ project, The Boreal Poetry Garden, Portugal Cove, Newfoundland 2005– (ongoing). If you are interested in exploring it, she has created a virtual walk here. Just click on one of the index items on the right side of the screen to activate a video.

An October 1, 2018 article by Yasmin Nurming-Por for Canadian Art magazine features 10 artists who focus on environmental and/or land art themes,

As part of her 2016 master’s thesis exhibition, Fredericton [New Brunswick] artist Gillian Dykeman presented the video Dispatches from the Feminist Utopian Future within a larger installation that imagined various canonical earthworks from the perspective of the future. It’s a project that addresses the inherent sense of timelessness in these massive interventions on the natural landscape from the perspective of contemporary land politics. … she proposes a kind of interaction with the invasive and often colonial gestures of modernist Land art, one that imagines a different future for these earthworks, where they are treated as alien in a landscape and as beacons from a feminist future.

A video trailer featuring “DISPATCHES FROM THE FEMINIST UTOPIAN FUTURE” (from Dykeman’s website archive page featuring the show),

If you have the time, I recommend reading the article in its entirety.

Oddly, I did not expect Vancouver to have such an active eco arts focus. The City of Vancouver Parks Board maintains an Environmental Art webpage on its site listing a number of current and past projects.

I cannot find the date for when this Parks Board initiative started but I did find a document produced prior to a Spring 2006 Arts & Ecology think tank held in Vancouver under the auspices of the Canada Council for the Arts, the Canadian Commission for UNESCO, the Vancouver Foundation, and the Royal Society for the Encouragement of the Arts, Manufactures and Commerce (London UK).

In all likelihood, Vancouver Park Board’s Environmental Art webpage was produced after 2006.

I imagine the document and the think tank session helped to anchor any then current eco art projects and encouraged more projects.

The document (MAPPING THE TERRAIN OF CONTEMPORARY ECOART PRACTICE AND COLLABORATION), while almost 14 years old, offers a fascinating overview of what was happening internationally and in Canada.

While its early days were in 2008, EartHand Gleaners (Vancouver-based) wasn’t formally founded as an arts not-for-profit organization until 2013. You can find out more about them and their projects here.

Eco Art has been around for decades according to the eco art think tank document, but it does seem to have gained momentum here in Canada over the last decade.

Photography and the Natural Sciences and Engineering Research Council of Canada (NSERC)

Exploring the jack pine tight-knit family tree. Credit: Dana Harris, Brock University (2018)

Pictured are developing phloem, cambial, and xylem cells (blue), and mature xylem cells (red), in the outermost portion of a jack pine tree. This research aims to identify the influences of climate on the cellular development of the species at its northern limit in Yellowknife, NT. The differences in these cell formations are what create the annual tree ring boundary.

Science Exposed is a photography contest for scientists which has been run since 2016 (assuming the Past Winners archive is a good indicator for the programme’s starting year).

The 2020 competition recently closed but public voting should start soon. It’s nice to see that NSERC is now making efforts to engage members of the general public rather than focusing its efforts solely on children. The UK’s ASPIRES project seems to support the idea that adults need to be more fully engaged with STEM (science, technology, engineering, and mathematics) efforts as it found that children’s attitudes toward science are strongly influenced by their parents’ and relatives’ attitudes. (See my January 31, 2012 posting.)

Ingenious, the book and Ingenium, the science museums

To celebrate Canada’s 150th anniversary in 2017, then Governor General David Johnston and Tom Jenkins (Chair of the board for Open Text and former Chair of the federal committee overseeing the ‘Review of Federal Support to R&D’ [see my October 21, 2011 posting about the resulting report]) wrote a book about Canada’s inventors and inventions.

Johnston and Jenkins jaunted around the country launching their book (I have more about their June 1, 2017 Vancouver visit in a May 30, 2017 posting; scroll down about 60% of the way).

The book’s full title, “Ingenious: How Canadian Innovators Made the World Smarter, Smaller, Kinder, Safer, Healthier, Wealthier and Happier,” outlines their thesis neatly.

Not all that long after the book was launched, there was a name change (thankfully) for the Canada Science and Technology Museums Corporation (CSTMC). It is now known as Ingenium (covered in my August 10, 2017 posting).

The reason that name change was such a relief (for those who don’t know) is that the corporation included three national science museums: Canada Aviation and Space Museum, Canada Agriculture and Food Museum, and (wait for it) Canada Science and Technology Museum. On the list of confusing names, this ranks very high for me. Again, I give thanks for the change from CSTMC to Ingenium, leaving the name for the museum alone.

2017 was also the year that the newly refurbished Canada Science and Technology Museum was reopened after more than three years (see my June 23, 2017 posting about the November 2017 reopening and my June 12, 2015 posting for more information about the situation that led to the closure).

A Saskatchewan lab, Convergence, Order of Canada, Year of Science, Animated Mathematics, a graphic novel, and new media

Since this section is jampacked, I’m using subheads.

Saskatchewan

Dr. Brian Eames hosts an artist-in-residence, Jean-Sebastien (JS) Gauthier, at the University of Saskatchewan’s College of Medicine Eames Lab. A February 16, 2018 posting here featured their first collaboration. It covered evolutionary biology, the synchrotron (Canadian Light Source [CLS]) in Saskatoon, and the ‘ins and outs’ of a collaboration between a scientist and an artist. Presumably the artist-in-residence position indicates that first collaboration went very well.

In January 2020, Brian kindly gave me an update on their current projects. Jean-Sebastien successfully coded an interactive piece for an exhibit at the 2019 Nuit Blanche Saskatoon event using Kinect (Xbox). More recently, he got a VR [virtual reality] helmet for an upcoming project or two.

After much clicking on the Nuit Blanche Saskatoon 2019 interactive map, I found this,

Our Glass is a work of interactive SciArt co-created by artist JS Gauthier and biologist Dr Brian F. Eames. It uses cutting-edge 3D microscopic images produced for artistic purposes at the Canadian Light Source, Canada’s only synchrotron facility. Our Glass engages viewers of all ages to peer within an hourglass showing how embryonic development compares among animals with whom we share a close genetic heritage.

Eames also mentioned they were hoping to hold an international SciArt Symposium at the University of Saskatchewan in 2021.

Convergence

Dr. Cristian Zaelzer-Perez, an instructor at Concordia University (Montreal; read this November 20, 2019 Concordia news release by Kelsey Rolfe for more about his work and awards), in 2016 founded the Convergence Initiative, a not-for-profit organization that encourages interdisciplinary neuroscience and art collaborations.

Cat Lau’s December 23, 2019 posting for the Science Borealis blog provides insight into Zaelzer-Perez’s relationship to science and art,

Cristian: I have had a relationship with art and science ever since I have had memory. As a child, I loved to do classifications, from grouping different flowers to collecting leaves by their shapes. At the same time, I really loved to draw them and for me, both things never looked different; they (art and science) have always worked together.

I started as a graphic designer, but the pursuit to learn about nature was never dead. At some point, I knew I wanted to go back to school to do research, to explore and learn new things. I started studying medical technologies, then molecular biology and then jumped into a PhD. At that point, my life as a graphic designer slipped down, because of the focus you have to give to the discipline. It seemed like every time I tried to dedicate myself to one thing, I would find myself doing the other thing a couple years later.

I came to Montreal to do my post-doc, but I had trouble publishing, which became problematic in getting a career. I was still loving what I was doing, but not seeing a future in that. Once again, art came back into my life and at the same time I saw that science was becoming really hard to understand and scientists were not doing much to bridge the gap.

The Convergence Initiative has an impressive array of programmes. Do check it out.

Order of Canada and ‘The Science Lady’

For a writer of children’s science books, an appointment to the Order of Canada is a singular honour. I cannot recall any children’s science book writer before Shar Levine being appointed a Member of the Order of Canada. Known as ‘The Science Lady‘, Levine was appointed in 2016. Here’s more from her Wikipedia entry, Note: Links have been removed,

Shar Levine (born 1953) is an award-winning, best selling Canadian children’s author, and designer.

Shar has written over 70 books and book/kits, primarily on hands-on science for children. For her work in Science literacy and Science promotion, Shar has been appointed to the 2016 Order of Canada. In 2015, she was recognized by the University of Alberta and received their Alumni Honour Award. Levine, and her co-author, Leslie Johnstone, were co-recipients of the Eve Savory Award for Science Communication from the BC Innovation Council (2006) and their book, Backyard Science, was a finalist for the Subaru Award (hands-on activity) from the American Association for the Advancement of Science, Science Books and Films (2005). The Ultimate Guide to Your Microscope was a finalist for the 2008 American Association for the Advancement of Science/Subaru Science Books and Films Prize (Hands-On Science/Activity Books).

To get a sense of what an appointment to the Order of Canada means, here’s a description from the government of Canada website,

The Order of Canada is how our country honours people who make extraordinary contributions to the nation.

Since its creation in 1967—Canada’s centennial year—more than 7 000 people from all sectors of society have been invested into the Order. The contributions of these trailblazers are varied, yet they have all enriched the lives of others and made a difference to this country. Their grit and passion inspire us, teach us and show us the way forward. They exemplify the Order’s motto: DESIDERANTES MELIOREM PATRIAM (“They desire a better country”).

Year of Science in British Columbia

In the Fall of 2010, the British Columbia provincial government announced a Year of Science (coinciding with the school year). Originally, it was supposed to be a provincial government-wide initiative but the idea percolated through any number of processes and emerged as a year dedicated to science education for youth (according to the idea’s originator, Moira Stilwell, who was then a Member of the Legislative Assembly [MLA]; I spoke with her sometime in 2010 or 2011).

As the ‘year’ drew to a close, there was a finale ($1.1M in funding), which was featured here in a July 6, 2011 posting.

The larger portion of the money ($1M) was awarded to Science World, while $100,000 ($0.1M) was given to the Pacific Institute for the Mathematical Sciences. To my knowledge, there have been no followup announcements about how the money was used.

Animation and mathematics

In Toronto, mathematician Dr. Karan Singh enjoyed a flurry of interest due to his association with animator Chris Landreth and their Academy Award (Oscar) winning 2004 animated film, Ryan. They have continued to work together as members of the Dynamic Graphics Project (DGP) Lab at the University of Toronto. Theirs is not the only Oscar-winning work to emerge from one or more of the members of the lab. Jos Stam, DGP graduate and adjunct professor, won his third in 2019.

A graphic novel and medical promise

An academic at Simon Fraser University since 2015, Coleman Nye worked with three other women to produce a graphic novel about medical dilemmas in a genre described as ‘ethno-fiction’.

Lissa: A Story about Medical Promise, Friendship, and Revolution (2017) is by Sherine Hamdy and Coleman Nye, two anthropologists, with art by Sarula Bao and Caroline Brewer, two artists.

Here’s a description of the book from the University of Toronto Press website,

As young girls in Cairo, Anna and Layla strike up an unlikely friendship that crosses class, cultural, and religious divides. Years later, Anna learns that she may carry the hereditary cancer gene responsible for her mother’s death. Meanwhile, Layla’s family is faced with a difficult decision about kidney transplantation. Their friendship is put to the test when these medical crises reveal stark differences in their perspectives…until revolutionary unrest in Egypt changes their lives forever.

The first book in a new series [ethnoGRAPHIC; a series of graphic novels from the University of Toronto Press], Lissa brings anthropological research to life in comic form, combining scholarly insights and accessible, visually-rich storytelling to foster greater understanding of global politics, inequalities, and solidarity.

I hope to write more about this graphic novel in a future posting.

New Media

I don’t know if this could be described as a movement yet but it’s certainly an interesting minor development. Two new media centres have hosted, in the last four years, art/sci projects and/or workshops. It’s unexpected given this definition from the Wikipedia entry for New Media (Note: Links have been removed),

New media are forms of media that are computational and rely on computers for redistribution. Some examples of new media are computer animations, computer games, human-computer interfaces, interactive computer installations, websites, and virtual worlds.[1][2]

In Manitoba, the Video Pool Media Arts Centre hosted a February 2016 workshop Biology as a New Art Medium: Workshop with Marta De Menezes. De Menezes, an artist from Portugal, gave workshops and talks in both Winnipeg (Manitoba) and Toronto (Ontario). Here’s a description for the one in Winnipeg,

This workshop aims to explore the multiple possibilities of artistic approaches that can be developed in relation to Art and Microbiology in a DIY situation. A special emphasis will be placed on the development of collaborative art and microbiology projects where the artist has to learn some biological research skills in order to create the artwork. The course will consist of a series of intense experimental sessions that will give raise to discussions on the artistic, aesthetic and ethical issues raised by the art and the science involved. Handling these materials and organisms will provoke a reflection on the theoretical issues involved and the course will provide background information on the current diversity of artistic discourses centred on biological sciences, as well a forum for debate.

VIVO Media Arts Centre in Vancouver hosted Invasive Systems in 2019. From the exhibition page,

Picture this – a world where AI invades human creativity, bacteria invade our brains, and invisible technological signals penetrate all natural environments. Where invasive species from plants to humans transform spaces where they don’t belong, technology infiltrates every aspect of our daily lives, and the waste of human inventions ravages our natural environments.

This weekend festival includes an art-science exhibition [emphasis mine], a hands-on workshop (Sat, separate registration required), and guided discussions and tours by the curator (Sat/Sun). It will showcase collaborative works by three artist/scientist pairs, and independent works by six artists. Opening reception will be on Friday, November 8 starting at 7pm; curator’s remarks and performance by Edzi’u at 7:30pm and 9pm. 

New Westminster’s (British Columbia) New Media Gallery hosted an exhibition, ‘winds‘ (June 20 – September 29, 2019), that could be described as an art/sci exhibition,

Landscape and weather have long shared an intimate connection with the arts.  Each of the works here is a landscape: captured, interpreted and presented through a range of technologies. The four artists in this exhibition have taken, as their material process, the movement of wind through physical space & time. They explore how our perception and understanding of landscape can be interpreted through technology. 

These works have been created by what might be understood as a sort of scientific method or process that involves collecting data, acute observation, controlled experiments and the incorporation of measurements and technologies that control or collect motion, pressure, sound, pattern and the like. …

Council of Canadian Academies, Publishing, and Open Access

Established in 2005, the Council of Canadian Academies (CCA) (Wikipedia entry) is tasked by various departments and agencies to answer their queries about science issues that could affect the populace and/or the government. In 2014, the CCA published a report titled, Science Culture: Where Canada Stands. It was in response to the Canada Science and Technology Museums Corporation (now called Ingenium), Industry Canada, and Natural Resources Canada and their joint request that the CCA conduct an in-depth, independent assessment to investigate the state of Canada’s science culture.

I gave a pretty extensive analysis of the report, which I delivered in four parts: Part 1, Part 2 (a), Part 2 (b), and Part 3. In brief, the term ‘science culture’ seems to be specifically Canadian, i.e., it’s not used elsewhere in the world (as far as we know). We have lots to be proud of. I was a little disappointed by the lack of culture (arts) producers on the expert panel and, as usual, I bemoaned the fact that the international community, whether included as reviewers, as members of the panel, or as points for comparison, was drawn from the usual suspects (the US, the UK, or somewhere in northern Europe).

Science publishing in Canada took a bit of a turn in 2010, when the country’s largest science publisher, NRC (National Research Council) Research Press, was cut loose from the government and spun out as the private, *not-for-profit publisher*, Canadian Science Publishing (CSP). From the CSP Wikipedia entry,

Since 2010, Canadian Science Publishing has acquired five new journals and launched four new journals.

Canadian Science Publishing offers researchers options to make their published papers freely available (open access) in their standard journals and in their open access journals (from the CSP Wikipedia entry),

Arctic Science aims to provide a collaborative approach to Arctic research for a diverse group of users including government, policy makers, the general public, and researchers across all scientific fields

FACETS is Canada’s first open access multidisciplinary science journal, aiming to advance science by publishing research from the multi-faceted global research community. FACETS is the official journal of the Royal Society of Canada’s Academy of Science.

Anthropocene Coasts aims to understand and predict the effects of human activity, including climate change, on coastal regions.

In addition, Canadian Science Publishing strives to make their content accessible through the CSP blog that includes plain language summaries of featured research. The open-access journal FACETS similarly publishes plain language summaries.

*comment removed*

CSP announced (on Twitter) a new annual contest in 2016,

Canadian Science Publishing@cdnsciencepub

New CONTEST! Announcing Visualizing Science! Share your science images & win great prizes! Full details on the blog: http://cdnsciencepub.com/blog/2016-csp-image-contest-visualizing-science.aspx
1:45 PM · Sep 19, 2016 · TweetDeck

The 2016 blog posting is no longer accessible. Oddly for a contest of this type, I can’t find an image archive for previous contests. Regardless, a new competition has been announced for Summer 2020. There are some details on the VISUALIZING SCIENCE 2020 webpage but some are missing, e.g., no opening date and no deadline. They are encouraging you to sign up for notices.

Back to open access: in a January 22, 2016 posting, I featured news about Montreal Neuro (Montreal Neurological Institute [MNI] in Québec, Canada), its then-new policy giving researchers worldwide access to its research, and its pledge that it would not seek patents for its work.

Fish, Newfoundland & Labrador, and Prince Edward Island

AquAdvantage’s genetically modified salmon was approved for consumption in Canada according to my May 20, 2016 posting. The salmon are produced/farmed by a US company (AquaBounty) but the work of genetically modifying Atlantic salmon with genetic material from the Chinook (a Pacific ocean salmon) was mostly undertaken at Memorial University in Newfoundland & Labrador.

The process by which work done in Newfoundland & Labrador becomes the property of a US company is one that’s well known here in Canada. The preliminary work and technology is developed here and then purchased by a US company, which files patents, markets, and profits from it. Interestingly, the fish farms for the AquAdvantage salmon are mostly (two out of three) located on Prince Edward Island.

Intriguingly, 4.5 tonnes of the modified fish were sold for consumption in Canada without consumers being informed (see my Sept. 13, 2017 posting, scroll down about 45% of the way).

It’s not all sunshine and roses where science culture in Canada is concerned. Incidents where Canadians are not informed let alone consulted about major changes in the food supply and other areas are not unusual. Too many times, scientists, politicians, and government policy experts want to spread news about science without any response from the recipients who are in effect viewed as a ‘tabula rasa’ or a blank page.

Tying it all up

This series has been my best attempt to document, in some fashion or another, the extraordinary range of science culture in Canada from roughly 2010-19. Thank you! What this series describes represents a huge amount of work and effort to develop science culture in Canada, and I am deeply thankful that people give so much to this effort.

I have inevitably missed people and organizations and events. For that I am very sorry. (There is an addendum to the series as it’s been hard to stop but I don’t expect to add anything or anyone more.)

I want to mention, but can’t expand upon, the Pan-Canadian Artificial Intelligence Strategy, which was established in the 2017 federal budget (see my March 31, 2017 posting about the Vector Institute and Canada’s artificial intelligence sector).

Science Borealis, the Canadian science blog aggregator, owes its existence to Canadian Science Publishing for the support (programming and financial) needed to establish itself and, I believe, that support is still ongoing. I think thanks are also due to Jenny Ryan who was working for CSP and championed the initiative. Jenny now works for Canadian Blood Services. Interestingly, that agency added a new programme, a ‘Lay Science Writing Competition’, in 2018. It’s offered in partnership with two other groups, the Centre for Blood Research at the University of British Columbia and Science Borealis.

While the Royal Astronomical Society of Canada does not fit into my time frame as it lists as its founding date December 1, 1868 (18 months after confederation), the organization did celebrate its 150th anniversary in 2018.

Vancouver’s Electric Company often produces theatrical experiences that cover science topics such as the one featured in my June 7, 2013 posting, You are very star—an immersive transmedia experience.

Let’s Talk Science (Wikipedia entry) has been heavily involved in offering both curricular and extra-curricular STEM (science, technology, engineering, and mathematics) programming across Canada since 1993.

The Royal Canadian Institute of Science (Wikipedia entry) predates confederation, having been founded in Toronto in 1849 by Sir Sandford Fleming and Kivas Tully for surveyors, civil engineers, and architects. With almost no interruption, they have been delivering a regular series of lectures on the University of Toronto campus since 1913.

The Perimeter Institute for Theoretical Physics is a more recent beast. In 1999, Mike Lazaridis, founder of Research In Motion (now known as Blackberry Limited), acted as both founder and major benefactor for this institute in Waterloo, Ontario. It offers substantive and imaginative outreach programmes such as Arts and Culture: “Event Horizons is a series of unique and extraordinary events that aim to stimulate and enthral. It is a showcase of innovative work of the highest international standard, an emotional, intellectual, and creative experience. And perhaps most importantly, it is a social space, where ideas collide and curious minds meet.”

While gene-editing hasn’t seemed to be top-of-mind for anyone other than those in the art/sci community, that may change. My April 26, 2019 posting focused on what appears to be a campaign to reverse Canada’s criminal ban on human gene-editing of inheritable cells (germline). With less potential for controversy, there is a discussion about somatic gene therapies and engineered cell therapies. A report from the Council of Canadian Academies is due in the Fall of 2020. (The therapies being discussed do not involve germline editing.)

French language science media and podcasting

Agence Science-Presse is unique as it is the only press agency in Canada devoted to science news. Founded in 1978, it has been active in print, radio, television, online blogs, and podcasts (Baladodiffusion). You can find their Twitter feed here.

I recently stumbled across ‘un balado’ (a podcast) titled 20%. Started in January 2019 by the magazine Québec Science, the podcast is devoted to women in science and technology. 20%, the podcast’s name, is the statistic representing the proportion of women in those fields: “Dans les domaines de la science et de la technologie, les femmes ne forment que 20% de la main-d’oeuvre.” (“In the fields of science and technology, women make up only 20% of the workforce”; from the podcast webpage.) The podcast is a co-production between Québec Science [founded in 1962] and l’Acfas [formerly l’Association Canadienne-Française pour l’Avancement des Sciences, now the Association francophone pour le savoir], in collaboration with the Canadian Commission for UNESCO, L’Oréal Canada, and the radio station Choq.ca (also from the podcast webpage).

Does it mean anything?

There have been many developments since I started writing this series in late December 2019. In January 2020, Iran shot down a Ukrainian passenger jet shortly after it took off from Tehran. That error killed some 176 people, many of them (136 Canadians and students) bound for Canada. The number of people who were involved in the sciences, technology, and medicine was striking.

It was a shocking loss and will reverberate for quite some time. There is a memorial posting here (January 13, 2020), which includes links to another memorial posting and an essay.

As I write this we are dealing with a pandemic, COVID-19, which has us all practicing physical and social distancing. Congregations of large numbers are expressly forbidden. All of this is being done in a bid to lessen the passage of the virus, SARS-CoV-2 which causes COVID-19.

In the short term at least, it seems that much of what I’ve described in these five parts (and the addendum) will undergo significant changes or simply fade away.

As for the long term, with the last 10 years having hosted the most lively science culture scene I can ever recall, I’m hopeful that science culture in Canada will do more than survive; it will thrive.

For anyone who missed them:

Part 1 covers science communication, science media (mainstream and others such as blogging) and arts as exemplified by music and dance: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (1 of 5).

Part 2 covers art/science (or art/sci or sciart) efforts, science festivals both national and local, international art and technology conferences held in Canada, and various bar/pub/café events: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (2 of 5).

Part 3 covers comedy, do-it-yourself (DIY) biology, chief science advisor, science policy, mathematicians, and more: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (3 of 5).

Part 4 covers citizen science, birds, climate change, indigenous knowledge (science), and the IISD Experimental Lakes Area: The decade that was (2010-19) and the decade to come (2020-29): Science culture in Canada (4 of 5).

*”for-profit publisher, Canadian Science Publishing (CSP)” corrected to “not-for-profit publisher, Canadian Science Publishing (CSP)” and this comment “Not bad for a for-profit business, eh?” removed on April 29, 2020 as per Twitter comments,

Canadian Science Publishing @cdnsciencepub

Hi Maryse, thank you for alerting us to your blog. To clarify, Canadian Science Publishing is a not-for-profit publisher. Thank you as well for sharing our image contest. We’ve updated the contest page to indicate that the contest opens July 2020!

10:01am · 29 Apr 2020 · Twitter Web App

Summer (2019) Institute on AI (artificial intelligence) Societal Impacts, Governance, and Ethics. Summer Institute In Alberta, Canada

The deadline for applications is April 7, 2019. As for whether or not you might like to attend, here’s more from a joint March 11, 2019 Alberta Machine Intelligence Institute (Amii)/Canadian Institute for Advanced Research (CIFAR)/University of California at Los Angeles (UCLA) Law School news release (also on globalnewswire.com),

What will Artificial Intelligence (AI) mean for society? That’s the question scholars from a variety of disciplines will explore during the inaugural Summer Institute on AI Societal Impacts, Governance, and Ethics. Summer Institute, co-hosted by the Alberta Machine Intelligence Institute (Amii) and CIFAR, with support from UCLA School of Law, takes place July 22-24, 2019 in Edmonton, Canada.

“Recent advances in AI have brought a surge of attention to the field – both excitement and concern,” says co-organizer and UCLA professor, Edward Parson. “From algorithmic bias to autonomous vehicles, personal privacy to automation replacing jobs. Summer Institute will bring together exceptional people to talk about how humanity can receive the benefits and not get the worst harms from these rapid changes.”

Summer Institute brings together experts, grad students and researchers from multiple backgrounds to explore the societal, governmental, and ethical implications of AI. A combination of lectures, panels, and participatory problem-solving, this comprehensive interdisciplinary event aims to build understanding and action around these high-stakes issues.

“Machine intelligence is opening transformative opportunities across the world,” says John Shillington, CEO of Amii, “and Amii is excited to bring together our own world-leading researchers with experts from areas such as law, philosophy and ethics for this important discussion. Interdisciplinary perspectives will be essential to the ongoing development of machine intelligence and for ensuring these opportunities have the broadest reach possible.”

Over the three-day program, 30 graduate-level students and early-career researchers will engage with leading experts and researchers including event co-organizers: Western University’s Daniel Lizotte, Amii’s Alona Fyshe and UCLA’s Edward Parson. Participants will also have a chance to shape the curriculum throughout this uniquely interactive event.

Summer Institute takes place prior to Deep Learning and Reinforcement Learning Summer School, and includes a combined event on July 24th [2019] for both Summer Institute and Summer School participants.

Visit dlrlsummerschool.ca/the-summer-institute to apply; applications close April 7, 2019.

View our Summer Institute Biographies & Boilerplates for more information on confirmed faculty members and co-hosting organizations. Follow the conversation through social media channels using the hashtag #SI2019.

Media Contact: Spencer Murray, Director of Communications & Public Relations, Amii
t: 587.415.6100 | c: 780.991.7136 | e: spencer.murray@amii.ca

There’s a bit more information on The Summer Institute on AI and Society webpage (on the Deep Learning and Reinforcement Learning Summer School 2019 website) such as this more complete list of speakers,

Confirmed speakers at Summer Institute include:

Alona Fyshe, University of Alberta/Amii (SI co-organizer)
Edward Parson, UCLA (SI co-organizer)
Daniel Lizotte, Western University (SI co-organizer)
Geoffrey Rockwell, University of Alberta
Graham Taylor, University of Guelph/Vector Institute
Rob Lempert, Rand Corporation
Gary Marchant, Arizona State University
Richard Re, UCLA
Evan Selinger, Rochester Institute of Technology
Elana Zeide, UCLA

Two questions: why are all the Summer Institute faculty either Canada- or US-based? And what about South American, Asian, Middle Eastern, and other thinkers?

One last thought, I wonder if this ‘AI & ethics summer institute’ has anything to do with the Pan-Canadian Artificial Intelligence Strategy, which CIFAR administers and where both the University of Alberta and Vector Institute are members.

AI fairytale and April 25, 2018 AI event at Canada Science and Technology Museum*** in Ottawa

These days it’s all about artificial intelligence (AI) or robots and often, it’s both. They’re everywhere and they will take everyone’s jobs, or not, depending on how you view them. Today, I’ve got two artificial intelligence items, the first of which may provoke writers’ anxieties.

Fairytales

The Princess and the Fox is a new fairytale by the Brothers Grimm, or rather, by their artificially intelligent surrogate, according to an April 18, 2018 article on the British Broadcasting Corporation’s online news website,

It was recently reported that the meditation app Calm had published a “new” fairytale by the Brothers Grimm.

However, The Princess and the Fox was written not by the brothers, who died over 150 years ago, but by humans using an artificial intelligence (AI) tool.

It’s the first fairy tale written by an AI, claims Calm, and is the result of a collaboration with Botnik Studios – a community of writers, artists and developers. Calm says the technique could be referred to as “literary cloning”.

Botnik employees used a predictive-text program to generate words and phrases that might be found in the original Grimm fairytales. Human writers then pieced together sentences to form “the rough shape of a story”, according to Jamie Brew, chief executive of Botnik.

The full version is available to paying customers of Calm, but here’s a short extract:

“Once upon a time, there was a golden horse with a golden saddle and a beautiful purple flower in its hair. The horse would carry the flower to the village where the princess danced for joy at the thought of looking so beautiful and good.

Advertising for a meditation app?

Of course, it’s advertising and it’s ‘smart’ advertising (wordplay intended). Here’s a preview/trailer,

Blair Marnell’s April 18, 2018 article for SyFy Wire provides a bit more detail,

“You might call it a form of literary cloning,” said Calm co-founder Michael Acton Smith. Calm commissioned Botnik to use its predictive text program, Voicebox, to create a new Brothers Grimm story. But first, Voicebox was given the entire collected works of the Brothers Grimm to analyze, before it suggested phrases and sentences based upon those stories. Of course, human writers gave the program an assist when it came to laying out the plot. …

“The Brothers Grimm definitely have a reputation for darkness and many of their best-known tales are undoubtedly scary,” Peter Freedman told SYFY WIRE. Freedman is a spokesperson for Calm who was a part of the team behind the creation of this story. “In the process of machine-human collaboration that generated The Princess and The Fox, we did gently steer the story towards something with a more soothing, calm plot and vibe, that would make it work both as a new Grimm fairy tale and simultaneously as a Sleep Story on Calm.” [emphasis mine]

….

If Marnell’s article is to be believed, Peter Freedman doesn’t hold much hope for writers in the long-term future although we don’t need to start ‘battening down the hatches’ yet.

You can find Calm here.

You can find Botnik here and Botnik Studios here.
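Out of curiosity, here is what corpus-based predictive text looks like in its very simplest form. This is a minimal sketch only, not Botnik's Voicebox (which is proprietary and more sophisticated); it assumes a hypothetical plain-text file of Grimm tales, grimm.txt, sits in the same folder, and it merely suggests continuations learned from that corpus, much the way a phone keyboard suggests a next word.

import random
from collections import defaultdict

def build_model(text, order=2):
    # Map every run of `order` words to the words that follow it in the corpus.
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def suggest(model, seed, n_words=40):
    # Repeatedly sample a plausible next word; stop if the chain hits a dead end.
    output = list(seed)
    for _ in range(n_words):
        candidates = model.get(tuple(output[-len(seed):]))
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

if __name__ == "__main__":
    with open("grimm.txt", encoding="utf-8") as f:  # hypothetical local copy of the tales
        model = build_model(f.read(), order=2)
    print(suggest(model, seed=("Once", "upon")))

As described in the articles above, the human writers' contribution to The Princess and the Fox was essentially to pick and arrange the machine's suggestions into a coherent (and soothing) plot, something this little script makes no attempt to do.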

 

AI at Ingenium [Canada Science and Technology Museum] on April 25, 2018

Formerly known (I believe) [*Read the comments for the clarification] as the Canada Science and Technology Museum, Ingenium is hosting a ‘sold out but there will be a livestream’ Google event. From Ingenium’s ‘Curiosity on Stage Evening Edition with Google – The AI Revolution‘ event page,

Join Google, Inc. and the Canada Science and Technology Museum for an evening of thought-provoking discussions about artificial intelligence.

[April 25, 2018
7:00 p.m. – 10:00 p.m. {ET}
Fees: Free]

Invited speakers from industry leaders Google, Facebook, Element AI and Deepmind will explore the intersection of artificial intelligence with robotics, arts, social impact and healthcare. The session will end with a panel discussion and question-and-answer period. Following the event, there will be a reception along with light refreshments and networking opportunities.

The event will be simultaneously translated into both official languages as well as available via livestream from the Museum’s YouTube channel.

Seating is limited

THIS EVENT IS NOW SOLD OUT. Please join us for the livestream from the Museum’s YouTube channel. https://www.youtube.com/cstmweb *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 from someone at Ingenium.***

Speakers

David Usher (Moderator)

David Usher is an artist, best-selling author, entrepreneur and keynote speaker. As a musician he has sold more than 1.4 million albums, won 4 Junos and has had #1 singles singing in English, French and Thai. When David is not making music, he is equally passionate about his other life, as a Geek. He is the founder of Reimagine AI, an artificial intelligence creative studio working at the intersection of art and artificial intelligence. David is also the founder and creative director of the non-profit, the Human Impact Lab at Concordia University [located in Montréal, Québec]. The Lab uses interactive storytelling to revisualize the story of climate change. David is the co-creator, with Dr. Damon Matthews, of the Climate Clock. Climate Clock has been presented all over the world including the United Nations COP 23 Climate Conference and is presently on a three-year tour with the Canada Museum of Science and Innovation’s Climate Change Exhibit.

Joelle Pineau (Facebook)

The AI Revolution:  From Ideas and Models to Building Smart Robots
Joelle Pineau is head of the Facebook AI Research Lab Montreal, and an Associate Professor and William Dawson Scholar at McGill University. Dr. Pineau’s research focuses on developing new models and algorithms for automatic planning and learning in partially-observable domains. She also applies these algorithms to complex problems in robotics, health-care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is a AAAI Fellow, a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.

Pablo Samuel Castro (Google)

Building an Intelligent Assistant for Music Creators
Pablo was born and raised in Quito, Ecuador, and moved to Montreal after high school to study at McGill. He stayed in Montreal for the next 10 years, finished his bachelors, worked at a flight simulator company, and then eventually obtained his masters and PhD at McGill, focusing on Reinforcement Learning. After his PhD Pablo did a 10-month postdoc in Paris before moving to Pittsburgh to join Google. He has worked at Google for almost 6 years, and is currently a research Software Engineer in Google Brain in Montreal, focusing on fundamental Reinforcement Learning research, as well as Machine Learning and Music. Aside from his interest in coding/AI/math, Pablo is an active musician (https://www.psctrio.com), loves running (5 marathons so far, including Boston!), and discussing politics and activism.

Philippe Beaudoin (Element AI)

Concrete AI-for-Good initiatives at Element AI
Philippe cofounded Element AI in 2016 and currently leads its applied lab and AI-for-Good initiatives. His team has helped tackle some of the biggest and most interesting business challenges using machine learning. Philippe holds a Ph.D in Computer Science and taught virtual bipeds to walk by themselves during his postdoc at UBC. He spent five years at Google as a Senior Developer and Technical Lead Manager, partly with the Chrome Machine Learning team. Philippe also founded ArcBees, specializing in cloud-based development. Prior to that he worked in the videogame and graphics hardware industries. When he has some free time, Philippe likes to invent new boardgames — the kind of games where he can still beat the AI!

Doina Precup (Deepmind)

Challenges and opportunities for the AI revolution in health care
Doina Precup splits her time between McGill University, where she co-directs the Reasoning and Learning Lab in the School of Computer Science, and DeepMind Montreal, where she leads the newly formed research team since October 2017.  She got her BSc degree in computer science form the Technical University Cluj-Napoca, Romania, and her MSc and PhD degrees from the University of Massachusetts-Amherst, where she was a Fulbright fellow. Her research interests are in the areas of reinforcement learning, deep learning, time series analysis, and diverse applications of machine learning in health care, automated control and other fields. She became a senior member of AAAI in 2015, a Canada Research Chair in Machine Learning in 2016 and a Senior Fellow of CIFAR in 2017.

Interesting, oui? Not a single expert from Ottawa or Toronto. Well, Element AI has an office in Toronto. Still, I wonder why this singular focus on AI in Montréal. After all, much of the foundational work on one of the current darlings of AI, machine learning (deep learning in particular), was done at the University of Toronto, which is closely affiliated with the Vector Institute, while the Canadian Institute for Advanced Research (CIFAR), the institution in charge of the Pan-Canadian Artificial Intelligence Strategy, is also headquartered in Toronto (more about that in my March 31, 2017 posting).

Enough with my musing: For those of us on the West Coast, there’s an opportunity to attend via livestream from 4 pm to 7 pm on April 25, 2018 on xxxxxxxxx. *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 and clarification as the relationship between Ingenium and the Canada Science and Technology Museum from someone at Ingenium.***

For more about Element AI, go here; for more about DeepMind, go here for information about the parent company in the UK (the most I dug up about their Montréal office was this job posting); and, finally, Reimagine.AI is here.

Alberta adds a newish quantum nanotechnology research hub to Canada’s quantum computing research scene

One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact which sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.

Alberta’s quantum nanotechnology hub (graduate programme)

Intriguingly, it seems that Edmonton has higher aims than (an almost unnoticed) leadership in AI. Physicists at the University of Alberta have announced hopes to be just as successful as their AI brethren in a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal,

Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.

Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.

Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.

It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.

The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic material like photons or electrons behave both as particles and waves.

“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.

But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”

“Quantum computing used to be realm of science fiction, but now we’ve figured it out, it’s now a matter of engineering,” he said.

Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.

Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.
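To put Davis’s ‘both behaviours’ remark into the standard textbook notation (my addition, not from Graney’s article): a quantum bit, or qubit, is written as a superposition of its two basis states,

\[
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
\qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]

Measuring the qubit returns 0 or 1 with probabilities given by the squared magnitudes of α and β, and a register of n qubits spans 2^n such basis states at once; that scaling is the resource the engineering Davis describes is trying to harness.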

East vs. West—Again?

Ivan Semeniuk in his article, Quantum Supremacy, ignores any quantum research effort not located in either Waterloo, Ontario or metro Vancouver, British Columbia to describe a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),

 Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.

Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game when Blackberry was already in serious trouble due to a failure to recognize that the field they had helped to create was moving in a new direction. If memory serves, they were trying to keep their technology wholly proprietary, which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.

Since then Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money as they named their Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details for Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,

History has repeatedly demonstrated the power of research in physics to transform society.  As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research.  That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough technologies.

Establishing a World Class Centre in Quantum Research:

The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics.  Perimeter was established in 2000 as an independent theoretical physics research institute.  Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth).  Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute.  In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it.  Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.

Perimeter is located in a Governor-General award winning designed building in Waterloo.  Success in recruiting and resulting space requirements led to an expansion of the Perimeter facility.  A uniquely designed addition, which has been described as space-ship-like, was opened in 2011 as the Stephen Hawking Centre in recognition of one of the most famous physicists alive today who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.

Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo.  IQC was established as an experimental research institute focusing on quantum information.  Mike established IQC with an initial donation of $33.3 million.  Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives.  As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts.  Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.

Mike and Doug Fregin have been close friends since grade 5.  They are also co-founders of BlackBerry (formerly Research In Motion Limited).  Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million.  Since that time Doug has donated a total of $30 million to Perimeter Institute.  Separately, Doug helped establish the Waterloo Institute for Nanotechnology at the University of Waterloo with total gifts for $29 million.  As suggested by its name, WIN is devoted to research in the area of nanotechnology.  It has established as an area of primary focus the intersection of nanotechnology and quantum physics.

With a donation of $50 million from Mike which was matched by both the Government of Canada and the province of Ontario as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state of the art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world.  QNC was opened in September 2012 and houses researchers from both IQC and WIN.

Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:

For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper.  That said, converting these theories to experimentally demonstrable discoveries has, putting it mildly, been a challenge.  Many naysayers have suggested that achieving these discoveries was not possible and even the believers suggested that it could likely take decades to achieve these discoveries.  Recently, a buzz has been developing globally as experimentalists have been able to achieve demonstrable success with respect to Quantum Information based discoveries.  Local experimentalists are very much playing a leading role in this regard.  It is believed by many that breakthrough discoveries that will lead to commercialization opportunities may be achieved in the next few years and certainly within the next decade.

Recognizing the unique challenges for the commercialization of quantum technologies (including risk associated with uncertainty of success, complexity of the underlying science and high capital / equipment costs) Mike and Doug have chosen to once again lead by example.  The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers that develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications.  Their goal in establishing this Fund is to lead in the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.

Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),

… as happened with Blackberry, the world is once again catching up. While Canada’s funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop companies and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country’s quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.

Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?

Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.

Semeniuk offers an overview of the D-Wave Systems story,

D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …

The group soon concluded that the kind of machine most scientists were pursuing based on so-called gate-model architecture was decades away from being realized—if ever. …

Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the application that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”

D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.

It seems Lazaridis is not the only one who likes to hold company information tightly.
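For context (my gloss, not Semeniuk’s): a quantum annealer such as D-Wave’s is built to find low-energy configurations of an Ising-type objective, which in LaTeX notation reads

\[
E(s) = \sum_{i} h_{i} s_{i} + \sum_{i<j} J_{ij} s_{i} s_{j},
\qquad s_{i} \in \{-1, +1\} .
\]

Problems like the Sudoku and wedding-seating demonstrations mentioned above are encoded by choosing the coefficients h and J so that the lowest-energy assignment of the variables corresponds to a valid (or optimal) solution; the machine then ‘anneals’ toward that minimum instead of running the gate-by-gate operations of the universal, gate-model computers discussed elsewhere in the article.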

Back to Semeniuk and D-Wave,

Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15 million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.

But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …

Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …

I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),

Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.

There’s a lot more to Semeniuk’s article but this is the last excerpt,

The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].

I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?

In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined report, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).

Finally, you can find Semeniuk’s October 2017 article here but be aware it’s behind a paywall.

Whither we goest?

Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate programme, it’s early days yet and no one should ever count out Alberta.

Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference.  The first of two days coincides with IROS 2017 – one of the premiere robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem.” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises concerns data about accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data have made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) News online, another ethical issue is raised by Suzanne Gildert, a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier (Note: Links have been removed),

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.
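
To make “the robot can learn from how the human pilot handles the problematic task” a little more concrete, here’s a minimal sketch of behavioural cloning, the simplest form of learning from demonstration. Everything in it (the feature count, the classifier, the confidence threshold) is my own illustrative assumption rather than anything Kindred has described: the robot logs what the pilot saw and what the pilot did, fits a model mapping observations to actions, and hands control back to a human when the model is unsure.

```python
# Illustrative behavioural cloning sketch (not Kindred's actual pipeline):
# learn a policy that maps what the human pilot observed to what the pilot did.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Pretend each teleoperated episode logged (observation, action) pairs.
# Here the "observations" are 4 made-up sensor features per time step and the
# "actions" are a binary choice the pilot made (0 = push item aside, 1 = grasp item).
rng = np.random.default_rng(0)
observations = rng.normal(size=(500, 4))         # stand-in for logged sensor features
actions = (observations[:, 0] > 0).astype(int)   # stand-in for the pilot's logged choices

# Fit a policy that imitates the pilot's demonstrated behaviour.
policy = RandomForestClassifier(n_estimators=50, random_state=0)
policy.fit(observations, actions)

# At run time the robot consults the learned policy; when the policy is unsure,
# control is handed back to the human pilot, as in the scenario Braga describes.
new_observation = rng.normal(size=(1, 4))
probabilities = policy.predict_proba(new_observation)[0]
if probabilities.max() < 0.8:
    print("Low confidence -- hand control to the human pilot")
else:
    print("Robot acts on its own; chosen action:", int(probabilities.argmax()))
```

Gildert’s worry follows directly from this setup: whatever mannerisms, habits, or biases are present in the pilot’s logged demonstrations get baked into the fitted policy.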

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, I find it easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, ‘microscopic’ bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release (h/t ScienceDaily March 28, 2017 news item),

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’) or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Robots in Vancouver and in Canada (one of two)

This piece just started growing. It started with robot ethics, moved on to sexbots and news of an upcoming Canadian robotics roadmap. Then, it became a two-part posting with the robotics strategy (roadmap) moving to part two along with robots and popular culture and a further exploration of robot and AI ethics issues.

What is a robot?

There are lots of robots, some are macroscale and others are at the micro and nanoscales (see my Sept. 22, 2017 posting for the latest nanobot). Here’s a definition from the Robot Wikipedia entry that covers all the scales. (Note: Links have been removed),

A robot is a machine—especially one programmable by a computer— capable of carrying out a complex series of actions automatically.[2] Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.

Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. [emphasis mine] By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.

We may think we’ve invented robots but the idea has been around for a very long time (from the Robot Wikipedia entry; Note: Links have been removed),

Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus[18] (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the Cretan island of Europa from pirates.

In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) “applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures.”[19][20] In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called “The Pigeon”. Hero of Alexandria (10–70 AD), a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.[21]

The 11th century Lokapannatti tells of how the Buddha’s relics were protected by mechanical robots (bhuta vahana yanta), from the kingdom of Roma visaya (Rome); until they were disarmed by King Ashoka. [22] [23]

In ancient China, the 3rd century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an ‘artificer’. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical ‘handiwork’ made of leather, wood, and artificial organs.[14] There are also accounts of flying automata in the Han Fei Zi and other texts, which attributes the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds (ma yuan) that could successfully fly.[17] In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours.

The beginning of automata is associated with Su Song’s astronomical clock tower, which featured mechanical figurines that chimed the hours.[24][25][26] His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.[26]

In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci’s notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo’s robot, able to sit up, wave its arms and move its head and jaw.[28] The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.

In Japan, complex animal and human automata were built between the 17th to 19th centuries, with many described in the 18th century Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet.[29] Different variations of the karakuri existed: the Butai karakuri, which were used in theatre, the Zashiki karakuri, which were small and used in homes, and the Dashi karakuri which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.

The term robot was coined by a Czech writer (from the Robot Wikipedia entry; Note: Links have been removed)

‘Robot’ was first applied as a term for artificial automata in a 1920 play R.U.R. by the Czech writer, Karel Čapek. However, Josef Čapek was named by his brother Karel as the true inventor of the term robot.[6][7] The word ‘robot’ itself was not new, having been in Slavic language as robota (forced laborer), a term which classified those peasants obligated to compulsory service under the feudal system widespread in 19th century Europe (see: Robot Patent).[37][38] Čapek’s fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal robota class eloquently fit the imagination of a new class of manufactured, artificial workers.

I’m particularly fascinated by how long humans have been imagining and creating robots.

Robot ethics in Vancouver

The Westender has run what I believe is the first article by a local (Vancouver, Canada) mainstream media outlet on the topic of robots and ethics. Tessa Vikander’s Sept. 14, 2017 article highlights two local researchers, Ajung Moon and Mark Schmidt, and a local social media company’s (Hootsuite) analytics director, Nik Pai. Vikander opens her piece with an ethical dilemma (Note: Links have been removed),

Emma is 68, in poor health and an alcoholic who has been told by her doctor to stop drinking. She lives with a care robot, which helps her with household tasks.

Unable to fix herself a drink, she asks the robot to do it for her. What should the robot do? Would the answer be different if Emma owns the robot, or if she’s borrowing it from the hospital?

This is the type of hypothetical, ethical question that Ajung Moon, director of the Open Roboethics Initiative [ORI], is trying to answer.

According to an ORI study, half of respondents said ownership should make a difference, and half said it shouldn’t. With society so torn on the question, Moon is trying to figure out how engineers should be programming this type of robot.

A Vancouver resident, Moon is dedicating her life to helping those in the decision-chair make the right choice. The question of the care robot is but one ethical dilemma in the quickly advancing world of artificial intelligence.

At the most sensationalist end of the scale, one form of AI that’s recently made headlines is the sex robot, which has a human-like appearance. A report from the Foundation for Responsible Robotics says that intimacy with sex robots could lead to greater social isolation [emphasis mine] because they desensitize people to the empathy learned through human interaction and mutually consenting relationships.

I’ll get back to the impact that robots might have on us in part two but first,

Sexbots, could they kill?

For more about sexbots in general, Alessandra Maldonado wrote an Aug. 10, 2017 article for salon.com about them (Note: A link has been removed),

Artificial intelligence has given people the ability to have conversations with machines like never before, such as speaking to Amazon’s personal assistant Alexa or asking Siri for directions on your iPhone. But now, one company has widened the scope of what it means to connect with a technological device and created a whole new breed of A.I. — specifically for sex-bots.

Abyss Creations has been in the business of making hyperrealistic dolls for 20 years, and by the end of 2017, they’ll unveil their newest product, an anatomically correct robotic sex toy. Matt McMullen, the company’s founder and CEO, explains the goal of sex robots is companionship, not only a physical partnership. “Imagine if you were completely lonely and you just wanted someone to talk to, and yes, someone to be intimate with,” he said in a video depicting the sculpting process of the dolls. “What is so wrong with that? It doesn’t hurt anybody.”

Maldonado also embedded this video into her piece,

A friend of mine described it as creepy. Specifically, we were discussing why someone would want to programme ‘insecurity’ as a desirable trait in a sexbot.

Marc Beaulieu’s concept of a desirable trait in a sexbot is one that won’t kill him, according to his Sept. 25, 2017 article for Canadian Broadcasting Corporation (CBC) News online (Note: Links have been removed),

Harmony has a charming Scottish lilt, albeit a bit staccato and canny. Her eyes dart around the room, her chin dips as her eyebrows raise in coquettish fashion. Her face manages expressions that are impressively lifelike. That face comes in 31 different shapes and 5 skin tones, with or without freckles and it sticks to her cyber-skull with magnets. Just peel it off and switch it out at will. In fact, you can choose Harmony’s eye colour, body shape (in great detail) and change her hair too. Harmony, of course, is a sex bot. A very advanced one. How advanced is she? Well, if you have $12,332 CAD to put towards a talkative new home appliance, REALBOTIX says you could be having a “conversation” and relations with her come January. Happy New Year.

Caveat emptor though: one novel bonus feature you might also get with Harmony is her ability to eventually murder you in your sleep. And not because she wants to.

Dr Nick Patterson, faculty of Science Engineering and Built Technology at Deakin University in Australia is lending his voice to a slew of others warning us to slow down and be cautious as we steadily approach Westworldian levels of human verisimilitude with AI tech. Surprisingly, Patterson didn’t regurgitate the narrative we recognize from the popular sci-fi (increasingly non-fi actually) trope of a dystopian society’s futile resistance to a robocalypse. He doesn’t think Harmony will want to kill you. He thinks she’ll be hacked by a code savvy ne’er-do-well who’ll want to snuff you out instead. …

Embedded in Beaulieu’s article is another video of the same sexbot profiled earlier. Her programmer seems to have learned a thing or two (he no longer inputs any traits as you’re watching),

I guess you could get one for Christmas this year if you’re willing to wait for an early 2018 delivery and aren’t worried about hackers turning your sexbot into a killer. While the killer aspect might seem farfetched, it turns out it’s not the only sexbot/hacker issue.

Sexbots as spies

This Oct. 5, 2017 story by Karl Bode for Techdirt points out that sex toys that are ‘smart’ can easily be hacked for any reason including some mischief (Note: Links have been removed),

One “smart dildo” manufacturer was recently forced to shell out $3.75 million after it was caught collecting, err, “usage habits” of the company’s customers. According to the lawsuit, Standard Innovation’s We-Vibe vibrator collected sensitive data about customer usage, including “selected vibration settings,” the device’s battery life, and even the vibrator’s “temperature.” At no point did the company apparently think it was a good idea to clearly inform users of this data collection.

But security is also lacking elsewhere in the world of internet-connected sex toys. Alex Lomas of Pentest Partners recently took a look at the security in many internet-connected sex toys, and walked away arguably unimpressed. Using a Bluetooth “dongle” and antenna, Lomas drove around Berlin looking for openly accessible sex toys (he calls it “screwdriving,” in a riff off of wardriving). He subsequently found it’s relatively trivial to discover and hijack everything from vibrators to smart butt plugs — thanks to the way Bluetooth Low Energy (BLE) connectivity works:

“The only protection you have is that BLE devices will generally only pair with one device at a time, but range is limited and if the user walks out of range of their smartphone or the phone battery dies, the adult toy will become available for others to connect to without any authentication. I should say at this point that this is purely passive reconnaissance based on the BLE advertisements the device sends out – attempting to connect to the device and actually control it without consent is not something I or you should do. But now one could drive the Hush’s motor to full speed, and as long as the attacker remains connected over BLE and not the victim, there is no way they can stop the vibrations.”
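
For anyone wondering what “purely passive reconnaissance based on the BLE advertisements” looks like in practice, it really is just listening. Here’s a minimal sketch using Python’s bleak library (my choice of tool, not Lomas’s toolchain); it only lists nearby advertising devices and, as Lomas says, connecting to and controlling a device without consent is not something you should do.

```python
# Minimal passive BLE discovery: listen for advertisements and list what's nearby.
# Illustrative only -- listing broadcasts is passive; connecting without consent is not.
import asyncio
from bleak import BleakScanner  # cross-platform Bluetooth Low Energy library

async def main() -> None:
    # BLE devices advertise themselves whether or not anyone is paired with them;
    # a short scan simply collects those broadcasts.
    devices = await BleakScanner.discover(timeout=10.0)
    for device in devices:
        print(f"{device.address}  {device.name or '(no name broadcast)'}")

if __name__ == "__main__":
    asyncio.run(main())
```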

Does that make you think twice about a sexbot?

Robots and artificial intelligence

Getting back to the Vikander article (Sept. 14, 2017), Moon or Vikander or both seem to have conflated artificial intelligence with robots in this section of the article,

As for the building blocks that have thrust these questions [care robot quandary mentioned earlier] into the spotlight, Moon explains that AI in its basic form is when a machine uses data sets or an algorithm to make a decision.

“It’s essentially a piece of output that either affects your decision, or replaces a particular decision, or supports you in making a decision.” With AI, we are delegating decision-making skills or thinking to a machine, she says.

Although we’re not currently surrounded by walking, talking, independently thinking robots, the use of AI [emphasis mine] in our daily lives has become widespread.
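
To put Moon’s definition (a machine using a data set and an algorithm to produce output that affects, replaces, or supports a decision) in concrete terms, here’s a minimal sketch. The loan-approval scenario, the two features, and the scikit-learn model are all my own illustrative choices, not anything from the article.

```python
# Illustrative only: "AI in its basic form" as Moon describes it -- a machine
# using a data set and an algorithm to produce output that supports a decision.
from sklearn.linear_model import LogisticRegression

# A toy data set of past decisions: [income in $1,000s, existing debt in $1,000s]
past_applicants = [[40, 5], [85, 10], [30, 25], [95, 2], [50, 30], [70, 8]]
past_decisions = [1, 1, 0, 1, 0, 1]  # 1 = loan approved, 0 = declined

model = LogisticRegression()
model.fit(past_applicants, past_decisions)

# The model's output can support a human decision (a recommendation)...
new_applicant = [[55, 12]]
print("recommended approval probability:", model.predict_proba(new_applicant)[0][1])

# ...or replace the decision outright, which is the delegation Moon describes.
print("automated decision:", "approve" if model.predict(new_applicant)[0] == 1 else "decline")
```

Nothing in that sketch walks or talks, which is Moon’s point: the delegation of decision-making happens long before anything resembling a robot shows up.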

For Vikander, the conflation may have been due to concerns about maintaining her word count; for Moon, it may have been a matter of convenience or a consequence of how the jargon is evolving, with ‘robot’ meaning a machine specifically or, sometimes, a machine with AI, or AI only.

To be precise, not all robots have AI and not all AI is found in robots. It’s a distinction that may be more important for people developing robots and/or AI but it also seems to make a difference where funding is concerned. In a March 24, 2017 posting about the 2017 Canadian federal budget I noticed this,

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

This brings me to a recent set of meetings held in Vancouver to devise a Canadian robotics roadmap, which suggests the robotics folks feel they need specific representation and funding.

See: part two for the rest.

Canadian science policy news and doings (also: some US science envoy news)

I have a couple of notices from the Canadian Science Policy Centre (CSPC), a twitter feed, and an article in an online magazine to thank for this bumper crop of news.

 Canadian Science Policy Centre: the conference

The 2017 Canadian Science Policy Conference, being held Nov. 1 – 3, 2017 in Ottawa, Ontario for the third year in a row, has a super saver rate available until Sept. 3, 2017, according to an August 14, 2017 announcement (received via email).

Time is running out, you have until September 3rd until prices go up from the SuperSaver rate.

Savings off the regular price with the SuperSaver rate:
Up to 26% for General admission
Up to 29% for Academic/Non-Profit Organizations
Up to 40% for Students and Post-Docs

Before giving you the link to the registration page and assuming that you might want to check out what is on offer at the conference, here’s a link to the programme. They don’t seem to have any events celebrating Canada’s 150th anniversary although they do have a session titled, ‘The Next 150 years of Science in Canada: Embedding Equity, Delivering Diversity/Les 150 prochaine années de sciences au Canada:  Intégrer l’équité, promouvoir la diversité‘,

Enhancing equity, diversity, and inclusivity (EDI) in science, technology, engineering and math (STEM) has been described as being a human rights issue and an economic development issue by various individuals and organizations (e.g. OECD). Recent federal policy initiatives in Canada have focused on increasing participation of women (a designated under-represented group) in science through increased reporting, program changes, and institutional accountability. However, the Employment Equity Act requires employers to act to ensure the full representation of the three other designated groups: Aboriginal peoples, persons with disabilities and members of visible minorities. Significant structural and systemic barriers to full participation and employment in STEM for members of these groups still exist in Canadian institutions. Since data support the positive role of diversity in promoting innovation and economic development, failure to capture the full intellectual capacity of a diverse population limits provincial and national potential and progress in many areas. A diverse international panel of experts from designated groups will speak to the issue of accessibility and inclusion in STEM. In addition, the discussion will focus on evidence-based recommendations for policy initiatives that will promote full EDI in science in Canada to ensure local and national prosperity and progress for Canada over the next 150 years.

There’s also this list of speakers. Curiously, I don’t see Kirsty Duncan, Canada’s Minister of Science, on the list, nor do I see any other politicians in the banner for the conference website. This divergence from the CSPC’s usual approach to promoting the conference is interesting.

Moving on to the conference, the organizers have added two panels to the programme (from the announcement received via email),

Friday, November 3, 2017
10:30AM-12:00PM
Open Science and Innovation
Organizer: Tiberius Brastaviceanu
Organization: ACES-CAKE

10:30AM- 12:00PM
The Scientific and Economic Benefits of Open Science
Organizer: Arij Al Chawaf
Organization: Structural Genomics

I think this is the first time there’s been a ‘Tiberius’ on this blog and teamed with the organization’s name, well, I just had to include it.

Finally, here’s the link to the registration page and a page that details travel deals.

Canadian Science Policy Conference: a compendium of documents and articles on Canada’s Chief Science Advisor and Ontario’s Chief Scientist and the pre-2018 budget submissions

The deadline for applications for the Chief Science Advisor position was extended to Feb. 2017 and, so far, there’s no word as to who it might be. Perhaps Minister of Science Kirsty Duncan wants to make a splash with a surprise announcement at the CSPC’s 2017 conference? As for Ontario’s Chief Scientist, this move will make the province the third (?) to have a chief scientist, after Québec and Alberta. There is apparently one in Alberta but there doesn’t seem to be a government webpage, and his LinkedIn profile doesn’t include the title. In any event, Dr. Fred Wrona is mentioned as Alberta’s Chief Scientist in a May 31, 2017 Alberta government announcement. *ETA Aug. 25, 2017: I missed the Yukon, which has a Senior Science Advisor. The position is currently held by Dr. Aynslie Ogden.*

Getting back to the compendium, here’s the CSPC’s A Comprehensive Collection of Publications Regarding Canada’s Federal Chief Science Advisor and Ontario’s Chief Scientist webpage. Here’s a little background provided on the page,

On June 2nd, 2017, the House of Commons Standing Committee on Finance commenced the pre-budget consultation process for the 2018 Canadian Budget. These consultations provide Canadians the opportunity to communicate their priorities with a focus on Canadian productivity in the workplace and community in addition to entrepreneurial competitiveness. Organizations from across the country submitted their priorities on August 4th, 2017 to be selected as witness for the pre-budget hearings before the Committee in September 2017. The process will result in a report to be presented to the House of Commons in December 2017 and considered by the Minister of Finance in the 2018 Federal Budget.

NEWS & ANNOUNCEMENT

House of Commons- PRE-BUDGET CONSULTATIONS IN ADVANCE OF THE 2018 BUDGET

https://www.ourcommons.ca/Committees/en/FINA/StudyActivity?studyActivityId=9571255

CANADIANS ARE INVITED TO SHARE THEIR PRIORITIES FOR THE 2018 FEDERAL BUDGET

https://www.ourcommons.ca/DocumentViewer/en/42-1/FINA/news-release/9002784

The deadline for pre-2018 budget submissions was Aug. 4, 2017 and they haven’t yet scheduled any meetings, although they are to be held in September. (People can meet with the Standing Committee on Finance in various locations across Canada to discuss their submissions.) I’m not sure where the CSPC got their list of ‘science’ submissions but it’s definitely worth checking, as there are some odd omissions such as TRIUMF (Canada’s National Laboratory for Particle and Nuclear Physics), Genome Canada, the Pan-Canadian Artificial Intelligence Strategy, CIFAR (Canadian Institute for Advanced Research), the Perimeter Institute, Canadian Light Source, etc.

Twitter and the Naylor Report under a microscope

This news came from University of British Columbia President Santa Ono’s twitter feed,

 I will join Jon [sic] Borrows and Janet Rossant on Sept 19 in Ottawa at a Mindshare event to discuss the importance of the Naylor Report

The Mindshare event Ono is referring to is being organized by Universities Canada (formerly the Association of Universities and Colleges of Canada) and the Institute for Research on Public Policy. It is titled, ‘The Naylor report under the microscope’. Here’s more from the event webpage,

Join Universities Canada and Policy Options for a lively discussion moderated by editor-in-chief Jennifer Ditchburn on the report from the Fundamental Science Review Panel and why research matters to Canadians.

Moderator

Jennifer Ditchburn, editor, Policy Options.

Jennifer Ditchburn

Editor-in-chief, Policy Options

Jennifer Ditchburn is the editor-in-chief of Policy Options, the online policy forum of the Institute for Research on Public Policy.  An award-winning parliamentary correspondent, Jennifer began her journalism career at the Canadian Press in Montreal as a reporter-editor during the lead-up to the 1995 referendum.  From 2001 and 2006 she was a national reporter with CBC TV on Parliament Hill, and in 2006 she returned to the Canadian Press.  She is a three-time winner of a National Newspaper Award:  twice in the politics category, and once in the breaking news category. In 2015 she was awarded the prestigious Charles Lynch Award for outstanding coverage of national issues. Jennifer has been a frequent contributor to television and radio public affairs programs, including CBC’s Power and Politics, the “At Issue” panel, and The Current. She holds a bachelor of arts from Concordia University, and a master of journalism from Carleton University.

@jenditchburn

Tuesday, September 19, 2017

 12-2 pm

Fairmont Château Laurier,  Laurier  Room
 1 Rideau Street, Ottawa

 rsvp@univcan.ca

I can’t tell if they’re offering lunch or if there is a cost associated with this event so you may want to contact the organizers.

As for the Naylor report, I posted a three-part series on June 8, 2017, which features my comments and the other comments I was able to find on the report:

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

One piece not mentioned in my three-part series is Paul Wells’ provocatively titled June 29, 2017 article for Maclean’s magazine, Why Canadian scientists aren’t happy (Note: Links have been removed),

Much hubbub this morning over two interviews Kirsty Duncan, the science minister, has given the papers. The subject is Canada’s Fundamental Science Review, commonly called the Naylor Report after David Naylor, the former University of Toronto president who was its main author.

Other authors include BlackBerry founder Mike Lazaridis, who has bankrolled much of the Waterloo renaissance, and Canadian Nobel physicist Arthur McDonald. It’s as blue-chip as a blue-chip panel could be.

Duncan appointed the panel a year ago. It’s her panel, delivered by her experts. Why does it not seem to be… getting anywhere? Why does it seem to have no champion in government? Therein lies a tale.

Note, first, that Duncan’s interviews—her first substantive comment on the report’s recommendations!—come nearly three months after its April release, which in turn came four months after Duncan asked Naylor to deliver his report, last December. (By March I had started to make fun of the Trudeau government in print for dragging its heels on the report’s release. That column was not widely appreciated in the government, I’m told.)

Anyway, the report was released, at an event attended by no representative of the Canadian government. Here’s the gist of what I wrote at the time:

 

Naylor’s “single most important recommendation” is a “rapid increase” in federal spending on “independent investigator-led research” instead of the “priority-driven targeted research” that two successive federal governments, Trudeau’s and Stephen Harper’s, have preferred in the last 8 or 10 federal budgets.

In English: Trudeau has imitated Harper in favouring high-profile, highly targeted research projects, on areas of study selected by political staffers in Ottawa, that are designed to attract star researchers from outside Canada so they can bolster the image of Canada as a research destination.

That’d be great if it wasn’t achieved by pruning budgets for the less spectacular research that most scientists do.

Naylor has numbers. “Between 2007-08 and 2015-16, the inflation-adjusted budgetary envelope for investigator-led research fell by 3 per cent while that for priority-driven research rose by 35 per cent,” he and his colleagues write. “As the number of researchers grew during this period, the real resources available per active researcher to do investigator-led research declined by about 35 per cent.”

And that’s not even taking into account the way two new programs—the $10-million-per-recipient Canada Excellence Research Chairs and the $1.5 billion Canada First Research Excellence Fund—are “further concentrating resources in the hands of smaller numbers of individuals and institutions.”

That’s the context for Duncan’s remarks. In the Globe, she says she agrees with Naylor on “the need for a research system that promotes equity and diversity, provides a better entry for early career researchers and is nimble in response to new scientific opportunities.” But she also “disagreed” with the call for a national advisory council that would give expert advice on the government’s entire science, research and innovation policy.

This is an asinine statement. When taking three months to read a report, it’s a good idea to read it. There is not a single line in Naylor’s overlong report that calls for the new body to make funding decisions. Its proposed name is NACRI, for National Advisory Council on Research and Innovation. A for Advisory. Its responsibilities, listed on Page 19 if you’re reading along at home, are restricted to “advice… evaluation… public reporting… advice… advice.”

Duncan also didn’t promise to meet Naylor’s requested funding levels: $386 million for research in the first year, growing to $1.3 billion in new money in the fourth year. That’s a big concern for researchers, who have been warning for a decade that two successive governments—Harper’s and Trudeau’s—have been more interested in building new labs than in ensuring there’s money to do research in them.

The minister has talking points. She gave the same answer to both reporters about whether Naylor’s recommendations will be implemented in time for the next federal budget. “It takes time to turn the Queen Mary around,” she said. Twice. I’ll say it does: She’s reacting three days before Canada Day to a report that was written before Christmas. Which makes me worry when she says elected officials should be in charge of being nimble.

Here’s what’s going on.

The Naylor report represents Canadian research scientists’ side of a power struggle. The struggle has been continuing since Jean Chrétien left office. After early cuts, he presided for years over very large increases to the budgets of the main science granting councils. But since 2003, governments have preferred to put new funding dollars to targeted projects in applied sciences. …

Naylor wants that trend reversed, quickly. He is supported in that call by a frankly astonishingly broad coalition of university administrators and working researchers, who until his report were more often at odds. So you have the group representing Canada’s 15 largest research universities and the group representing all universities and a new group representing early-career researchers and, as far as I can tell, every Canadian scientist on Twitter. All backing Naylor. All fundamentally concerned that new money for research is of no particular interest if it does not back the best science as chosen by scientists, through peer review.

The competing model, the one preferred by governments of all stripes, might best be called superclusters. Very large investments into very large projects with loosely defined scientific objectives, whose real goal is to retain decorated veteran scientists and to improve the Canadian high-tech industry. Vast and sprawling labs and tech incubators, cabinet ministers nodding gravely as world leaders in sexy trendy fields sketch the golden path to Jobs of Tomorrow.

You see the imbalance. On one side, ribbons to cut. On the other, nerds experimenting on tapeworms. Kirsty Duncan, a shaky political performer, transparently a junior minister to the supercluster guy, with no deputy minister or department reporting to her, is in a structurally weak position: her title suggests she’s science’s emissary to the government, but she is not equipped to be anything more than government’s emissary to science.

A government that consistently buys into the market for intellectual capital at the very top of the price curve is a factory for producing white elephants. But don’t take my word for it. Ask Geoffrey Hinton [University of Toronto’s Geoffrey Hinton, a Canadian leader in machine learning].

“There is a lot of pressure to make things more applied; I think it’s a big mistake,” he said in 2015. “In the long run, curiosity-driven research just works better… Real breakthroughs come from people focusing on what they’re excited about.”

I keep saying this, like a broken record. If you want the science that changes the world, ask the scientists who’ve changed it how it gets made. This government claims to be interested in what scientists think. We’ll see.

Incisive and acerbic, Wells’ article is worth making time to read in its entirety.
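
As an aside, the arithmetic behind the Naylor figures Wells quotes is worth spelling out, since the jump from a 3 per cent budget decline to a 35 per cent per-researcher decline can look like a typo. It isn’t: a slightly smaller envelope divided among a much larger pool of researchers shrinks quickly on a per-person basis. Taking the quoted figures at face value (this is my back-of-the-envelope reading, not Naylor’s) and writing g for the growth in the number of active researchers over the period,

$$
\frac{1 - 0.03}{1 + g} \approx 1 - 0.35
\quad\Rightarrow\quad
1 + g \approx \frac{0.97}{0.65} \approx 1.49,
$$

which implies the pool of active researchers grew by roughly 50 per cent between 2007-08 and 2015-16. That is why a nearly flat budget can feel like a deep cut to the people applying for it.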

Getting back to the ‘The Naylor report under the microscope’ event, I wonder if anyone will be as tough and direct as Wells. Going back even further, I wonder if this is why there’s no mention of Duncan as a speaker at the conference. It could go either way: surprise announcement of a Chief Science Advisor, as I first suggested, or avoidance of a potentially angry audience.

For anyone curious about Geoffrey Hinton, there’s more here in my March 31, 2017 post (scroll down about 20% of the way) and for more about the 2017 budget and allocations for targeted science projects there’s my March 24, 2017 post.

US science envoy quits

An Aug. 23, 2017 article by Matthew Rozsa for salon.com notes the resignation of one of the US science envoys,

President Donald Trump’s infamous response to the Charlottesville riots — namely, saying that both sides were to blame and that there were “very fine people” marching as white supremacists — has prompted yet another high profile resignation from his administration.

Daniel M. Kammen, who served as a science envoy for the State Department and focused on renewable energy development in the Middle East and Northern Africa, submitted a letter of resignation on Wednesday. Notably, he began the first letter of each paragraph with letters that spelled out I-M-P-E-A-C-H. That followed a letter earlier this month by writer Jhumpa Lahiri and actor Kal Penn to similarly spell R-E-S-I-S-T in their joint letter of resignation from the President’s Committee on Arts and Humanities.

Jeremy Berke’s Aug. 23, 2017 article for BusinessInsider.com provides a little more detail (Note: Links have been removed),

A State Department climate science envoy resigned Wednesday in a public letter posted on Twitter over what he says is President Donald Trump’s “attacks on the core values” of the United States with his response to violence in Charlottesville, Virginia.

“My decision to resign is in response to your attacks on the core values of the United States,” wrote Daniel Kammen, a professor of energy at the University of California, Berkeley, who was appointed as one of five science envoys in 2016. “Your failure to condemn white supremacists and neo-Nazis has domestic and international ramifications.”

“Your actions to date have, sadly, harmed the quality of life in the United States, our standing abroad, and the sustainability of the planet,” Kammen writes.

Science envoys work with the State Department to establish and develop energy programs in countries around the world. Kammen specifically focused on renewable energy development in the Middle East and North Africa.

That’s it.

Artificial intelligence (AI) company (in Montréal, Canada) attracts $135M in funding from Microsoft, Intel, Nvidia and others

It seems there’s a push on to establish Canada as a centre for artificial intelligence research and, if the federal and provincial governments have their way, for commercialization of said research. As always, there seems to be a bit of competition between Toronto (Ontario) and Montréal (Québec) as to which will be the dominant hub for the Canadian effort, if one is to take Braga’s word (more on his article below) for the situation.

In any event, Toronto seemed to have a mild advantage over Montréal initially with the 2017 Canadian federal government budget announcement that the Canadian Institute for Advanced Research (CIFAR), based in Toronto, would launch a Pan-Canadian Artificial Intelligence Strategy, and with an announcement from the University of Toronto shortly after (from my March 31, 2017 posting),

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

However, Montréal and the province of Québec are no slouches when it comes to supporting technology. From a June 14, 2017 article by Matthew Braga for CBC (Canadian Broadcasting Corporation) news online (Note: Links have been removed),

One of the most promising new hubs for artificial intelligence research in Canada is going international, thanks to a $135 million investment with contributions from some of the biggest names in tech.

The company, Montreal-based Element AI, was founded last October [2016] to help companies that might not have much experience in artificial intelligence start using the technology to change the way they do business.

It’s equal parts general research lab and startup incubator, with employees working to develop new and improved techniques in artificial intelligence that might not be fully realized for years, while also commercializing products and services that can be sold to clients today.

It was co-founded by Yoshua Bengio — one of the pioneers of a type of AI research called machine learning — along with entrepreneurs Jean-François Gagné and Nicolas Chapados, and the Canadian venture capital fund Real Ventures.

In an interview, Bengio and Gagné said the money from the company’s funding round will be used to hire 250 new employees by next January. A hundred will be based in Montreal, but an additional 100 employees will be hired for a new office in Toronto, and the remaining 50 for an Element AI office in Asia — its first international outpost.

They will join more than 100 employees who work for Element AI today, having left jobs at Amazon, Uber and Google, among others, to work at the company’s headquarters in Montreal.

The expansion is a big vote of confidence in Element AI’s strategy from some of the world’s biggest technology companies. Microsoft, Intel and Nvidia all contributed to the round, and each is a key player in AI research and development.

The company has some not unexpected plans and partners (from the Braga article; Note: A link has been removed),

The Series A round was led by Data Collective, a Silicon Valley-based venture capital firm, and included participation by Fidelity Investments Canada, National Bank of Canada, and Real Ventures.

What will it help the company do? Scale, its founders say.

“We’re looking at domain experts, artificial intelligence experts,” Gagné said. “We already have quite a few, but we’re looking at people that are at the top of their game in their domains.

“And at this point, it’s no longer just pure artificial intelligence, but people who understand, extremely well, robotics, industrial manufacturing, cybersecurity, and financial services in general, which are all the areas we’re going after.”

Gagné says that Element AI has already delivered 10 projects to clients in those areas, and have many more in development. In one case, Element AI has been helping a Japanese semiconductor company better analyze the data collected by the assembly robots on its factory floor, in a bid to reduce manufacturing errors and improve the quality of the company’s products.

There’s more to investment in Québec’s AI sector than Element AI (from the Braga article; Note: Links have been removed),

Element AI isn’t the only organization in Canada that investors are interested in.

In September, the Canadian government announced $213 million in funding for a handful of Montreal universities, while both Google and Microsoft announced expansions of their Montreal AI research groups in recent months alongside investments in local initiatives. The province of Quebec has pledged $100 million for AI initiatives by 2022.

Braga goes on to note some other initiatives, but from that point on the article’s focus is exclusively Toronto.

For more insight into the AI situation in Québec, there’s Dan Delmar’s May 23, 2017 article for the Montreal Express (Note: Links have been removed),

Advocating for massive government spending with little restraint admittedly deviates from the tenor of these columns, but the AI business is unlike any other before it. [emphasis mine] Having leaders acting as fervent advocates for the industry is crucial; resisting the coming technological tide is, as the Borg would say, futile.

The roughly 250 AI researchers who call Montreal home are not simply part of a niche industry. Quebec’s francophone character and Montreal’s multilingual citizenry are certainly factors favouring the development of language technology, but there’s ample opportunity for more ambitious endeavours with broader applications.

AI isn’t simply a technological breakthrough; it is the technological revolution. [emphasis mine] In the coming decades, modern computing will transform all industries, eliminating human inefficiencies and maximizing opportunities for innovation and growth — regardless of the ethical dilemmas that will inevitably arise.

“By 2020, we’ll have computers that are powerful enough to simulate the human brain,” said (in 2009) futurist Ray Kurzweil, author of The Singularity Is Near, a seminal 2006 book that has inspired a generation of AI technologists. Kurzweil’s projections are not science fiction but perhaps conservative, as some forms of AI already effectively replace many human cognitive functions. “By 2045, we’ll have expanded the intelligence of our human-machine civilization a billion-fold. That will be the singularity.”

The singularity concept, borrowed from physicists describing event horizons bordering matter-swallowing black holes in the cosmos, is the point of no return where human and machine intelligence will have completed their convergence. That’s when the machines “take over,” so to speak, and accelerate the development of civilization beyond traditional human understanding and capability.

The claims I’ve highlighted in Delmar’s article have been made before for other technologies: “xxx is like no other business before” and “it is a technological revolution.” Also, if you keep scrolling down to the bottom of the article, you’ll find Delmar is a ‘public relations consultant’, which, if you look at his LinkedIn profile, means he’s a managing partner in a PR firm known as Provocateur.

Bertrand Marotte’s May 20, 2017 article for the Montreal Gazette offers less hyperbole along with additional detail about the Montréal scene (Note: Links have been removed),

It might seem like an ambitious goal, but key players in Montreal’s rapidly growing artificial-intelligence sector are intent on transforming the city into a Silicon Valley of AI.

Certainly, the flurry of activity these days indicates that AI in the city is on a roll. Impressive amounts of cash have been flowing into academia, public-private partnerships, research labs and startups active in AI in the Montreal area.

…, researchers at Microsoft Corp. have successfully developed a computing system able to decipher conversational speech as accurately as humans do. The technology makes the same, or fewer, errors than professional transcribers and could be a huge boon to major users of transcription services like law firms and the courts.

Setting the goal of attaining the critical mass of a Silicon Valley is “a nice point of reference,” said tech entrepreneur Jean-François Gagné, co-founder and chief executive officer of Element AI, an artificial intelligence startup factory launched last year.

The idea is to create a “fluid, dynamic ecosystem” in Montreal where AI research, startup, investment and commercialization activities all mesh productively together, said Gagné, who founded Element with researcher Nicolas Chapados and Université de Montréal deep learning pioneer Yoshua Bengio.

“Artificial intelligence is seen now as a strategic asset to governments and to corporations. The fight for resources is global,” he said.

The rise of Montreal — and rival Toronto — as AI hubs owes a lot to provincial and federal government funding.

Ottawa promised $213 million last September to fund AI and big data research at four Montreal post-secondary institutions. Quebec has earmarked $100 million over the next five years for the development of an AI “super-cluster” in the Montreal region.

The provincial government also created a 12-member blue-chip committee to develop a strategic plan to make Quebec an AI hub, co-chaired by Claridge Investments Ltd. CEO Pierre Boivin and Université de Montréal rector Guy Breton.

But private-sector money has also been flowing in, particularly from some of the established tech giants competing in an intense AI race for innovative breakthroughs and the best brains in the business.

Montreal’s rich talent pool is a major reason Waterloo, Ont.-based language-recognition startup Maluuba decided to open a research lab in the city, said the company’s vice-president of product development, Mohamed Musbah.

“It’s been incredible so far. The work being done in this space is putting Montreal on a pedestal around the world,” he said.

Microsoft struck a deal this year to acquire Maluuba, which is working to crack one of the holy grails of deep learning: teaching machines to read like the human brain does. Among the company’s software developments are voice assistants for smartphones.

Maluuba has also partnered with an undisclosed auto manufacturer to develop speech recognition applications for vehicles. Voice recognition applied to cars can include such things as asking for a weather report or making remote requests for the vehicle to unlock itself.

Marotte’s Twitter profile describes him as a freelance writer, editor, and translator.

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

This is the final commentary on the report titled ‘INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research’. Part 1 of my commentary provided some introductory material and first thoughts about the report; Part 2 offered more detailed thoughts; this part singles out ‘special cases’, sums up* my thoughts (circling back to ideas introduced in the first part), and offers links to other commentaries.

Special cases

Not all of the science funding in Canada is funneled through the four agencies designed for that purpose. The Natural Sciences and Engineering Research Council (NSERC), the Social Sciences and Humanities Research Council (SSHRC), and the Canadian Institutes of Health Research (CIHR) are known collectively as the tri-council funding agencies and are focused on disbursing research funds received from the federal government. The fourth ‘pillar’ agency, the Canada Foundation for Innovation (CFI), is focused on funding infrastructure and, technically speaking, is a third-party organization along with MITACS, CANARIE, the Perimeter Institute, and others.

In any event, there are also major research facilities and science initiatives which may receive direct funding from the federal government, bypassing the funding agencies and, it would seem, peer review. For example, I featured this in my April 28, 2015 posting about the 2015 federal budget,

The $45 million announced for TRIUMF will support the laboratory’s role in accelerating science in Canada, an important investment in discovery research.

While the news about the CFI seems to have delighted a number of observers, it should be noted (as per Woodgett’s piece) that the $1.3B is to be paid out over six years ($220M per year, more or less) and the money won’t be disbursed until the 2017/18 fiscal year. As for the $45M designated for TRIUMF (Canada’s National Laboratory for Particle and Nuclear Physics), this is exciting news for the lab which seems to have bypassed the usual channels, as it has before, to receive its funding directly from the federal government. [emphases mine]

The Naylor report made this recommendation for Canada’s major research facilities (MRFs),

We heard from many who recommended that the federal government should manage its investments in “Big Science” in a more coordinated manner, with a cradle-to-grave perspective. The Panel agrees. Consistent with NACRI’s overall mandate, it should work closely with the CSA [Chief Science Advisor] in establishing a Standing Committee on Major Research Facilities (MRFs).

CFI defines a national research facility in the following way:

We define a national research facility as one that addresses the needs of a community of Canadian researchers representing a critical mass of users distributed across the country. This is done by providing shared access to substantial and advanced specialized equipment, services, resources, and scientific and technical personnel. The facility supports leading-edge research and technology development, and promotes the mobilization of knowledge and transfer of technology to society. A national research facility requires resource commitments well beyond the capacity of any one institution. A national research facility, whether single-sited, distributed or virtual, is specifically identified or recognized as serving pan-Canadian needs and its governance and management structures reflect this mandate.8

We accept this definition as appropriate for national research facilities to be considered by the Standing Committee on MRFs, but add that the committee should:

• define a capital investment or operating cost level above which such facilities are considered “major” and thus require oversight by this committee (e.g., defined so as to include the national MRFs proposed in Section 6.3: Compute Canada, Canadian Light Source, Canada’s National Design Network, Canadian Research Icebreaker Amundsen, International Vaccine Centre, Ocean Networks Canada, Ocean Tracking Network, and SNOLAB plus the TRIUMF facility); and

• consider international MRFs in which Canada has a significant role, such as astronomical telescopes of global significance.

The structure and function of this Special Standing Committee would closely track the proposal made in 2006 by former NSA [National Science Advisor] Dr Arthur Carty. We return to this topic in Chapter 6. For now, we observe that this approach would involve:

• a peer-reviewed decision on beginning an investment;

• a funded plan for the construction and operation of the facility, with continuing oversight by a peer specialist/agency review group for the specific facility;

• a plan for decommissioning; and

• a regular review scheduled to consider whether the facility still serves current needs.

We suggest that the committee have 10 members, with an eminent scientist as Chair. The members should include the CSA, two representatives from NACRI for liaison, and seven others. The other members should include Canadian and international scientists from a broad range of disciplines and experts on the construction, operation, and administration of MRFs. Consideration should be given to inviting the presidents of NRC [National Research Council of Canada] and CFI to serve as ex-officio members. The committee should be convened by the CSA, have access to the Secretariat associated with the CSA and NACRI, and report regularly to NACRI. (pp. 66-7 print; pp. 100-1 PDF)

I have the impression there’s been some ill feeling over the years regarding some of the major chunks of money given for ‘big science’. At a guess, direct appeals to a federal government that has no official mechanism for assessing proposed ‘big science’, whether that means a major research facility (e.g., TRIUMF), a major science initiative (e.g., the Pan-Canadian Artificial Intelligence Strategy [keep reading to find out how I got the concept of a major science initiative wrong]), or a third party (MITACS), have seemed unfair to those who have to submit funding applications and go through vetting processes. This recommendation would seem to be an attempt to redress some of those issues.

Moving on to the third-party delivery and matching programs,

Three bodies in particular are the largest of these third-party organizations and illustrate the challenges of evaluating contribution agreements: Genome Canada, Mitacs, and Brain Canada. Genome Canada was created in 2000 at a time when many national genomics initiatives were being developed in the wake of the Human Genome Project. It emerged from a “bottom-up” design process driven by genomic scientists to complement existing programs by focusing on large-scale projects and technology platforms. Its funding model emphasized partnerships and matching funds to leverage federal commitments with the objective of rapidly ramping up genomics research in Canada.

This approach has been successful: Genome Canada has received $1.1 billion from the Government of Canada since its creation in 2000, and has raised over $1.6 billion through co-funding commitments, for a total investment in excess of $2.7 billion.34 The scale of Genome Canada’s funding programs allows it to support large-scale genomics research that the granting councils might otherwise not be able to fund. Genome Canada also supports a network of genomics technology and innovation centres with an emphasis on knowledge translation and has built domestic and international strategic partnerships. While its primary focus has been human health, it has also invested extensively in agriculture, forestry, fisheries, environment, and, more recently, oil and gas and mining— all with a view to the application and commercialization of genomic biotechnology.

Mitacs attracts, trains, and retains HQP [highly qualified personnel] in the Canadian research enterprise. Founded in 1999 as an NCE [Network Centre for Excellence], it was developed at a time when enrolments in graduate programs had flat-lined, and links between mathematics and industry were rare. Independent since 2011, Mitacs has focused on providing industrial research internships and postdoctoral fellowships, branching out beyond mathematics to all disciplines. It has leveraged funding effectively from the federal and provincial governments, industry, and not-for-profit organizations. It has also expanded internationally, providing two-way research mobility. Budget 2015 made Mitacs the single mechanism of federal support for postsecondary research internships with a total federal investment of $135.4 million over the next five years. This led to the wind-down of NSERC’s Industrial Postgraduate Scholarships Program. With matching from multiple other sources, Mitacs’ average annual budget is now $75 to $80 million. The organization aims to more than double the number of internships it funds to 10,000 per year by 2020.35

Finally, Brain Canada was created in 1998 (originally called NeuroScience Canada) to increase the scale of brain research funding in Canada and widen its scope with a view to encouraging interdisciplinary collaboration. In 2011 the federal government established the Canada Brain Research Fund to expand Brain Canada’s work, committing $100 million in new public investment for brain research to be matched 1:1 through contributions raised by Brain Canada. According to the STIC ‘State of the Nation’ 2014 report, Canada’s investment in neuroscience research is only about 40 per cent of that in the U.S. after adjusting for the size of the U.S. economy.36 Brain Canada may be filling a void left by declining success rates and flat funding at CIHR.

Recommendation and Elaboration

The Panel noted that, in general, third-party organizations for delivering research funding are particularly effective in leveraging funding from external partners. They fill important gaps in research funding and complement the work of the granting councils and CFI. At the same time, we questioned the overall efficiency of directing federal research funding through third-party organizations, noting that our consultations solicited mixed reactions. Some respondents favoured more overall funding concentrated in the agencies rather than diverting the funding to third-party entities. Others strongly supported the business models of these organizations.

We have indicated elsewhere that a system-wide review panel such as ours is not well-suited to examine these and other organizations subject to third-party agreements. We recommended instead in Chapter 4 that a new oversight body, NACRI, be created to provide expert advice and guidance on when a new entity might reasonably be supported by such an agreement. Here we make the case for enlisting NACRI in determining not just the desirability of initiating a new entity, but also whether contribution agreements should continue and, if so, on what terms.

The preceding sketches of three diverse organizations subject to contribution agreements help illustrate the rationale for this proposal. To underscore the challenges of adjudication, we elaborate briefly. Submissions highlighted that funding from Genome Canada has enabled fundamental discoveries to be made and important knowledge to be disseminated to the Canadian and international research communities. However, other experts suggested a bifurcation with CIHR or NSERC funding research-intensive development of novel technologies, while Genome Canada would focus on application (e.g., large-scale whole genome studies) and commercialization of existing technologies. From the Panel’s standpoint, these observations underscore the subtleties of determining where and how Genome Canada’s mandate overlaps and departs from that of CIHR and NSERC as well as CFI. Added to the complexity of any assessment is Genome Canada’s meaningful role in providing large-scale infrastructure grants and its commercialization program. Mitacs, even more than Genome Canada, bridges beyond academe to the private and non-profit sectors, again highlighting the advantage of having any review overseen by a body with representatives from both spheres. Finally, as did the other two entities, Brain Canada won plaudits, but some interchanges saw discussants ask when and whether it might be more efficient to flow this type of funding on a programmatic basis through CIHR.

We emphasize that the Panel’s intent here is neither to signal agreement nor disagreement with any of these submissions or discussions. We simply wish to highlight that decisions about ongoing funding will involve expert judgments informed by deep expertise in the relevant research areas and, in two of these examples, an ability to bridge from research to innovation and from extramural independent research to the private and non-profit sectors. Under current arrangements, management consulting firms and public servants drive the review and decision-making processes. Our position is that oversight by NACRI and stronger reliance on advice from content experts would be prudent given the sums involved and the nature of the issues. (pp. 102-4 print; pp. 136-8 PDF)

I wasn’t able to find anything other than this about major science initiatives (MSIs),

Big Science facilities, such as MSIs, have had particular challenges in securing ongoing stable operating support. Such facilities often have national or international missions. We termed them “major research facilities” (MRFs) xi in Chapter 4, and proposed an improved oversight mechanism that would provide lifecycle stewardship of these national science resources, starting with the decision to build them in the first instance. (p. 132 print; p. 166 PDF)

So, an MSI is an MRF? (head shaking) Why two terms for the same thing? And how does the newly announced Pan-Canadian Artificial Intelligence Strategy fit into the grand scheme of things?

The last ‘special case’ I’m featuring is the ‘Programme for Research Chairs for Excellent Scholars and Scientists’. Here’s what the report had to say about the state of affairs,

The major sources of federal funding for researcher salary support are the CRC [Canada Research Chair] and CERC [Canada Excellence Research Chair] programs. While some salary support is provided through council-specific programs, these investments have been declining over time. The Panel supports program simplification but, as noted in Chapter 5, we are concerned about the gaps created by the elimination of these personnel awards. While we focus here on the CRC and CERC programs because of their size, profile, and impact, our recommendations will reflect these concerns.

The CRC program was launched in 2000 and remains the Government of Canada’s flagship initiative to keep Canada among the world’s leading countries in higher education R&D. The program has created 2,000 research professorships across Canada with the stated aim “to attract and retain some of the world’s most accomplished and promising minds”5 as part of an effort to curtail the potential academic brain drain to the U.S. and elsewhere. The program is a tri-council initiative with most Chairs allocated to eligible institutions based on the national proportion of total research grant funding they receive from the three granting councils. The vast majority of Chairs are distributed based on area of research, of which 45 per cent align with NSERC, 35 per cent with CIHR, and 20 per cent with SSHRC; an additional special allocation of 120 Chairs can be used in the area of research chosen by the universities receiving the Chairs. There are two types of Chairs: Tier 1 Chairs are intended for outstanding researchers who are recognized as world leaders in their fields and are renewable; Tier 2 Chairs are targeted at exceptional emerging researchers with the potential to become leaders in their field and can be renewed once. Awards are paid directly to the universities and are valued at $200,000 annually for seven years (Tier 1) or $100,000 annually for five years (Tier 2). The program notes that Tier 2 Chairs are not meant to be a feeder group for Tier 1 Chairs; rather, universities are expected to develop a succession plan for their Tier 2 Chairs.

The CERC program was established in 2008 with the expressed aim of “support[ing] Canadian universities in their efforts to build on Canada’s growing reputation as a global leader in research and innovation.”6 The program aims to award world-renowned researchers and their teams with up to $10 million over seven years to establish ambitious research programs at Canadian universities, making these awards among the most prestigious and generous available internationally. There are currently 27 CERCs with funding available to support up to 30 Chairs, which are awarded in the priority areas established by the federal government. The awards, which are not renewable, require 1:1 matching funds from the host institution, and all degree-granting institutions that receive tri-council funding are eligible to compete. Both the CERC and CRC programs are open to Canadians and foreign citizens. However, until the most recent round, the CERCs have been constrained to the government’s STEM-related priorities; this has limited their availability to scholars and scientists from SSHRC-related disciplines. As well, even though Canadian-based researchers are eligible for CERC awards, the practice has clearly been to use them for international recruitment with every award to date going to researchers from abroad.

Similar to research training support, the funding for salary support to researchers and scholars is a significant proportion of total federal research investments, but relatively small with respect to the research ecosystem as a whole. There are more than 45,000 professors and teaching staff at Canada’s universities7 and a very small fraction hold these awards. Nevertheless, the programs can support research excellence by repatriating top Canadian talent from abroad and by recruiting and retaining top international talent in Canada.

The programs can also lead by example in promoting equity and diversity in the research enterprise. Unfortunately, both the CRC and CERC programs suffer from serious challenges regarding equity and diversity, as described in Chapter 5. Both programs have been criticized in particular for under-recruitment of women.

While the CERC program has recruited exclusively from outside Canada, the CRC program has shown declining performance in that regard. A 2016 evaluation of the CRC program8  observed that a rising number of chairholders were held by nominees who originated from within the host institution (57.5 per cent), and another 14.4 per cent had been recruited from other Canadian institutions. The Panel acknowledges that some of these awards may be important to retaining Canadian talent. However, we were also advised in our consultations that CRCs are being used with some frequency to offset salaries as part of regular faculty complement planning.

The evaluation further found that 28.1 per cent of current chairholders had been recruited from abroad, a decline from 32 per cent in the 2010 evaluation. That decline appears set to continue. The evaluation reported that “foreign nominees accounted, on average, for 13 per cent and 15 per cent respectively of new Tier 1 and Tier 2 nominees over the five-year period 2010 to 2014”, terming it a “large decrease” from 2005 to 2009 when the averages respectively were 32 per cent and 31 per cent. As well, between 2010-11 and 2014-15, the attrition rate for chairholders recruited from abroad was 75 per cent higher than for Canadian chairholders, indicating that the program is also falling short in its ability to retain international talent.9

One important factor here appears to be the value of the CRC awards. While they were generous in 2000, their value has remained unchanged for some 17 years, making it increasingly difficult to offer the level of support that world-leading research professors require. The diminishing real value of the awards also means that Chair positions are becoming less distinguishable from regular faculty positions, threatening the program’s relevance and effectiveness. To rejuvenate this program and make it relevant for recruitment and retention of top talent, it seems logical to take two steps:

• ask the granting councils and the Chairs Secretariat to work with universities in developing a plan to restore the effectiveness of these awards; and

• once that plan is approved, increase the award values by 35 per cent, thereby restoring the awards to their original value and making them internationally competitive once again.

In addition, the Panel observes that the original goal was for the program to fund 2,000 Chairs. Due to turnover and delays in filling Chair positions, approximately 10 to 15 per cent of them are unoccupied at any one time.i As a result, the program budget was reduced by $35 million in 2012. However, the occupancy rate has continued to decline since then, with an all-time low of only 1,612 Chair positions (80.6 per cent) filled as of December 2016. The Panel is dismayed by this inefficiency, especially at a time when Tier 2 Chairs remain one of the only external sources of salary support for ECRs [early career researchers]—a group that represents the future of Canadian research and scholarship. (pp. 142-4 print; pp. 176-8 PDF)
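For what it’s worth, here’s a minimal back-of-the-envelope check of that 35 per cent figure (my own sketch, not the panel’s calculation). It assumes the increase is meant purely to offset compound inflation over the roughly 17 years the award values have been frozen; the report doesn’t say which price index it used, so the numbers are only illustrative:

```python
# Rough sanity check of the Naylor panel's proposed 35% top-up to CRC awards.
# Assumption (mine, not the report's): the top-up compensates for compound
# inflation over the ~17 years the award values have been frozen.

years = 17                 # award values unchanged since 2000 (per the report)
proposed_increase = 0.35   # the panel's recommended increase

# Average annual inflation rate implied by a cumulative 35% increase over 17 years
implied_inflation = (1 + proposed_increase) ** (1 / years) - 1
print(f"Implied average inflation: {implied_inflation:.2%} per year")  # about 1.8%

# What the frozen awards are worth today in constant-2000 dollars
tier1_nominal = 200_000    # Tier 1: $200,000 annually for seven years
tier2_nominal = 100_000    # Tier 2: $100,000 annually for five years
print(f"Tier 1 in 2000 dollars: ${tier1_nominal / (1 + proposed_increase):,.0f}")  # about $148,000
print(f"Tier 2 in 2000 dollars: ${tier2_nominal / (1 + proposed_increase):,.0f}")  # about $74,000
```

In other words, a cumulative 35 per cent increase works out to roughly 1.8 per cent inflation per year, broadly in line with the Bank of Canada’s target range; put another way, a flat nominal award has lost roughly a quarter of its purchasing power since 2000, which is the erosion the panel is trying to reverse.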

I think what you can see as a partial subtext in this report, and what I’m attempting to highlight here in ‘special cases’, is a balancing act between supporting a broad range of research inquiries and pouring huge sums of money into ‘important’ research inquiries in pursuit of high-impact outcomes.

Final comments

There are many things to commend in this report, including the writing style. The notions that more coordination is needed amongst the various granting agencies, that greater recognition (i.e., encouragement and funding opportunities) should be given to boundary-crossing research, and that we need more interprovincial collaboration are welcome. And yes, they want more money too. (That request is perfectly predictable. When was the last time a report suggested less funding?) Perhaps more tellingly, the request for money is buttressed with a plea to make it partisan-proof; in short, to keep funding from changing with the political tides.

One area that was not specifically mentioned, except when discussing prizes, was mathematics. I found that a bit surprising given how important the field of mathematics is to virtually all the ‘sciences’. A 2013 report, Spotlight on Science, suggests there’s a problem, as noted in my Oct. 9, 2013 posting about that report (where I also mention Canada’s PISA [Programme for International Student Assessment] scores from the OECD [Organization for Economic Cooperation and Development], which consistently show Canadian students at the age of 15 [grade 10] doing well),

… it appears that we have high dropout rates in the sciences and maths, from an Oct. 8, 2013 news item on the CBC (Canadian Broadcasting Corporation) website,

… Canadians are paying a heavy price for the fact that less than 50 per cent of Canadian high school students graduate with senior courses in science, technology, engineering and math (STEM) at a time when 70 per cent of Canada’s top jobs require an education in those fields, said report released by the science education advocacy group Let’s Talk Science and the pharmaceutical company Amgen Canada.

Spotlight on Science Learning 2013 compiles publicly available information about individual and societal costs of students dropping out STEM courses early.

Even though most provinces only require math and science courses until Grade 10, the report [Spotlight on Science, published by Let’s Talk Science and pharmaceutical company Amgen Canada] found students without Grade 12 math could expect to be excluded from 40 to 75 per cent of programs at Canadian universities, and students without Grade 11 could expect to be excluded from half of community college programs. [emphasis mine]

While I realize that education wasn’t the panel’s mandate, they do reference the topic elsewhere, and while secondary education is a provincial responsibility, there is a direct relationship between it and postsecondary education.

On the lack of imagination front, there was some mention of our aging population but not much planning or discussion about integrating older researchers into the grand scheme of things. It’s all very well to talk about the aging population but shouldn’t we start introducing these ideas into more of our discussions on such topics as research rather than only those discussions focused on aging?

Continuing on with the lack of imagination and lack of forethought, I was not able to find any mention of independent scholars. The assumption, as always, is that one is affiliated with an institution. Given the ways in which our work world is changing, with fewer jobs at the institutional level, it seems the panel was not focused on important and far-reaching trends. Also, there was no mention of technologies, such as artificial intelligence, that could affect basic research. One other thing from my wish list that didn’t get mentioned: art/science or SciArt. Although that really would have been reaching.

Weirdly, one of the topics the panel did note, the pitiful lack of interprovincial scientific collaboration, was completely ignored when it came time for recommendations.

Should you spot any errors in this commentary, please do drop me a comment.

Other responses to the report:

Nassif Ghoussoub (Piece of Mind blog; he’s a professor of mathematics at the University of British Columbia and attended one of the roundtable discussions held by the panel). As you might expect, he focuses on the money end of things in his May 1, 2017 posting.

You can find a series of essays about the report here under the title Responses to Naylor Panel Report** on the Canadian Science Policy Centre website.

There’s also this May 31, 2017 opinion piece by Jamie Cassels for The Vancouver Sun exhorting us to go forth and collaborate internationally, presumably with added funding for the University of Victoria, of which Cassels is the president and vice-chancellor. He seems not to have noticed that Canadians do much more poorly with interprovincial collaboration.

*ETA June 21, 2017: I’ve just stumbled across Ivan Semeniuk’s April 10, 2017 analysis (Globe and Mail newspaper) of the report. It’s substantive and well worth checking out.*

Again, here’s a link to the other parts:

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report) Commentaries

Part 1

Part 2

*’up’ added on June 8, 2017 at 15:10 hours PDT.

**’Science Funding Review Panel Report’ was changed to ‘Responses to Naylor Panel Report’ on June 22, 2017.