Tag Archives: University of Alberta (U of A)

Age of AI and Big Data – Impact on Justice, Human Rights and Privacy Zoom event on September 28, 2022 at 12 – 1:30 pm EDT

The Canadian Science Policy Centre (CSPC), in a September 15, 2022 announcement (received via email), described an event (Age of AI and Big Data – Impact on Justice, Human Rights and Privacy) centered on some of the latest government doings on artificial intelligence and privacy (Bill C-27),

In an increasingly connected world, we share a large amount of our data in our daily lives without our knowledge while browsing online, traveling, shopping, etc. More and more companies are collecting our data and using it to create algorithms or AI. The use of our data against us is becoming more and more common. The algorithms used may often be discriminatory against racial minorities and marginalized people.

As technology moves at a high pace, we have started to incorporate many of these technologies into our daily lives without understanding their consequences. These technologies have enormous impacts on our very own identity and collectively on civil society and democracy. 

Recently, the Canadian Government introduced the Artificial Intelligence and Data Act (AIDA) and Bill C-27 [which includes three acts in total] in parliament regulating the use of AI in our society. In this panel, we will discuss how AI and big data are affecting us, their impact on society, and how the new regulations affect us. 

Date: Sep 28
Time: 12:00 pm – 1:30 pm EDT
Event Category: Virtual Session

Register Here

For some reason, there was no information about the moderator and panelists, other than their names, titles, and affiliations. Here’s a bit more:

Moderator: Yuan Stevens (from her eponymous website’s About page), Note: Links have been removed,

Yuan (“You-anne”) Stevens (she/they) is a legal and policy expert focused on sociotechnical security and human rights.

She works towards a world where powerful actors—and the systems they build—are held accountable to the public, especially when it comes to marginalized communities. 

She brings years of international experience to her role at the Leadership Lab at Toronto Metropolitan University [formerly Ryerson University], having examined the impacts of technology on vulnerable populations in Canada, the US and Germany. 

Committed to publicly accessible legal and technical knowledge, Yuan has written for popular media outlets such as the Toronto Star and Ottawa Citizen and has been quoted in news stories by the New York Times, the CBC and the Globe & Mail.

Yuan is a research fellow at the Centre for Law, Technology and Society at the University of Ottawa and a research affiliate at Data & Society Research Institute. She previously worked at Harvard University’s Berkman Klein Center for Internet & Society during her studies in law at McGill University.

She has been conducting research on artificial intelligence since 2017 and is currently exploring sociotechnical security as an LL.M candidate at University of Ottawa’s Faculty of Law working under Florian Martin-Bariteau.

Panelist: Brenda McPhail (from her Centre for International Governance Innovation profile page),

Brenda McPhail is the director of the Canadian Civil Liberties Association’s Privacy, Surveillance and Technology Project. Her recent work includes guiding the Canadian Civil Liberties Association’s interventions in key court cases that raise privacy issues, most recently at the Supreme Court of Canada in R v. Marakah and R v. Jones, which focused on privacy rights in sent text messages; research into surveillance of dissent, government information sharing, digital surveillance capabilities and privacy in relation to emergent technologies; and developing resources and presentations to drive public awareness about the importance of privacy as a social good.

Panelist: Nidhi Hegde (from her University of Alberta profile page),

My research has spanned many areas such as resource allocation in networking, smart grids, social information networks, machine learning. Broadly, my interest lies in gaining a fundamental understanding of a given system and the design of robust algorithms.

More recently my research focus has been in privacy in machine learning. I’m interested in understanding how robust machine learning methods are to perturbation, and privacy and fairness constraints, with the goal of designing practical algorithms that achieve privacy and fairness.

Bio

Before joining the University of Alberta, I spent many years in industry research labs. Most recently, I was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where my team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, I spent many years in research labs in Europe working on a variety of interesting and impactful problems. I was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where I led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. I also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, and privacy in recommendations.

Panelist: Benjamin Faveri (from his LinkedIn page),

About

Benjamin Faveri is a Research and Policy Analyst at the Responsible AI Institute (RAII) [headquartered in Austin, Texas]. Currently, he is developing their Responsible AI Certification Program and leading it through Canada’s national accreditation process. Over the last several years, he has worked on numerous certification program-related research projects such as fishery economics and certification programs, police body-worn camera policy certification, and emerging AI certifications and assurance systems. Before his work at RAII, Benjamin completed a Master of Public Policy and Administration at Carleton University, where he was a Canada Graduate Scholar, Ontario Graduate Scholar, Social Innovation Fellow, and Visiting Scholar at UC Davis School of Law. He holds undergraduate degrees in criminology and psychology, finishing both with first class standing. Outside of work, Benjamin reads about how and why certification and private governance have been applied across various industries.

Panelist: Ori Freiman (from his eponymous website’s About page)

I research at the forefront of technological innovation. This website documents some of my academic activities.

My formal background is in Analytic Philosophy, Library and Information Science, and Science & Technology Studies. Until September 22′ [September 2022], I was a Post-Doctoral Fellow at the Ethics of AI Lab, at the University of Toronto’s Centre for Ethics. Before joining the Centre, I submitted my dissertation, about trust in technology, to The Graduate Program in Science, Technology and Society at Bar-Ilan University.

I have also found a number of overviews and bits of commentary about the Canadian federal government’s proposed Bill C-27, which I think of as an omnibus bill as it includes three proposed Acts.

The lawyers are excited, but I’m starting with the Responsible AI Institute’s (RAII) response, as one of the panelists (Benjamin Faveri) works for them and it offers a view from a closely neighbouring country. From a June 22, 2022 RAII news release, Note: Links have been removed,

Business Implications of Canada’s Draft AI and Data Act

On June 16 [2022], the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), as part of the broader Digital Charter Implementation Act 2022 (Bill C-27). Shortly thereafter, it also launched the second phase of the Pan-Canadian Artificial Intelligence Strategy.

Both RAII’s Certification Program, which is currently under review by the Standards Council of Canada, and the proposed AIDA legislation adopt the same approach of gauging an AI system’s risk level in context; identifying, assessing, and mitigating risks both pre-deployment and on an ongoing basis; and pursuing objectives such as safety, fairness, consumer protection, and plain-language notification and explanation.

Businesses should monitor the progress of Bill C-27 and align their AI governance processes, policies, and controls to its requirements. Businesses participating in RAII’s Certification Program will already be aware of requirements, such as internal Algorithmic Impact Assessments to gauge risk level and Responsible AI Management Plans for each AI system, which include system documentation, mitigation measures, monitoring requirements, and internal approvals.

The AIDA draft is focused on the impact of any “high-impact system”. Companies would need to assess whether their AI systems are high-impact; identify, assess, and mitigate potential harms and biases flowing from high-impact systems; and “publish on a publicly available website a plain-language description of the system” if making a high-impact system available for use. The government elaborated in a press briefing that it will describe in future regulations the classes of AI systems that may have high impact.

The AIDA draft also outlines clear criminal penalties for entities which, in their AI efforts, possess or use unlawfully obtained personal information or knowingly make available for use an AI system that causes serious harm or defrauds the public and causes substantial economic loss to an individual.

If enacted, AIDA would establish the Office of the AI and Data Commissioner, to support Canada’s Minister of Innovation, Science and Economic Development, with powers to monitor company compliance with the AIDA, to order independent audits of companies’ AI activities, and to register compliance orders with courts. The Commissioner would also help the Minister ensure that standards for AI systems are aligned with international standards.

Apart from being aligned with the approach and requirements of Canada’s proposed AIDA legislation, RAII is also playing a key role in the Standards Council of Canada’s AI accreditation pilot. The second phase of the Pan-Canadian Artificial Intelligence Strategy includes funding for the Standards Council of Canada to “advance the development and adoption of standards and a conformity assessment program related to AI.”

The AIDA’s introduction shows that while Canada is serious about governing AI systems, its approach to AI governance is flexible and designed to evolve as the landscape changes.

Charles Mandel’s June 16, 2022 article for Betakit (Canadian Startup News and Tech Innovation) provides an overview of the government’s overall approach to data privacy, AI, and more,

The federal Liberal government has taken another crack at legislating privacy with the introduction of Bill C-27 in the House of Commons.

Among the bill’s highlights are new protections for minors as well as Canada’s first law regulating the development and deployment of high-impact AI systems.

“It [Bill C-27] will address broader concerns that have been expressed since the tabling of a previous proposal, which did not become law,” a government official told a media technical briefing on the proposed legislation.

François-Philippe Champagne, the Minister of Innovation, Science and Industry, together with David Lametti, the Minister of Justice and Attorney General of Canada, introduced the Digital Charter Implementation Act, 2022. The ministers said Bill C-27 will significantly strengthen Canada’s private sector privacy law, create new rules for the responsible development and use of artificial intelligence (AI), and continue to put in place Canada’s Digital Charter.

The Digital Charter Implementation Act includes three proposed acts: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA), all of which have implications for Canadian businesses.

Bill C-27 follows an attempt by the Liberals to introduce Bill C-11 in 2020. The latter was the federal government’s attempt to reform privacy laws in Canada, but it failed to gain passage in Parliament after the then-federal privacy commissioner criticized the bill.

The proposed Artificial Intelligence and Data Act is meant to protect Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias.

For businesses developing or implementing AI this means that the act will outline criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

..

An AI and data commissioner will support the minister of innovation, science, and industry in ensuring companies comply with the act. The commissioner will be responsible for monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate.

The commissioner would also be expected to outline clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

Canada already collaborates on AI standards to some extent with a number of countries. Canada, France, and 13 other countries launched an international AI partnership to guide policy development and “responsible adoption” in 2020.

The federal government also has the Pan-Canadian Artificial Intelligence Strategy for which it committed an additional $443.8 million over 10 years in Budget 2021. Ahead of the 2022 budget, Trudeau [Canadian Prime Minister Justin Trudeau] had laid out an extensive list of priorities for the innovation sector, including tasking Champagne with launching or expanding national strategy on AI, among other things.

Within the AI community, companies and groups have been looking at AI ethics for some time. Scotiabank donated $750,000 in funding to the University of Ottawa in 2020 to launch a new initiative to identify solutions to issues related to ethical AI and technology development. And Richard Zemel, co-founder of the Vector Institute [formed as part of the Pan-Canadian Artificial Intelligence Strategy], joined Integrate.AI as an advisor in 2018 to help the startup explore privacy and fairness in AI.

When it comes to the Consumer Privacy Protection Act, the Liberals said the proposed act responds to feedback received on the proposed legislation, and is meant to ensure that the privacy of Canadians will be protected, and that businesses can benefit from clear rules as technology continues to evolve.

“A reformed privacy law will establish special status for the information of minors so that they receive heightened protection under the new law,” a federal government spokesperson told the technical briefing.

..

The act is meant to provide greater controls over Canadians’ personal information, including how it is handled by organizations as well as giving Canadians the freedom to move their information from one organization to another in a secure manner.

The act puts the onus on organizations to develop and maintain a privacy management program that includes the policies, practices and procedures put in place to fulfill obligations under the act. That includes the protection of personal information, how requests for information and complaints are received and dealt with, and the development of materials to explain an organization’s policies and procedures.

The bill also ensures that Canadians can request that their information be deleted from organizations.

The bill provides the privacy commissioner of Canada with broad powers, including the ability to order a company to stop collecting data or using personal information. The commissioner will be able to levy significant fines for non-compliant organizations—with fines of up to five percent of global revenue or $25 million, whichever is greater, for the most serious offences.

The proposed Personal Information and Data Protection Tribunal Act will create a new tribunal to enforce the Consumer Privacy Protection Act.

Although the Liberal government said it engaged with stakeholders for Bill C-27, the Council of Canadian Innovators (CCI) expressed reservations about the process. Nick Schiavo, CCI’s director of federal affairs, said it had concerns over the last version of privacy legislation, and had hoped to present those concerns when the bill was studied at committee, but the previous bill died before that could happen.
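For readers who want the fine ceiling quoted above made concrete, here is a minimal sketch of the rule “up to five percent of global revenue or $25 million, whichever is greater.” Note that `max_fine` is a hypothetical helper written for illustration only; it is not anything defined in Bill C-27 or an official calculator, and assumes both figures are in the same currency.

```python
def max_fine(global_revenue: float) -> float:
    """Ceiling on fines for the most serious offences, per the quoted rule:
    the greater of 5% of global revenue or $25 million."""
    return max(0.05 * global_revenue, 25_000_000)

# For a company with $1B in global revenue, 5% ($50M) exceeds the $25M figure.
print(max_fine(1_000_000_000))  # 50000000.0

# For a company with $100M in global revenue, 5% is only $5M,
# so the $25M figure sets the ceiling instead.
print(max_fine(100_000_000))    # 25000000.0
```

The “whichever is greater” wording is what makes this a `max()` rather than a choice left to the regulator: the percentage term scales with large firms, while the fixed dollar figure keeps the ceiling meaningful for smaller ones.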

Now the lawyers. Simon Hodgett, Kuljit Bhogal, and Sam Ip have written a June 27, 2022 overview, which highlights the key features from the perspective of Osler, a leading business law firm practising internationally from offices across Canada and in New York.

Maya Medeiros and Jesse Beatson authored a June 23, 2022 article for Norton Rose Fulbright, a global law firm, which notes a few ‘weak’ spots in the proposed legislation,

… While the AIDA is directed to “high-impact” systems and prohibits “material harm,” these and other key terms are not yet defined. Further, the quantum of administrative penalties will be fixed only upon the issuance of regulations. 

Moreover, the AIDA sets out publication requirements but it is unclear if there will be a public register of high-impact AI systems and what level of technical detail about the AI systems will be available to the public. More clarity should come through Bill C-27’s second and third readings in the House of Commons, and subsequent regulations if the bill passes.

The AIDA may have extraterritorial application if components of global AI systems are used, developed, designed or managed in Canada. The European Union recently introduced its Artificial Intelligence Act, which also has some extraterritorial application. Other countries will likely follow. Multi-national companies should develop a coordinated global compliance program.

I have two podcasts from Michael Geist, a lawyer and Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa.

  • June 26, 2022: The Law Bytes Podcast, Episode 132: Ryan Black on the Government’s Latest Attempt at Privacy Law Reform “The privacy reform bill that is really three bills in one: a reform of PIPEDA, a bill to create a new privacy tribunal, and an artificial intelligence regulation bill. What’s in the bill from a privacy perspective and what’s changed? Is this bill any likelier to become law than an earlier bill that failed to even advance to committee hearings? To help sort through the privacy aspects of Bill C-27, Ryan Black, a Vancouver-based partner with the law firm DLA Piper (Canada) …” (about 45 mins.)
  • August 15, 2022: The Law Bytes Podcast, Episode 139: Florian Martin-Bariteau on the Artificial Intelligence and Data Act “Critics argue that regulations are long overdue, but have expressed concern about how much of the substance is left for regulations that are still to be developed. Florian Martin-Bariteau is a friend and colleague at the University of Ottawa, where he holds the University Research Chair in Technology and Society and serves as director of the Centre for Law, Technology and Society. He is currently a fellow at the Harvard’s Berkman Klein Center for Internet and Society …” (about 38 mins.)

Coming soon: Responsible AI at the 35th Canadian Conference on Artificial Intelligence (AI) from 30 May to 3 June, 2022

35 years? How have I not stumbled on this conference before? Anyway, I’m glad to have the news (even if I’m late to the party), from the 35th Canadian Conference on Artificial Intelligence homepage,

The 35th Canadian Conference on Artificial Intelligence will take place virtually in Toronto, Ontario, from 30 May to 3 June, 2022. All presentations and posters will be online, with in-person social events to be scheduled in Toronto for those who are able to attend in-person. Viewing rooms and isolated presentation facilities will be available for all visitors to the University of Toronto during the event.

The event is collocated with the Computer and Robot Vision conferences. These events (AI·CRV 2022) will bring together hundreds of leaders in research, industry, and government, as well as Canada’s most accomplished students. They showcase Canada’s ingenuity, innovation and leadership in intelligent systems and advanced information and communications technology. A single registration lets you attend any session in the two conferences, which are scheduled in parallel tracks.

The conference proceedings are published on PubPub, an open-source, privacy-respecting, and open access online platform. They are submitted to be indexed and abstracted in leading indexing services such as DBLP, ACM, Google Scholar.

You can view last year’s [2021] proceedings here: https://caiac.pubpub.org/ai2021.

The 2021 proceedings appear to be open access.

I can’t tell if ‘Responsible AI’ has been included as a specific topic in previous conferences but 2022 is definitely hosting a couple of sessions based on that theme, from the Responsible AI activities webpage,

Keynote speaker: Julia Stoyanovich

New York University

“Building Data Equity Systems”

Equity as a social concept — treating people differently depending on their endowments and needs to provide equality of outcome rather than equality of treatment — lends a unifying vision for ongoing work to operationalize ethical considerations across technology, law, and society.  In my talk I will present a vision for designing, developing, deploying, and overseeing data-intensive systems that consider equity as an essential objective.  I will discuss ongoing technical work, and will place this work into the broader context of policy, education, and public outreach.

Biography: Julia Stoyanovich is an Institute Associate Professor of Computer Science & Engineering at the Tandon School of Engineering, Associate Professor of Data Science at the Center for Data Science, and Director of the Center for Responsible AI at New York University (NYU).  Her research focuses on responsible data management and analysis: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data science lifecycle.  She established the “Data, Responsibly” consortium and served on the New York City Automated Decision Systems Task Force, by appointment from Mayor de Blasio.  Julia developed and has been teaching courses on Responsible Data Science at NYU, and is a co-creator of an award-winning comic book series on this topic.  In addition to data ethics, Julia works on the management and analysis of preference and voting data, and on querying large evolving graphs. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst.  She is a recipient of an NSF CAREER award and a Senior Member of the ACM.

Panel on ethical implications of AI

Panelists

Luke Stark, Faculty of Information and Media Studies, Western University

Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, ON. His work interrogating the historical, social, and ethical impacts of computing and AI technologies has appeared in journals including The Information Society, Social Studies of Science, and New Media & Society, and in popular venues like Slate, The Globe and Mail, and The Boston Globe. Luke was previously a Postdoctoral Researcher in AI ethics at Microsoft Research, and a Postdoctoral Fellow in Sociology at Dartmouth College; he holds a PhD from the Department of Media, Culture, and Communication at New York University, and a BA and MA from the University of Toronto.

Nidhi Hegde, Associate Professor in Computer Science and Amii [Alberta Machine Intelligence Institute] Fellow at the University of Alberta

Nidhi is a Fellow and Canada CIFAR [Canadian Institute for Advanced Research] AI Chair at Amii and an Associate Professor in the Department of Computing Science at the University of Alberta. Before joining UAlberta, she spent many years in industry research labs. Most recently, she was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where her team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, she spent many years in research labs in Europe working on a variety of interesting and impactful problems. She was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where she led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. She also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, privacy, and recommendations. Nidhi is an associate editor of the IEEE/ACM Transactions on Networking, and an editor of the Elsevier Performance Evaluation Journal.

Karina Vold, Assistant Professor, Institute for the History and Philosophy of Science and Technology, University of Toronto

Dr. Karina Vold is an Assistant Professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. She is also a Faculty Affiliate at the U of T Schwartz Reisman Institute for Technology and Society, a Faculty Associate at the U of T Centre for Ethics, and an Associate Fellow at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. Vold specialises in Philosophy of Cognitive Science and Philosophy of Artificial Intelligence, and her recent research has focused on human autonomy, cognitive enhancement, extended cognition, and the risks and ethics of AI.

Elissa Strome, Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR

Elissa is Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR, working with research leaders across the country to implement Canada’s national research strategy in AI.  Elissa completed her PhD in Neuroscience from the University of British Columbia in 2006. Following a post-doc at Lund University, in Sweden, she decided to pursue a career in research strategy, policy and leadership. In 2008, she joined the University of Toronto’s Office of the Vice-President, Research and Innovation and was Director of Strategic Initiatives from 2011 to 2015. In that role, she led a small team dedicated to advancing the University’s strategic research priorities, including international institutional research partnerships, the institutional strategy for prestigious national and international research awards, and the establishment of the SOSCIP [Southern Ontario Smart Computing Innovation Platform] research consortium in 2012. From 2015 to 2017, Elissa was Executive Director of SOSCIP, leading the 17-member industry-academic consortium through a major period of growth and expansion, and establishing SOSCIP as Ontario’s leading platform for collaborative research and development in data science and advanced computing.

Tutorial on AI and the Law

Prof. Maura R. Grossman, University of Waterloo, and

Hon. Paul W. Grimm, United States District Court for the District of Maryland

AI applications are becoming more and more ubiquitous in almost every field of endeavor, and the same is true as to the legal industry. This panel, consisting of an experienced lawyer and computer scientist, and a U.S. federal trial court judge, will discuss how AI is currently being used in the legal profession, what adoption has been like since the introduction of AI to law in about 2009, what legal and ethical issues AI applications have raised in the legal system, and how a sitting trial court judge approaches AI evidence, in particular, the determination of whether to admit that AI evidence or not, when they are a non-expert.

How is AI being used in the legal industry today?

What has the legal industry’s reaction been to legal AI applications?

What are some of the biggest legal and ethical issues implicated by legal and other AI applications?

How does a sitting trial court judge evaluate AI evidence when making a determination of whether to admit that AI evidence or not?

What considerations go into the trial judge’s decision?

What happens if the judge is not an expert in AI?  Do they recuse?

You may recognize the name, Julia Stoyanovich, as she was mentioned here in my March 23, 2022 posting titled, The “We are AI” series gives citizens a primer on AI, a series of peer-to-peer workshops aimed at introducing the basics of AI to the public. There’s also a comic book series associated with it and all of the materials are available for free. It’s all there in the posting.

Getting back to the Responsible AI activities webpage, there’s one more activity, and this one seems a little less focused on experts,

Virtual Meet and Greet on Responsible AI across Canada

Given the many activities that are fortunately happening around the responsible and ethical aspects of AI here in Canada, we are organizing an event in conjunction with Canadian AI 2022 this year to become familiar with what everyone is doing and what activities they are engaged in.

It would be wonderful to have a unified community here in Canada around responsible AI so we can support each other and find ways to more effectively collaborate and synergize. We are aiming for a casual, discussion-oriented event rather than talks or formal presentations.

The meet and greet will be hosted by Ebrahim Bagheri, Eleni Stroulia and Graham Taylor. If you are interested in participating, please email Ebrahim Bagheri (bagheri@ryerson.ca).

Thank you to the co-chairs for getting the word out about the Responsible AI topic at the conference,

Responsible AI Co-chairs

Ebrahim Bagheri
Professor
Electrical, Computer, and Biomedical Engineering, Ryerson University
Website

Eleni Stroulia
Professor, Department of Computing Science
Acting Vice Dean, Faculty of Science
Director, AI4Society Signature Area
University of Alberta
Website

The organization which hosts these conferences has an almost palindromic abbreviation: CAIAC, for Canadian Artificial Intelligence Association (CAIA) or Association Intelligence Artificiel Canadien (AIAC). Yes, you do have to read it in both English and French, and the C at one end or the other gets knocked off depending on which language you’re using, which is why it’s only almost palindromic.

The CAIAC is almost 50 years old (under various previous names) and has its website here.

*April 22, 2022 at 1400 hours PT removed ‘the’ from this section of the headline: “… from 30 May to 3 June, 2022.” and removed period from the end.

Alberta adds a newish quantum nanotechnology research hub to Canada’s quantum computing research scene

One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact which sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.

Alberta’s quantum nanotechnology hub (graduate programme)

Intriguingly, it seems that Edmonton has aims beyond its (almost unnoticed) leadership in AI. Physicists at the University of Alberta hope to be just as successful as their AI brethren, according to a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal,

Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.

Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.

Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.

It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.

The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic material like photons or electrons behave both as particles and waves.

“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.

But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”

“Quantum computing used to be realm of science fiction, but now we’ve figured it out, it’s now a matter of engineering,” he said.

Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.

Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.

East vs. West—Again?

Ivan Semeniuk in his article, Quantum Supremacy, ignores any quantum research effort not located in either Waterloo, Ontario or metro Vancouver, British Columbia to describe a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),

 Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.

Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game when Blackberry was already in serious trouble due to a failure to recognize that the field they helped to create was moving in a new direction. If memory serves, they were trying to keep their technology wholly proprietary, which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.

Since then Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money as they named their Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details for Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,

History has repeatedly demonstrated the power of research in physics to transform society.  As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research.  That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough technologies.

Establishing a World Class Centre in Quantum Research:

The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics.  Perimeter was established in 2000 as an independent theoretical physics research institute.  Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth).  Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute.  In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it.  Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.

Perimeter is located in a Governor General’s Award-winning building in Waterloo.  Success in recruiting and resulting space requirements led to an expansion of the Perimeter facility.  A uniquely designed addition, which has been described as space-ship-like, was opened in 2011 as the Stephen Hawking Centre in recognition of one of the most famous physicists alive today who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.

Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo.  IQC was established as an experimental research institute focusing on quantum information.  Mike established IQC with an initial donation of $33.3 million.  Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives.  As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts.  Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.

Mike and Doug Fregin have been close friends since grade 5.  They are also co-founders of BlackBerry (formerly Research In Motion Limited).  Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million.  Since that time Doug has donated a total of $30 million to Perimeter Institute.  Separately, Doug helped establish the Waterloo Institute for Nanotechnology at the University of Waterloo with total gifts of $29 million.  As suggested by its name, WIN is devoted to research in the area of nanotechnology.  It has established as an area of primary focus the intersection of nanotechnology and quantum physics.

With a donation of $50 million from Mike which was matched by both the Government of Canada and the province of Ontario as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state of the art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world.  QNC was opened in September 2012 and houses researchers from both IQC and WIN.

Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:

For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper.  That said, converting these theories to experimentally demonstrable discoveries has, putting it mildly, been a challenge.  Many naysayers have suggested that achieving these discoveries was not possible and even the believers suggested that it could likely take decades to achieve these discoveries.  Recently, a buzz has been developing globally as experimentalists have been able to achieve demonstrable success with respect to Quantum Information based discoveries.  Local experimentalists are very much playing a leading role in this regard.  It is believed by many that breakthrough discoveries that will lead to commercialization opportunities may be achieved in the next few years and certainly within the next decade.

Recognizing the unique challenges for the commercialization of quantum technologies (including risk associated with uncertainty of success, complexity of the underlying science and high capital / equipment costs) Mike and Doug have chosen to once again lead by example.  The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers that develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications.  Their goal in establishing this Fund is to lead in the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.

Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),

… as happened with Blackberry, the world is once again catching up. While Canada’s funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop computers and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country’s quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.

Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?

Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.

Semeniuk offers an overview of the D-Wave Systems story,

D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …

The group soon concluded that the kind of machine most scientists were pursuing based on so-called gate-model architecture was decades away from being realized—if ever. …

Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the applications that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”

D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.

It seems Lazaridis is not the only one who likes to hold company information tightly.
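For readers wondering what “quantum annealing” actually computes: machines like D-Wave’s are fed optimization problems expressed as a QUBO (quadratic unconstrained binary optimization), where the goal is to find the binary vector x that minimizes xᵀQx. Here’s a minimal classical sketch of that formulation, solved by brute force; the toy matrix and function names are my own illustration, not D-Wave’s actual software or API.

```python
# Quantum annealers search for the binary vector x minimizing the
# "energy" x^T Q x of a QUBO problem. This tiny classical brute-force
# version shows the formulation; the hardware explores the same
# energy landscape physically rather than by enumeration.
from itertools import product

def solve_qubo(Q):
    """Exhaustively minimize x^T Q x over binary vectors (tiny n only)."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy problem: choose exactly one of two options. The diagonal rewards
# picking an option; the off-diagonal penalizes picking both.
Q = [[-1, 2],
     [0, -1]]
x, e = solve_qubo(Q)  # minimum energy -1, at (0, 1) or (1, 0)
```

Real problems (like the Volkswagen traffic example below) are encoded the same way, just with thousands of variables, which is exactly where exhaustive search fails and annealing is supposed to help.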

Back to Semeniuk and D-Wave,

Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15 million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.

But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …

Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …

I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),

Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.

There’s a lot more to Semeniuk’s article but this is the last excerpt,

The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].

I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?

In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).

Finally, you can find Semeniuk’s October 2017 article here but be aware it’s behind a paywall.

Whither we goest?

Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate programme, it’s early days yet and no one should ever count out Alberta.

Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.