A new AI tool that picks out bird and amphibian sounds in audio recordings could improve how ecologists monitor and study Canada’s wildlife.
“HawkEars is a software package that analyzes audio recordings to identify bird and amphibian species, and it is trained on species that occur in Canada,” says Jan Huus, a retired software developer and avid bird watcher who created the tool. After reading about the research of ecologist Elly Knight, an adjunct professor in the Department of Biological Sciences, Huus connected with her, and the two have been collaborating ever since, with support from the Alberta Biodiversity Monitoring Institute.
“These acoustic cues have so much information in them because it’s essentially the currency the birds are communicating in,” says Knight, co-director of the Boreal Avian Modelling Centre with Biodiversity Pathways.
Although not very common, Northern Goshawks can be spotted in British Columbia, especially in the provincial parks. They are mostly spotted from September to February.
Northern Goshawks are the bigger and fiercer relative of the Sharp-shinned and Cooper’s Hawks. They are mostly gray with short, broad wings and a long tail and have a white stripe over their yellow eyes.
…
Northern Goshawks are residents in Alaska, Canada, and the mountainous west. Some younger birds may migrate to the central states during the winter.
They live in large forests, so they are hard to find, especially as they are very secretive and can be aggressive if you get too close to a nest.
Northern Goshawks live in large tracts of mostly coniferous or mixed forests. They watch for prey on high perches and mostly eat medium-sized birds and small mammals.
…
Getting back to HawkEars, Evan Cruickshank’s September 2, (?) 2025 article for The Gateway (University of Alberta’s student newspaper) provides a few more details about HawkEars. Note: Links have been removed,
An artificial intelligence (AI) tool developed during the COVID-19 pandemic is changing how Canadian scientists listen to wildlife. It’s also transforming what they learn from it. HawkEars can track species at risk, assess phenology shifts due to climate change, and fill data gaps for nocturnal or elusive species.
HawkEars identifies amphibians and birds from audio recordings using spectrograms. Jan Hughes [sic], a retired programmer and bird watcher, created the technology during the pandemic. Elly Knight, a professor in the University of Alberta department of biological sciences, began collaborating with Hughes on HawkEars after he reached out to her.
Knight, already experienced with passive acoustic monitoring and boreal bird ecology, saw the tool’s potential and jumped on board. With support from the Alberta Biodiversity Monitoring Institute, they’ve been working together ever since.
The AI tool analyzes sound spectrograms, meaning it does not listen to raw audio but rather examines a visual representation of each sound’s frequencies over time. HawkEars is able to identify 344 species of birds and 13 species of amphibians.
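For readers curious what “analyzing spectrograms” actually involves, here is a minimal NumPy sketch of turning a recording into a spectrogram, the time-by-frequency image a classifier like HawkEars would then examine. This is an illustration only: the window sizes, the synthetic test tone, and the `spectrogram` helper are my own assumptions, not HawkEars’ actual code or parameters.

```python
import numpy as np

def spectrogram(audio, n_fft=512, hop=256):
    """Short-time Fourier transform magnitudes: rows are time frames,
    columns are frequency bins (illustrative parameters, not HawkEars')."""
    window = np.hanning(n_fft)
    frames = [audio[i:i + n_fft] * window
              for i in range(0, len(audio) - n_fft, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

# A one-second 440 Hz tone sampled at 16 kHz stands in for a field recording.
sr = 16000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))

# Averaged over time, the energy should peak in the bin nearest 440 Hz.
peak_bin = spec.mean(axis=0).argmax()
print(round(peak_bin * sr / 512))  # frequency of the strongest bin, in Hz
```

A bird call traces a distinctive shape across such an image, which is why image-style neural networks work well on audio.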
Monitoring wildlife like never before
According to WILDLABS, tools like HawkEars allow researchers to monitor wildlife at a broad scale and in real time. Having faster and more scalable tools speeds up research, as manual fieldwork is time-intensive. With the United Nations (UN) stating that ecosystems are under increasing pressure from climate change and biodiversity loss, these tools are becoming incredibly valuable.
HawkEars also allows researchers to monitor shifts in the distribution of species, behaviour, and abundance within our forests.
“The best approach is a consensus between the AI and the human. If you aggregate the positive detections between the two, you get a better data set than if you just have a human do it,” Knight said.
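Knight’s consensus idea, aggregating the positive detections from the human and the AI, can be sketched in a few lines. The species names and recording minutes below are invented for illustration; they are not from any HawkEars data set.

```python
# Hypothetical detections as (species, minute-of-recording) pairs.
human_detections = {("Olive-sided Flycatcher", 3), ("Swainson's Thrush", 7)}
ai_detections = {("Swainson's Thrush", 7), ("Boreal Chickadee", 12)}

# Aggregating positives from both sources yields a fuller data set
# than either the human or the AI alone.
consensus = human_detections | ai_detections

# Detections both sources agree on form a higher-confidence subset.
agreed = human_detections & ai_detections

print(len(consensus), len(agreed))
```

The union captures what each detector missed, while the intersection flags the records least likely to need re-checking.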
According to Knight, it’s important to have a Canadian-specific classifier, as tools like BirdNET and Perch rely on training data sourced from the United States (U.S.). They are less effective in Canada because of how different our species diversity is in the boreal forests.
HawkEars trains exclusively on Canadian data to ensure accurate identification of local species. This makes it ideal for researchers at the U of A. HawkEars is significantly more accurate than other classifiers when used in Canada.
…
It’s important to note that HawkEars is not perfect, Knight said. AI struggles in the same way that humans might. Knight specifically mentioned that certain groups of birds, such as sparrows and warblers, are especially difficult to differentiate because of how similar their calls are.
While AI is powerful, it cannot replace trained human listeners. According to Knight, there needs to be a combined effort between humans and AI for the most accurate results.
“There are certainly cases where the AI does [sic] a better choice than a human … but on average, it’s just not as good,” Knight said.
HawkEars is widely available to both researchers and the public. Knight hopes it will help decentralize data collection.
HawkEars: A regional, high-performance avian acoustic classifier by Jan Huus, Kevin G. Kelly, Erin M. Bayne, Elly C. Knight. Ecological Informatics, Volume 87, July 2025, 103122. DOI: https://doi.org/10.1016/j.ecoinf.2025.103122 Published under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
This paper is open access.
For the curious, you can find HawkEars (hosted by Jan Huus?) on GitHub here.
This is going to be a jam-packed posting with the AI experts at the Canadian Science Policy Centre (CSPC) virtual panel, a look back at a ‘testy’ exchange between Yoshua Bengio (one of Canada’s godfathers of AI) and a former diplomat from China, an update on Canada’s Minister of Artificial Intelligence and Digital Innovation, Evan Solomon and his latest AI push, and a missive from the BC artificial intelligence community.
A Canadian Science Policy Centre AI panel on November 11, 2025
The Canadian Science Policy Centre (CSPC) provides an October 9, 2025 update on an upcoming virtual panel being held on Remembrance Day,
[AI-Driven Misinformation Across Sectors Addressing a Cross-Societal Challenge]
Upcoming Virtual Panel[s]: November 11 [2025]
Artificial Intelligence is transforming how information is created and trusted, offering immense benefits across sectors like healthcare, education, finance, and public discourse—yet also amplifying risks such as misinformation, deepfakes, and scams that threaten public trust. This panel brings together experts from diverse fields [emphasis mine] to examine the manifestations and impacts of AI-driven misinformation and to discuss policy, regulatory, and technical solutions [emphasis mine]. The conversation will highlight practical measures—from digital literacy and content verification to platform accountability—aimed at strengthening resilience in Canada and globally.
For more information on the panel and to register, click below.
Odd timing for this event. Moving on, I found more information on the CSPC’s webpage for this event, Note: Unfortunately, links to the moderator’s and speakers’ bios could not be copied here,
…
This panel brings together cross-sectoral experts to examine how AI-driven misinformation manifests in their respective domains, its consequences, and how policy, regulation, and technical interventions can help mitigate harm. The discussion will explore practical pathways for action, such as digital literacy, risk audits, content verification technologies, platform responsibility, and regulatory frameworks. Attendees will leave with a nuanced understanding of both the risks and the resilience strategies being explored in Canada and globally.
[Moderator]
Dr. Michael Geist
Canada Research Chair in Internet & E-commerce Law, University of Ottawa See Bio
[Panelists]
Dr. Plinio Morita
Associate Professor / Director, Ubiquitous Health Technology Lab, University of Waterloo …
Dr. Nadia Naffi
Université Laval — Associate Professor of Educational Technology and expert on building human agency against AI-augmented disinformation and deepfakes. See Bio
Dr. Jutta Treviranus
Director, Inclusive Design Research Centre, OCAD University; expert on AI misinformation in the education sector and schools. See Bio
Dr. Fenwick McKelvey
Concordia University — Expert in political bots, information flows, and Canadian tech governance. See Bio
Michael Geist has his own blog/website featuring posts on his areas of interest and his podcast, Law Bytes. Jutta Treviranus is mentioned in my October 13, 2025 posting as a participant in “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference (October 23 – 24, 2025) and arts festival at the University of Toronto (scroll down to find it). She’s scheduled for a session on Thursday, October 23, 2025.
China, Canada, and the AI Action Summit in February 2025
Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,
A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.
Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.
The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].
The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.
Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.
She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.
China now has its own equivalent; but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.
The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.
Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]
A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.
The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.
…
She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.
She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.
“The Chinese move faster [than the west] but it’s full of problems,” she said.
Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.
Most of the US tech giants do not share the tech which drives their products.
Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.
But Prof Bengio disagreed.
His view was that open source also left the tech wide open for criminals to misuse.
He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.
…
Interesting, non? You can read more about Bengio’s views in an October 1, 2025 article by Rae Witte for Futurism.
In a Policy Forum, Yue Zhu and colleagues provide an overview of China’s emerging regulation for artificial intelligence (AI) technologies and its potential contributions to global AI governance. Open-source AI systems from China are rapidly expanding worldwide, even as the country’s regulatory framework remains in flux. In general, AI governance suffers from fragmented approaches, a lack of clarity, and difficulty reconciling innovation with risk management, making global coordination especially hard in the face of rising controversy. Although no official AI law has yet been enacted, experts in China have drafted two influential proposals – the Model AI Law and the AI Law (Scholar’s Proposal) – which serve as key references for ongoing policy discussions. As the nation’s lawmakers prepare to draft a consolidated AI law, Zhu et al. note that the decisions will shape not only China’s innovation, but also global collaboration on AI safety, openness, and risk mitigation. Here, the authors discuss China’s emerging AI regulation as structured around 6 pillars, which, combined, stress exemptive laws, efficient adjudication, and experimentalist requirements, while safeguarding against extreme risks. This framework seeks to balance responsible oversight with pragmatic openness, allowing developers to innovate for the long term and collaborate across the global research community. According to Zhu et al., despite the need for greater clarity, harmonization, and simplification, China’s evolving model is poised to shape future legislation and contribute meaningfully to global AI governance by promoting both safety and innovation at a time when international cooperation on extreme risks is urgently needed.
Here’s a link to and a citation for the paper,
China’s emerging regulation toward an open future for AI by Yue Zhu, Bo He, Hongyu Fu, Naying Hu, Shaoqing Wu, Taolue Zhang, Xinyi Liu, Gang Xu, Linghan Zhang, and Hui Zhou. Science, 9 Oct 2025, Vol 390, Issue 6769, pp. 132-135. DOI: 10.1126/science.ady7922
This paper is behind a paywall.
No mention of Fu Ying or China’s ‘The AI Development and Safety Network’ but perhaps that’s in the paper.
Canada and its Minister of AI and Digital Innovation
Evan Solomon (born April 20, 1968) is a Canadian politician and broadcaster who has been the minister of artificial intelligence and digital innovation since May 2025. A member of the Liberal Party, Solomon was elected as the member of Parliament (MP) for Toronto Centre in the April 2025 election.
He was the host of The Evan Solomon Show on Toronto-area talk radio station CFRB, and a writer for Maclean’s magazine. He was the host of CTV’s national political news programs Power Play and Question Period. In October 2022, he moved to New York City to accept a position with the Eurasia Group as publisher of GZERO Media. Solomon continued with CTV News as a “special correspondent” reporting on Canadian politics and global affairs.
…
Had you asked me what background one needs to be a ‘Minister of Artificial Intelligence and Digital Innovation’, media would not have been my first thought. That said, sometimes people can surprise you.
Solomon appears to be an enthusiast if a June 10, 2025 article by Anja Karadeglija for The Canadian Press is to be believed,
Canada’s new minister of artificial intelligence said Tuesday [June 10, 2025] he’ll put less emphasis on AI regulation and more on finding ways to harness the technology’s economic benefits [emphases mine].
In his first speech since becoming Canada’s first-ever AI minister, Evan Solomon said Canada will move away from “over-indexing on warnings and regulation” to make sure the economy benefits from AI.
His regulatory focus will be on data protection and privacy, he told the audience at an event in Ottawa Tuesday morning organized by the think tank Canada 2020.
Solomon said regulation isn’t about finding “a saddle to throw on the bucking bronco called AI innovation. That’s hard. But it is to make sure that the horse doesn’t kick people in the face. And we need to protect people’s data and their privacy.”
The previous government introduced a privacy and AI regulation bill that targeted high-impact AI systems. It did not become law before the election was called.
That bill is “not gone, but we have to re-examine in this new environment where we’re going to be on that,” Solomon said.
He said constraints on AI have not worked at the international level.
“It’s really hard. There’s lots of leakages,” he said. “The United States and China have no desire to buy into any constraint or regulation.”
That doesn’t mean regulation won’t exist, he said, but it will have to be assembled in steps.
…
Solomon’s comments follow a global shift among governments to focus on AI adoption and away from AI safety and governance.
The first global summit focusing on AI safety was held in 2023 as experts warned of the technology’s dangers — including the risk that it could pose an existential threat to humanity. At a global meeting in Korea last year, countries agreed to launch a network of publicly backed safety institutes.
But the mood had shifted by the time this year’s AI Action Summit began in Paris. …
…
Solomon outlined several priorities for his ministry — scaling up Canada’s AI industry, driving adoption and ensuring Canadians have trust in and sovereignty over the technology.
He said that includes supporting Canadian AI companies like Cohere, which “means using government as essentially an industrial policy to champion our champions.”
The federal government is putting together a task force to guide its next steps on artificial intelligence, and Artificial Intelligence Minister Evan Solomon is promising an update to the government’s AI strategy.
Solomon told the All In artificial intelligence conference in Montreal on Wednesday [September 24, 2025] that the “refreshed” strategy will be tabled later this year, “almost two years ahead of schedule.”
…
“We need to update and move quickly,” he said in a keynote speech at the start of the conference.
The task force will include about 20 representatives from industry, academia and civil society. The government says it won’t reveal the membership until later this week.
Solomon said task force members are being asked to consult with their networks, suggest “bold, practical” ideas and report back to him in November [2025].
The group will look at various topics related to AI, including research, adoption, commercialization, investment, infrastructure, skills, and safety and security. The government is also planning to solicit input from the public. [emphasis mine]
Canada was the first country to launch a national AI strategy [the Pan-Canadian AI Strategy announced in 2017], which the government updated in 2022. The strategy focuses on commercialization, the development and adoption of AI standards, talent and research.
Solomon also teased a “major quantum initiative” coming in October [2025?] to ensure both quantum computing talent and intellectual property stay in the country.
Solomon called digital sovereignty “the most pressing policy and democratic issue of our time” and stressed the importance of Canada having its own “digital economy that someone else can’t decide to turn off.”
Solomon said the federal government’s recent focus on major projects extends to artificial intelligence. He compared current conversations on Canada’s AI framework to the way earlier generations spoke about a national railroad or highway.
…
He said his government will address concerns about AI by focusing on privacy reform and modernizing Canada’s 25-year-old privacy law.
“We’re going to include protections for consumers who are concerned about things like deep fakes and protection for children, because that’s a big, big issue. And we’re going to set clear standards for the use of data so innovators have clarity to unlock investment,” Solomon said.
…
The government is consulting with the public? Experience suggests that by the time the consultation takes place, all the major decisions will have been made; the public consultation comments will be mined so officials can make some minor, unimportant tweaks.
Canada’s AI Task Force and parts of the Empire Club talk are revealed in a September 26, 2025 article by Alex Riehl for BetaKit,
Inovia Capital partner Patrick Pichette, Cohere chief artificial intelligence (AI) officer Joelle Pineau, and Build Canada founder Dan Debow are among 26 members of AI minister Evan Solomon’s AI Strategy Task Force trusted to help the federal government renew its AI strategy.
Solomon revealed the roster, filled with leading Canadian researchers and business figures, while speaking at the Empire Club in Toronto on Friday morning [September 26, 2025]. He teased its formation at the ALL IN conference earlier this week [September 24, 2025], saying the team would include “innovative thinkers from across the country.”
The group will have 30 days to add to a collective consultation process in areas including research, talent, commercialization, safety, education, infrastructure, and security.
…
The full AI Strategy Task Force is listed below; each member will consult their network on specific themes.
Research and Talent
Gail Murphy, professor of computer science and vice-president – research and innovation, University of British Columbia and vice-chair at the Digital Research Alliance of Canada
Diane Gutiw, VP – global AI research lead, CGI Canada and co-chair of the Advisory Council on AI
Michael Bowling, professor of computer science and principal investigator – Reinforcement Learning and Artificial Intelligence Lab, University of Alberta and research fellow, Alberta Machine Intelligence Institute and Canada CIFAR AI chair
Arvind Gupta, professor of computer science, University of Toronto
Adoption across industry and governments
Olivier Blais, co-founder and VP of AI, Moov and co-chair of the Advisory Council on AI
Cari Covent, technology executive
Dan Debow, chair of the board, Build Canada
Commercialization of AI
Louis Têtu, executive chairman, Coveo
Michael Serbinis, founder and CEO, League and board chair of the Perimeter Institute
Adam Keating, CEO and Founder, CoLab
Scaling our champions and attracting investment
Patrick Pichette, general partner, Inovia Capital
Ajay Agrawal, professor of strategic management, University of Toronto, founder, Next Canada and founder, Creative Destruction Lab
Sonia Sennik, CEO, Creative Destruction Lab
Ben Bergen, president, Council of Canadian Innovators
Building safe AI systems and public trust in AI
Mary Wells, dean of engineering, University of Waterloo
Joelle Pineau, chief AI officer, Cohere
Taylor Owen, founding director, Center [sic] for Media, Technology and Democracy [McGill University]
Education and Skills
Natiea Vinson, CEO, First Nations Technology Council
Alex Laplante, VP – cash management technology Canada, Royal Bank of Canada and board member at Mitacs
David Naylor, professor of medicine – University of Toronto
Infrastructure
Garth Gibson, chief technology and AI officer, VDURA
Ian Rae, president and CEO, Aptum
Marc Etienne Ouimette, chair of the board, Digital Moment and member, OECD One AI Group of Experts, affiliate researcher, sovereign AI, Cambridge University Bennett School of Public Policy
Security
Shelly Bruce, distinguished fellow, Centre for International Governance Innovation
James Neufeld, founder and CEO, Samdesk
Sam Ramadori, co-president and executive director, LawZero
With files from Josh Scott
If you have the time, Riehl’s September 26, 2025 article offers more depth than may be apparent in the excerpts I’ve chosen.
It’s been a while since I’ve seen Arvind Gupta’s name. I’m glad to see he’s part of this Task Force (Research and Talent). The man was treated quite shamefully at the University of British Columbia. (For the curious, this August 18, 2015 article by Ken MacQueen for Maclean’s Magazine presents a somewhat sanitized [in my opinion] review of the situation.)
One final comment: the experts on the virtual panel and the members of Solomon’s Task Force are largely from Ontario and Québec. There is minor representation from other parts of the country, but it is minor.
British Columbia wants entry into the national AI discussion
Just after I finished writing up this post, I received Kris Krug’s (techartist, quasi-sage, cyberpunk anti-hero from the future) October 14, 2025 communication (received via email) regarding an initiative from the BC + AI community,
Growth vs Guardrails: BC’s Framework for Steering AI
Our open letter to Minister Solomon shares what we’ve learned building community-led AI governance and how BC can help.
Ottawa created a Minister of Artificial Intelligence and just launched a national task force to shape the country’s next AI strategy. The conversation is happening right now about who gets compute, who sets the rules, and whose future this technology will serve.
Our new feature, Growth vs Guardrails [see link to letter below for ‘guardrails’], is already making the rounds in those rooms. The message is simple: if Ottawa’s foot is on the gas, BC is the steering wheel and the brakes. We can model a clean, ethical, community-led path that keeps power with people and place.
This is the time to show up together. Not as scattered voices, but as a connected movement with purpose, vision, and political gravity.
Over the past few months, almost 100 of us have joined the new BC + AI Ecosystem Association non-profit as Founding Members. Builders. Artists. Researchers. Investors. Educators. Policymakers. People who believe that tech should serve communities, not the other way around.
Now we’re opening the door wider. Join and you’ll be part of the core group that built this from the ground up. Your membership is a declaration that British Columbia deserves to shape its own AI future with ethics, creativity, and care.
If you’ve been watching from the sidelines, this is the time to lean in. We don’t do panels. We do portals. And this is the biggest one we’ve opened yet.
See you inside,
Kris Krüg Executive Director BC + AI Ecosystem Association kk@bc-ai.ca | bc-ai.ca
Canada just spun up a 30-day sprint to shape its next AI strategy. Minister Evan Solomon assembled 26 experts (mostly industry and academia) to advise on research, adoption, commercialization, safety, skills, and infrastructure.
On paper, it’s a pivot moment. In practice, it’s already drawing fire. Too much weight on scaling, not enough on governance. Too many boardrooms, not enough frontlines. Too much Ottawa, not enough ground truth.
…
This is Canada’s chance to reset the DNA of its AI ecosystem.
But only if we choose regeneration over extraction, sovereign data governance over corporate capture, and community benefit over narrow interests.
…
The Problem With The Task Force
Research says: The group’s stacked with expertise. But critics flag the imbalance. Where’s healthcare? Where’s civil society beyond token representation? Where are the people who’ll feel AI’s impact first: frontline workers, artists, community organizers?
…
The worry: Commercialization and scaling overshadow public trust, governance, and equitable outcomes. Again.
The numbers back this up: Only 24% of Canadians have AI training. Just 38% feel confident in their knowledge. Nearly two-thirds see potential harm. 71% would trust AI more under public regulation.
We’re building a national strategy on a foundation of low literacy and eroding trust. That’s not a recipe for sovereignty. That’s a recipe for capture.
Principles for a National AI Strategy: What BC + AI Stands For
I was very happy to see that a new edition of ‘The State of Science and Technology in Canada‘ (or a similarly named report) is on its way from the Council of Canadian Academies (CCA). Here’s more about that report and others from the CCA’s May 2025 issue of The Advance (received via email),
Building Canada’s “report card” for science, technology, and innovation
In September 2006, the CCA published The State of Science and Technology in Canada—an expert assessment of the scientific disciplines and technological applications in which Canada excels. Commissioned by Industry Canada [or Innovation, Science and Economic Development Canada or ISED], the report provided a much-needed foundation for benchmarking Canada’s strengths in science and technology; previously, the report noted, there was “almost no published literature focused specifically on strengths of the Canadian science and technology system overall, and particularly not at a reasonably fine level of detail.” The CCA has built upon its inaugural study ever since, steadily reassessing Canada’s science and technology strengths as well as the relationships between research, development, and innovation.
With our next assessment of Canada’s science, technology, and innovation ecosystem underway, with support from ISED’s Strategic Science Fund, we are revisiting our flagship assessments and their impacts on our collective understanding of science, technology, and innovation.
For the inaugural edition of The State of Science and Technology in Canada, the CCA recruited a ten-person expert panel chaired by Elizabeth Dowdeswell, O.C. In order to create a well-rounded picture of Canadian innovation, the panel analyzed patent grants and citations as well as peer-reviewed journal publications; conducted an extensive literature review; and surveyed more than 1,500 experts on the strength and trajectory of Canada’s overall science and technology efforts and areas of note. The resulting report provided a broad sweep of expertise as well as granular information.
Since then, the CCA has published three additional assessments of Canada’s science, technology, and innovation performance, all at the request of Innovation, Science and Economic Development Canada. They include a second volume of The State of Science and Technology in Canada (2012); The State of Industrial Research and Development in Canada (2013); and Competing in a Global Innovation Economy (2018). This year, the CCA will publish a fifth installment focused on the current state of science, technology, and innovation.
[graphic]
Over the years, the CCA’s science and technology assessments have documented Canada’s reputation for world-leading infrastructure, high levels of educational attainment, and substantial research output and impact. They have also detailed declining R&D investment and intensity, and the accelerating outflow of Canada-born patents. They have identified sectors of R&D strength, from computer systems design to scientific research and development, and research-publication strengths in fields such as clinical medicine, public health, and the performing arts. Each assessment provides a multi-part assessment of Canada’s progress to-date, and an actionable platform for improving national prosperity, competitiveness, and well-being.
The CCA’s assessments have evolved in tandem with Canada’s science and technology landscape, expanding and refining the metrics on which we draw to detail its strengths and challenges. Our efforts include a Subcommittee on Science and Technology Research Methods, to provide recommendations for improving methodologies and closing data gaps. From research output to patents, from public- and private-sector investment to fields of global renown, each new assessment provides advanced methodologies for understanding Canadian innovation as it unfolds.
There’s more in the CCA’s May 2025 issue of The Advance,
Members of the Expert Panel on Balancing Research Security and Open Science for Dual-Use Research of Concern gathered in Ottawa in early May for a panel meeting, ahead of the project’s planned Fall 2025 release.
CIFAR is now accepting applications for its Neuroscience of Consciousness Winter School, to be held in Montebello, Quebec. CIFAR describes the school as “a unique, three-day event where tomorrow’s neuroscience leaders work closely with world-class researchers.” The deadline for applications is June 23 [2025].
For the Conversation, a team of researchers examines Canada’s “fragmented immunization data” and a drop in vaccine confidence, then asks if the country is prepared for a new pandemic. “In 2024, 17 per cent of Canadian parents were ‘really against’ vaccinating their children, up from four per cent in 2019,” write the researchers, drawing on work by the CCA’s Expert Panel on the Socioeconomic Impacts of Science and Health Misinformation. (Noni MacDonald, a co-author, served as a member of the CCA panel.)
TamIA, the first piece of the Pan-Canadian AI Compute Environment (PAICE), launched at Université Laval. TamIA is a computing cluster that will work in tandem with infrastructure at the University of Alberta and the University of Toronto. Frédéric Chanay-Savoyen, Vice President of AI Solutions and Technology at Mila, a PAICE partner, says TamIA’s increased computing capacity “makes it possible to develop an environment that fosters interdisciplinary collaboration on a national scale and that will allow Quebec and Canada to maintain its position as a leader in the field of cutting-edge AI research.”
The Open Notebook, a nonprofit that supports science journalists, recently asked a group of reporters how they navigate research reports, especially those that are hundreds (or thousands) of pages long. Their responses hold insights for all members of the science media.
Dr. Henry Friesen, best known for his discovery of prolactin and his trailblazing research on human growth hormones, died on April 30 at 90 years old. Friesen, a former member of the CCA’s Board of Governors, helped lead the development of the Canadian Institutes of Health Research and is a member of the Canadian Medical Hall of Fame, among many other honours.
This February 10, 2025 article on phys.org was a bit of a surprise as I haven’t seen Marshall McLuhan mentioned in a very long time, Note 1: Links have been removed, Note 2: There’s more (not much) about Marshall McLuhan in the next excerpt,
In recent decades, museums and galleries have made a sensory turn when it comes to designing displays and engaging visitors.
Museums like the Metropolitan in New York offer multi-sensory activities so visitors can smell, touch and hear art, and museums have curated exhibitions about the senses.
The move is part of larger efforts to make public institutions more accessible.
It’s also aligned with museum and gallery institutional efforts to decolonize governance structures, and widen opportunities for museum and gallery participation from Indigenous and Global South artists and their communities, who have long been marginalized. Museums and galleries have sought to shape policy, reinterpret and repatriate artifacts stolen from Indigenous and Global South societies in response to social movements, community advocacy and decolonial theory.
Thinkers like Taiaiake Alfred have written about Indigenous cultural resurgence and resistance to colonialism, and shaped a questioning of curatorial practices.
As anthropologist David Howes argues, museums’ questioning of traditional forms of museum display and visitor engagement is aligned with the kind of re-ordering traditionally associated with unsettling colonial regimes.
In my forthcoming study, Harley Parker: The McLuhan of the Museum, I examine the influence of exhibition designer and painter Harley Parker (1915-92) on this “sensory turn” in museum curatorial practices.
…
A February 10, 2025 article for The Conversation by Gary A Genosko (Professor of Communication and Digital Media, Ontario Tech University), which originated the phys.org piece, delves further into the topic of his forthcoming book (publication date: May 15, 2025), Note: Links have been removed,
Parker was head of design at the Royal Ontario Museum [ROM] for 11 years from 1957-68. By applying media theorist and philosopher Marshall McLuhan’s ideas to museums, Parker created what has become known as “multi-sensory museology.” It is only beginning to be recognized as a precursor to the sensory museology in practice today.
Head of design at the ROM
Beyond being head of design at the ROM, Parker was an influential media thinker and a longtime collaborator of McLuhan’s.
Parker’s name is not yet well known. One reason is that his book manuscript, The Culture Box: Museums Are Today, was lost for almost 50 years.
Working with Parker’s children, I uncovered a typescript and will be bringing it into print. Retitled The Culture Box: Museums as Media, it contains detailed discussions of how Parker conceived of exhibition display through the lens of McLuhan’s idea that all media were sensory extensions of human capacities.
Multisensory design
For Parker, the museum became a laboratory in which a designer could experiment with multi-sensory exhibition designs. These reflected McLuhan’s claim that new electronic media supplanted an older visually oriented linear model with a non-linear, aural-tactile environment.
Getting beyond the close link between visibility and linear thinking was one of the main pillars of Parker’s efforts.
Between 1963 and 1967, Parker was considering designing with alternative orchestrations of perception, especially with regard to displays of Indigenous artifacts. He didn’t, however, achieve a fusion of what current sensory studies scholars call “sensory decolonization.”
In museums, “sensory decolonization” refers to shifting sensory and cultural perceptions around the meaning of “artifacts” from Indigenous or Global South communities. It means revisiting assumptions about protocols for engaging with or handling these, and developing new ethical protocols in relationship with communities.
Parker investigated the necessity of changing sensory assumptions around the display of artifacts, but lacked a decolonial critique.
Hypothetical exhibits
In the early 1960s, Parker published essays on hypothetical exhibits of Indigenous artefacts in the museum’s holdings.
He considered using recordings of Indigenous languages, visitor-controlled heating, cooling and lighting, odours, as well as multi-media projections. He tried to provoke, through design, some empathetic correlation between the mental modes of a contemporary museum visitor and the sensory attitudes of an Indigenous maker and creator of objects.
He linked the reordering of the senses with calls for greater community involvement in museums. He also expressed frustration about museum elitism and the gulf between philanthropic culture and visitors’ concerns.
…
Genosko’s February 10, 2025 article goes on to mention the changes that have been attempted, in some situations more successfully than others, to incorporate principles of decolonization and inclusion. It also describes one of Parker’s more avant-garde ideas, a ‘newseum’, a space for multi-sensory and multimedia exhibitions.
I don’t usually advertise for authors but I have a soft spot these days for Marshall McLuhan (Canadian communications theorist and philosopher) and Parker’s ideas sound interesting to me. You can order “Harley Parker: The McLuhan of the Museum” by Gary Genosko, published May 15, 2025 by the University of Alberta Press here.
Before getting to the announcements, here’s a bit about Simon Fraser University’s (SFU) Public Square, from their About page,
SFU Public Square is situated at 312 Main, a centre for social and economic innovation in Vancouver’s Downtown Eastside. We work across all of SFU’s campuses, supporting faculty, students, staff, alumni and diverse communities to convene accessible, innovative and inclusive programming that brings people together to find ways to meaningfully contribute to the issues that affect our lives.
…
It sounds like SFU’s Public Square is whatever the administration, the current executive director, and her/his team think it is. As it turns out, they seem to have an interest in science and technology as per their September 25, 2024 newsletter (received via email), Note: A link has been removed,
Science World Spotlights: Misinformation and the Toxic Drug Crisis
November 19 | 6:00pm-9:30pm | Free | In-person | Science World
Since B.C. declared a public health emergency on the toxic drug crisis in 2016, the province continues to experience the highest number of overdose deaths among Canadian jurisdictions. Exacerbating this crisis is rampant misinformation, as it perpetuates stigma, hinders access to resources, and undermines efforts for intervention.
We’ve teamed up with Science World Spotlights for a thought-provoking panel discussion on addressing misinformation, the harm it causes, and the research and actions that can address common misconceptions. Following the discussion, there will be opportunities to connect with organizations tackling misinformation and providing support around the toxic drug crisis.
Science World Spotlights has teamed up with SFU Public Square for a thought-provoking panel discussion on addressing misinformation surrounding the toxic drug public health emergency in B.C. Our expert panelists will delve into the latest research, scientific insights, and community-based learnings to address common misconceptions and foster informed dialogue.
Since BC’s Provincial Health Officer declared a public health emergency on the toxic drug crisis in 2016, British Columbia continues to experience the highest number of overdose deaths among Canadian jurisdictions. The number of lives lost to this crisis has reached alarming levels, with devastating consequences for individuals, families, and communities. Misinformation exacerbates this crisis as it perpetuates stigma, hinders access to life-saving resources, and undermines efforts to implement evidence-based interventions. And misinformation related to equity deserving people in the context of the toxic drug crisis can exacerbate disparities and hinder effective responses.
At this free event, learn ways to contribute to informed dialogue and thoughtful action in addressing this crisis in our communities. Don’t miss this opportunity to gain a deeper understanding of this pressing public health issue.
Date: November 19, 2024
Time: 6:00pm – 9:30pm
Location: Science World, 1455 Quebec St., Vancouver
Format: a panel conversation with audience Q&A, followed by a networking reception with opportunities to connect with organizations tackling misinformation and providing support around the toxic drug crisis.
Register: To register, click the “Reserve a Spot” button.
This event was made possible by financial support provided by the Canadian Association of Science Centres (CASC) and ScienceUpFirst as part of their 2024 Together Against Misinformation Week.
We have asked for some demographic information in the registration form, which will be provided to ScienceUpFirst as part of their grant program.
Bohdan Nosyk is a Professor and St. Paul’s Hospital CANFAR Chair in HIV/AIDS Research at the Faculty of Health Sciences at Simon Fraser University, and leads the Health Economic Research Unit at the Centre for Advancing Health Outcomes (formerly the Center for Health Evaluation & Outcome Sciences).
Dr. Nosyk’s research seeks to inform complex policy decisions surrounding the prevention and management of HIV/AIDS and substance use disorders. He has led population-level evaluations in these disease areas in China, in the state of California and across urban centers in the US, and locally in British Columbia. He combines simulation modeling methods and cost-effectiveness analyses with econometric and biostatistical analyses of health administrative data to address these issues.
Dr. Cornelia (Nel) Wieman is the Chief Medical Officer (CMO) at the First Nations Health Authority (FNHA) in British Columbia, where she has worked since 2018. She is Anishinaabe (Mishi-Baawitigong First Nation, Manitoba) and lives, works and plays on the unceded territory of the Coast Salish peoples – the səl̓ílwətaʔɬ (Tsleil-Waututh), Sḵwx̱wú7mesh (Squamish), and xʷməθkʷəy̓əm (Musqueam) Nations.
Dr. Wieman completed her medical degree and psychiatry specialty training at McMaster University. Canada’s first female Indigenous psychiatrist, Dr. Wieman has more than 20 years’ clinical experience, working with Indigenous people in both rural/reserve and urban settings. Her previous activities include co-directing an Indigenous health research program in the Dalla Lana School of Public Health at the University of Toronto and the National Network for Indigenous Mental Health Research, being Deputy Chair of Health Canada’s Research Ethics Board, and serving on CIHR’s Governing Council. She has also worked and taught in many academic settings, has chaired national advisory groups within First Nations Inuit Health Branch – Health Canada, and has served as a Director on many boards, including the Indspire Foundation and Pacific Blue Cross. Dr. Wieman served as the President of the Indigenous Physicians Association of Canada (IPAC) from 2016-2022. She was one of the 6 Indigenous physician founders of the National Consortium on Indigenous Medical Education (NCIME). She was appointed to the BC Provincial Task Team charged with beginning implementation of the recommendations arising from the “In Plain Sight” report.
Leslie McBain, Chief Executive Officer, Moms Stop the Harm.
After losing her only child, her beautiful son Jordan, to a prescription drug overdose in 2014, Leslie co-founded Moms Stop the Harm, now a national organization with several thousand members who have been impacted by drug harms. Her vision is to support and to save the lives of people who use drugs and to advocate for evidence-based, humane drug policies. Leslie has worked with numerous federal and provincial committees to this end. She resides on Pender Island.
MC and Panel Facilitator
Stephen Quinn is host of CBC Radio One’s popular morning show The Early Edition, a post he has often been quoted as saying is his “dream job.” Every weekday, listeners wake up and tune in for their daily dose of breaking news, traffic, local stories, entertainment, and interviews.
Previously, Quinn was the long-time host of the afternoon radio show On The Coast, where he was known for featuring people from the community and covering the day’s local news. He also connected with listeners on social media during shows, allowing citizens to participate in conversations in real time, adding another dimension to live radio. He spent eight years as CBC’s civic affairs reporter, a position that sparked his passion for municipal politics, as well as his unwavering interview style and skill in prompting answers from notable subjects while delving into important issues. Quinn has guest-hosted several CBC shows, news specials and a series on the media for network radio. He is also the creator and host of the very popular Quinn’s Quiz on CBC Radio One.
Questions?
Contact Dana Higgins – dhiggins@scienceworld.ca
Director, Public Programs and Engagement at Science World.
Before moving on to the SFU Public Square news, I checked out the ScienceUpFirst initiative (one of the sponsors), here’s more from their Who We Are page (click on the About tab on the homepage),
ScienceUpFirst is a [Canadian] national initiative that works with a collective of independent scientists, researchers, climate and health experts and science communicators.
ScienceUpFirst emerged out of a critical need. When the pandemic hit, co-founders Timothy Caulfield and Senator Stan Kutcher saw how misinformation was hurting Canadians. Since ScienceUpFirst started in 2020, we have grown into a funded initiative of the Canadian Association of Science Centres, working to fight misinformation and promote scientific understanding.
…
Timothy Caulfield, a law professor at the University of Alberta, is well known as a science communicator and TV personality; using his name as a search term on this blog, you will find a number of posts mentioning him and his work.
Meet Termeh Moini!
I don’t usually feature a new employee/intern but there is a bit of an agricultural technology connection, from SFU Public Square’s September 25, 2024 newsletter, Note: A link has been removed,
Meet Termeh Moini!
We are so excited to introduce new members of our team, starting with our new co-op student, Termeh! Termeh will be working with SFU Public Square and the B.C. Centre for Agritech Innovation as an Events and Communications Assistant. Get to know Termeh with the link below!
The B.C. Centre for Agritech Innovation (BCCAI) supports small and medium enterprises (SMEs), agri-producers and food processors in meeting their innovation needs. We provide our partners access to funding and domain knowledge via experts from academia, industry and government. The centre works with partners to develop and advance technology solutions and training opportunities to solve industry challenges, build resilient supply chains and generate global solutions for food insecurity and climate change [emphasis mine].
Glad to see the focus on food insecurity, climate change, and a global context for solutions.
Lithium-ion batteries are everywhere; they can be found in cell phones, laptops, e-scooters, e-bikes, and more. There are also some well-documented problems with the batteries, including the danger of fire. With the proliferating use of lithium-ion batteries, it seems fires are becoming more frequent, as Samantha Murphy Kelly documents in her March 9, 2023 article for CNN news online, Note: Links have been removed,
Lithium-ion batteries, found in many popular consumer products, are under scrutiny again following a massive fire this week in New York City thought to be caused by the battery that powered an electric scooter.
At least seven people have been injured in a five-alarm fire in the Bronx, which required the attention of 200 firefighters. Officials believe the incident stemmed from a lithium-ion battery of a scooter found on the roof of an apartment building. In 2022, the New York City Fire Department responded to more than 200 e-scooter and e-bike fires, which resulted in six fatalities.
“In all of these fires, these lithium-ion fires, it is not a slow burn; there’s not a small amount of fire, it literally explodes,” FDNY [Fire Dept. New York] Commissioner Laura Kavanagh told reporters. “It’s a tremendous volume of fire as soon as it happens, and it’s very difficult to extinguish and so it’s particularly dangerous.”
A residential fire earlier this week in Carlsbad, California, was suspected to be caused by an e-scooter lithium battery. On Tuesday [March 7, 2023], an alarming video surfaced of a Canadian homeowner running downstairs to find his electric bike battery exploding into flames. [emphasis mine] A fire at a multi-family home in Massachusetts last month is also under investigation for similar issues.
These incidents are becoming more common for a number of reasons. For starters, lithium-ion batteries are now in numerous consumer tech products, powering laptops, cameras, smartphones and more. They allow companies to squeeze hours of battery life into increasingly slim devices. But a combination of manufacturer issues, misuse and aging batteries can heighten the risk from the batteries, which use flammable materials.
“Lithium batteries are generally safe and unlikely to fail, but only so long as there are no defects and the batteries are not damaged or mistreated,” said Steve Kerber, vice president and executive director of Underwriters Laboratory’s (UL) Fire Safety Research Institute (FSRI). “The more batteries that surround us the more incidents we will see.”
In 2016, Samsung issued a global recall of the Galaxy Note 7, citing “battery cell issues” that caused the device to catch fire and at times explode. [emphasis mine] HP and Sony later recalled lithium computer batteries for fire hazards, and about 500,000 hoverboards were recalled due to a risk of “catching fire and/or exploding,” according to the U.S. Consumer Product Safety Commission.
In 2020, the Federal Aviation Administration [emphasis mine] banned uninstalled lithium-ion metal batteries from being checked in luggage and said they must remain with a passenger in their carry-on baggage, if approved by the airline and between 101-160 watt hours. “Smoke and fire incidents involving lithium batteries can be mitigated by the cabin crew and passengers inside the aircraft cabin,” the FAA said.
Despite the concerns, lithium-ion batteries continue to be prevalent in many of today’s most popular gadgets. Some tech companies point to their abilities to charge faster, last longer and pack more power into a lighter package.
But not all lithium batteries are the same.
…
Kelly’s March 9, 2023 article describes the problems (e.g., a short circuit) that may cause fires and includes some recommendations for better safety and for what to do in the event of a lithium-ion battery fire. Her mention of Samsung and the fires brought back memories; it was mentioned here briefly in a December 21, 2016 post titled, “The volatile lithium-ion battery,” which mostly featured then recent research into the batteries and fires.
More recently, I’ve got an update of sorts on lithium-ion batteries and fires on airplanes, from the May/June 2024 posting of the National Business Aviation Association (NBAA) Insider,
A smoke, fire or extreme heat incident involving lithium ion batteries takes place aboard an aircraft more than once per week [emphases mine] on average in the U.S., making it imperative for operators to fully understand these dangerous events and to prepare crews with safety training.
…
At any given time, there could be more than 1,000 Li-ion powered devices on board an airliner, while an international business jet might easily be flying with a few dozen. Despite their popularity, few people realize the dangers posed by Li-ion batteries.
Hazards run the gamut, from overheating, to emitting smoke, to bursting into flames or even exploding – spewing bits of white hot gel in all directions. In fact, a Li-ion fire can begin as a seemingly harmless overheat and erupt into a serious hazard in a matter of seconds.
FAA [US Federal Aviation Administration] data shows the scope of the threat: In 2023, more than one Li-ion incident occurred aboard an aircraft each week. Specifically, the agency said there were 208 issues with lithium ion battery packs, 111 with e-cigarettes and vaping devices, 68 with cell phones and 60 with laptop computers. (The FAA doesn’t offer incident data by aircraft type.)
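Taking the quoted category counts at face value, a quick tally shows why the rate is described as “more than once per week”: the four categories sum to well over 52 per year. This is only a back-of-envelope sketch; the category labels are paraphrased from the excerpt, and the FAA’s published tallies may be aggregated differently than a single year’s totals.

```python
# Hedged sketch: summing the FAA category counts quoted above and
# converting to an implied weekly rate, assuming they are 2023 totals.
counts = {
    "lithium ion battery packs": 208,
    "e-cigarettes and vaping devices": 111,
    "cell phones": 68,
    "laptop computers": 60,
}

total = sum(counts.values())
per_week = total / 52  # 52 weeks in a year

print(f"Total reported issues: {total}")          # 447
print(f"Implied rate per week: {per_week:.1f}")   # 8.6
```

Even under a conservative reading of the data, the total comfortably supports the “more than once per week” characterization.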
Thankfully, the data shows the chances of encountering an unstable mobile device aboard a business aircraft are small. But so is the possibility of a passenger experiencing a heart attack – yet many business aircraft carry defibrillators.
…
The threat with lithium ion batteries is known as thermal runaway. When a Li-ion battery overheats due to some previous damage that creates a short circuit [emphasis mine], the unit continues a catastrophic internal chain reaction until it melts or catches fire.
…
Short circuits, lithium ion batteries, and the University of Alberta
Lithium-ion batteries have a lot of advantages. They charge quickly, have a high energy density, and can be repeatedly charged and discharged.
They do have one significant shortcoming, however: they’re prone to short-circuiting. This occurs when a connection forms between the two electrodes inside the cell. A short circuit can result in a sudden loss of voltage or the rapid discharge of high current, both causing the battery to fail. In extreme cases, a short circuit can cause a cell to overheat, start on fire, or even explode.
Video: Thin layer of tin prevents short-circuiting in lithium-ion batteries
A leading cause of short circuits are rough, tree-like crystal structures called dendrites that can form on the surface of one of the electrodes. When dendrites grow all the way across the cell and make contact with the other electrode, a short circuit can occur.
Using the Canadian Light Source (CLS) at the University of Saskatchewan (USask), researchers from the University of Alberta (UAlberta) have come up with a promising approach to prevent formation of dendrites in solid-state lithium-ion batteries. They found that adding a tin-rich layer between the electrode and the electrolyte helps spread the lithium around when it’s being deposited on the battery, creating a smooth surface that suppresses the formation of dendrites. The results are published in the journal ACS Applied Materials and Interfaces [ACS is American Chemical Society]. The team also found that the cell modified with the tin-rich structure can operate at a much higher current and withstand many more charging-discharging cycles than a regular cell.
Researcher Lingzi Sang, an assistant professor in UAlberta’s Faculty of Science (Chemistry), says the CLS played a key role in the research. “The HXMA beamline enabled us to see at a material’s structural level what was happening on the surface of the lithium in an operating battery,” says Sang. “As a chemist, what I find the most intriguing is we were able to access the exact tin structure that we introduced to the interface which can suppress dendrites and fix this short-circuiting problem.” In a related paper the team published earlier this year, they showed that adding a protective layer of tin also suppressed the formation of dendrites in liquid-electrolyte-based lithium-ion batteries.
This novel approach holds considerable potential for industrial applications, according to Sang. “Our next step is to try to find a sustainable, cost-effective approach to applying the protective layer in battery production.”
Here’s a link to and a citation for the latest paper,
In a September 15, 2022 announcement (received via email), the Canadian Science Policy Centre (CSPC) described an event (Age of AI and Big Data – Impact on Justice, Human Rights and Privacy) centered on some of the latest government doings on artificial intelligence and privacy (Bill C-27),
In an increasingly connected world, we share a large amount of our data in our daily lives without our knowledge while browsing online, traveling, shopping, etc. More and more companies are collecting our data and using it to create algorithms or AI. The use of our data against us is becoming more and more common. The algorithms used may often be discriminatory against racial minorities and marginalized people.
As technology moves at a high pace, we have started to incorporate many of these technologies into our daily lives without understanding its consequences. These technologies have enormous impacts on our very own identity and collectively on civil society and democracy.
Recently, the Canadian Government introduced the Artificial Intelligence and Data Act (AIDA) and Bill C-27 [which includes three acts in total] in parliament regulating the use of AI in our society. In this panel, we will discuss how our AI and Big data is affecting us and its impact on society, and how the new regulations affect us.
For some reason, there was no information about the moderator and panelists, other than their names, titles, and affiliations. Here’s a bit more:
Moderator: Yuan Stevens (from her eponymous website’s About page), Note: Links have been removed,
Yuan (“You-anne”) Stevens (she/they) is a legal and policy expert focused on sociotechnical security and human rights.
She works towards a world where powerful actors—and the systems they build—are held accountable to the public, especially when it comes to marginalized communities.
She brings years of international experience to her role at the Leadership Lab at Toronto Metropolitan University [formerly Ryerson University], having examined the impacts of technology on vulnerable populations in Canada, the US and Germany.
Committed to publicly accessible legal and technical knowledge, Yuan has written for popular media outlets such as the Toronto Star and Ottawa Citizen and has been quoted in news stories by the New York Times, the CBC and the Globe & Mail.
Yuan is a research fellow at the Centre for Law, Technology and Society at the University of Ottawa and a research affiliate at Data & Society Research Institute. She previously worked at Harvard University’s Berkman Klein Center for Internet & Society during her studies in law at McGill University.
She has been conducting research on artificial intelligence since 2017 and is currently exploring sociotechnical security as an LL.M candidate at University of Ottawa’s Faculty of Law working under Florian Martin-Bariteau.
Brenda McPhail is the director of the Canadian Civil Liberties Association’s Privacy, Surveillance and Technology Project. Her recent work includes guiding the Canadian Civil Liberties Association’s interventions in key court cases that raise privacy issues, most recently at the Supreme Court of Canada in R v. Marakah and R v. Jones, which focused on privacy rights in sent text messages; research into surveillance of dissent, government information sharing, digital surveillance capabilities and privacy in relation to emergent technologies; and developing resources and presentations to drive public awareness about the importance of privacy as a social good.
My research has spanned many areas such as resource allocation in networking, smart grids, social information networks, and machine learning. Broadly, my interest lies in gaining a fundamental understanding of a given system and the design of robust algorithms.
More recently my research focus has been in privacy in machine learning. I’m interested in understanding how robust machine learning methods are to perturbation, and privacy and fairness constraints, with the goal of designing practical algorithms that achieve privacy and fairness.
Bio
Before joining the University of Alberta, I spent many years in industry research labs. Most recently, I was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where my team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, I spent many years in research labs in Europe working on a variety of interesting and impactful problems. I was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where I led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. I also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, and privacy in recommendations.
Benjamin Faveri is a Research and Policy Analyst at the Responsible AI Institute (RAII) [headquartered in Austin, Texas]. Currently, he is developing their Responsible AI Certification Program and leading it through Canada’s national accreditation process. Over the last several years, he has worked on numerous certification program-related research projects such as fishery economics and certification programs, police body-worn camera policy certification, and emerging AI certifications and assurance systems. Before his work at RAII, Benjamin completed a Master of Public Policy and Administration at Carleton University, where he was a Canada Graduate Scholar, Ontario Graduate Scholar, Social Innovation Fellow, and Visiting Scholar at UC Davis School of Law. He holds undergraduate degrees in criminology and psychology, finishing both with first-class standing. Outside of work, Benjamin reads about how and why certification and private governance have been applied across various industries.
Panelist: Ori Freiman (from his eponymous website’s About page)
I research at the forefront of technological innovation. This website documents some of my academic activities.
My formal background is in Analytic Philosophy, Library and Information Science, and Science & Technology Studies. Until September 22′ [September 2022], I was a Post-Doctoral Fellow at the Ethics of AI Lab, at the University of Toronto’s Centre for Ethics. Before joining the Centre, I submitted my dissertation, about trust in technology, to The Graduate Program in Science, Technology and Society at Bar-Ilan University.
I have also found a number of overviews and bits of commentary about the Canadian federal government’s proposed Bill C-27, which I think of as an omnibus bill as it includes three proposed Acts.
The lawyers are excited but I’m starting with the Responsible AI Institute’s (RAII) response first as one of the panelists (Benjamin Faveri) works for them and it’s a view from a closely neighbouring country, from a June 22, 2022 RAII news release, Note: Links have been removed,
Business Implications of Canada’s Draft AI and Data Act
On June 16 [2022], the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), as part of the broader Digital Charter Implementation Act 2022 (Bill C-27). Shortly thereafter, it also launched the second phase of the Pan-Canadian Artificial Intelligence Strategy.
Both RAII’s Certification Program, which is currently under review by the Standards Council of Canada, and the proposed AIDA legislation adopt the same approach of gauging an AI system’s risk level in context; identifying, assessing, and mitigating risks both pre-deployment and on an ongoing basis; and pursuing objectives such as safety, fairness, consumer protection, and plain-language notification and explanation.
Businesses should monitor the progress of Bill C-27 and align their AI governance processes, policies, and controls to its requirements. Businesses participating in RAII’s Certification Program will already be aware of requirements, such as internal Algorithmic Impact Assessments to gauge risk level and Responsible AI Management Plans for each AI system, which include system documentation, mitigation measures, monitoring requirements, and internal approvals.
…
The AIDA draft is focused on the impact of any “high-impact system”. Companies would need to assess whether their AI systems are high-impact; identify, assess, and mitigate potential harms and biases flowing from high-impact systems; and “publish on a publicly available website a plain-language description of the system” if making a high-impact system available for use. The government elaborated in a press briefing that it will describe in future regulations the classes of AI systems that may have high impact.
The AIDA draft also outlines clear criminal penalties for entities which, in their AI efforts, possess or use unlawfully obtained personal information or knowingly make available for use an AI system that causes serious harm or defrauds the public and causes substantial economic loss to an individual.
If enacted, AIDA would establish the Office of the AI and Data Commissioner, to support Canada’s Minister of Innovation, Science and Economic Development, with powers to monitor company compliance with the AIDA, to order independent audits of companies’ AI activities, and to register compliance orders with courts. The Commissioner would also help the Minister ensure that standards for AI systems are aligned with international standards.
…
Apart from being aligned with the approach and requirements of Canada’s proposed AIDA legislation, RAII is also playing a key role in the Standards Council of Canada’s AI accreditation pilot. The second phase of the Pan-Canadian Artificial Intelligence Strategy includes funding for the Standards Council of Canada to “advance the development and adoption of standards and a conformity assessment program related to AI.”
The AIDA’s introduction shows that while Canada is serious about governing AI systems, its approach to AI governance is flexible and designed to evolve as the landscape changes.
Charles Mandel’s June 16, 2022 article for Betakit (Canadian Startup News and Tech Innovation) provides an overview of the government’s overall approach to data privacy, AI, and more,
The federal Liberal government has taken another crack at legislating privacy with the introduction of Bill C-27 in the House of Commons.
Among the bill’s highlights are new protections for minors as well as Canada’s first law regulating the development and deployment of high-impact AI systems.
“It [Bill C-27] will address broader concerns that have been expressed since the tabling of a previous proposal, which did not become law,” a government official told a media technical briefing on the proposed legislation.
François-Philippe Champagne, the Minister of Innovation, Science and Industry, together with David Lametti, the Minister of Justice and Attorney General of Canada, introduced the Digital Charter Implementation Act, 2022. The ministers said Bill C-27 will significantly strengthen Canada’s private sector privacy law, create new rules for the responsible development and use of artificial intelligence (AI), and continue to put in place Canada’s Digital Charter.
The Digital Charter Implementation Act includes three proposed acts: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA), all of which have implications for Canadian businesses.
Bill C-27 follows an attempt by the Liberals to introduce Bill C-11 in 2020. The latter was the federal government’s attempt to reform privacy laws in Canada, but it failed to gain passage in Parliament after the then-federal privacy commissioner criticized the bill.
The proposed Artificial Intelligence and Data Act is meant to protect Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias.
For businesses developing or implementing AI this means that the act will outline criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.
…
An AI and data commissioner will support the minister of innovation, science, and industry in ensuring companies comply with the act. The commissioner will be responsible for monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate.
The commissioner would also be expected to outline clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.
…
Canada already collaborates on AI standards to some extent with a number of countries. Canada, France, and 13 other countries launched an international AI partnership to guide policy development and “responsible adoption” in 2020.
The federal government also has the Pan-Canadian Artificial Intelligence Strategy for which it committed an additional $443.8 million over 10 years in Budget 2021. Ahead of the 2022 budget, Trudeau [Canadian Prime Minister Justin Trudeau] had laid out an extensive list of priorities for the innovation sector, including tasking Champagne with launching or expanding national strategy on AI, among other things.
Within the AI community, companies and groups have been looking at AI ethics for some time. Scotiabank donated $750,000 in funding to the University of Ottawa in 2020 to launch a new initiative to identify solutions to issues related to ethical AI and technology development. And Richard Zemel, co-founder of the Vector Institute [formed as part of the Pan-Canadian Artificial Intelligence Strategy], joined Integrate.AI as an advisor in 2018 to help the startup explore privacy and fairness in AI.
When it comes to the Consumer Privacy Protection Act, the Liberals said the proposed act responds to feedback received on the proposed legislation, and is meant to ensure that the privacy of Canadians will be protected, and that businesses can benefit from clear rules as technology continues to evolve.
“A reformed privacy law will establish special status for the information of minors so that they receive heightened protection under the new law,” a federal government spokesperson told the technical briefing.
…
The act is meant to provide greater controls over Canadians’ personal information, including how it is handled by organizations as well as giving Canadians the freedom to move their information from one organization to another in a secure manner.
The act puts the onus on organizations to develop and maintain a privacy management program that includes the policies, practices and procedures put in place to fulfill obligations under the act. That includes the protection of personal information, how requests for information and complaints are received and dealt with, and the development of materials to explain an organization’s policies and procedures.
The bill also ensures that Canadians can request that their information be deleted from organizations.
The bill provides the privacy commissioner of Canada with broad powers, including the ability to order a company to stop collecting data or using personal information. The commissioner will be able to levy significant fines for non-compliant organizations—with fines of up to five percent of global revenue or $25 million, whichever is greater, for the most serious offences.
The proposed Personal Information and Data Protection Tribunal Act will create a new tribunal to enforce the Consumer Privacy Protection Act.
Although the Liberal government said it engaged with stakeholders for Bill C-27, the Council of Canadian Innovators (CCI) expressed reservations about the process. Nick Schiavo, CCI’s director of federal affairs, said it had concerns over the last version of privacy legislation, and had hoped to present those concerns when the bill was studied at committee, but the previous bill died before that could happen.
…
Now the lawyers. Simon Hodgett, Kuljit Bhogal, and Sam Ip have written a June 27, 2022 overview, which highlights the key features from the perspective of Osler, a leading business law firm practising internationally from offices across Canada and in New York.
Maya Medeiros and Jesse Beatson authored a June 23, 2022 article for Norton Rose Fulbright, a global law firm, which notes a few ‘weak’ spots in the proposed legislation,
…
… While the AIDA is directed to “high-impact” systems and prohibits “material harm,” these and other key terms are not yet defined. Further, the quantum of administrative penalties will be fixed only upon the issuance of regulations.
Moreover, the AIDA sets out publication requirements but it is unclear if there will be a public register of high-impact AI systems and what level of technical detail about the AI systems will be available to the public. More clarity should come through Bill C-27’s second and third readings in the House of Commons, and subsequent regulations if the bill passes.
The AIDA may have extraterritorial application if components of global AI systems are used, developed, designed or managed in Canada. The European Union recently introduced its Artificial Intelligence Act, which also has some extraterritorial application. Other countries will likely follow. Multi-national companies should develop a coordinated global compliance program.
…
I have two podcasts from Michael Geist, a lawyer and Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa.
June 26, 2022: The Law Bytes Podcast, Episode 132: Ryan Black on the Government’s Latest Attempt at Privacy Law Reform “The privacy reform bill that is really three bills in one: a reform of PIPEDA, a bill to create a new privacy tribunal, and an artificial intelligence regulation bill. What’s in the bill from a privacy perspective and what’s changed? Is this bill any likelier to become law than an earlier bill that failed to even advance to committee hearings? To help sort through the privacy aspects of Bill C-27, Ryan Black, a Vancouver-based partner with the law firm DLA Piper (Canada) …” (about 45 mins.)
August 15, 2022: The Law Bytes Podcast, Episode 139: Florian Martin-Bariteau on the Artificial Intelligence and Data Act “Critics argue that regulations are long overdue, but have expressed concern about how much of the substance is left for regulations that are still to be developed. Florian Martin-Bariteau is a friend and colleague at the University of Ottawa, where he holds the University Research Chair in Technology and Society and serves as director of the Centre for Law, Technology and Society. He is currently a fellow at the Harvard’s Berkman Klein Center for Internet and Society …” (about 38 mins.)
The 35th Canadian Conference on Artificial Intelligence will take place virtually in Toronto, Ontario, from 30 May to 3 June, 2022. All presentations and posters will be online, with in-person social events to be scheduled in Toronto for those who are able to attend in-person. Viewing rooms and isolated presentation facilities will be available for all visitors to the University of Toronto during the event.
The event is collocated with the Computer and Robot Vision conferences. These events (AI·CRV 2022) will bring together hundreds of leaders in research, industry, and government, as well as Canada’s most accomplished students. They showcase Canada’s ingenuity, innovation and leadership in intelligent systems and advanced information and communications technology. A single registration lets you attend any session in the two conferences, which are scheduled in parallel tracks.
The conference proceedings are published on PubPub, an open-source, privacy-respecting, and open access online platform. They are submitted to be indexed and abstracted in leading indexing services such as DBLP, ACM, Google Scholar.
I can’t tell if ‘Responsible AI’ has been included as a specific topic in previous conferences but 2022 is definitely hosting a couple of sessions based on that theme, from the Responsible AI activities webpage,
Keynote speaker: Julia Stoyanovich
New York University
“Building Data Equity Systems”
Equity as a social concept — treating people differently depending on their endowments and needs to provide equality of outcome rather than equality of treatment — lends a unifying vision for ongoing work to operationalize ethical considerations across technology, law, and society. In my talk I will present a vision for designing, developing, deploying, and overseeing data-intensive systems that consider equity as an essential objective. I will discuss ongoing technical work, and will place this work into the broader context of policy, education, and public outreach.
Biography: Julia Stoyanovich is an Institute Associate Professor of Computer Science & Engineering at the Tandon School of Engineering, Associate Professor of Data Science at the Center for Data Science, and Director of the Center for Responsible AI at New York University (NYU). Her research focuses on responsible data management and analysis: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data science lifecycle. She established the “Data, Responsibly” consortium and served on the New York City Automated Decision Systems Task Force, by appointment from Mayor de Blasio. Julia developed and has been teaching courses on Responsible Data Science at NYU, and is a co-creator of an award-winning comic book series on this topic. In addition to data ethics, Julia works on the management and analysis of preference and voting data, and on querying large evolving graphs. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst. She is a recipient of an NSF CAREER award and a Senior Member of the ACM.
Panel on ethical implications of AI
Panelists
Luke Stark, Faculty of Information and Media Studies, Western University
Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, ON. His work interrogating the historical, social, and ethical impacts of computing and AI technologies has appeared in journals including The Information Society, Social Studies of Science, and New Media & Society, and in popular venues like Slate, The Globe and Mail, and The Boston Globe. Luke was previously a Postdoctoral Researcher in AI ethics at Microsoft Research, and a Postdoctoral Fellow in Sociology at Dartmouth College; he holds a PhD from the Department of Media, Culture, and Communication at New York University, and a BA and MA from the University of Toronto.
Nidhi Hegde, Associate Professor in Computer Science and Amii [Alberta Machine Intelligence Institute] Fellow at the University of Alberta
Nidhi is a Fellow and Canada CIFAR [Canadian Institute for Advanced Research] AI Chair at Amii and an Associate Professor in the Department of Computing Science at the University of Alberta. Before joining UAlberta, she spent many years in industry research labs. Most recently, she was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where her team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, she spent many years in research labs in Europe working on a variety of interesting and impactful problems. She was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where she led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. She also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, privacy, and recommendations. Nidhi is an associate editor of the IEEE/ACM Transactions on Networking, and an editor of the Elsevier Performance Evaluation Journal.
Karina Vold, Assistant Professor, Institute for the History and Philosophy of Science and Technology, University of Toronto
Dr. Karina Vold is an Assistant Professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. She is also a Faculty Affiliate at the U of T Schwartz Reisman Institute for Technology and Society, a Faculty Associate at the U of T Centre for Ethics, and an Associate Fellow at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. Vold specialises in Philosophy of Cognitive Science and Philosophy of Artificial Intelligence, and her recent research has focused on human autonomy, cognitive enhancement, extended cognition, and the risks and ethics of AI.
Elissa Strome, Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR
Elissa is Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR, working with research leaders across the country to implement Canada’s national research strategy in AI. Elissa completed her PhD in Neuroscience from the University of British Columbia in 2006. Following a post-doc at Lund University, in Sweden, she decided to pursue a career in research strategy, policy and leadership. In 2008, she joined the University of Toronto’s Office of the Vice-President, Research and Innovation and was Director of Strategic Initiatives from 2011 to 2015. In that role, she led a small team dedicated to advancing the University’s strategic research priorities, including international institutional research partnerships, the institutional strategy for prestigious national and international research awards, and the establishment of the SOSCIP [Southern Ontario Smart Computing Innovation Platform] research consortium in 2012. From 2015 to 2017, Elissa was Executive Director of SOSCIP, leading the 17-member industry-academic consortium through a major period of growth and expansion, and establishing SOSCIP as Ontario’s leading platform for collaborative research and development in data science and advanced computing.
Tutorial on AI and the Law
Prof. Maura R. Grossman, University of Waterloo, and
Hon. Paul W. Grimm, United States District Court for the District of Maryland
AI applications are becoming more and more ubiquitous in almost every field of endeavor, and the same is true of the legal industry. This panel, consisting of an experienced lawyer and computer scientist, and a U.S. federal trial court judge, will discuss how AI is currently being used in the legal profession, what adoption has been like since the introduction of AI to law in about 2009, what legal and ethical issues AI applications have raised in the legal system, and how a sitting trial court judge approaches AI evidence, in particular, the determination of whether or not to admit that AI evidence when the judge is not an expert.
How is AI being used in the legal industry today?
What has the legal industry’s reaction been to legal AI applications?
What are some of the biggest legal and ethical issues implicated by legal and other AI applications?
How does a sitting trial court judge evaluate AI evidence when making a determination of whether to admit that AI evidence or not?
What considerations go into the trial judge’s decision?
What happens if the judge is not an expert in AI? Do they recuse?
You may recognize the name Julia Stoyanovich; she was mentioned here in my March 23, 2022 posting titled “The ‘We are AI’ series gives citizens a primer on AI,” about a series of peer-to-peer workshops aimed at introducing the basics of AI to the public. There’s also a comic book series associated with it, and all of the materials are available for free. It’s all there in the posting.
Virtual Meet and Greet on Responsible AI across Canada
Given the many activities that are fortunately happening around the responsible and ethical aspects of AI here in Canada, we are organizing an event in conjunction with Canadian AI 2022 this year to become familiar with what everyone is doing and what activities they are engaged in.
It would be wonderful to have a unified community here in Canada around responsible AI so we can support each other and find ways to more effectively collaborate and synergize. We are aiming for a casual, discussion-oriented event rather than talks or formal presentations.
The meet and greet will be hosted by Ebrahim Bagheri, Eleni Stroulia and Graham Taylor. If you are interested in participating, please email Ebrahim Bagheri (bagheri@ryerson.ca).
Thank you to the co-chairs for getting the word out about the Responsible AI topic at the conference,
Responsible AI Co-chairs
Ebrahim Bagheri, Professor, Electrical, Computer, and Biomedical Engineering, Ryerson University
Eleni Stroulia, Professor, Department of Computing Science; Acting Vice Dean, Faculty of Science; Director, AI4Society Signature Area, University of Alberta
The organization that hosts these conferences has an almost palindromic abbreviation: CAIAC, for the Canadian Artificial Intelligence Association (CAIA) or Association Intelligence Artificiel Canadien (AIAC). You have to read it in both English and French; the C at one end gets dropped depending on which language you’re using, which is why it’s only almost a palindrome.
The CAIAC is almost 50 years old (under various previous names) and has its website here.
*April 22, 2022 at 1400 hours PT removed ‘the’ from this section of the headline: “… from 30 May to 3 June, 2022.” and removed period from the end.
One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact which sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.
Intriguingly, it seems that Edmonton has higher aims than (an almost unnoticed) leadership in AI. Physicists at the University of Alberta have announced hopes to be just as successful as their AI brethren in a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal,
Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.
Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.
Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.
It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.
…
The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic material like photons or electrons behave both as particles and waves.
“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.
But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”
“Quantum computing used to be the realm of science fiction, but now we’ve figured it out; it’s now a matter of engineering,” he said.
…
Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.
Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.
…
East vs. West—Again?
Ivan Semeniuk in his article, Quantum Supremacy, ignores any quantum research effort not located in either Waterloo, Ontario or metro Vancouver, British Columbia to describe a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),
Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.
Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game, when Blackberry was already in serious trouble due to a failure to recognize that the field it helped to create was moving in a new direction. If memory serves, the company was trying to keep its technology wholly proprietary, which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.
Since then Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money as they named their Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details for Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,
History has repeatedly demonstrated the power of research in physics to transform society. As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research. That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough technologies.
Establishing a World Class Centre in Quantum Research:
The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics. Perimeter was established in 2000 as an independent theoretical physics research institute. Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth). Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute. In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it. Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.
Perimeter is located in a Governor General’s Award-winning building in Waterloo. Success in recruiting and resulting space requirements led to an expansion of the Perimeter facility. A uniquely designed addition, which has been described as spaceship-like, was opened in 2011 as the Stephen Hawking Centre, in recognition of one of the most famous physicists alive today, who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.
Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo. IQC was established as an experimental research institute focusing on quantum information. Mike established IQC with an initial donation of $33.3 million. Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives. As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts. Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.
Mike and Doug Fregin have been close friends since grade 5. They are also co-founders of BlackBerry (formerly Research In Motion Limited). Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million. Since that time Doug has donated a total of $30 million to Perimeter Institute. Separately, Doug helped establish the Waterloo Institute for Nanotechnology at the University of Waterloo with total gifts of $29 million. As suggested by its name, WIN is devoted to research in the area of nanotechnology. It has established as an area of primary focus the intersection of nanotechnology and quantum physics.
With a donation of $50 million from Mike which was matched by both the Government of Canada and the province of Ontario as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state of the art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world. QNC was opened in September 2012 and houses researchers from both IQC and WIN.
Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:
For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper. That said, converting these theories into experimentally demonstrable discoveries has, to put it mildly, been a challenge. Many naysayers have suggested that achieving these discoveries was not possible, and even the believers suggested that it would likely take decades. Recently, a buzz has been developing globally as experimentalists have been able to achieve demonstrable success with respect to Quantum Information-based discoveries. Local experimentalists are very much playing a leading role in this regard. It is believed by many that breakthrough discoveries leading to commercialization opportunities may be achieved in the next few years and certainly within the next decade.
Recognizing the unique challenges for the commercialization of quantum technologies (including the risk associated with uncertainty of success, the complexity of the underlying science, and high capital and equipment costs), Mike and Doug have chosen to once again lead by example. The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers who develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications. Their goal in establishing this Fund is to lead in the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.
Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),
… as happened with BlackBerry, the world is once again catching up. While Canada’s funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop companies and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country’s quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.
Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?
Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.
Semeniuk offers an overview of the D-Wave Systems story,
D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …
The group soon concluded that the kind of machine most scientists were pursuing based on so-called gate-model architecture was decades away from being realized—if ever. …
Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the applications that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”
D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.
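For readers wondering what “quantum annealing” actually does: it is easiest to picture by analogy with classical simulated annealing. A problem (a seating plan, a traffic schedule) is encoded as an energy function over binary “spins,” and the machine is nudged toward the lowest-energy configuration, which encodes the answer. The sketch below is a purely classical toy (Python, with hypothetical coupling values I chose for illustration), not D-Wave’s hardware method; it is meant only to show the class of Ising-style minimization problems annealers address.

```python
import math
import random

# Toy Ising problem: minimize E(s) = -sum_ij J_ij * s_i * s_j
# over spins s_i in {-1, +1}. The couplings below are hypothetical,
# chosen to make a small "frustrated" triangle whose minimum energy is -1.
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -1.0}

def energy(s):
    """Energy of a spin configuration under the couplings J."""
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def anneal(steps=5000, t_start=2.0, t_end=0.01, seed=42):
    """Classical simulated annealing: accept uphill moves with
    probability exp(-delta/t) while the 'temperature' t cools."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(3)]
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)  # geometric cooling
        i = rng.randrange(3)
        old = energy(s)
        s[i] = -s[i]                       # propose a single spin flip
        delta = energy(s) - old
        if delta > 0 and rng.random() >= math.exp(-delta / t):
            s[i] = -s[i]                   # reject the uphill move
    return s, energy(s)

s, e = anneal()
print(s, e)  # settles into one of the degenerate minimum-energy states
```

A quantum annealer explores the same energy landscape but uses quantum tunnelling rather than thermal jumps to escape local minima, which is why D-Wave could build working hardware for this narrower task long before anyone demonstrated a gate-model machine.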
It seems Lazaridis is not the only one who likes to hold company information tightly.
Back to Semeniuk and D-Wave,
Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15 million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.
But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …
Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …
I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),
Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.
There’s a lot more to Semeniuk’s article but this is the last excerpt,
The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].
I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?
In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).
Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate programme, it’s early days yet and no one should ever count out Alberta.
Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.