This is going to be a jam-packed posting: AI experts at a Canadian Science Policy Centre (CSPC) virtual panel, a look back at a ‘testy’ exchange between Yoshua Bengio (one of Canada’s godfathers of AI) and a former diplomat from China, an update on Canada’s Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, and his latest AI push, and a missive from the BC artificial intelligence community.
A Canadian Science Policy Centre AI panel on November 11, 2025
The Canadian Science Policy Centre (CSPC) provides an October 9, 2025 update on an upcoming virtual panel being held on Remembrance Day,
[AI-Driven Misinformation Across Sectors: Addressing a Cross-Societal Challenge]
Upcoming Virtual Panel[s]: November 11 [2025]
Artificial Intelligence is transforming how information is created and trusted, offering immense benefits across sectors like healthcare, education, finance, and public discourse—yet also amplifying risks such as misinformation, deepfakes, and scams that threaten public trust. This panel brings together experts from diverse fields [emphasis mine] to examine the manifestations and impacts of AI-driven misinformation and to discuss policy, regulatory, and technical solutions [emphasis mine]. The conversation will highlight practical measures—from digital literacy and content verification to platform accountability—aimed at strengthening resilience in Canada and globally.
For more information on the panel and to register, click below.
Odd timing for this event. Moving on, I found more information on the CSPC’s webpage for the event. Note: unfortunately, links to the moderator’s and speakers’ bios could not be copied here,
Canadian Science Policy Centre Email info@sciencepolicy.ca
…
This panel brings together cross-sectoral experts to examine how AI-driven misinformation manifests in their respective domains, its consequences, and how policy, regulation, and technical interventions can help mitigate harm. The discussion will explore practical pathways for action, such as digital literacy, risk audits, content verification technologies, platform responsibility, and regulatory frameworks. Attendees will leave with a nuanced understanding of both the risks and the resilience strategies being explored in Canada and globally.
[Moderator]
Dr. Michael Geist
Canada Research Chair in Internet & E-commerce Law, University of Ottawa See Bio
[Panelists]
Dr. Plinio Morita
Associate Professor / Director, Ubiquitous Health Technology Lab, University of Waterloo …
Dr. Nadia Naffi
Université Laval — Associate Professor of Educational Technology and expert on building human agency against AI-augmented disinformation and deepfakes. See Bio
Dr. Jutta Treviranus
Director, Inclusive Design Research Centre, OCAD U, Expert on AI misinformation in the Education sector and schools. See Bio
Dr. Fenwick McKelvey
Concordia University — Expert in political bots, information flows, and Canadian tech governance See Bio
Michael Geist has his own blog/website featuring posts on his areas of interest and featuring his podcast, Law Bytes. Jutta Treviranus is mentioned in my October 13, 2025 posting as a participant in “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference (October 23 – 24, 2025) and arts festival at the University of Toronto (scroll down to find it). She’s scheduled for a session on Thursday, October 23, 2025.
China, Canada, and the AI Action summit in February 2025
Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,
A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.
Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.
The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].
The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.
Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.
She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.
China now has its own equivalent; but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.
The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.
Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]
A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.
The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.
…
She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.
She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.
“The Chinese move faster [than the west] but it’s full of problems,” she said.
Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.
Most of the US tech giants do not share the tech which drives their products.
Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.
But Prof Bengio disagreed.
His view was that open source also left the tech wide open for criminals to misuse.
He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.
…
Interesting, non? You can read more about Bengio’s views in an October 1, 2025 article by Rae Witte for Futurism.
In a Policy Forum, Yue Zhu and colleagues provide an overview of China’s emerging regulation for artificial intelligence (AI) technologies and its potential contributions to global AI governance. Open-source AI systems from China are rapidly expanding worldwide, even as the country’s regulatory framework remains in flux. In general, AI governance suffers from fragmented approaches, a lack of clarity, and difficulty reconciling innovation with risk management, making global coordination especially hard in the face of rising controversy. Although no official AI law has yet been enacted, experts in China have drafted two influential proposals – the Model AI Law and the AI Law (Scholar’s Proposal) – which serve as key references for ongoing policy discussions. As the nation’s lawmakers prepare to draft a consolidated AI law, Zhu et al. note that the decisions will shape not only China’s innovation, but also global collaboration on AI safety, openness, and risk mitigation. Here, the authors discuss China’s emerging AI regulation as structured around 6 pillars, which, combined, stress exemptive laws, efficient adjudication, and experimentalist requirements, while safeguarding against extreme risks. This framework seeks to balance responsible oversight with pragmatic openness, allowing developers to innovate for the long term and collaborate across the global research community. According to Zhu et al., despite the need for greater clarity, harmonization, and simplification, China’s evolving model is poised to shape future legislation and contribute meaningfully to global AI governance by promoting both safety and innovation at a time when international cooperation on extreme risks is urgently needed.
Here’s a link to and a citation for the paper,
China’s emerging regulation toward an open future for AI by Yue Zhu, Bo He, Hongyu Fu, Naying Hu, Shaoqing Wu, Taolue Zhang, Xinyi Liu, Gang Xu, Linghan Zhang, and Hui Zhou. Science, 9 Oct 2025, Vol 390, Issue 6769, pp. 132-135. DOI: 10.1126/science.ady7922
This paper is behind a paywall.
No mention of Fu Ying or China’s ‘The AI Development and Safety Network’ but perhaps that’s in the paper.
Canada and its Minister of AI and Digital Innovation
Evan Solomon (born April 20, 1968)[citation needed] is a Canadian politician and broadcaster who has been the minister of artificial intelligence and digital innovation since May 2025. A member of the Liberal Party, Solomon was elected as the member of Parliament (MP) for Toronto Centre in the April 2025 election.
He was the host of The Evan Solomon Show on Toronto-area talk radio station CFRB,[2] and a writer for Maclean’s magazine. He was the host of CTV’s national political news programs Power Play and Question Period.[3] In October 2022, he moved to New York City to accept a position with the Eurasia Group as publisher of GZERO Media.[4] Solomon continued with CTV News as a “special correspondent” reporting on Canadian politics and global affairs.[4]
…
Had you asked me what background one needs to be a ‘Minister of Artificial Intelligence and Digital Innovation’, media would not have been my first thought. That said, sometimes people can surprise you.
Solomon appears to be an enthusiast if a June 10, 2025 article by Anja Karadeglija for The Canadian Press is to be believed,
Canada’s new minister of artificial intelligence said Tuesday [June 10, 2025] he’ll put less emphasis on AI regulation and more on finding ways to harness the technology’s economic benefits [emphases mine].
In his first speech since becoming Canada’s first-ever AI minister, Evan Solomon said Canada will move away from “over-indexing on warnings and regulation” to make sure the economy benefits from AI.
His regulatory focus will be on data protection and privacy, he told the audience at an event in Ottawa Tuesday morning organized by the think tank Canada 2020.
Solomon said regulation isn’t about finding “a saddle to throw on the bucking bronco called AI innovation. That’s hard. But it is to make sure that the horse doesn’t kick people in the face. And we need to protect people’s data and their privacy.”
The previous government introduced a privacy and AI regulation bill that targeted high-impact AI systems. It did not become law before the election was called.
That bill is “not gone, but we have to re-examine in this new environment where we’re going to be on that,” Solomon said.
He said constraints on AI have not worked at the international level.
“It’s really hard. There’s lots of leakages,” he said. “The United States and China have no desire to buy into any constraint or regulation.”
That doesn’t mean regulation won’t exist, he said, but it will have to be assembled in steps.
…
Solomon’s comments follow a global shift among governments to focus on AI adoption and away from AI safety and governance.
The first global summit focusing on AI safety was held in 2023 as experts warned of the technology’s dangers — including the risk that it could pose an existential threat to humanity. At a global meeting in Korea last year, countries agreed to launch a network of publicly backed safety institutes.
But the mood had shifted by the time this year’s AI Action Summit began in Paris. …
…
Solomon outlined several priorities for his ministry — scaling up Canada’s AI industry, driving adoption and ensuring Canadians have trust in and sovereignty over the technology.
He said that includes supporting Canadian AI companies like Cohere, which “means using government as essentially an industrial policy to champion our champions.”
The federal government is putting together a task force to guide its next steps on artificial intelligence, and Artificial Intelligence Minister Evan Solomon is promising an update to the government’s AI strategy.
Solomon told the All In artificial intelligence conference in Montreal on Wednesday [September 24, 2025] that the “refreshed” strategy will be tabled later this year, “almost two years ahead of schedule.”
…
“We need to update and move quickly,” he said in a keynote speech at the start of the conference.
The task force will include about 20 representatives from industry, academia and civil society. The government says it won’t reveal the membership until later this week.
Solomon said task force members are being asked to consult with their networks, suggest “bold, practical” ideas and report back to him in November [2025].
The group will look at various topics related to AI, including research, adoption, commercialization, investment, infrastructure, skills, and safety and security. The government is also planning to solicit input from the public. [emphasis mine]
Canada was the first country to launch a national AI strategy [the Pan-Canadian AI Strategy announced in 2017], which the government updated in 2022. The strategy focuses on commercialization, the development and adoption of AI standards, talent and research.
Solomon also teased a “major quantum initiative” coming in October [2025?] to ensure both quantum computing talent and intellectual property stay in the country.
Solomon called digital sovereignty “the most pressing policy and democratic issue of our time” and stressed the importance of Canada having its own “digital economy that someone else can’t decide to turn off.”
Solomon said the federal government’s recent focus on major projects extends to artificial intelligence. He compared current conversations on Canada’s AI framework to the way earlier generations spoke about a national railroad or highway.
…
He said his government will address concerns about AI by focusing on privacy reform and modernizing Canada’s 25-year-old privacy law.
“We’re going to include protections for consumers who are concerned about things like deep fakes and protection for children, because that’s a big, big issue. And we’re going to set clear standards for the use of data so innovators have clarity to unlock investment,” Solomon said.
…
The government is consulting with the public? Experience suggests that by then all the major decisions will have been made; the public consultation comments will be mined so officials can make some minor, unimportant tweaks.
Canada’s AI Task Force and parts of the Empire Club talk are revealed in a September 26, 2025 article by Alex Riehl for BetaKit,
Inovia Capital partner Patrick Pichette, Cohere chief artificial intelligence (AI) officer Joelle Pineau, and Build Canada founder Dan Debow are among 26 members of AI minister Evan Solomon’s AI Strategy Task Force trusted to help the federal government renew its AI strategy.
Solomon revealed the roster, filled with leading Canadian researchers and business figures, while speaking at the Empire Club in Toronto on Friday morning [September 26, 2025]. He teased its formation at the ALL IN conference earlier this week [September 24, 2025], saying the team would include “innovative thinkers from across the country.”
The group will have 30 days to add to a collective consultation process in areas including research, talent, commercialization, safety, education, infrastructure, and security.
…
The full AI Strategy Task Force is listed below; each member will consult their network on specific themes.
Research and Talent
Gail Murphy, professor of computer science and vice-president – research and innovation, University of British Columbia and vice-chair at the Digital Research Alliance of Canada
Diane Gutiw, VP – global AI research lead, CGI Canada and co-chair of the Advisory Council on AI
Michael Bowling, professor of computer science and principal investigator – Reinforcement Learning and Artificial Intelligence Lab, University of Alberta and research fellow, Alberta Machine Intelligence Institute and Canada CIFAR AI chair
Arvind Gupta, professor of computer science, University of Toronto
Adoption across industry and governments
Olivier Blais, co-founder and VP of AI, Moov and co-chair of the Advisory Council on AI
Cari Covent, technology executive
Dan Debow, chair of the board, Build Canada
Commercialization of AI
Louis Têtu, executive chairman, Coveo
Michael Serbinis, founder and CEO, League and board chair of the Perimeter Institute
Adam Keating, CEO and Founder, CoLab
Scaling our champions and attracting investment
Patrick Pichette, general partner, Inovia Capital
Ajay Agrawal, professor of strategic management, University of Toronto, founder, Next Canada and founder, Creative Destruction Lab
Sonia Sennik, CEO, Creative Destruction Lab
Ben Bergen, president, Council of Canadian Innovators
Building safe AI systems and public trust in AI
Mary Wells, dean of engineering, University of Waterloo
Joelle Pineau, chief AI officer, Cohere
Taylor Owen, founding director, Center [sic] for Media, Technology and Democracy [McGill University]
Education and Skills
Natiea Vinson, CEO, First Nations Technology Council
Alex Laplante, VP – cash management technology Canada, Royal Bank of Canada and board member at Mitacs
David Naylor, professor of medicine – University of Toronto
Infrastructure
Garth Gibson, chief technology and AI officer, VDURA
Ian Rae, president and CEO, Aptum
Marc Etienne Ouimette, chair of the board, Digital Moment and member, OECD One AI Group of Experts, affiliate researcher, sovereign AI, Cambridge University Bennett School of Public Policy
Security
Shelly Bruce, distinguished fellow, Centre for International Governance Innovation
James Neufeld, founder and CEO, Samdesk
Sam Ramadori, co-president and executive director, LawZero
With files from Josh Scott
If you have the time, Riehl’s September 26, 2025 article offers more depth than may be apparent in the excerpts I’ve chosen.
It’s been a while since I’ve seen Arvind Gupta’s name. I’m glad to see he’s part of this Task Force (Research and Talent). The man was treated quite shamefully at the University of British Columbia. (For the curious, this August 18, 2015 article by Ken MacQueen for Maclean’s Magazine presents a somewhat sanitized [in my opinion] review of the situation.)
One final comment: the experts on the virtual panel and the members of Solomon’s Task Force are largely from Ontario and Québec, with only minor representation from other parts of the country.
British Columbia wants entry into the national AI discussion
Just after I finished writing up this post, I received Kris Krug’s (techartist, quasi-sage, cyberpunk anti-hero from the future) October 14, 2025 communication (received via email) regarding an initiative from the BC + AI community,
Growth vs Guardrails: BC’s Framework for Steering AI
Our open letter to Minister Solomon shares what we’ve learned building community-led AI governance and how BC can help.
Ottawa created a Minister of Artificial Intelligence and just launched a national task force to shape the country’s next AI strategy. The conversation is happening right now about who gets compute, who sets the rules, and whose future this technology will serve.
Our new feature, Growth vs Guardrails [see link to letter below for ‘guardrails’], is already making the rounds in those rooms. The message is simple: if Ottawa’s foot is on the gas, BC is the steering wheel and the brakes. We can model a clean, ethical, community-led path that keeps power with people and place.
This is the time to show up together. Not as scattered voices, but as a connected movement with purpose, vision, and political gravity.
Over the past few months, almost 100 of us have joined the new BC + AI Ecosystem Association non-profit as Founding Members. Builders. Artists. Researchers. Investors. Educators. Policymakers. People who believe that tech should serve communities, not the other way around.
Now we’re opening the door wider. Join and you’ll be part of the core group that built this from the ground up. Your membership is a declaration that British Columbia deserves to shape its own AI future with ethics, creativity, and care.
If you’ve been watching from the sidelines, this is the time to lean in. We don’t do panels. We do portals. And this is the biggest one we’ve opened yet.
See you inside,
Kris Krüg Executive Director BC + AI Ecosystem Association kk@bc-ai.ca | bc-ai.ca
Canada just spun up a 30-day sprint to shape its next AI strategy. Minister Evan Solomon assembled 26 experts (mostly industry and academia) to advise on research, adoption, commercialization, safety, skills, and infrastructure.
On paper, it’s a pivot moment. In practice, it’s already drawing fire. Too much weight on scaling, not enough on governance. Too many boardrooms, not enough frontlines. Too much Ottawa, not enough ground truth.
…
This is Canada’s chance to reset the DNA of its AI ecosystem.
But only if we choose regeneration over extraction, sovereign data governance over corporate capture, and community benefit over narrow interests.
…
The Problem With The Task Force
Research says: The group’s stacked with expertise. But critics flag the imbalance. Where’s healthcare? Where’s civil society beyond token representation? Where are the people who’ll feel AI’s impact first: frontline workers, artists, community organizers?
…
The worry: Commercialization and scaling overshadow public trust, governance, and equitable outcomes. Again.
The numbers back this up: Only 24% of Canadians have AI training. Just 38% feel confident in their knowledge. Nearly two-thirds see potential harm. 71% would trust AI more under public regulation.
We’re building a national strategy on a foundation of low literacy and eroding trust. That’s not a recipe for sovereignty. That’s a recipe for capture.
Principles for a National AI Strategy: What BC + AI Stands For
Every once in a while I decide to dive further into a story and highlight some of the ways in which we all get fooled: into thinking that the technology industry is going to leave British Columbia on the strength of a survey (Reading [1 of 2]), or that we can somehow make ourselves healthier with the use of ‘scientifically’ derived data (Reading [2 of 2]).
Setting the scene
The last time I encountered Miro Cernetig was when he was a member of a panel of political pundits (he was a reporter for the Vancouver Sun at that time in 2009). It seems he’s moved on into the realm of ‘storymaking’ and public relations. He popped up in Nick Eagland’s October 5, 2019 article (Artificial intelligence firms in B.C. seek more support from federal government),
Handol Kim, vice-chair of the Network [Artificial Intelligence Network of B.C. (AInBC)], said federal funding and support don’t measure up to the size and pace of B.C.’s AI sector, and should be earmarked for research.
…
In 2017, the federal budget included $125 million in funding for AI research at institutes in Edmonton, Toronto and Montreal. [emphasis mine] Kim said those centres boast AI “super star” and “rock star” researchers with international name recognition. B.C.’s sector hasn’t been able to market itself that way but has plenty to offer, Kim said.
“The tech industry doesn’t automatically assume the government is going to help,” he said. “But where government does have a role to play is in research and funding research, especially when we have a tenuous lead and a good position, and we’re getting outspent.”
CityAge is partnering with the Artificial Intelligence Network for CrossOver: AI, a conference in Vancouver on Dec. 9 [2019], which will help draw national attention to B.C.’s sector, said CityAge co-founder Miro Cernetig.[emphasis mine]
Cernetig, owner of branding agency Catalytico, said B.C.’s sector is strong at commercializing its technology — getting it to market for a profit. But he worries that Canada is too often recognized only for its natural resources, when it has plenty of “human capital” to give it an edge in the development of AI, particularly in B.C.
“It’s important that Vancouver and British Columbia be fully integrated into the national data strategy, which includes AI,” he said.
“Because the only way we’ll be able to compete globally is if we take all of the best pieces and nodes of excellence across the country and bring them together into a true Canadian approach.”
…
This seems like a standard ploy. “Our industry is not getting enough support, please give us more federal money or lower taxes, etc.” Looking backwards from our latest federal election on Oct. 21, 2019, the timing for this plea seems odd. Unless it’s a misdirect and the real audience is the provincial government (British Columbia). So, what is the story?
Miro Cernetig — Storymaker and seasoned strategist who is founder of Catalytico ~ ideas in motion and co-founder of CityAge.
As noted earlier, Cernetig was a journalist (which gives him credentials when placing a story with former colleagues in the media). He also seems to have been quite successful (from his Huffington Post biography),
… Globe and Mail‘s bureau chief in Beijing, New York, Vancouver, Edmonton and the Arctic. He was also the Quebec bureau chief for the Toronto Star. During his 25-year career Miro has worked in film, print and digital mediums for the Globe and Mail, the CBC, the Toronto Star and most recently as a staff columnist at the Vancouver Sun.
Miro’s writing — on business, culture, politics and public policy — has also appeared in ROB Magazine[Report on Business; a Globe and Mail publication], the New York Times, the Economist, the International Herald Tribune and People Magazine.
A new survey found that more than half of B.C.’s artificial intelligence companies believe the federal government is not doing enough to boost the sector, and half have considered leaving the province. [emphasis mine]
The non-profit industry association, Artificial Intelligence Network of B.C., [AInBC] says there are more than 150 AI-related firms in B.C. and more than 65 submitted responses to its survey, which was conducted by CityAge and released this week. [emphases mine]
More than 56 per cent of respondents said the federal government needs to do more to help the local AI sector grow, with 31 per cent saying its efforts were lacking and 24 per cent saying they needed major attention.
Half of respondents said they have considered moving their companies out of B.C. The main reasons they gave were a desire to connect to bigger markets (35 per cent) and to operate in a better taxation and regulatory environment (11 per cent).
The firms said their most significant impediments to growth were lack of capital (30 per cent) and an inability to access the right talent (27 per cent).
But they also showed hope for the future, with 47 per cent saying they are “very confident” they will grow over the next three to five years, and 33 per cent saying they are “solid” but could be doing better.
…
A survey, eh? I guarantee that I could devise one where a majority of the respondents agree that I should receive $1M or more from the government, tax free, and for no particular reason.
It’s funny. We know surveys are highly dependent on who is surveyed and how and in what order the questions are asked and yet we forget when we see ‘survey facts’ published somewhere.
Does anyone think that members of the Artificial Intelligence Network of B.C. would say no to more financial support? What was the point of the survey? The whole thing reminds me of an old saying, “lies, damn lies, and statistics” (Note: Links in the excerpt have been removed)
“Lies, damned lies, and statistics” is a phrase describing the persuasive power of numbers, particularly the use of statistics to bolster weak arguments. It is also sometimes colloquially used to doubt statistics used to prove an opponent’s point.
The phrase was popularized in the United States by Mark Twain (among others), who attributed it to the British prime minister Benjamin Disraeli: “There are three kinds of lies: lies, damned lies, and statistics.” However, the phrase is not found in any of Disraeli’s works and the earliest known appearances were years after his death. Several other people have been listed as originators of the quote, and it is often erroneously attributed to Twain himself.[1]
…
By the way, I haven’t been able to find the survey or a report about the survey available online, which means that the methodology can’t be examined.
What’s the story? Answer: confusing
Eagland’s article looks like part of a campaign to get the federal government to spread its AI largesse in BC’s direction. (Am I the only one who thinks that British Columbia’s AI companies and educational institutions are smarting because they weren’t included in the federal government’s 2017 Pan-Canadian Artificial Intelligence Strategy? It budgeted $125M for AI communities in Edmonton, Montréal, and Toronto.) Or, it’s possible AInBC is signaling the provincial government that there are problems which they (the provincial government) could solve with funding.
In Eagland’s relatively short article there’s a second message; it’s about an upcoming AI conference, CrossOver: AI, on December 9, 2019. At that point, the article starts to look like an advertisement for an event organized by CityAge (Miro Cernetig’s company). I found this on the conference website’s About page,
Artificial Intelligence, and the technologies around it, will determine the builders of our future economy.
British Columbia has — and is building — that crucial AI ecosystem. Through it, we will have the local and global reach to build the future.
Organized by CityAge and the Artificial Intelligence network of British Columbia, CrossOver: AI will connect and catalyze an essential network of leaders in British Columbia and Canada’s emerging AI ecosystem. To take BC’s strengths in this transformative technology to the national and global stage.
CrossOver AI will:
Establish British Columbia as a national and global leader in AI/ML.
Showcase BC’s AI/ML start-up ecosystem to global investors and corporations for investment and partnerships.
Attract global corporations to invest in establishing AI/ML R&D in BC.
Demonstrate to BC and Canada’s business, government and academic leadership that we have a strong, growing AI network.
Gather and connect all of the members of BC’s AI network to each other.
CrossOver AI’s program will be structured to provide an engaging combination of high-quality content and practical business information.
The morning of the event will be a mix of panel discussions and 20-minute TED-style presentations.
The afternoon will be organized as an interactive mix of pitch sessions that profile the opportunities in global AI and BC’s capabilities.
About AInBC
The Artificial Intelligence network of British Columbia (AInBC) was established by business and academic leaders to unify, organize and catalyze the Artificial Intelligence (AI) and Machine Learning (ML) communities in British Columbia (BC) to establish BC as a national and global leader in AI by 2022.
AInBC believes that AI/ML is of strategic importance to the economic and social well-being of everyone in BC, and is dedicated to ensuring that BC leads rather than follows.
We define the AI community in BC as:
Academic Institutions
AI/ML companies/start-ups
Corporations with AI/ML initiatives
Entrepreneurs
Investment Community
Students
Government (Provincial and Municipal)
Foreign/Non-BC based Corporations seeking AI/ML talent in BC
AInBC recognizes that all members of this community must be served in order to create a vigorous and high-growth ecosystem that benefits all members and the province overall. AInBC is a not-for-profit Society.
…
About CityAge CityAge was founded on the idea that a neutral, focused set of high-powered conversations will help us develop and implement big ideas that build the future.
CityAge has held over 50 conferences on a variety of topics in major urban markets across North America, Europe and Asia, ranging in size from 150 to 500 leaders.
More than 7,000 leaders have attended CityAge and are part of the CityAge network.
Which Businesses AI is Disrupting Now: How your organization can use this essential new tool for business, managing natural resources, and discovering innovations. AI isn’t just for Silicon Valley; it’s available to everyone.
Unicorn AI: BC’s AI companies have the potential to be global players. We’ll look at how we can help them get there.
Attracting Global AI Investment: What do BC and Canada need to do to attract human and financial capital to the emerging AI cluster? How do we get the news out to the world that we are taking a leading role in the AI revolution?
AI for a Better World: AI will allow us new ways to look at social challenges we’ve been trying to solve. How will AI, with the human component and thoughtful policy, help us build a stronger economy and society?
AI and The Data Effect: BC and Canada can responsibly gather and use the data that AI needs. We will look at what competitors are doing, what our strategic advantages are, and how to use them to build our AI cluster.
Ethical AI: How to control the risks, enroll the public, and use AI to build the economy and improve lives.
It’s nice to see that they’ve tucked in ‘ethics’ and ‘making the world a better place’ along with the business-oriented themes.
As for what constitutes this story, it seems a little confused. First, we want money from the federal government (we might leave if we don’t get it) and, second, we’ve got a conference where we want to attract business people and investors.
Analyzing the confusion
It would have been good to find out more about the artificial intelligence community in BC. Unfortunately, I don’t think Nick Eagland has enough experience to get that story. (BTW, a lot of reporters don’t have enough experience to ask the right questions, especially in science and technology. They don’t have the time to adequately research the topic, and they can’t draw on past experience because they don’t stay focused on one subject area long enough to learn about it.)
As for the branding or storymaking strategy on display, I don’t think it was a good idea to bundle the two messages together but then I’m not a member of any target audiences (e.g., business investor, venture capitalist, policy maker, etc.). As well, I’m not the client who may have been driving this message or, in this case, incompatible messages and there’s not a lot the PR flack can do in that case.
An example of ‘good’ storymaking
As for the standard tech community complaints, here’s one of the latest, and it shows how to do this well. From an Oct. 7, 2019 news item on Daily Hive,
Over 110 Canadian tech CEOs have signed an open letter urging political parties to take action to strengthen the country’s innovative economy, and avoid falling further behind international peers.
So far, major parties have put forward pledges in areas like affordability, first-time home buyers, and climate change, but the campaigns have offered few promises designed to drive economic growth in the digital age.
…
The letter was drafted by the Council of Canadian Innovators, a lobby group representing some of the country’s fastest-growing companies. Combined, its signatories run domestic firms that employed more than 35,000 people last year and generated more than $6 billion for the Canadian economy.
…
Ian Rae, CEO of Montreal big-data firm CloudOps, said his engineers receive unsolicited job offers, usually with big salaries and mostly from US tech firms.
“We need to be thinking in Canada about the future economy and the fact that the globe seems to be in this enormous shift towards the globalized digital economy,” said Rae.
He said deep-pocketed foreign investors have also had their eyes on Canadian firms with potential. The risk, he said, is that these companies are bought out before they can grow and generate wealth and employment returns in Canada.
“A lot of these US companies are cherry-picking Canadian scale-ups before they scale up, so that the ultimate net benefit tends to flow outside of the Canadian economy,” Rae said.
Tech CEOs have said the Liberal government’s efforts in recent years to support high growth firms have offered little for emerging scale-up companies that have already outgrown the start-up phase.
David Ross, CEO of Ross Video, said a recent study by the University of Toronto found that Canada was an international laggard when it came to scaling up private firms to the billion dollar mark, companies also known as unicorns. [emphasis mine]
…
“The situation is so bad that even if we were to create four times as many unicorns, we would still be in last place,” said the study from the university’s Impact Centre.
Ross, whose Ottawa information and communications technology company has 650 employees, said the performance “should be a bit of a crisis for our politicians.”
“Canada should be more than rocks, trees, and oil,” Ross said.
…
This story was tightly focused on science and technology innovation and party platforms prior to the October 21, 2019 election. It was timely, and its appeal to make Canada “… more than rocks, …” tied in very nicely with Shane Koyczan’s iconic slam poetry performance (We Are More) at the 2010 Olympics in Vancouver.
Should you be interested in more information about Mr. Cenetig’s companies, you can find out more about Catalytico here and CityAge here.
Nanomaterials currently used in food additives and packaging do not appear to pose a health risk, but we will need to keep an eye on newer materials in the pipeline, reports the Trans-Tasman food regulator.
FSANZ (Food Standards Australia New Zealand) has released two reports reviewing the evidence for the safety of nanotechnologies in food packaging and food additives.
Certain compounds, when engineered as particles measuring on the nano scale (one billionth of a metre), can exhibit properties not seen at larger scales; for example, nano-silver particles have antimicrobial properties.
In 2015 an expert toxicologist prepared two reports for FSANZ on the potential use of nanotechnologies in existing food additives and food packaging. The reports were then peer reviewed by an expert pharmacologist and toxicologist to evaluate whether the conclusions for each of the reports were supported by the weight of evidence in scientific literature. The peer review agreed with the overall conclusions of the reports.
Scope of the work
The consultant was asked to review publicly available scientific literature on whether there is reasonable evidence of health risks associated with oral ingestion of titanium dioxide, silicon dioxide and silver in food. These food additives may contain a proportion of material with at least one dimension in the nanoscale range.
As an extension of this work, evidence of risks to health from nanomaterials used in food packaging was also investigated.
Key findings
The consultant reviewed the evidence on nanoscale silicon dioxide, titanium dioxide and silver in food and found the weight of evidence does not support claims of significant health risks for food grade materials.
Titanium dioxide and silicon dioxide are used internationally in a range of food products and have been used safely for decades. They are approved food additives in Australia and New Zealand. Silver is also an approved additive in Australia and New Zealand but is permitted in very few foods.
Overall, the findings of the report are consistent with recently published information in the OECD’s Working Party on Manufactured Nanomaterials Sponsorship Programme for the Testing of Manufactured Nanomaterials toxicological dossiers on silicon dioxide, titanium dioxide and silver.
There is no direct evidence to suggest novel nanomaterials are currently being used in food packaging applications in Australia or New Zealand, with most patents found from the United States.
From the case studies on the use of nano-clay and nano-silver in packaging, the report concludes that there is no evidence in the literature of migration of nano-clay from packaging into food. The nanoscale nature of nano-silver (whether used in packaging or food) is also not likely to be dangerous to consumers’ health.
An independent peer review agreed with the overall analysis and conclusions of both reports stating that they were appropriately balanced in their reporting and that none of the nanotechnologies described are of health concern.
There is also a June 7, 2016 essay about these reports by Ian Rae for The Conversation,
We know that nanoparticles in sunscreens and cosmetics can penetrate the skin, and this raises questions about what they can do in the body. …
For the most part, I found the piece informative and interesting, but there is one flaw, as can be seen in the sentence I’ve excerpted. In fact, there is very little penetration by nanoparticles found in sunscreens, as noted in my Oct. 4, 2012 posting, and those findings do not appear to have been contradicted in the years since.