The 2024 Canadian federal budget: some thoughts on science & technology, military, and cybersecurity spending

The 2024 Canadian federal budget – Fairness for Every Generation (or, if you want to see the front page, go to Budget 2024 – Fairness for Every Generation and then to View the Budget for the table of contents) was announced on April 16, 2024. So, I’m very late with this posting.

There weren’t too many highlights in the 2024 budget as far as I was concerned. Overall, it was a bread-and-butter budget concerned with housing, jobs, business, and prices, along with the government’s perennial focus on climate change and the future for young people and Indigenous peoples. There was nothing particularly special about the funds allocated for research, and the defence spending is only nominally interesting.

“Boosting Research, Innovation, and Productivity” was found in Chapter Four: Economic Growth for Every Generation.

4.1 Boosting Research, Innovation, and Productivity

For anyone who’s not familiar with ‘innovation’ as a buzzword, it’s code for ‘business’. From 4.1 of the budget,

Key Ongoing Actions

  • Supporting scientific discovery, developing Canadian research talent, and attracting top researchers from around the planet to make Canada their home base for their important work with more than $16 billion committed since 2016.
  • Supporting critical emerging sectors, through initiatives like the Pan-Canadian Artificial Intelligence Strategy, [emphases mine] the National Quantum Strategy, the Pan-Canadian Genomics Strategy, and the Biomanufacturing and Life Sciences Strategy.
  • Nearly $2 billion to fuel Canada’s Global Innovation Clusters to grow these innovation ecosystems, promote commercialization, support intellectual property creation and retention, and scale Canadian businesses.
  • Investing $3.5 billion in the Sustainable Canadian Agricultural Partnership to strengthen the innovation, competitiveness, and resiliency of the agriculture and agri-food sector.
  • Flowing up to $333 million over the next decade to support dairy sector investments in research, product and market development, and processing capacity for solids non-fat, thus increasing its competitiveness and productivity.

The only ‘emerging’ sector singled out for new funding was the Pan-Canadian Artificial Intelligence Strategy, and that is almost all ‘innovation’. From 4.1 of the budget,

Strengthening Canada’s AI Advantage

Canada’s artificial intelligence (AI) ecosystem is among the best in the world. Since 2017, the government has invested over $2 billion towards AI in Canada. Fuelled by those investments, Canada is globally recognized for strong AI talent, research, and its AI sector.

Today, Canada’s AI sector is ranked first in the world for growth of women in AI, and first in the G7 for year-over-year growth of AI talent. Every year since 2019, Canada has published the most AI-related papers, per capita, in the G7. Our AI firms are filing patents at three times the average rate in the G7, and they are attracting nearly a third of all venture capital in Canada. In 2022-23, there were over 140,000 actively engaged AI professionals in Canada, an increase of 29 per cent compared to the previous year. These are just a few of Canada’s competitive advantages in AI and we are aiming even higher.

To secure Canada’s AI advantage, the government has already:

  • Established the first national AI strategy in the world through the Pan-Canadian Artificial Intelligence Strategy;
  • Supported access to advanced computing capacity, including through the recent signing of a letter of intent with NVIDIA and a Memorandum of Understanding with the U.K. government; and,
  • Scaled-up Canadian AI firms through the Strategic Innovation Fund and Global Innovation Clusters program.
Figure 4.1: Building on Canada’s AI Advantage

AI is a transformative economic opportunity for Canada and the government is committed to doing more to support our world-class research community, launch Canadian AI businesses, and help them scale-up to meet the demands of the global economy. The processing capacity required by AI is accelerating a global push for the latest technology, for the latest computing infrastructure.

Currently, most compute capacity is located in other countries. Challenges accessing compute power slow down AI research and innovation, and also expose Canadian firms to a reliance on privately-owned computing outside of Canada. This comes with dependencies and security risks. And, it is a barrier holding back our AI firms and researchers.

We need to break those barriers to stay competitive in the global AI race and ensure workers benefit from the higher wages of AI transformations; we must secure Canada’s AI advantage. We also need to ensure workers who fear their jobs may be negatively impacted by AI have the tools and skills training needed in a changing economy.

To secure Canada’s AI advantage Budget 2024 announces a monumental increase in targeted AI support of $2.4 billion, including:

  • $2 billion over five years, starting in 2024-25, to launch a new AI Compute Access Fund and Canadian AI Sovereign Compute Strategy, to help Canadian researchers, start-ups, and scale-up businesses access the computational power they need to compete and help catalyze the development of Canadian-owned and located AI infrastructure. 
  • $200 million over five years, starting in 2024-25, to boost AI start-ups to bring new technologies to market, and accelerate AI adoption in critical sectors, such as agriculture, clean technology, health care, and manufacturing. This support will be delivered through Canada’s Regional Development Agencies.
  • $100 million over five years, starting in 2024-25, for the National Research Council’s AI Assist Program to help Canadian small- and medium-sized businesses and innovators build and deploy new AI solutions, potentially in coordination with major firms, to increase productivity across the country.
  • $50 million over four years, starting in 2025-26, to support workers who may be impacted by AI, such as creative industries. This support will be delivered through the Sectoral Workforce Solutions Program, which will provide new skills training for workers in potentially disrupted sectors and communities.

The government will engage with industry partners and research institutes to swiftly implement AI investment initiatives, fostering collaboration and innovation across sectors for accelerated technological advancement.
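An aside from me: the four bulleted items above sum to $2.35 billion, not $2.4 billion. My reading (a back-of-the-envelope reconciliation, not an official breakdown) is that the “Safe and Responsible Use of AI” measures quoted later in this posting roughly account for the difference:

```python
# Back-of-the-envelope sum of the AI package items (figures in $ millions).
# My reconciliation attempt, not an official government breakdown.
innovation_items = {
    "AI Compute Access Fund / Sovereign Compute Strategy": 2000.0,
    "AI start-up and adoption support (Regional Development Agencies)": 200.0,
    "NRC AI Assist Program": 100.0,
    "Sectoral Workforce Solutions Program": 50.0,
}
safety_items = {
    "AI Safety Institute of Canada": 50.0,
    "AI and Data Commissioner Office": 5.1,
    "Global Partnership on Artificial Intelligence": 3.5,
}

innovation_total = sum(innovation_items.values())
grand_total = innovation_total + sum(safety_items.values())
print(f"Innovation items: ${innovation_total:,.1f}M")  # $2,350.0M
print(f"With safety items: ${grand_total:,.1f}M")      # $2,408.6M, i.e., roughly $2.4B
```

If that reading is right, nearly 98 per cent of the package is ‘innovation’ money, which supports my point above.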

Before moving to the part of the budget that focuses on the safe and responsible use of AI, I’ve got some information about the legislative situation and the omnibus bill C-27, which covers AI, from my October 10, 2024 posting,

The omnibus bill, C-27, which includes the Artificial Intelligence and Data Act (AIDA), had passed its second reading in the House of Commons at the time of that posting. Since May 2023, the bill has been before the House of Commons Standing Committee on Industry and Technology, according to the Parliament of Canada’s LEGISinfo C-27 webpage (44th Parliament, 1st session, November 22, 2021 to present: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts).

You can find more up-to-date information about the status of the Committee’s Bill C-27 meetings on this webpage, where it appears that September 26, 2024 was the committee’s most recent meeting. If you click on the highlighted meeting dates, you will be given the option of watching a webcast of the meeting. The webpage will also give you access to a list of witnesses, a list of briefs, and the briefs themselves.

November 2024 update: The committee’s most recent meeting is still listed as September 26, 2024.

From 4.1 of the budget,

Safe and Responsible Use of AI

AI has tremendous economic potential, but as with all technology, it presents important considerations to ensure its safe development and implementation. Canada is a global leader in responsible AI and is supporting an AI ecosystem that promotes responsible use of technology. From development through to implementation and beyond, the government is taking action to protect Canadians from the potentially harmful impacts of AI.

The government is committed to guiding AI innovation in a positive direction, and to encouraging the responsible adoption of AI technologies by Canadians and Canadian businesses. To bolster efforts to ensure the responsible use of AI:

  • Budget 2024 proposes to provide $50 million over five years, starting in 2024-25, to create an AI Safety Institute of Canada to ensure the safe development and deployment of AI. The AI Safety Institute will help Canada better understand and protect against the risks of advanced and generative AI systems. The government will engage with stakeholders and international partners with competitive AI policies to inform the final design and stand-up of the AI Safety Institute.
  • Budget 2024 also proposes to provide $5.1 million in 2025-26 to equip the AI and Data Commissioner Office with the necessary resources to begin enforcing the proposed Artificial Intelligence and Data Act.
  • Budget 2024 proposes $3.5 million over two years, starting in 2024-25, to advance Canada’s leadership role with the Global Partnership on Artificial Intelligence, securing Canada’s leadership on the global stage when it comes to advancing the responsible development, governance, and use of AI technologies internationally.

Using AI to Keep Canadians Safe

AI has shown incredible potential to toughen up security systems, including screening protocols for air cargo. Since 2012, Transport Canada has been testing innovative approaches to ensure that air cargo coming into Canada is safe, protecting against terrorist attacks. This included launching a pilot project to screen 10 to 15 per cent of air cargo bound for Canada and developing an artificial intelligence system for air cargo screening.

  • Budget 2024 proposes to provide $6.7 million over five years, starting in 2024-25, to Transport Canada to establish the Pre-Load Air Cargo Targeting Program to screen 100 per cent of air cargo bound for Canada. This program, powered by cutting-edge artificial intelligence, will increase security and efficiency, and align Canada’s air security regime with those of its international partners.

There was a small section which updates some information about intellectual property retention (the patent box regime) but is otherwise concerned with industrial R&D (a perennial Canadian weakness), from 4.1 of the budget,

Boosting R&D and Intellectual Property Retention

Research and development (R&D) is a key driver of productivity and growth. Made-in-Canada innovations meaningfully increase our gross domestic product (GDP) per capita, create good-paying jobs, and secure Canada’s position as a world-leading advanced economy.

To modernize and improve the Scientific Research and Experimental Development (SR&ED) tax incentives, the federal government launched consultations on January 31, 2024, to explore cost-neutral ways to enhance the program to better support innovative businesses and drive economic growth. In these consultations, which closed on April 15, 2024, the government asked Canadian researchers and innovators for ways to better deliver SR&ED support to small- and medium-sized Canadian businesses and enable the next generation of innovators to scale-up, create jobs, and grow the economy.

  • Budget 2024 announces the government is launching a second phase of consultations on more specific policy parameters, to hear further views from businesses and industry on specific and technical reforms. This includes exploring how Canadian public companies could be made eligible for the enhanced credit. Further details on the consultation process will be released shortly on the Department of Finance Canada website.
  • Budget 2024 proposes to provide $600 million over four years, starting in 2025-26, with $150 million per year ongoing for future enhancements to the SR&ED program. The second phase of consultations will inform how this funding could be targeted to boost research and innovation.

On January 31, 2024, the government also launched consultations on creating a patent box regime to encourage the development and retention of intellectual property in Canada. The patent box consultation closed on April 15, 2024. Submissions received through this process, which are still under review, will help inform future government decisions with respect to a patent box regime.

Nice to get an update on what’s happening with the patent box regime.

The Tri-Council, consisting of the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Social Sciences and Humanities Research Council of Canada (SSHRC), doesn’t often get mentioned in the federal budget, but it did this year, from 4.1 of the budget,

Enhancing Research Support

Since 2016, the federal government has committed more than $16 billion in research, including funding for the federal granting councils—the Natural Sciences and Engineering Research Council (NSERC), the Canadian Institutes of Health Research (CIHR), and the Social Sciences and Humanities Research Council (SSHRC).

This research support enables groundbreaking discoveries in areas such as climate change, health emergencies, artificial intelligence, and psychological health. This plays a critical role in solving the world’s greatest challenges, those that will have impacts for generations.

Canada’s granting councils already do excellent work within their areas of expertise, but more needs to be done to maximize their effect. The improvements we are making today, following extensive consultations including with the Advisory Panel on the Federal Research Support System, will strengthen and modernize Canada’s federal research support.

  • To increase core research grant funding and support Canadian researchers, Budget 2024 proposes to provide $1.8 billion over five years, starting in 2024-25, with $748.3 million per year ongoing to SSHRC, NSERC, and CIHR.
  • To provide better coordination across the federally funded research ecosystem, Budget 2024 announces the government will create a new capstone research funding organization. The granting councils will continue to exist within this new organization, and continue supporting excellence in investigator-driven research, including linkages with the Health portfolio. This new organization and structure will also help to advance internationally collaborative, multi-disciplinary, and mission-driven research. The government is delivering on the Advisory Panel’s observation that more coordination is needed to maximize the impact of federal research support across Canada’s research ecosystem.
  • To help guide research priorities moving forward, Budget 2024 also announces the government will create an advisory Council on Science and Innovation. This Council will be made up of leaders from the academic, industry, and not-for-profit sectors, and be responsible for a national science and innovation strategy to guide priority setting and increase the impact of these significant federal investments.
  • Budget 2024 also proposes to provide a further $26.9 million over five years, starting in 2024-25, with $26.6 million in remaining amortization and $6.6 million ongoing, to the granting councils to establish an improved and harmonized grant management system.

The government will also work with other key players in the research funding system—the provinces, territories, and Canadian industry—to ensure stronger alignment, and greater co-funding to address important challenges, notably Canada’s relatively low level of business R&D investment.

More details on these important modernization efforts will be announced in the 2024 Fall Economic Statement.

World-Leading Research Infrastructure

Modern, high-quality research facilities and infrastructure are essential for breakthroughs in Canadian research and science. These laboratories and research centres are where medical and other scientific breakthroughs are born, helping to solve real-world problems and create the economic opportunities of the future. World-leading research facilities will attract and train the next generation of scientific talent. That’s why, since 2015, the federal government has made unprecedented investments in science and technology, at an average of $13.6 billion per year, compared to the average from 2009-10 to 2015-16 of just $10.8 billion per year. But we can’t stop here.

To advance the next generation of cutting-edge research, Budget 2024 proposes major research and science infrastructure investments, including:

  • $399.8 million over five years, starting in 2025-26, to support TRIUMF, Canada’s sub-atomic physics research laboratory, located on the University of British Columbia’s Vancouver campus. This investment will upgrade infrastructure at the world’s largest cyclotron particle accelerator, positioning TRIUMF, and the partnering Canadian research universities, at the forefront of physics research and enabling new medical breakthroughs and treatments, from drug development to cancer therapy.
  • $176 million over five years, starting in 2025‑26, to CANARIE, a national not-for-profit organization that manages Canada’s ultra high-speed network to connect researchers, educators, and innovators, including through eduroam. With network speeds hundreds of times faster, and more secure, than conventional home and office networks, this investment will ensure this critical infrastructure can connect researchers across Canada’s world-leading post-secondary institutions.
  • $83.5 million over three years, starting in 2026-27 to extend support to Canadian Light Source in Saskatoon. Funding will continue the important work at the only facility of its kind in Canada. A synchrotron light source allows scientists and researchers to examine the microscopic nature of matter. This specialized infrastructure contributes to breakthroughs in areas ranging from climate-resistant crop development to green mining processes.
  • $45.5 million over five years, starting in 2024-25, to support the Arthur B. McDonald Canadian Astroparticle Physics Research Institute, a network of universities and institutes that coordinate astroparticle physics expertise. Headquartered at Queen’s University in Kingston, Ontario, the institute builds on the legacy of Dr. McDonald’s 2015 Nobel Prize for his work on neutrino physics. These expert engineers, technicians, and scientists design, construct, and operate the experiments conducted in Canada’s underground and underwater research infrastructure, where research into dark matter and other mysterious particles thrives. This supports innovation in areas like clean technology and medical imaging, and educates and inspires the next wave of Canadian talent.
  • $30 million over three years, starting in 2024-25, to support the completion of the University of Saskatchewan’s Centre for Pandemic Research at the Vaccine and Infectious Disease Organization in Saskatoon. This investment will enable the study of high-risk pathogens to support vaccine and therapeutic development, a key pillar in Canada’s Biomanufacturing and Life Sciences Strategy. Of this amount, $3 million would be sourced from the existing resources of Prairies Economic Development Canada.

These new investments build on existing federal research support:

  • The Strategic Science Fund, which announced the results of its first competition in December 2023, providing support to 24 third-party science and research organizations starting in 2024-25;
  • Canada recently concluded negotiations to be an associate member of Horizon Europe, which would enable Canadians to access a broader range of research opportunities under the European program starting this year; and,
  • The steady increase in federal funding for extramural and intramural science and technology by the government which was 44 per cent higher in 2023 relative to 2015.

Advancing Space Research and Exploration

Canada is a leader in cutting-edge innovation and technologies for space research and exploration. Our astronauts make great contributions to international space exploration missions. The government is investing in Canada’s space research and exploration activities.

  • Budget 2024 proposes to provide $8.6 million in 2024-25 to the Canadian Space Agency for the Lunar Exploration Accelerator Program to support Canada’s world-class space industry and help accelerate the development of new technologies. This initiative empowers Canada to leverage space to solve everyday challenges, such as enhancing remote health care services and improving access to healthy food in remote communities, while also supporting Canada’s human space flight program.
  • Budget 2024 announces the establishment of a new whole-of-government approach to space exploration, technology development, and research. The new National Space Council will enable the level of collaboration required to secure Canada’s future as a leader in the global space race, addressing cross-cutting issues that span commercial, civil, and defence domains. This will also enable the government to leverage Canada’s space industrial base with its world-class capabilities, workforce, and track record of innovation and delivery.

I found responses to the budget from two science organizations; both fall into the moderately pleased category. Here’s an April 17, 2024 news release from Evidence for Democracy (E4D), Note: Links have been removed,

As a leading advocate for evidence-informed decision-making and the advancement of science policy in Canada, Evidence for Democracy (E4D) welcomes the budget’s emphasis on scientific research and innovation. Since its inception, E4D has been at the forefront of advocating for policies that support robust scientific research and its integration into public policy. To support this work, we have compiled a budget analysis for the science and research sector here for more context on Budget 2024. 

“Budget 2024 provides an encouraging investment into next generation researchers and research support systems,” says Sarah Laframboise, Executive Director of E4D, “By prioritizing investments in research talent, infrastructure, and innovation, the government is laying the foundation for a future driven by science and evidence.”

The budget’s initiatives to enhance graduate student scholarships and postdoctoral fellowships reflect a commitment to nurturing Canada’s research talent, a cornerstone of E4D’s advocacy efforts through its role on the Coalition for Canadian Research. E4D is encouraged by this investment in next generation researchers and core research grants, who form the bedrock of scientific discovery and drive innovation across sectors. Additionally, the formation of a new capstone research funding organization and Advisory Council on Science and Innovation are signs of a strategic vision that values Canadian science and research.

While Budget 2024 represents a significant step forward for science and research in Canada, E4D recognizes that challenges and opportunities lie ahead. 

“We note that funding for research in Budget 2024 is heavily back-loaded, with larger funding values coming into effect in a few years time,” adds Laframboise, “Given that this also includes significant structural and policy changes, this leaves some concern over the execution and roll-out of these investments in practice.”

As the details of the budget initiatives unfold, E4D remains committed to monitoring developments, advocating for evidence-based policies, and engaging with stakeholders to ensure that science continues to thrive as a driver of progress and prosperity in Canada. 

The April 16, 2024 E4D budget analysis by Farah Qaiser, Nada Salem, Sarah Laframboise, and Simarpreet Singh is here. The authors provide more detail than I do.

The second response to the 2024 budget, from the Canadian Institutes of Health Research (CIHR), is posted on a federal government website; from an April 29, 2024 letter, Note: Links have been removed,

Dear colleagues,

On April 16, 2024, the Government of Canada released Budget 2024 – Fairness for Every Generation – a Budget that proposes a historic level of investment in research and innovation. Most notably for CIHR, NSERC, and SSHRC, this included $1.8 billion in core research grant funding over five years (starting in 2024-25, with $748.3 million per year ongoing). This proposed investment recognizes the vital role played by research in improving the lives of Canadians. We are thrilled by the news of this funding and will share more details about how and when these funds will be distributed as the Budget process unfolds.

Budget 2024 also proposes $825 million over five years (starting in 2024-25, with $199.8 million per year ongoing) to increase the annual value of master’s and doctoral student scholarships to $27,000 and $40,000, respectively, and post-doctoral fellowships to $70,000. This will also increase the number of research scholarships and fellowships provided, building to approximately 1,720 more graduate students or fellows benefiting each year. To make it easier for students and fellows to access support, the enhanced suite of scholarships and fellowship programs will be streamlined into one talent program. These proposals are the direct result of a coordinated effort to recognize the importance of students in the research ecosystem.

The Budget proposes other significant investments in health research, including providing:

  • a further $26.9 million over five years (starting in 2024-25, with $26.6 million in remaining amortization and $6.6 million ongoing) to the granting councils to establish an improved and harmonized grant management system.
  • $10 million in 2024-2025 for CIHR to support an endowment to increase prize values awarded by the Gairdner Foundation for excellence in health research.
  • $80 million over five years for Health Canada to support the Brain Canada Foundation in its advancement of brain research.
  • $30 million over three years (starting in 2024-25) to support Indigenous participation in research, with $10 million each for First Nation, Métis, and Inuit partners.
  • $2 billion over five years (starting in 2024-25) to launch a new AI Compute Access Fund and Canadian AI Sovereign Compute Strategy, to help Canadian researchers, start-ups, and scale-up businesses access the computational power they need to compete and help catalyze the development of Canadian-owned and located AI infrastructure.
  • As well, to help guide research priorities moving forward, Budget 2024 announces that the government will create an Advisory Council on Science and Innovation. This Council will be comprised of leaders from the academic, industry, and not-for-profit sectors, and will be responsible for a national science and innovation strategy to guide priority setting and increase the impact of these significant federal investments.

In addition to these historic investments, Budget 2024 includes a proposal to create a “new capstone research funding organization” that will provide improved coordination across the federally funded research ecosystem. This proposal stems directly from the recommendations of the Advisory Panel on the Federal Research Support System, and recognizes the need for more strategic coordination in the federal research system. The Budget notes that the granting councils will each continue to exist within this new organization, and continue supporting excellence in investigator-driven research, including linkages with the Health portfolio. While the governance implications of this new organization are not known at this time, the CIHR Institutes will remain in place as an integral part of CIHR. As stated in the Budget, the timing and details with respect to the creation of this organization still need to be determined, but it did indicate that more details will be announced in the 2024 Fall Economic Statement.

As well, CIHR will be working closely with the Natural Sciences and Engineering Research Council, Social Sciences and Humanities Research Council, Health Canada, and Innovation, Science and Economic Development Canada in the coming months to implement various Budget measures related to research. In the meantime, CIHR will continue its business as usual.

These announcements and investments are significant and unprecedented and will create exciting opportunities for the Tri-Agencies and other partners across the federal research ecosystem to contribute to the health, social, and economic needs and priorities of Canadians. They will also ensure that Canada remains a world leader in science. This is positive and welcome news for the CIHR community. We look forward to embarking on this new journey with Canada’s health research community.

Tammy Clifford, PhD
Acting President, CIHR

Defence

I have taken to including information about the funding for the military on the grounds that the military has historically been the source of much scientific, medical, and technological innovation. (Television, anyone?)

Defence in the 2024 Canadian federal budget is in Chapter 7: Protecting Canadians and Defending Democracy; after a parade of its greatest budget hits from years past, there’s this,

Stronger National Defence

As the world becomes increasingly unstable, as climate change increases the severity and frequency of natural disasters, and as the risk of conflict grows, Canada is asking more of our military. Whether it is deploying to Latvia as part of Operation REASSURANCE, or Nova Scotia as part of Operation LENTUS, those who serve in the Canadian Armed Forces have answered the call whenever they are needed, to keep Canadians safe.

On April 8 [2024], in response to the rapidly changing security environment, the government announced an update to its defence policy: Our North, Strong and Free. In this updated policy, the government laid out its vision for Canada’s national defence, which will ensure the safety of Canadians, our allies, and our partners by equipping our soldiers with the cutting-edge tools and advanced capabilities they need to keep Canadians safe in a changing world.

  • Budget 2024 proposes foundational investments of $8.1 billion over five years, starting in 2024-25, and $73.0 billion over 20 years to the Department of National Defence (DND), the Communications Security Establishment (CSE), and Global Affairs Canada (GAC) to ensure Canada is ready to respond to global threats and to protect the well-being of Canadian Armed Forces members. Canada’s defence spending-to-GDP ratio is expected to reach 1.76 per cent by 2029-30.  These include:
    • $549.4 million over four years, starting in 2025-26, with $267.8 billion in future years, for DND to replace Canada’s worldwide satellite communications equipment; for new tactical helicopters, long-range missile capabilities for the Army, and airborne early warning aircraft; and for other investments to defend Canada’s sovereignty;
    • $1.9 billion over five years, starting in 2024-25, with $8.2 billion in future years, for DND to extend the useful life of the Halifax-class frigates and extend the service contract of the auxiliary oiler replenishment vessel, while Canada awaits delivery of next generation naval vessels;
    • $1.4 billion over five years, starting in 2024-25, with $8.2 billion in future years, for DND to replenish its supplies of military equipment;
    • $1.8 billion over five years, starting in 2024-25, with $7.7 billion in future years, for DND to build a strategic reserve of ammunition and scale up the production of made-in-Canada artillery ammunition. Private sector beneficiaries are expected to contribute to infrastructure and retooling costs;
    • $941.9 million over four years, starting in 2025-26, with $16.2 billion in future years, for DND to ensure that military infrastructure can support modern equipment and operations;
    • $917.4 million over five years, starting in 2024-25, with $10.9 billion in future years and $145.8 million per year ongoing, for CSE and GAC to enhance their intelligence and cyber operations programs to protect Canada’s economic security and respond to evolving national security threats;
    • $281.3 million over five years, starting in 2024-25, with $216 million in future years, for DND for a new electronic health record platform for military health care;
    • $6.9 million over four years, starting in 2025-26, with $1.4 billion in future years, for DND to build up to 1,400 new homes and renovate an additional 2,500 existing units for Canadian Armed Forces personnel on bases across Canada (see Chapter 1);
    • $100 million over five years, starting in 2024-25, to DND for child care services for Canadian Armed Forces personnel and their families (see Chapter 2);
    • $149.9 million over four years, starting in 2025-26, with $1.8 billion in future years, for DND to increase the number of civilian specialists in priority areas; and,
    • $52.5 million over five years, starting in 2024-25, with $54.8 million in future years, to DND to support start-up firms developing dual-use technologies critical to our defence via the NATO Innovation Fund.

To support Our North, Strong and Free, $156.7 million over three years, starting in 2026-27, and $537.7 million in future years would be allocated from funding previously committed to Canada’s 2017 Defence Policy, Strong, Secure, Engaged.

  • Budget 2024 also proposes additional measures to strengthen Canada’s national defence:
    • $1.2 billion over 20 years, starting in 2024-25, to support the ongoing procurement of critical capabilities, military equipment, and infrastructure through DND’s Capital Investment Fund; and,
    • $66.5 million over five years, starting in 2024-25, with $7.4 billion in future years to DND for the Future Aircrew Training program to develop the next generation of Royal Canadian Air Force personnel. Of this amount, $66.5 million over five years, starting in 2024-25, would be sourced from existing DND resources.
  • Budget 2024 also announces reforms to Canadian defence policy and its review processes:
    • Committing Canada to undertake a Defence Policy Review every four years, as part of a cohesive review of the National Security Strategy; and,
    • Undertaking a review of Canada’s defence procurement system.

With this proposed funding, since 2022, the government has committed more than $125 billion over 20 years in incremental funding to strengthen national defence and help keep Canadians and our democracy safe in an increasingly unpredictable world—today and for generations. Since 2015, this adds up to over $175 billion in incremental funding for national defence.

Enhancing CSIS Intelligence Capabilities

As an advanced economy and an open and free democracy, Canada continues to be targeted by hostile actors, which threaten our democratic institutions, diaspora communities, and economic prosperity. The Canadian Security Intelligence Service (CSIS) protects Canadians from threats, such as violent extremism and foreign interference, through its intelligence operations in Canada and around the world.

To equip CSIS to combat emerging global threats and keep pace with technological developments, further investments in intelligence capabilities and infrastructure are needed. These will ensure CSIS can continue to protect Canadians.

  • Budget 2024 proposes to provide $655.7 million over eight years, starting in 2024-25, with $191.1 million in remaining amortization, and $114.7 million ongoing to the Canadian Security Intelligence Service to enhance its intelligence capabilities, and its presence in Toronto.

Maintaining a Robust Arctic Presence

The Canadian Arctic is warming four times faster than the world average, as a result of climate change. It is also where we share a border with today’s most hostile nuclear power—Russia. The shared imperatives of researching climate change where its impacts are most severe, and maintaining an ongoing presence in the Arctic enable Canada to advance this important scientific work and assert our sovereignty.

Maintaining a robust research presence supports Canada’s Arctic sovereignty. Scientific and research operations in the Arctic advance our understanding of how climate change is affecting people, the economy, and the environment in the region. This is an important competitive advantage, as economic competition increases in the region. 

To support research operations in Canada’s North, Budget 2024 proposes:

  • $46.9 million over five years starting in 2024-25, with $8.5 million in remaining amortization and $11.1 million ongoing, to Natural Resources Canada to renew the Polar Continental Shelf Program to continue supporting northern research logistics, such as lodging and flights for scientists; and,
  • $3.5 million in 2024-25 to Polar Knowledge Canada to support its activities, including the operation of the Canadian High Arctic Research Station.

Protecting Canadians from Financial Crimes

Financial crimes are serious threats to public safety, national security, and Canada’s financial system. They can range from terrorist financing, corruption, and the evasion of sanctions, to money laundering, fraud, and tax evasion. These crimes have real world implications, often enabling other criminal behaviour. Financial crime also undermines the fairness and transparency that are so essential to our economy.

Since 2017, the government has undertaken significant work to crack down on financial crime:

  • Investing close to $320 million since 2019 to strengthen compliance, financial intelligence, information sharing, and investigative capacity to support money laundering investigations;
  • Creating new Integrated Money Laundering Investigative Teams in British Columbia, Alberta, Ontario, and Quebec, which convene experts to advance investigations into money laundering, supported by dedicated forensic accounting experts;
  • Launching a publicly accessible beneficial ownership registry for federal corporations on January 22, 2024. The government continues to call upon provinces and territories to advance a pan-Canadian approach to beneficial ownership transparency;
  • Modernizing Canada’s anti-money laundering and anti-terrorist financing framework to adapt to emerging technologies; vulnerable sectors; and growing risks such as sanctions evasion; and,
  • Establishing public-private partnerships with the financial sector, that are improving the detection and disruption of profit-oriented crimes, including human trafficking, online child sexual exploitation, and fentanyl trafficking.

Budget 2024 takes further action to protect Canadians from financial crimes.

Anti-Money Laundering and Anti-Terrorist Financing

Criminal and terrorist organizations continually look for new ways to perpetrate illicit activities. Canada needs a robust legal framework that keeps pace with evolving financial crimes threats.

To combat money laundering, terrorist financing, and sanctions evasion, Budget 2024 announces:

  • The government intends to introduce legislative amendments to the Proceeds of Crime (Money Laundering) and Terrorist Financing Act (PCMLTFA), the Criminal Code, the Income Tax Act, and the Excise Tax Act.
    • Proposed amendments to the PCMLTFA would:
      • Enhance the ability of reporting entities under the PCMLTFA to share information with each other to detect and deter money laundering, terrorist financing, and sanctions evasion, while maintaining privacy protections for personal information, including an oversight role for the Office of the Privacy Commissioner under regulations;
      • Permit the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC) to disclose financial intelligence to provincial and territorial civil forfeiture offices to support efforts to seize property linked to unlawful activity, and to Immigration, Refugees and Citizenship Canada to strengthen the integrity of Canada’s citizenship process;
      • Enable anti-money laundering and anti-terrorist financing regulatory obligations to cover factoring companies, cheque cashing businesses, and leasing and finance companies to close a loophole and level the playing field across businesses providing financial services;
      • Allow FINTRAC to publicize more information around violations of obligations under the PCMLTFA when issuing administrative monetary penalties to strengthen transparency and compliance; and,
      • Make technical amendments to close loopholes and correct inconsistencies.
    • Proposed amendments to the Criminal Code would:
      • Allow courts to issue an order to require a financial institution to keep an account open to assist in the investigation of a suspected criminal offence; and,
      • Allow courts to issue a repeating production order to authorize law enforcement to obtain ongoing, specified information on activity in an account or multiple accounts connected to a person of interest in a criminal investigation.
    • Proposed amendments to the Income Tax Act and Excise Tax Act would:
      • Ensure Canada Revenue Agency officials who carry out criminal investigations are authorized to seek general warrants through court applications, thereby modernizing and simplifying evidence gathering processes and helping to fight tax evasion and other financial crimes.

Canada Financial Crimes Agency

As announced in Budget 2023, the Canada Financial Crimes Agency (CFCA) will become Canada’s lead enforcement agency against financial crime. It will bring together expertise necessary to increase money laundering charges, prosecutions, and convictions, and the seizure of criminal assets.

  • Budget 2024 proposes to provide $1.7 million over two years, starting in 2024-25, to the Department of Finance to finalize the design and legal framework for the CFCA.

Fighting Trade-Based Fraud and Money Laundering

  • Trade-based financial crime is one of the most pervasive means of laundering money; it’s estimated that this is how hundreds of millions of dollars are laundered each year. To strengthen efforts to fight trade fraud and money laundering, the 2023 Fall Economic Statement announced enhancements to the Canada Border Services Agency’s authorities under the PCMLTFA to combat trade-based financial crime and the intent to create a Trade Transparency Unit.
  • Budget 2024 builds on this work by proposing to provide $29.9 million over five years, starting in 2024-25, with $5.1 million in remaining amortization and $4.2 million ongoing, for the Canada Border Services Agency to support the implementation of its new authorities under the PCMLTFA to combat financial crime and strengthen efforts to combat international financial crime with our allies.

Supporting Veterans’ Well-Being

After their service and their sacrifice, veterans of the Canadian Armed Forces deserve our full support. Veterans’ organizations are often best placed to understand the needs of veterans and to develop programming that improves their quality of life. In 2018, the federal government launched the Veteran and Family Well-Being Fund, which provides funding to public, private, and academic organizations, to advance research projects and innovative approaches to deliver services to veterans and their families.

  • Budget 2024 proposes to provide an additional $6 million over three years, starting in 2024-25, to Veterans Affairs Canada for the Veteran and Family Well-Being Fund. A portion of the funding will focus on projects for Indigenous, women, and 2SLGBTQI+ veterans.

Telemedicine Services for Veterans and Their Families

After serving in the Canadian Armed Forces, many veterans who previously received their health care from the Forces need to find a family doctor in the provincial system, which makes their transition to civilian life more stressful, especially if they need health care for service-related injuries.

To ensure veterans and their families have access to the care they deserve after their service to Canada:

  • Budget 2024 proposes to provide $9.3 million over five years, starting in 2024-25, to Veterans Affairs Canada to extend and expand the Veteran Family Telemedicine Service pilot for another three years. This initiative will provide up to two years of telemedicine services to recent veterans and their families.

I didn’t expect anything on economic matters, from Chapter 7: Protecting Canadians and Defending Democracy,

7.2 Economic Security for Canada and Our Allies

The system of rules and institutions that were established in the wake of the Second World War unleashed an era of prosperity unprecedented in human history. This era generated a massive expansion of global trade, and lifted hundreds of millions of people out of poverty. As a trading nation with privileged access to more than two-thirds of the global economy, Canada has benefitted enormously from the stability and certainty that this system provided.

Supply chain disruptions and rising protectionism threaten this Canadian advantage that has been enjoyed for generations. Canada is taking action to make sure we preserve the rules-based international order. We are strengthening our trade relationships and making sure they reflect our values. We are ensuring our economy is resilient and secure, protecting Canadians and Canada from economic pressure from authoritarian regimes, and defending Canada’s economic interests.

Budget 2024 makes investments to ensure the opportunities and prosperity of trade, enjoyed by generations of Canadians, continue to be there for every generation.

Key Ongoing Actions

  • Launching in 2017 Strong, Secure, Engaged, to maintain the Canadian Armed Forces as an agile, multi-purpose, combat-ready force, ensuring Canada is strong domestically, an active partner in North America, and engaged internationally.
  • Upholding Canada’s 15 free trade agreements with 51 countries. Canada is the only G7 country with comprehensive trade and investment agreements with all other G7 members.
  • Implementing the modernized Canada-Ukraine Free Trade Agreement and the United Kingdom’s accession to the Comprehensive and Progressive Agreement for Trans-Pacific Partnership.
  • Establishing a new Canada-Taiwan foreign investment promotion and protection arrangement in December 2023.
  • Launching Canada’s Indo-Pacific Strategy in November 2022, committing almost $2.3 billion to strengthen Canada’s role as a strong partner in the region. The strategy included:
    • $492.9 million over five years to reinforce Canada’s Indo-Pacific naval presence and increase Canadian Armed Forces participation in regional military exercises.
    • $227.8 million over five years to increase Canada’s work with partners in the region on national security, cyber security, and responses to crime, terrorism, and threats from weapons proliferation.
    • Canada is negotiating free trade agreements with Indonesia and the Association of Southeast Asian Nations to provide additional trade and investment opportunities in the Indo-Pacific region.
  • To further reinforce Canada’s role as a trusted supply chain partner, and its commitment to cooperate with like-minded partners in meeting emerging global challenges, including the economic resilience of the world’s democracies, Canada undertook the following actions:
    • Joined with the U.S. in the Energy Transformation Task Force to accelerate cooperation on critical clean energy opportunities and to strengthen integrated Canada-U.S. supply chains, which as announced in Chapter 4, has been extended for another year.
    • Canada signed a new agreement in May 2023 with South Korea for cooperation on critical mineral supply chains, clean energy transition, and energy security.
    • Canada endorsed the Joint Declaration Against Trade-Related Economic Coercion and Non-Market Policies and Practices with Australia, Japan, New Zealand, the U.K., and the U.S. in June 2023.

Protecting Canadian Businesses from Unfair Foreign Competition

Canadian companies and workers are able to do business around the world, selling their goods and expertise, because the government has delivered free trade agreements that cover 61 per cent of the world’s GDP and 1.5 billion consumers. This means Canadians can do business in Japan and Malaysia with the CPTPP; in Europe with CETA; in the United States and Mexico with the new NAFTA; and in Ukraine with a modernized CUFTA. These agreements mean good jobs and good salaries for people across the country.

However, this is only true when Canadian workers and businesses are competing on an even playing field, and countries respect agreed trade rules.

That is why the government has taken steps to ensure that Canada’s trade remedy and import monitoring systems have the tools needed to defend Canadian workers and businesses from unfair practices of foreign competitors. For instance, earlier this year, Canada introduced a system to track the countries steel imports are initially melted and poured in, to increase supply chain transparency and support effective enforcement of Canada’s trade laws.

  • Budget 2024 proposes to provide $10.5 million over three years, starting in 2024-25, for the Canada Border Services Agency to create a dedicated Market Watch Unit to monitor and update trade remedy measures annually, to protect Canadian workers and businesses from unfair trade practices, and ensure greater transparency and market predictability.

Ensuring Reciprocal Treatment for Canadian Businesses Abroad

Canada is taking action to protect Canadian businesses and workers from additional global economic and trade challenges. These challenges include protectionist and non-market policies and practices implemented by our trading partners. When Canada opens its markets to goods and services from other countries, we expect those countries to equally grant Canadian businesses the access that we provide their companies.

As detailed in the Policy Statement on Ensuring Reciprocal Treatment for Canadian Businesses Abroad, published alongside the 2023 Fall Economic Statement, Canada will consider reciprocity as a key design element for new policies going forward. This approach builds on Canada’s commitment to implement reciprocal procurement policies, including for infrastructure and sub-national infrastructure spending, in the near term. A reciprocal lens will also be applied to a range of new measures including, but not limited to, investment tax incentives, grants and contributions, technical barriers to trade, sanitary and phytosanitary measures, investment restrictions, and intellectual property requirements.

In pursuing reciprocity, Canada will continue working with its allies to introduce incentives for businesses to reorient supply chains to trusted, reliable partners, and will ensure that any new measures do not unnecessarily harm trading partners who do not discriminate against Canadian goods and suppliers.  

Protecting Critical Supply Chains

Recent events around the world, from the pandemic to Russia’s full-scale invasion of Ukraine, have exposed strategic vulnerabilities in critical supply chains, to which Canada and countries around the world are responding by derisking, or friendshoring, their supply chains. Canada is actively working with its allies to strengthen shared supply chains and deepen our economic ties with trusted partners, including in the context of accelerating the transition to a net-zero economy.

Ongoing efforts to build our critical supply chains through democracies like our own represent a significant economic opportunity for Canadian businesses and workers, and the government will continue to design domestic policies and programs with friendshoring as a top-of-mind objective.

To reinforce Canada’s role as a trusted supply chain partner for our allies, Budget 2023 took action to mobilize private investment and grow Canada’s economy towards net-zero. These investments are growing Canada’s economic capacity in industries across the economy, while simultaneously reducing Canada’s emissions and strengthening our essential trading relationships.

Eradicating Forced Labour from Canadian Supply Chains

Canada is gravely concerned by the ongoing human rights violations against Uyghurs and Muslim minorities in China, as well as by the use of forced labour around the world. 

  • Budget 2024 reaffirms the federal government’s commitment to introduce legislation in 2024 to eradicate forced labour from Canadian supply chains and to strengthen the import ban on goods produced with forced labour. The government will also work to ensure existing legislation fits within the overall framework to safeguard our supply chains.

This will build on funding committed in the 2023 Fall Economic Statement that, starting January 1, 2024, supports the requirement for annual reporting from public and private entities to demonstrate measures they have taken to prevent and reduce the risk that forced labour is used in their supply chains.

Before moving on to an interesting analysis of the defence portion of the 2024 budget by someone else, here’s a link to the national defence policy, Our North, Strong and Free: A Renewed Vision for Canada’s Defence, which was released on April 8, 2024, just days before the April 16, 2024 release date for this latest federal budget.

It seems there was a shift in policy during the eight-day interval. From Murray Brewster’s April 16, 2024 article for the Canadian Broadcasting Corporation’s (CBC) news online website, Note: Links have been removed,

The new federal budget promises good things will happen at the Department of National Defence … next year, and hopefully in the years after.

The new fiscal plan, presented Tuesday by Finance Minister Chrystia Freeland, marks a subtle but significant shift from what was proposed in last week’s long-awaited defence policy [emphasis mine], which committed to spending an additional $8.1 billion on defence.

The funding envelope in the budget earmarks the same amount but includes not only the defence department but proposed spending on both the Communications Security Establishment — the country’s electronic spy agency — and Global Affairs Canada. [emphases mine]

While the overall defence budget is expected to increase marginally in the current fiscal year to $33.8 billion, defence experts told CBC News that when the internal cost-cutting exercise ordered by the Liberal government and the new defence policy are factored in, the military can expect roughly $635 million less this year [emphasis mine] than was anticipated before spending restraint kicked in.

Freeland’s fiscal plan projects a 30 per cent increase in defence spending in the next fiscal year, bringing it to $44.2 billion.

This is how I understand what Brewster is saying:

  • 2024/25 defence budget as listed is $33.8B
  • Not all of this money is going directly to defence (the Communications Security Establishment and Global Affairs Canada will be partaking)
  • the defence department has been ordered to cut costs
  • so, there will be $635M less than defence might have expected
  • in 2025/26 defence spending will be increased to $44.2 billion, whatever that means (see the quick arithmetic check below)
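For what it’s worth, here’s a quick back-of-the-envelope check of those figures (a sketch only; the numbers are the ones Brewster reports, and the ‘effective spending’ line assumes the $635 million comes off the listed $33.8 billion, which the article leaves ambiguous):

```python
# Sanity check of the defence figures reported in Brewster's article
# (all values in $ billions, as reported; this is not an official costing).
listed_2024_25 = 33.8     # defence budget as listed for 2024/25
projected_2025_26 = 44.2  # Freeland's projection for 2025/26
restraint_cut = 0.635     # reported ~$635M less than anticipated this year

increase = (projected_2025_26 - listed_2024_25) / listed_2024_25 * 100
print(f"Projected year-over-year increase: {increase:.1f}%")  # 30.8%, consistent with the '30 per cent' claim

# Assumption: the restraint cut comes off the listed figure.
effective = listed_2024_25 - restraint_cut
print(f"Effective 2024/25 spending: ${effective:.1f}B")  # roughly $33.2B
```

The percentages line up, which is reassuring, but the check also shows how much of the promised money sits in the out-years rather than in the current fiscal year.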

That’s quite the dance, and Brewster’s April 16, 2024 article points out at least one more weakness,

Sahir Khan, the executive vice-president of the University of Ottawa’s Institute of Fiscal Studies and Democracy, said he would love to see the specifics.

“That’s one of the difficulties, I think, with this government is we have seen a lot of aspiration, but not always the perspiration,” said Khan, a former deputy parliamentary budget officer. “What is the plan to achieve the results?”

The politically charged promise to increase Canada’s defence spending to 1.76 per cent of the gross domestic product by the end of the decade could be left in doubt when the spending plans are laid alongside the budget’s economic projections during that time frame.

Generally, the better the economy does, the more the defence budget would have to be increased to meet the target.

“It’s really unclear how we actually get to 1.76 per cent of GDP, if you take the figures that are presented which outline how spending is going to increase,” said Dave Perry, a defence expert and president of the Canadian Global Affairs Institute.

“You can’t put that against the nominal GDP projection provided in the budget” and then add in other government departments, such as Veterans Affairs Canada, “and get anywhere close” to the GDP projection in the defence policy, he said.
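
To make Perry’s objection a little more concrete, here’s a minimal sketch of the arithmetic he’s pointing at. Note that the GDP figure below is a hypothetical placeholder of my own (the budget’s actual nominal GDP projections are what’s in dispute), so this shows only how the spending-to-GDP ratio is computed, not what it will be,

# Illustrative only: how a spending-to-GDP target is computed.
# nominal_gdp is a hypothetical placeholder, NOT a figure from the budget.
defence_spending = 44.2  # projected defence envelope, $B (quoted above)
nominal_gdp = 3000.0     # hypothetical nominal GDP, $B

share = defence_spending / nominal_gdp * 100
print(f"Share of GDP: {share:.2f}%")  # ~1.47% at this illustrative GDP

target_spending = 0.0176 * nominal_gdp
print(f"Needed for 1.76% of GDP: ${target_spending:.1f}B")  # $52.8B at this illustrative GDP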

There are more questions about the proposed defence spending in the 2024 federal budget in Brewster’s other April 16, 2024 article for CBC (Critics attack long timelines in defence plan as military awaits a budget boost).

About five weeks after the budget was released, Prime Minister Justin Trudeau received a letter, from a May 23, 2024 article by Alexander Panetta for CBC News online,

Nearly one-quarter of the members of the United States Senate have sent an unusually critical letter to Prime Minister Justin Trudeau expressing dismay over Canada’s level of defence spending.

They pressed Trudeau to come to this summer’s NATO summit with a plan to fulfil Canada’s commitment to reach the alliance’s longstanding spending target.

The letter from 23 members of the U.S. Senate, from both parties, represents a dramatic and public escalation of pressure from Washington over a longstanding bilateral irritant.

That written critique [letter] comes just days after Defence Minister Bill Blair completed what he referred to as a productive trip to Washington to promote Canada’s new military strategy.

“We are concerned and profoundly disappointed,” says the letter, referring to the spending levels in the strategy Blair came to promote.

The pressure is continuing at this year’s Halifax [Nova Scotia, Canada] International Security Forum, held from November 22 – 24, 2024, as can be seen in Sean Boynton’s November 24, 2024 article (includes embedded video) for Global News,

A bipartisan pair of U.S. senators say they expect Canada and the U.S. to work collaboratively on shared issues of defence and the border, but suggested Ottawa’s policies on military spending need to change to speed up progress.

Speaking to Mercedes Stephenson from the Halifax International Security Forum in an interview that aired Sunday on The West Block, Republican Sen. James Risch of Idaho and Democratic Sen. Jeanne Shaheen of New Hampshire downplayed concerns that incoming president-elect Donald Trump will penalize Canada on things like trade if it doesn’t step up on defence spending.

As far as I’m concerned, this budget offers some moderate gains from a science and technology perspective. With regard to military spending, it seems a little lacklustre overall, and military research funding might be called nonexistent.

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A mostly software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. Although my January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” it also includes information about legislative efforts; that said, my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US always has to be considered in these matters. I have a November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website, where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
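
A quick aside from me: those two quoted figures (doubling roughly every six months, and a 350-million-fold increase over thirteen years) can be checked against each other with a little exponential arithmetic,

import math

# Consistency check on the press release's figures: how fast a doubling time
# does a 350-million-fold increase over thirteen years imply?
factor = 350e6
years = 13

doublings = math.log2(factor)                  # ~28.4 doublings
months_per_doubling = years * 12 / doublings
print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
# ~5.5 months -- close to the "around every six months" figure quoted above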

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
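
The report doesn’t specify what a registry entry would look like, but to make the idea concrete, here’s a minimal sketch of a transfer record keyed on a per-chip identifier. All of the field names are my own invention, purely for illustration,

from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of the kind of chip-transfer reporting the report
# describes. None of these field names come from the report itself.
@dataclass
class ChipTransfer:
    chip_id: str        # the proposed unique per-chip identifier
    seller: str
    buyer: str
    transfer_date: date
    chip_model: str

# A toy registry: map each chip's identifier to its chain of custody.
registry: dict[str, list[ChipTransfer]] = {}

def record_transfer(t: ChipTransfer) -> None:
    """Append a transfer to the chip's history, creating the chain if new."""
    registry.setdefault(t.chip_id, []).append(t)

record_transfer(ChipTransfer("CHIP-0001", "FabCo", "CloudCo", date(2024, 2, 14), "A100-class"))
print(len(registry["CHIP-0001"]))  # 1 transfer on record for this chip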

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
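
Stripped of the cryptography, the multi-party ‘unlock’ idea reduces to a threshold-approval check. Here’s the simplest possible illustration (my own sketch, not a mechanism from the report): a risky training run proceeds only when at least k of the n designated parties consent,

# Illustrative only: a risky training run is unlocked only if at least
# k of the n designated parties have given their approval.
def training_unlocked(approvals: set[str], parties: set[str], k: int) -> bool:
    """Return True when at least k recognized parties have approved."""
    return len(approvals & parties) >= k

parties = {"regulator", "cloud_provider", "developer"}
print(training_unlocked({"regulator", "developer"}, parties, k=3))  # False: only 2 of 3
print(training_unlocked(parties, parties, k=3))                     # True: all 3 consent

In a real system the consent would presumably be enforced cryptographically in the hardware, as the press release suggests, rather than by a function like this.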

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the website of the University of Cambridge’s Centre for the Study of Existential Risk.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks,” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems

These days there’s a lot of international interest in policy and regulation where AI is concerned. So even though this is a little late, here’s what happened back in September 2023: the Canadian government came to an agreement with various technology companies about adopting a new voluntary code. Quinn Henderson’s September 28, 2023 article for the Daily Hive starts in a typically Canadian fashion, Note: Links have been removed,

While not quite as star-studded [emphasis mine] as the [US] White House’s AI summit, the who’s who of Canadian tech companies have agreed to new rules concerning AI.

What happened: A handful of Canada’s biggest tech companies, including Blackberry, OpenText, and Cohere, agreed to sign on to new voluntary government guidelines for the development of AI technologies and a “robust, responsible AI ecosystem in Canada.”

What’s next: The code of conduct is something of a stopgap until the government’s *real* AI regulation, the Artificial Intelligence and Data Act (AIDA), comes into effect in two years.

The regulation race is on around the globe. The EU is widely viewed as leading the way with the world’s first comprehensive regulatory AI framework set to take effect in 2026. The US is also hard at work but only has a voluntary code in place.

Henderson’s September 28, 2023 article offers a good, brief summary of the situation regarding regulation and self-regulation of AI here in Canada and elsewhere around the world, albeit from a few months ago. Oddly, there’s no mention of what was then an upcoming international AI summit in the UK (see my November 2, 2023 posting, “UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes“).

Getting back to Canada’s voluntary code of conduct, here’s the September 27, 2023 Innovation, Science and Economic Development Canada (ISED) news release about it, Note: Links have been removed,

Today [September 27, 2023], the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, announced Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, which is effective immediately. The code identifies measures that organizations are encouraged to apply to their operations when they are developing and managing general-purpose generative artificial intelligence (AI) systems. The Government of Canada has already taken significant steps toward ensuring that AI technology evolves responsibly and safely through the proposed Artificial Intelligence and Data Act (AIDA), which was introduced as part of Bill C-27 in June 2022. This code is a critical bridge between now and when that legislation would be coming into force. The code outlines measures that are aligned with six core principles:

Accountability: Organizations will implement a clear risk management framework proportionate to the scale and impact of their activities.

Safety: Organizations will perform impact assessments and take steps to mitigate risks to safety, including addressing malicious or inappropriate uses.

Fairness and equity: Organizations will assess and test systems for biases throughout the lifecycle.

Transparency: Organizations will publish information on systems and ensure that AI systems and AI-generated content can be identified.

Human oversight and monitoring: Organizations will ensure that systems are monitored and that incidents are reported and acted on.

Validity and robustness: Organizations will conduct testing to ensure that systems operate effectively and are appropriately secured against attacks.

This code is based on the input received from a cross-section of stakeholders, including the Government of Canada’s Advisory Council on Artificial Intelligence, through the consultation on the development of a Canadian code of practice for generative AI systems. The government will publish a summary of feedback received during the consultation in the coming days. The code will also help reinforce Canada’s contributions to ongoing international deliberations on proposals to address common risks encountered with large-scale deployment of generative AI, including at the G7 and among like-minded partners.

Quotes

“Advances in AI have captured the world’s attention with the immense opportunities they present. Canada is a global AI leader, among the top countries in the world, and Canadians have created many of the world’s top AI innovations. At the same time, Canada takes the potential risks of AI seriously. The government is committed to ensuring Canadians can trust AI systems used across the economy, which in turn will accelerate AI adoption. Through our Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, leading Canadian companies will adopt responsible guardrails for advanced generative AI systems in order to build safety and trust as the technology spreads. We will continue to ensure Canada’s AI policies are fit for purpose in a fast-changing world.”
– The Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry

“We are very pleased to see the Canadian government taking a strong leadership role in building a regulatory framework that will help society maximize the benefits of AI, while addressing the many legitimate concerns that exist. It is essential that we, as an industry, address key issues like bias and ensure that humans maintain a clear role in oversight and monitoring of this incredibly exciting technology.”
– Aidan Gomez, CEO and Co-founder, Cohere

“AI technologies represent immense opportunities for every citizen and business in Canada. The societal impacts of AI are profound across education, biotech, climate and the very nature of work. Canada’s AI Code of Conduct will help accelerate innovation and citizen adoption by setting the standard on how to do it best. As Canada’s largest software company, we are honoured to partner with Minister Champagne and the Government of Canada in supporting this important step forward.”
– Mark J. Barrenechea, CEO and CTO, OpenText

“CCI has been calling for Canada to take a leadership role on AI regulation, and this should be done in the spirit of collaboration between government and industry leaders. The AI Code of Conduct is a meaningful step in the right direction and marks the beginning of an ongoing conversation about how to build a policy ecosystem for AI that fosters public trust and creates the conditions for success among Canadian companies. The global landscape for artificial intelligence regulation and adoption will evolve, and we are optimistic to see future collaboration to adapt to the emerging technological reality.”
– Benjamin Bergen, President, Council of Canadian Innovators

Quick facts

*The proposed Artificial Intelligence and Data Act (AIDA), part of Bill C-27, is designed to promote the responsible design, development and use of AI systems in Canada’s private sector, with a focus on systems with the greatest impact on health, safety and human rights (high-impact systems).

*Since the introduction of the bill, the government has engaged extensively with stakeholders on AIDA and will continue to seek the advice of Canadians, experts—including the government’s Advisory Council on AI—and international partners on the novel challenges posed by generative AI, as outlined in the Artificial Intelligence and Data Act (AIDA) – Companion document.

*Bill C-27 was adopted at second reading in the House of Commons in April 2023 and was referred to the House of Commons Standing Committee on Industry and Technology for study.

You can read more about Canada’s regulation efforts (Bill C-27) and some of the critiques in my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

For now, the “Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems” can be found on this ISED September 2023 webpage.

Other Canadian AI policy bits and bobs

Back in 2016, shiny new Prime Minister Justin Trudeau announced the Pan-Canadian Artificial Intelligence Strategy (you can find out more about the strategy (Pillar 1: Commercialization) from this ISED Pan-Canadian Artificial Intelligence Strategy webpage, which was last updated July 20, 2022).

More recently, the Canadian Institute for Advanced Research (CIFAR), a prominent player in the Pan-Canadian AI strategy, published a report about regulating AI, from a November 21, 2023 CIFAR news release by Kathleen Sandusky, Note: Links have been removed,

New report from the CIFAR AI Insights Policy Briefs series cautions that current efforts to regulate AI are doomed to fail if they ignore a crucial aspect: the transformative impact of AI on regulatory processes themselves.

As rapid advances in artificial intelligence (AI) continue to reshape our world, global legislators and policy experts are working full-tilt to regulate this transformative technology. A new report, part of the CIFAR AI Insights Policy Briefs series, provides novel tools and strategies for a new way of thinking about regulation.

“Regulatory Transformation in the Age of AI” was authored by members of the Schwartz Reisman Institute for Technology and Society at the University of Toronto: Director and Chair Gillian Hadfield, who is also a Canada CIFAR AI Chair at the Vector Institute; Policy Researcher Jamie Amarat Sandhu; and Graduate Affiliate Noam Kolt.

The report challenges the current regulatory focus, arguing that the standard “harms paradigm” of regulating AI is necessary but incomplete. For example, current car safety regulations were not developed to address the advent of autonomous vehicles. In this way, the introduction of AI into vehicles has made some existing car safety regulations inefficient or irrelevant.

Through three Canadian case studies—in healthcare, financial services, and nuclear energy—the report illustrates some of the ways in which the targets and tools of regulation could be reconsidered for a world increasingly shaped by AI.

The brief proposes a novel concept—Regulatory Impacts Analysis (RIA)—as a means to evaluate the impact of AI on regulatory regimes. RIA aims to assess the likely impact of AI on regulatory targets and tools, helping policymakers adapt governance institutions to the changing conditions brought about by AI. The authors provide a real-world adaptable tool—a sample questionnaire—for policymakers to identify potential gaps in their domain as AI becomes more prevalent.

This report also highlights the need for a comprehensive regulatory approach that goes beyond mitigating immediate harms, recognizing AI as a “general-purpose technology” with far-reaching implications, including on the very act of regulation itself.

As AI is expected to play a pivotal role in the global economy, the authors emphasize the need for regulators to go beyond traditional approaches. The evolving landscape requires a more flexible and adaptive playbook, with tools like RIA helping to shape strategies to harness the benefits of AI, address associated risks, and prepare for the technology’s transformative impact.

You can find CIFAR’s November 2023 report, “Regulatory Transformation in the Age of AI” (PDF) here.

I have two more AI bits and these concern provincial AI policies, one from Ontario and the other from British Columbia (BC).

Stay tuned, there will be more about AI policy throughout 2024.

Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!

It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that this isn’t new. First come the ‘non-human authors,’ and then the panic(s). *What follows the ‘non-human authors’ section is essentially a survey of the situation/panic.*

How to handle non-human authors (ChatGPT and other AI agents)—the medical edition

The first time I wrote about the incursion of robots or artificial intelligence into the field of writing was in a July 16, 2014 posting titled “Writing and AI or is a robot writing this blog?” ChatGPT’s precursor, GPT-2, first made its way onto this blog in a February 18, 2019 posting titled “AI (artificial intelligence) text generator, too dangerous to release?”

The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

This is a link to and a citation for the JAMA editorial,

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge by Annette Flanagin, Kirsten Bibbins-Domingo, Michael Berkwits, Stacy L. Christiansen. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

The editorial appears to be open access.

ChatGPT in the field of education

Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society and founder of the ASU Future of Being Human initiative and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,

Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.

We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.

To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.

Dr. Maynard goes on to offer the FAQ/practical guide here. Prior to issuing the ‘guide’, he wrote a December 8, 2022 essay on Medium titled “I asked Open AI’s ChatGPT about responsible innovation. This is what I got.”

Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,

ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.

Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.

Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.

Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …

Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.

More than writing: emergent behaviour

The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,

What movie do these emojis describe?

That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.
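
Ornes’s description of the models’ “one directive” — accept text, predict what comes next, over and over, based purely on statistics — can be made concrete with a toy example. Here’s a minimal bigram sketch of my own in Python; real LLMs replace the count table with a neural network holding billions of parameters, but the generation loop has the same shape,

import random
from collections import defaultdict

# Toy illustration of "predict what comes next, over and over, based purely
# on statistics": count which word follows which in a tiny corpus, then
# generate by repeatedly sampling an observed next word.
corpus = "the movie is a movie about a man who is a man".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample next token from the statistics
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the movie is a man who is a movie"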

If you have time, I recommend reading Ornes’s March 6, 2023 article.

The panic

Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,

Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI

Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”

Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing. 

Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.

He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was  incorporated and sold to Google for $44 million.

Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.

There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,

There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.

Nowadays, he’s not so sure.

“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”

For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.

Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”

But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes. 

Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good. 

“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.

“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”

Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”

“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.

He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.

“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.

Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,

As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.

Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.

Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms. 

“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”

“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”

So when is all this happening?

“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].

While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.

But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.

The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.

As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.

Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.

“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.”  The estimate for 2030 is more than $2 trillion. 

This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.

And that was just this week.

“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”

Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”
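
For what it’s worth, the Fortune Business Insights figures Pittis quotes imply a brisk compound growth rate. Here’s the back-of-the-envelope arithmetic,

# Implied compound annual growth rate from the quoted figures:
# $515.31B (2023) growing to roughly $2,000B (2030), i.e. seven years.
start, end, years = 515.31, 2000.0, 7

cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # ~21.4% annually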

Hinton has attracted some criticism himself. Wilfred Chan writing for Fast Company has two articles, “‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts” on May 5, 2023, Note: Links have been removed,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …

… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]

Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).

Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”

The last two existential AI panics

The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.

Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,

Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]

The Centre for the Study of Existential Risk can be found here online (it is located at the University of Cambridge). Interestingly, Hinton, who was born in December 1947, will be giving a lecture, “Digital versus biological intelligence: Reasons for concern about AI,” in Cambridge on May 25, 2023.

The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”

Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.

Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.

Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.

Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.

To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.

The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.

The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,

The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”

It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.

IEEE members have expressed a similar diversity of opinions.

There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.

As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress to suggest that AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.

You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.

Finally (but not quite)

Puzzling, isn’t it? I’m not sure we’re asking the right questions, but it’s encouraging to see that at least some are being asked.

Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,

The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.

Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.

It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.

Questioning doesn’t mean rejecting

Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.

In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.

The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.

Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.

In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.

In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.

In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”

Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.

I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.

Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.

I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.

In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”

The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.

All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.

The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)

If you live in Vancouver (Canada) and are attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,

If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.

On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.

The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.

Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.

Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts. 

This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,

Event Speakers

Max Sills
General Counsel at Midjourney

From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF). Midjourney is a generative artificial intelligence program and service created and hosted by Midjourney, Inc., a San Francisco-based independent research lab. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses a Discord server to deliver the service and, with 15M+ members, it is the biggest Discord server in the world. In the two-things-at-once department, Max Sills is also known as the owner of Open Advisory Services, a firm set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block and ex-general manager of the Crypto Open Patent Alliance. Prior to that, Max led Google’s open source legal group for 7 years.

So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to create images. According to the company’s Wikipedia entry, it is being sued, Note: Links have been removed,

On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]

My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.

As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and the organizers of Vancouver’s Multiplatform AI conference. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context; his larger issue is about proposals for legislation; Note 2: Links have been removed),

Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.

For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”

Addendum (June 1, 2023)

Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. It was far briefer than the previous March 2023 warning. From the Center for AI Safety’s “Statement on AI Risk” webpage,

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …

Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.

TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.

“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.

“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.

“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”

Fear, after all, is a powerful sales tool.

Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.

*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.