
Six months after the first one at Bletchley Park, the 2nd AI Safety Summit (May 21-22, 2024) convenes in Korea

This May 20, 2024 University of Oxford press release (also on EurekAlert) was under embargo until almost noon on May 20, 2024, which is a bit unusual in my experience (Note: I have more about the 1st summit and the interest in AI safety at the end of this posting),

Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago. 

At that summit, the world’s leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May [2024]) approaches, twenty-five of the world’s leading AI scientists say not enough is actually being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies. 

Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”

World’s response not on track in the face of potentially rapid AI progress

According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems—outperforming human abilities across many critical domains—will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts. 

Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.

World-leading AI experts issue call to action

In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, and the late Daniel Kahneman; in total, 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, the EU, the UK, and other AI powers, and include Turing Award winners, Nobel laureates, and authors of standard AI textbooks.

This paper marks the first time that such a large and international group of experts has agreed on priorities for global policymakers regarding the risks from advanced AI systems.

Urgent priorities for AI governance

The authors recommend that governments:

  • establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
  • mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
  • require AI companies to prioritise safety, and to demonstrate their systems cannot cause harm. This includes using “safety cases” (used for other safety-critical technologies such as aviation), which shift the burden of demonstrating safety onto AI developers.
  • implement mitigation standards commensurate with the risk levels posed by AI systems. An urgent priority is to put in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.

According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.

AI impacts could be catastrophic

AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.

Stuart Russell OBE [Order of the British Empire], Professor of Computer Science at the University of California at Berkeley and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations—that ‘regulation stifles innovation.’ That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

Notable co-authors:

  • The world’s most-cited computer scientist (Prof. Geoffrey Hinton) and the most-cited scholar in AI security and privacy (Prof. Dawn Song)
  • China’s first Turing Award winner (Andrew Yao)
  • The authors of the standard textbooks on artificial intelligence (Prof. Stuart Russell) and machine learning theory (Prof. Shai Shalev-Shwartz)
  • One of the world’s most influential public intellectuals (Prof. Yuval Noah Harari)
  • A Nobel laureate in economics and the world’s most-cited economist (Prof. Daniel Kahneman)
  • Department-leading AI legal scholars and social scientists (Lan Xue, Qiqi Gao, and Gillian Hadfield)
  • Some of the world’s most renowned AI researchers from subfields such as reinforcement learning (Pieter Abbeel, Jeff Clune, Anca Dragan), AI security and privacy (Dawn Song), AI vision (Trevor Darrell, Phil Torr, Ya-Qin Zhang), automated machine learning (Frank Hutter), and several researchers in AI safety

Additional quotes from the authors:

Philip Torr, Professor in AI, University of Oxford:

  • I believe if we tread carefully the benefits of AI will outweigh the downsides, but for me one of the biggest immediate risks is that governments and industry develop the ability to rapidly process data and control society. We could risk slipping into some Orwellian future, with some form of totalitarian state having complete control.

Dawn Song, Professor in AI at UC Berkeley and most-cited researcher in AI security and privacy:

  • “Explosive AI advancement is the biggest opportunity and at the same time the biggest risk for mankind. It is important to unite and reorient towards advancing AI responsibly, with dedicated resources and priority to ensure that the development of AI safety and risk mitigation capabilities can keep up with the pace of the development of AI capabilities and avoid any catastrophe.”

Yuval Noah Harari, Professor of History at the Hebrew University of Jerusalem, best-selling author of ‘Sapiens’ and ‘Homo Deus’, and world-leading public intellectual:

  • “In developing AI, humanity is creating something more powerful than itself, that may escape our control and endanger the survival of our species. Instead of uniting against this shared threat, we humans are fighting among ourselves. Humankind seems hell-bent on self-destruction. We pride ourselves on being the smartest animals on the planet. It seems then that evolution is switching from survival of the fittest, to extinction of the smartest.”

Jeff Clune, Professor in AI at the University of British Columbia and one of the leading researchers in reinforcement learning:

  • “Technologies like spaceflight, nuclear weapons and the Internet moved from science fiction to reality in a matter of years. AI is no different. We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off.”
  • “The risks we describe are not necessarily long-term risks. AI is progressing extremely rapidly. Even just with current trends, it is difficult to predict how capable it will be in 2-3 years. But what very few realize is that AI is already dramatically speeding up AI development. What happens if there is a breakthrough for how to create a rapidly self-improving AI system? We are now in an era where that could happen any month. Moreover, the odds of that being possible go up each month as AI improves and as the resources we invest in improving AI continue to exponentially increase.”

Gillian Hadfield, CIFAR AI Chair and Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto:

  • “AI labs need to walk the walk when it comes to safety. But they’re spending far less on safety than they spend on creating more capable AI systems. Spending one-third on ensuring safety and ethical use should be the minimum.”

  • “This technology is powerful, and we’ve seen it is becoming more powerful, fast. What is powerful is dangerous, unless it is controlled. That is why we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to safety and ethical use, comparable to their funding for AI capabilities.”  

Sheila McIlraith, Professor in AI, University of Toronto, Vector Institute:

  • AI is software. Its reach is global and its governance needs to be as well.
  • Just as we’ve done with nuclear power, aviation, and biological and nuclear weaponry, countries must establish agreements that restrict the development and use of AI, and that enforce information sharing to monitor compliance. Countries must unite for the greater good of humanity.
  • Now is the time to act, before AI is integrated into our critical infrastructure. We need to protect and preserve the institutions that serve as the foundation of modern society.

Frank Hutter, Professor in AI at the University of Freiburg, Head of the ELLIS Unit Freiburg, 3x ERC grantee:

  • To be clear: we need more research on AI, not less. But we need to focus our efforts on making this technology safe. For industry, the right type of regulation will provide economic incentives to shift resources from making the most capable systems yet more powerful to making them safer. For academia, we need more public funding for trustworthy AI and a low barrier to entry for research on less capable open-source AI systems. This is the most important research challenge of our time, and the right mechanism design will focus the community at large on working towards the right breakthroughs.

Here’s a link to and a citation for the paper,

Managing extreme AI risks amid rapid progress: Preparation requires technical research and development, as well as adaptive, proactive governance by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Science, 20 May 2024 (First Release). DOI: 10.1126/science.adn0117

This paper appears to be open access.

For anyone who’s curious about the buildup to these safety summits, I have more in my October 18, 2023 “AI safety talks at Bletchley Park in November 2023” posting, which features excerpts from a number of articles on AI safety. There’s also my November 2, 2023 “UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes” posting, which offers excerpts from articles critiquing the AI safety summit.

Is technology taking our jobs? (a Women in Communications and Technology, BC Chapter event) and Brave New Work in Vancouver (Canada)

Awkwardly named as it is, the Women in Communications and Technology BC Chapter (WCTBC) has been reinvigorated after a moribund period (from a Feb. 21, 2018 posting by Rebecca Bollwitt for the Miss 604 blog),

There’s an exciting new organization and event series coming to Vancouver, which will aim to connect, inspire, and advance women in the communications and technology industries. I’m honoured to be on the Board of Directors for the newly rebooted Women in Communications and Technology, BC Chapter (“WCTBC”) and we’re ready to announce our first event!

Women in Debate: Is Technology Taking Our Jobs?

When: Tuesday, March 6, 2018 at 5:30pm
Where: BLG – 200 Burrard, 1200 Waterfront Centre, Vancouver
Tickets: Register online today. The cost is $25 for WCT members and $35 for non-members.

Automation, driven by technological progress, has been expanding for the past several decades. As the pace of development increases, so does the urgency of the debate about the potential effects of automation on jobs, employment, and human activity. Will new technology spawn mass unemployment, as the robots take jobs away from humans? Or is this part of a cycle that predates even the Industrial Revolution, in which some jobs become obsolete while new jobs are created?

Debaters:
Christin Wiedemann – Co-CEO, PQA Testing
Kathy Gibson – President, Catchy Consulting
Laura Sukorokoff – Senior Trainer & Communications, Hyperwallet
Sally Whitehead – Global Director, Sophos

Based on the Oxford-style debates popularized by the podcast ‘Intelligence Squared’, the BC chapter of Women in Communications and Technology brings you Women in Debate: Is Technology Taking Our Jobs?

For anyone not familiar with “Intelligence Squared,” there’s this from their About webpage,

Intelligence Squared is the world’s premier forum for debate and intelligent discussion. Live and online we take you to the heart of the issues that matter, in the company of some of the world’s sharpest minds and most exciting orators.

Intelligence Squared Live

Our events have captured the imagination of public audiences for more than a decade, welcoming the biggest names in politics, journalism and the arts. Our celebrated list of speakers includes President Jimmy Carter, Stephen Fry, Patti Smith, Richard Dawkins, Sean Penn, Marina Abramovic, Werner Herzog, Terry Gilliam, Anne Marie Slaughter, Reverend Jesse Jackson, Mary Beard, Yuval Noah Harari, Jonathan Franzen, Salman Rushdie, Eric Schmidt, Richard Branson, Professor Brian Cox, Nate Silver, Umberto Eco, Martin Amis and Grayson Perry.

Further digging into WCTBC unearthed this story about the reasons for its ‘reboot’, from the Who we are / Regional Chapters / British Columbia webpage,

“Earlier this month [October 2017?], Christin Wiedemann and Briana Sim, co-Chairs of the BC Chapter of WCT, attended a Women in IoT [Internet of Things] event in Vancouver. The event was organized by the GE Women’s Network and TELUS Connections, with WCT as an event partner. The event sold out after only two days, and close to 200 women attended.

Five female panelists representing different backgrounds and industries talked about the impact IoT is having on our lives today, and how they think IoT fits into the future of the technology landscape. Christin facilitated the Q&A portion of the event, and had an opportunity to share that the BC chapter is rebooting and hopes to launch a kickoff event later in November.

You can find a summary of the event here (http://gereports.ca/theres-lots-room-us-top-insights-five-canadas-top-women-business-leaders-iot/#), and you can also check out the Storify (https://storify.com/cwiedemann/women-in-iot).”

– October 6th, 2017

Simon Fraser University’s Brave New Work

Coincidentally or not, Simon Fraser University’s (SFU; located in Vancouver, British Columbia, Canada) Public Square Programme is offering a major series of events, its 2018 Community Summit titled ‘Brave New Work: How can we thrive in the changing world of work?’, which takes place from February 26 to March 7, 2018.

There’s not a single mention (!!!!!) of Brave New World (by Aldous Huxley) in what is clearly wordplay on the title of Huxley’s book.

From the 2018 Community Summit: Brave New Work webpage on the SFU website (Note: Links have been removed),

How can we thrive in the changing world of work?

The 2018 Community Summit, Brave New Work, invites us to consider how we can all thrive in the changing world of work.

Technological growth is happening at an unprecedented rate and scale, and it is fundamentally altering the way we organize and value work. The work we do (and how we do it) is changing. One of the biggest challenges in effectively responding to this new world of work is creating a shared understanding of the issues at play and how they intersect. Individuals, businesses, governments, educational institutions, and civil society must collaborate to construct the future we want.

The future of work is here, but it’s still ours to define. From February 26th to March 7th, we will convene diverse communities through a range of events and activities to provoke thinking and encourage solution-finding. We hope you’ll join us.

The New World of Work: Thriving or Surviving?

As part of its 2018 Community Summit, Brave New Work, SFU Public Square is proud to present, in partnership with Vancity, an evening with Van Jones and Anne-Marie Slaughter, moderated by CBC’s Laura Lynch at the Queen Elizabeth Theatre.

Van Jones and Anne-Marie Slaughter, two leading commentators on the American economy, will discuss the role that citizens, governments and civil society can play in shaping the future of work. They will explore the challenges ahead, as well as how these challenges might be addressed through green jobs, emergent industries, education and public policy.

Join us for an important conversation about how the future of work can be made to work for all of us.

Are you a member of Vancity? As one of the many perks of being a Vancity member, you have access to a free ticket to attend the event. For your free ticket, please visit Vancity for more information. There are a limited number of seats reserved for Vancity members, so we encourage you to register early.

Tickets are now on sale, get yours today!

Future of Work in Canada: Emerging Trends and Opportunities

What are some of the trends currently defining the new world of work in Canada, and what does our future look like? What opportunities can be seized to build more competitive, prosperous, and inclusive organizations? This mini-conference, presented in partnership with Deloitte Canada, will feature panel discussions and presentations by representatives from Deloitte, Brookfield Institute for Innovation & Entrepreneurship, Vancity, Futurpreneur, and many more.

Work in the 21st Century: Innovations in Research

Research doesn’t just live in libraries and academic papers; it has a profound impact on our day-to-day lives. Work in the 21st Century is a dynamic evening showcasing the SFU researchers and entrepreneurs who are making innovative impacts in the new world of work.

Basic Income

This lecture will examine the question of basic income (BI). A neoliberal version of BI is being considered and even developed by a number of governments and institutions of global capitalism. This form of BI could enhance the supply of low-wage precarious workers by offering a public subsidy to employers, paid for by cuts to other areas of social provision.

ReframeWork

ReframeWork is a national gathering of leading thinkers and innovators on the topic of the future of work. We will explore how Canada can lead in forming new systems for good work and identify the richest areas of opportunity for solution-building that effects broader change.

The Urban Worker Project Skillshare

The Urban Worker Project Skillshare is a day-long gathering, bringing together over 150 independent workers to lean on each other, learn from each other, get valuable expert advice, and build community. Join us!

SFU City Conversations: Making Visible the Invisible

Are outdated and stereotypical gender roles contributing to the invisible workload? What is the invisible workload anyway? Don’t miss this special edition of SFU City Conversations on intersectionality and invisible labour, presented in partnership with the Simon Fraser Student Society Women’s Centre.

Climate of Work: How Does Climate Change Affect the Future of Work?

What does our changing climate have to do with the future of work? Join Embark as they explore the ways our climate impacts different industries such as planning, communications or entrepreneurship.

Symposium: Art, Labour, and the Future of Work

One of the key distinguishing features of Western modernity is that the activity of labour has always been at the heart of our self-understanding. Work defines who we are. But what might we do in a world without work? Join SFU’s Institute for the Humanities for a symposium on art, aesthetics, and self-understanding.

Worker Writers and the Poetics of Labour

If you gave a worker a pen, what would they write? What stories would they tell, and what experiences might they share? Hear poetry about what it is to work in the 21st century directly from participants of the Worker Writers School at this free public poetry reading.

Creating a Diverse and Resilient Economy in Metro Vancouver

This panel conversation event will focus on the future of employment in Metro Vancouver, and planning for the employment lands that support the regional economy. What are the trends and issues related to employment in various sectors in Metro Vancouver, and how does land use planning, regulation, and market demand affect the future of work regionally?

Preparing Students for the Future World of Work

This event, hosted by CACEE Canada West and SFU Career and Volunteer Services, will feature presentations and discussions on how post-secondary institutions can prepare students for the future of work.

Work and Purpose Later in Life

How is the changing world of work affecting older adults? And what role should work play in our lives, anyway? This special Philosophers’ Cafe will address questions of retirement, purpose, and work for older adults.

Beyond Bitcoin: Blockchain and the Future of Work

Blockchain technology is making headlines. Whether you’re an enthusiast or a skeptic, this dialogue will focus on building a better understanding of key concepts and on exploring the wide-ranging applications of distributed ledgers and their implications for business here in BC and in the global economy.

Building Your Resilience

Being a university student can be stressful. This interactive event will share key strategies for enhancing your resilience and well-being that will support your success now and in your future career.

We may not be working because of robots (no mention of automation in the SFU descriptions?), but we sure will talk about work-related topics. Sarcasm aside, it’s good to see this interest in work and in public discussion, although I’m deeply puzzled by SFU’s decision to seemingly ignore technology, except for blockchain. Thank goodness for WCTBC. At any rate, I’m often somewhat envious of what goes on elsewhere, so it’s nice to see this level of excitement and effort here in Vancouver.