The Global Challenges Foundation recently released a report listing 12 global risks (from the Global Challenges: 12 Risks that threaten human civilisation report webpage),
This report has, to the best of the authors’ knowledge, created the first list of global risks with impacts that for all practical purposes can be called infinite. It is also the first structured overview of key events related to such risks and has tried to provide initial rough quantifications for the probabilities of these impacts.
With such a focus it may surprise some readers to find that the report’s essential aim is to inspire action and dialogue as well as an increased use of the methodologies used for risk assessment.
The real focus is not on the almost unimaginable impacts of the risks the report outlines. Its fundamental purpose is to encourage global collaboration and to use this new category of risk as a driver for innovation.
The 12 global risks that threaten human civilisation are:
1. Extreme Climate Change
2. Nuclear War
3. Ecological Catastrophe
4. Global Pandemic
5. Global System Collapse
6. Major Asteroid Impact
7. Supervolcano
8. Synthetic Biology
9. Nanotechnology
10. Artificial Intelligence
11. Uncertain Risks
12. Future Bad Global Governance
The report is fairly new, having been published in February 2015. Here’s a summary of the nanotechnology and artificial intelligence risks from the report’s executive summary,
Atomically precise manufacturing, the creation of effective, high-throughput manufacturing processes that operate at the atomic or molecular level. It could create new products – such as smart or extremely resilient materials – and would allow many different groups or even individuals to manufacture a wide range of things. This could lead to the easy construction of large arsenals of conventional or more novel weapons made possible by atomically precise manufacturing.

AI is the intelligence exhibited by machines or software, and the branch of computer science that develops machines and software with human-level intelligence. The field is often defined as “the study and design of intelligent agents”, systems that perceive their environment and act to maximise their chances of success. Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations.
Of particular relevance is whether nanotechnology allows the construction of nuclear bombs. But many of the world’s current problems may be solvable with the manufacturing possibilities that nanotechnology would offer, such as depletion of natural resources, pollution, climate change, clean water and even poverty. Some have conjectured special self-replicating nanomachines which would be engineered to consume the entire environment. [grey goo and/or green goo scenarios; emphasis mine] The misuse of medical nanotechnology is another risk scenario. [p. 18 print version; p. 20 PDF]
I was a bit surprised to see the ‘goo’ scenarios referenced, since Eric Drexler, one of the participants and the person who first posited the ‘grey goo’ scenario (a green goo scenario was subsequently theorized by Robert Freitas), has long tried to dissociate himself from it.
The report lists the academics and experts (including Drexler) who helped produce it,
Dr Nick Beckstead, Research Fellow, Future of Humanity Institute, Oxford Martin School & Faculty of Philosophy, University of Oxford
Kennette Benedict, Executive Director and Publisher of the Bulletin of the Atomic Scientists
Oliver Bettis, Pricing Actuary, Munich RE and Fellow of the Chartered Insurance Institute and the Institute & Faculty of Actuaries
Dr Eric Drexler, Academic Visitor, Future of Humanity Institute, Oxford Martin School & Faculty of Philosophy, University of Oxford [emphasis mine]
Madeleine Enarsson, Transformative Catalyst, 21st Century Frontiers
Pan Jiahua, Director of the Institute for Urban and Environmental Studies, Chinese Academy of Social Sciences (CASS); Professor of economics at CASS; Vice-President Chinese Society for Ecological Economics; Member of the National Expert Panel on Climate Change and National Foreign Policy Advisory Committee, China
Jennifer Morgan, Founder & Co-Convener, The Finance Lab
James Martin Research Fellow, Future of Humanity Institute, Oxford Martin School & Faculty of Philosophy, University of Oxford
Andrew Simms, Author, Fellow at the New Economics Foundation and Chief Analyst at Global Witness
Nathan Wolfe, Director of Global Viral and the Lorry I. Lokey Visiting Professor in Human Biology at Stanford University
Liang Yin, Investment Consultant at Towers Watson [p. 1 print version; p. 3 PDF]
While I don’t recognize any names other than Drexler’s, it’s an interesting list, albeit with a preponderance of individuals associated with the University of Oxford.
The Feb. 16, 2015 Global Challenges Foundation press release announcing the risk report includes a brief description of the foundation and, I gather, a sister organization at Oxford University,
About the Global Challenges Foundation
The Global Challenges Foundation works to raise awareness of the greatest threats facing humanity and how these threats are linked to poverty and the rapid growth in global population. The Global Challenges Foundation was founded in 2011 by investor László Szombatfalvy.
About Oxford University’s Future of Humanity Institute
The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford. It enables a select set of leading intellectuals to bring the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects. The Institute belongs to the Faculty of Philosophy and is affiliated with the Oxford Martin School.
The report is 212 pp. (PDF). Happy reading!