Tag Archives: Leverhulme Centre for the Future of Intelligence (CFI)

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A software-focused approach?

This year (2024) has seen a rise in legislative and proposed legislative activity, and I have excerpts from articles about a few of these efforts. China was the first to enact regulations of any kind on AI, according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 update of the European Parliament’s “EU AI Act: first regulation on artificial intelligence” article (Note: Links have been removed),

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software. According to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While my January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” it also includes information about legislative efforts. My May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US must always be considered in these matters. I have a November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website, where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”
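Those growth figures invite a quick sanity check of my own (the report doesn’t spell out this arithmetic, so treat it as back-of-the-envelope figuring). Doubling every six months for thirteen years means 26 doublings, while the stated 350-million-fold increase implies slightly faster growth:

\[
2^{13/0.5} = 2^{26} \approx 6.7 \times 10^{7}, \qquad 3.5 \times 10^{8} \approx 2^{28.4} \;\Rightarrow\; T_{\text{double}} \approx \tfrac{13}{28.4}\ \text{yr} \approx 5.5\ \text{months}.
\]

In other words, a strict six-month doubling yields a factor of about 67 million over thirteen years; the 350-million figure corresponds to a doubling time closer to five and a half months, which is consistent with the press release’s “around every six months.”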
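As for the “start switch” mentioned earlier, here’s a minimal sketch, in Python, of the k-of-n consent logic it implies. To be clear, everything in it (the party names, the three-of-four threshold, the compute cut-off) is my own illustrative assumption; the report stays at the policy level and envisions the mechanism being enforced cryptographically in the hardware itself, not in application code.

from dataclasses import dataclass, field

@dataclass
class TrainingRunRequest:
    """A hypothetical request to unlock compute for a large training run."""
    run_id: str
    requested_flops: float                        # total compute requested
    approvals: set = field(default_factory=set)   # parties who have signed off

# Illustrative signatories; the report only says "multiple parties"
AUTHORIZED_PARTIES = {"regulator", "cloud_provider", "chip_vendor", "auditor"}
REQUIRED_APPROVALS = 3          # hypothetical k-of-n threshold
REVIEW_THRESHOLD_FLOPS = 1e26   # hypothetical cut-off for a "risky" run

def approve(request: TrainingRunRequest, party: str) -> None:
    """Record one party's consent; unknown parties are rejected."""
    if party not in AUTHORIZED_PARTIES:
        raise ValueError(f"{party} is not an authorized signatory")
    request.approvals.add(party)

def may_start(request: TrainingRunRequest) -> bool:
    """Small runs proceed freely; risky runs need k-of-n consent."""
    if request.requested_flops < REVIEW_THRESHOLD_FLOPS:
        return True
    return len(request.approvals) >= REQUIRED_APPROVALS

# A large run stays locked until three of the four parties consent.
run = TrainingRunRequest(run_id="run-001", requested_flops=5e26)
approve(run, "regulator")
approve(run, "cloud_provider")
assert not may_start(run)   # only two approvals so far
approve(run, "auditor")
assert may_start(run)       # threshold reached; compute unlocks

Note that the nuclear analogy holds up: this is the same multiple-consent logic as the permissive action links used for nuclear weapons, and small runs pass through freely, echoing the report’s suggestion to exclude small-scale computing from regulation.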

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the University of Cambridge’s Centre for the Study of Existential Risk (CSER) website.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

Congratulations! Noēma magazine’s first year anniversary

Apparently, I am an idiot—if the folks at Expunct and other organizations passionately devoted to their own viewpoints are to be believed.

To be specific, the Berggruen Institute (which publishes Noēma magazine) has attracted remarkably sharp criticism and, by implication, that criticism seems to extend to anyone examining, listening to, or reading the institute’s various communication efforts.

Perhaps you’d like to judge the quality of the ideas for yourself?

About the Institute and about the magazine

The institute is a think tank founded in 2010 by Nicolas Berggruen, a US-based billionaire investor and philanthropist, and Nathan Gardels, journalist and editor-in-chief of Noēma magazine. Before moving on to the magazine’s first anniversary, here’s more about the Institute from its About webpage,

Ideas for a Changing World

We live in a time of great transformations. From capitalism, to democracy, to the global order, our institutions are faltering. The very meaning of the human is fragmenting.

The Berggruen Institute was established in 2010 to develop foundational ideas about how to reshape political and social institutions in the face of these great transformations. We work across cultures, disciplines and political boundaries, engaging great thinkers to develop and promote long-term answers to the biggest challenges of the 21st Century.

As for the magazine, here’s more from the About Us webpage (Note: I have rearranged the paragraph order),

In ancient Greek, noēma means “thinking” or the “object of thought.” And that is our intention: to delve deeply into the critical issues transforming the world today, at length and with historical context, in order to illuminate new pathways of thought in a way not possible through the immediacy of daily media. In this era of accelerated social change, there is a dire need for new ideas and paradigms to frame the world we are moving into.

Noema is a magazine exploring the transformations sweeping our world. We publish essays, interviews, reportage, videos and art on the overlapping realms of philosophy, governance, geopolitics, economics, technology and culture. In doing so, our unique approach is to get out of the usual lanes and cross disciplines, social silos and cultural boundaries. From artificial intelligence and the climate crisis to the future of democracy and capitalism, Noema Magazine seeks a deeper understanding of the most pressing challenges of the 21st century.

Published online and in print by the Berggruen Institute, Noema grew out of a previous publication called The WorldPost, which was first a partnership with HuffPost and later with The Washington Post. Noema publishes thoughtful, rigorous, adventurous pieces by voices from both inside and outside the institute. While committed to using journalism to help build a more sustainable and equitable world, we do not promote any particular set of national, economic or partisan interests.

First anniversary

Noēma’s anniversary is being marked by its second paper publication (the first was produced for the magazine’s launch). From a July 1, 2021 announcement received via email,

June 2021 marked one year since the launch of Noema Magazine, a crucial milestone for the new publication focused on exploring and amplifying transformative ideas. Noema is working to attract audiences through longform perspectives and contemporary artwork that weave together threads in philosophy, governance, geopolitics, economics, technology, and culture.

“What began more than seven years ago as a news-driven global voices platform for The Huffington Post known as The WorldPost, and later in partnership with The Washington Post, has been reimagined,” said Nathan Gardels, editor-in-chief of Noema. “It has evolved into a platform for expansive ideas through a visual lens, and a timely and provocative portal to plumb the deeper issues behind present events.”

The magazine’s editorial board, involved in the genesis and as content drivers of the magazine, includes Orhan Pamuk, Arianna Huffington, Fareed Zakaria, Reid Hoffman, Dambisa Moyo, Walter Isaacson, Pico Iyer, and Elif Shafak. Pieces by thinkers cracking the calcifications of intellectual domains include, among many others:

·      Francis Fukuyama on the future of the nation-state

·      A collage of commentary on COVID with Yuval Harari and Jared Diamond 

·      An interview with economist Mariana Mazzucato on “mission-oriented government”

·      Taiwan’s Digital Minister Audrey Tang on digital democracy

·      Hedge-fund giant Ray Dalio in conversation with Nobel laureate Joe Stiglitz

·      Shannon Vallor on how AI is making us less intelligent and more artificial

·      Former Governor Jerry Brown in conversation with Stewart Brand 

·      Ecologist Suzanne Simard on the intelligence of forest ecosystems

·      A discussion on protecting the biosphere with Bill Gates’s guru Vaclav Smil 

·      An original story by Chinese science-fiction writer Hao Jingfang

Noema seeks to highlight how the great transformations of the 21st century are reflected in the work of today’s artistic innovators. Most articles are accompanied by an original illustration, melding together an aesthetic experience with ideas in social science and public policy. Among others, in the past year, the magazine has featured work from multimedia artist Pierre Huyghe, illustrator Daniel Martin Diaz, painter Scott Listfield, graphic designer and NFT artist Jonathan Zawada, 3D motion graphics artist Kyle Szostek, illustrator Moonassi, collage artist Lauren Lakin, and aerial photographer Brooke Holm. Additional contributions from artists include Berggruen Fellows Agnieszka Kurant and Anicka Yi discussing how their work explores the myth of the self.

Noema is available online and annually in print; the magazine’s second print issue will be released on July 13, 2021. The theme of this issue is “planetary realism,” which proposes to go beyond the exhausted notions of globalization and geopolitical competition among nation-states to a new “Gaiapolitik.” It addresses the existential challenge of climate change across all borders and recognizes that human civilization is but one part of the ecology of being that encompasses multiple intelligences from microbes to forests to the emergent global exoskeleton of AI and internet connectivity (more on this in the letter from the editors below).

Published by the Berggruen Institute, Noema is an incubator for the Institute’s core ideas, such as “participation without populism,” “pre-distribution” and universal basic capital (vs. income), and the need for dialogue between the U.S. and China to avoid an AI arms race or inadvertent war.

“The world needs divergent thinking on big questions if we’re going to meet the challenges of the 21st century; Noema publishes bold and experimental ideas,” said Kathleen Miles, executive editor of Noema. “The magazine cross-fertilizes ideas across boundaries and explores correspondences among them in order to map out the terrain of the great transformations underway.”  

I notice Suzanne Simard (from the University of British Columbia and author of “Finding the Mother Tree: Discovering the Wisdom of the Forest”) on the list of contributors, along with a story by Chinese science fiction writer Hao Jingfang.

Simard was mentioned here in a May 12, 2021 posting (scroll down to the “UBC forestry professor, Suzanne Simard’s memoir going to the movies?” subhead) when it was announced that her then unpublished memoir would become a film starring Amy Adams (or so they hope).

Hao Jingfang was mentioned here in a November 16, 2020 posting titled: “Telling stories about artificial intelligence (AI) and Chinese science fiction; a Nov. 17, 2020 virtual event” (co-hosted by the Berggruen Institute and University of Cambridge’s Leverhulme Centre for the Future of Intelligence [CFI]).

A month after Noēma’s second paper issue appeared on July 13, 2021, its theme and topics seem especially timely in light of the extensive news coverage, in Canada and many other parts of the world, of the Monday, August 9, 2021 release of the sixth UN climate report, which raises alarms over irreversible impacts. (Emily Chung’s August 12, 2021 analysis for the Canadian Broadcasting Corporation [CBC] offers a little good news for those severely alarmed by the report.) Note: The Intergovernmental Panel on Climate Change (IPCC) is the UN body tasked with assessing the science related to climate change.

Telling stories about artificial intelligence (AI) and Chinese science fiction; a Nov. 17, 2020 virtual event

[downloaded from https://www.berggruen.org/events/ai-narratives-in-contemporary-chinese-science-fiction/]

Exciting news: Chris Eldred of the Berggruen Institute sent this notice (from his Nov. 13, 2020 email)

Renowned science fiction novelists Hao Jingfang, Chen Qiufan, and Wang Yao (Xia Jia) will be featured in a virtual event next Tuesday, and I thought their discussion may be of interest to you and your readers. The event will explore how AI is used in contemporary Chinese science fiction, and the writers’ roundtable will address questions such as: How does Chinese sci-fi literature since the Reform and Opening-Up compare to sci-fi writing in the West? How does the Wandering Earth narrative and Chinese perspectives on home influence ideas about the impact of AI on the future?

Berggruen Fellow Hao Jingfang is an economist by training and an award-winning author (Hugo Award for Best Novelette). This event will be co-hosted with the University of Cambridge Leverhulme Centre for the Future of Intelligence. 

This event will be live streamed on Zoom (agenda and registration link here) on Tuesday, November 17th, from 8:30-11:50 AM GMT / 4:30-7:50 PM CST. Simultaneous English translation will be provided. 

The Berggruen Institute is offering a conversation with authors and researchers about how Chinese science fiction grapples with artificial intelligence (from the Berggruen Institute’s AI Narratives in Contemporary Chinese Science Fiction event page),

AI Narratives in Contemporary Chinese Science Fiction

November 17, 2020

Platform & Language:

Zoom (Chinese and English, with simultaneous translation)

Click here to register.

Discussion points:

1. How does Chinese sci-fi literature since the Reform and Opening-Up compare to sci-fi writing in the West?

2. How does the Wandering Earth narrative and Chinese perspectives on home influence ideas about the impact of AI on the future?

About the Speakers:

WU Yan is a professor and PhD supervisor at the Humanities Center of Southern University of Science and Technology. He is a science fiction writer, vice chairman of the China Science Writers Association, recipient of the Thomas D Clareson Award of the American Science Fiction Research Association, and co-founder of the Xingyun (Nebula) Awards for Global Chinese Science Fiction. He is the author of science fictions such as Adventure of the Soul and The Sixth Day of Life and Death, academic works such as Outline of Science Fiction Literature, and textbooks such as Science and Fantasy – Training Course for Youth Imagination and Scientific Innovation.

Sanfeng is a science fiction researcher, visiting researcher of the Humanities Center of Southern University of Science and Technology, chief researcher of Shenzhen Science & Fantasy Growth Foundation, honorary assistant professor of the University of Hong Kong, Secretary-General of the World Chinese Science Fiction Association, and editor-in-chief of Nebula Science Fiction Review. His research covers the history of Chinese science fiction, development of science fiction industry, science fiction and urban development, science fiction and technological innovation, etc.

About the Event

Keynote 1 “Chinese AI Science Fiction in the Early Period of Reform and Opening-Up (1978-1983)”

(改革开放早期(1978-1983)的中国AI科幻小说)

Abstract: Science fiction on the themes of computers and robots emerged early but in a scattered manner in China. In the stories, the protagonists are largely humanlike assistants chiefly collecting data or doing daily manual labor, and this does not fall in the category of today’s artificial intelligence. Major changes took place after the reform and opening-up in 1978 in this regard. In 1979, the number of robot-themed works ballooned. By 1980, the quality of works also saw a quantum leap, and stories on the nature of artificial intelligence began to appear. At this stage, the AI works such as Spy Case Outside the Pitch, Dulles and Alice, Professor Shalom’s Misconception, and Riot on the Ziwei Island That Shocked the World describe how intelligent robots respond to activities such as adversarial ball games (note that these are not chess games), fully integrate into the daily life of humans, and launch collective riots beyond legal norms under special circumstances. The ideas that the growth of artificial intelligence requires a suitable environment, stable family relationship, social adaptation, etc. are still of important value.

Keynote 2 “Algorithm of the Soul: Narrative of AI in Recent Chinese Science Fiction”

(灵魂的算法:近期中国科幻小说中的AI叙事)

Abstract: As artificial intelligence has been applied to the fields of technology and daily life in the past decade, the AI narrative in Chinese science fiction has also seen seismic changes. On the one hand, young authors are aware that the “soul” of AI comes, to a large extent, from machine learning algorithms. As a result, their works often highlight the existence and implementation of algorithms, bringing maneuverability and credibility to the AI. On the other hand, the authors prefer to focus on the conflicts and contradictions in emotions, ethics, and morality caused by AI that penetrate into human life. If the previous AI-themed science fiction is like a distant robot fable, the recent AI narrative assumes contemporary and practical significance. This report focuses on exploring the AI-themed science fiction by several young authors (including Hao Jingfang’s [emphasis mine] The Problem of Love and Where Are You, Chen Qiufan’s Image Maker and Algorithm for Life, and Xia Jia’s Let’s Have a Talk and Shejiang, Baoshu’s Little Girl and Shuangchimu’s The Cock Prince, etc.) to delve into the breakthroughs and achievements in AI narratives.

Hao Jingfang, one of the authors mentioned in the abstract, is currently a fellow at the Berggruen Institute, and she is scheduled to host the roundtable according to the co-host’s page, the University of Cambridge’s Leverhulme Centre for the Future of Intelligence (CFI) Workshop: AI Narratives in Contemporary Chinese Science Fiction programme description (I’ll try not to include too much repetitive information),

Workshop 2 – November 17, 2020

AI Narratives in Contemporary Chinese Science Fiction

Programme

16:30-16:40 CST (8:30-8:40 GMT)  Introductions

SONG Bing, Vice President, Co-Director, Berggruen Research Center, Peking University

Kanta Dihal, Postdoctoral Researcher, Project Lead on Global Narratives, Leverhulme Centre for the Future of Intelligence, University of Cambridge  

16:40-17:10 CST (8:40-9:10 GMT)  Talk 1 [Chinese AI SciFi and the early period]

17:10-17:40 CST (9:10-9:40 GMT)  Talk 2  [Algorithm of the soul]

17:40-18:10 CST (9:40-10:10 GMT)  Q&A

18:10-18:20 CST (10:10-10:20 GMT) Break

18:20-19:50 CST (10:20-11:50 GMT)  Roundtable Discussion

Host:

HAO Jingfang(郝景芳), author, researcher & Berggruen Fellow

Guests:

Baoshu (宝树), sci-fi and fantasy writer

CHEN Qiufan(陈楸帆), sci-fi writer, screenwriter & translator

Feidao(飞氘), sci-fi writer, Associate Professor in the Department of Chinese Language and Literature at Tsinghua University

WANG Yao(王瑶,pen name “Xia Jia”), sci-fi writer, Associate Professor of Chinese Literature at Xi’an Jiaotong University

Suggested Readings

ABOUT CHINESE [Science] FICTION

“What Makes Chinese Fiction Chinese?”, by Xia Jia and Ken Liu

“The Worst of All Possible Universes and the Best of All Possible Earths: Three-Body and Chinese Science Fiction”, by Cixin Liu, translated by Ken Liu

“Science Fiction in China: 2016 in Review”

SHORT NOVELS ABOUT ROBOTS/AI/ALGORITHM:

“The Robot Who Liked to Tell Tall Tales”, by Feidao, translated by Ken Liu

“Goodnight, Melancholy”, by Xia Jia, translated by Ken Liu

“The Reunion”, by Chen Qiufan, translated by Emily Jin and Ken Liu, MIT Technology Review, December 16, 2018

“Folding Beijing”, by Hao Jingfang, translated by Ken Liu

“Let’s Have a Talk”, by Xia Jia

For those of us on the West Coast of North America, the event takes place in the early hours of Tuesday, November 17, 2020: 12:30 am – 3:50 am PT (note that the CST above is China Standard Time, not US Central). *Added On Nov.16.20 at 11:55 am PT: For anyone who can’t attend the live event, a full recording will be posted to YouTube.*

Kudos to all involved in organizing and participating in this event. It’s important to get as many viewpoints as possible on AI and its potential impacts.

Finally and for the curious, there’s another posting about Chinese science fiction here (May 31, 2019).