Tag Archives: University of Oxford

Six months after the first one at Bletchley Park, the 2nd AI Safety Summit (May 21-22, 2024) convenes in Korea

This May 20, 2024 University of Oxford press release (also on EurekAlert) was under embargo until almost noon on May 20, 2024, which is a bit unusual in my experience. (Note: I have more about the 1st summit and the interest in AI safety at the end of this posting.)

Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago. 

Then, the world’s leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May [2024]) approaches, twenty-five of the world’s leading AI scientists say not enough is actually being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies. 

Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”

World’s response not on track in face of potentially rapid AI progress

According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems—outperforming human abilities across many critical domains—will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts. 

Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.

World-leading AI experts issue call to action

In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, and the late Daniel Kahneman; in total, 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, EU, UK, and other AI powers, and include Turing award winners, Nobel laureates, and authors of standard AI textbooks.

This article is the first time that such a large and international group of experts has agreed on priorities for global policy makers regarding the risks from advanced AI systems.

Urgent priorities for AI governance

The authors recommend that governments:

  • establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
  • mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
  • require AI companies to prioritise safety, and to demonstrate their systems cannot cause harm. This includes using “safety cases” (as used for other safety-critical technologies such as aviation), which shift the burden of demonstrating safety onto AI developers.
  • implement mitigation standards commensurate with the risk levels posed by AI systems. An urgent priority is to set in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.
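This ‘if-then’ mechanism is easy to picture in code. Here’s a minimal sketch in Python (my own illustration, with hypothetical thresholds and tier names, not anything specified in the paper) of requirements that tighten automatically as a measured capability score crosses milestones, and relax if it recedes,

    # A minimal sketch (my own, not from the paper) of capability-triggered
    # policy: requirements tighten automatically as measured AI capability
    # crosses pre-agreed milestones, and relax if progress slows or recedes.

    # Hypothetical capability thresholds mapped to requirement tiers.
    POLICY_TIERS = [
        (0.0, "baseline reporting"),
        (0.5, "mandatory third-party risk assessment"),
        (0.8, "licensing and access controls"),
    ]

    def requirements_for(capability_score: float) -> str:
        """Return the strictest requirement tier whose threshold is met."""
        active = POLICY_TIERS[0][1]
        for threshold, requirement in POLICY_TIERS:
            if capability_score >= threshold:
                active = requirement
        return active

    print(requirements_for(0.3))  # baseline reporting
    print(requirements_for(0.9))  # licensing and access controls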

According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.

AI impacts could be catastrophic

AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.

Stuart Russell OBE [Order of the British Empire], Professor of Computer Science at the University of California at Berkeley and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations—that “regulation stifles innovation.” That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

Notable co-authors:

  • The world’s most-cited computer scientist (Prof. Hinton), and the most-cited scholar in AI security and privacy (Prof. Dawn Song)
  • China’s first Turing Award winner (Andrew Yao).
  • The authors of the standard textbook on artificial intelligence (Prof. Stuart Russell) and machine learning theory (Prof. Shai Shalev-Shwartz)
  • One of the world’s most influential public intellectuals (Prof. Yuval Noah Harari)
  • A Nobel Laureate in economics, the world’s most-cited economist (Prof. Daniel Kahneman)
  • Department-leading AI legal scholars and social scientists (Lan Xue, Qiqi Gao, and Gillian Hadfield).
  • Some of the world’s most renowned AI researchers from subfields such as reinforcement learning (Pieter Abbeel, Jeff Clune, Anca Dragan), AI security and privacy (Dawn Song), AI vision (Trevor Darrell, Phil Torr, Ya-Qin Zhang), automated machine learning (Frank Hutter), and several researchers in AI safety.

Additional quotes from the authors:

Philip Torr, Professor in AI, University of Oxford:

  • I believe if we tread carefully the benefits of AI will outweigh the downsides, but for me one of the biggest immediate risks from AI is that we develop the ability to rapidly process data and control society, by government and industry. We could risk slipping into some Orwellian future with some form of totalitarian state having complete control.

Dawn Song, Professor in AI at UC Berkeley, most-cited researcher in AI security and privacy:

  •  “Explosive AI advancement is the biggest opportunity and at the same time the biggest risk for mankind. It is important to unite and reorient towards advancing AI responsibly, with dedicated resources and priority to ensure that the development of AI safety and risk mitigation capabilities can keep up with the pace of the development of AI capabilities and avoid any catastrophe”

Yuval Noah Harari, Professor of history at Hebrew University of Jerusalem, best-selling author of ‘Sapiens’ and ‘Homo Deus’, world-leading public intellectual:

  • “In developing AI, humanity is creating something more powerful than itself, that may escape our control and endanger the survival of our species. Instead of uniting against this shared threat, we humans are fighting among ourselves. Humankind seems hell-bent on self-destruction. We pride ourselves on being the smartest animals on the planet. It seems then that evolution is switching from survival of the fittest, to extinction of the smartest.”

Jeff Clune, Professor in AI at University of British Columbia and one of the leading researchers in reinforcement learning:

  • “Technologies like spaceflight, nuclear weapons and the Internet moved from science fiction to reality in a matter of years. AI is no different. We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off.”
  • “The risks we describe are not necessarily long-term risks. AI is progressing extremely rapidly. Even just with current trends, it is difficult to predict how capable it will be in 2-3 years. But what very few realize is that AI is already dramatically speeding up AI development. What happens if there is a breakthrough for how to create a rapidly self-improving AI system? We are now in an era where that could happen any month. Moreover, the odds of that being possible go up each month as AI improves and as the resources we invest in improving AI continue to exponentially increase.”

Gillian Hadfield, CIFAR AI Chair and Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto:

  • “AI labs need to walk the walk when it comes to safety. But they’re spending far less on safety than they spend on creating more capable AI systems. Spending one-third on ensuring safety and ethical use should be the minimum.”

  • “This technology is powerful, and we’ve seen it is becoming more powerful, fast. What is powerful is dangerous, unless it is controlled. That is why we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to safety and ethical use, comparable to their funding for AI capabilities.”  

Sheila McIlraith, Professor in AI, University of Toronto, Vector Institute:

  • AI is software. Its reach is global and its governance needs to be as well.
  • Just as we’ve done with nuclear power, aviation, and with biological and nuclear weaponry, countries must establish agreements that restrict development and use of AI, and that enforce information sharing to monitor compliance. Countries must unite for the greater good of humanity.
  • Now is the time to act, before AI is integrated into our critical infrastructure. We need to protect and preserve the institutions that serve as the foundation of modern society.

Frank Hutter, Professor in AI at the University of Freiburg, Head of the ELLIS Unit Freiburg, 3x ERC grantee:

  • To be clear: we need more research on AI, not less. But we need to focus our efforts on making this technology safe. For industry, the right type of regulation will provide economic incentives to shift resources from making the most capable systems yet more powerful to making them safer. For academia, we need more public funding for trustworthy AI and maintain a low barrier to entry for research on less capable open-source AI systems. This is the most important research challenge of our time, and the right mechanism design will focus the community at large to work towards the right breakthroughs.

Here’s a link to and a citation for the paper,

Managing extreme AI risks amid rapid progress; Preparation requires technical research and development, as well as adaptive, proactive governance by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Science 20 May 2024 First Release DOI: 10.1126/science.adn0117

This paper appears to be open access.

For anyone who’s curious about the buildup to these safety summits, I have more in my October 18, 2023 “AI safety talks at Bletchley Park in November 2023” posting, which features excerpts from a number of articles on AI safety. There’s also my November 2, 2023, “UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes” posting, which offers excerpts from articles critiquing the AI safety summit.

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A very software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

  • The ban of AI systems posing unacceptable risks will apply six months after the entry into force
  • Codes of practice will apply nine months after entry into force
  • Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. My January 20, 2024 posting, “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” includes information about legislative efforts, although my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US must always be considered in these matters. I have a November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website, where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
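As a quick sanity check (my own arithmetic, not from the report), those two growth figures are consistent with each other,

    import math

    # If training compute grew 350-million-fold over roughly 13 years,
    # what doubling time does that imply?
    growth_factor = 350e6
    years = 13

    doublings = math.log2(growth_factor)          # ~28.4 doublings
    months_per_doubling = years * 12 / doublings  # ~5.5 months

    print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
    # ~5.5 months per doubling, in line with the report's
    # "doubling around every six months since 2010"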

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
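In software terms, such a registry is essentially an append-only ledger keyed by chip identifier. Here’s a toy sketch in Python (my own illustration, not the report’s design; the names are made up),

    from collections import defaultdict

    class ChipRegistry:
        """Toy transfer ledger: every sale is logged, so auditors can
        total the compute held by any owner at any time."""

        def __init__(self):
            self.owner_of = {}   # chip_id -> current owner
            self.transfers = []  # append-only audit log

        def register_transfer(self, chip_id, seller, buyer):
            # A chip not yet registered is assumed to come from its producer.
            assert self.owner_of.get(chip_id, seller) == seller, "seller mismatch"
            self.owner_of[chip_id] = buyer
            self.transfers.append((chip_id, seller, buyer))

        def holdings(self):
            counts = defaultdict(int)
            for owner in self.owner_of.values():
                counts[owner] += 1
            return dict(counts)

    registry = ChipRegistry()
    registry.register_transfer("chip-0001", "producer_A", "datacentre_B")
    registry.register_transfer("chip-0002", "producer_A", "datacentre_B")
    print(registry.holdings())  # {'datacentre_B': 2}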

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
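The multiparty “start switch” amounts to a quorum rule. A minimal sketch (again my own illustration, with made-up party names; a real system would rest on threshold cryptography baked into the hardware, not a software flag),

    from dataclasses import dataclass, field

    @dataclass
    class TrainingRunLock:
        """A risky training run unlocks only when a quorum of
        independent key-holders approves (k-of-n consent)."""
        required_approvals: int             # quorum size, e.g. 2 of 3
        approvals: set = field(default_factory=set)

        def approve(self, party: str) -> None:
            self.approvals.add(party)

        def unlocked(self) -> bool:
            return len(self.approvals) >= self.required_approvals

    lock = TrainingRunLock(required_approvals=2)
    lock.approve("regulator")
    print(lock.unlocked())   # False: one approval is not enough
    lock.approve("cloud_provider")
    print(lock.unlocked())   # True: quorum reached, compute unlocks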

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the website of the University of Cambridge’s Centre for the Study of Existential Risk.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

Global gathering in Rwanda for 5th International Conference on Governmental Science Advice (INGSA2024): “The Transformation Imperative”

The 4th gathering was in Montréal, Québec, Canada (as per my August 31, 2021 posting). Unfortunately, this is one of those times where I’m late to the party. The 5th International Conference on Governmental Science Advice (INGSA2024) ran from May 1 – 2, 2024 but there are some satellite events taking place over the next few days.

I’m featuring this somewhat stale news because it offers a more global perspective on science policy and government advisors, from the May 1, 2024 International Network for Government Science Advice (INGSA) news release (PDF and on EurekAlert),

What? 5th International Conference on Governmental Science Advice, INGSA2024, marking the 10th Anniversary of the creation of the International Network for Governmental Science Advice (INGSA) & first meeting held in the global south.

Where?   Kigali Convention Center, Rwanda: https://ingsa2024.squarespace.com/

When?    1 – 2 May, 2024.

Context: One of the largest independent gatherings of thought- and practice-leaders in governmental science advice, research funding, multi-lateral institutions, academia, science communication and diplomacy is taking place in Kigali, Rwanda. Organised by Prof Rémi Quirion, Chief Scientist of Québec and President of the International Network for Governmental Science Advice (INGSA), speakers from 39 countries[1] from Brazil to Burkina Faso and from Ireland to Indonesia, plus over 300 delegates from 65 countries, will spotlight what is really at stake in the relationship between science, societies and policy-making, during times of crisis and routine.

From the air we breathe, the cars we drive, and the Artificial Intelligence we use, to the medical treatments or the vaccines we take, and the education we provide to children, this relationship, and the decisions it can influence, matter immensely. In our post-Covid, climate-shifted, and digitally-evolving world, the importance of robust knowledge in policy-making is more pronounced than ever. This imperative is accompanied by growing complexities that demand attention. INGSA’s two-day gathering strives to both examine and empower inclusion and diversity as keystones in how we approach all-things Science Advice and Science Diplomacy to meet these local-to-global challenges.

Held previously in Auckland 2014, Brussels 2016, Tokyo 2018 and Montréal 2021, Kigali 2024 organisers have made it a priority to involve more diverse speakers from developing countries and to broaden the thematic scope. Examining the complex interactions between scientists, public policy and diplomatic relations at local, national, regional and international levels, especially in times of crisis, the overarching theme is: “The Transformation Imperative”.

The main conference programme (see link below) will scrutinise everything from case-studies outlining STI funding tips, successes and failures in our advisory systems, plus regional to global initiatives to better connect them, to how digital technologies and A.I. are reshaping the profession itself.

INGSA2024 is also initiating and hosting a range of independent side-events that, in themselves, act as major meeting and rallying points that partners and attending delegates are encouraged to maximise. These include, amongst others, events organised by the Foreign Ministries Science & Technology Advice Network (FMSTAN); the International Public Policy Observatory Roundtable (IPPO); the High-Level Dialogue on the Future of Science Diplomacy (co-organised by the American Association for the Advancement of Science (AAAS), the European Commission, the Geneva Science & Diplomacy Anticipator (GESDA), and The Royal Society); the Organisation of Southern Cooperation (OSC) meeting on ‘Bridging Worlds of Knowledge – Promoting Endogenous Knowledge Development’; the Science for Africa Foundation, University of Oxford Pandemic Sciences Institute’s meeting on ‘Translating Research Into Policy and Practice’; and the African Institute of Mathematical Sciences (AIMS) ‘World Build Simulation Training on Quantum Technology’ with INGSA and GESDA. INGSA will also host its own internal strategy Global Chapter & Division Meetings.

Prof Rémi Quirion, Conference Co-Chair, Chief Scientist of Québec and President of INGSA, has said that:

“For those of us who believe wholeheartedly in evidence and the integrity of science, recent years have been challenging. Mis- and disinformation can spread like a virus. So positive developments like our gathering here in Rwanda are even more critical. The importance of open science and access to data to better inform scientific integration and the collective action we now need, has never been more pressing. Our shared UN sustainable development goals play out at national and local levels. Cities and municipalities bear the brunt of climate change, but also can drive the solutions. I am excited to see and hear first-hand how the global south is increasingly at the forefront of these efforts, and to help catalyse new ways to support this. I have no doubt that INGSA’s efforts and the Kigali conference, which is co-led with the Rwandan Ministry of Education and the University of Rwanda, will act as a carrier-wave for greater engagement. I hope we will see new global collaborations and actions that will be remembered as having first taken root at INGSA2024”.

Hon. Gaspard Twagirayezu, Minister of Education of Rwanda has lent his support to the INGSA conference, saying:

“We are proud to see the INGSA conference come to Rwanda, as we are at a turning point in our management of longer-term challenges that affect us all. Issues that were considered marginal even five or ten years ago are today rightly seen as central to our social, environmental, and economic wellbeing. We are aware of how rapid scientific advances are generating enormous public interest, but we also must build the capabilities to absorb, generate and critically consider new knowledge and technologies. Overcoming current crises and future challenges requires global coordination in science advice, and INGSA is well positioned to carry out this important work. It makes me particularly proud that INGSA’s Africa Chapter has chosen our capital Kigali as its pan-African base. Rwanda and Africa can benefit greatly from this collaboration.”

Assoc. Prof. Didas Kayihura Muganga, Vice-Chancellor, University of Rwanda, stated:

“What this conference shows is that grass-roots citizens in Rwanda, across Africa and Worldwide can no longer be treated as simple statistics or passive bystanders. Citizens and communities are rightfully demanding greater transparency and accountability especially about science and technology. Ensuring, and demonstrating, that decisions are informed by robust evidence is an important step.  But we must also ensure that the evidence is meaningful to our context and our population. Complex problems arise from a multiplicity of factors, so we need greater diversity of perspectives to help address them.   This is what is changing before our very eyes. For some it is climate, biodiversity or energy supply that matters most, for others it remains access to basic education and public health. Regardless, all exemplify humanity’s interdependence.”

Daan du Toit, acting Director-General of the Department of Science & Innovation of the Government of South Africa and Programme Committee Member commented:

“INGSA has long helped build and elevate open and ongoing public and policy dialogue about the role of robust evidence in sound policy making. But now, the conversation is deepening to critically consider the scope and breadth of evidence: what evidence, whose evidence and who has access to the evidence? Operating on all continents, INGSA demonstrates the value of a well-networked community of emerging and experienced practitioners and academics working at the interfaces between science, societies and public policy. We were involved in its creation in Auckland in 2014, and have stayed close and applaud the decision to bring this 5th International Biennial Meeting to Africa. Learning from each other, we can help bring a wider variety of robust knowledge more centrally into policy-making. That is why in 2022 we supported a start-up initiative based in Pretoria called the Science Diplomacy Capital for Africa (SDCfA). The energy shown in the set-up of this meeting demonstrates our potential as Africans to do so much more together.”

INGSA-Africa’s Regional Chapter

INGSA2024 is very much ‘coming home’ and represents the first time that this biennial event is being co-hosted by a Regional Chapter. In February 2016, INGSA announced the creation of the INGSA-Africa Regional Chapter, which held its first workshop in Hermanus, South Africa. The Chapter has since made great strides in engaging francophone Africa, organising INGSA’s first French-language workshop in Dakar, Senegal in 2017 and a bilingual meeting as a side-event of the World Science Forum 2022 in Cape Town. The Chapter’s decentralised virtual governance structure means that it embraces the continent, and new initiatives, like the Kigali Training Hub, are set to become pivotal players in the development of evidence-to-policy ecosystems across Africa.

Dr M. Oladoyin Odubanjo, Conference Co-Chair and Chair of INGSA-Africa, outlined that:

“As a public health physician and current Executive Secretary of the Nigerian Academy of Sciences (NAS), responsible for providing scientific advice to policy-makers, I have learnt that science and politics share common features. Both operate at the boundaries of knowledge and uncertainty, but they approach problems differently. Scientists question and challenge our assumptions, constantly searching for empiric evidence to determine the best options. In contrast, politicians are most often guided by the needs or demands of voters and constituencies, and by ideology. Our INGSA-Africa Chapter is working at the nexus of both communities and we encourage everybody to get involved. Hosting this conference in Kigali is like a shot in the arm that can only lead us on to even bigger and brighter things.”

Sir Peter Gluckman, President of the International Science Council, and founding chair of INGSA mentioned: “Good science advice is critical to decision making at any level from local to global. It helps decision makers understand the evidence for or against, and the implications of any choice they make. In that way science advice makes it more likely that decision makers will make better decisions. INGSA as the global capacity building platform has a critical role to play in ensuring the quality of science policy interface.”

Strength in numbers

What makes the 5th edition of this biennial event stand out is perhaps the novel range of speakers from all continents, working at the boundary between science, society and policy, who are willing to make their voices heard. More information on Parallel Sessions organisers as well as speakers can be found on the website.

About INGSA

Founded in 2014 with regional chapters in Africa, Asia and Latin America and the Caribbean, and key partnerships in Europe and North America, INGSA has quickly established an important reputation as a collaborative platform for policy exchange, capacity building and operational research across diverse global science advisory organisations and national systems. INGSA is a free community of peer support and practice with over 6,000 members globally. Science communicators and members of the media are warmly welcomed to join for free.

Through workshops, conferences and a growing catalogue of tools and guidance, the network aims to enhance the global science-policy interface to improve the potential for evidence-informed policy formation at sub-national, national and transnational levels. INGSA operates as an affiliated body of the International Science Council. INGSA’s secretariat is based at the University of Auckland in New Zealand, while the office of the President is hosted at the Fonds de recherche du Québec in Montréal, which has also launched the Réseau francophone international en conseil scientifique (RFICS), whose mandate is capacity reinforcement in science advice across the Francophonie.

INGSA2024 Sponsors

As always, INGSA organized a highly accessible and inclusive conference by not charging a registration fee. Philanthropic support from many sponsors made the conference possible. Special recognition is made to the Fonds de recherche du Québec, the Rwanda Ministry of Education as well as the University of Rwanda. The full list of donors is available on the INGSA2024 website (link below).

[1] Australia, Belgium, Brazil, Cameroon, Canada, Chile, China, Costa Rica, Cote d’Ivoire, Denmark, Egypt, Ethiopia, Finland, France, Germany, Ghana, India, Ireland, Italy, Jamaica, Japan, Kenya, Lebanon, Malawi, Malaysia, Mauritius, Mexico, New Zealand, Nigeria, Portugal, Rwanda, Saudi Arabia, South Africa, Spain, Sri Lanka, Uganda, UK, USA, Zimbabwe

Satellite sessions are taking place today (May 3, 2024),

  • High-Level Dialogue on the Future of Science
  • Bridging Worlds of Knowledge
  • Translating Research into Policy and Practice
  • Quantum Technology in Africa

The last session on the list, “Quantum Technology …,” is a science diplomacy role-playing workshop. (It’s of particular interest to me as the Council of Canadian Academies (CCA) released a report, Quantum Potential, in Fall 2023, about which I’m still hoping to write a commentary.)

Even though the sessions have already taken place, it’s worth taking a look at the conference programme and the satellite events just to get a sense of the global breadth of interest in this work. Here’s the INGSA2024 website.

Nature’s missing evolutionary law added in new paper by leading scientists and philosophers

An October 22, 2023 commentary by Rae Hodge for Salon.com introduces the new work with a beautiful lede/lead and more,

A recently published scientific article proposes a sweeping new law of nature, approaching the matter with dry, clinical efficiency that still reads like poetry.

“A pervasive wonder of the natural world is the evolution of varied systems, including stars, minerals, atmospheres, and life,” the scientists write in the Proceedings of the National Academy of Sciences. “Evolving systems are asymmetrical with respect to time; they display temporal increases in diversity, distribution, and/or patterned behavior,” they continue, mounting their case from the shoulders of Charles Darwin, extending it toward all things living and not.

To join the known physics laws of thermodynamics, electromagnetism and Newton’s laws of motion and gravity, the nine scientists and philosophers behind the paper propose their “law of increasing functional information.”

In short, a complex and evolving system — whether that’s a flock of goldfinches or a nebula or the English language — will produce ever more diverse and intricately detailed states and configurations of itself.

And here, any writer should find their breath caught in their throat. Any writer would have to pause and marvel.

It’s a rare thing to hear the voice of science singing toward its twin in the humanities. The scientists seem to be searching in their paper for the right words to describe the way the nested trills of a flautist rise through a vaulted cathedral to coalesce into notes themselves not played by human breath. And how, in the very same way, the oil-slick sheen of a June Bug wing may reveal its unseen spectra only against the brief-blooming dogwood in just the right season of sun.

Both intricate configurations of art and matter arise and fade according to their shared characteristic, long-known by students of the humanities: each have been graced with enough time to attend to the necessary affairs of their most enduring pleasures.

If you have the time, do read this October 22, 2023 commentary as Hodge waxes eloquent.

An October 16, 2023 news item on phys.org announces the work in a more prosaic fashion,

A paper published in the Proceedings of the National Academy of Sciences describes “a missing law of nature,” recognizing for the first time an important norm within the natural world’s workings.

In essence, the new law states that complex natural systems evolve to states of greater patterning, diversity, and complexity. In other words, evolution is not limited to life on Earth, it also occurs in other massively complex systems, from planets and stars to atoms, minerals, and more.

It was authored by a nine-member team: scientists from the Carnegie Institution for Science, the California Institute of Technology (Caltech), and Cornell University, and philosophers from the University of Colorado.

An October 16, 2023 Carnegie Science Earth and Planets Laboratory news release on EurekAlert (there is also a somewhat shorter October 16, 2023 version on the Carnegie Science [Carnegie Institution of Science] website), which originated the news item, provides a lot more detail,

“Macroscopic” laws of nature describe and explain phenomena experienced daily in the natural world. Natural laws related to forces and motion, gravity, electromagnetism, and energy, for example, were described more than 150 years ago. 

The new work presents a modern addition — a macroscopic law recognizing evolution as a common feature of the natural world’s complex systems, which are characterised as follows:

  • They are formed from many different components, such as atoms, molecules, or cells, that can be arranged and rearranged repeatedly
  • They are subject to natural processes that cause countless different arrangements to be formed
  • Only a small fraction of all these configurations survive in a process called “selection for function.”

Regardless of whether the system is living or nonliving, when a novel configuration works well and function improves, evolution occurs. 

The authors’ “Law of Increasing Functional Information” states that the system will evolve “if many different configurations of the system undergo selection for one or more functions.”
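For the quantitatively minded: “functional information,” as defined in Hazen and colleagues’ earlier work (my paraphrase; see the paper for the authors’ precise formulation), measures how rare a given degree of function is among all possible configurations,

    I(E_x) = -\log_2\left[\,F(E_x)\,\right]

where F(E_x) is the fraction of all possible configurations of the system achieving a degree of function of at least E_x. The rarer and more demanding the function, the greater the functional information, so a system that keeps discovering better-functioning configurations shows increasing functional information over time.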

“An important component of this proposed natural law is the idea of ‘selection for function,’” says Carnegie astrobiologist Dr. Michael L. Wong, first author of the study.

In the case of biology, Darwin equated function primarily with survival—the ability to live long enough to produce fertile offspring. 

The new study expands that perspective, noting that at least three kinds of function occur in nature. 

The most basic function is stability – stable arrangements of atoms or molecules are selected to continue. Also chosen to persist are dynamic systems with ongoing supplies of energy. 

The third and most interesting function is “novelty”—the tendency of evolving systems to explore new configurations that sometimes lead to startling new behaviors or characteristics. 

Life’s evolutionary history is rich with novelties—photosynthesis evolved when single cells learned to harness light energy, multicellular life evolved when cells learned to cooperate, and species evolved thanks to advantageous new behaviors such as swimming, walking, flying, and thinking. 

The same sort of evolution happens in the mineral kingdom. The earliest minerals represent particularly stable arrangements of atoms. Those primordial minerals provided foundations for the next generations of minerals, which participated in life’s origins. The evolution of life and minerals are intertwined, as life uses minerals for shells, teeth, and bones.

Indeed, Earth’s minerals, which began with about 20 at the dawn of our Solar System, now number almost 6,000, thanks to ever more complex physical, chemical, and ultimately biological processes over 4.5 billion years. 

In the case of stars, the paper notes that just two major elements – hydrogen and helium – formed the first stars shortly after the big bang. Those earliest stars used hydrogen and helium to make about 20 heavier chemical elements. And the next generation of stars built on that diversity to produce almost 100 more elements.

“Charles Darwin eloquently articulated the way plants and animals evolve by natural selection, with many variations and traits of individuals and many different configurations,” says co-author Robert M. Hazen of Carnegie Science, a leader of the research.

“We contend that Darwinian theory is just a very special, very important case within a far larger natural phenomenon. The notion that selection for function drives evolution applies equally to stars, atoms, minerals, and many other conceptually equivalent situations where many configurations are subjected to selective pressure.”

The co-authors themselves represent a unique multi-disciplinary configuration: three philosophers of science, two astrobiologists, a data scientist, a mineralogist, and a theoretical physicist.

Says Dr. Wong: “In this new paper, we consider evolution in the broadest sense—change over time—which subsumes Darwinian evolution based upon the particulars of ‘descent with modification.’”  

“The universe generates novel combinations of atoms, molecules, cells, etc. Those combinations that are stable and can go on to engender even more novelty will continue to evolve. This is what makes life the most striking example of evolution, but evolution is everywhere.”

Among many implications, the paper offers: 

  1. Understanding of how differing systems vary in the degree to which they can continue to evolve. “Potential complexity” or “future complexity” have been proposed as metrics of how much more complex an evolving system might become
  2. Insights into how the rate of evolution of some systems can be influenced artificially. The notion of functional information suggests that the rate of evolution in a system might be increased in at least three ways: (1) by increasing the number and/or diversity of interacting agents, (2) by increasing the number of different configurations of the system, and/or (3) by enhancing the selective pressure on the system (for example, in chemical systems by more frequent cycles of heating/cooling or wetting/drying).
  3. A deeper understanding of generative forces behind the creation and existence of complex phenomena in the universe, and the role of information in describing them
  4. An understanding of life in the context of other complex evolving systems. Life shares certain conceptual equivalencies with other complex evolving systems, but the authors point to a future research direction, asking if there is something distinct about how life processes information on functionality (see also https://royalsocietypublishing.org/doi/10.1098/rsif.2022.0810).
  5. Aiding the search for life elsewhere: if there is a demarcation between life and non-life that has to do with selection for function, can we identify the “rules of life” that allow us to discriminate that biotic dividing line in astrobiological investigations? (See also https://conta.cc/3LwLRYS, “Did Life Exist on Mars? Other Planets? With AI’s Help, We May Know Soon”)
  6. At a time when evolving AI systems are an increasing concern, a predictive law of information that characterizes how both natural and symbolic systems evolve is especially welcome

Laws of nature – motion, gravity, electromagnetism, thermodynamics, etc. – codify the general behavior of various macroscopic natural systems across space and time. 

The “law of increasing functional information” published today complements the 2nd law of thermodynamics, which states that the entropy (disorder) of an isolated system increases over time (and heat always flows from hotter to colder objects).
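In symbols, the second law says that for an isolated system the entropy S never decreases,

    \frac{dS}{dt} \geq 0

while the proposed law is a counterpart statement for functional information in systems whose configurations undergo selection. (The formula is the standard textbook statement of the second law, not something taken from the paper.)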

* * * * *

Comments

“This is a superb, bold, broad, and transformational article.  …  The authors are approaching the fundamental issue of the increase in complexity of the evolving universe. The purpose is a search for a ‘missing law’ that is consistent with the known laws.

“At this stage of the development of these ideas, rather like the early concepts in the mid-19th century of coming to understand ‘energy’ and ‘entropy,’ open broad discussion is now essential.”

Stuart Kauffman
Institute for Systems Biology, Seattle WA

“The study of Wong et al. is like a breeze of fresh air blowing over the difficult terrain at the trijunction of astrobiology, systems science and evolutionary theory. It follows in the steps of giants such as Erwin Schrödinger, Ilya Prigogine, Freeman Dyson and James Lovelock. In particular, it was Schrödinger who formulated the perennial puzzle: how can complexity increase — and drastically so! — in living systems, while they remain bound by the Second Law of thermodynamics? In the pile of attempts to resolve this conundrum in the course of the last 80 years, Wong et al. offer perhaps the best shot so far.”

“Their central idea, the formulation of the law of increasing functional information, is simple but subtle: a system will manifest an increase in functional information if its various configurations generated in time are selected for one or more functions. This, the authors claim, is the controversial ‘missing law’ of complexity, and they provide a bunch of excellent examples. From my admittedly quite subjective point of view, the most interesting ones pertain to life in radically different habitats like Titan or to evolutionary trajectories characterized by multiple exaptations of traits resulting in a dramatic increase in complexity. Does the correct answer to Schrödinger’s question lie in this direction? Only time will tell, but both my head and my gut are curiously positive on that one. Finally, another great merit of this study is worth pointing out: in this day and age of rabid Counter-Enlightenment on the loose, as well as relentless attacks on the freedom of thought and speech, we certainly need more unabashedly multidisciplinary and multicultural projects like this one.”

Milan Cirkovic 
Astronomical Observatory of Belgrade, Serbia; The Future of Humanity Institute, Oxford University [University of Oxford]

The natural laws we recognize today cannot yet account for one astounding characteristic of our universe—the propensity of natural systems to “evolve.” As the authors of this study attest, the tendency to increase in complexity and function through time is not specific to biology, but is a fundamental property observed throughout the universe. Wong and colleagues have distilled a set of principles which provide a foundation for cross-disciplinary discourse on evolving systems. In so doing, their work will facilitate the study of self-organization and emergent complexity in the natural world.

Corday Selden
Department of Marine and Coastal Sciences, Rutgers University

The paper “On the roles of function and selection in evolving systems” provides an innovative, compelling, and sound theoretical framework for the evolution of complex systems, encompassing both living and non-living systems. Pivotal in this new law is functional information, which quantitatively captures the possibilities a system has to perform a function. As some functions are indeed crucial for the survival of a living organism, this theory addresses the core of evolution and is open to quantitative assessment. I believe this contribution has also the merit of speaking to different scientific communities that might find a common ground for open and fruitful discussions on complexity and evolution.

Andrea Roli
Assistant Professor, Università di Bologna.

Here’s a link to and a citation for the paper,

On the roles of function and selection in evolving systems by Michael L. Wong, Carol E. Cleland, Daniel Arends Jr., Stuart Bartlett, H. James Cleaves, Heather Demarest, Anirudh Prabhu, Jonathan I. Lunine, and Robert M. Hazen. Proceedings of the National Academy of Sciences (PNAS) 120 (43) e2310223120 DOI: https://doi.org/10.1073/pnas.2310223120 Published: October 16, 2023

This paper is open access.

Optical memristors and neuromorphic computing

A June 5, 2023 news item on Nanowerk announced a paper which reviews the state-of-the-art of optical memristors, Note: Links have been removed,

AI, machine learning, and ChatGPT may be relatively new buzzwords in the public domain, but developing a computer that functions like the human brain and nervous system – both hardware and software combined – has been a decades-long challenge. Engineers at the University of Pittsburgh are today exploring how optical “memristors” may be a key to developing neuromorphic computing.

Resistors with memory, or memristors, have already demonstrated their versatility in electronics, with applications as computational circuit elements in neuromorphic computing and compact memory elements in high-density data storage. Their unique design has paved the way for in-memory computing and captured significant interest from scientists and engineers alike.

A new review article published in Nature Photonics (“Integrated Optical Memristors”) sheds light on the evolution of this technology—and the work that still needs to be done for it to reach its full potential. Led by Nathan Youngblood, assistant professor of electrical and computer engineering at the University of Pittsburgh Swanson School of Engineering, the article explores the potential of optical devices that are analogs of electronic memristors. This new class of device could play a major role in revolutionizing high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence in the optical domain.
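If ‘resistor with memory’ sounds abstract, this toy model of my own devising (an idealized memristive element, not any device from the review) may help: the conductance drifts with the charge that has flowed through the device and stays put when the bias is removed,

```python
# Idealized memristive element: the conductance G integrates the charge
# that has passed through the device and is bounded between G_MIN and G_MAX.
# Remove the bias and G stays where it is -- that's the "memory."

G_MIN, G_MAX = 0.1, 1.0   # conductance bounds (arbitrary units)
ALPHA = 0.5               # how strongly charge moves the internal state
DT = 1e-3                 # time step

def step(G, voltage):
    """Advance the device one time step under an applied voltage."""
    current = G * voltage        # Ohm's law at the present state
    G += ALPHA * current * DT    # state drifts with the charge (i * dt)
    return min(max(G, G_MIN), G_MAX)

G = G_MIN
for _ in range(2000):            # positive bias: conductance rises
    G = step(G, 1.0)
print(f"after a +1 V write:   G = {G:.3f}")
for _ in range(2000):            # zero bias: the state is retained
    G = step(G, 0.0)
print(f"after resting at 0 V: G = {G:.3f}")
```

An optical memristor does the analogous trick with light: a nonvolatile change in a material’s optical properties rather than in its electrical resistance.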

A June 2, 2023 University of Pittsburgh news release (also on EurekAlert but published June 5, 2023), which originated the news item, provides more detail,

“Researchers are truly captivated by optical memristors because of their incredible potential in high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence,” explained Youngblood. “Imagine merging the incredible advantages of optics with local information processing. It’s like opening the door to a whole new realm of technological possibilities that were previously unimaginable.” 

The review article presents a comprehensive overview of recent progress in this emerging field of photonic integrated circuits. It explores the current state of the art and highlights the potential applications of optical memristors, which combine the benefits of ultrafast, high-bandwidth optical communication with local information processing. However, scalability emerged as the most pressing issue that future research should address. 

“Scaling up in-memory or neuromorphic computing in the optical domain is a huge challenge. Having a technology that is fast, compact, and efficient makes scaling more achievable and would represent a huge step forward,” explained Youngblood. 

“One example of the limitations is that if you were to take phase change materials, which currently have the highest storage density for optical memory, and try to implement a relatively simplistic neural network on-chip, it would take a wafer the size of a laptop to fit all the memory cells needed,” he continued. “Size matters for photonics, and we need to find a way to improve the storage density, energy efficiency, and programming speed to do useful computing at useful scales.”

Using Light to Revolutionize Computing

Optical memristors can revolutionize computing and information processing across several applications. They can enable active trimming of photonic integrated circuits (PICs), allowing for on-chip optical systems to be adjusted and reprogrammed as needed without continuously consuming power. They also offer high-speed data storage and retrieval, promising to accelerate processing, reduce energy consumption, and enable parallel processing. 

Optical memristors can even be used for artificial synapses and brain-inspired architectures. Dynamic memristors with nonvolatile storage and nonlinear output replicate the long-term plasticity of synapses in the brain and pave the way for spiking integrate-and-fire computing architectures.

Research to scale up and improve optical memristor technology could unlock unprecedented possibilities for high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence. 

“We looked at a lot of different technologies. The thing we noticed is that we’re still far away from the target of an ideal optical memristor–something that is compact, efficient, fast, and changes the optical properties in a significant manner,” Youngblood said. “We’re still searching for a material or a device that actually meets all these criteria in a single technology in order for it to drive the field forward.”

“Integrated Optical Memristors” (DOI: 10.1038/s41566-023-01217-w) was published in Nature Photonics and is coauthored by senior author Harish Bhaskaran at the University of Oxford, Wolfram Pernice at Heidelberg University, and Carlos Ríos at the University of Maryland.

Despite including that final paragraph, I’m also providing a link to and a citation for the paper,

Integrated optical memristors by Nathan Youngblood, Carlos A. Ríos Ocampo, Wolfram H. P. Pernice & Harish Bhaskaran. Nature Photonics volume 17, pages 561–572 (2023) DOI: https://doi.org/10.1038/s41566-023-01217-w Published online: 29 May 2023 Issue Date: July 2023

This paper is behind a paywall.
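One more aside before leaving this story: the release’s mention of spiking ‘integrate-and-fire’ architectures refers to a standard textbook neuron model, simple enough to sketch in a few lines of Python (a generic illustration of mine, nothing specific to optical hardware),

```python
import random

# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward rest, integrates incoming current, and fires when it crosses
# a threshold. Memristive synapses would supply the input weights.

TAU = 20.0       # membrane time constant (ms)
V_THRESH = 1.0   # firing threshold (arbitrary units)
V_RESET = 0.0    # potential right after a spike
DT = 1.0         # time step (ms)

random.seed(0)
v, spikes = 0.0, []
for t in range(200):
    i_in = 0.08 + 0.04 * random.random()  # noisy input current
    v += DT * (-v / TAU + i_in)           # leak + integrate
    if v >= V_THRESH:                     # threshold crossed: fire...
        spikes.append(t)
        v = V_RESET                       # ...and reset
print(f"{len(spikes)} spikes at t = {spikes} ms")
```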

Incorporating human cells into computer chips

What are the ethics of incorporating human cells into computer chips? That’s the question that Julian Savulescu (Visiting Professor in Biomedical Ethics, University of Melbourne and Uehiro Chair in Practical Ethics, University of Oxford), Christopher Gyngell (Research Fellow in Biomedical Ethics, The University of Melbourne), and Tsutomu Sawai (Associate Professor, Humanities and Social Sciences, Hiroshima University) discuss in a May 24, 2022 essay on The Conversation (Note: A link has been removed),

The year is 2030 and we are at the world’s largest tech conference, CES in Las Vegas. A crowd is gathered to watch a big tech company unveil its new smartphone. The CEO comes to the stage and announces the Nyooro, containing the most powerful processor ever seen in a phone. The Nyooro can perform an astonishing quintillion operations per second, which is a thousand times faster than smartphone models in 2020. It is also ten times more energy-efficient with a battery that lasts for ten days.

A journalist asks: “What technological advance allowed such huge performance gains?” The chief executive replies: “We created a new biological chip using lab-grown human neurons. These biological chips are better than silicon chips because they can change their internal structure, adapting to a user’s usage pattern and leading to huge gains in efficiency.”

Another journalist asks: “Aren’t there ethical concerns about computers that use human brain matter?”

Although the name and scenario are fictional, this is a question we have to confront now. In December 2021, Melbourne-based Cortical Labs grew groups of neurons (brain cells) that were incorporated into a computer chip. The resulting hybrid chip works because both silicon chips and neurons share a common language: electricity.

The authors explain their comment that silicon chips and neurons share the common language of electricity (Note: Links have been removed),

In silicon computers, electrical signals travel along metal wires that link different components together. In brains, neurons communicate with each other using electric signals across synapses (junctions between nerve cells). In Cortical Labs’ Dishbrain system, neurons are grown on silicon chips. These neurons act like the wires in the system, connecting different components. The major advantage of this approach is that the neurons can change their shape, grow, replicate, or die in response to the demands of the system.

Dishbrain could learn to play the arcade game Pong faster than conventional AI systems. The developers of Dishbrain said: “Nothing like this has ever existed before … It is an entirely new mode of being. A fusion of silicon and neuron.”

Cortical Labs believes its hybrid chips could be the key to the kinds of complex reasoning that today’s computers and AI cannot produce. Another start-up making computers from lab-grown neurons, Koniku, believes their technology will revolutionise several industries including agriculture, healthcare, military technology and airport security. Other types of organic computers are also in the early stages of development.

Ethics issues arise (Note: Links have been removed),

… this raises questions about donor consent. Do people who provide tissue samples for technology research and development know that it might be used to make neural computers? Do they need to know this for their consent to be valid?

People will no doubt be much more willing to donate skin cells for research than their brain tissue. One of the barriers to brain donation is that the brain is seen as linked to your identity. But in a world where we can grow mini-brains from virtually any cell type, does it make sense to draw this type of distinction?

… Consider the scandal regarding Henrietta Lacks, an African-American woman whose cells were used extensively in medical and commercial research without her knowledge and consent.

Henrietta’s cells are still used in applications which generate huge amounts of revenue for pharmaceutical companies (including, recently, to develop COVID vaccines). The Lacks family still has not received any compensation. If a donor’s neurons end up being used in products like the imaginary Nyooro, should they be entitled to some of the profit made from those products?

Another key ethical consideration for neural computers is whether they could develop some form of consciousness and experience pain. Would neural computers be more likely to have experiences than silicon-based ones? …

This May 24, 2022 essay is fascinating and, if you have the time, I encourage you to read it all.

If you’re curious, you can find out about Cortical Labs here, more about Dishbrain in a February 22, 2022 article by Brian Patrick Green for iai (Institute for Art and Ideas) news, and more about Koniku in a May 31, 2018 posting about ‘wetware’ by Alissa Greenberg on Medium.

As for Henrietta Lacks, there’s this from my May 13, 2016 posting,

*HeLa cells are named for Henrietta Lacks who unknowingly donated her immortal cell line to medical research. You can find more about the story on the Oprah Winfrey website, which features an excerpt from the Rebecca Skloot book “The Immortal Life of Henrietta Lacks.” …

I checked; the excerpt is still on the Oprah Winfrey site.

h/t May 24, 2022 Nanowerk Spotlight article

The how and why of nanopores

An August 19, 2021 Universidade NOVA de Lisboa ITQB NOVA press release (also on EurekAlert) explains what nanopores are while describing research into determining how their locations can be controlled,

At the simplest of levels, nanopores are (nanometre-sized) holes in an insulating membrane. The hole allows ions to pass through the membrane when a voltage is applied, resulting in a measurable current. When a molecule passes through a nanopore, it causes a change in the current that can be used to characterize and even identify individual molecules. Nanopores are extremely powerful single-molecule biosensing devices and can be used to detect and sequence DNA, RNA, and even proteins. Recently, the technology has been used in SARS-CoV-2 virus sequencing.  

Solid-state nanopores are an extremely versatile type of nanopore formed in ultrathin membranes (less than 50 nanometres), made from materials such as silicon nitride (SiNx). Solid-state nanopores can be created with a range of diameters and can withstand a multitude of conditions (discover more about solid-state nanopore fabrication techniques here). One of the most appealing techniques with which to fabricate nanopores is Controlled Breakdown (CBD). This technique is quick, reduces fabrication costs, does not require specialized equipment, and can be automated.

CBD is a technique in which an electric field is applied across the membrane to induce a current. At some point, a spike in the current is observed, signifying pore formation. The voltage is then quickly reduced to ensure the fabrication of a single, small nanopore.

The mechanisms underlying this process have not been fully elucidated, so an international team involving ITQB NOVA decided to further investigate how electrical conduction through the membrane occurs during breakdown, namely how oxidation and reduction reactions (also called redox reactions; they involve electron loss or gain, respectively) influence the process. To do this, the team created three devices in which the electric field is applied to the membrane (a silicon-rich SiNx membrane) in different ways: via metal electrodes on both sides of the membrane; via electrolyte solutions on both sides of the membrane; and via a mixed device with a metal electrode on one side and an electrolyte solution on the other.

Results showed that redox reactions must occur at the membrane-electrolyte interface, whilst the metal electrodes circumvent this need. The team also demonstrated that, because of this phenomenon, nanopore fabrication could be localized to certain regions by performing CBD with metal microelectrodes on the membrane surface. Finally, by varying the content of silicon in the membrane, the investigators demonstrated that conduction and nanopore formation are highly dependent on the membrane material since it limits the electrical current in the membrane.

“Controlling the location of nanopores has been of interest to us for a number of years”, says James Yates. Pedro Sousa adds that “our findings suggest that CBD can be used to integrate pores with complementary micro or nanostructures, such as tunneling electrodes or field-effect sensors, across a range of different membrane materials.”  These devices may then be used for the detection of specific molecules, such as proteins, DNA, or antibodies, and applied to a wide array of scenarios, including pandemic surveillance or food safety.

This project was developed by a research team led by ITQB NOVA’s James Yates and has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreements No. 724300 and 875525). Co-author Pedro Miguel Sousa is also from ITQB NOVA. The other consortium members are from the University of Oxford, Oak Ridge National Laboratory, Imperial College London, and Queen Mary University of London. The authors would like to thank Andrew Briggs for providing financial support.
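The CBD recipe the press release describes is, at heart, a simple control loop: ramp the voltage, watch the leakage current, and back off the instant the current spikes. Here’s a rough sketch of that logic in Python (entirely my own simulation; `read_current` is a stand-in for whatever instrument drives the real experiment),

```python
import math
import random

BREAKDOWN_V = 8.0  # pretend the membrane breaks down at this bias

def read_current(v):
    """Simulated membrane: leakage rises with voltage, then jumps
    sharply once a pore has formed."""
    leak = 1e-9 * math.exp(0.5 * v)
    if v >= BREAKDOWN_V:
        leak *= 100
    return leak * random.uniform(0.9, 1.1)

def fabricate_by_cbd(v_start=2.0, v_step=0.1, v_max=15.0, spike_factor=10.0):
    """Ramp the bias until the current spikes (pore formation); a real
    controller would drop the voltage immediately at that point so that
    only a single, small pore forms."""
    v = v_start
    baseline = read_current(v)
    while v < v_max:
        v += v_step
        i = read_current(v)
        if i > spike_factor * baseline:  # spike observed: breakdown
            return v
        baseline = i                     # track the slowly rising leakage
    raise RuntimeError("no breakdown below v_max")

random.seed(0)
print(f"pore formed at about {fabricate_by_cbd():.1f} V")
```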

Here’s a link to and a citation for the paper,

Understanding Electrical Conduction and Nanopore Formation During Controlled Breakdown by Jasper P. Fried, Jacob L. Swett, Binoy Paulose Nadappuram, Aleksandra Fedosyuk, Pedro Miguel Sousa, Dayrl P. Briggs, Aleksandar P. Ivanov, Joshua B. Edel, Jan A. Mol, James R. Yates. Small DOI: https://doi.org/10.1002/smll.202102543 First published: 01 August 2021

This paper is open access.
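And since the press release opens with the sensing principle, here’s one last toy of mine: a translocating molecule shows up as a transient dip (a ‘blockade’) in the open-pore current, and pulling events out of a trace can be as simple as thresholding,

```python
import random

# Simulate an open-pore current of ~1000 pA with three blockade events,
# then recover the events by simple thresholding.
random.seed(1)
trace = [1000 + random.gauss(0, 5) for _ in range(3000)]
for start in (500, 1400, 2300):          # three molecules pass through
    for i in range(start, start + 80):
        trace[i] -= 300                  # ~30% current blockade

THRESHOLD = 850                          # pA, set between the two levels
events, inside, t0 = [], False, 0
for i, current in enumerate(trace):
    if current < THRESHOLD and not inside:   # blockade begins
        inside, t0 = True, i
    elif current >= THRESHOLD and inside:    # blockade ends
        inside = False
        events.append((t0, i - t0))          # (start, dwell time)

for t0, dwell in events:
    print(f"event at sample {t0}, dwell {dwell} samples")
```

Real analyses are fancier (adaptive baselines, dwell-time statistics, machine-learned classifiers), but the blockade idea is the core of it.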

Ai-Da (robot artist) writes and performs poem honouring Dante’s 700th anniversary

Remarkable, eh?

Who is Ai-Da?

Thank you to the contributor(s) of the Ai-Da (robot) Wikipedia entry (Note: Links have been removed),

Ai-Da was invented by gallerist Aidan Meller,[3] in collaboration with Engineered Arts, a Cornish robotics company.[4] Her drawing intelligence was developed by computer AI researchers at the University of Oxford,[5] and her drawing arm is the work of engineers based in Leeds.[4]

Ai-Da has her own website here (from the homepage),

Ai-Da is the world’s first ultra-realistic artist robot. She draws using cameras in her eyes, her AI algorithms, and her robotic arm. Created in February 2019, she had her first solo show at the University of Oxford, ‘Unsecured Futures’, where her [visual] art encouraged viewers to think about our rapidly changing world. She has since travelled and exhibited work internationally, and had her first show in a major museum, the Design Museum, in 2021. She continues to create art that challenges our notions of creativity in a post-humanist era.

Ai-Da – is it art?

The role and definition of art change over time. Ai-Da’s work is art, because it reflects the enormous integration of technology in today’s society. We recognise ‘art’ means different things to different people. 

Today, a dominant opinion is that art is created by the human, for other humans. This has not always been the case. The ancient Greeks felt art and creativity came from the Gods. Inspiration was divine inspiration. Today, a dominant mind-set is that of humanism, where art is an entirely human affair, stemming from human agency. However, current thinking suggests we are edging away from humanism, into a time where machines and algorithms influence our behaviour to a point where our ‘agency’ isn’t just our own. It is starting to get outsourced to the decisions and suggestions of algorithms, and complete human autonomy starts to look less robust. Ai-Da creates art, because art no longer has to be restrained by the requirement of human agency alone.  

It seems that Ai-Da has branched out from visual art into poetry. (I wonder how many of the arts Ai-Da can produce and/or perform?)

A divine comedy? Dante and Ai-Da

The 700th anniversary of poet Dante Alighieri’s death has occasioned an exhibition, DANTE: THE INVENTION OF CELEBRITY, 17 September 2021–9 January 2022, at Oxford’s Ashmolean Museum.

Professor Gervase Rosser (University of Oxford), exhibition curator, wrote this in his September 21, 2021 exhibition essay “Dante and the Robot: An encounter at the Ashmolean”,

Ai-Da, the world’s most modern humanoid artist, is involved in an exhibition about the poet and philosopher, Dante Alighieri, writer of the Divine Comedy, whose 700th anniversary is this year. A major exhibition, ‘Dante and the Invention of Celebrity’, opens at Oxford’s Ashmolean Museum this month, and includes an intervention by this most up-to-date robot artist.

…

Honours are being paid around the world to the author of what he called a Comedy because, unlike a tragedy, it began badly but ended well. From the darkness of hell, the work sees Dante journey through purgatory, before eventually arriving at the eternal light of paradise. What hold does a poem about the spiritual redemption of humanity, written so long ago, have on us today?

One challenge to both spirit and humanity in the 21st century is the power of artificial intelligence, created and unleashed by human ingenuity.  The scientists who introduced this term, AI, in the 1950s announced that ‘every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it’.

Over the course of a human lifetime, that prophecy has almost been realised.  Artificial intelligence has already taken the place of human thought, often in ways which are not apparent. In medicine, AI promises to become both irreplaceable and inestimable.

But to an extent which we are, perhaps, frightened to acknowledge, AI monitors our consumption patterns, our taste in everything from food to culture, our perception of ourselves, even our political views. If we want to re-orientate ourselves and take a critical view of this, before it is too late to regain control, how can we do so?

Creative fiction offers a field in which our values and aspirations can be questioned. This year has seen the publication of Klara and the Sun, by Kazuo Ishiguro, which evokes a world, not many years into the future, in which humanoid AI robots have become the domestic servants and companions of all prosperous families.

One of the book’s characters asks a fundamental question about the human heart, ‘Do you think there is such a thing? Something that makes each of us special and individual?’

Art can make two things possible: through it, artificial intelligence, which remains largely unseen, can be made visible and tangible, and it can be given a prophetic voice, which we can choose to heed or ignore.

These aims have motivated the creators of Ai-Da, the artist robot which, through a series of exhibitions, is currently provoking questions around the globe (from the United Nations headquarters in Geneva to Cairo, and from the Design Museum in London [UK] to Abu Dhabi) about the nature of human creativity, originality, and authenticity.

In the Ashmolean Museum’s Gallery 8, Dante meets artificial intelligence, in a staged encounter deliberately designed to invite reflection on what it means to see the world; on the nature of creativity; and on the value of human relationships.

The juxtaposition of AI with the Divine Comedy, in a year in which the poem is being celebrated as a supreme achievement of the human spirit, is timely. The encounter, however, is not presented as a clash of incompatible opposites, but as a conversation.

This is the spirit in which Ai-Da has been developed by her inventors, Aidan Meller and Lucy Seal, in collaboration with technical teams in Oxford University and elsewhere. Significantly, she takes her name from Ada Lovelace [emphasis mine], a mathematician and writer who was belatedly recognised as the first programmer. At the time of her early death in 1852, at the age of 36, she was considering writing a visionary kind of mathematical poetry, and wrote about her idea of ‘poetical philosophy, poetical science’.

For the Ashmolean exhibition, Ai-Da has made works in response to the Comedy. The first focuses on one of the circles of Dante’s Purgatory. Here, the souls of the envious compensate for their lives on earth, which were partially, but not irredeemably, marred by their frustrated desire for the possessions of others.

My first thought on seeing the inventor’s name, Aidan Meller, was that he had named the robot after himself; I did not pick up on the Ada Lovelace connection. I appreciate how smart the name is, especially as it also references AI.

Finally, the excerpts don’t do justice to Rosser’s essay; I recommend reading it if you have the time.

4th International Conference on Science Advice to Governments (INGSA2021) August 30 – September 2, 2021

What I find most exciting about this conference is the range of countries represented. At first glance, I’ve spotted Argentina, Thailand, Senegal, Ivory Coast, Costa Rica, and more at a science meeting being held in Canada. Thank you to the organizers and to the International Network for Government Science Advice (INGSA).

As I’ve noted many times here in discussing the science advice we (Canadians) get through the Council of Canadian Academies (CCA), there’s far too much dependence on the same old, same old countries for international expertise. Let’s hope this meeting changes things.

The conference (with the theme Build Back Wiser: Knowledge, Policy and Publics in Dialogue) started on Monday, August 30, 2021 and is set to run for four days in Montréal, Québec, and as an online event. The Premier of Québec, François Legault, and the Mayor of Montréal, Valérie Plante (along with Peter Gluckman, Chair of INGSA, and Rémi Quirion, Chief Scientist of Québec; Québec is the only province with a chief scientist), are there to welcome those who are present in person.

You can find a PDF of the four day programme here or go to the INGSA 2021 website for the programme and more. Here’s a sample from the programme of what excited me, from Day 1 (August 30, 2021),

8:45 | Plenary | Roundtable: Reflections from Covid-19: Where to from here?

Moderator:
Mona Nemer – Chief Science Advisor of Canada

Speakers:
Joanne Liu – Professor, School of Population and Global Health, McGill University, Quebec, Canada
Chor Pharn Lee – Principal Foresight Strategist at Centre for Strategic Futures, Prime Minister’s Office, Singapore
Andrea Ammon – Director of the European Centre for Disease Prevention and Control, Sweden
Rafael Radi – President of the National Academy of Sciences; Coordinator of Scientific Honorary Advisory Group to the President on Covid-19, Uruguay

9:45 | Panel: Science advice during COVID-19: What factors made the difference?

Moderator:

Romain Murenzi – Executive Director, The World Academy of Sciences (TWAS), Italy

Speakers:

Stephen Quest – Director-General, European Commission’s Joint Research Centre (JRC), Belgium
Yuxi Zhang – Postdoctoral Research Fellow, Blavatnik School of Government, University of Oxford, United Kingdom
Amadou Sall – Director, Pasteur Institute of Dakar, Senegal
Inaya Rakhmani – Director, Asia Research Centre, Universitas Indonesia

One last excerpt, from Day 2 (August 31, 2021),

Studio Session | Panel: Science advice for complex risk assessment: dealing with complex, new, and interacting threats

Moderator:
Eeva Hellström – Senior Lead, Strategy and Foresight, Finnish Innovation Fund Sitra, Finland

Speakers:
Albert van Jaarsveld – Director General and Chief Executive Officer, International Institute for Applied Systems Analysis, Austria
Abdoulaye Gounou – Head, Benin’s Office for the Evaluation of Public Policies and Analysis of Government Action
Catherine Mei Ling Wong – Sociologist, LRF Institute for the Public Understanding of Risk, National University of Singapore
Andria Grosvenor – Deputy Executive Director (Ag), Caribbean Disaster Emergency Management Agency, Barbados

Studio Session | Innovations in Science Advice – Science Diplomacy driving evidence for policymaking

Moderator:
Mehrdad Hariri – CEO and President of the Canadian Science Policy Centre, Canada

Speakers:
Primal Silva – Canadian Food Inspection Agency’s Chief Science Operating Officer, Canada
Zakri bin Abdul Hamid – Chair of the South-East Asia Science Advice Network (SEA SAN); Pro-Chancellor of Multimedia University in Malaysia
Christian Arnault Emini – Senior Economic Adviser to the Prime Minister’s Office in Cameroon
Florence Gauzy Krieger and Sebastian Goers – RLS-Sciences Network [See more about RLS-Sciences below]
Elke Dall and Angela Schindler-Daniels – European Union Science Diplomacy Alliance
Alexis Roig – CEO, SciTech DiploHub – Barcelona Science and Technology Diplomacy Hub, Spain

RLS-Sciences (RLS-Sciences Network) has this description for itself on the About/Background webpage,

RLS-Sciences works under the framework of the Regional Leaders Summit. The Regional Leaders Summit (RLS) is a forum comprising seven regional governments (state, federal state, or provincial), which together represent approximately one hundred eighty million people across five continents, and a collective GDP of three trillion USD. The regions are: Bavaria (Germany), Georgia (USA), Québec (Canada), São Paulo (Brazil), Shandong (China), Upper Austria (Austria), and Western Cape (South Africa). Since 2002, the heads of government for these regions have met every two years for a political summit. These summits offer the RLS regions an opportunity for political dialogue.

Getting back to the main topic of this post, INGSA has some satellite events on offer, including this on Open Science,

Open Science: Science for the 21st century |

Science ouverte : la science au XXIe siècle

Thursday September 9, 2021; 11am-2pm EST |
Jeudi 9 septembre 2021, 11 h à 14 h (HNE).

Places Limited – Registrations Required

This event will be in English and French (using simultaneous translation)  | 
Cet événement se déroulera en anglais et en français (traduction simultanée)

In the past 18 months we have seen an unprecedented level of sharing as medical scientists worked collaboratively and shared data to find solutions to the COVID-19 pandemic. The pandemic has accelerated the ongoing cultural shift in research practices towards open science. 

This acceleration of the discovery/research process presents opportunities for institutions and governments to develop infrastructure, tools, funding, policies, and training to support, promote, and reward open science efforts. It also presents new opportunities to accelerate progress towards the UN Agenda 2030 Sustainable Development Goals through international scientific cooperation.

At the same time, it presents new challenges: rapid developments in open science often outpace national open science policies, funding, and infrastructure frameworks. Moreover, the development of international standard setting instruments, such as the future UNESCO Recommendation on Open Science, requires international harmonization of national policies, the establishment of frameworks to ensure equitable participation, and education, training, and professional development.

This 3-hour satellite event brings together international and national policy makers, funders, and experts in open science infrastructure to discuss these issues. 

The outcome of the satellite event will be a summary report with recommendations for open science policy alignment at institutional, national, and international levels.

The event will be hosted on an events platform, with simultaneous interpretation in English and French.  Participants will be able to choose which concurrent session they participate in upon registration. Registration is free but will be closed when capacity is reached.

This satellite event takes place in time for an interesting anniversary. The Montreal Neurological Institute (MNI), also known as Montreal Neuro, declared itself Open Science in 2016, the first academic research institute (as far as we know) to do so in the world (see my January 22, 2016 posting for details about their open science initiative and my December 19, 2016 posting for more about their open science and their decision not to pursue patents for a five-year period).

The Open Science satellite event is organized by:

The Canadian Commission for UNESCO [United Nations Educational, Scientific and Cultural Organization],

The Neuro (Montreal Neurological Institute-Hospital),

The Knowledge Equity Lab [Note: A University of Toronto initiative with Leslie Chan as director, this website is currently under maintenance]

That’s all folks (for now)!