Tag Archives: UK

China’s ex-UK ambassador clashes with ‘AI godfather’ on panel at AI Action Summit in France (February 10 – 11, 2025)

The Artificial Intelligence (AI) Action Summit held from February 10 to 11, 2025 in Paris seems to have been pretty exciting. President Emmanuel Macron announced a 109-billion-euro investment in the French AI sector on February 9, 2025 (I have more in my February 13, 2025 posting [scroll down to the ‘What makes Canadian (and Greenlandic) minerals and water so important?’ subhead]). I also have this snippet, which suggests Macron is eager to provide an alternative to US domination in the field of AI, from a February 10, 2025 posting on CGTN (China Global Television Network),

French President Emmanuel Macron announced on Sunday night [February 9, 2025] that France is set to receive a total investment of 109 billion euros (approximately $112 billion) in artificial intelligence over the coming years.

Speaking in a televised interview on public broadcaster France 2, Macron described the investment as “the equivalent for France of what the United States announced with ‘Stargate’.”

He noted that the funding will come from the United Arab Emirates, major American and Canadian investment funds [emphases mine], as well as French companies.

Prime Minister Justin Trudeau attended the AI Action Summit on Tuesday, February 11, 2025, according to a Canadian Broadcasting Corporation (CBC) online news article by Ashley Burke and Olivia Stefanovich,

Prime Minister Justin Trudeau warned U.S. Vice-President J.D. Vance that punishing tariffs on Canadian steel and aluminum will hurt his home state of Ohio, a senior Canadian official said. 

The two leaders met on the sidelines of an international summit in Paris Tuesday [February 11, 2025], as the Trump administration moves forward with its threat to impose 25 per cent tariffs on all steel and aluminum imports, including from its biggest supplier, Canada, effective March 12.

Speaking to reporters on Wednesday [February 12, 2025] as he departed from Brussels, Trudeau characterized the meeting as a brief chat that took place as the pair met.

“It was just a quick greeting exchange,” Trudeau said. “I highlighted that $2.2 billion worth of steel and aluminum exports from Canada go directly into the Ohio economy, often to go into manufacturing there.

“He nodded, and noted it, but it wasn’t a longer exchange than that.”

Vance didn’t respond to Canadian media’s questions about the tariffs while arriving at the summit on Tuesday [February 11, 2025].

Additional insight can be gained from a February 10, 2025 PBS (US Public Broadcasting Service) posting of an AP (Associated Press) article with contributions from Kelvin Chan and Angela Charlton in Paris, Ken Moritsugu in Beijing, and Aijaz Hussain in New Delhi,

JD Vance stepped onto the world stage this week for the first time as U.S. vice president, using a high-stakes AI summit in Paris and a security conference in Munich to amplify Donald Trump’s aggressive new approach to diplomacy.

The 40-year-old vice president, who was just 18 months into his tenure as a senator before joining Trump’s ticket, is expected, while in Paris, to push back on European efforts to tighten AI oversight while advocating for a more open, innovation-driven approach.

The AI summit has drawn world leaders, top tech executives, and policymakers to discuss artificial intelligence’s impact on global security, economics, and governance. High-profile attendees include Chinese Vice Premier Zhang Guoqing, signaling Beijing’s deep interest in shaping global AI standards.

Macron also called on “simplifying” rules in France and the European Union to allow AI advances, citing sectors like healthcare, mobility, energy, and “resynchronize with the rest of the world.”

“We are most of the time too slow,” he said.

The summit underscores a three-way race for AI supremacy: Europe striving to regulate and invest, China expanding access through state-backed tech giants, and the U.S. under Trump prioritizing a hands-off approach.

Vance has signaled he will use the Paris summit as a venue for candid discussions with world leaders on AI and geopolitics.

“I think there’s a lot that some of the leaders who are present at the AI summit could do to, frankly — bring the Russia-Ukraine conflict to a close, help us diplomatically there — and so we’re going to be focused on those meetings in France,” Vance told Breitbart News.

Vance is expected to meet separately Tuesday with Indian Prime Minister Narendra Modi and European Commission President Ursula von der Leyen, according to a person familiar with planning who spoke on the condition of anonymity.

Modi is co-hosting the summit with Macron in an effort to prevent the sector from becoming a U.S.-China battle.

Indian Foreign Secretary Vikram Misri stressed the need for equitable access to AI to avoid “perpetuating a digital divide that is already existing across the world.”

But the U.S.-China rivalry overshadowed broader international talks.

The U.S.-China rivalry didn’t entirely overshadow the talks, however. At least one former Chinese diplomat chose to make her presence felt by chastising a Canadian academic, according to a February 11, 2025 article by Matthew Broersma for silicon.co.uk,

A representative of China at this week’s AI Action Summit in Paris stressed the importance of collaboration on artificial intelligence, while engaging in a testy exchange with Yoshua Bengio, a Canadian academic considered one of the “Godfathers” of AI.

Fu Ying, a former Chinese government official and now an academic at Tsinghua University in Beijing, said the name of China’s official AI Development and Safety Network was intended to emphasise the importance of collaboration to manage the risks around AI.

She also said tensions between the US and China were impeding the ability to develop AI safely.

… Fu Ying, a former vice minister of foreign affairs in China and the country’s former UK ambassador, took veiled jabs at Prof Bengio, who was also a member of the panel.

Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,

A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.

Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.

The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].

The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.

Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.

She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.

China now has its own equivalent; but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.

The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.

Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]

A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.

The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.

She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.

She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.

“The Chinese move faster [than the west] but it’s full of problems,” she said.

Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.

Most of the US tech giants do not share the tech which drives their products.

Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.

But Prof Bengio disagreed.

His view was that open source also left the tech wide open for criminals to misuse.

He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.

For anyone curious about Professor Bengio’s AI safety report, I have more information in a January 29, 2025 Université de Montréal (UdeM) press release,

The first international report on the safety of artificial intelligence, led by Université de Montréal computer-science professor Yoshua Bengio, was released today and promises to serve as a guide for policymakers worldwide. 

Announced in November 2023 at the AI Safety Summit at Bletchley Park, England, and inspired by the workings of the United Nations Intergovernmental Panel on Climate Change, the report consolidates leading international expertise on AI and its risks. 

Supported by the United Kingdom’s Department for Science, Innovation and Technology, Bengio, founder and scientific director of the UdeM-affiliated Mila – Quebec AI Institute, led a team of 96 international experts in drafting the report.

The experts were drawn from 30 countries, the U.N., the European Union and the OECD [Organisation for Economic Cooperation and Development]. Their report will help inform discussions next month at the AI Action Summit in Paris, France and serve as a global handbook on AI safety to help support policymakers.

Towards a common understanding

The most advanced AI systems in the world now have the ability to write increasingly sophisticated computer programs, identify cyber vulnerabilities, and perform on a par with human PhD-level experts on tests in biology, chemistry, and physics. 

In what is identified as a key development for policymakers to monitor, the AI Safety Report published today warns that AI systems are also increasingly capable of acting as AI agents, autonomously planning and acting in pursuit of a goal. 

As policymakers worldwide grapple with the rapid and unpredictable advancements in AI, the report contributes to bridging the gap by offering a scientific understanding of emerging risks to guide decision-making.  

The document sets out the first comprehensive, independent, and shared scientific understanding of advanced AI systems and their risks, highlighting how quickly the technology has evolved.  

Several areas require urgent research attention, according to the report, including how rapidly capabilities will advance, how general-purpose AI models work internally, and how they can be designed to behave reliably. 

Three distinct categories of AI risks are identified: 

  • Malicious use risks: these include cyberattacks, the creation of AI-generated child-sexual-abuse material, and even the development of biological weapons; 
  • System malfunctions: these include bias, reliability issues, and the potential loss of control over advanced general-purpose AI systems; 
  • Systemic risks: these stem from the widespread adoption of AI and include workforce disruption, privacy concerns, and environmental impacts.

The report places particular emphasis on the urgency of increasing transparency and understanding in AI decision-making as the systems become more sophisticated and the technology continues to develop at a rapid pace. 

While there are still many challenges in mitigating the risks of general-purpose AI, the report highlights promising areas for future research and concludes that progress can be made.   

Ultimately, it emphasizes that while AI capabilities could advance at varying speeds, their development and potential risks are not a foregone conclusion. The outcomes depend on the choices that societies and governments make today and in the future. 

“The capabilities of general-purpose AI have increased rapidly in recent years and months,” said Bengio. “While this holds great potential for society, AI also presents significant risks that must be carefully managed by governments worldwide.  

“This report by independent experts aims to facilitate constructive and evidence-based discussion around these risks and serves as a common basis for policymakers around the world to understand general-purpose AI capabilities, risks and possible mitigations.” 

The report is more formally known as the International AI Safety Report 2025 and can be found on the gov.uk website.

There have been two previous AI Safety Summits that I’m aware of, and you can read about them in my May 21, 2024 posting about the one in Korea and in my November 2, 2023 posting about the first summit at Bletchley Park in the UK.

You can find the Canadian Artificial Intelligence Safety Institute (or AI Safety Institute) here; my coverage of DeepSeek’s release, and the panic that ensued in the US artificial intelligence and business communities, is in my January 29, 2025 posting.

Digital Culture Talks presented by The Space online February 12 – 13, 2025

A February 5, 2025 notice (received via email) from The Space, a UK arts organization, announced a two-day series of talks on digital culture,

Digital Culture Talks 2025!

There’s just a week to go till The Space’s conference and we’re pleased to confirm our speakers for each of the roundtable talks on Day 1 and 2. There’s lots that will be of interest, including:

* A timely debate about how to make online communities safer
* An introduction to CreaTech – a £6.75 million investment to develop small, micro- and medium-sized businesses specialising in creative tech like video games and immersive reality – find out how to get involved
* Discussions on the role of artists in a digital world
* Explorations of digital accessibility, community ownership, engagement and empowerment.

Find out more here and below

Day 1
Digital communities and online harms
Wednesday 12 February

Digital accessibility, inclusion and community

Roundtable 1
How can we think differently about how we create digital content and challenge assumptions about what culture looks like? Exploring community ownership, engagement and empowerment through digital.

  • Zoe Partington – Acting CEO DaDa, Artist and Disability Consultant
  • Rachel Farrer – Associate Director, Cultural and Community Engagement Innovation Ecosystem, Coventry University
  • Parminder Dosanjh – Creative Director, Creative Black County
  • Jo Capper – Collaborative Programme Curator, Grand Union

Reducing online harms, how to make social media and online communities safer

Roundtable 2
In a world of increasingly polarised online spaces, what are the emerging trends and challenges when engaging audiences and building communities online?

Day 2
The role of artists in a digital world
Thursday 13 February

Calling all in the West Midlands!

Day 2 is taking place in person as well as streaming online. If you’d like to join us in person at the STEAMhouse in Birmingham, please register for free below.

As well as joining us for the great roundtables we have lined up, there’ll be a great chance to network in between sessions over lunch. Look forward to seeing you there!

Join us in person!

CreaTech, the Digital West Midlands and beyond – Local and Global [CreaTech is an initiative of the UK’s Creative Industries Council]

Roundtable 1
An introduction to CreaTech – a £6.75 million investment to develop small, micro- and medium-sized businesses specialising in creative tech like video games and immersive reality. Creatives and academics from across the Midlands and further afield discuss arising opportunities and what this means for the region and beyond.

  • Richard Willacy – General Director, Birmingham Opera Company 
  • Tom Rogers – Creative Content Producer, Birmingham Royal Ballet
  • Louise Latter – Head of Programme, BOM
  • Lamberto Coccioli – Project lead, CreaTech Frontiers, Professor of Music and Technology at the Royal Birmingham Conservatoire (BCU) 
  • Rachel Davis – Director of Warwick Enterprise, University of Warwick 

Platforming artists and storytellers – are artists and storytellers missing from modern discourse?

Roundtable 2
Artists and storytellers have historically played pivotal roles in shaping societal narratives and fostering cultural discourse. However, is their presence in mainstream discussions diminishing?

Come and join in the conversation!

Register to join us online

If you go to The Space’s Digital Culture Talks 2025 webpage, you’ll find a few more details. Clicking on the link to register will give you the event time appropriate to your timezone.

For anyone curious about The Space, from their homepage (scroll down about 60% of the way),

About us

Welcome to The Space. We help the arts, culture and heritage sector to engage audiences using digital and broadcast content and platforms.

As an independent not-for-profit organisation, our role is to fund the creation of new digital cultural content and provide free training, mentoring and online resources for organisations, artists and creative practitioners.

We are funded by a range of national and regional agencies, to enable you to build your digital skills, confidence and experience via practical advice and hands-on experience. We can also help you to find ways to make your digital content accessible to new and more diverse audiences.

We also offer a low-cost consultancy service for organisations who want to develop their digital cultural content strategy.

There you have it.

Your garden as a ‘living artwork’ for insects

Pollinator Pathmaker Eden Project Edition. Photo Royston Hunt. Courtesy Alexandra Daisy Ginsberg Ltd

I suppose you could call this a kind of citizen science as well as an art project. A September 11, 2024 news item on phys.org describes a new scientific art project designed for insects,

Gardens can become “living artworks” to help prevent the disastrous decline of pollinating insects, according to researchers working on a new project.

Pollinator Pathmaker is an artwork by Dr. Alexandra Daisy Ginsberg that uses an algorithm to generate unique planting designs that prioritize pollinators’ needs over human aesthetic tastes.

A September 11, 2024 University of Exeter press release (also on EurekAlert), which originated the news item, provides more detail about the research project,

Originally commissioned by the Eden Project in Cornwall in 2021, the general public can access the artist’s online tool (www.pollinator.art) to design and plant their own living artwork for local pollinators.

While pollinators – including bees, butterflies, moths, wasps, ants and beetles – are the main audience, the results may also be appealing to humans.

Pollinator Pathmaker allows users to input the specific details of their garden, including size of plot, location conditions, soil type, and play with how the algorithm will “solve” the planting to optimise it for pollinator diversity, rather than how it looks to humans.
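
(An aside from me, not the press release: the tool’s internals aren’t described here, but planting optimisation of this kind can be sketched as a greedy, set-cover-style selection: repeatedly pick the plant that supports the most pollinator groups not yet covered. Below is a minimal toy version in Python; all of the plant and pollinator data is invented for illustration.)

```python
# Toy greedy planting "solver" in the spirit of (but not taken from)
# Pollinator Pathmaker: repeatedly pick the plant that supports the most
# pollinator groups not yet covered, until the plot is full.
# All plant/pollinator data below is invented for illustration.

PLANTS = {  # plant -> pollinator groups it supports (made up)
    "comfrey": {"bumblebees", "honeybees"},
    "knapweed": {"butterflies", "hoverflies", "honeybees"},
    "foxglove": {"bumblebees"},
    "ivy": {"wasps", "hoverflies", "moths"},
    "red clover": {"bumblebees", "butterflies"},
}

def plan_plot(capacity: int) -> list[str]:
    """Greedily choose up to `capacity` plants to maximise pollinator coverage."""
    chosen: list[str] = []
    covered: set[str] = set()
    available = dict(PLANTS)
    while available and len(chosen) < capacity:
        best = max(available, key=lambda p: len(available[p] - covered))
        if not (available[best] - covered):
            break  # no remaining plant reaches a new pollinator group
        covered |= available.pop(best)
        chosen.append(best)
    return chosen

print(plan_plot(capacity=3))  # ['knapweed', 'ivy', 'comfrey']
```

A real system like Pollinator Pathmaker would also have to weigh plot size, soil type, light, and flowering seasons; this only shows the flavour of the optimisation.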

The new research project – led by the universities of Exeter and Edinburgh – has received funding from UK Research and Innovation as part of a new cross research council responsive mode scheme to support exciting interdisciplinary research.

The project aims to demonstrate how an artwork can help to drive innovative ecological conservation, by asking residents in the village of Constantine in Cornwall to plant a network of Pollinator Pathmaker living artworks in their gardens. These will become part of the multidisciplinary study.

“Pollinators are declining rapidly worldwide and – with urban and agricultural areas often hostile to them – gardens are increasingly vital refuges,” said Dr Christopher Kaiser-Bunbury, of the Centre for Ecology and Conservation on Exeter’s Penryn Campus in Cornwall.

“Our research project brings together art, ecology, social science and philosophy to reimagine what gardens are, and what they’re for.

“By reflecting on fundamental questions like these, we will empower people to rethink the way they see gardens.

 “We hope Pollinator Pathmaker will help to create connected networks of pollinator-friendly gardens across towns and cities.”

Good luck with the pollinators!

Bio-hybrid robotics (living robots) needs public debate and regulation

A July 23, 2024 University of Southampton (UK) press release (also on EurekAlert but published July 22, 2024) describes the emerging science/technology of bio-hybrid robotics and a recent study about the ethical issues raised, Note 1: bio-hybrid may also be written as biohybrid; Note 2: Links have been removed,

Development of ‘living robots’ needs regulation and public debate

Researchers are calling for regulation to guide the responsible and ethical development of bio-hybrid robotics – a ground-breaking science which fuses artificial components with living tissue and cells.

In a paper published in Proceedings of the National Academy of Sciences [PNAS] a multidisciplinary team from the University of Southampton and universities in the US and Spain set out the unique ethical issues this technology presents and the need for proper governance.

Combining living materials and organisms with synthetic robotic components might sound like something out of science fiction, but this emerging field is advancing rapidly. Bio-hybrid robots using living muscles can crawl, swim, grip, pump, and sense their surroundings. Sensors made from sensory cells or insect antennae have improved chemical sensing. Living neurons have even been used to control mobile robots.

Dr Rafael Mestre from the University of Southampton, who specialises in emergent technologies and is co-lead author of the paper, said: “The challenges in overseeing bio-hybrid robotics are not dissimilar to those encountered in the regulation of biomedical devices, stem cells and other disruptive technologies. But unlike purely mechanical or digital technologies, bio-hybrid robots blend biological and synthetic components in unprecedented ways. This presents unique possible benefits but also potential dangers.”

Research publications relating to bio-hybrid robotics have increased continuously over the last decade. But the authors found that of the more than 1,500 publications on the subject at the time, only five considered its ethical implications in depth.

The paper’s authors identified three areas where bio-hybrid robotics present unique ethical issues: Interactivity – how bio-robots interact with humans and the environment, Integrability – how and whether humans might assimilate bio-robots (such as bio-robotic organs or limbs), and Moral status.

In a series of thought experiments, they describe how a bio-robot for cleaning our oceans could disrupt the food chain, how a bio-hybrid robotic arm might exacerbate inequalities [emphasis mine], and how increasingly sophisticated bio-hybrid assistants could raise questions about sentience and moral value.

“Bio-hybrid robots create unique ethical dilemmas,” says Aníbal M. Astobiza, an ethicist from the University of the Basque Country in Spain and co-lead author of the paper. “The living tissue used in their fabrication, potential for sentience, distinct environmental impact, unusual moral status, and capacity for biological evolution or adaptation create unique ethical dilemmas that extend beyond those of wholly artificial or biological technologies.”

The paper is the first from the Biohybrid Futures project led by Dr Rafael Mestre, in collaboration with the Rebooting Democracy project. Biohybrid Futures is setting out to develop a framework for the responsible research, application, and governance of bio-hybrid robotics.

The paper proposes several requirements for such a framework, including risk assessments, consideration of social implications, and increasing public awareness and understanding.

Dr Matt Ryan, a political scientist from the University of Southampton and a co-author on the paper, said: “If debates around embryonic stem cells, human cloning or artificial intelligence have taught us something, it is that humans rarely agree on the correct resolution of the moral dilemmas of emergent technologies.

“Compared to related technologies such as embryonic stem cells or artificial intelligence, bio-hybrid robotics has developed relatively unattended by the media, the public and policymakers, but it is no less significant. We want the public to be included in this conversation to ensure a democratic approach to the development and ethical evaluation of this technology.”

In addition to the need for a governance framework, the authors set out actions that the research community can take now to guide their research.

“Taking these steps should not be seen as prescriptive in any way, but as an opportunity to share responsibility, taking a heavy weight away from the researcher’s shoulders,” says Dr Victoria Webster-Wood, a biomechanical engineer from Carnegie Mellon University in the US and co-author on the paper.

“Research in bio-hybrid robotics has evolved in various directions. We need to align our efforts to fully unlock its potential.”

Here’s a link to and a citation for the paper,

Ethics and responsibility in biohybrid robotics research by Rafael Mestre, Aníbal M. Astobiza, Victoria A. Webster-Wood, Matt Ryan, and M. Taher A. Saif. PNAS 121 (31) e2310458121, July 23, 2024. DOI: https://doi.org/10.1073/pnas.2310458121

This paper is open access.

Cyborg or biohybrid robot?

Earlier, I highlighted “… how a bio-hybrid robotic arm might exacerbate inequalities …” because it suggests cyborgs, which are not mentioned in the press release or in the paper. This seems like an odd omission but, over the years, terminology does change, although it’s not clear that’s the situation here.

I have two ‘definitions’; the first is from an October 21, 2019 article by Javier Yanes for OpenMind BBVA, Note: More about BBVA later,

The fusion between living organisms and artificial devices has become familiar to us through the concept of the cyborg (cybernetic organism). This approach consists of restoring or improving the capacities of the organic being, usually a human being, by means of technological devices. On the other hand, biohybrid robots are in some ways the opposite idea: using living tissues or cells to provide the machine with functions that would be difficult to achieve otherwise. The idea is that if soft robots seek to achieve this through synthetic materials, why not do so directly with living materials?

In contrast, there’s this from “Biohybrid robots: recent progress, challenges, and perspectives,” Note 1: Full citation for paper follows excerpt; Note 2: Links have been removed,

2.3. Cyborgs

Another approach to building biohybrid robots is the artificial enhancement of animals or using an entire animal body as a scaffold to manipulate robotically. The locomotion of these augmented animals can then be externally controlled, spanning three modes of locomotion: walking/running, flying, and swimming. Notably, these capabilities have been demonstrated in jellyfish (figure 4(A)) [139, 140], clams (figure 4(B)) [141], turtles (figure 4(C)) [142, 143], and insects, including locusts (figure 4(D)) [27, 144], beetles (figure 4(E)) [28, 145–158], cockroaches (figure 4(F)) [159–165], and moths [166–170].

….

The advantages of using entire animals as cyborgs are multifold. For robotics, augmented animals possess inherent features that address some of the long-standing challenges within the field, including power consumption and damage tolerance, by taking advantage of animal metabolism [172], tissue healing, and other adaptive behaviors. In particular, biohybrid robotic jellyfish, composed of a self-contained microelectronic swim controller embedded into live Aurelia aurita moon jellyfish, consumed one to three orders of magnitude less power per mass than existing swimming robots [172], and cyborg insects can make use of the insect’s hemolymph directly as a fuel source [173].

So, sometimes there’s a distinction and sometimes there’s not. I take this to mean that the field is still emerging and that’s reflected in evolving terminology.

Here’s a link to and a citation for the paper,

Biohybrid robots: recent progress, challenges, and perspectives by Victoria A Webster-Wood, Maria Guix, Nicole W Xu, Bahareh Behkam, Hirotaka Sato, Deblina Sarkar, Samuel Sanchez, Masahiro Shimizu and Kevin Kit Parker. Bioinspiration & Biomimetics, Volume 18, Number 1, 015001. DOI: 10.1088/1748-3190/ac9c3b. Published 8 November 2022. © 2022 The Author(s). Published by IOP Publishing Ltd.

This paper is open access.

A few notes about BBVA and other items

BBVA is Banco Bilbao Vizcaya Argentaria, according to its Wikipedia entry, Note: Links have been removed,

Banco Bilbao Vizcaya Argentaria, S.A. (Spanish pronunciation: [ˈbaŋko βilˈβao βiθˈkaʝa aɾxenˈtaɾja]), better known by its initialism BBVA, is a Spanish multinational financial services company based in Madrid and Bilbao, Spain. It is one of the largest financial institutions in the world, and is present mainly in Spain, Portugal, Mexico, South America, Turkey, Italy and Romania.[2]

BBVA’s OpenMind is, from their About us page,

OpenMind: BBVA’s knowledge community

OpenMind is a non-profit project run by BBVA that aims to contribute to the generation and dissemination of knowledge about fundamental issues of our time, in an open and free way. The project is materialized in an online dissemination community.

Sharing knowledge for a better future.

At OpenMind we want to help people understand the main phenomena affecting our lives; the opportunities and challenges that we face in areas such as science, technology, humanities or economics. Analyzing the impact of scientific and technological advances on the future of the economy, society and our daily lives is the project’s main objective, which always starts on the premise that a broader and greater quality knowledge will help us to make better individual and collective decisions.

As for other items, you can find my latest (biorobotic, cyborg, or bionic, depending on what terminology you want to use) jellyfish story in this June 6, 2024 posting; the Biohybrid Futures project mentioned in the press release here; and the Rebooting Democracy project, also mentioned in the press release (unexpected in the context of an emerging science/technology), here on the University of Southampton website.

Finally, you can find more on these stories (science/technology announcements and/or ethics research/issues) here by searching for ‘robots’ (tag and category), ‘cyborgs’ (tag), ‘machine/flesh’ (tag), ‘neuroprosthetic’ (tag), and ‘human enhancement’ (category).

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A very ‘software’ approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI, according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024, according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While the January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” information about legislative efforts is also included although you might find my May 1, 2023 posting titled “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27)” offers more comprehensive information about Canada’s legislative progress or lack thereof.

The US is always to be considered in these matters and I have a November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-⁠Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
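
(A quick aside from me, not the press release: a back-of-envelope check of those growth figures. Doubling every six months for thirteen years gives 2^26, roughly 67 million; the quoted 350-million-fold increase therefore implies a slightly faster doubling time, roughly every 5.5 months.)

```python
# Back-of-envelope check (my arithmetic, not the report's).
import math

print(2 ** 26)  # 26 six-month doublings in 13 years: 67,108,864-fold growth

growth = 350e6  # the quoted "350 million times more compute"
months_per_doubling = 12 * 13 / math.log2(growth)
print(round(months_per_doubling, 1))  # ~5.5 months per doubling to reach it
```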

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.
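
(An aside from me, not the press release: to make the registry idea concrete, here is a hypothetical sketch of transfer reporting as a data model. The entities, field names, and the idea of replaying a transfer log to estimate holdings are my own illustrative assumptions, not the report’s design.)

```python
# A minimal, hypothetical sketch of an AI-chip transfer registry.
# Entity names and fields are illustrative assumptions, not the report's design.
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class ChipTransfer:
    chip_id: str      # a unique per-chip identifier
    from_party: str   # producer, seller, or reseller filing the report
    to_party: str
    reported_at: str  # ISO 8601 timestamp of the audited filing

def current_holdings(transfers: list[ChipTransfer]) -> Counter:
    """Replay the transfer log to estimate each party's chip holdings."""
    holdings: Counter = Counter()
    for t in sorted(transfers, key=lambda t: t.reported_at):
        holdings[t.from_party] -= 1
        holdings[t.to_party] += 1
    return holdings

log = [
    ChipTransfer("chip-0001", "FabCo", "CloudCorp", "2024-01-15T00:00:00Z"),
    ChipTransfer("chip-0002", "FabCo", "LabX", "2024-02-01T00:00:00Z"),
    ChipTransfer("chip-0001", "CloudCorp", "LabX", "2024-03-10T00:00:00Z"),
]
# Producers show negative balances because fabrication isn't logged in this toy.
print(current_holdings(log))  # Counter({'LabX': 2, 'CloudCorp': 0, 'FabCo': -2})
```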

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
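
(One more aside from me: the multi-party unlock idea can be sketched as a k-of-n approval check. A real mechanism would presumably rely on threshold cryptography, splitting a key so that any k of n parties can jointly authorize compute, but this toy version only captures the policy logic; all party names are hypothetical.)

```python
# Toy sketch of a k-of-n "start switch" for large training runs. A real
# mechanism would use threshold cryptography (e.g. k-of-n key shares);
# this only captures the policy logic. All party names are hypothetical.

APPROVERS = {"regulator_a", "regulator_b", "cloud_provider", "auditor"}
THRESHOLD = 3  # k approvals required from the n parties above

def may_start_training(approvals: set[str]) -> bool:
    """Allow the run only if enough distinct, recognised parties consent."""
    return len(approvals & APPROVERS) >= THRESHOLD

print(may_start_training({"regulator_a", "auditor"}))                    # False
print(may_start_training({"regulator_a", "cloud_provider", "auditor"}))  # True
```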

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the University of Cambridge’s Centre for the Study of Existential Risk website.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks,” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

AI safety talks at Bletchley Park in November 2023

There’s a very good article about the upcoming AI (artificial intelligence) safety talks on the British Broadcasting Corporation (BBC) news website (plus some juicy, perhaps even gossipy, news about who may not be attending the event), but first, here’s the August 24, 2023 UK government press release making the announcement,

Iconic Bletchley Park to host UK AI Safety Summit in early November [2023]

Major global event to take place on the 1st and 2nd of November [2023].

– UK to host world first summit on artificial intelligence safety in November

– Talks will explore and build consensus on rapid, international action to advance safety at the frontier of AI technology

– Bletchley Park, one of the birthplaces of computer science, to host the summit

International governments, leading AI companies and experts in research will unite for crucial talks in November on the safe development and use of frontier AI technology, as the UK Government announces Bletchley Park as the location for the UK summit.

The major global event will take place on the 1st and 2nd November to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly.

To be hosted at Bletchley Park in Buckinghamshire, a significant location in the history of computer science development and once the home of British Enigma codebreaking – it will see coordinated action to agree a set of rapid, targeted measures for furthering safety in global AI use.

Preparations for the summit are already in full flow, with Matt Clifford and Jonathan Black recently appointed as the Prime Minister’s Representatives. Together they’ll spearhead talks and negotiations, as they rally leading AI nations and experts over the next three months to ensure the summit provides a platform for countries to work together on further developing a shared approach to agree the safety measures needed to mitigate the risks of AI.

Prime Minister Rishi Sunak said:

“The UK has long been home to the transformative technologies of the future, so there is no better place to host the first ever global AI safety summit than at Bletchley Park this November.

To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead.

With the combined strength of our international partners, thriving AI industry and expert academic community, we can secure the rapid international action we need for the safe and responsible development of AI around the world.”

Technology Secretary Michelle Donelan said:

“International collaboration is the cornerstone of our approach to AI regulation, and we want the summit to result in leading nations and experts agreeing on a shared approach to its safe use.

The UK is consistently recognised as a world leader in AI and we are well placed to lead these discussions. The location of Bletchley Park as the backdrop will reaffirm our historic leadership in overseeing the development of new technologies.

AI is already improving lives from new innovations in healthcare to supporting efforts to tackle climate change, and November’s summit will make sure we can all realise the technology’s huge benefits safely and securely for decades to come.”

The summit will also build on ongoing work at international forums including the OECD, Global Partnership on AI, Council of Europe, and the UN and standards-development organisations, as well as the recently agreed G7 Hiroshima AI Process.

The UK boasts strong credentials as a world leader in AI. The technology employs over 50,000 people, directly supports one of the Prime Minister’s five priorities by contributing £3.7 billion to the economy, and is the birthplace of leading AI companies such as Google DeepMind. It has also invested more on AI safety research than any other nation, backing the creation of the Foundation Model Taskforce with an initial £100 million.

Foreign Secretary James Cleverly said:

“No country will be untouched by AI, and no country alone will solve the challenges posed by this technology. In our interconnected world, we must have an international approach.

The origins of modern AI can be traced back to Bletchley Park. Now, it will also be home to the global effort to shape the responsible use of AI.”

Bletchley Park’s role in hosting the summit reflects the UK’s proud tradition of being at the frontier of new technology advancements. Since Alan Turing’s celebrated work some eight decades ago, computing and computer science have become fundamental pillars of life both in the UK and across the globe.

Iain Standen, CEO of the Bletchley Park Trust, said:

“Bletchley Park Trust is immensely privileged to have been chosen as the venue for the first major international summit on AI safety this November, and we look forward to welcoming the world to our historic site.

It is fitting that the very spot where leading minds harnessed emerging technologies to influence the successful outcome of World War 2 will, once again, be the crucible for international co-ordinated action.

We are incredibly excited to be providing the stage for discussions on global safety standards, which will help everyone manage and monitor the risks of artificial intelligence.”

The roots of AI can be traced back to the leading minds who worked at Bletchley during World War 2, with codebreakers Jack Good and Donald Michie among those who went on to write extensive works on the technology. In November [2023], it will once again take centre stage as the international community comes together to agree on important guardrails which ensure the opportunities of AI can be realised, and its risks safely managed.

The announcement follows the UK government allocating £13 million to revolutionise healthcare research through AI, unveiled last week. The funding supports a raft of new projects including transformations to brain tumour surgeries, new approaches to treating chronic nerve pain, and a system to predict a patient’s risk of developing future health problems based on existing conditions.

Tom Gerken’s August 24, 2023 BBC news article (an analysis by Zoe Kleinman follows as part of the article) fills in a few blanks, Note: Links have been removed,

World leaders will meet with AI companies and experts on 1 and 2 November for the discussions.

The global talks aim to build an international consensus on the future of AI.

The summit will take place at Bletchley Park, where Alan Turing, one of the pioneers of modern computing, worked during World War Two.

It is unknown which world leaders will be invited to the event, with a particular question mark over whether the Chinese government or tech giant Baidu will be in attendance.

The BBC has approached the government for comment.

The summit will address how the technology can be safely developed through “internationally co-ordinated action” but there has been no confirmation of more detailed topics.

It comes after US tech firm Palantir rejected calls to pause the development of AI in June, with its boss Alex Karp saying it was only those with “no products” who wanted a pause.

And in July [2023], children’s charity the Internet Watch Foundation called on Mr Sunak to tackle AI-generated child sexual abuse imagery, which it says is on the rise.

Kleinman’s analysis includes this, Note: A link has been removed,

Will China be represented? Currently there is a distinct east/west divide in the AI world but several experts argue this is a tech that transcends geopolitics. Some say a UN-style regulator would be a better alternative to individual territories coming up with their own rules.

If the government can get enough of the right people around the table in early November [2023], this is perhaps a good subject for debate.

Three US AI giants – OpenAI, Anthropic and Palantir – have all committed to opening London headquarters.

But there are others going in the opposite direction – British DeepMind co-founder Mustafa Suleyman chose to locate his new AI company InflectionAI in California. He told the BBC the UK needed to cultivate a more risk-taking culture in order to truly become an AI superpower.

Many of those who worked at Bletchley Park decoding messages during WW2 went on to write and speak about AI in later years, including codebreakers Irving John “Jack” Good and Donald Michie.

Soon after the War, [Alan] Turing proposed the imitation game – later dubbed the “Turing test” – which seeks to identify whether a machine can behave in a way indistinguishable from a human.

There is a Bletchley Park website, which sells tickets for tours.

Insight into political jockeying (i.e., some juicy news bits)

This has recently been reported by the BBC in an October 17 (?), 2023 news article by Jessica Parker & Zoe Kleinman on BBC news online,

German Chancellor Olaf Scholz may turn down his invitation to a major UK summit on artificial intelligence, the BBC understands.

While no guest list has been published of an expected 100 participants, some within the sector say it’s unclear if the event will attract top leaders.

A government source insisted the summit is garnering “a lot of attention” at home and overseas.

The two-day meeting is due to bring together leading politicians as well as independent experts and senior execs from the tech giants, who are mainly US based.

The first day will bring together tech companies and academics for a discussion chaired by the Secretary of State for Science, Innovation and Technology, Michelle Donelan.

The second day is set to see a “small group” of people, including international government figures, in meetings run by PM Rishi Sunak.

Though no final decision has been made, it is now seen as unlikely that the German Chancellor will attend.

That could spark concerns of a “domino effect” with other world leaders, such as the French President Emmanuel Macron, also unconfirmed.

Government sources say there are heads of state who have signalled a clear intention to turn up, and the BBC understands that high-level representatives from many US-based tech giants are going.

The foreign secretary confirmed in September [2023] that a Chinese representative has been invited, despite controversy.

Some MPs within the UK’s ruling Conservative Party believe China should be cut out of the conference after a series of security rows.

It is not known whether there has been a response to the invitation.

China is home to a huge AI sector and has already created its own set of rules to govern responsible use of the tech within the country.

The US, a major player in the sector and the world’s largest economy, will be represented by Vice-President Kamala Harris.

Britain is hoping to position itself as a key broker as the world wrestles with the potential pitfalls and risks of AI.

However, Berlin is thought to want to avoid any messy overlap with G7 efforts, after the group of leading democratic countries agreed to create an international code of conduct.

Germany is also the biggest economy in the EU – which is itself aiming to finalise its own landmark AI Act by the end of this year.

It includes grading AI tools depending on how significant they are, so for example an email filter would be less tightly regulated than a medical diagnosis system.

The European Commission President Ursula von der Leyen is expected at next month’s summit, while it is possible Berlin could send a senior government figure such as its vice chancellor, Robert Habeck.

A source from the Department for Science, Innovation and Technology said: “This is the first time an international summit has focused on frontier AI risks and it is garnering a lot of attention at home and overseas.

“It is usual not to confirm senior attendance at major international events until nearer the time, for security reasons.”

Fascinating, eh?

The sound of dirt

So you don’t get your hopes up, this acoustic story doesn’t offer any accompanying audio/acoustic files, i.e., I couldn’t find the sound of dirt.

In any event, there’s still an interesting story in an April 10, 2023 news item on phys.org,

U.K. and Australian ecologists have used audio technology to record different types of sounds in the soils of a degraded and restored forest to indicate the health of ecosystems.

Non-invasive acoustic monitoring has great potential for scientists to gather long-term information on species and their abundance, says Flinders University [Australia] researcher Dr. Jake Robinson, who conducted the study while at the University of Sheffield in England.


An April 8, 2023 Flinders University press release, which originated the news item, delves into the researcher’s work, Note: Links have been removed,

“Eco-acoustics can measure the health of landscapes affected by farming, mining and deforestation but can also monitor their recovery following revegetation,” he says.

“From earthworms and plant roots to shifting soils and other underground activity, these subtle sounds were stronger and more diverse in healthy soils – once background noise was blocked out.”   

The subterranean study used special microphones to collect almost 200 sound samples, each about three minutes long, from soil samples collected in restored and cleared forests in South Yorkshire, England. 

“Like underwater and above-ground acoustic monitoring, below-ground biodiversity monitoring using eco-acoustics has great potential,” says Flinders University co-author, Associate Professor Martin Breed. 

Since joining Flinders University, Dr Robinson has released his first book, entitled Invisible Friends (DOI: 10.53061/NZYJ2969) [emphasis mine], which covers his core research into ‘how microbes in the environment shape our lives and the world around us’. 

Now a researcher in restoration genomics at the College of Science and Engineering at Flinders University, Dr Robinson examines in the new book the powerful role invisible microbes play in ecology, immunology, psychology, forensics and even architecture.

“Instead of considering microbes the bane of our life, as we have done during the global pandemic, we should appreciate the many benefits they bring in keeping plants, animals, and ourselves alive.”

In another new article, Dr Robinson and colleagues call for a return to ‘nature play’ for children [emphasis mine] to expose their developing immune systems to a diverse array of microbes at a young age for better long-term health outcomes. 

“Early childhood settings should optimise both outdoor and indoor environments for enhanced exposure to diverse microbiomes for social, cognitive and physiological health,” the researchers say.  

“It’s important to remember that healthy soils feed the air with these diverse microbes,” Dr Robinson adds.  

It seems Robinson has gone on a publicity blitz, academic style, for his book. There's a May 22, 2023 essay by Robinson; Carlos Abrahams (Senior Lecturer in Environmental Biology – Director of Bioacoustics, Nottingham Trent University); and Martin Breed (Associate Professor in Biology, Flinders University) in The Conversation, Note: A link has been removed,

Nurturing a forest ecosystem back to life after it’s been logged is not always easy.

It can take a lot of hard work and careful monitoring to ensure biodiversity thrives again. But monitoring biodiversity can be costly, intrusive and resource-intensive. That’s where ecological acoustic survey methods, or “ecoacoustics”, come into play.

Indeed, the planet sings. Think of birds calling, bats echolocating, tree leaves fluttering in the breeze, frogs croaking and bush crickets stridulating. We live in a euphonious theatre of life.

Even the creatures in the soil beneath our feet emit unique vibrations as they navigate through the earth to commute, hunt, feed and mate.
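
Out of curiosity, here's roughly what an eco-acoustic analysis does with a recording. A common summary statistic is the acoustic complexity index (ACI), which scores a soundscape by how much its intensity fluctuates from one moment to the next. What follows is a minimal sketch in Python (using numpy and scipy); the file name 'soil_recording.wav' is hypothetical, and this illustrates the general approach rather than the researchers' actual analysis pipeline,

import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def acoustic_complexity_index(samples, rate, nperseg=512):
    # Spectrogram: rows are frequency bins, columns are time frames.
    freqs, times, sxx = spectrogram(samples, fs=rate, nperseg=nperseg)
    # For each frequency bin, total the absolute change in intensity
    # between adjacent time frames, normalised by the bin's overall
    # intensity; varied, biologically active soundscapes score higher.
    diffs = np.abs(np.diff(sxx, axis=1)).sum(axis=1)
    totals = sxx.sum(axis=1) + 1e-12  # guard against division by zero
    return float((diffs / totals).sum())

rate, samples = wavfile.read("soil_recording.wav")  # hypothetical file name
if samples.ndim > 1:  # mix a stereo recording down to mono
    samples = samples.mean(axis=1)
print("ACI:", round(acoustic_complexity_index(samples.astype(float), rate), 1))

In the papers, indices along these lines are compared between restored and cleared plots; per the press release, healthier soils should produce stronger, more diverse signals once background noise is accounted for.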

Robinson has published three papers in the space of five months, in addition to the book, which seems like heavy output to me.

First, here’s a link to and a citation for the education paper,

Optimising Early Childhood Educational Settings for Health Using Nature-Based Solutions: The Microbiome Aspect by Jake M. Robinson and Alexia Barrable. Educ. Sci. 2023, 13 (2), 211 DOI: https://doi.org/10.3390/educsci13020211
Published: 16 February 2023

This is an open access paper.

Next, here are links to and citations for two closely linked articles (the first appears to be a preprint version of the second),

The sound of restored soil: Measuring soil biodiversity in a forest restoration chronosequence with ecoacoustics by Jake M. Robinson, Martin F. Breed, Carlos Abrahams. doi: https://doi.org/10.1101/2023.01.23.525240 Posted January 23, 2023

The sound of restored soil: using ecoacoustics to measure soil biodiversity in a temperate forest restoration context by Jake M. Robinson, Martin F. Breed, Carlos Abrahams. Restoration Ecology, Online Version of Record before inclusion in an issue e13934 DOI: https://doi.org/10.1111/rec.13934 First published: 22 May 2023

Both links lead to open access papers.

Finally, there’s the book,

Invisible Friends; How Microbes Shape Our Lives and the World Around Us by Jake Robinson. Pelagic Publishing, 2022. ISBN 9781784274337 DOI: 10.53061/NZYJ2969

This you have to pay for.

For those who would like to hear something from nature, I have a May 27, 2022 posting, The sound of the mushroom. Enjoy!

Council of Canadian Academies (Eric Meslin) converses with George Freeman, UK Minister of Science (hybrid event) on June 8, 2023

I think this is a first, for me anyway: a Council of Canadian Academies (CCA) event that's not focused on a report from one of its expert panels. Here's more about the ‘conversation’, from a June 2, 2023 CCA announcement (received via email),

A conversation with George Freeman, UK Minister of Science (hybrid event)

Join us for a wide-ranging chat about the challenges and opportunities facing policymakers and researchers in Canada, the UK, and around the globe.
(English only)

Thursday, Jun 8, 2023 2:30 PM – 3:30 PM EDT
Bayview Yards
7 Bayview Station Road
Ottawa, ON
(and online)
 
The CCA is pleased to invite you to a conversation with George Freeman, MP, UK Minister of Science, Research and Innovation. Minister Freeman will join Eric M. Meslin, PhD, FRSC, FCAHS, President and CEO of the CCA, at Bayview Yards for a wide-ranging chat about the challenges and opportunities facing policymakers and researchers in Canada, the UK, and around the world.
 
Minister Freeman and Dr. Meslin will address a host of topics:

  • The state of science, technology and innovation policy and performance on both sides of the Atlantic;
  • Opportunities to create effective international collaborations;
  • National strategies to harness the power of quantum technologies;
  • Antimicrobial resistance and availability;
  • Arctic and Northern research priorities and approaches; and
  • Biomanufacturing and engineering biology.

Advance registration is required.

Register for the in-person event: https://www.eventbrite.ca/e/a-conversation-with-george-freeman-uk-minister-of-science-in-person-tickets-646220832907

Register to attend virtually: https://www.eventbrite.ca/e/a-conversation-with-george-freeman-uk-minister-of-science-virtual-tickets-646795341277

Why listen to George Freeman?

Ordinarily, being a Minister of Science would be enough to say, ‘Of course, let's hear what he has to say,’ but Mr. Freeman's ‘ministerial’ history is a little confusing. According to a September 24, 2021 article for Nature by Jonathan O’Callahan,

The United Kingdom has a new science minister [emphasis mine] — its ninth since 2010, following a reshuffle of Prime Minister Boris Johnson’s cabinet. George Freeman, a former investor in life-sciences companies, takes the role at a time when the coronavirus pandemic has renewed focus on research. But there are concerns that the Conservative government’s ambitious target for research spending will not be met. …

Chris Havergal’s Sept. 17, 2021 article for the Times Higher Education is titled, “George Freeman replaces Amanda Solloway as UK science minister; Former life sciences minister founded series of Cambridge biomedical start-ups before entering politics.”

For further proof of Freeman’s position, there’s this November 21, 2022 “Royal Society response to statement made by George Freeman, Minister of State (Minister for Science, Research and Innovation)”

Responding to today’s [November 21, 2022] announcement from George Freeman, Minister of State (Minister for Science, Research and Innovation), Professor Linda Partridge, Vice President of the Royal Society, said: “Last week the Government committed to protecting the science budget. Today’s announcement shows the Government’s commitment to putting science at the heart of plans for increasing productivity and driving economic growth.

“The ongoing failure to associate to Horizon Europe [the massive, cornerstone science funding programme for the European Union] remains damaging to UK science and the best solution remains securing rapid association. In the meantime, the funding announced today is a welcome intervention to help protect and stabilise the science sector.”

Oddly, Mr. Freeman’s UK government profile page does not reflect this history,

George Freeman was appointed Minister of State in the Department for Science, Innovation and Technology on 7 February 2023 [emphasis mine].

George was previously Minister of State in the Department for Business, Energy and Industrial Strategy from 26 October 2022 to 7 February 2023, Parliamentary Under Secretary of State in the Department for Business, Energy and Industrial Strategy from 17 September 2021 to 7 July 2022 [emphases mine], a Minister of State at the Department for Transport from 26 July 2019 to 13 February 2020, Parliamentary Under Secretary of State for Life Sciences at the Department for Business, Innovation and Skills and the Department of Health from July 2014 until July 2016. He also served as Parliamentary Private Secretary to the Minister of State for Climate Change from 2010 to 2011.

He was appointed government adviser on Life Sciences in July 2011, co-ordinating the government’s Life Science and Innovation, Health and Wealth Strategies (2011), and the Agri-Tech Industrial Strategy (2013). He was appointed the Prime Minister’s UK Trade Envoy in 2013.

How did Nature, Times Higher Education, and the Royal Society get the dates so wrong? Even granting that the UK had a very chaotic time, with three Prime Ministers within one year, Freeman's biographical details seem peculiar.

Here’s a description of the job from Mr. Freeman’s UK government profile page,

Minister of State (Minister for Science, Research and Innovation)


Department for Science, Innovation and Technology

Doesn’t ‘Minister of State’ signify a junior ministry, as it does in Canada? In any event, all this casts an interesting light on a January 17, 2023 posting on the Campaign for Science and Engineering (CASE) website,

Last week George Freeman, the Minister of State for Science, Research and Innovation, gave a speech to the Onward think tank setting out the UK Government’s ‘global science strategy’. Here our policy officer, Camilla d’Angelo, takes a look at his speech and what it all might mean.  

In his speech, the Minister outlined what it means for the UK to be a ‘Science Superpower’ [emphasis mine] and how this should go alongside being an ‘Innovation Nation’, highlighting a series of opportunities and policy reforms needed to achieve this. In the event the UK’s association to the EU Horizon Europe programme continues to be blocked, the Minister outlined an alternative to the scheme, setting out the UK Government’s vision for a UK science strategy. Freeman reiterated the UK Government’s commitment to increasing R&D funding to £20bn per year by 2024/25 and a plan to use this to drive private investment. It is now widely accepted that the UK is likely spending just under 3% of GDP on R&D, and the UK Government is keen to push ahead and extend the target to remain competitive with other research-intensive countries. It is positive to hear a coherent vision from the UK Government on what it wants increased R&D investment to achieve.  

Becoming a Science Superpower is required to solve societal challenges  

The Science Minister highlighted the central role of science and technology in solving some of the world’s most pressing challenges, from water security through to food production and climate change. In particular, he stressed that UK research and innovation can and should have a bigger global role and impact in helping to solve some of these challenges. The view that the UK needs to be a science and technology superpower was also echoed by a panel of R&I experts.

George Freeman outlined some of the important dimensions of what it means for the UK to become a ‘Science Superpower’ and ‘Innovation Nation’. The UK is widely held to be an academic powerhouse, with its academic science system one of its greatest national strengths. A greater focus on mission-driven research, alongside investment in general purpose technologies, could be a way to encourage the diffusion and adoption of innovations. In addition to this, other important factors include talent, industrial output, culture, soft power and geopolitical influence, many of which the UK performs less well in. 

Are the Brits going to encourage us to be a science superpower too? If everyone is a science superpower, doesn't that mean no one is a science superpower? Will the CCA one day invite someone from South Korea to talk about how their science policies have turned that country into a science powerhouse?

What advice can we expect from George Freeman? I guess we’ll find out on June 8, 2023. For those of us on Pacific Time, that means 11:30 am to 12:30 pm.

Don’t forget, there are two different registration pages,

Register for the in-person event: https://www.eventbrite.ca/e/a-conversation-with-george-freeman-uk-minister-of-science-in-person-tickets-646220832907

Register to attend virtually: https://www.eventbrite.ca/e/a-conversation-with-george-freeman-uk-minister-of-science-virtual-tickets-646795341277

Transformational machine learning (TML)

It seems machine learning is getting a tune-up. A November 29, 2021 news item on ScienceDaily describes research into improving machine learning from an international team of researchers,

Researchers have developed a new approach to machine learning that ‘learns how to learn’ and out-performs current machine learning methods for drug design, which in turn could accelerate the search for new disease treatments.

The method, called transformational machine learning (TML), was developed by a team from the UK, Sweden, India and the Netherlands. It learns from multiple problems and improves performance while it learns.

A November 29, 2021 University of Cambridge press release (also on EurekAlert), which originated the news item, describes the potential this new technique may have on drug discovery and more,

TML could accelerate the identification and production of new drugs by improving the machine learning systems which are used to identify them. The results are reported in the Proceedings of the National Academy of Sciences.

Most types of machine learning (ML) use labelled examples, and these examples are almost always represented in the computer using intrinsic features, such as the colour or shape of an object. The computer then forms general rules that relate the features to the labels.

“It’s sort of like teaching a child to identify different animals: this is a rabbit, this is a donkey and so on,” said Professor Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research. “If you teach a machine learning algorithm what a rabbit looks like, it will be able to tell whether an animal is or isn’t a rabbit. This is the way that most machine learning works – it deals with problems one at a time.”

However, this is not the way that human learning works: instead of dealing with a single issue at a time, we get better at learning because we have learned things in the past.

“To develop TML, we applied this approach to machine learning, and developed a system that learns information from previous problems it has encountered in order to better learn new problems,” said King, who is also a Fellow at The Alan Turing Institute. “Where a typical ML system has to start from scratch when learning to identify a new type of animal – say a kitten – TML can use the similarity to existing animals: kittens are cute like rabbits, but don’t have long ears like rabbits and donkeys. This makes TML a much more powerful approach to machine learning.”

The researchers demonstrated the effectiveness of their idea on thousands of problems from across science and engineering. They say it shows particular promise in the area of drug discovery, where this approach speeds up the process by checking what other ML models say about a particular molecule. A typical ML approach will search for drug molecules of a particular shape, for example. TML instead uses the connection of the drugs to other drug discovery problems.

“I was surprised how well it works – better than anything else we know for drug design,” said King. “It’s better at choosing drugs than humans are – and without the best science, we won’t get the best results.”
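
As I understand the press release, the key move is representational: instead of describing an example only by its intrinsic features (shape, colour, and so on), TML describes it by the predictions that models trained on related problems make about it. Here's a minimal sketch of that idea in Python with scikit-learn; the synthetic tasks, the choice of random forests, and all the numbers are my own invented illustrations, not the authors' code or data,

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def make_task(n_samples, n_features=20):
    # Synthetic stand-in for one prediction problem (e.g., activity
    # against one drug target). Entirely made-up data.
    X = rng.normal(size=(n_samples, n_features))
    w = rng.normal(size=n_features)
    return X, X @ w + rng.normal(scale=0.1, size=n_samples)

# Step 1: train one base model per related problem.
base_models = []
for _ in range(5):
    X, y = make_task(200)
    base_models.append(
        RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y))

# Step 2: re-represent each example from a new problem by what the
# base models predict for it, instead of by its intrinsic features.
def tml_features(X):
    return np.column_stack([m.predict(X) for m in base_models])

X_new, y_new = make_task(100)
meta = RandomForestRegressor(n_estimators=100, random_state=0)
meta.fit(tml_features(X_new[:80]), y_new[:80])
print("held-out R^2:", meta.score(tml_features(X_new[80:]), y_new[80:]))

How much this helps presumably depends on how related the problems really are; for unrelated tasks, the base models' predictions would carry little signal.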

Here’s a link to and a citation for the paper,

Transformational machine learning: Learning how to learn from many related scientific problems by Ivan Olier, Oghenejokpeme I. Orhobor, Tirtharaj Dash, Andy M. Davis, Larisa N. Soldatova, Joaquin Vanschoren, and Ross D. King. PNAS December 7, 2021 118 (49) e2108013118; DOI: https://doi.org/10.1073/pnas.2108013118

This paper appears to be open access.