Tag Archives: UK

A Multidisciplinary Centre for Neuromorphic (brainlike) Computing in the UK

A May 6, 2025 Aston University press release (also on EurekAlert but published May 7, 2025) announces a UK ‘neuromorphic initiative’. Note: Links have been removed,

  • Aston University to lead the UK’s new centre to pioneer brain-inspired, energy-efficient computing technologies 
  • The initiative will receive £5.6 million over four years from the Engineering and Physical Sciences Research Council [EPSRC]
  • The aim of the centre is to become a focal point for networking and collaboration on fundamental research and technology.

The UK will be getting a new centre to pioneer brain-inspired, energy-efficient computing technologies.

The UK Multidisciplinary Centre for Neuromorphic Computing is led by Aston University and will receive £5.6 million over four years from the UKRI [UK Research and Innovation] Engineering and Physical Sciences Research Council (EPSRC).

The aim of the centre is to become a focal point for networking and collaboration on fundamental research and technology of neuromorphic computing to address the sustainability challenges facing today’s digital infrastructure and artificial intelligence systems.

The centre will be led by the Aston Institute of Photonic Technologies (AIPT) and will include world-leading researchers from Aston University, the University of Oxford, the University of Cambridge, the University of Southampton, Queen Mary University of London, Loughborough University and the University of Strathclyde.

Neuromorphic computing seeks to replicate the brain’s structural and functional principles; however, scientists currently lack a deep, system-level understanding of how the human brain computes at cellular and network scales. The researchers aim to tackle that challenge directly, blending stem-cell-derived human neuron experiments with advanced computational models, low-power algorithms and novel photonic hardware.

The centre team includes world-leading researchers with broad and complementary expertise in neuroscience, non-conventional computing algorithms, photonics, opto- and nano-electronics and materials science. In collaboration with policymakers and industrial partners the scientists and engineers aim to demonstrate the capabilities of neuromorphic computing across a range of sectors and applications. The centre will be supported by a broad network of industry partners including Microsoft Research, Thales, BT, QinetiQ, Nokia Bell Labs, Hewlett Packard Labs, Leonardo, Northrop Grumman and a number of small to medium enterprises. Their contribution will focus on enhancing the centre’s impact on society.

Professor Rhein Parri, co-director and neurophysiologist at Aston University, said: “For the first time, we can combine the study of living human neurons with that of advanced computing platforms to co-develop the future of computing.

“This project is an exciting leap forward, learning from biology and technology in ways that were not previously possible.”

The experts aim to co-design brain-inspired neuromorphic systems by studying human neuronal function using the latest human induced pluripotent stem cell – or hiPSC technologies – and developing new computational paradigms and low-power AI algorithms. They also plan to create devices and hardware that are inspired by biological systems, like the human brain. These devices will use light – or photonic hardware – to process information. This approach will be the next big step in making computing more energy-efficient and capable of handling many tasks at the same time. They also aim to create a sustainable UK research ecosystem through training, road mapping, and international collaboration.

Professor Sergei K. Turitsyn, director of the centre and AIPT, said: “The project’s ambition is not only to develop future technologies, but also to create a new internationally known UK research brand in neuromorphic computing that will unite the UK’s best minds across disciplines and will lead to sustainable operation and a long-term impact. It’s a proud moment for AIPT and Aston University to lead this national effort.”

Professor Natalia Berloff, co-director of the centre based at the University of Cambridge, said: “One of the most exciting aspects of neuromorphic computing is the potential of photonic hardware to deliver truly brain-like efficiency.

“Light-based processors can exploit massive parallelism and ultrafast signal propagation to outperform conventional electronics on demanding AI workloads, while consuming far less power. By combining these photonic architectures with insights from living human neurons, we aim to co-design neuromorphic systems that move beyond incremental improvements and toward a genuinely transformative computing paradigm.”

In addition, the researchers aim to tackle the increasing global energy footprint of information and communication technologies, which is developing at an unsustainable pace, driven partly by the explosive growth of artificial intelligence. Today’s AI systems are built on traditional computing hardware with increasingly high power consumption (on the order of kilowatts), posing a barrier to scalability and sustainability. In contrast, the human brain performs complex computation and communication tasks using just 20 watts.
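To put that gap in numbers, here is a back-of-the-envelope comparison. The 20 W brain figure comes from the press release; the 10 kW per-server figure is an assumed round number for illustration, not a sourced value:

```python
# Rough arithmetic behind the sustainability argument above. The brain's
# ~20 W figure is from the press release; 10 kW for an AI server is an
# assumed round number for illustration, not a sourced figure.
brain_watts = 20
server_watts = 10_000  # assumed

ratio = server_watts / brain_watts
print(f"A {server_watts / 1000:.0f} kW server draws {ratio:.0f}x the brain's power budget.")
```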

Professor Dimitra Georgiadou, co-director of the centre based at the University of Southampton, added: “To address the challenge of substantially lowering the power consumption in electronics, novel materials and device architectures are needed that can effectively emulate computation in the brain and cellular responses to certain stimuli.”

The centre’s ambition goes beyond technology development as it aims to serve as a foundation for a long-term, interdisciplinary research ecosystem – actively expanding its membership and reach over time. It aims to establish a sustainable centre that continues to be a focal point for the community and will thrive beyond the initial funding period, reinforcing innovation, partnership, and impact in the field of neuromorphic computing.

Good luck to this effort to lower power consumption.

With a wave of your finger you can control your electronic fabric

A March 6, 2025 news item on ScienceDaily announces a durable electronic textile that can be washed,

A team of researchers from Nottingham Trent University (UK), Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and Free University of Bozen-Bolzano (Italy) has created washable and durable magnetic field sensing electronic textiles — thought to be the first of their kind — which they say paves the way to transform use in clothing, as they report in the journal Communications Engineering. This technology will allow users to interact with everyday textiles or specialized clothing by simply pointing their finger above a sensor.

A March 5, 2025 Helmholtz-Zentrum Dresden-Rossendorf press release (also on EurekAlert but published March 6, 2025), which originated the news item, describes some possibilities that, until now, have been the province of science fiction,

The researchers show how they placed tiny flexible and highly responsive magnetoresistive sensors within braided textile yarns compatible with conventional textile manufacturing. The garment can be operated by the user across a variety of functions through the use of a ring or glove fitted with a miniature magnet. The sensors are seamlessly integrated within the textile, allowing the position of the sensors to be indicated using dyeing or embroidering, acting as touchless controls or ‘buttons’.

The technology, which could even be in the form of a textile-based keyboard, can be integrated into clothing and other textiles and can work underwater and across different weather conditions. Importantly, the researchers argue, it is not prone to accidental activation unlike some capacitive sensors in textiles and textile-based switches. “By integrating the technology into everyday clothing people would be able to interact with computers, smart phones, watches and other smart devices, transforming their clothes into a wearable human-computer interface”, summarizes Dr. Denys Makarov from the Institute of Ion Beam Physics and Materials Research at HZDR.
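As a rough illustration of how such a touchless ‘button’ might be read out in software, here is a minimal sketch. The thresholds, units and function name are invented for illustration; the paper’s actual signal processing may differ:

```python
# Hypothetical sketch: turning a stream of magnetoresistive sensor readings
# into touchless "button press" events. Thresholds and units are invented.

def detect_presses(readings, on_threshold=5.0, off_threshold=2.0):
    """Return indices where a 'press' begins.

    Hysteresis (separate on/off thresholds) helps reject noise, analogous
    to how a deliberate finger-mounted magnet produces a much stronger
    field than ambient interference or incidental contact.
    """
    presses = []
    active = False
    for i, field in enumerate(readings):
        if not active and field >= on_threshold:
            presses.append(i)  # press starts here
            active = True
        elif active and field <= off_threshold:
            active = False     # press released
    return presses

# A magnet swept over the sensor twice:
signal = [0.1, 0.3, 6.2, 7.9, 3.1, 0.4, 0.2, 5.5, 8.0, 1.0]
print(detect_presses(signal))  # [2, 7]
```

The hysteresis gap between the two thresholds is one simple way to model the “not prone to accidental activation” property the researchers highlight.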

Washable fashion for human-computer interaction

The technology could be applied to areas such as temperature or safety controls for specialized clothing, gaming, or interactive fashion – such as allowing its users to employ simple gestures to control LEDs or other illuminating devices embedded in the textiles. Furthermore, the research team demonstrates the technology on a variety of uses, including a functional armband allowing navigational control in a virtual reality environment, and a self-monitoring safety strap for a motorcycle helmet. “It is the first time that washable magnetic sensors have been unobtrusively integrated within textiles to be used for human-computer interactions”, emphasizes Prof. Niko Münzenrieder from Free University of Bozen-Bolzano.

“Our design could revolutionize electronic textiles for both specialized and everyday clothing,” said lead researcher Dr. Pasindu Lugoda, who is based in Nottingham Trent University’s Department of Engineering. He further remarks: “Tactile sensors on textiles vary in usefulness as accidental activation occurs when they rub or brush against surfaces. Touchless interaction reduces wear and tear. Importantly, our technology is designed for everyday use. It is machine washable and durable and does not impact the drape, or overall aesthetic appeal of the textile.”

Electronic textiles are becoming increasingly popular with wide-ranging uses, but fusing electronic functionality with textile fabrics can be very challenging. Such textiles have evolved to rely on soft, flexible materials that are robust enough to endure washing and bending while remaining intuitive and reliable.

Here’s a link to and a citation for the paper,

Submersible touchless interactivity in conformable textiles enabled by highly selective overbraided magnetoresistive sensors by Pasindu Lugoda, Eduardo Sergio Oliveros-Mata, Kalana Marasinghe, Rahul Bhaumik, Niccolò Pretto, Carlos Oliveira, Tilak Dias, Theodore Hughes-Riley, Michael Haller, Niko Münzenrieder & Denys Makarov. Communications Engineering volume 4, Article number: 33 (2025) DOI: https://doi.org/10.1038/s44172-025-00373-x Published: 25 February 2025

This paper is open access.

Everyone’s talking about* insects/bugs: InsectNet technology, a park for bugs, and more

I have been stumbling across bug (or insect) research at a greater rate than usual and while the ‘bug-informed’ community is, no doubt, acutely aware of the loss of insect life, the severity of the situation was a revelation to me.

Bugpocalypse (h/t IFLScience for the heads-up)

Caption: Drosophila use multiple mechanisms to adapt to hot, dry desert temperatures. Credit: Sarah Becan for the Gallio Lab/Northwestern University

This work looks at adaptation strategies, from a March 5, 2025 Northwestern University (Chicago, Illinois) news release (also on EurekAlert) by Win Reynolds, Note: Links have been removed,

  • Insect populations, foundational to food chains and pollination, have dramatically declined over the past 20 years due to rapid climate change
  • Scientists identify two ways fly species from different climates (high-altitude forest and hot desert) have adapted to temperature
  • Paper provides evidence that changes in brain wiring and heat sensitivity contributed to shifting preference to hot or cold conditions, respectively
  • Results may help predict the impact of ongoing climate change on insect distribution and behavior

EVANSTON, Ill. — Tiny, cold-blooded animals like flies depend on their environment to regulate body temperature, making them ideal “canaries in the mine” for gauging the impact of climate change on the behavior and distribution of animal species. Yet, scientists know relatively little about how insects sense and respond to temperature.

Using two species of flies from different climates — one from the cool, high-altitude forests of Northern California, the other hailing from the hot, dry deserts of the Southwest (both cousins of the common laboratory fly, Drosophila melanogaster) — Northwestern scientists discovered remarkable differences in the way each processes external temperature.

Forest flies showed increased avoidance of heat, potentially explained by higher sensitivity in their antennae’s molecular heat receptors, while desert flies were instead actively attracted to heat, a response that could be tracked to differences in brain wiring in a region of the fly brain that helps compute the valence (inherent attractiveness or aversiveness) of sensory cues.

The scientists believe these two mechanisms may have accompanied the evolution of each species as it adapted to its distinctive thermal environment, starting from a common ancestor dating back 40 million years (not long after dinosaurs went extinct).

These findings, published today (March 5 [2025]) in the journal Nature, help explain how animals evolve preferences for specific temperature environments and may help predict the impact of a rapidly changing climate on animal behavior and distribution.

‘Not enough people care about insects’

“Insects are especially threatened by climate change,” said Northwestern neurobiologist Marco Gallio. “Behavior is the first interface between an animal and its environment. Even before the struggle to survive or perish, animals can respond to climate change by migration and by changing their distribution. We are already seeing insect populations declining in many regions, and even insect vectors of disease like the Zika virus and malaria spreading into new areas.”

Gallio, a self-appointed “insect advocate,” is a professor in the neurobiology department and the Soretta and Henry Shapiro Research Professor in Molecular Biology at the Weinberg College of Arts and Sciences. His lab examines fruit flies and their sensing systems. Gallio acknowledged there is limited data because “not enough people care about the insects,” but that available figures record a dramatic decline in insects in the past 20 to 50 years. Though bug haters may rejoice, Gallio said the population decline in the animal group with the most species on Earth is nothing to celebrate.

In addition to their position at the foundation of most terrestrial food chains, insects pollinate 70% of our crops. Gallio said losing insect communities could cause catastrophic damage to ecosystems across the globe and have a direct impact on human wellbeing.

Understanding heat circuits in the brain

Previous work from the Gallio Lab focused on how small insects like laboratory flies respond to sensory cues like harmless and painful temperature changes.

“The common fruit fly is an especially powerful animal to study how the external world is represented and processed within the brain,” Gallio said. “Many years of work on fly genetics and neuroscience have given us a map of the fly brain more detailed than that of any other animal.”

In the present study, Gallio and colleagues wondered how the brain circuits and resulting behaviors compared in fly species that were very similar aside from their choices of thermal habitat.

Using genetic tools, including CRISPR [clustered regularly interspaced short palindromic repeats], to knock out certain genes and gene swaps between species, the team studied both the molecular and brain mechanisms that may explain species-specific differences in temperature preference.

Ph.D. student and lead author Matthew Capek explained that they first found differences in the molecules that detect heat, causing them to activate at different temperatures. And while Capek said the difference in activation could explain the forest flies’ preference for cooler environments, a shift in receptor activation was not enough to explain the behavior of the desert fly.

“The desert fly seemed actively attracted to warmer temperatures — around 90 degrees Fahrenheit compared to the forest fly’s sweet spot just below 70 degrees,” said Capek, who works in the Gallio lab. “In fact, the activation threshold of the antenna heat sensors corresponded to their favorite temperature range, which they will seek, rather than to a temperature they should avoid.”

“In other words, the fly doesn’t behave any longer as though the antennae are telling it to run away from dangerous heat; they seem to be telling it higher temperatures are good, and to approach them.”

High cost, high reward

Gallio was initially puzzled — deserts are hot, so it did not make sense that flies sought out heat — but a lab trip to the Anza-Borrego Desert of Southern California provided key inspiration.

“Deserts in this region are very hot during the day, but temperatures can drop extremely rapidly when the sun goes down, and night can be downright freezing,” said Alessia Para, also a key author of the study and a research associate professor of neurobiology. “Flies in this climate may need to constantly attend to the rapidly changing temperature and always seek the ideal range, finding shady spots during the day and hiding in cacti for warmth at night.”

Flies from more forgiving environments may instead ignore temperature except when it changes rapidly. Constantly detecting the right temperature is costly from an energy perspective, but for desert flies, it’s life or death.

“This comparative work is useful in a couple of different ways,” Gallio said. “When an animal is born, the brain is already programmed to know if many of the things it will encounter are bad or good for it, and we do not understand how that programming works.

“These fly species represent a natural experiment because a stimulus that is good for one species is bad for the other, and we can study the differences that make it so. We also want to learn more about how animals have been able to adapt to different temperatures during evolution, so that we may be able to better understand and even predict how they react to ongoing climate change. Of course we care about the insects, and we hope that what we learn may help us appreciate and protect them better.”

There’s more but first, a citation and a link to the Gallio Lab’s paper,

Evolution of temperature preference in flies of the genus Drosophila by Matthew Capek, Oscar M. Arenas, Michael H. Alpert, Emanuela E. Zaharieva, Iván D. Méndez-González, José Miguel Simões, Hamin Gil, Aldair Acosta, Yuqing Su, Alessia Para & Marco Gallio. Nature (2025) DOI: https://doi.org/10.1038/s41586-025-08682-z Published: 05 March 2025

This paper is behind a paywall.

Bugs Matter

Thanks to buglife.org.uk for the subhead and the report. Here’s more from their April 30, 2025 press release, Note: Links have been removed,

The troubling extent of insect declines across the UK has been highlighted once again by the results of the 2024 Bugs Matter citizen science survey published today. The latest data shows that the number of flying insects sampled on vehicle number plates, across the UK, has fallen by a staggering 63% since 2021.

The Bugs Matter survey, led by Kent Wildlife Trust in partnership with invertebrate charity Buglife, relies on a nationwide network of volunteer citizen scientists who record insect splats on their vehicle number plates after journeys, using the Bugs Matter app built by Natural Apptitude. Analysis of records from more than 25,000 journeys across the UK since 2021 shows an alarming decrease in bug splats but data from 2024 shows this decrease has slowed.

Insects are critical to ecosystem functioning and services. They pollinate crops, provide natural pest control, decompose waste and recycle nutrients, and underpin food chains that support birds, mammals and other wildlife. Without insects, the planet’s ecological systems would collapse.

Dr. Lawrence Ball of Kent Wildlife Trust stated: “This huge decrease in insect splats over such a short time is really alarming. It’s most likely that we are seeing the compounding effects of both a background rate of decline as well as a short term cycle of decline, perhaps linked to the extreme climate in the UK in recent years. Bug splats declined 8% from 2023 to 2024, following sharper drops of 44% in 2023 and 28% in 2022. This shows the rate of decline has slowed and it may even flatten or reverse next year. Continued support from citizen scientists is key to revealing the overall trend in insect numbers.”

The new data shows a decrease in insect splat rates across all the UK nations, with the sharpest fall between 2021 and 2024 recorded in Scotland at 65%. In England, the number of insect splats fell by 62%, in Wales by 64%, and in Northern Ireland by 55%, over the same time period.
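As a quick sanity check, the yearly declines quoted above compound to roughly the headline figure:

```python
# Sanity check: compounding the yearly declines quoted in the press release
# (28% in 2022, 44% in 2023, 8% in 2024) should roughly reproduce the
# headline 63% fall in splat rates since 2021.
yearly_declines = [0.28, 0.44, 0.08]

remaining = 1.0
for d in yearly_declines:
    remaining *= 1 - d  # fraction of the 2021 baseline still observed

cumulative_decline = 1 - remaining
print(f"{cumulative_decline:.1%}")  # 62.9%, i.e. the reported ~63%
```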

Andrew Whitehouse, from Buglife added: “The latest Bugs Matter data suggests that the abundance of flying insects in our countryside has fallen again. The consequences are potentially far-reaching, not only impacting the health of the natural world, but affecting so many of the essential services that nature provides for us. Human activities continue to have a huge impact on nature, habitat loss and damage, pesticide use, pollution, and climate change all contribute to the decline in insects. Society must heed the warning signs of ecological collapse, and take urgent action to restore nature.”

Participation in Bugs Matter is growing and the number of journeys recorded in 2024 far exceeded previous years. This is in part thanks to a new partnership with Openreach, owner of the nation’s second largest commercial van fleet.

Peter Stewart, Openreach’s UK Operations Director for Service Delivery said: “We’re excited to participate in the ‘Bugs Matter’ survey for the second year. Our engineers travel millions of miles annually across the UK to build and maintain our network, making it easy for them to measure insect splats on vehicle number plates. We recognise the crucial role pollinators play for all of us to thrive, and as part of our strategy to protect nature, we’re proud to support this campaign again. Last year, we contributed around 10% of the registered journeys, and with our 25,000-strong fleet, we aim to do even better this year.”

Andrew Whitehouse concluded: “Thank you to everyone who participated in the Bugs Matter survey in 2024. Your contribution has provided invaluable insights into the health of our insect populations and wider environment. We are relaunching the survey on May 1 this year [2025], and with our expansion into the Republic of Ireland, we hope to engage even more people in this crucial citizen science effort.”

The 2025 Bugs Matter survey will run from Thursday 1 May to Tuesday 30 September. It is quick, free and easy to get involved – simply download the free mobile phone app and start recording insect splats on vehicle journeys.

Expansion into Republic of Ireland

In response to growing interest and the need for more comprehensive data, the Bugs Matter survey is expanding into the Republic of Ireland for the 2025 season, thanks to the Amazon Web Services (AWS) Imagine Grant ‘Go Further, Faster’ Award received by Bugs Matter at the end of 2024. This grant provides vital resources to non-profit organisations looking to deploy cloud technology as a central tool to achieve their mission goals, and is providing Bugs Matter with a combination of funding, cloud computing credits, and engagement with AWS technical specialists. This marks an important step in building a more complete picture of insect populations across the British Isles, and future expansion of the Bugs Matter survey.

Dr. Lawrence Ball of Kent Wildlife Trust stated: “We’re extremely grateful for the financial and technical support from Amazon Web Services, which means we can launch in Ireland this year and in more countries in 2026. If you drive or know someone who drives in Ireland, please download the app, sign up, and take part! The UK results highlight the importance of understanding insect numbers elsewhere.”

The charities caution that continued long-term monitoring is essential to track the precise magnitude of these alarming trends, but stress that the current pace of decline is clearly ecologically unviable. By taking part in the Bugs Matter survey each year, citizen scientists can provide crucial data to better understand insect population patterns and support evidence-based conservation actions.

Zac Sherratt’s April 30, 2025 article for the British Broadcasting Corporation’s (BBC) online news website offers little more information,

A survey tracking the “staggering” decline in insect numbers across the UK and Ireland has begun.

The Bugs Matter survey, led by Kent Wildlife Trust and invertebrate charity Buglife, runs from 1 May to 30 September each year and sees “citizen scientists” record the number of bug splats on their vehicle number plates after a journey.

Dr Ball [Dr. Lawrence Ball of Kent Wildlife Trust] said: “Without insects, the planet’s ecological systems would collapse so this huge decrease in insect splats over such a short time is really alarming.”

Bug splats declined 8% in 2024, following sharper drops of 44% in 2023 and 28% in 2022.

Dr Ball said the slowing rate of decline shows the curve may flatten or even reverse next year.

More than 25,000 journeys have been analysed as part of the survey since 2021.

You can find the 2024 Bugs Matter Citizen Science Survey here and the Buglife organization (and signup information for the 2025 survey) here.

IFLScience (and Bugpocalypse)

There’s an interesting back story for IFLScience (which started life as a Facebook page titled, “I Fucking Love Science”). If you want to find out more about IFLScience’s origins and founder, there’s Elise Andrew’s Wikipedia entry.

Returning to the bugs, Dr. Russell Moul’s April 30 (?), 2025 article for IFLScience further highlights the plight of insects around the world, Note: Links have been removed,

Insect populations have been declining across the world at an alarming rate, but no one has been sure why. According to a new study, intense agricultural practices are at the top of the list of causes, but there are multiple interrelated factors that are all contributing to quickly killing off these vital creatures.

“Insects are fundamental to life on earth. They are really important pollinators, decomposers, and prey for birds, bats, reptiles, and other species”, Eliza Grames, Assistant Professor of Biological Sciences, told IFLScience.

“Insects pollinate around 80 percent of wild flowering plants, and 75 percent of agricultural crop species rely on insects for pollination. Without insects as decomposers, the earth would essentially be covered in manure. Cow manure takes 60 percent longer to deteriorate when insects are excluded from an area.”

But despite their importance, insect numbers are declining. In 2017, a devastating study demonstrated that there has been more than a 75 percent decline in insect populations over the last three decades. As a result, scientists have been seeking to identify the likely causes for this decline.

In order to understand which causes the scientific community has found so far, Grames and colleagues from Binghamton University examined some 175 scientific reviews, which contained over 500 hypothesized drivers behind the decline. This information allowed the team to create an interconnected network of 3,000 possible links (an approach known as meta-synthesis), which spanned everything from beekeeping and deforestation to urban sprawl and parasites.

Within this network of information, the team found that intensified agriculture was the most cited driver behind the mass die-off. This was linked to issues such as land-use change and insecticides. However, focusing solely on the most cited drivers is not the way to interpret this information. As the team note in their work, the results show how interconnected the drivers are, highlighting complex issues.

For example, the climate may be an important driver behind the decline, but there are aspects within that, such as extreme precipitation, fire, and temperature rises, which can then contribute to other drivers. It’s an extremely connected and synergistic network.
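The “most cited driver” reading of such a network can be sketched in a few lines. The reviews, drivers and links below are toy data invented for illustration, not the study’s actual dataset:

```python
# Illustrative sketch only: the actual meta-synthesis covered ~175 reviews
# and ~3,000 links; this toy network just shows how a "most cited driver"
# and its interconnections can be read off such a structure.
from collections import Counter

# (review_id, driver) citation pairs -- invented data
citations = [
    (1, "agricultural intensification"), (1, "insecticides"),
    (2, "agricultural intensification"), (2, "land-use change"),
    (3, "climate change"), (3, "agricultural intensification"),
    (4, "climate change"), (4, "extreme precipitation"),
]

# driver -> contributing sub-drivers hypothesized by reviews -- also invented
links = {
    "climate change": ["extreme precipitation", "fire", "temperature rise"],
    "agricultural intensification": ["land-use change", "insecticides"],
}

counts = Counter(driver for _, driver in citations)
most_cited, n = counts.most_common(1)[0]
print(most_cited, n)      # the most frequently cited driver
print(links[most_cited])  # its connected, downstream drivers
```

Even in this toy version, the point the authors make survives: the top-ranked driver is entangled with several others, so counting citations alone understates the interconnections.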

“The drivers of insect decline are really complex and there are many overlooked stressors that we should be thinking about and researching,” Grames told IFLScience.

If you have a little more time, you can find some interesting tidbits in Moul’s April 30 (?), 2025 article.

Here’s a link to and a citation for the recent meta-analysis/meta-synthesis mentioned in the article,

Meta-synthesis reveals interconnections among apparent drivers of insect biodiversity loss by Christopher A Halsch, Chris S Elphick, Christie A Bahlai, Matthew L Forister, David L Wagner, Jessica L Ware, Eliza M Grames. BioScience, biaf034 DOI: https://doi.org/10.1093/biosci/biaf034 Published: 22 April 2025

This paper is behind a paywall.

InsectNet

A February 6, 2025 news item on ScienceDaily announces an application that uses machine learning for insect identification,

A farmer notices an unfamiliar insect on a leaf.

Is this a pollinator? Or a pest? Good news at harvest time? Or bad? Need to be controlled? Or not?

That farmer can snap a picture, use a smartphone or computer to feed the photo into a web-based application called InsectNet and, with the help of machine learning technology, get back real-time information.

“The app identifies the insect and returns a prediction of its taxonomic classification and role in the ecosystem as a pest, predator, pollinator, parasitoid, decomposer, herbivore, indicator and invasive species,” said a scientific paper describing InsectNet recently published by the journal PNAS Nexus [PNAS stands for Proceedings of the National Academy of Sciences of the US]. Iowa State University’s Baskar Ganapathysubramanian and Arti Singh are the corresponding authors.

…

A February 5, 2025 Iowa State University news release (also on EurekAlert but published February 6, 2025), which originated the news item, delves further into InsectNet,

InsectNet – which is backed by a dataset of 12 million insect images, including many collected by citizen-scientists – provides identification and predictions for more than 2,500 insect species at more than 96% accuracy. When the application isn’t sure about an insect, it says it is uncertain, giving users more confidence when it does provide answers.

And, because the application was built as a global-to-local model, it can be geographically fine-tuned using expert-verified local and regional datasets. That makes it useful to farmers everywhere.
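The abstention behaviour described above (when the application isn’t sure, it says it is uncertain) can be sketched as a simple confidence threshold on classifier outputs. The class labels, logits and threshold here are invented for illustration and do not reflect InsectNet’s actual model or calibration:

```python
# Hedged sketch of "says it is uncertain": a classifier that abstains when
# its top softmax probability falls below a threshold. Labels, logits and
# the 0.8 threshold are invented; InsectNet's real model is described in
# the PNAS Nexus paper.
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_with_abstention(logits, labels, threshold=0.8):
    probs = softmax(logits)
    p = max(probs)
    label = labels[probs.index(p)]
    return label if p >= threshold else "uncertain"

labels = ["pollinator", "pest", "predator"]
print(predict_with_abstention([4.0, 0.5, 0.2], labels))  # "pollinator"
print(predict_with_abstention([1.1, 1.0, 0.9], labels))  # "uncertain"
```

Abstaining below a confidence threshold is one standard way to make a classifier’s answers more trustworthy when it does commit to a prediction, which matches the behaviour the news release describes.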

So, beware, armyworms, cutworms, grasshoppers, stink bugs and all the other harmful insects. And, hello, butterflies, bees and all the other pollinators. Good to see you, lady beetles, mantises and all the other pest predators.

“We envision InsectNet to complement existing approaches, and be part of a growing suite of AI technologies for addressing agricultural challenges,” the authors wrote.

A village of researchers

InsectNet’s ability to be fine-tuned for specific regions or countries makes it particularly useful, said Singh, an associate professor of agronomy.

In Iowa, for example, Singh said there are about 50 insect species particularly important to the state’s agricultural production. To identify and provide predictions about those insects, Singh said the project used about 500,000 insect images.

That could happen for farmers all over the globe. And wherever there isn’t sufficient data – these sophisticated models often require millions of images – for local fine-tuning, the global dataset is still available for farmers.

InsectNet isn’t just for farmers, though. Singh said it could also help agents at ports or border crossings identify invasive species. Or it could help researchers working on ecological studies.

So, the app is usable and flexible. But is it accessible?

You can’t go to an app store and download a version just yet, said Ganapathysubramanian, the Joseph and Elizabeth Anderlik Professor in Engineering and director of the AI Institute for Resilient Agriculture based at Iowa State. But the app is running on a server at Iowa State. With a QR code (see sidebar) or this URL (insectapp.las.iastate.edu/), users can upload insect pictures and get an identification and prediction.

This works throughout the stages of an insect’s life: from egg to larva to pupa to adult. It works with look-alike species. And it works with diverse image qualities and orientations.

The bottom line for any user is basic information about an insect: “Is this a pest?” Singh said. “Or is it a friend?”

Developers demonstrated the app during last August’s Farm Progress Show in Boone, Iowa. And now the research paper is introducing it to a broader, scientific audience.

But aren’t there already apps that help identify insects?

Yes, said Ganapathysubramanian, but they’re not to the scale of InsectNet and aren’t capable of global-to-local applications. And they’re also not open-source applications with technology that can be shared.

“Making InsectNet open source can encourage broader scientific efforts,” he said. “The scientific community can build on these efforts, rather than starting from scratch.”

The project also answered a lot of technical questions that could be applied to other projects, he said.

How much data is enough? Where can we get that much data? What can we do with noisy data?

How much computer power is necessary? How do we deal with so much data?

“Lastly, it takes a village of expertise to get to this point, right?” said Ganapathysubramanian.

It took agronomists and computer engineers and statisticians and data scientists and artificial intelligence specialists about two years to put InsectNet together and make it work.

“What we learned working with insects can be expanded to include weeds and plant diseases or any other related identification and classification problem in agriculture,” Singh said. “We’re very close to a one-stop shop for identifying all of these.”

Paper co-authors are:

Iowa State University

  • Shivani Chiranjeevi (first author)
  • Mojdeh Saadati
  • Talukder Z. Jubery
  • Daren Mueller
  • Matthew E. O’Neal
  • Asheesh K. Singh
  • Soumik Sarkar
  • Arti Singh (corresponding author)
  • Baskar Ganapathysubramanian (corresponding author)

Carnegie Mellon University

  • Jayanth Koushik
  • Aarti Singh

University of Arizona

  • Zi K. Deng
  • Nirav Merchant

Funding

The InsectNet project was supported by the U.S. Department of Agriculture’s National Institute of Food and Agriculture (through the AI Institute for Resilient Agriculture), the National Science Foundation (through COALESCE: COntext Aware LEarning for Sustainable CybEr-Agricultural Systems), the NSF’s Smart and Connected Communities Program, the USDA’s Current Research Information System Project, and Iowa State’s Plant Sciences Institute.

Here’s a link to and a citation for the paper,

InsectNet: Real-time identification of insects using an end-to-end machine learning pipeline by Shivani Chiranjeevi, Mojdeh Saadati, Zi K Deng, Jayanth Koushik, Talukder Z Jubery, Daren S Mueller, Matthew O’Neal, Nirav Merchant, Aarti Singh, Asheesh K Singh, Soumik Sarkar, Arti Singh, Baskar Ganapathysubramanian. PNAS Nexus, Volume 4, Issue 1, January 2025, pgae575, DOI: https://doi.org/10.1093/pnasnexus/pgae575 Published: 27 December 2024

This paper is open access.

Bugs and kids

A March 25, 2025 University of Adelaide (Australia) press release (also on EurekAlert but published March 24, 2025) announces some research on insect-related, school-based citizen science,

Pro-environmental behaviour increases among school students who participate in insect-related citizen science projects, according to new research from the University of Adelaide.

Students who participated in citizen science project Insect Investigators, which engages students in the discovery of new insects, not only expressed an intention to change their personal behaviour but also to encourage others to protect nature.

“As a result of their involvement in this program, students expressed intentions to further engage in insect–science–nature activities,” says the University of Adelaide’s Dr Erinn Fagan-Jeffries, who contributed to the study.

“In addition, teachers reported increased intentions to include insect-related topics in their teaching, which was positively associated with students’ own intentions for pro-environmental behaviour change.

“This suggests students’ response to the project influenced their teacher’s decision to include citizen science in their lessons.”

School-based citizen science projects facilitate authentic scientific interactions between research and educational institutions while exposing students to scientific processes.

“Teachers’ motivations for providing citizen science experiences to students were to create hands-on learning opportunities and to connect students with real science and scientists,” says Professor Patrick O’Connor AM, Director of the University’s School of Economics and Public Policy.

“Teachers reported interactions with researchers as invaluable. These interactions could take the form of in-person visits by team members, or even instructional videos and curriculum-linked teacher lesson plans.”

Incorporating insects into school-based citizen science projects can challenge widespread human misconceptions about insects and their roles in ecosystems, and foster human–insect connections.

“Given global concerns of rapid insect declines and the overarching biodiversity crisis, insect-focused, school-based citizen science projects can ultimately contribute towards equipping students with knowledge of, and actions to promote, insect conservation,” says lead author Dr Andy Howe, from the University of the Sunshine Coast.

“In Australia, approximately 33 per cent of insects are formally described; the remainder exist as ‘dark taxa’, to the detriment of environmental and biodiversity management initiatives.

“Encouraging more young people to engage in science not only engenders positive feelings in them towards the environment, it will also help to build the next generation of scientists who will fill in the vast knowledge gap that exists in the world of insects.”

Before getting to the link and citation, here’s an update on the Australian higher education ecosystem, from the March 24, 2025 version of the press release on EurekAlert,

The University of Adelaide and the University of South Australia are joining forces to become Australia’s new major university – Adelaide University. Building on the strengths, legacies and resources of two leading universities, Adelaide University will deliver globally relevant research at scale, innovative, industry-informed teaching and an outstanding student experience. Adelaide University will open its doors in January 2026. Find out more on the Adelaide University website.

Here’s a link to and a citation for the study,

Catching ‘the bug’: Investigating insects through school-based citizen science increases intentions for environmental activities in students and teachers by Andy G. Howe, Trang Thi Thu Nguyen, Patrick O’Connor, Alice Woodward, Sylvia Clarke, Nathan Ducker, Kate Dilger, Erinn P. Fagan-Jeffries. Austral Entomology, Volume 64, Issue 2, May 2025, e70004, DOI: https://doi.org/10.1111/aen.70004 First published online: 18 March 2025

This paper is open access.

You can find Insect Investigators here. BTW (from their About Us webpage), “Inspired by the Canadian School Malaise Trap Program [hosted by the University of Guelph], we’re working with schools across South Australia, Western Australia and Queensland to collect specimens of invertebrates: butterflies, spiders and more.”

Bugs and parks

The University of British Columbia (UBC) issued an April 22, 2025 news release (also received via email) by Sachi Wickramasinghe announcing research on ‘parks for bugs’,

As the days get longer and gardeners plan their spring planting, research from the University of British Columbia offers some good news this Earth Day: small, simple changes to urban green spaces can make a big difference for pollinators. The study, published in Ecology Letters, found that reducing lawn mowing and creating pollinator meadows – think of them as ‘parks for bugs’– significantly boosts pollinator diversity, creating healthier and more resilient ecosystems.

A buzzing success

The three-year study, conducted in collaboration with the City of Vancouver’s pollinator meadows program, surveyed pollinators in 18 urban parks across Vancouver, comparing parks where meadows were planted and mowing was restricted with parks that remained as standard turfgrass lawns.

And while the tall grass caused a small stir among some neighbours, the results were striking: parks with meadows saw an immediate increase in pollinator species, with 21 to 47 more wild bee and hoverfly species compared to parks without meadows. The increase persisted over the three-year study period, suggesting that the meadow parks also support pollinators in the long run.

More than 100 species of wild bees and hoverflies were identified, with 35 of them only found in parks with meadows – including the Vancouver and Nevada bumble bee, some miner bees such as the Milwaukee miner bee, the red-faced miner bee and several species of hoverflies.

“Many people think of urban landscapes as poor environments for biodiversity, but our research shows that small actions can have a lasting impact,” said lead author Jens Ulrich, a PhD candidate in the faculty of land and food systems. “You don’t need a lot of space or resources to make a difference.”

Urban landscapes as pollinator havens

Unlike farmland, where large fields with monocrops can limit pollinator movement, urban areas are full of green spaces—gardens, parks, and even roadside boulevards—that can serve as pollinator refuges. The patchwork of small habitats allows species to move freely and settle into restored areas quickly.

The research highlights the importance of maintaining and expanding such efforts. Ongoing management, such as adding more native plants and controlling invasive species, can further strengthen pollinator communities.

The findings also offer practical guidance for city planners and community groups looking to enhance urban green spaces, and have already informed the City of Vancouver’s long-term planning—helping to establish pollinator meadows as a permanent option for parks and shaping future efforts to balance ecological function with aesthetic and cultural values.

“With so much land dedicated to lawns, there’s a major opportunity to rethink how we use these spaces,” said co-author Dr. Risa Sargent, an associate professor in the faculty of land and food systems. “Even small patches of insect-friendly meadows can provide critical resources for pollinators.”

Whether you have a backyard, balcony, or community garden plot, you can support pollinators with these simple steps:

  • Reduce mowing: Pollinators thrive in areas where flowers are allowed to bloom. Consider letting a section of your lawn grow longer or mowing less frequently.
  • Plant native flowering shrubs and trees: Perennial species like native chokecherry, Pacific ninebark, oceanspray, native hawthorn, red flowering currant, salal, red-osier dogwood, snowberry and vine maple are great choices for British Columbia’s Lower Mainland.
  • Create a diverse habitat: Incorporate a variety of plants that bloom at different times of the year to provide food from spring to fall.
  • Avoid pesticides: Many urban areas, including Vancouver, have already restricted pesticide use, but avoiding chemical treatments in your own garden can further protect pollinators.
  • Leave natural nesting sites: Many native bees nest in the ground or in plant stems. Keeping some bare soil or leaving flower stalks through winter can provide valuable shelter.

Here’s a link to and a citation for the paper,

Habitat Restorations in an Urban Landscape Rapidly Assemble Diverse Pollinator Communities That Persist by Jens Ulrich, Risa D. Sargent. Ecology Letters, Volume 28, Issue 1, January 2025, e70037, DOI: https://doi.org/10.1111/ele.70037 First published online: 31 December 2024

This paper is open access.

You can find out more about Vancouver’s Pollinator meadows (project) here.

*May 26, 2025 at 3:07 pm PT: ‘abut’ corrected to ‘about’

China’s ex-UK ambassador clashes with ‘AI godfather’ on panel at AI Action Summit in France (February 10 – 11, 2025)

The Artificial Intelligence (AI) Action Summit held from February 10 – 11, 2025 in Paris seems to have been pretty exciting. President Emmanuel Macron announced a 109-billion-euro investment in the French AI sector on February 9, 2025 (I have more in my February 13, 2025 posting [scroll down to the ‘What makes Canadian (and Greenlandic) minerals and water so important?’ subhead]). I also have this snippet, which suggests Macron is eager to provide an alternative to US domination in the field of AI, from a February 10, 2025 posting on CGTN (China Global Television Network),

French President Emmanuel Macron announced on Sunday night [February 9, 2025] that France is set to receive a total investment of 109 billion euros (approximately $112 billion) in artificial intelligence over the coming years.

Speaking in a televised interview on public broadcaster France 2, Macron described the investment as “the equivalent for France of what the United States announced with ‘Stargate’.”

He noted that the funding will come from the United Arab Emirates, major American and Canadian investment funds [emphases mine], as well as French companies.

Prime Minister Justin Trudeau attended the AI Action Summit on Tuesday, February 11, 2025 according to a Canadian Broadcasting Corporation (CBC) news online article by Ashley Burke and Olivia Stefanovich,

Prime Minister Justin Trudeau warned U.S. Vice-President J.D. Vance that punishing tariffs on Canadian steel and aluminum will hurt his home state of Ohio, a senior Canadian official said. 

The two leaders met on the sidelines of an international summit in Paris Tuesday [February 11, 2025], as the Trump administration moves forward with its threat to impose 25 per cent tariffs on all steel and aluminum imports, including from its biggest supplier, Canada, effective March 12.

Speaking to reporters on Wednesday [February 12, 2025] as he departed from Brussels, Trudeau characterized the meeting as a brief chat.

“It was just a quick greeting exchange,” Trudeau said. “I highlighted that $2.2 billion worth of steel and aluminum exports from Canada go directly into the Ohio economy, often to go into manufacturing there.

“He nodded, and noted it, but it wasn’t a longer exchange than that.”

Vance didn’t respond to Canadian media’s questions about the tariffs while arriving at the summit on Tuesday [February 11, 2025].

Additional insight can be gained from a February 10, 2025 PBS (US Public Broadcasting Service) posting of an AP (Associated Press) article with contributions from Kelvin Chan and Angela Charlton in Paris, Ken Moritsugu in Beijing, and Aijaz Hussain in New Delhi,

JD Vance stepped onto the world stage this week for the first time as U.S. vice president, using a high-stakes AI summit in Paris and a security conference in Munich to amplify Donald Trump’s aggressive new approach to diplomacy.

The 40-year-old vice president, who was just 18 months into his tenure as a senator before joining Trump’s ticket, is expected, while in Paris, to push back on European efforts to tighten AI oversight while advocating for a more open, innovation-driven approach.

The AI summit has drawn world leaders, top tech executives, and policymakers to discuss artificial intelligence’s impact on global security, economics, and governance. High-profile attendees include Chinese Vice Premier Zhang Guoqing, signaling Beijing’s deep interest in shaping global AI standards.

Macron also called for “simplifying” rules in France and the European Union to allow AI advances in sectors like healthcare, mobility and energy, and to “resynchronize with the rest of the world.”

“We are most of the time too slow,” he said.

The summit underscores a three-way race for AI supremacy: Europe striving to regulate and invest, China expanding access through state-backed tech giants, and the U.S. under Trump prioritizing a hands-off approach.

Vance has signaled he will use the Paris summit as a venue for candid discussions with world leaders on AI and geopolitics.

“I think there’s a lot that some of the leaders who are present at the AI summit could do to, frankly — bring the Russia-Ukraine conflict to a close, help us diplomatically there — and so we’re going to be focused on those meetings in France,” Vance told Breitbart News.

Vance is expected to meet separately Tuesday with Indian Prime Minister Narendra Modi and European Commission President Ursula von der Leyen, according to a person familiar with planning who spoke on the condition of anonymity.

Modi is co-hosting the summit with Macron in an effort to prevent the sector from becoming a U.S.-China battle.

Indian Foreign Secretary Vikram Misri stressed the need for equitable access to AI to avoid “perpetuating a digital divide that is already existing across the world.”

But the U.S.-China rivalry overshadowed broader international talks.

The U.S.-China rivalry didn’t entirely overshadow the talks. At least one Chinese former diplomat chose to make her presence felt by chastising a Canadian academic, according to a February 11, 2025 article by Matthew Broersma for silicon.co.uk,

A representative of China at this week’s AI Action Summit in Paris stressed the importance of collaboration on artificial intelligence, while engaging in a testy exchange with Yoshua Bengio, a Canadian academic considered one of the “Godfathers” of AI.

Fu Ying, a former Chinese government official and now an academic at Tsinghua University in Beijing, said the name of China’s official AI Development and Safety Network was intended to emphasise the importance of collaboration to manage the risks around AI.

She also said tensions between the US and China were impeding the ability to develop AI safely.

… Fu Ying, a former vice minister of foreign affairs in China and the country’s former UK ambassador, took veiled jabs at Prof Bengio, who was also a member of the panel.

Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,

A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.

Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.

The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].

The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.

Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.

She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.

China now has its own equivalent; but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.

The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.

Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]

A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.

The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.

She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.

She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.

“The Chinese move faster [than the west] but it’s full of problems,” she said.

Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.

Most of the US tech giants do not share the tech which drives their products.

Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.

But Prof Bengio disagreed.

His view was that open source also left the tech wide open for criminals to misuse.

He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.

For anyone curious about Professor Bengio’s AI safety report, I have more information in a September 29, 2025 Université de Montréal (UdeM) press release,

The first international report on the safety of artificial intelligence, led by Université de Montréal computer-science professor Yoshua Bengio, was released today and promises to serve as a guide for policymakers worldwide. 

Announced in November 2023 at the AI Safety Summit at Bletchley Park, England, and inspired by the workings of the United Nations Intergovernmental Panel on Climate Change, the report consolidates leading international expertise on AI and its risks. 

Supported by the United Kingdom’s Department for Science, Innovation and Technology, Bengio, founder and scientific director of the UdeM-affiliated Mila – Quebec AI Institute, led a team of 96 international experts in drafting the report.

The experts were drawn from 30 countries, the U.N., the European Union and the OECD [Organisation for Economic Cooperation and Development]. Their report will help inform discussions next month at the AI Action Summit in Paris, France and serve as a global handbook on AI safety to help support policymakers.

Towards a common understanding

The most advanced AI systems in the world now have the ability to write increasingly sophisticated computer programs, identify cyber vulnerabilities, and perform on a par with human PhD-level experts on tests in biology, chemistry, and physics. 

In what is identified as a key development for policymakers to monitor, the AI Safety Report published today warns that AI systems are also increasingly capable of acting as AI agents, autonomously planning and acting in pursuit of a goal. 

As policymakers worldwide grapple with the rapid and unpredictable advancements in AI, the report contributes to bridging the gap by offering a scientific understanding of emerging risks to guide decision-making.  

The document sets out the first comprehensive, independent, and shared scientific understanding of advanced AI systems and their risks, highlighting how quickly the technology has evolved.  

Several areas require urgent research attention, according to the report, including how rapidly capabilities will advance, how general-purpose AI models work internally, and how they can be designed to behave reliably. 

Three distinct categories of AI risks are identified: 

  • Malicious use risks: these include cyberattacks, the creation of AI-generated child-sexual-abuse material, and even the development of biological weapons; 
  • System malfunctions: these include bias, reliability issues, and the potential loss of control over advanced general-purpose AI systems; 
  • Systemic risks: these stem from the widespread adoption of AI and include workforce disruption, privacy concerns, and environmental impacts.

The report places particular emphasis on the urgency of increasing transparency and understanding in AI decision-making as the systems become more sophisticated and the technology continues to develop at a rapid pace. 

While there are still many challenges in mitigating the risks of general-purpose AI, the report highlights promising areas for future research and concludes that progress can be made.   

Ultimately, it emphasizes that while AI capabilities could advance at varying speeds, their development and potential risks are not a foregone conclusion. The outcomes depend on the choices that societies and governments make today and in the future. 

“The capabilities of general-purpose AI have increased rapidly in recent years and months,” said Bengio. “While this holds great potential for society, AI also presents significant risks that must be carefully managed by governments worldwide.  

“This report by independent experts aims to facilitate constructive and evidence-based discussion around these risks and serves as a common basis for policymakers around the world to understand general-purpose AI capabilities, risks and possible mitigations.” 

The report is more formally known as the International AI Safety Report 2025 and can be found on the gov.uk website.

There have been two previous AI Safety Summits that I’m aware of and you can read about them in my May 21, 2024 posting about the one in Korea and in my November 2, 2023 posting about the first summit at Bletchley Park in the UK.

You can find the Canadian Artificial Intelligence Safety Institute (or AI Safety Institute) here and my coverage of DeepSeek’s release and the panic in the US artificial intelligence and the business communities that ensued in my January 29, 2025 posting.

Digital Culture Talks presented by The Space online February 12 – 13, 2025

A February 5, 2025 notice (received via email) from The Space, a UK Arts organization, announced a two-day series of talks on digital culture,

Digital Culture Talks 2025!

There’s just a week to go till The Space’s conference and we’re pleased to confirm our speakers for each of the roundtable talks on Day 1 and 2. There’s lots that will be of interest, including:

* A timely debate about how to make online communities safer
* An introduction to CreaTech – a £6.75 million investment to develop small, micro- and medium-sized businesses specialising in creative tech like video games and immersive reality – find out how to get involved
* Discussions on the role of artists in a digital world
* Explorations of digital accessibility, community ownership, engagement and empowerment.

Find out more here and below

Day 1
Digital communities and online harms
Wednesday 12 February

Digital accessibility, inclusion and community

Roundtable 1
How can we think differently about how we create digital content and challenge assumptions about what culture looks like? Exploring community ownership, engagement and empowerment through digital.

  • Zoe Partington – Acting CEO DaDa, Artist and Disability Consultant
  • Rachel Farrer – Associate Director, Cultural and Community Engagement Innovation Ecosystem, Coventry University
  • Parminder Dosanjh – Creative Director, Creative Black County
  • Jo Capper – Collaborative Programme Curator, Grand Union

Reducing online harms, how to make social media and online communities safer

Roundtable 2
In a world of increasingly polarised online spaces, what are the emerging trends and challenges when engaging audiences and building communities online?

Day 2
The role of artists in a digital world
Thursday 13 February

Calling all in the West Midlands!

Day 2 is taking place in person as well as streaming online. If you’d like to join us in person at the STEAMhouse in Birmingham, please register for free below.

As well as joining us for the great roundtables we have lined up, there’ll be a great chance to network in between sessions over lunch. Look forward to seeing you there!

Join us in person!

CreaTech, the Digital West Midlands and beyond – Local and Global [CreaTech is an initiative of the UK’s Creative Industries Council]

Roundtable 1
An introduction to CreaTech – a £6.75 million investment to develop small, micro- and medium-sized businesses specialising in creative tech like video games and immersive reality. Creatives and academics from across the Midlands and further afield discuss arising opportunities and what this means for the region and beyond.

  • Richard Willacy – General Director, Birmingham Opera Company 
  • Tom Rogers – Creative Content Producer, Birmingham Royal Ballet
  • Louise Latter – Head of Programme, BOM
  • Lamberto Coccioli – Project lead, CreaTech Frontiers, Professor of Music and Technology at the Royal Birmingham Conservatoire (BCU) 
  • Rachel Davis – Director of Warwick Enterprise, University of Warwick 

Platforming artists and storytellers – are artists and storytellers missing from modern discourse?

Roundtable 2
Artists and storytellers have historically played pivotal roles in shaping societal narratives and fostering cultural discourse. However, is their presence in mainstream discussions diminishing?

Come and join in the conversation!

Register to join us online

If you go to The Space’s Digital Culture Talks 2025 webpage, you’ll find a few more details. Clicking on the link to register will give you the event time appropriate to your timezone.

For anyone curious about The Space, from their homepage (scroll down about 60% of the way),

About us

Welcome to The Space. We help the arts, culture and heritage sector to engage audiences using digital and broadcast content and platforms.

As an independent not-for-profit organisation, our role is to fund the creation of new digital cultural content and provide free training, mentoring and online resources for organisations, artists and creative practitioners.

We are funded by a range of national and regional agencies, to enable you to build your digital skills, confidence and experience via practical advice and hands-on experience. We can also help you to find ways to make your digital content accessible to new and more diverse audiences.

We also offer a low-cost consultancy service for organisations who want to develop their digital cultural content strategy.

There you have it.

Your garden as a ‘living artwork’ for insects

Pollinator Pathmaker Eden Project Edition. Photo Royston Hunt. Courtesy Alexandra Daisy Ginsberg Ltd

I suppose you could call this a kind of citizen science as well as an art project. A September 11, 2024 news item on phys.org describes a new scientific art project designed for insects,

Gardens can become “living artworks” to help prevent the disastrous decline of pollinating insects, according to researchers working on a new project.

Pollinator Pathmaker is an artwork by Dr. Alexandra Daisy Ginsberg that uses an algorithm to generate unique planting designs that prioritize pollinators’ needs over human aesthetic tastes.

A September 11, 2024 University of Exeter press release (also on EurekAlert), which originated the news item, provides more detail about the research project,

Originally commissioned by the Eden Project in Cornwall in 2021, the general public can access the artist’s online tool (www.pollinator.art) to design and plant their own living artwork for local pollinators.

While pollinators – including bees, butterflies, moths, wasps, ants and beetles – are the main audience, the results may also be appealing to humans.

Pollinator Pathmaker allows users to input the specific details of their garden, including size of plot, location conditions, soil type, and play with how the algorithm will “solve” the planting to optimise it for pollinator diversity, rather than how it looks to humans.

The new research project – led by the universities of Exeter and Edinburgh – has received funding from UK Research and Innovation as part of a new cross research council responsive mode scheme to support exciting interdisciplinary research.

The project aims to demonstrate how an artwork can help to drive innovative ecological conservation, by asking residents in the village of Constantine in Cornwall to plant a network of Pollinator Pathmaker living artworks in their gardens. These will become part of the multidisciplinary study.

“Pollinators are declining rapidly worldwide and – with urban and agricultural areas often hostile to them – gardens are increasingly vital refuges,” said Dr Christopher Kaiser-Bunbury, of the Centre for Ecology and Conservation on Exeter’s Penryn Campus in Cornwall.

“Our research project brings together art, ecology, social science and philosophy to reimagine what gardens are, and what they’re for.

“By reflecting on fundamental questions like these, we will empower people to rethink the way they see gardens.

 “We hope Pollinator Pathmaker will help to create connected networks of pollinator-friendly gardens across towns and cities.”

Good luck with the pollinators!
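For the curious, the kind of “solving” the Pathmaker algorithm is described as doing can be sketched as a set-cover problem: keep picking the plant that supports the most pollinator groups not yet catered for. Here’s a toy Python version (the plant/pollinator data and the greedy rule are my own invention for illustration; the real tool uses curated horticultural datasets and a far more sophisticated optimiser):

```python
# Hypothetical sketch of diversity-first planting: greedily choose plants
# that add the most pollinator groups not yet supported by the garden.
# All plant/pollinator pairings below are invented for illustration.

plants = {
    "lavender": {"honeybee", "bumblebee", "butterfly"},
    "foxglove": {"bumblebee"},
    "ivy": {"hoverfly", "wasp", "honeybee"},
    "knapweed": {"butterfly", "beetle"},
    "yarrow": {"hoverfly", "beetle", "wasp"},
}

def plan_garden(plants, n_slots):
    """Greedy set-cover: each pick maximises newly supported pollinator groups."""
    chosen, covered = [], set()
    for _ in range(n_slots):
        best = max(plants, key=lambda p: len(plants[p] - covered))
        if not plants[best] - covered:
            break  # no plant adds anything new; stop early
        chosen.append(best)
        covered |= plants[best]
    return chosen, covered

selection, supported = plan_garden(plants, n_slots=3)
print(selection, supported)
```

Note that the greedy rule optimises for pollinator diversity, not human aesthetics, which is exactly the inversion the artwork is built around.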

Bio-hybrid robotics (living robots) needs public debate and regulation

A July 23, 2024 University of Southampton (UK) press release (also on EurekAlert but published July 22, 2024) describes the emerging science/technology of bio-hybrid robotics and a recent study about the ethical issues raised, Note 1: bio-hybrid may also be written as biohybrid; Note 2: Links have been removed,

Development of ‘living robots’ needs regulation and public debate

Researchers are calling for regulation to guide the responsible and ethical development of bio-hybrid robotics – a ground-breaking science which fuses artificial components with living tissue and cells.

In a paper published in Proceedings of the National Academy of Sciences [PNAS] a multidisciplinary team from the University of Southampton and universities in the US and Spain set out the unique ethical issues this technology presents and the need for proper governance.

Combining living materials and organisms with synthetic robotic components might sound like something out of science fiction, but this emerging field is advancing rapidly. Bio-hybrid robots using living muscles can crawl, swim, grip, pump, and sense their surroundings. Sensors made from sensory cells or insect antennae have improved chemical sensing. Living neurons have even been used to control mobile robots.

Dr Rafael Mestre from the University of Southampton, who specialises in emergent technologies and is co-lead author of the paper, said: “The challenges in overseeing bio-hybrid robotics are not dissimilar to those encountered in the regulation of biomedical devices, stem cells and other disruptive technologies. But unlike purely mechanical or digital technologies, bio-hybrid robots blend biological and synthetic components in unprecedented ways. This presents unique possible benefits but also potential dangers.”

Research publications relating to bio-hybrid robotics have increased continuously over the last decade. But the authors found that of the more than 1,500 publications on the subject at the time, only five considered its ethical implications in depth.

The paper’s authors identified three areas where bio-hybrid robotics present unique ethical issues:

– Interactivity – how bio-robots interact with humans and the environment

– Integrability – how and whether humans might assimilate bio-robots (such as bio-robotic organs or limbs)

– Moral status

In a series of thought experiments, they describe how a bio-robot for cleaning our oceans could disrupt the food chain, how a bio-hybrid robotic arm might exacerbate inequalities [emphasis mine], and how increasingly sophisticated bio-hybrid assistants could raise questions about sentience and moral value.

“Bio-hybrid robots create unique ethical dilemmas,” says Aníbal M. Astobiza, an ethicist from the University of the Basque Country in Spain and co-lead author of the paper. “The living tissue used in their fabrication, potential for sentience, distinct environmental impact, unusual moral status, and capacity for biological evolution or adaptation create unique ethical dilemmas that extend beyond those of wholly artificial or biological technologies.”

The paper is the first from the Biohybrid Futures project led by Dr Rafael Mestre, in collaboration with the Rebooting Democracy project. Biohybrid Futures is setting out to develop a framework for the responsible research, application, and governance of bio-hybrid robotics.

The paper proposes several requirements for such a framework, including risk assessments, consideration of social implications, and increasing public awareness and understanding.

Dr Matt Ryan, a political scientist from the University of Southampton and a co-author on the paper, said: “If debates around embryonic stem cells, human cloning or artificial intelligence have taught us something, it is that humans rarely agree on the correct resolution of the moral dilemmas of emergent technologies.

“Compared to related technologies such as embryonic stem cells or artificial intelligence, bio-hybrid robotics has developed relatively unattended by the media, the public and policymakers, but it is no less significant. We want the public to be included in this conversation to ensure a democratic approach to the development and ethical evaluation of this technology.”

In addition to the need for a governance framework, the authors set out actions that the research community can take now to guide their research.

“Taking these steps should not be seen as prescriptive in any way, but as an opportunity to share responsibility, taking a heavy weight away from the researcher’s shoulders,” says Dr Victoria Webster-Wood, a biomechanical engineer from Carnegie Mellon University in the US and co-author on the paper.

“Research in bio-hybrid robotics has evolved in various directions. We need to align our efforts to fully unlock its potential.”

Here’s a link to and a citation for the paper,

Ethics and responsibility in biohybrid robotics research by Rafael Mestre, Aníbal M. Astobiza, Victoria A. Webster-Wood, Matt Ryan, and M. Taher A. Saif. PNAS 121 (31) e2310458121 July 23, 2024 DOI: https://doi.org/10.1073/pnas.2310458121

This paper is open access.

Cyborg or biohybrid robot?

Earlier, I highlighted “… how a bio-hybrid robotic arm might exacerbate inequalities …” because it suggests cyborgs, which are not mentioned in the press release or in the paper. This seems like an odd omission but, over the years, terminology does change, although it’s not clear that’s the situation here.

I have two ‘definitions’. The first is from an October 21, 2019 article by Javier Yanes for OpenMind BBVA, Note: More about BBVA later,

The fusion between living organisms and artificial devices has become familiar to us through the concept of the cyborg (cybernetic organism). This approach consists of restoring or improving the capacities of the organic being, usually a human being, by means of technological devices. On the other hand, biohybrid robots are in some ways the opposite idea: using living tissues or cells to provide the machine with functions that would be difficult to achieve otherwise. The idea is that if soft robots seek to achieve this through synthetic materials, why not do so directly with living materials?

In contrast, there’s this from “Biohybrid robots: recent progress, challenges, and perspectives,” Note 1: Full citation for paper follows excerpt; Note 2: Links have been removed,

2.3. Cyborgs

Another approach to building biohybrid robots is the artificial enhancement of animals or using an entire animal body as a scaffold to manipulate robotically. The locomotion of these augmented animals can then be externally controlled, spanning three modes of locomotion: walking/running, flying, and swimming. Notably, these capabilities have been demonstrated in jellyfish (figure 4(A)) [139, 140], clams (figure 4(B)) [141], turtles (figure 4(C)) [142, 143], and insects, including locusts (figure 4(D)) [27, 144], beetles (figure 4(E)) [28, 145–158], cockroaches (figure 4(F)) [159–165], and moths [166–170].

….

The advantages of using entire animals as cyborgs are multifold. For robotics, augmented animals possess inherent features that address some of the long-standing challenges within the field, including power consumption and damage tolerance, by taking advantage of animal metabolism [172], tissue healing, and other adaptive behaviors. In particular, biohybrid robotic jellyfish, composed of a self-contained microelectronic swim controller embedded into live Aurelia aurita moon jellyfish, consumed one to three orders of magnitude less power per mass than existing swimming robots [172], and cyborg insects can make use of the insect’s hemolymph directly as a fuel source [173].

So, sometimes there’s a distinction and sometimes there’s not. I take this to mean that the field is still emerging and that’s reflected in evolving terminology.

Here’s a link to and a citation for the paper,

Biohybrid robots: recent progress, challenges, and perspectives by Victoria A Webster-Wood, Maria Guix, Nicole W Xu, Bahareh Behkam, Hirotaka Sato, Deblina Sarkar, Samuel Sanchez, Masahiro Shimizu and Kevin Kit Parker. Bioinspiration & Biomimetics, Volume 18, Number 1 015001 DOI 10.1088/1748-3190/ac9c3b Published 8 November 2022 • © 2022 The Author(s). Published by IOP Publishing Ltd

This paper is open access.

A few notes about BBVA and other items

BBVA is Banco Bilbao Vizcaya Argentaria according to its Wikipedia entry, Note: Links have been removed,

Banco Bilbao Vizcaya Argentaria, S.A. (Spanish pronunciation: [ˈbaŋko βilˈβao βiθˈkaʝa aɾxenˈtaɾja]), better known by its initialism BBVA, is a Spanish multinational financial services company based in Madrid and Bilbao, Spain. It is one of the largest financial institutions in the world, and is present mainly in Spain, Portugal, Mexico, South America, Turkey, Italy and Romania.[2]

BBVA’s OpenMind is, from their About us page,

OpenMind: BBVA’s knowledge community

OpenMind is a non-profit project run by BBVA that aims to contribute to the generation and dissemination of knowledge about fundamental issues of our time, in an open and free way. The project is materialized in an online dissemination community.

Sharing knowledge for a better future.

At OpenMind we want to help people understand the main phenomena affecting our lives; the opportunities and challenges that we face in areas such as science, technology, humanities or economics. Analyzing the impact of scientific and technological advances on the future of the economy, society and our daily lives is the project’s main objective, which always starts on the premise that a broader and greater quality knowledge will help us to make better individual and collective decisions.

As for other items, you can find my latest (biorobotic, cyborg, or bionic, depending on what terminology you want to use) jellyfish story in this June 6, 2024 posting. The Biohybrid Futures project mentioned in the press release can be found here, and the Rebooting Democracy project (unexpected in the context of an emerging science/technology) can be found here on this University of Southampton website.

Finally, you can find more on these stories (science/technology announcements and/or ethics research/issues) here by searching for ‘robots’ (tag and category), ‘cyborgs’ (tag), ‘machine/flesh’ (tag), ‘neuroprosthetic’ (tag), and ‘human enhancement’ (category).

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A mostly software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026. [109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While the January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” it also includes information about legislative efforts, although my May 1, 2023 posting titled “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27)” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US is always to be considered in these matters, and I have a November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-⁠Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
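As a back-of-envelope check of those quoted figures (my own arithmetic, not the report’s), you can ask how many six-month doublings a 350-million-fold increase implies:

```python
import math

# If training compute doubles every six months, a 350-million-fold
# increase implies this many doublings and this many years:
growth_factor = 350e6
doublings = math.log2(growth_factor)   # about 28.4 doublings
years = doublings * 0.5                # six months per doubling
print(f"{doublings:.1f} doublings over roughly {years:.1f} years")
```

That works out to roughly fourteen years of six-month doubling, which is broadly consistent with the “since 2010” timeline the press release describes.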

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.
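The registry idea described above could look, very roughly, like this (a sketch of mine, not from the report; chip IDs, entities, and the audit logic are invented for illustration, and a real registry would need tamper-resistant identifiers and reporting obligations in law):

```python
from collections import defaultdict

# Hypothetical sketch of an AI chip registry: every transfer of a uniquely
# identified chip is reported, so an auditor can tally holdings per entity.
class ChipRegistry:
    def __init__(self):
        self.owner = {}        # chip_id -> current holder
        self.transfers = []    # append-only audit log

    def report_transfer(self, chip_id, seller, buyer):
        # Reject reports from parties that don't actually hold the chip
        if chip_id in self.owner and self.owner[chip_id] != seller:
            raise ValueError(f"{seller} does not hold chip {chip_id}")
        self.owner[chip_id] = buyer
        self.transfers.append((chip_id, seller, buyer))

    def holdings(self):
        counts = defaultdict(int)
        for holder in self.owner.values():
            counts[holder] += 1
        return dict(counts)

registry = ChipRegistry()
registry.report_transfer("chip-001", "fab", "cloud_a")
registry.report_transfer("chip-002", "fab", "cloud_a")
registry.report_transfer("chip-001", "cloud_a", "lab_x")
print(registry.holdings())
```

Of course, a plain database like this is exactly what “ghost chips” would route around, which is why the report pairs the registry idea with on-chip identifiers.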

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
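The multi-party “start switch” described above amounts to a quorum, or k-of-n consent, rule. Here’s a minimal sketch of mine (party names and the quorum logic are invented for illustration; a real mechanism would rely on cryptographic threshold schemes rather than a plain set of approvals):

```python
# Hypothetical sketch of a multi-party unlock for risky AI training runs:
# compute is released only once a quorum of designated parties consents.
class TrainingRunGate:
    def __init__(self, parties, quorum):
        self.parties = set(parties)
        self.quorum = quorum
        self.approvals = set()

    def approve(self, party):
        if party not in self.parties:
            raise ValueError(f"{party} is not a designated approver")
        self.approvals.add(party)

    def unlocked(self):
        # With quorum == len(parties), any single party withholding
        # approval acts as a veto, as in the nuclear "two-man rule".
        return len(self.approvals) >= self.quorum

gate = TrainingRunGate({"regulator", "cloud_provider", "auditor"}, quorum=3)
gate.approve("regulator")
gate.approve("cloud_provider")
print(gate.unlocked())  # False: the auditor has not consented
gate.approve("auditor")
print(gate.unlocked())  # True: quorum reached
```

Setting the quorum below the number of parties would trade the veto property for robustness against a single unresponsive approver.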

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence” on the University of Cambridge’s Centre for the Study of Existential Risk.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

AI safety talks at Bletchley Park in November 2023

There’s a very good article about the upcoming AI (artificial intelligence) safety talks on the British Broadcasting Corporation (BBC) news website (plus some juicy perhaps even gossipy news about who may not be attending the event) but first, here’s the August 24, 2023 UK government press release making the announcement,

Iconic Bletchley Park to host UK AI Safety Summit in early November [2023]

Major global event to take place on the 1st and 2nd of November [2023].

– UK to host world first summit on artificial intelligence safety in November

– Talks will explore and build consensus on rapid, international action to advance safety at the frontier of AI technology

– Bletchley Park, one of the birthplaces of computer science, to host the summit

International governments, leading AI companies and experts in research will unite for crucial talks in November on the safe development and use of frontier AI technology, as the UK Government announces Bletchley Park as the location for the UK summit.

The major global event will take place on the 1st and 2nd November to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly.

To be hosted at Bletchley Park in Buckinghamshire, a significant location in the history of computer science development and once the home of British Enigma codebreaking – it will see coordinated action to agree a set of rapid, targeted measures for furthering safety in global AI use.

Preparations for the summit are already in full flow, with Matt Clifford and Jonathan Black recently appointed as the Prime Minister’s Representatives. Together they’ll spearhead talks and negotiations, as they rally leading AI nations and experts over the next three months to ensure the summit provides a platform for countries to work together on further developing a shared approach to agree the safety measures needed to mitigate the risks of AI.

Prime Minister Rishi Sunak said:

“The UK has long been home to the transformative technologies of the future, so there is no better place to host the first ever global AI safety summit than at Bletchley Park this November.

To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead.

With the combined strength of our international partners, thriving AI industry and expert academic community, we can secure the rapid international action we need for the safe and responsible development of AI around the world.”

Technology Secretary Michelle Donelan said:

“International collaboration is the cornerstone of our approach to AI regulation, and we want the summit to result in leading nations and experts agreeing on a shared approach to its safe use.

The UK is consistently recognised as a world leader in AI and we are well placed to lead these discussions. The location of Bletchley Park as the backdrop will reaffirm our historic leadership in overseeing the development of new technologies.

AI is already improving lives from new innovations in healthcare to supporting efforts to tackle climate change, and November’s summit will make sure we can all realise the technology’s huge benefits safely and securely for decades to come.”

The summit will also build on ongoing work at international forums including the OECD, Global Partnership on AI, Council of Europe, and the UN and standards-development organisations, as well as the recently agreed G7 Hiroshima AI Process.

The UK boasts strong credentials as a world leader in AI. The technology employs over 50,000 people, directly supports one of the Prime Minister’s five priorities by contributing £3.7 billion to the economy, and is the birthplace of leading AI companies such as Google DeepMind. It has also invested more on AI safety research than any other nation, backing the creation of the Foundation Model Taskforce with an initial £100 million.

Foreign Secretary James Cleverly said:

“No country will be untouched by AI, and no country alone will solve the challenges posed by this technology. In our interconnected world, we must have an international approach.

The origins of modern AI can be traced back to Bletchley Park. Now, it will also be home to the global effort to shape the responsible use of AI.”

Bletchley Park’s role in hosting the summit reflects the UK’s proud tradition of being at the frontier of new technology advancements. Since Alan Turing’s celebrated work some eight decades ago, computing and computer science have become fundamental pillars of life both in the UK and across the globe.

Iain Standen, CEO of the Bletchley Park Trust, said:

“Bletchley Park Trust is immensely privileged to have been chosen as the venue for the first major international summit on AI safety this November, and we look forward to welcoming the world to our historic site.

It is fitting that the very spot where leading minds harnessed emerging technologies to influence the successful outcome of World War 2 will, once again, be the crucible for international co-ordinated action.

We are incredibly excited to be providing the stage for discussions on global safety standards, which will help everyone manage and monitor the risks of artificial intelligence.”

The roots of AI can be traced back to the leading minds who worked at Bletchley during World War 2, with codebreakers Jack Good and Donald Michie among those who went on to write extensive works on the technology. In November [2023], it will once again take centre stage as the international community comes together to agree on important guardrails which ensure the opportunities of AI can be realised, and its risks safely managed.

The announcement follows the UK government allocating £13 million to revolutionise healthcare research through AI, unveiled last week. The funding supports a raft of new projects including transformations to brain tumour surgeries, new approaches to treating chronic nerve pain, and a system to predict a patient’s risk of developing future health problems based on existing conditions.

Tom Gerken’s August 24, 2023 BBC news article (an analysis by Zoe Kleinman follows as part of the article) fills in a few blanks, Note: Links have been removed,

World leaders will meet with AI companies and experts on 1 and 2 November for the discussions.

The global talks aim to build an international consensus on the future of AI.

The summit will take place at Bletchley Park, where Alan Turing, one of the pioneers of modern computing, worked during World War Two.

It is unknown which world leaders will be invited to the event, with a particular question mark over whether the Chinese government or tech giant Baidu will be in attendance.

The BBC has approached the government for comment.

The summit will address how the technology can be safely developed through “internationally co-ordinated action” but there has been no confirmation of more detailed topics.

It comes after US tech firm Palantir rejected calls to pause the development of AI in June, with its boss Alex Karp saying it was only those with “no products” who wanted a pause.

And in July [2023], children’s charity the Internet Watch Foundation called on Mr Sunak to tackle AI-generated child sexual abuse imagery, which it says is on the rise.

Kleinman’s analysis includes this, Note: A link has been removed,

Will China be represented? Currently there is a distinct east/west divide in the AI world but several experts argue this is a tech that transcends geopolitics. Some say a UN-style regulator would be a better alternative to individual territories coming up with their own rules.

If the government can get enough of the right people around the table in early November [2023], this is perhaps a good subject for debate.

Three US AI giants – OpenAI, Anthropic and Palantir – have all committed to opening London headquarters.

But there are others going in the opposite direction – British DeepMind co-founder Mustafa Suleyman chose to locate his new AI company InflectionAI in California. He told the BBC the UK needed to cultivate a more risk-taking culture in order to truly become an AI superpower.

Many of those who worked at Bletchley Park decoding messages during WW2 went on to write and speak about AI in later years, including codebreakers Irving John “Jack” Good and Donald Michie.

Soon after the War, [Alan] Turing proposed the imitation game – later dubbed the “Turing test” – which seeks to identify whether a machine can behave in a way indistinguishable from a human.

There is a Bletchley Park website, which sells tickets for tours.

Insight into political jockeying (i.e., some juicy news bits)

This was recently reported by the BBC in an October 17 (?), 2023 news article by Jessica Parker & Zoe Kleinman on BBC News online,

German Chancellor Olaf Scholz may turn down his invitation to a major UK summit on artificial intelligence, the BBC understands.

While no guest list has been published of an expected 100 participants, some within the sector say it’s unclear if the event will attract top leaders.

A government source insisted the summit is garnering “a lot of attention” at home and overseas.

The two-day meeting is due to bring together leading politicians as well as independent experts and senior execs from the tech giants, who are mainly US based.

The first day will bring together tech companies and academics for a discussion chaired by the Secretary of State for Science, Innovation and Technology, Michelle Donelan.

The second day is set to see a “small group” of people, including international government figures, in meetings run by PM Rishi Sunak.

Though no final decision has been made, it is now seen as unlikely that the German Chancellor will attend.

That could spark concerns of a “domino effect” with other world leaders, such as the French President Emmanuel Macron, also unconfirmed.

Government sources say there are heads of state who have signalled a clear intention to turn up, and the BBC understands that high-level representatives from many US-based tech giants are going.

The foreign secretary confirmed in September [2023] that a Chinese representative has been invited, despite controversy.

Some MPs within the UK’s ruling Conservative Party believe China should be cut out of the conference after a series of security rows.

It is not known whether there has been a response to the invitation.

China is home to a huge AI sector and has already created its own set of rules to govern responsible use of the tech within the country.

The US, a major player in the sector and the world’s largest economy, will be represented by Vice-President Kamala Harris.

Britain is hoping to position itself as a key broker as the world wrestles with the potential pitfalls and risks of AI.

However, Berlin is thought to want to avoid any messy overlap with G7 efforts, after the group of leading democratic countries agreed to create an international code of conduct.

Germany is also the biggest economy in the EU – which is itself aiming to finalise its own landmark AI Act by the end of this year.

It includes grading AI tools depending on how significant they are, so for example an email filter would be less tightly regulated than a medical diagnosis system.

The European Commission President Ursula von der Leyen is expected at next month’s summit, while it is possible Berlin could send a senior government figure such as its vice chancellor, Robert Habeck.

A source from the Department for Science, Innovation and Technology said: “This is the first time an international summit has focused on frontier AI risks and it is garnering a lot of attention at home and overseas.

“It is usual not to confirm senior attendance at major international events until nearer the time, for security reasons.”

Fascinating, eh?