Tag Archives: University of Cambridge

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A mostly software-focused approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software. According to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While the January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” it also includes information about legislative efforts. You might find my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US must always be considered in these matters. I have a November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website, where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-⁠Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the University of Cambridge’s Centre for the Study of Existential Risk website.
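As a rough illustration (entirely my own, not from the report), the kind of chip registry the authors describe – producers, sellers, and resellers reporting every transfer of uniquely identified chips – could be modelled as a simple ledger. All names and structures here are hypothetical:

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Transfer:
    chip_id: str   # unique identifier per chip, as the report suggests
    sender: str
    receiver: str

@dataclass
class ChipRegistry:
    """Toy ledger: every reported transfer updates who holds which chips."""
    holdings: dict = field(default_factory=lambda: defaultdict(set))

    def report(self, t: Transfer) -> None:
        # Move the chip from the sender's holdings to the receiver's
        self.holdings[t.sender].discard(t.chip_id)
        self.holdings[t.receiver].add(t.chip_id)

    def chips_held(self, party: str) -> int:
        # The "precise information on the amount of compute possessed"
        # the press release mentions, reduced to a chip count
        return len(self.holdings[party])

registry = ChipRegistry()
registry.report(Transfer("chip-001", "FabCo", "CloudCorp"))
registry.report(Transfer("chip-002", "FabCo", "CloudCorp"))
registry.report(Transfer("chip-001", "CloudCorp", "ResellerX"))
print(registry.chips_held("CloudCorp"))  # 1
```

A real registry would of course need auditing, authentication, and ways to handle the “ghost chips” problem the report flags; this sketch only shows why reporting every transfer yields a running picture of who holds what.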

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

‘Frozen smoke’ sensors can detect toxic formaldehyde in homes and offices

I love the fact that ‘frozen smoke’ is another term for aerogel (which has multiple alternative terms), and the latest work on this interesting material is from the University of Cambridge (UK), according to a February 9, 2024 news item on ScienceDaily,

Researchers have developed a sensor made from ‘frozen smoke’ that uses artificial intelligence techniques to detect formaldehyde in real time at concentrations as low as eight parts per billion, far beyond the sensitivity of most indoor air quality sensors.

The researchers, from the University of Cambridge, developed sensors made from highly porous materials known as aerogels. By precisely engineering the shape of the holes in the aerogels, the sensors were able to detect the fingerprint of formaldehyde, a common indoor air pollutant, at room temperature.

The proof-of-concept sensors, which require minimal power, could be adapted to detect a wide range of hazardous gases, and could also be miniaturised for wearable and healthcare applications. The results are reported in the journal Science Advances.

A February 9, 2024 University of Cambridge press release (also on EurekAlert), which originated the news item, describes the problem and the proposed solution in more detail, Note: Links have been removed,

Volatile organic compounds (VOCs) are a major source of indoor air pollution, causing watery eyes, burning in the eyes and throat, and difficulty breathing at elevated levels. High concentrations can trigger attacks in people with asthma, and prolonged exposure may cause certain cancers.

Formaldehyde is a common VOC and is emitted by household items including pressed wood products (such as MDF), wallpapers and paints, and some synthetic fabrics. For the most part, the levels of formaldehyde emitted by these items are low, but levels can build up over time, especially in garages where paints and other formaldehyde-emitting products are more likely to be stored.

According to a 2019 report from the campaign group Clean Air Day, a fifth of households in the UK showed notable concentrations of formaldehyde, with 13% of residences surpassing the recommended limit set by the World Health Organization (WHO).

“VOCs such as formaldehyde can lead to serious health problems with prolonged exposure even at low concentrations, but current sensors don’t have the sensitivity or selectivity to distinguish between VOCs that have different impacts on health,” said Professor Tawfique Hasan from the Cambridge Graphene Centre, who led the research.

“We wanted to develop a sensor that is small and doesn’t use much power, but can selectively detect formaldehyde at low concentrations,” said Zhuo Chen, the paper’s first author.

The researchers based their sensors on aerogels: ultra-light materials sometimes referred to as ‘liquid smoke’, since they are more than 99% air by volume. The open structure of aerogels allows gases to easily move in and out. By precisely engineering the shape, or morphology, of the holes, the aerogels can act as highly effective sensors.

Working with colleagues at Warwick University, the Cambridge researchers optimised the composition and structure of the aerogels to increase their sensitivity to formaldehyde, making them into filaments about three times the width of a human hair. The researchers 3D printed lines of a paste made from graphene, a two-dimensional form of carbon, and then freeze-dried the graphene paste to form the holes in the final aerogel structure. The aerogels also incorporate tiny semiconductors known as quantum dots.

The sensors they developed were able to detect formaldehyde at concentrations as low as eight parts per billion, which is 0.4 percent of the level deemed safe in UK workplaces. The sensors also work at room temperature, consuming very low power.

“Traditional gas sensors need to be heated up, but because of the way we’ve engineered the materials, our sensors work incredibly well at room temperature, so they use between 10 and 100 times less power than other sensors,” said Chen.

To improve selectivity, the researchers then incorporated machine learning algorithms into the sensors. The algorithms were trained to detect the ‘fingerprint’ of different gases, so that the sensor was able to distinguish the fingerprint of formaldehyde from other VOCs.

“Existing VOC detectors are blunt instruments – you only get one number for the overall concentration in the air,” said Hasan. “By building a sensor that is able to detect specific VOCs at very low concentrations in real time, it can give home and business owners a more accurate picture of air quality and any potential health risks.”

The researchers say that the same technique could be used to develop sensors to detect other VOCs. In theory, a device the size of a standard household carbon monoxide detector could incorporate multiple different sensors within it, providing real-time information about a range of different hazardous gases. The team at Warwick are developing a low-cost multi-sensor platform that will incorporate these new aerogel materials and, coupled with AI algorithms, detect different VOCs.

“By using highly porous materials as the sensing element, we’re opening up whole new ways of detecting hazardous materials in our environment,” said Chen.

The research was supported in part by the Henry Royce Institute, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI). Tawfique Hasan is a Fellow of Churchill College, Cambridge.

Here’s a link to and a citation for the paper,

Real-time, noise and drift resilient formaldehyde sensing at room temperature with aerogel filaments by Zhuo Chen, Binghan Zhou, Mingfei Xiao, Tynee Bhowmick, Padmanathan Karthick Kannan, Luigi G. Occhipinti, Julian William Gardner, and Tawfique Hasan. Science Advances 9 Feb 2024 Vol 10, Issue 6 DOI: 10.1126/sciadv.adk6856

This paper is open access.
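The machine-learning ‘fingerprint’ idea in the press release – training an algorithm to tell one gas’s response pattern from another – can be illustrated with a toy sketch. The gases, readings, and nearest-centroid method below are my own invented example, not the team’s actual algorithm or data:

```python
import math

# Invented training data: each gas produces a characteristic response
# pattern (its "fingerprint") across several sensor channels.
fingerprints = {
    "formaldehyde": [0.9, 0.2, 0.7],
    "toluene":      [0.3, 0.8, 0.4],
    "benzene":      [0.5, 0.5, 0.9],
}

def classify(reading):
    """Nearest-centroid match: name the gas whose fingerprint is closest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(fingerprints, key=lambda gas: dist(reading, fingerprints[gas]))

# A noisy reading that resembles the formaldehyde pattern
print(classify([0.85, 0.25, 0.65]))  # formaldehyde
```

The point of the sketch is the contrast the researchers draw with “blunt instrument” detectors: instead of one overall concentration number, a multi-channel response pattern lets a classifier name the specific VOC.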

Synthetic human embryos—what now? (1 of 2)

Usually, there’s a rough chronological order to how I introduce the research, but this time I’m looking at the term used to describe it, following up with the various news releases and commentaries about the research, and finishing with a Canadian perspective.

After writing this post (but before it was published), the Weizmann Institute of Science (Israel) made their September 6, 2023 announcement and things changed a bit. That’s in Part two.

Say what you really mean (a terminology issue)

First, it might be useful to investigate the term, ‘synthetic human embryos’ as Julian Hitchcock does in his June 29, 2023 article on Bristows website (h/t Mondaq’s July 5, 2023 news item), Note: Links have been removed,

“Synthetic Embryos” are neither Synthetic nor Embryos. So why are editors giving that name to stem cell-based models of human development?

One of the less convincing aspects of the last fortnight’s flurry of announcements about advances in simulating early human development (see here) concerned their name. Headlines galore (in newspapers and scientific journals) referred to “synthetic embryos”.

But embryo models, however impressive, are not embryos. To claim that the fundamental stages of embryo development that we learnt at school – fertilisation, cleavage and compaction – could now be bypassed to achieve the same result would be wrong. Nor are these objects “synthesised”: indeed, their interest to us lies in the ways in which they organise themselves. The researchers merely place the stem cells in a matrix in appropriate conditions, then stand back and watch them do it. Scientists were therefore unhappy about this use of the term in news media, and relieved when the International Society for Stem Cell Research (ISSCR) stepped in with a press release:

“Unlike some recent media reports describing this research, the ISSCR advises against using the term “synthetic embryo” to describe embryo models, because it is inaccurate and can create confusion. Integrated embryo models are neither synthetic nor embryos. While these models can replicate aspects of the early-stage development of human embryos, they cannot and will not develop to the equivalent of postnatal stage humans. Further, the ISSCR Guidelines prohibit the transfer of any embryo model to the uterus of a human or an animal.”

Although this was the ISSCR’s first attempt to put that position to the public, it had already made that recommendation to the research community two years previously. Its 2021 Guidelines for Stem Cell Research and Clinical Translation had recommended researchers to “promote accurate, current, balanced, and responsive public representations of stem cell research”. In particular:

“While organoids, chimeras, embryo models, and other stem cell-based models are useful research tools offering possibilities for further scientific progress, limitations on the current state of scientific knowledge and regulatory constraints must be clearly explained in any communications with the public or media. Suggestions that any of the current in vitro models can recapitulate an intact embryo, human sentience or integrated brain function are unfounded overstatements that should be avoided and contradicted with more precise characterizations of current understanding.”

Here’s a little bit about Hitchcock from his Bristows profile page,

  • Diploma Medical School, University of Birmingham (1975-78)
  • LLB, University of Wolverhampton
  • Diploma in Intellectual Property Law & Practice, University of Bristol
  • Qualified 1998

Following an education in medicine at the University of Birmingham and a career as a BBC science producer, Julian has focused on the law and regulation of life science technologies since 1997, practising in England and Australia. He joined Bristows with Alex Denoon in 2018.

Hitchcock’s June 29, 2023 article comments on why this term is being used,

I have a lot of sympathy with the position of the science writers and editors incurring the scientists’ ire. First, why should journalists have known of the ISSCR’s recommendations on the use of the term “synthetic embryo”? A journalist who found Recommendation 4.1 of the ISSCR Guidelines would probably not have found them specific enough to address the point, and the academic introduction containing the missing detail is hard to find. …

My second reason for being sympathetic to the use of the terrible term is that no suitable alternative has been provided, other than in the Stem Cell Reports paper, which recommends the umbrella terms “embryo models” or “stem cell based embryo models”. …

When asked why she had used the term “synthetic embryo”, the journalist I contacted remarked that, “We’re still working out the right language and it’s something we’re discussing and will no doubt evolve along with the science”.

It is absolutely in the public’s interest (and in the interest of science), that scientific research is explained in terms that the public understands. There is, therefore, a need, I think, for the scientific community to supply a name to the media or endure the penalties of misinformation …

In such an intensely competitive field of research, disagreement among researchers, even as to names, is inevitable. In consequence, however, journalists and their audiences are confronted by a slew of terms which may or may not be synonymous or overlapping, with no agreed term [emphasis mine] for the overall class of stem cell based embryo models. We cannot blame them if they make up snappy titles of their own [emphasis mine]. …

The announcement

The earliest date for the announcement at the International Society for Stem Cell Research meeting that I can find is Hannah Devlin’s June 14, 2023 article in The Guardian newspaper, Note: A link has been removed,

Scientists have created synthetic human embryos using stem cells, in a groundbreaking advance that sidesteps the need for eggs or sperm.

Scientists say these model embryos, which resemble those in the earliest stages of human development, could provide a crucial window on the impact of genetic disorders and the biological causes of recurrent miscarriage.

However, the work also raises serious ethical and legal issues as the lab-grown entities fall outside current legislation in the UK and most other countries.

The structures do not have a beating heart or the beginnings of a brain, but include cells that would typically go on to form the placenta, yolk sac and the embryo itself.

Prof Magdalena Żernicka-Goetz, of the University of Cambridge and the California Institute of Technology, described the work in a plenary address on Wednesday [June 14, 2023] at the International Society for Stem Cell Research’s annual meeting in Boston.

The (UK) Science Media Centre made expert comments available in a June 14, 2023 posting “expert reaction to Guardian reporting news of creation of synthetic embryos using stem cells.”

Two days later, this June 16, 2023 essay by Kathryn MacKay, Senior Lecturer in Bioethics, University of Sydney (Australia), appeared on The Conversation (h/t June 16, 2023 news item on phys.org), Note: Links have been removed,

Researchers have created synthetic human embryos using stem cells, according to media reports. Remarkably, these embryos have reportedly been created from embryonic stem cells, meaning they do not require sperm and ova.

This development, widely described as a breakthrough that could help scientists learn more about human development and genetic disorders, was revealed this week in Boston at the annual meeting of the International Society for Stem Cell Research.

The research, announced by Professor Magdalena Żernicka-Goetz of the University of Cambridge and the California Institute of Technology, has not yet been published in a peer-reviewed journal. But Żernicka-Goetz told the meeting these human-like embryos had been made by reprogramming human embryonic stem cells.

So what does all this mean for science, and what ethical issues does it present?

MacKay goes on to answer her own questions, from the June 16, 2023 essay, Note: A link has been removed,

One of these quandaries arises around whether their creation really gets us away from the use of human embryos.

Robin Lovell-Badge, the head of stem cell biology and developmental genetics at the Francis Crick Institute in London UK, reportedly said that if these human-like embryos can really model human development in the early stages of pregnancy, then we will not have to use human embryos for research.

At the moment, it is unclear if this is the case for two reasons.

First, the embryos were created from human embryonic stem cells, so it seems they do still need human embryos for their creation. Perhaps more light will be shed on this when Żernicka-Goetz’s research is published.

Second, there are questions about the extent to which these human-like embryos really can model human development.

Professor Magdalena Żernicka-Goetz’s research is published

Almost two weeks later the research from the Cambridge team (there are other teams and countries also racing; see Part two for the news from Sept. 6, 2023) was published, from a June 27, 2023 news item on ScienceDaily,

Cambridge scientists have created a stem cell-derived model of the human embryo in the lab by reprogramming human stem cells. The breakthrough could help research into genetic disorders and in understanding why and how pregnancies fail.

Published today [Tuesday, June 27, 2023] in the journal Nature, this embryo model is an organised three-dimensional structure derived from pluripotent stem cells that replicate some developmental processes that occur in early human embryos.

Use of such models allows experimental modelling of embryonic development during the second week of pregnancy. They can help researchers gain basic knowledge of the developmental origins of organs and specialised cells such as sperm and eggs, and facilitate understanding of early pregnancy loss.

A June 27, 2023 University of Cambridge press release (also on EurekAlert), which originated the news item, provides more detail about the work,

“Our human embryo-like model, created entirely from human stem cells, gives us access to the developing structure at a stage that is normally hidden from us due to the implantation of the tiny embryo into the mother’s womb,” said Professor Magdalena Zernicka-Goetz in the University of Cambridge’s Department of Physiology, Development and Neuroscience, who led the work.

She added: “This exciting development allows us to manipulate genes to understand their developmental roles in a model system. This will let us test the function of specific factors, which is difficult to do in the natural embryo.”

In natural human development, the second week of development is an important time when the embryo implants into the uterus. This is the time when many pregnancies are lost.

The new advance enables scientists to peer into the mysterious ‘black box’ period of human development – usually following implantation of the embryo in the uterus – to observe processes never directly observed before.

Understanding these early developmental processes holds the potential to reveal some of the causes of human birth defects and diseases, and to develop tests for these in pregnant women.

Until now, the processes could only be observed in animal models, using cells from zebrafish and mice, for example.

Legal restrictions in the UK currently prevent the culture of natural human embryos in the lab beyond day 14 of development: this time limit was set to correspond to the stage where the embryo can no longer form a twin. [emphasis mine]

Until now, scientists have only been able to study this period of human development using donated human embryos. This advance could reduce the need for donated human embryos in research.

Zernicka-Goetz says that while these models can mimic aspects of the development of human embryos, they cannot and will not develop to the equivalent of postnatal stage humans.

Over the past decade, Zernicka-Goetz’s group in Cambridge has been studying the earliest stages of pregnancy, in order to understand why some pregnancies fail and some succeed.

In 2021 and then in 2022 her team announced in Developmental Cell, Nature and Cell Stem Cell journals that they had finally created model embryos from mouse stem cells that can develop to form a brain-like structure, a beating heart, and the foundations of all other organs of the body.

The new models derived from human stem cells do not have a brain or beating heart, but they include cells that would typically go on to form the embryo, placenta and yolk sac, and develop to form the precursors of germ cells (that will form sperm and eggs).

Many pregnancies fail at the point when these three types of cells orchestrate implantation into the uterus and begin to send mechanical and chemical signals to each other, which tell the embryo how to develop properly.

There are clear regulations governing stem cell-based models of human embryos and all researchers doing embryo modelling work must first be approved by ethics committees. Journals require proof of this ethics review before they accept scientific papers for publication. Zernicka-Goetz’s laboratory holds these approvals.

“It is against the law and FDA regulations to transfer any embryo-like models into a woman for reproductive aims. These are highly manipulated human cells and their attempted reproductive use would be extremely dangerous,” said Dr Insoo Hyun, Director of the Center for Life Sciences and Public Learning at Boston’s Museum of Science and a member of Harvard Medical School’s Center for Bioethics.

Zernicka-Goetz also holds a position at the California Institute of Technology and is a NOMIS Distinguished Scientist and Scholar Awardee.

The research was funded by the Wellcome Trust and Open Philanthropy.

(There’s more about legal concerns further down in this post.)

Here’s a link to and a citation for the paper,

Pluripotent stem cell-derived model of the post-implantation human embryo by Bailey A. T. Weatherbee, Carlos W. Gantner, Lisa K. Iwamoto-Stohl, Riza M. Daza, Nobuhiko Hamazaki, Jay Shendure & Magdalena Zernicka-Goetz. Nature (2023) DOI: https://doi.org/10.1038/s41586-023-06368-y Published: 27 June 2023

This paper is open access.

Published the same day (June 27, 2023) is a paper (citation and link follow) also focused on studying human embryonic development using stem cells. First, there’s this from the Abstract,

Investigating human development is a substantial scientific challenge due to the technical and ethical limitations of working with embryonic samples. In the face of these difficulties, stem cells have provided an alternative to experimentally model inaccessible stages of human development in vitro …

This time the work is from a US/German team,

Self-patterning of human stem cells into post-implantation lineages by Monique Pedroza, Seher Ipek Gassaloglu, Nicolas Dias, Liangwen Zhong, Tien-Chi Jason Hou, Helene Kretzmer, Zachary D. Smith & Berna Sozen. Nature (2023) DOI: https://doi.org/10.1038/s41586-023-06354-4 Published: 27 June 2023

The paper is open access.

Legal concerns and a Canadian focus

A July 25, 2023 essay by Françoise Baylis and Jocelyn Downie of Dalhousie University (Nova Scotia, Canada) for The Conversation (h/t July 25, 2023 article on phys.org) covers the advantages of doing this work before launching into a discussion of legislation and limits in the UK and, more extensively, in Canada, Note: Links have been removed,

This research could increase our understanding of human development and genetic disorders, help us learn how to prevent early miscarriages, lead to improvements in fertility treatment, and — perhaps — eventually allow for reproduction without using sperm and eggs.

Synthetic human embryos — also called embryoid bodies, embryo-like structures or embryo models — mimic the development of “natural human embryos,” those created by fertilization. Synthetic human embryos include the “cells that would typically go on to form the embryo, placenta and yolk sac, and develop to form the precursors of germ cells (that will form sperm and eggs).”

Though research involving natural human embryos is legal in many jurisdictions, it remains controversial. For some people, research involving synthetic human embryos is less controversial because these embryos cannot “develop to the equivalent of postnatal stage humans.” In other words, these embryos are non-viable and cannot result in live births.

Now, for a closer look at the legalities in the UK and in Canada, from the July 25, 2023 essay, Note: Links have been removed,

The research presented by Żernicka-Goetz at the ISSCR meeting took place in the United Kingdom. It was conducted in accordance with the Human Fertilisation and Embryology Act 1990, with the approval of the U.K. Stem Cell Bank Steering Committee.

U.K. law limits the research use of human embryos to 14 days of development. An embryo is defined as “a live human embryo where fertilisation is complete, and references to an embryo include an egg in the process of fertilisation.”

Synthetic embryos are not created by fertilization and therefore, by definition, the 14-day limit on human embryo research does not apply to them. This means that synthetic human embryo research beyond 14 days can proceed in the U.K.

The door to the touted potential benefits — and ethical controversies — seems wide open in the U.K.

While the law in the U.K. does not apply to synthetic human embryos, the law in Canada clearly does. This is because the legal definition of an embryo in Canada is not limited to embryos created by fertilization [emphasis mine].

The Assisted Human Reproduction Act (the AHR Act) defines an embryo as “a human organism during the first 56 days of its development following fertilization or creation, excluding any time during which its development has been suspended.”

Based on this definition, the AHR Act applies to embryos created by reprogramming human embryonic stem cells — in other words, synthetic human embryos — provided such embryos qualify as human organisms.

A synthetic human embryo is a human organism. It is of the species Homo sapiens, and is thus human. It also qualifies as an organism — a life form — alongside other organisms created by means of fertilization, asexual reproduction, parthenogenesis or cloning.

Given that the AHR Act applies to synthetic human embryos, there are legal limits on their creation and use in Canada.

First, human embryos — including synthetic human embryos – can only be created for the purposes of “creating a human being, improving or providing instruction in assisted reproduction procedures.”

Given the state of the science, it follows that synthetic human embryos could legally be created for the purpose of improving assisted reproduction procedures.

Second, “spare” or “excess” human embryos — including synthetic human embryos — originally created for one of the permitted purposes, but no longer wanted for this purpose, can be used for research. This research must be done in accordance with the consent regulations which specify that consent must be for a “specific research project.”

Finally, all research involving human embryos — including synthetic human embryos — is subject to the 14-day rule. The law stipulates that: “No person shall knowingly… maintain an embryo outside the body of a female person after the fourteenth day of its development following fertilization or creation, excluding any time during which its development has been suspended.”

Putting this all together, the creation of synthetic embryos for improving assisted human reproduction procedures is permitted, as is research using “spare” or “excess” synthetic embryos originally created for this purpose — provided there is specific consent and the research does not exceed 14 days.

This means that while synthetic human embryos may be useful for limited research on pre-implantation embryo development, they are not available in Canada for research on post-implantation embryo development beyond 14 days.

The authors close with this comment about the prospects for expanding Canada’s 14-day limit, from the July 25, 2023 essay,

… any argument will have to overcome the political reality that the federal government is unlikely to open up the Pandora’s box of amending the AHR Act.

It therefore seems likely that synthetic human embryo research will remain limited in Canada for the foreseeable future.

As mentioned, in September 2023 there was a new development. See: Part two.

European medieval monks, Japanese scribes, and Middle Eastern chroniclers all contributed to volcanology

Volcanoes are not often a topic on this blog, which is focused on emerging science and technology. However, stories featuring scientific information from unexpected sources have long been a fascination of mine, and this April 5, 2023 news item on ScienceDaily shines a light on an unusual cast of medieval scientific observers spanning the globe,

By observing the night sky, medieval monks unwittingly recorded some of history’s largest volcanic eruptions. An international team of researchers, led by the University of Geneva (UNIGE), drew on readings of 12th and 13th century European and Middle Eastern chronicles, along with ice core and tree ring data, to accurately date some of the biggest volcanic eruptions the world has ever seen. Their results, reported in the journal Nature, uncover new information about one of the most volcanically active periods in Earth’s history, which some think helped to trigger the Little Ice Age, a long interval of cooling that saw the advance of European glaciers.

Caption: Illumination from the late 14th or early 15th century, which portrays two individuals observing a lunar eclipse. It features the words «La lune avant est eclipsee» (“The moon is eclipsed” in English). © Source gallica.bnf.fr / BnF Courtesy: Université de Genève

An April 5, 2023 Université de Genève (UNIGE) press release (also on EurekAlert), which originated the news item, includes observations from Japanese scribes along with those from medieval European monks and Middle Eastern scholars,

It took the researchers almost five years to examine hundreds of annals and chronicles from across Europe and the Middle East, in search of references to total lunar eclipses and their colouration. Total lunar eclipses occur when the moon passes into the Earth’s shadow. Typically, the moon remains visible as a reddish orb because it is still bathed in sunlight bent round the Earth by its atmosphere. But after a very large volcanic eruption, there can be so much dust in the stratosphere – the middle part of the atmosphere starting roughly where commercial aircraft fly – that the eclipsed moon almost disappears.

Medieval chroniclers recorded and described all kinds of historical events, including the deeds of kings and popes, important battles, and natural disasters and famines. Just as noteworthy were the celestial phenomena that might foretell such calamities. Mindful of the Book of Revelation, a vision of the end times that speaks of a blood-red moon, the monks were especially careful to take note of the moon’s coloration. Of the 64 total lunar eclipses that occurred in Europe between 1100 and 1300, the chroniclers had faithfully documented 51. In five of these cases, they also reported that the moon was exceptionally dark.

The contribution of Japanese scribes 

Asked what made him connect the monks’ records of the brightness and colour of the eclipsed moon with volcanic gloom, the lead author of the work, Sébastien Guillet, senior research associate at the Institute for environmental sciences at the UNIGE,  said: “I was listening to Pink Floyd’s Dark Side of the Moon album when I realised that the darkest lunar eclipses all occurred within a year or so of major volcanic eruptions. Since we know the exact days of the eclipses, it opened the possibility of using the sightings to narrow down when the eruptions must have happened.”

The researchers found that scribes in Japan took equal note of lunar eclipses. One of the best known, Fujiwara no Teika, wrote of an unprecedented dark eclipse observed on 2 December 1229: ‘the old folk had never seen it like this time, with the location of the disk of the Moon not visible, just as if it had disappeared during the eclipse… It was truly something to fear.’ The stratospheric dust from large volcanic eruptions was not only responsible for the vanishing moon. It also cooled summer temperatures by limiting the sunlight reaching the Earth’s surface. This in turn could bring ruin to agricultural crops.

Cross-checking text and data 

“We know from previous work that strong tropical eruptions can induce global cooling on the order of roughly 1°C over a few years,” said Markus Stoffel, full professor at the Institute for environmental sciences at the UNIGE and last author of the study, a specialist in converting measurements of tree rings into climate data, who co-designed the study. “They can also lead to rainfall anomalies with droughts in one place and floods in another.”

Despite these effects, people at the time could not have imagined that the poor harvests or the unusual lunar eclipses had anything to do with volcanoes – the eruptions themselves were all but one undocumented. “We only knew about these eruptions because they left traces in the ice of Antarctica and Greenland,” said co-author Clive Oppenheimer, professor at the Department of Geography at the University of Cambridge. “By putting together the information from ice cores and the descriptions from medieval texts we can now make better estimates of when and where some of the biggest eruptions of this period occurred.”
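The cross-checking described above can be sketched as a toy calculation: a recorded dark lunar eclipse implies a large eruption within roughly the preceding year, and intersecting that implied window with an ice-core date range narrows the eruption's timing. To be clear, this is only an illustrative sketch, not the study's actual climate-modelling method; the function name, the 12-month lead time, and the ice-core date range are my own invented assumptions (only Fujiwara no Teika's eclipse date of 2 December 1229 comes from the text above).

```python
# Toy illustration of eclipse/ice-core cross-dating. All tolerances and
# the ice-core range below are invented for demonstration purposes.
from datetime import date, timedelta

def narrow_eruption_window(ice_core_range, dark_eclipse_date, lead_months=12):
    """Aerosols from a major eruption reach the stratosphere within months,
    so a dark eclipse implies an eruption in the preceding ~year.
    Intersect that implied window with the ice-core date range."""
    implied_start = dark_eclipse_date - timedelta(days=30 * lead_months)
    start = max(ice_core_range[0], implied_start)
    end = min(ice_core_range[1], dark_eclipse_date)
    return (start, end) if start <= end else None  # None = no overlap

# Hypothetical example: suppose ice cores date a sulfate spike to 1228-1230;
# the dark eclipse of 2 December 1229 then narrows the eruption window.
window = narrow_eruption_window(
    (date(1228, 1, 1), date(1230, 12, 31)), date(1229, 12, 2))
print(window)
```

The point of the sketch is simply that an exactly dated eclipse observation acts as a hard upper bound on the eruption date, which is why the medieval records are so valuable alongside the multi-year uncertainty of ice-core layers.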

Climate and society affected 

To make the most of this integration, Sébastien Guillet worked with climate modellers to compute the most likely timing of the eruptions. “Knowing the season when the volcanoes erupted is essential, as it influences the spread of the volcanic dust and the cooling and other climate anomalies associated with these eruptions,” he said.

As well as helping to narrow down the timing and intensity of these events, what makes the findings significant is that the interval from 1100 to 1300 is known from ice core evidence to be one of the most volcanically active periods in history. Of the 15 eruptions considered in the new study, one in the mid-13th century rivals the famous 1815 eruption of Tambora that brought on ‘the year without a summer’ of 1816. The collective effect of the medieval eruptions on Earth’s climate may have led to the Little Ice Age, when winter ice fairs were held on the frozen rivers of Europe. “Improving our knowledge of these otherwise mysterious eruptions is crucial to understanding whether and how past volcanism affected not only climate but also society during the Middle Ages,” concludes the researcher.

Here’s a link to and a citation for the paper,

Lunar eclipses illuminate timing and climate impact of medieval volcanism by Sébastien Guillet, Christophe Corona, Clive Oppenheimer, Franck Lavigne, Myriam Khodri, Francis Ludlow, Michael Sigl, Matthew Toohey, Paul S. Atkins, Zhen Yang, Tomoko Muranaka, Nobuko Horikawa & Markus Stoffel. Nature volume 616, pages 90–95 (2023) Issue Date: 06 April 2023 DOI: https://doi.org/10.1038/s41586-023-05751-z Published online: 05 April 2023

This paper is open access.

Biohybrid device (a new type of neural implant) could restore limb function

A March 23, 2023 news item on ScienceDaily announces a neural implant that addresses failures due to scarring issues,

Researchers have developed a new type of neural implant that could restore limb function to amputees and others who have lost the use of their arms or legs.

In a study carried out in rats, researchers from the University of Cambridge used the device to improve the connection between the brain and paralysed limbs. The device combines flexible electronics and human stem cells — the body’s ‘reprogrammable’ master cells — to better integrate with the nerve and drive limb function.

Previous attempts at using neural implants to restore limb function have mostly failed, as scar tissue tends to form around the electrodes over time, impeding the connection between the device and the nerve. By sandwiching a layer of muscle cells reprogrammed from stem cells between the electrodes and the living tissue, the researchers found that the device integrated with the host’s body and the formation of scar tissue was prevented. The cells survived on the electrode for the duration of the 28-day experiment, the first time this has been monitored over such a long period.

A March 22, 2023 University of Cambridge press release (also on EurekAlert but published March 23, 2023) by Sarah Collins, delves further into the topic,

The researchers say that by combining two advanced therapies for nerve regeneration – cell therapy and bioelectronics – into a single device, they can overcome the shortcomings of both approaches, improving functionality and sensitivity.

While extensive research and testing will be needed before it can be used in humans, the device is a promising development for amputees or those who have lost function of a limb or limbs. The results are reported in the journal Science Advances.

A huge challenge when attempting to reverse injuries that result in the loss of a limb or the loss of function of a limb is the inability of neurons to regenerate and rebuild disrupted neural circuits.

“If someone has an arm or a leg amputated, for example, all the signals in the nervous system are still there, even though the physical limb is gone,” said Dr Damiano Barone from Cambridge’s Department of Clinical Neurosciences, who co-led the research. “The challenge with integrating artificial limbs, or restoring function to arms or legs, is extracting the information from the nerve and getting it to the limb so that function is restored.”

One way of addressing this problem is implanting a nerve in the large muscles of the shoulder and attaching electrodes to it. The problem with this approach is that scar tissue forms around the electrode, and it is only possible to extract surface-level information from the electrode.

To get better resolution, any implant for restoring function would need to extract much more information from the electrodes. And to improve sensitivity, the researchers wanted to design something that could work on the scale of a single nerve fibre, or axon.

“An axon itself has a tiny voltage,” said Barone. “But once it connects with a muscle cell, which has a much higher voltage, the signal from the muscle cell is easier to extract. That’s where you can increase the sensitivity of the implant.”

The researchers designed a biocompatible flexible electronic device that is thin enough to be attached to the end of a nerve. A layer of stem cells, reprogrammed into muscle cells, was then placed on the electrode. This is the first time that this type of stem cell, called an induced pluripotent stem cell, has been used in a living organism in this way.

“These cells give us an enormous degree of control,” said Barone. “We can tell them how to behave and check on them throughout the experiment. By putting cells in between the electronics and the living body, the body doesn’t see the electrodes, it just sees the cells, so scar tissue isn’t generated.”

The Cambridge biohybrid device was implanted into the paralysed forearm of the rats. The stem cells, which had been transformed into muscle cells prior to implantation, integrated with the nerves in the rat’s forearm. While the rats did not have movement restored to their forearms, the device was able to pick up the signals from the brain that control movement. If connected to the rest of the nerve or a prosthetic limb, the device could help restore movement.

The cell layer also improved the function of the device, by improving resolution and allowing long-term monitoring inside a living organism. The cells survived through the 28-day experiment: the first time that cells have been shown to survive an extended experiment of this kind.

The researchers say that their approach has multiple advantages over other attempts to restore function in amputees. In addition to its easier integration and long-term stability, the device is small enough that its implantation would only require keyhole surgery. Other neural interfacing technologies for the restoration of function in amputees require complex patient-specific interpretations of cortical activity to be associated with muscle movements, while the Cambridge-developed device is a highly scalable solution since it uses ‘off the shelf’ cells.

In addition to its potential for the restoration of function in people who have lost the use of a limb or limbs, the researchers say their device could also be used to control prosthetic limbs by interacting with specific axons responsible for motor control.

“This interface could revolutionise the way we interact with technology,” said co-first author Amy Rochford, from the Department of Engineering. “By combining living human cells with bioelectronic materials, we’ve created a system that can communicate with the brain in a more natural and intuitive way, opening up new possibilities for prosthetics, brain-machine interfaces, and even enhancing cognitive abilities.”

“This technology represents an exciting new approach to neural implants, which we hope will unlock new treatments for patients in need,” said co-first author Dr Alejandro Carnicer-Lombarte, also from the Department of Engineering.

“This was a high-risk endeavour, and I’m so pleased that it worked,” said Professor George Malliaras from Cambridge’s Department of Engineering, who co-led the research. “It’s one of those things that you don’t know whether it will take two years or ten before it works, and it ended up happening very efficiently.”

The researchers are now working to further optimise the devices and improve their scalability. The team have filed a patent application on the technology with the support of Cambridge Enterprise, the University’s technology transfer arm.

The technology relies on opti-ox™-enabled muscle cells. opti-ox is a precision cellular reprogramming technology that enables faithful execution of genetic programmes in cells, allowing them to be manufactured consistently at scale. The opti-ox enabled muscle iPSC cell lines used in the experiment were supplied by the Kotter lab [Mark Kotter] from the University of Cambridge. The opti-ox reprogramming technology is owned by the synthetic biology company bit.bio.

The research was supported in part by the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI), Wellcome, and the European Union’s Horizon 2020 Research and Innovation Programme.

Caption: In a study carried out in rats, researchers from the University of Cambridge used a biohybrid device to improve the connection between the brain and paralysed limbs. The device combines flexible electronics and human stem cells – the body’s ‘reprogrammable’ master cells – to better integrate with the nerve and drive limb function. Credit: University of Cambridge

Here’s a link to and a citation for the paper,

Functional neurological restoration of amputated peripheral nerve using biohybrid regenerative bioelectronics by Amy E. Rochford, Alejandro Carnicer-Lombarte, Malak Kawan, Amy Jin, Sam Hilton, Vincenzo F. Curto, Alexandra L. Rutz, Thomas Moreau, Mark R. N. Kotter, George G. Malliaras, and Damiano G. Barone. Science Advances 22 Mar 2023 Vol 9, Issue 12 DOI: 10.1126/sciadv.add8162

This paper is open access.

The synthetic biology company mentioned in the press release, bit.bio, is here.

Graphene goes to the moon

The people behind the European Union’s Graphene Flagship programme (if you need a brief explanation, keep scrolling down to the “What is the Graphene Flagship?” subhead) and the United Arab Emirates have got to be very excited about the announcement made in a November 29, 2022 news item on Nanowerk. Note: Canadians too have reason to be excited as of April 3, 2023, when it was announced that Canadian astronaut Jeremy Hansen was selected to be part of the team on NASA’s [US National Aeronautics and Space Administration] Artemis II mission to orbit the moon (April 3, 2023 CBC news online article by Nicole Mortillaro).

Graphene Flagship Partners University of Cambridge (UK) and Université Libre de Bruxelles (ULB, Belgium) paired up with the Mohammed bin Rashid Space Centre (MBRSC, United Arab Emirates), and the European Space Agency (ESA) to test graphene on the Moon. This joint effort sees the involvement of many international partners, such as Airbus Defense and Space, Khalifa University, Massachusetts Institute of Technology, Technische Universität Dortmund, University of Oslo, and Tohoku University.

The Rashid rover is planned to be launched on 30 November 2022 [Note: the launch appears to have occurred on December 11, 2022; keep scrolling for more about that] from Cape Canaveral in Florida and will land on a geologically rich and, as yet, only remotely explored area on the Moon’s nearside – the side that always faces the Earth. During one lunar day, equivalent to approximately 14 days on Earth, Rashid will move on the lunar surface investigating interesting geological features.

A November 29, 2022 Graphene Flagship press release (also on EurekAlert), which originated the news item, provides more details,

The Rashid rover wheels will be used for repeated exposure of different materials to the lunar surface. As part of this Material Adhesion and abrasion Detection experiment, graphene-based composites on the rover wheels will be used to understand if they can protect spacecraft against the harsh conditions on the Moon, and especially against regolith (also known as ‘lunar dust’).

Regolith is made of extremely sharp, tiny and sticky grains and, since the Apollo missions, it has been one of the biggest challenges lunar missions have had to overcome. Regolith is responsible for mechanical and electrostatic damage to equipment, and is therefore also hazardous for astronauts. It clogs spacesuits’ joints, obscures visors, erodes spacesuits and protective layers, and is a potential health hazard.  

University of Cambridge researchers from the Cambridge Graphene Centre produced graphene/polyether ether ketone (PEEK) composites. The interaction of these composites with the Moon regolith (soil) will be investigated. The samples will be monitored via an optical camera, which will record footage throughout the mission. ULB researchers will gather information during the mission and suggest adjustments to the path and orientation of the rover. Images obtained will be used to study the effects of the Moon environment and the regolith abrasive stresses on the samples.

This moon mission comes soon after the ESA announcement of the 2022 class of astronauts, including the Graphene Flagship’s own Meganne Christian, a researcher at Graphene Flagship Partner the Institute of Microelectronics and Microsystems (IMM) at the National Research Council of Italy.

“Being able to follow the Moon rover’s progress in real time will enable us to track how the lunar environment impacts various types of graphene-polymer composites, thereby allowing us to infer which of them is most resilient under such conditions. This will enhance our understanding of how graphene-based composites could be used in the construction of future lunar surface vessels,” says Sara Almaeeni, MBRSC science team lead, who designed Rashid’s communication system.

“New materials such as graphene have the potential to be game changers in space exploration. In combination with the resources available on the Moon, advanced materials will enable radiation protection, electronics shielding and mechanical resistance to the harshness of the Moon’s environment. The Rashid rover will be the first opportunity to gather data on the behavior of graphene composites within a lunar environment,” says Carlo Iorio, Graphene Flagship Space Champion, from ULB.

Leading up to the Moon mission, a variety of inks containing graphene and related materials, such as conducting graphene, insulating hexagonal boron nitride and graphene oxide, semiconducting molybdenum disulfide, prepared by the University of Cambridge and ULB were also tested on the MAterials Science Experiment Rocket 15 (MASER 15) mission, successfully launched on the 23rd of November 2022 from the Esrange Space Center in Sweden. This experiment, named ARLES-2 (Advanced Research on Liquid Evaporation in Space) and supported by European and UK space agencies (ESA, UKSA) included contributions from Graphene Flagship Partners University of Cambridge (UK), University of Pisa (Italy) and Trinity College Dublin (Ireland), with many international collaborators, including Aix-Marseille University (France), Technische Universität Darmstadt (Germany), York University (Canada), Université de Liège (Belgium), University of Edinburgh and Loughborough.

This experiment will provide new information about the printing of GRM inks in weightless conditions, contributing to the development of new additive manufacturing procedures in space, such as 3D printing. Such procedures are key for space exploration, during which replacement components are often needed and could be manufactured from functional inks.

“Our experiments on graphene and related materials deposition in microgravity pave the way for additive manufacturing in space. The study of the interaction of Moon regolith with graphene composites will address some key challenges brought about by the harsh lunar environment,” says Yarjan Abdul Samad, from the Universities of Cambridge and Khalifa, who prepared the samples and coordinated the interactions with the United Arab Emirates.

“The Graphene Flagship is spearheading the investigation of graphene and related materials (GRMs) for space applications. In November 2022, we had the first member of the Graphene Flagship appointed to the ESA astronaut class. We saw the launch of a sounding rocket to test printing of a variety of GRMs in zero gravity conditions, and the launch of a lunar rover that will test the interaction of graphene-based composites with the Moon surface. Composites, coatings and foams based on GRMs have been at the core of the Graphene Flagship investigations since its beginning. It is thus quite telling that, leading up to the Flagship’s 10th anniversary, these innovative materials are now to be tested on the lunar surface. This is timely, given the ongoing effort to bring astronauts back to the Moon, with the aim of building lunar settlements. When combined with polymers, GRMs can tailor the mechanical, thermal and electrical properties of their host matrices. These pioneering experiments could pave the way for widespread adoption of GRM-enhanced materials for space exploration,” says Andrea Ferrari, Science and Technology Officer and Chair of the Management Panel of the Graphene Flagship.

Caption: The MASER15 launch Credit: John-Charles Dupin

A pioneering graphene work and a first for the Arab World

A December 11, 2022 news item on Alarabiya news (and on CNN) describes the launch, which also marked the Arab world’s first mission to the Moon,

The United Arab Emirates’ Rashid Rover – the Arab world’s first mission to the Moon – was launched on Sunday [December 11, 2022], the Mohammed bin Rashid Space Center (MBRSC) announced on its official Twitter account.

The launch came after it was previously postponed for “pre-flight checkouts.”

A SpaceX Falcon 9 rocket carrying the UAE’s Rashid rover successfully took off from Cape Canaveral, Florida.

The Rashid rover – built by Emirati engineers from the UAE’s Mohammed bin Rashid Space Center (MBRSC) – is to be sent to regions of the Moon unexplored by humans.

What is the Graphene Flagship?

In 2013, the Graphene Flagship was chosen as one of two FET (Future and Emerging Technologies) funding projects (the other being the Human Brain Project) each receiving €1 billion to be paid out over 10 years. In effect, it’s a science funding programme specifically focused on research, development, and commercialization of graphene (a two-dimensional [it has length and width but no depth] material made of carbon atoms).

You can find out more about the flagship and about graphene here.

“Living in a Dream,” part of Cambridge Festival (on display March 31 and April 1, 2023 in the UK)

Caption: Dream artwork by Jewel Chang of Anglia Ruskin University, which will be on display at the Cambridge Festival. Credit: Jewel Chang, Anglia Ruskin University

Let’s clear up a few things. First, as noted in the headline, the Cambridge Festival (March 17 – April 2, 2023) is being held in the UK by the University of Cambridge in the town of Cambridge. Second, the specific festival event featured here is a display put together by students and professors at Anglia Ruskin University (ARU), also in the town of Cambridge, as part of the festival; it will be on view for two days, March 31 – April 1, 2023.

A March 27, 2023 ARU press release (also on EurekAlert) provides more details about the two-day display. Note: Links have been removed,

Dreams are being turned into reality as new research investigating the unusual experiences of people with depersonalisation symptoms is being brought to life in an art exhibition at Anglia Ruskin University (ARU) in Cambridge, England.

ARU neuroscientist Dr Jane Aspell has led a major international study into depersonalisation, funded by the Bial Foundation. The “Living in a Dream” project, results from which will be published later this year, found that people who experience depersonalisation symptoms sometimes experience life from a very different perspective, both while awake and while dreaming.

Those experiencing depersonalisation often report feeling as though they are not real and that their body does not belong to them. Dr Aspell’s study, which is the first to examine how people with this disorder experience dreams, collected almost 1,000 dream reports from participants.

Now these dreams have been recreated by eight students from ARU’s MA Illustration course and the artwork will go on display for the first time on 31 March and 1 April as part of the Cambridge Festival.

This collaboration between art and science, led by psychologist Matt Gwyther and illustrator Dr Nanette Hoogslag, with the support of artist and creative technologist Emily Godden, has resulted in 12 original artworks, which have been created using the latest audio-visual technologies, including artificial intelligence (AI), and are presented using a mix of audio-visual installation, virtual reality (VR) experiences, and traditional media.

Dr Jane Aspell, Associate Professor of Cognitive Neuroscience at ARU and Head of the Self and Body Lab, said: “People who experience depersonalisation sometimes feel detached from their self and body, and a common complaint is that it’s like they are watching their own life as a film.

“Because their waking reality is so different, myself and my international collaborators – Dr Anna Ciaunica, Professor Bigna Lenggenhager and Dr Jennifer Windt – were keen to investigate how they experience their dreams.

“People who took part in the study completed daily ‘dream diaries’, and it is fabulous to see how these dreams have been recreated by this group of incredibly talented artists.”

Matt Gwyther added: “Dreams are both incredibly visual and surreal, and you lose so much when attempting to put them into words. By bringing them to life as art, it has not only produced fabulous artwork, but it also helps us as scientists better understand the experiences of our research participants.”

Amongst the artists contributing to the exhibition is MA student Jewel Chang, who has recreated a dream about being chased. When the person woke up, they continued to experience it and were unsure whether they were experiencing the dream or reality.

False awakenings and multiple layers of dreams can be confusing, affecting our perception of time and space. Jewel used AI to create an environment with depth and endless moving patterns that makes the visitor feel trapped in their dream, unable to escape.

Kelsey Wu, meanwhile, used special 3D software and cameras to recreate a dream of floating over hills and forests, and losing balance. The immersive piece, with the audience invited to sit on a grass-covered floor, creates a sense of loss of control of the body, which moves in an abnormal and unbalanced way, and evokes a struggle between illusion and reality as the landscape continuously moves.

Dr Nanette Hoogslag, Course Leader for the MA in Illustration at ARU, said: “This project has been a unique challenge, where students not only applied themselves in supporting scientific research, but investigated and used a range of new technologies, including virtual reality and AI-generated imagery. The final pieces are absolutely remarkable, and also slightly unsettling!”

You can find out more about the 2023 Cambridge Festival here and about the Anglia Ruskin University exhibit, “Living in a Dream: A visual exploration of the self in dreams using AI technology” here.

Transformational machine learning (TML)

It seems machine learning is getting a tune-up. A November 29, 2021 news item on ScienceDaily describes research into improving machine learning from an international team of researchers,

Researchers have developed a new approach to machine learning that ‘learns how to learn’ and out-performs current machine learning methods for drug design, which in turn could accelerate the search for new disease treatments.

The method, called transformational machine learning (TML), was developed by a team from the UK, Sweden, India, and the Netherlands. It learns from multiple problems and improves its performance as it learns.

A November 29, 2021 University of Cambridge press release (also on EurekAlert), which originated the news item, describes the potential this new technique may have on drug discovery and more,

TML could accelerate the identification and production of new drugs by improving the machine learning systems which are used to identify them. The results are reported in the Proceedings of the National Academy of Sciences.

Most types of machine learning (ML) use labelled examples, and these examples are almost always represented in the computer using intrinsic features, such as the colour or shape of an object. The computer then forms general rules that relate the features to the labels.

“It’s sort of like teaching a child to identify different animals: this is a rabbit, this is a donkey and so on,” said Professor Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research. “If you teach a machine learning algorithm what a rabbit looks like, it will be able to tell whether an animal is or isn’t a rabbit. This is the way that most machine learning works – it deals with problems one at a time.”

However, this is not the way that human learning works: instead of dealing with a single issue at a time, we get better at learning because we have learned things in the past.

“To develop TML, we applied this approach to machine learning, and developed a system that learns information from previous problems it has encountered in order to better learn new problems,” said King, who is also a Fellow at The Alan Turing Institute. “Where a typical ML system has to start from scratch when learning to identify a new type of animal – say a kitten – TML can use the similarity to existing animals: kittens are cute like rabbits, but don’t have long ears like rabbits and donkeys. This makes TML a much more powerful approach to machine learning.”

The researchers demonstrated the effectiveness of their idea on thousands of problems from across science and engineering. They say it shows particular promise in the area of drug discovery, where this approach speeds up the process by checking what other ML models say about a particular molecule. A typical ML approach will search for drug molecules of a particular shape, for example. TML instead uses the connection of the drugs to other drug discovery problems.
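The core idea described above resembles representing each instance of a new problem by the predictions of models trained on related problems, then learning from that transformed representation. Here is a minimal sketch in Python using scikit-learn and synthetic data; the setup and all names are illustrative assumptions, not the paper’s actual implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for related tasks (e.g. drug-discovery assays):
# shared features, with targets that are correlated across tasks.
X = rng.normal(size=(300, 20))
shared_signal = X @ rng.normal(size=20)
related_tasks = [shared_signal + rng.normal(scale=0.5, size=300) for _ in range(5)]
new_task = shared_signal + rng.normal(scale=0.5, size=300)

# 1. Train one model per previously seen, related task.
base_models = [
    RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    for y in related_tasks
]

# 2. Transform: describe each instance by what the related-task models predict,
#    instead of (or in addition to) its intrinsic features.
X_tml = np.column_stack([m.predict(X) for m in base_models])

# 3. Learn the new task from the transformed representation.
tml_model = RandomForestRegressor(n_estimators=50, random_state=0)
tml_model.fit(X_tml[:200], new_task[:200])
score = tml_model.score(X_tml[200:], new_task[200:])  # R^2 on held-out data
```

In this toy setup, the transformed representation has one column per related task, so knowledge from earlier problems is carried into the new one through those predictions.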

“I was surprised how well it works – better than anything else we know for drug design,” said King. “It’s better at choosing drugs than humans are – and without the best science, we won’t get the best results.”

Here’s a link to and a citation for the paper,

Transformational machine learning: Learning how to learn from many related scientific problems by Ivan Olier, Oghenejokpeme I. Orhobor, Tirtharaj Dash, Andy M. Davis, Larisa N. Soldatova, Joaquin Vanschoren, and Ross D. King. PNAS December 7, 2021 118 (49) e2108013118; DOI: https://doi.org/10.1073/pnas.2108013118

This paper appears to be open access.