Graphene-like materials for first smart contact lenses with AR (augmented reality) vision, health monitoring, & content surfing?

A March 6, 2024 XPANCEO news release on EurekAlert (also posted March 11, 2024 on the Graphene Council blog and distributed by Mindset Consulting) announced research into graphene-like materials for smart contact lenses,

XPANCEO, a deep tech company developing the first smart contact lenses with XR vision, health monitoring, and content surfing features, in collaboration with the Nobel laureate Konstantin S. Novoselov (National University of Singapore, University of Manchester) and professor Luis Martin-Moreno (Instituto de Nanociencia y Materiales de Aragon), has announced in Nature Communications a groundbreaking discovery of new properties of rhenium diselenide and rhenium disulfide, enabling a novel mode of light-matter interaction with huge potential for integrated photonics, healthcare, and AR. Rhenium disulfide and rhenium diselenide are layered materials belonging to the family of graphene-like materials. Absorption and refraction in these materials have different principal directions, implying six degrees of freedom instead of a maximum of three in classical materials. As a result, rhenium disulfide and rhenium diselenide by themselves allow controlling the light propagation direction without any technological steps required for traditional materials like silicon and titanium dioxide.

Such surprising light-matter interaction in ReS2 and ReSe2 originates in the specific symmetry breaking observed in these materials. Symmetry plays a huge role in nature, human life, and material science. For example, almost all living things are built symmetrically. Therefore, in ancient times symmetry was also called harmony, as it was associated with beauty. Physical laws are also closely related to symmetry, such as the laws of conservation of energy and momentum. Violation of symmetry leads to the appearance of new physical effects and radical changes in the properties of materials. In particular, the water-ice phase transition is a consequence of a decrease in the degree of symmetry. In the case of ReS2 and ReSe2, the crystal lattice has the lowest possible degree of symmetry, which leads to the rotation of optical axes, the directions of symmetry of the material's optical properties, something previously observed only for organic materials. As a result, these materials make it possible to control the direction of light by changing the wavelength, which opens a unique way for light manipulation in next-generation devices and applications.
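In my own simplified notation (not drawn from the paper itself), the "six degrees of freedom instead of three" claim can be read off the complex dielectric tensor:

```latex
% Complex dielectric tensor: the real part governs refraction,
% the imaginary part governs absorption.
\varepsilon(\omega) = \varepsilon'(\omega) + i\,\varepsilon''(\omega)

% Each part is a real symmetric tensor, diagonalized by its own
% orthogonal frame (rotation matrix):
\varepsilon'(\omega)  = R'(\omega)\,\operatorname{diag}\big(\varepsilon'_1,\varepsilon'_2,\varepsilon'_3\big)\,R'(\omega)^{\top}
\qquad
\varepsilon''(\omega) = R''(\omega)\,\operatorname{diag}\big(\varepsilon''_1,\varepsilon''_2,\varepsilon''_3\big)\,R''(\omega)^{\top}
```

In high-symmetry crystals the same fixed rotation diagonalizes both parts, R'(ω) = R''(ω), so refraction and absorption share one set of three principal directions. In a triclinic crystal such as ReS2 or ReSe2 the symmetry is too low to pin the frames together: R'(ω) ≠ R''(ω), and both rotate as the wavelength changes, giving absorption and refraction their own principal directions, six directional degrees of freedom rather than three.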

“The discovery of unique properties in anisotropic materials is revolutionizing the fields of nanophotonics and optoelectronics, presenting exciting possibilities. These materials serve as a versatile platform for the advancement of optical devices, such as wavelength-switchable metamaterials, metasurfaces, and waveguides. Among the promising applications is the development of highly efficient biochemical sensors. These sensors have the potential to outperform existing analogs in terms of both sensitivity and cost efficiency. For example, they are anticipated to significantly reduce the expenses associated with hospital blood testing equipment, which is currently quite costly, potentially by several orders of magnitude. This will also allow the detection of dangerous diseases and viruses, such as cancer or COVID, at earlier stages,” says Dr. Valentyn S. Volkov, co-founder and scientific partner at XPANCEO, a scientist with an h-Index of 38 and over 8000 citations in leading international publications.

Beyond the healthcare industry, these novel properties of graphene-like materials can find applications in artificial intelligence and machine learning, facilitating the development of photonic circuits to create a fast and powerful computer suitable for machine learning tasks. A computer based on photonic circuits is a superior solution, transmitting more information per unit of time, and unlike electric currents, photons (light beams) flow across one another without interacting. Furthermore, the new material properties can be utilized in producing smart optics, such as contact lenses or glasses, specifically for advancing AR [augmented reality] features. Leveraging these properties will enhance image coloration and adapt images for individuals with impaired color perception, enabling them to see the full spectrum of colors.

Here’s a link to and a citation for the paper,

Wandering principal optical axes in van der Waals triclinic materials by Georgy A. Ermolaev, Kirill V. Voronin, Adilet N. Toksumakov, Dmitriy V. Grudinin, Ilia M. Fradkin, Arslan Mazitov, Aleksandr S. Slavich, Mikhail K. Tatmyshevskiy, Dmitry I. Yakubovsky, Valentin R. Solovey, Roman V. Kirtaev, Sergey M. Novikov, Elena S. Zhukova, Ivan Kruglov, Andrey A. Vyshnevyy, Denis G. Baranov, Davit A. Ghazaryan, Aleksey V. Arsenin, Luis Martin-Moreno, Valentyn S. Volkov & Kostya S. Novoselov. Nature Communications volume 15, Article number: 1552 (2024) DOI: Published: 06 March 2024

This paper is open access.

A kintsugi approach to fusion energy: seeing the beauty (strength) in your flaws

Kintsugi is the Japanese word for a type of repair that is also art. “Golden joinery” is the literal meaning of the word. From the Traditional Kyoto Culture_Kintsugi webpage,

Caption: An example of kintsugi repair by David Pike. (Photo courtesy of David Pike) [downloaded from]

A March 5, 2024 news item on links the art of kintsugi to fusion energy, specifically, managing plasma, Note: Links have been removed,

In the Japanese art of Kintsugi, an artist takes the broken shards of a bowl and fuses them back together with gold to make a final product more beautiful than the original.

That idea is inspiring a new approach to managing plasma, the super-hot state of matter, for use as a power source. Scientists are using the imperfections in magnetic fields that confine a reaction to improve and enhance the plasma in an approach outlined in a paper in the journal Nature Communications.

A March 5, 2024 Princeton Plasma Physics Laboratory (PPPL) news release (also on EurekAlert), which originated the news item, describes the research in more detail, Note: Links have been removed,

“This approach allows you to maintain a high-performance plasma, controlling instabilities in the core and the edge of the plasma simultaneously. That simultaneous control is particularly important and difficult to do. That’s what makes this work special,” said Joseph Snipes of the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL). He is PPPL’s deputy head of the Tokamak Experimental Science Department and was a co-author of the paper.

PPPL Physicist Seong-Moo Yang led the research team, which spans various institutions in the U.S. and South Korea. Yang says this is the first time any research team has validated a systematic approach to tailoring magnetic field imperfections to make the plasma suitable for use as a power source. These magnetic field imperfections are known as error fields. 

“Our novel method identifies optimal error field corrections, enhancing plasma stability,” Yang said. “This method was proven to enhance plasma stability under different plasma conditions, for example, when the plasma was under conditions of high and low magnetic confinement.”

Errors that are hard to correct

Error fields are typically caused by minuscule defects in the magnetic coils of the device that holds the plasma, which is called a tokamak. Until now, error fields were only seen as a nuisance because even a very small error field could cause a plasma disruption that halts fusion reactions and can damage the walls of a fusion vessel. Consequently, fusion researchers have spent considerable time and effort meticulously finding ways to correct error fields.

“It’s quite difficult to eliminate existing error fields, so instead of fixing these coil irregularities, we can apply additional magnetic fields surrounding the fusion vessel in a process known as error field correction,” Yang said. 

In the past, this approach would have also hurt the plasma’s core, making the plasma unsuitable for fusion power generation. This time, the researchers were able to eliminate instabilities at the edge of the plasma and maintain the stability of the core. The research is a prime example of how PPPL researchers are bridging the gap between today’s fusion technology and what will be needed to bring fusion power to the electrical grid. 

“This is actually a very effective way of breaking the symmetry of the system, so humans can intentionally degrade the confinement. It’s like making a very tiny hole in a balloon so that it will not explode,” said SangKyeun Kim, a staff research scientist at PPPL and paper co-author. Just as air would leak out of a small hole in a balloon, a tiny quantity of plasma leaks out of the error field, which helps to maintain its overall stability.

Managing the core and the edge of the plasma simultaneously

One of the toughest parts of managing a fusion reaction is getting both the core and the edge of the plasma to behave at the same time. There are ideal zones for the temperature and density of the plasma in both regions, and hitting those targets while eliminating instabilities is tough.

This study demonstrates that adjusting the error fields can simultaneously stabilize both the core and the edge of the plasma. By carefully controlling the magnetic fields produced by the tokamak’s coils, the researchers could suppress edge instabilities, also known as edge localized modes (ELMs), without causing disruptions or a substantial loss of confinement.

“We are trying to protect the device,” said PPPL Staff Research Physicist Qiming Hu, an author of the paper. 

Extending the research beyond KSTAR

The research was conducted using the KSTAR tokamak in South Korea, which stands out for its ability to adjust its magnetic error field configuration with great flexibility. This capability is crucial for experimenting with different error field configurations to find the most effective ones for stabilizing the plasma.

The researchers say their approach has significant implications for the design of future tokamak fusion pilot plants, potentially making them more efficient and reliable. They are currently working on an artificial intelligence (AI) version of their control system to make it more efficient.

“These models are fairly complex; they take a bit of time to calculate. But when you want to do something in a real-time control system, you can only afford a few milliseconds to do a calculation,” said Snipes. “Using AI, you can basically teach the system what to expect and be able to use that artificial intelligence to predict ahead of time what will be necessary to control the plasma and how to implement it in real-time.”

While their new paper highlights work done using KSTAR’s internal magnetic coils, Hu suggests future research with magnetic coils outside of the fusion vessel would be valuable because the fusion community is moving away from the idea of housing such coils inside the vacuum-sealed vessel due to the potential destruction of such components from the extreme heat of the plasma.

Researchers from the Korea Institute of Fusion Energy (KFE), Columbia University, and Seoul National University were also integral to the project.

The research was supported by: the U.S. Department of Energy under contract number DE-AC02-09CH11466; the Ministry of Science and ICT under the KFE R&D Program “KSTAR Experimental Collaboration and Fusion Plasma Research (KFE-EN2401-15)”; the National Research Foundation (NRF) grant No. RS-2023-00281272 funded through the Korean Ministry of Science, Information and Communication Technology and the New Faculty Startup Fund from Seoul National University; the NRF under grants No. 2019R1F1A1057545 and No. 2022R1F1A1073863; the National R&D Program through the NRF funded by the Ministry of Science & ICT (NRF-2019R1A2C1010757).

Here’s a link to and a citation for the paper,

Tailoring tokamak error fields to control plasma instabilities and transport by SeongMoo Yang, Jong-Kyu Park, YoungMu Jeon, Nikolas C. Logan, Jaehyun Lee, Qiming Hu, JongHa Lee, SangKyeun Kim, Jaewook Kim, Hyungho Lee, Yong-Su Na, Taik Soo Hahm, Gyungjin Choi, Joseph A. Snipes, Gunyoung Park & Won-Ha Ko. Nature Communications volume 15, Article number: 1275 (2024) DOI: Published: 10 February 2024

This paper is open access.

Squirrel observations in St. Louis: a story of bias in citizen science data

Squirrels and other members of the family Sciuridae. Credit: Chicoutimi (montage) Karakal AndiW National Park Service en:User:Markus Krötzsch The Lilac Breasted Roller Nico Conradie from Centurion, South Africa Hans Hillewaert Sylvouille National Park Service – Own work from Wikipedia/CC by 3.0 licence

A March 5, 2024 news item on introduces a story about squirrels, bias, and citizen science,

When biologist Elizabeth Carlen pulled up in her 2007 Subaru for her first look around St. Louis, she was already checking for the squirrels. Arriving as a newcomer from New York City, Carlen had scrolled through maps and lists of recent sightings in a digital application called iNaturalist. This app is a popular tool for reporting and sharing sightings of animals and plants.

People often start using apps like iNaturalist and eBird when they get interested in a contributory science project (also sometimes called a citizen science project). Armed with cellphones equipped with cameras and GPS, app-wielding volunteers can submit geolocated data that iNaturalist then translates into user-friendly maps. Collectively, these observations have provided scientists and community members greater insight into the biodiversity of their local environment and helped scientists understand trends in climate change, adaptation and species distribution.

But right away, Carlen ran into problems with the iNaturalist data in St. Louis.

A March 5, 2024 Washington University in St. Louis news release (also on EurekAlert) by Talia Ogliore, which originated the news item, describes the bias problem and the research it inspired, Note: Links have been removed,

“According to the app, Eastern gray squirrels tended to be mostly spotted in the south part of the city,” said Carlen, a postdoctoral fellow with the Living Earth Collaborative at Washington University in St. Louis. “That seemed weird to me, especially because the trees, or canopy cover, tended to be pretty even across the city.

“I wondered what was going on. Were there really no squirrels in the northern part of the city?” Carlen said. A cursory drive through a few parks and back alleys north of Delmar Boulevard told her otherwise: squirrels galore.

Carlen took to X, formerly Twitter, for advice. “Squirrels are abundant in the northern part of the city, but there are no recorded observations,” she mused. Carlen asked if others had experienced similar issues with iNaturalist data in their own backyards.

Many people responded, voicing their concerns and affirming Carlen’s experience. The maps on iNaturalist seemed clear, but they did not reflect the way squirrels were actually distributed across St. Louis. Instead, Carlen was looking at biased data.

Previous research has highlighted biases in data reported to contributory science platforms, but little work has articulated how these biases arise.

Carlen reached out to the scientists who responded to her Twitter post to brainstorm some ideas. They put together a framework that illustrates how social and ecological factors combine to create bias in contributory data. In a new paper published in People & Nature, Carlen and her co-authors shared this framework and offered some recommendations to help address the problems.

The scientists described four kinds of “filters” that can bias the reported species pool in contributory science projects:

* Participation filter. Participation reflects who is reporting the data, including where those people are located and the areas they have access to. This filter also may reflect whether individuals in a community are aware of an effort to collect data, or if they have the means and motivation to collect it.

* Detectability filter. An animal’s biology and behavior can impact whether people record it. For example, people are less likely to report sightings of owls or other nocturnal species.

* Sampling filter. People might be more willing to report animals they see when they are recreating (i.e. hanging out in a park), but not what they see while they’re commuting.

* Preference filter. People tend to ignore or filter out pests, nuisance species and uncharismatic or “boring” species. (“There’s not a lot of people photographing rats and putting them on iNaturalist — or pigeons, for that matter,” Carlen said.)
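The participation filter in particular can reproduce exactly the pattern Carlen noticed. Here is a toy simulation (my own illustration; the district names and every number in it are invented, not taken from the paper): squirrel abundance is identical in both districts, but the number of active app users is not, and the resulting "map" is badly skewed.

```python
import random

random.seed(1)

# Two districts with identical true squirrel abundance, but very
# different numbers of active app users (the "participation filter").
TRUE_SQUIRRELS = {"north": 50, "south": 50}   # per km^2, identical
OBSERVERS = {"north": 5, "south": 100}        # active reporters, unequal
P_REPORT = 0.02  # chance a given observer reports a given squirrel

def simulate_reports(district: str) -> int:
    """Count reported sightings in one district."""
    reports = 0
    for _ in range(OBSERVERS[district]):
        for _ in range(TRUE_SQUIRRELS[district]):
            if random.random() < P_REPORT:
                reports += 1
    return reports

counts = {d: simulate_reports(d) for d in ("north", "south")}
print(counts)  # far more southern reports despite identical abundance
```

The point of the sketch is that the biased output requires no difference in the animals at all; unequal participation alone is enough.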

In the paper, Carlen and her team applied their framework to data recorded in St. Louis as a case study. They showed that eBird and iNaturalist observations are concentrated in the southern part of the city, where more white people live. Uneven participation in St. Louis is likely a consequence of variables, such as race, income, and/or contemporary politics, which differ between northern and southern parts of the city, the authors wrote. The other filters of detectability, sampling and preference also likely influence species reporting in St. Louis.

Biased and unrepresentative data is not just a problem for urban ecologists, even if they are the ones who are most likely to notice it, Carlen said. City planners, environmental consultants and local nonprofits all sometimes use contributory science data in their work.

“We need to be very conscious about how we’re using this data and how we’re interpreting where animals are,” Carlen said.

Carlen shared several recommendations for researchers and institutions that want to improve contributory science efforts and help reduce bias. Basic steps include considering cultural relevance when designing a project, conducting proactive outreach with diverse stakeholders and translating project materials into multiple languages.

Data and conclusions drawn from contributory projects should be made publicly available, communicated in accessible formats and made relevant to participants and community members.

“It’s important that we work with communities to understand what their needs are — and then build a better partnership,” Carlen said. “We can’t just show residents the app and tell them that they need to use it, because that ignores the underlying problem that our society is still segregated and not everyone has the resources to participate.

“We need to build relationships with the community and understand what they want to know about the wildlife in their neighborhood,” Carlen said. “Then we can design projects that address those questions, provide resources and actively empower community members to contribute to data collection.”

Here’s a link to and a citation for the paper,

A framework for contextualizing social-ecological biases in contributory science data by Elizabeth J. Carlen, Cesar O. Estien, Tal Caspi, Deja Perkins, Benjamin R. Goldstein, Samantha E. S. Kreling, Yasmine Hentati, Tyus D. Williams, Lauren A. Stanton, Simone Des Roches, Rebecca F. Johnson, Alison N. Young, Caren B. Cooper, Christopher J. Schell. People & Nature Volume 6, Issue 2 April 2024 Pages 377-390 DOI: First published: 03 March 2024

This paper is open access.

Deriving gold from electronic waste

Caption: The gold nugget obtained from computer motherboards in three parts. The largest of these parts is around five millimetres wide. Credit: Photograph: ETH Zurich / Alan Kovacevic

A March 1, 2024 ETH Zurich press release (also on EurekAlert but published February 29, 2024) by Fabio Bergamin describes research into reclaiming gold from electronic waste, Note: A link has been removed.

In brief

  • Protein fibril sponges made by ETH Zurich researchers are hugely effective at recovering gold from electronic waste.
  • From 20 old computer motherboards, the researchers retrieved a 22-carat gold nugget weighing 450 milligrams.
  • Because the method utilises various waste and industry byproducts, it is not only sustainable but cost effective as well.

Transforming base materials into gold was one of the elusive goals of the alchemists of yore. Now Professor Raffaele Mezzenga from the Department of Health Sciences and Technology at ETH Zurich has accomplished something in that vein. He has not of course transformed another chemical element into gold, as the alchemists sought to do. But he has managed to recover gold from electronic waste using a byproduct of the cheesemaking process.

Electronic waste contains a variety of valuable metals, including copper, cobalt, and even significant amounts of gold. Recovering this gold from disused smartphones and computers is an attractive proposition in view of the rising demand for the precious metal. However, the recovery methods devised to date are energy-intensive and often require the use of highly toxic chemicals. Now, a group led by ETH Professor Mezzenga has come up with a very efficient, cost-effective, and above all far more sustainable method: with a sponge made from a protein matrix, the researchers have successfully extracted gold from electronic waste.

Selective gold adsorption

To manufacture the sponge, Mohammad Peydayesh, a senior scientist in Mezzenga’s Group, and his colleagues denatured whey proteins under acidic conditions and high temperatures, so that they aggregated into protein nanofibrils in a gel. The scientists then dried the gel, creating a sponge out of these protein fibrils.

To recover gold in the laboratory experiment, the team salvaged the motherboards from 20 old computers and extracted the metal parts. They dissolved these parts in an acid bath so as to ionise the metals.

When they placed the protein fibre sponge in the metal ion solution, the gold ions adhered to the protein fibres. Other metal ions can also adhere to the fibres, but gold ions do so much more efficiently. The researchers demonstrated this in their paper, which they have published in the journal Advanced Materials.

As the next step, the researchers heated the sponge. This reduced the gold ions into flakes, which the scientists subsequently melted down into a gold nugget. In this way, they obtained a nugget of around 450 milligrams out of the 20 computer motherboards. The nugget was 91 percent gold (the remainder being copper), which corresponds to 22 carats.
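Those figures are internally consistent, as a bit of arithmetic shows (using only the numbers in the release; fine gold is 24 carat):

```python
# Carat purity = 24 * mass fraction of gold. Figures from the ETH release.
nugget_mass_mg = 450
gold_fraction = 0.91   # remainder mostly copper

carats = 24 * gold_fraction                     # 21.84, rounds to 22 carat
gold_mass_mg = nugget_mass_mg * gold_fraction   # 409.5 mg of pure gold

print(round(carats), round(gold_mass_mg))  # 22 410
```

So the 450 mg nugget holds roughly 410 mg of pure gold, and 91 percent purity is indeed 22 carat to the nearest carat.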

Economically viable

The new technology is commercially viable, as Mezzenga’s calculations show: procurement costs for the source materials added to the energy costs for the entire process are 50 times lower than the value of the gold that can be recovered.

Next, the researchers want to develop the technology to ready it for the market. Although electronic waste is the most promising starting product from which they want to extract gold, there are other possible sources. These include industrial waste from microchip manufacturing or from gold-plating processes. In addition, the scientists plan to investigate whether they can manufacture the protein fibril sponges out of other protein-rich byproducts or waste products from the food industry.

“The fact I love the most is that we’re using a food industry byproduct to obtain gold from electronic waste,” Mezzenga says. In a very real sense, he observes, the method transforms two waste products into gold. “You can’t get much more sustainable than that!”

If you have a problem accessing either of the two previously provided links to the press release, you can try this February 29, 2024 news item on ScienceDaily.

Here’s a link to and a citation for the paper,

Gold Recovery from E-Waste by Food-Waste Amyloid Aerogels by Mohammad Peydayesh, Enrico Boschi, Felix Donat, Raffaele Mezzenga. Advanced Materials DOI: First published online: 23 January 2024

This paper is open access.

Corporate venture capital (CVC) and the nanotechnology market plus 2023’s top 10 countries’ nanotechnology patents

I have two brief nanotechnology commercialization stories from the same publication.

Corporate venture capital (CVC) and the nano market

From a March 23, 2024 article on, Note: Links have been removed,

Nanotechnology’s enormous potential across various sectors has long attracted the eye of investors, keen to capitalise on its commercial potency.

Yet the initial propulsion provided by traditional venture capital avenues was reined back when the reality of long development timelines, regulatory hurdles, and difficulty in translating scientific advances into commercially viable products became apparent.

While the initial flurry of activity declined in the early part of the 21st century, a new kid on the investing block has proved an enticing option beyond traditional funding methods.

Corporate venture capital has, over the last 10 years, emerged as a key plank in turning ideas into commercial reality.

Simply put, corporate venture capital (CVC) has seen large corporations, recognising the strategic value of nanotechnology, establish their own VC arms to invest in promising start-ups.

The likes of Samsung, Johnson & Johnson and BASF have all sought to get an edge on their competition by sinking money into start-ups in nano and other technologies, which could deliver benefits to them in the long term.

Unlike traditional VC firms, CVCs invest with a strategic lens, aligning their investments with their core business goals. For instance, BASF’s venture capital arm, BASF Venture Capital, focuses on nanomaterials with applications in coatings, chemicals, and construction.

It has an evergreen EUR 250 million fund available and will consider everything from seed to Series B investment opportunities.

Samsung Ventures takes a similar approach, explaining: “Our major investment areas are in semiconductors, telecommunication, software, internet, bioengineering and the medical industry from start-ups to established companies that are about to be listed on the stock market.”

While historically concentrated in North America and Europe, CVC activity in nanotechnology is expanding to Asia, with China being a major player.

China has, perhaps not surprisingly, seen considerable growth over the last decade in nano and few will bet against it being the primary driver of innovation over the next 10 years.

As ever, the long development cycles of emerging nano breakthroughs can frequently deter some CVCs with shorter investment horizons.

2023 Nanotechnology patent applications: which countries top the list?

A March 28, 2024 article from provides interesting data concerning patent applications,

In 2023, a total of 18,526 nanotechnology patent applications were published at the United States Patent and Trademark Office (USPTO) and the European Patent Office (EPO). The United States accounted for approximately 40% of these nanotechnology patent publications, followed by China, South Korea, and Japan in the next positions.

According to a statistical analysis conducted by StatNano using data from the Orbit database, the USPTO published 84% of the 18,526 nanotechnology patent applications in 2023, which is more than five times the number published by the EPO. However, the EPO saw a nearly 17% increase in nanotechnology patent publications compared to the previous year, while the USPTO’s growth was around 4%.
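The office-level split follows from the figures quoted above; the per-office counts below are my own back-of-envelope derivation from the stated total and share, not numbers reported directly by StatNano:

```python
# Derived from the StatNano figures quoted above: 18,526 total
# nanotechnology patent applications, ~84% published by the USPTO.
total_2023 = 18_526
uspto_share = 0.84

uspto = round(total_2023 * uspto_share)  # ~15,562 applications
epo = total_2023 - uspto                 # ~2,964 applications

print(uspto, epo)        # 15562 2964
print(uspto / epo > 5)   # True: "more than five times" the EPO's count
```

That ratio of roughly 5.3 to 1 matches the article's "more than five times" claim.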

Nanotechnology patents are defined, based on the ISO/TS 18110 standard, as those having at least one claim related to nanotechnology or patents classified with an IPC classification code related to nanotechnology, such as B82.

From the March 28, 2024 article,

Top 10 Countries Based on Published Patent Applications in the Field of Nanotechnology in USPTO in 2023

Rank¹   Country          USPTO applications   EPO applications   USPTO growth   EPO growth
1       United States    6,926                492                3.20%          17.40%
2       South Korea      1,715                476                13.40%         8.40%
…
10      Saudi Arabia     268                  3                  22.40%         0.00%

¹ Ranking based on the number of nanotechnology patent applications at the USPTO.

If you have a bit of time and interest, I suggest reading the March 28, 2024 article in its entirety.

Your gas stove may be emitting more polluting nanoparticles than your car exhaust

A February 27, 2024 news item on ScienceDaily describes research results that may startle anyone who’s listened to countless people rhapsodize about the superiority of gas stoves over any other kind,

Cooking on your gas stove can emit more nano-sized particles into the air than vehicles that run on gas or diesel, possibly increasing your risk of developing asthma or other respiratory illnesses, a new Purdue University study has found.

“Combustion remains a source of air pollution across the world, both indoors and outdoors. We found that cooking on your gas stove produces large amounts of small nanoparticles that get into your respiratory system and deposit efficiently,” said Brandon Boor, an associate professor in Purdue’s Lyles School of Civil Engineering, who led this research.

Based on these findings, the researchers would encourage turning on a kitchen exhaust fan while cooking on a gas stove.

The study, published in the journal PNAS [Proceedings of the National Academy of Sciences] Nexus, focused on tiny airborne nanoparticles that are only 1-3 nanometers in diameter, which is just the right size for reaching certain parts of the respiratory system and spreading to other organs.

A February 27, 2024 Purdue University news release by Kayla Albert (also on EurekAlert), which originated the news item, provides more detail about the research, Note: Links have been removed,

Recent studies have found that children who live in homes with gas stoves are more likely to develop asthma. But not much is known about how particles smaller than 3 nanometers, called nanocluster aerosol, grow and spread indoors because they’re very difficult to measure.

“These super tiny nanoparticles are so small that you’re not able to see them. They’re not like dust particles that you would see floating in the air,” Boor said. “After observing such high concentrations of nanocluster aerosol during gas cooking, we can’t ignore these nano-sized particles anymore.”

Using state-of-the-art air quality instrumentation provided by the German company GRIMM AEROSOL TECHNIK, a member of the DURAG GROUP, Purdue researchers were able to measure these tiny particles down to a single nanometer while cooking on a gas stove in a “tiny house” lab. They collaborated with Gerhard Steiner, a senior scientist and product manager for nano measurement at GRIMM AEROSOL. 

Called the Purdue zero Energy Design Guidance for Engineers (zEDGE) lab, the tiny house has all the features of a typical home but is equipped with sensors for closely monitoring the impact of everyday activities on a home’s air quality. With this testing environment and the instrument from GRIMM AEROSOL, a high-resolution particle size magnifier—scanning mobility particle sizer (PSMPS), the team collected extensive data on indoor nanocluster aerosol particles during realistic cooking experiments.

This magnitude of high-quality data allowed the researchers to compare their findings with known outdoor air pollution levels, which are more regulated and understood than indoor air pollution. They found that as many as 10 quadrillion nanocluster aerosol particles could be emitted per kilogram of cooking fuel — matching or exceeding those produced from vehicles with internal combustion engines. 

This would mean that adults and children could be breathing in 10-100 times more nanocluster aerosol from cooking on a gas stove indoors than they would from car exhaust while standing on a busy street.

“You would not use a diesel engine exhaust pipe as an air supply to your kitchen,” said Nusrat Jung, a Purdue assistant professor of civil engineering who designed the tiny house lab with her students and co-led this study.

Purdue civil engineering PhD student Satya Patra made these findings by looking at data collected in the tiny house lab and modeling the various ways that nanocluster aerosol could transform indoors and deposit into a person’s respiratory system.

The models showed that nanocluster aerosol particles are very persistent in their journey from the gas stove to the rest of the house. Trillions of these particles were emitted within just 20 minutes of boiling water or making grilled cheese sandwiches or buttermilk pancakes on a gas stove.

Even though many particles rapidly diffused to other surfaces, the models indicated that approximately 10 billion to 1 trillion particles could deposit into an adult’s head airways and tracheobronchial region of the lungs. These doses would be even higher for children — the smaller the human, the more concentrated the dose.
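The kind of estimate described above can be sketched with a simple well-mixed box model: cooking emissions raise the indoor concentration, ventilation and surface deposition remove particles, and the inhaled dose is integrated over time. To be clear, every parameter value below (room volume, emission rate, loss rate, breathing rate, deposition fraction) is an illustrative placeholder I've chosen, not a number from the study:

```python
# Illustrative well-mixed box model of indoor nanocluster aerosol during
# a 20-minute gas-stove cooking event. All parameter values are assumed
# for illustration only -- none are taken from the paper.

DT = 1.0                 # time step, seconds
VOLUME = 100.0           # indoor air volume, m^3 (hypothetical home)
EMISSION = 1e12 / 1200   # particles/s (~1 trillion particles over 20 min)
LOSS_RATE = 2.0 / 3600   # ventilation + surface deposition, 1/s
BREATHING = 1e-4         # adult breathing rate, m^3/s (~0.36 m^3/h)
DEPOSIT_FRAC = 0.5       # assumed fraction of inhaled particles depositing

def simulate(cook_seconds=1200, total_seconds=3600):
    """Integrate dC/dt = E/V - k*C with forward Euler; track inhaled dose."""
    conc = 0.0   # particles per m^3
    dose = 0.0   # particles deposited in the airways
    for t in range(int(total_seconds)):
        emit = EMISSION if t < cook_seconds else 0.0
        conc += (emit / VOLUME - LOSS_RATE * conc) * DT
        dose += conc * BREATHING * DEPOSIT_FRAC * DT
    return conc, dose

if __name__ == "__main__":
    conc, dose = simulate()
    print(f"concentration after 1 h: {conc:.3e} particles/m^3")
    print(f"deposited dose after 1 h: {dose:.3e} particles")
```

With these placeholder values the one-hour dose comes out on the order of 10^8 particles; the study's reported doses are higher because its measured emissions, size-resolved deposition modeling, and exposure scenarios are far more detailed than this bookkeeping sketch.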

The nanocluster aerosol coming from the gas combustion also could easily mix with larger particles entering the air from butter, oil or whatever else is cooking on the gas stove, resulting in new particles with their own unique behaviors.

A gas stove’s exhaust fan would likely redirect these nanoparticles away from your respiratory system, but that remains to be tested.

“Since most people don’t turn on their exhaust fan while cooking, having kitchen hoods that activate automatically would be a logical solution,” Boor said. “Moving forward, we need to think about how to reduce our exposure to all types of indoor air pollutants. Based on our new data, we’d advise that nanocluster aerosol be considered as a distinct air pollutant category.”

This study was supported by a National Science Foundation CAREER award to Boor. Additional financial support was provided by the Alfred P. Sloan Foundation’s Chemistry of Indoor Environments program through an interdisciplinary collaboration with Philip Stevens, a professor in Indiana University’s Paul H. O’Neill School of Public and Environmental Affairs in Bloomington.

Here’s a link to and a citation for the paper,

Dynamics of nanocluster aerosol in the indoor atmosphere during gas cooking by Satya S Patra, Jinglin Jiang, Xiaosu Ding, Chunxu Huang, Emily K Reidy, Vinay Kumar, Paige Price, Connor Keech, Gerhard Steiner, Philip Stevens, Nusrat Jung, Brandon E Boor. PNAS Nexus, Volume 3, Issue 2, February 2024, pgae044. Published: 27 February 2024

This paper is open access.

Six months after the first one at Bletchley Park, the 2nd AI Safety Summit (May 21-22, 2024) convenes in Korea

This May 20, 2024 University of Oxford press release (also on EurekAlert) was under embargo until almost noon on May 20, 2024, which is a bit unusual in my experience. (Note: I have more about the 1st summit and the interest in AI safety at the end of this posting.)

Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago. 

Then, the world’s leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May [2024]) approaches, twenty-five of the world’s leading AI scientists say not enough is actually being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies. 

Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”

World’s response not on track in face of potentially rapid AI progress

According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems—outperforming human abilities across many critical domains—will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts. 

Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.

World-leading AI experts issue call to action

In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, the late Daniel Kahneman; in total 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, EU, UK, and other AI powers, and include Turing award winners, Nobel laureates, and authors of standard AI textbooks.

This article marks the first time that such a large and international group of experts has agreed on priorities for global policymakers regarding the risks from advanced AI systems.

Urgent priorities for AI governance

The authors recommend that governments:

  • establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
  • mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
  • require AI companies to prioritise safety, and to demonstrate their systems cannot cause harm. This includes using “safety cases” (used for other safety-critical technologies such as aviation) which shifts the burden for demonstrating safety to AI developers.
  • implement mitigation standards commensurate with the risk levels posed by AI systems. An urgent priority is to set in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.

According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.

AI impacts could be catastrophic

AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.

Stuart Russell OBE [Order of the British Empire], Professor of Computer Science at the University of California at Berkeley and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations—that ‘regulation stifles innovation.’ That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

Notable co-authors:

  • The world’s most-cited computer scientist (Prof. Hinton), and the most-cited scholar in AI security and privacy (Prof. Dawn Song)
  • China’s first Turing Award winner (Andrew Yao).
  • The authors of the standard textbook on artificial intelligence (Prof. Stuart Russell) and machine learning theory (Prof. Shai Shalev-Shwartz)
  • One of the world’s most influential public intellectuals (Prof. Yuval Noah Harari)
  • A Nobel Laureate in economics, the world’s most-cited economist (Prof. Daniel Kahneman)
  • Department-leading AI legal scholars and social scientists (Lan Xue, Qiqi Gao, and Gillian Hadfield).
  • Some of the world’s most renowned AI researchers from subfields such as reinforcement learning (Pieter Abbeel, Jeff Clune, Anca Dragan), AI security and privacy (Dawn Song), AI vision (Trevor Darrell, Phil Torr, Ya-Qin Zhang), automated machine learning (Frank Hutter), and several researchers in AI safety.

Additional quotes from the authors:

Philip Torr, Professor in AI, University of Oxford:

  • I believe if we tread carefully the benefits of AI will outweigh the downsides, but for me one of the biggest immediate risks from AI is that we develop the ability to rapidly process data and control society, by government and industry. We could risk slipping into some Orwellian future with some form of totalitarian state having complete control.

Dawn Song: Professor in AI at UC Berkeley, most-cited researcher in AI security and privacy:

  • “Explosive AI advancement is the biggest opportunity and at the same time the biggest risk for mankind. It is important to unite and reorient towards advancing AI responsibly, with dedicated resources and priority to ensure that the development of AI safety and risk mitigation capabilities can keep up with the pace of the development of AI capabilities and avoid any catastrophe.”

Yuval Noah Harari, Professor of history at Hebrew University of Jerusalem, best-selling author of ‘Sapiens’ and ‘Homo Deus’, world-leading public intellectual:

  • “In developing AI, humanity is creating something more powerful than itself, that may escape our control and endanger the survival of our species. Instead of uniting against this shared threat, we humans are fighting among ourselves. Humankind seems hell-bent on self-destruction. We pride ourselves on being the smartest animals on the planet. It seems then that evolution is switching from survival of the fittest, to extinction of the smartest.”

Jeff Clune, Professor in AI at University of British Columbia and one of the leading researchers in reinforcement learning:

  • “Technologies like spaceflight, nuclear weapons and the Internet moved from science fiction to reality in a matter of years. AI is no different. We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off.”
  • “The risks we describe are not necessarily long-term risks. AI is progressing extremely rapidly. Even just with current trends, it is difficult to predict how capable it will be in 2-3 years. But what very few realize is that AI is already dramatically speeding up AI development. What happens if there is a breakthrough for how to create a rapidly self-improving AI system? We are now in an era where that could happen any month. Moreover, the odds of that being possible go up each month as AI improves and as the resources we invest in improving AI continue to exponentially increase.”

Gillian Hadfield, CIFAR AI Chair and Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto:

  • “AI labs need to walk the walk when it comes to safety. But they’re spending far less on safety than they spend on creating more capable AI systems. Spending one-third on ensuring safety and ethical use should be the minimum.”

  • “This technology is powerful, and we’ve seen it is becoming more powerful, fast. What is powerful is dangerous, unless it is controlled. That is why we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to safety and ethical use, comparable to their funding for AI capabilities.”  

Sheila McIlrath, Professor in AI, University of Toronto, Vector Institute:

  • AI is software. Its reach is global and its governance needs to be as well.
  • Just as we’ve done with nuclear power, aviation, and with biological and nuclear weaponry, countries must establish agreements that restrict development and use of AI, and that enforce information sharing to monitor compliance. Countries must unite for the greater good of humanity.
  • Now is the time to act, before AI is integrated into our critical infrastructure. We need to protect and preserve the institutions that serve as the foundation of modern society.

Frank Hutter, Professor in AI at the University of Freiburg, Head of the ELLIS Unit Freiburg, 3x ERC grantee:

  • To be clear: we need more research on AI, not less. But we need to focus our efforts on making this technology safe. For industry, the right type of regulation will provide economic incentives to shift resources from making the most capable systems yet more powerful to making them safer. For academia, we need more public funding for trustworthy AI and to maintain a low barrier to entry for research on less capable open-source AI systems. This is the most important research challenge of our time, and the right mechanism design will focus the community at large to work towards the right breakthroughs.

Here’s a link to and a citation for the paper,

Managing extreme AI risks amid rapid progress; Preparation requires technical research and development, as well as adaptive, proactive governance by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Science 20 May 2024 First Release DOI: 10.1126/science.adn0117

This paper appears to be open access.

For anyone who’s curious about the buildup to these safety summits, I have more in my October 18, 2023 “AI safety talks at Bletchley Park in November 2023” posting, which features excerpts from a number of articles on AI safety. There’s also my November 2, 2023 “UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes” posting, which offers excerpts from articles critiquing the AI safety summit.

Smart contact lenses harvest energy from tears

After posting about a bioenergy harvesting battery for implants such as pacemakers and deep brain stimulators (see my May 17, 2024 posting), it seems like a good time to highlight another such device, in this case, contact lenses.

From an April 1, 2024 article by Julianne Pepitone for IEEE (Institute for Electrical and Electronics Engineers) Spectrum,

The potential use cases for smart contacts are compelling and varied. Pop a lens on your eye and monitor health metrics like glucose levels; receive targeted drug delivery for ocular diseases; experience augmented reality and read news updates with displays of information literally in your face.

But the eye is quite a challenge for electronics design: With one of the highest nerve densities of any human tissue, the cornea is 300 to 600 times as sensitive as our skin. Researchers have developed small, flexible chips, but power sources have proved more difficult, as big batteries and wires clearly won’t do here. Existing applications offer less-than-ideal solutions like overnight induction charging and other designs that rely on some type of external battery.

Now, a team from the University of Utah says they’ve developed a better solution: an all-in-one hybrid energy-generation unit specifically designed for eye-based tech.

In a paper published in the journal Small on 13 March [2024], the researchers describe how they built the device, combining a flexible silicon solar cell with a new device that converts tears to energy. The system can reliably supply enough electricity to operate smart contacts and other ocular devices. This is a major improvement over wireless power transfer from separate batteries, says Erfan Pourshaban, who worked on the system while a doctoral student at the University of Utah.

Researchers tested a contact-lens power system on a fake eye. ERFAN POURSHABAN [downloaded from]

Here’s an excerpt from the explanation for how this system works, from the April 1, 2024 article,

To create the power pack, Pourshaban and his colleagues fabricated custom pieces. The first step was miniaturized, flexible silicon solar cells that can capture light from the sun as well as from artificial sources like lamps. The team connected eight tiny (1.5 by 1.5 by 0.1 millimeters) rigid crystalline cells and encapsulated them in a polymer to make a flexible photovoltaic system.

The second half is an eye-blinking-activated system that functions like a metal-air battery. The wearer’s natural tears—more specifically the electrolytes within them—serve as a biofuel to generate power.

The harvesting occurs literally in the blink of an eye: When the eye is completely open, the harvester is off. Then when the eye starts to blink, the tear electrolytes meet the magnesium anode, causing an oxidation reaction and the generation of electrons. …
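For readers curious about the chemistry behind that blink-activated oxidation reaction, a magnesium-anode metal-air cell typically runs on the following half-reactions. This is standard metal-air electrochemistry rather than a description lifted from the paper, so the device's exact electrode materials and reaction products may differ in detail:

```latex
\begin{aligned}
\text{Anode:}\quad & \mathrm{Mg} \;\rightarrow\; \mathrm{Mg^{2+}} + 2e^{-} \\
\text{Cathode:}\quad & \mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^{-} \;\rightarrow\; 4\,\mathrm{OH^{-}} \\
\text{Overall:}\quad & 2\,\mathrm{Mg} + \mathrm{O_2} + 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{Mg(OH)_2}
\end{aligned}
```

The electrolytes in tears play the role that a salt solution plays in a conventional metal-air battery: they complete the circuit between anode and cathode each time the eyelid closes.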

Applications for the technology were also discussed, from the April 1, 2024 article,

“The reliable power output from this device can fuel a broad spectrum of applications, including wearable biosensors and electrically responsive drug-delivery systems, directly within the eye’s environment,” Gao adds. [Wei Gao is a biosensors expert and assistant professor of medical engineering at Caltech, who was not involved in the research.]

Pourshaban agrees, adding that there are obvious consumer applications, such as lenses that display a runner’s heart rate, pace, and calorie burn during a workout. Retailers could glean valuable insights from tracking how a shopper scans shelves and selects items. [emphases mine] Commercialization potential is significant and varied, he says.

However, Pourshaban is perhaps most excited about potential applications in monitoring eye health, from prosaic conditions like presbyopia—age-related farsightedness, which can begin in the mid-40s—to more insidious diseases including glaucoma.

If you have the time, Pepitone’s April 1, 2024 article is an engaging and accessible read.

Here’s a link to and a citation for the team’s research paper,

Power Scavenging Microsystem for Smart Contact Lenses by Erfan Pourshaban, Mohit U. Karkhanis, Adwait Deshpande, Aishwaryadev Banerjee, Md Rabiul Hasan, Amirali Nikeghbal, Chayanjit Ghosh, Hanseup Kim, Carlos H. Mastrangelo. Small. First published: 13 March 2024

This paper is open access.

Proof-of-concept for implantable batteries that run on body’s own oxygen

Bioenergy harvesting may be here. Well, maybe not yet, but we are one step closer, according to a March 27, 2024 news item on ScienceDaily,

From pacemakers to neurostimulators, implantable medical devices rely on batteries to keep the heart on beat and dampen pain. But batteries eventually run low and require invasive surgeries to replace. To address these challenges, researchers have devised an implantable battery that runs on oxygen in the body. The study shows in rats that the proof-of-concept design can deliver stable power and is compatible with the biological system.

This is a dynamic image illustrating the device in action,

Caption: Implantable and bio-compatible Na-O2 battery. Credit: Chem/Lv et al.

A March 27, 2024 Cell Press news release on EurekAlert, which originated the news item, provides more detail about the proof-of-concept device,

“When you think about it, oxygen is the source of our life,” says corresponding author Xizheng Liu, who specializes in energy materials and devices at Tianjin University of Technology. “If we can leverage the continuous supply of oxygen in the body, battery life won’t be limited by the finite materials within conventional batteries.”

To build a safe and efficient battery, the researchers made its electrodes out of a sodium-based alloy and nanoporous gold, a material with pores thousands of times smaller than a hair’s width. Gold has been known for its compatibility with living systems, and sodium is an essential and ubiquitous element in the human body. The electrodes undergo chemical reactions with oxygen in the body to produce electricity. To protect the battery, the researchers encased it within a porous polymer film that is soft and flexible.

The researchers then implanted the battery under the skin on the backs of rats and measured its electricity output. Two weeks later, they found that the battery can produce stable voltages between 1.3 V and 1.4 V, with a maximum power density of 2.6 µW/cm2. Although the output is insufficient to power medical devices, the design shows that harnessing oxygen in the body for energy is possible.
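To put that 2.6 µW/cm² figure in perspective, here is a quick back-of-envelope calculation. The roughly 10 µW pacemaker power draw is a commonly cited ballpark I am assuming for illustration; it is not a figure from the study:

```python
# Back-of-envelope check: how much electrode area would this implantable
# battery need to power a cardiac pacemaker? The pacemaker draw below is
# an assumed ballpark figure, not a value from the paper.

POWER_DENSITY_UW_PER_CM2 = 2.6   # reported maximum power density
PACEMAKER_DRAW_UW = 10.0         # assumed typical pacemaker power draw

area_cm2 = PACEMAKER_DRAW_UW / POWER_DENSITY_UW_PER_CM2
print(f"required electrode area: {area_cm2:.1f} cm^2")  # ≈ 3.8 cm^2
```

A few square centimetres is large for an implant but not absurdly so, which is why the authors frame the result as a proof of concept and point to better electrode materials as the route to higher output.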

The team also evaluated inflammatory reactions, metabolic changes, and tissue regeneration around the battery. The rats showed no apparent inflammation. Byproducts from the battery’s chemical reactions, including sodium ions, hydroxide ions, and low levels of hydrogen peroxide, were easily metabolized by the body and did not affect the kidneys and liver. The rats healed well after implantation, with the hair on their back completely regrown after four weeks. To the researchers’ surprise, blood vessels also regenerated around the battery.

“We were puzzled by the unstable electricity output right after implantation,” says Liu. “It turned out we had to give the wound time to heal, for blood vessels to regenerate around the battery and supply oxygen, before the battery could provide stable electricity. This is a surprising and interesting finding because it means that the battery can help monitor wound healing.”

Next, the team plans to up the battery’s energy delivery by exploring more efficient materials for the electrodes and optimizing the battery structure and design. Liu also noted that the battery is easy to scale up in production and choosing cost-effective materials can further lower the price. The team’s battery may also find other purposes beyond powering medical devices.

“Because tumor cells are sensitive to oxygen levels, implanting this oxygen-consuming battery around it may help starve cancers. It’s also possible to convert the battery energy to heat to kill cancer cells,” says Liu. “From a new energy source to potential biotherapies, the prospects for this battery are exciting.”

Here’s a link to and a citation for the paper,

Implantable and bio-compatible Na-O2 battery by Yang Lv, Xizheng Liu, Jiucong Liu, Pingli Wu, Yonggang Wang, Yi Ding. Chem. In press, corrected proof. Published online: March 27, 2024. Copyright © 2024 Elsevier Inc.

The paper appears to be open access.