
Exploring biodiversity beyond boundaries and participatory (citizen) science

The two terms are often used interchangeably, which I found confusing. After some investigation, I believe ‘participatory sciences’ is the broader classification (subject) term, with ‘citizen science’ as one specific subset (type) of participatory science.

Bearing that in mind, here’s more from a May 29, 2024 letter/notice received via email about an upcoming participatory sciences conference,

There are so many areas where participatory sciences are creating a better understanding of the world around us. Sometimes looking at just one of those areas can help us see where there is real strength in these practices–and where combined work across this field can inspire huge change.

Right now, biodiversity is on my mind. 

Last week’s International Day of Biological Diversity invited everyone on the planet to be #PartOfThePlan to protect the systems that sustain us. The Biodiversity Plan calls for scientific collaborations, shared commitments, tracking indicators of progress, and developing transparent communication and engagement around actions by the end of this decade.

Participatory science projects have proven–but underutilized–potential to address spatial and temporal gaps in datasets; engage multiple ways of knowing; inform multilateral environmental agreements; and inspire action and change based on improved understandings of the systems that sustain us.

In this field, we have the tools, experience, and vision to rise to this global challenge. What would it take to leverage the full power of participatory sciences to inspire and inform wise decisions for people and the planet?

If you are working in, or interested in, the frontiers of participatory sciences to address global challenges like biodiversity, you can be part of driving strategies and solutions at next week’s action-oriented strand on biodiversity at CAPS 2024 [Conference for Advancing the Participatory Sciences], June 3-6. Woven throughout the virtual four-day event are sessions that will both inform and inspire collaborative problem solving to improve how the participatory sciences are leveraged to confront the biodiversity crisis.

There will be opportunities in the program to share your thoughts and experiences, whether or not you are giving a talk.  This event is designed to bring together a diversity of perspectives from across the Americas and beyond.

The strand is a collaboration between AAPS [Association for Advancing Participatory Sciences], the Red Iberoamericana de Ciencia Participativa (the Iberoamerican Network of Participatory Science), iDigBio [Integrated Digitized Biocollections], and Florida State University’s Institute for Digital Information & Scientific Communication.

CAPS 2024 Biodiversity Elements:

Collaborative Sessions Addressing Biodiversity Knowledge

Each day, multiple sessions will convene global leaders, practitioners, and others to discuss how to advance biodiversity knowledge worldwide. Formats include daily symposia, ideas-to-action conversations, virtual multi-media posters, and lightning talk discussions. Our virtual format provides plenty of opportunities for exchanges. 

Find the full biodiversity strand program here >

Plenary Symposia: Biodiversity Beyond Boundaries

Join global leaders as they share their work to span boundaries to create connected knowledge for biodiversity research and action. 

Learn more about the Plenary Symposia >

Biodiversity-themed Virtual Posters and Live Poster Sessions

Over one-third of the 100+ posters focus specifically on advancing biodiversity-related participatory science. Each day, poster sessions highlight a selection of posters via lightning talks and group discussions.  

Our media-rich virtual poster platform lets you easily scroll through all of the posters and chat with presenters on your own time – even from your phone!

View the full poster presenter list here >

There is still time to register!

Sign up now to ensure a seamless conference experience.

We have tiered registration rates to enable equitable access to the event, and to support delivery of future programming for everyone.

Register Here

This image is from the May 22, 2024 International Day of Biological Diversity,

The unrestricted exploitation of wildlife has led to the disappearance of many animal species at an alarming rate, destroying Earth’s biological diversity and upsetting the ecological balance. Photo: Vladimir Wrangel/Adobe Stock

Graphene-like materials for first smart contact lenses with AR (augmented reality) vision, health monitoring, & content surfing?

A March 6, 2024 XPANCEO news release on EurekAlert (also posted March 11, 2024 on the Graphene Council blog) and distributed by Mindset Consulting announced smart contact lenses devised with graphene-like materials,

XPANCEO, a deep tech company developing the first smart contact lenses with XR vision, health monitoring, and content surfing features, in collaboration with the Nobel laureate Konstantin S. Novoselov (National University of Singapore, University of Manchester) and professor Luis Martin-Moreno (Instituto de Nanociencia y Materiales de Aragon), has announced in Nature Communications a groundbreaking discovery of new properties of rhenium diselenide and rhenium disulfide, enabling a novel mode of light-matter interaction with huge potential for integrated photonics, healthcare, and AR. Rhenium disulfide and rhenium diselenide are layered materials belonging to the family of graphene-like materials. Absorption and refraction in these materials have different principal directions, implying six degrees of freedom instead of a maximum of three in classical materials. As a result, rhenium disulfide and rhenium diselenide by themselves allow control of the light propagation direction without any of the technological steps required for traditional materials like silicon and titanium dioxide.

The origin of such surprising light-matter interaction in ReS2 and ReSe2 is the specific symmetry breaking observed in these materials. Symmetry plays a huge role in nature, human life, and material science. For example, almost all living things are built symmetrically. Therefore, in ancient times symmetry was also called harmony, as it was associated with beauty. Physical laws are also closely related to symmetry, such as the laws of conservation of energy and momentum. Violation of symmetry leads to the appearance of new physical effects and radical changes in the properties of materials. In particular, the water-ice phase transition is a consequence of a decrease in the degree of symmetry. In the case of ReS2 and ReSe2, the crystal lattice has the lowest possible degree of symmetry, which leads to the rotation of optical axes – the directions of symmetry of the material’s optical properties – something previously observed only in organic materials. As a result, these materials make it possible to control the direction of light by changing the wavelength, which opens a unique way for light manipulation in next-generation devices and applications.
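To make the “different principal directions” point concrete, here is a minimal numpy sketch. The tensors are invented toy values, not fitted ReS2/ReSe2 data: the real part of the permittivity tensor sets the refraction axes, the imaginary part sets the absorption axes, and in a low-symmetry (triclinic-like) material the two frames need not coincide and can rotate with wavelength.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def principal_axes(tensor):
    """Principal directions of a real symmetric 3x3 tensor (columns, ascending eigenvalue)."""
    _, vectors = np.linalg.eigh(tensor)
    return vectors

# Toy dispersion: the refraction (Re eps) and absorption (Im eps) frames
# are misaligned with each other and both rotate with wavelength.
def eps_real(lam_um):
    R = rot_z(np.deg2rad(20 * lam_um))           # frame rotates with wavelength
    return R @ np.diag([16.0, 9.0, 6.0]) @ R.T

def eps_imag(lam_um):
    R = rot_z(np.deg2rad(20 * lam_um + 15))      # offset: absorption axes != refraction axes
    return R @ np.diag([2.0, 0.5, 0.1]) @ R.T

for lam in (1.0, 1.5):  # wavelengths in micrometres
    major_n = principal_axes(eps_real(lam))[:, -1]   # largest-eigenvalue refraction axis
    major_k = principal_axes(eps_imag(lam))[:, -1]   # largest-eigenvalue absorption axis
    misalign = np.degrees(np.arccos(abs(major_n @ major_k)))
    orient = np.degrees(np.arctan2(major_n[1], major_n[0])) % 180
    print(f"lambda={lam} um: refraction axis at {orient:.0f} deg, "
          f"absorption axis misaligned by {misalign:.0f} deg")
```

In a classical crystal the two frames would coincide and stay fixed (three shared principal directions); here each frame carries its own set, which is one way to read the release’s “six degrees of freedom.”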

“The discovery of unique properties in anisotropic materials is revolutionizing the fields of nanophotonics and optoelectronics, presenting exciting possibilities. These materials serve as a versatile platform for the advancement of optical devices, such as wavelength-switchable metamaterials, metasurfaces, and waveguides. Among the promising applications is the development of highly efficient biochemical sensors. These sensors have the potential to outperform existing analogs in terms of both sensitivity and cost efficiency. For example, they are anticipated to significantly reduce the expenses associated with hospital blood testing equipment, which is currently quite costly, potentially by several orders of magnitude. This will also allow the detection of dangerous diseases and viruses, such as cancer or COVID, at earlier stages,” says Dr. Valentyn S. Volkov, co-founder and scientific partner at XPANCEO, a scientist with an h-Index of 38 and over 8000 citations in leading international publications.

Beyond the healthcare industry, these novel properties of graphene-like materials can find applications in artificial intelligence and machine learning, facilitating the development of photonic circuits to create fast, powerful computers suited to machine learning tasks. A computer based on photonic circuits would be a superior solution: it can transmit more information per unit of time, and, unlike electric currents, photons (light beams) pass through one another without interacting. Furthermore, the new material properties can be utilized in producing smart optics, such as contact lenses or glasses, specifically for advancing AR [augmented reality] features. Leveraging these properties will enhance image coloration and adapt images for individuals with impaired color perception, enabling them to see the full spectrum of colors.

Here’s a link to and a citation for the paper,

Wandering principal optical axes in van der Waals triclinic materials by Georgy A. Ermolaev, Kirill V. Voronin, Adilet N. Toksumakov, Dmitriy V. Grudinin, Ilia M. Fradkin, Arslan Mazitov, Aleksandr S. Slavich, Mikhail K. Tatmyshevskiy, Dmitry I. Yakubovsky, Valentin R. Solovey, Roman V. Kirtaev, Sergey M. Novikov, Elena S. Zhukova, Ivan Kruglov, Andrey A. Vyshnevyy, Denis G. Baranov, Davit A. Ghazaryan, Aleksey V. Arsenin, Luis Martin-Moreno, Valentyn S. Volkov & Kostya S. Novoselov. Nature Communications volume 15, Article number: 1552 (2024) DOI: https://doi.org/10.1038/s41467-024-45266-3 Published: 06 March 2024

This paper is open access.

A kintsugi approach to fusion energy: seeing the beauty (strength) in your flaws

Kintsugi is the Japanese word for a type of repair that is also art. “Golden joinery” is the literal meaning of the word, according to the Traditional Kyoto website’s Culture: Kintsugi webpage,

Caption: An example of kintsugi repair by David Pike. (Photo courtesy of David Pike) [downloaded from https://traditionalkyoto.com/culture/kintsugi/]

A March 5, 2024 news item on phys.org links the art of kintsugi to fusion energy, specifically, managing plasma, Note: Links have been removed,

In the Japanese art of Kintsugi, an artist takes the broken shards of a bowl and fuses them back together with gold to make a final product more beautiful than the original.

That idea is inspiring a new approach to managing plasma, the super-hot state of matter, for use as a power source. Scientists are using the imperfections in magnetic fields that confine a reaction to improve and enhance the plasma in an approach outlined in a paper in the journal Nature Communications.

A March 5, 2024 Princeton Plasma Physics Laboratory (PPPL) news release (also on EurekAlert), which originated the news item, describes the research in more detail, Note: Links have been removed,

“This approach allows you to maintain a high-performance plasma, controlling instabilities in the core and the edge of the plasma simultaneously. That simultaneous control is particularly important and difficult to do. That’s what makes this work special,” said Joseph Snipes of the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL). He is PPPL’s deputy head of the Tokamak Experimental Science Department and was a co-author of the paper.

PPPL Physicist Seong-Moo Yang led the research team, which spans various institutions in the U.S. and South Korea. Yang says this is the first time any research team has validated a systematic approach to tailoring magnetic field imperfections to make the plasma suitable for use as a power source. These magnetic field imperfections are known as error fields. 

“Our novel method identifies optimal error field corrections, enhancing plasma stability,” Yang said. “This method was proven to enhance plasma stability under different plasma conditions, for example, when the plasma was under conditions of high and low magnetic confinement.”

Errors that are hard to correct

Error fields are typically caused by minuscule defects in the magnetic coils of the device that holds the plasma, which is called a tokamak. Until now, error fields were only seen as a nuisance because even a very small error field could cause a plasma disruption that halts fusion reactions and can damage the walls of a fusion vessel. Consequently, fusion researchers have spent considerable time and effort meticulously finding ways to correct error fields.

“It’s quite difficult to eliminate existing error fields, so instead of fixing these coil irregularities, we can apply additional magnetic fields surrounding the fusion vessel in a process known as error field correction,” Yang said. 
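As a rough illustration of what an error field correction computes, here is a hedged numpy sketch; the response matrix and measurements are random placeholders, not the paper’s method. If correction coils add linearly to the intrinsic error field, the coil currents that best cancel a set of measured field harmonics fall out of a least-squares solve:

```python
import numpy as np

# Placeholder sizes and data: a real tokamak would use measured/modelled values.
rng = np.random.default_rng(0)
n_harmonics, n_coils = 6, 4
A = rng.normal(size=(n_harmonics, n_coils))  # coil currents -> field harmonics (response matrix)
b = rng.normal(size=n_harmonics)             # measured intrinsic error-field harmonics

# Choose coil currents I minimizing ||A @ I + b||^2, the residual error field.
I_opt, *_ = np.linalg.lstsq(A, -b, rcond=None)
residual = A @ I_opt + b

print("uncorrected field amplitude:", round(float(np.linalg.norm(b)), 3))
print("corrected field amplitude:  ", round(float(np.linalg.norm(residual)), 3))
```

The study’s twist, as described above, is that pure cancellation is not the goal: the correction is tailored so the residual field also suppresses edge instabilities while preserving core confinement, which in a sketch like this would appear as additional stability terms in the objective.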

In the past, this approach would have also hurt the plasma’s core, making the plasma unsuitable for fusion power generation. This time, the researchers were able to eliminate instabilities at the edge of the plasma and maintain the stability of the core. The research is a prime example of how PPPL researchers are bridging the gap between today’s fusion technology and what will be needed to bring fusion power to the electrical grid. 

“This is actually a very effective way of breaking the symmetry of the system, so humans can intentionally degrade the confinement. It’s like making a very tiny hole in a balloon so that it will not explode,” said SangKyeun Kim, a staff research scientist at PPPL and paper co-author. Just as air would leak out of a small hole in a balloon, a tiny quantity of plasma leaks out of the error field, which helps to maintain its overall stability.

Managing the core and the edge of the plasma simultaneously

One of the toughest parts of managing a fusion reaction is getting both the core and the edge of the plasma to behave at the same time. There are ideal zones for the temperature and density of the plasma in both regions, and hitting those targets while eliminating instabilities is tough.

This study demonstrates that adjusting the error fields can simultaneously stabilize both the core and the edge of the plasma. By carefully controlling the magnetic fields produced by the tokamak’s coils, the researchers could suppress edge instabilities, also known as edge localized modes (ELMs), without causing disruptions or a substantial loss of confinement.

“We are trying to protect the device,” said PPPL Staff Research Physicist Qiming Hu, an author of the paper. 

Extending the research beyond KSTAR

The research was conducted using the KSTAR tokamak in South Korea, which stands out for its ability to adjust its magnetic error field configuration with great flexibility. This capability is crucial for experimenting with different error field configurations to find the most effective ones for stabilizing the plasma.

The researchers say their approach has significant implications for the design of future tokamak fusion pilot plants, potentially making them more efficient and reliable. They are currently working on an artificial intelligence (AI) version of their control system to make it more efficient.

“These models are fairly complex; they take a bit of time to calculate. But when you want to do something in a real-time control system, you can only afford a few milliseconds to do a calculation,” said Snipes. “Using AI, you can basically teach the system what to expect and be able to use that artificial intelligence to predict ahead of time what will be necessary to control the plasma and how to implement it in real-time.”
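A minimal sketch of the idea Snipes describes, with a stand-in function for the slow physics model (everything here is hypothetical): train a small neural network offline on the model’s outputs, then query it within the millisecond budget of a real-time control loop.

```python
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_physics_model(states):
    """Stand-in for an expensive plasma-response calculation."""
    return np.sin(3 * states[:, 0]) * np.exp(-states[:, 1] ** 2) + 0.5 * states[:, 2]

rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, size=(5000, 3))   # offline: sampled plasma states
y_train = slow_physics_model(X_train)          # offline: run the slow model per sample

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
surrogate.fit(X_train, y_train)                # offline training, can take minutes

x_now = rng.uniform(-1, 1, size=(1, 3))        # online: the current plasma state
t0 = time.perf_counter()
prediction = surrogate.predict(x_now)          # inference fast enough for a control loop
elapsed_ms = (time.perf_counter() - t0) * 1e3
print(f"predicted response {prediction[0]:+.3f} in {elapsed_ms:.2f} ms")
```

The expensive part (sampling and training) happens between shots; only the cheap forward pass runs inside the control loop.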

While their new paper highlights work done using KSTAR’s internal magnetic coils, Hu suggests future research with magnetic coils outside of the fusion vessel would be valuable, because the fusion community is moving away from housing such coils inside the vacuum-sealed vessel: the extreme heat of the plasma could destroy those components.

Researchers from the Korea Institute of Fusion Energy (KFE), Columbia University, and Seoul National University were also integral to the project.

The research was supported by: the U.S. Department of Energy under contract number DE-AC02-09CH11466; the Ministry of Science and ICT under the KFE R&D Program “KSTAR Experimental Collaboration and Fusion Plasma Research (KFE-EN2401-15)”; the National Research Foundation (NRF) grant No. RS-2023-00281272 funded through the Korean Ministry of Science, Information and Communication Technology and the New Faculty Startup Fund from Seoul National University; the NRF under grants No. 2019R1F1A1057545 and No. 2022R1F1A1073863; the National R&D Program through the NRF funded by the Ministry of Science & ICT (NRF-2019R1A2C1010757).

Here’s a link to and a citation for the paper,

Tailoring tokamak error fields to control plasma instabilities and transport by SeongMoo Yang, Jong-Kyu Park, YoungMu Jeon, Nikolas C. Logan, Jaehyun Lee, Qiming Hu, JongHa Lee, SangKyeun Kim, Jaewook Kim, Hyungho Lee, Yong-Su Na, Taik Soo Hahm, Gyungjin Choi, Joseph A. Snipes, Gunyoung Park & Won-Ha Ko. Nature Communications volume 15, Article number: 1275 (2024) DOI: https://doi.org/10.1038/s41467-024-45454-1 Published: 10 February 2024

This paper is open access.

Squirrel observations in St. Louis: a story of bias in citizen science data

Squirrels and other members of the family Sciuridae. Credit: Chicoutimi (montage); component images: Karakal; AndiW; National Park Service; en:User:Markus Krötzsch; The Lilac Breasted Roller, Nico Conradie from Centurion, South Africa; Hans Hillewaert; Sylvouille; National Park Service. From Wikipedia, CC BY 3.0 licence

A March 5, 2024 news item on phys.org introduces a story about squirrels, bias, and citizen science,

When biologist Elizabeth Carlen pulled up in her 2007 Subaru for her first look around St. Louis, she was already checking for the squirrels. Arriving as a newcomer from New York City, Carlen had scrolled through maps and lists of recent sightings in a digital application called iNaturalist. This app is a popular tool for reporting and sharing sightings of animals and plants.

People often start using apps like iNaturalist and eBird when they get interested in a contributory science project (also sometimes called a citizen science project). Armed with cellphones equipped with cameras and GPS, app-wielding volunteers can submit geolocated data that iNaturalist then translates into user-friendly maps. Collectively, these observations have provided scientists and community members greater insight into the biodiversity of their local environment and helped scientists understand trends in climate change, adaptation and species distribution.

But right away, Carlen ran into problems with the iNaturalist data in St. Louis.

A March 5, 2024 Washington University in St. Louis news release (also on EurekAlert) by Talia Ogliore, which originated the news item, describes the bias problem and the research it inspired, Note: Links have been removed,

“According to the app, Eastern gray squirrels tended to be mostly spotted in the south part of the city,” said Carlen, a postdoctoral fellow with the Living Earth Collaborative at Washington University in St. Louis. “That seemed weird to me, especially because the trees, or canopy cover, tended to be pretty even across the city.

“I wondered what was going on. Were there really no squirrels in the northern part of the city?” Carlen said. A cursory drive through a few parks and back alleys north of Delmar Boulevard told her otherwise: squirrels galore.

Carlen took to X, formerly Twitter, for advice. “Squirrels are abundant in the northern part of the city, but there are no recorded observations,” she mused. Carlen asked if others had experienced similar issues with iNaturalist data in their own backyards.

Many people responded, voicing their concerns and affirming Carlen’s experience. The maps on iNaturalist seemed clear, but they did not reflect the way squirrels were actually distributed across St. Louis. Instead, Carlen was looking at biased data.

Previous research has highlighted biases in data reported to contributory science platforms, but little work has articulated how these biases arise.

Carlen reached out to the scientists who responded to her Twitter post to brainstorm some ideas. They put together a framework that illustrates how social and ecological factors combine to create bias in contributory data. In a new paper published in People & Nature, Carlen and her co-authors shared this framework and offered some recommendations to help address the problems.

The scientists described four kinds of “filters” that can bias the reported species pool in contributory science projects:

* Participation filter. Participation reflects who is reporting the data, including where those people are located and the areas they have access to. This filter also may reflect whether individuals in a community are aware of an effort to collect data, or if they have the means and motivation to collect it.

* Detectability filter. An animal’s biology and behavior can impact whether people record it. For example, people are less likely to report sightings of owls or other nocturnal species.

* Sampling filter. People might be more willing to report animals they see when they are recreating (i.e. hanging out in a park), but not what they see while they’re commuting.

* Preference filter. People tend to ignore or filter out pests, nuisance species and uncharismatic or “boring” species. (“There’s not a lot of people photographing rats and putting them on iNaturalist — or pigeons, for that matter,” Carlen said.)

In the paper, Carlen and her team applied their framework to data recorded in St. Louis as a case study. They showed that eBird and iNaturalist observations are concentrated in the southern part of the city, where more white people live. Uneven participation in St. Louis is likely a consequence of variables, such as race, income, and/or contemporary politics, which differ between northern and southern parts of the city, the authors wrote. The other filters of detectability, sampling and preference also likely influence species reporting in St. Louis.
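The participation filter in particular is easy to demonstrate with a toy simulation (all numbers invented): squirrel abundance is uniform across a city grid, but observer effort is concentrated in the south, and the resulting map of reports mirrors effort rather than squirrels.

```python
import numpy as np

rng = np.random.default_rng(42)
rows, cols = 10, 10                       # city grid; row 0 is the north edge
abundance = np.full((rows, cols), 20.0)   # squirrels per cell: perfectly uniform

# Observer effort (hours per cell): sparse in the north, heavy in the south.
effort = np.linspace(0.2, 5.0, rows)[:, None] * np.ones((1, cols))

p_report = 0.01                           # chance one observer-hour reports one squirrel
reports = rng.poisson(abundance * effort * p_report)

print("reports, northern half:", reports[: rows // 2].sum())
print("reports, southern half:", reports[rows // 2 :].sum())
# Southern reports dwarf northern ones even though the squirrels are evenly spread.
```

Treating those report counts as an abundance map would reproduce exactly the pattern Carlen saw on iNaturalist: apparent squirrel scarcity where there are simply fewer observers.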

Biased and unrepresentative data is not just a problem for urban ecologists, even if they are the ones who are most likely to notice it, Carlen said. City planners, environmental consultants and local nonprofits all sometimes use contributory science data in their work.

“We need to be very conscious about how we’re using this data and how we’re interpreting where animals are,” Carlen said.

Carlen shared several recommendations for researchers and institutions that want to improve contributory science efforts and help reduce bias. Basic steps include considering cultural relevance when designing a project, conducting proactive outreach with diverse stakeholders and translating project materials into multiple languages.

Data and conclusions drawn from contributory projects should be made publicly available, communicated in accessible formats and made relevant to participants and community members.

“It’s important that we work with communities to understand what their needs are — and then build a better partnership,” Carlen said. “We can’t just show residents the app and tell them that they need to use it, because that ignores the underlying problem that our society is still segregated and not everyone has the resources to participate.

“We need to build relationships with the community and understand what they want to know about the wildlife in their neighborhood,” Carlen said. “Then we can design projects that address those questions, provide resources and actively empower community members to contribute to data collection.”

Here’s a link to and a citation for the paper,

A framework for contextualizing social-ecological biases in contributory science data by Elizabeth J. Carlen, Cesar O. Estien, Tal Caspi, Deja Perkins, Benjamin R. Goldstein, Samantha E. S. Kreling, Yasmine Hentati, Tyus D. Williams, Lauren A. Stanton, Simone Des Roches, Rebecca F. Johnson, Alison N. Young, Caren B. Cooper, Christopher J. Schell. People & Nature Volume 6, Issue 2 April 2024 Pages 377-390 DOI: https://doi.org/10.1002/pan3.10592 First published: 03 March 2024

This paper is open access.

Deriving gold from electronic waste

Caption: The gold nugget obtained from computer motherboards in three parts. The largest of these parts is around five millimetres wide. Credit: (Photograph: ETH Zurich / Alan Kovacevic)

A March 1, 2024 ETH Zurich press release (also on EurekAlert but published February 29, 2024) by Fabio Bergamin describes research into reclaiming gold from electronic waste, Note: A link has been removed.

In brief

  • Protein fibril sponges made by ETH Zurich researchers are hugely effective at recovering gold from electronic waste.
  • From 20 old computer motherboards, the researchers retrieved a 22-​carat gold nugget weighing 450 milligrams.
  • Because the method utilises various waste and industry byproducts, it is not only sustainable but cost effective as well.

Transforming base materials into gold was one of the elusive goals of the alchemists of yore. Now Professor Raffaele Mezzenga from the Department of Health Sciences and Technology at ETH Zurich has accomplished something in that vein. He has not of course transformed another chemical element into gold, as the alchemists sought to do. But he has managed to recover gold from electronic waste using a byproduct of the cheesemaking process.

Electronic waste contains a variety of valuable metals, including copper, cobalt, and even significant amounts of gold. Recovering this gold from disused smartphones and computers is an attractive proposition in view of the rising demand for the precious metal. However, the recovery methods devised to date are energy-​intensive and often require the use of highly toxic chemicals. Now, a group led by ETH Professor Mezzenga has come up with a very efficient, cost-​effective, and above all far more sustainable method: with a sponge made from a protein matrix, the researchers have successfully extracted gold from electronic waste.

Selective gold adsorption

To manufacture the sponge, Mohammad Peydayesh, a senior scientist in Mezzenga’s Group, and his colleagues denatured whey proteins under acidic conditions and high temperatures, so that they aggregated into protein nanofibrils in a gel. The scientists then dried the gel, creating a sponge out of these protein fibrils.

To recover gold in the laboratory experiment, the team salvaged the motherboards from 20 old computers and extracted the metal parts. They dissolved these parts in an acid bath so as to ionise the metals.

When they placed the protein fibre sponge in the metal ion solution, the gold ions adhered to the protein fibres. Other metal ions can also adhere to the fibres, but gold ions do so much more efficiently. The researchers demonstrated this in their paper, which they have published in the journal Advanced Materials.

As the next step, the researchers heated the sponge. This reduced the gold ions into flakes, which the scientists subsequently melted down into a gold nugget. In this way, they obtained a nugget of around 450 milligrams out of the 20 computer motherboards. The nugget was 91 percent gold (the remainder being copper), which corresponds to 22 carats.
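The nugget numbers check out on the back of an envelope; here is a quick calculation (the gold price is my assumption, not from the release):

```python
nugget_mg = 450                     # from the release
purity = 0.91                       # 91 % gold by mass, remainder copper
gold_mg = nugget_mg * purity        # ~410 mg of pure gold
carats = purity * 24                # 21.8, i.e. roughly the stated 22 carats
price_usd_per_g = 65.0              # assumed early-2024 gold price (~US$2,000/oz)
value_usd = (gold_mg / 1000) * price_usd_per_g

print(f"{gold_mg:.0f} mg of gold, {carats:.1f} carats, worth about ${value_usd:.0f}")
```

At the 50-to-1 value-to-cost ratio cited below, total procurement and energy costs for this batch would be on the order of fifty US cents.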

Economically viable

The new technology is commercially viable, as Mezzenga’s calculations show: procurement costs for the source materials added to the energy costs for the entire process are 50 times lower than the value of the gold that can be recovered.

Next, the researchers want to develop the technology to ready it for the market. Although electronic waste is the most promising starting product from which they want to extract gold, there are other possible sources. These include industrial waste from microchip manufacturing or from gold-​plating processes. In addition, the scientists plan to investigate whether they can manufacture the protein fibril sponges out of other protein-​rich byproducts or waste products from the food industry.

“The fact I love the most is that we’re using a food industry byproduct to obtain gold from electronic waste,” Mezzenga says. In a very real sense, he observes, the method transforms two waste products into gold. “You can’t get much more sustainable than that!”

If you have a problem accessing either of the two previously provided links to the press release, you can try this February 29, 2024 news item on ScienceDaily.

Here’s a link to and a citation for the paper,

Gold Recovery from E-Waste by Food-Waste Amyloid Aerogels by Mohammad Peydayesh, Enrico Boschi, Felix Donat, Raffaele Mezzenga. Advanced Materials DOI: https://doi.org/10.1002/adma.202310642 First published online: 23 January 2024

This paper is open access.

Corporate venture capital (CVC) and the nanotechnology market plus 2023’s top 10 countries’ nanotechnology patents

I have two brief nanotechnology commercialization stories from the same publication.

Corporate venture capital (CVC) and the nano market

From a March 23, 2024 article on statnano.com, Note: Links have been removed,

Nanotechnology’s enormous potential across various sectors has long attracted the eye of investors, keen to capitalise on its commercial potency.

Yet the initial propulsion provided by traditional venture capital avenues was reined back when the reality of long development timelines, regulatory hurdles, and difficulty in translating scientific advances into commercially viable products became apparent.

While the initial flurry of activity declined in the early part of the 21st century, a new kid on the investing block has proved an enticing option beyond traditional funding methods.

Corporate venture capital has, over the last 10 years, emerged as a key plank in turning ideas into commercial reality.

Simply put, corporate venture capital (CVC) has seen large corporations, recognising the strategic value of nanotechnology, establish their own VC arms to invest in promising start-ups.

The likes of Samsung, Johnson & Johnson and BASF have all sought to get an edge on their competition by sinking money into start-ups in nano and other technologies, which could deliver benefits to them in the long term.

Unlike traditional VC firms, CVCs invest with a strategic lens, aligning their investments with their core business goals. For instance, BASF’s venture capital arm, BASF Venture Capital, focuses on nanomaterials with applications in coatings, chemicals, and construction.

It has an evergreen EUR 250 million fund available and will consider everything from seed to Series B investment opportunities.

Samsung Ventures takes a similar approach, explaining: “Our major investment areas are in semiconductors, telecommunication, software, internet, bioengineering and the medical industry from start-ups to established companies that are about to be listed on the stock market.”

While historically concentrated in North America and Europe, CVC activity in nanotechnology is expanding to Asia, with China being a major player.

China has, perhaps not surprisingly, seen considerable growth over the last decade in nano and few will bet against it being the primary driver of innovation over the next 10 years.

As ever, the long development cycles of emerging nano breakthroughs can frequently deter some CVCs with shorter investment horizons.

2023 Nanotechnology patent applications: which countries top the list?

A March 28, 2024 article from statnano.com provides interesting data concerning patent applications,

In 2023, a total of 18,526 nanotechnology patent applications were published at the United States Patent and Trademark Office (USPTO) and the European Patent Office (EPO). The United States accounted for approximately 40% of these nanotechnology patent publications, followed by China, South Korea, and Japan in the next positions.

According to a statistical analysis conducted by StatNano using data from the Orbit database, the USPTO published 84% of the 18,526 nanotechnology patent applications in 2023, which is more than five times the number published by the EPO. However, the EPO saw a nearly 17% increase in nanotechnology patent publications compared to the previous year, while the USPTO’s growth was around 4%.

Nanotechnology patents are defined, based on the ISO/TS 18110 standard, as those having at least one claim related to nanotechnology, or patents classified with an IPC classification code related to nanotechnology, such as B82.
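A minimal sketch of the classification half of that definition (the record format here is hypothetical): a patent counts as nanotechnology if any of its IPC codes falls under class B82.

```python
def is_nanotech(patent: dict) -> bool:
    """True if any IPC classification code belongs to class B82 (nanotechnology)."""
    return any(code.strip().upper().startswith("B82")
               for code in patent.get("ipc_codes", []))

sample = {"title": "Nanostructured coating", "ipc_codes": ["B82Y 30/00", "C09D 5/00"]}
print(is_nanotech(sample))  # True
```

The other half of the ISO/TS 18110 definition, at least one nanotechnology-related claim, requires analyzing the claim text itself rather than matching classification codes.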

From the March 28, 2024 article,

Top 10 Countries Based on Published Patent Applications in the Field of Nanotechnology in USPTO in 2023

Rank¹ | Country | Nanotech patent applications published at USPTO | At EPO | Growth rate, USPTO | Growth rate, EPO
1 | United States | 6,926 | 492 | 3.20% | 17.40%
2 | South Korea | 1,715 | 476 | 13.40% | 8.40%
3 | China | 1,627 | 569 | 4.20% | 47.40%
4 | Taiwan | 1,118 | 61 | 5.00% | -12.90%
5 | Japan | 1,113 | 445 | -1.20% | 9.30%
6 | Germany | 484 | 229 | -10.20% | 15.70%
7 | England | 331 | 50 | 5.10% | 16.30%
8 | France | 323 | 145 | -8.00% | 17.90%
9 | Canada | 290 | 12 | 5.10% | -14.30%
10 | Saudi Arabia | 268 | 32 | 2.40% | 0.00%

¹ Ranking based on the number of nanotechnology patent applications at the USPTO.

If you have a bit of time and interest, I suggest reading the March 28, 2024 article in its entirety.

Your gas stove may be emitting more polluting nanoparticles than your car exhaust

A February 27, 2024 news item on ScienceDaily describes research results that may startle anyone who has listened to countless people rhapsodize about the superiority of gas stoves over any other,

Cooking on your gas stove can emit more nano-sized particles into the air than vehicles that run on gas or diesel, possibly increasing your risk of developing asthma or other respiratory illnesses, a new Purdue University study has found.

“Combustion remains a source of air pollution across the world, both indoors and outdoors. We found that cooking on your gas stove produces large amounts of small nanoparticles that get into your respiratory system and deposit efficiently,” said Brandon Boor, an associate professor in Purdue’s Lyles School of Civil Engineering, who led this research.

Based on these findings, the researchers would encourage turning on a kitchen exhaust fan while cooking on a gas stove.

The study, published in the journal PNAS [Proceedings of the National Academy of Sciences] Nexus, focused on tiny airborne nanoparticles that are only 1-3 nanometers in diameter, which is just the right size for reaching certain parts of the respiratory system and spreading to other organs.

A February 27, 2024 Purdue University news release by Kayla Albert (also on EurekAlert), which originated the news item, provides more detail about the research, Note: Links have been removed,

Recent studies have found that children who live in homes with gas stoves are more likely to develop asthma. But not much is known about how particles smaller than 3 nanometers, called nanocluster aerosol, grow and spread indoors because they’re very difficult to measure.

“These super tiny nanoparticles are so small that you’re not able to see them. They’re not like dust particles that you would see floating in the air,” Boor said. “After observing such high concentrations of nanocluster aerosol during gas cooking, we can’t ignore these nano-sized particles anymore.”

Using state-of-the-art air quality instrumentation provided by the German company GRIMM AEROSOL TECHNIK, a member of the DURAG GROUP, Purdue researchers were able to measure these tiny particles down to a single nanometer while cooking on a gas stove in a “tiny house” lab. They collaborated with Gerhard Steiner, a senior scientist and product manager for nano measurement at GRIMM AEROSOL. 

Called the Purdue zero Energy Design Guidance for Engineers (zEDGE) lab, the tiny house has all the features of a typical home but is equipped with sensors for closely monitoring the impact of everyday activities on a home’s air quality. With this testing environment and the instrument from GRIMM AEROSOL, a high-resolution particle size magnifier—scanning mobility particle sizer (PSMPS), the team collected extensive data on indoor nanocluster aerosol particles during realistic cooking experiments.

This magnitude of high-quality data allowed the researchers to compare their findings with known outdoor air pollution levels, which are more regulated and understood than indoor air pollution. They found that as many as 10 quadrillion nanocluster aerosol particles could be emitted per kilogram of cooking fuel — matching or exceeding those produced from vehicles with internal combustion engines. 

This would mean that adults and children could be breathing in 10-100 times more nanocluster aerosol from cooking on a gas stove indoors than they would from car exhaust while standing on a busy street.

“You would not use a diesel engine exhaust pipe as an air supply to your kitchen,” said Nusrat Jung, a Purdue assistant professor of civil engineering who designed the tiny house lab with her students and co-led this study.

Purdue civil engineering PhD student Satya Patra made these findings by looking at data collected in the tiny house lab and modeling the various ways that nanocluster aerosol could transform indoors and deposit into a person’s respiratory system.

The models showed that nanocluster aerosol particles are very persistent in their journey from the gas stove to the rest of the house. Trillions of these particles were emitted within just 20 minutes of boiling water or making grilled cheese sandwiches or buttermilk pancakes on a gas stove.

Even though many particles rapidly diffused to other surfaces, the models indicated that approximately 10 billion to 1 trillion particles could deposit into an adult’s head airways and tracheobronchial region of the lungs. These doses would be even higher for children — the smaller the human, the more concentrated the dose.
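Those emission and deposition figures hang together on the back of an envelope. In the sketch below, the burner fuel rate and deposited fraction are my assumptions; only the 10-quadrillion-per-kilogram emission factor comes from the study:

```python
emission_per_kg_fuel = 1e16      # "as many as 10 quadrillion" particles per kg of fuel
fuel_kg_per_min = 0.003          # assumed ~3 g/min for a typical gas burner
cook_minutes = 20                # the 20-minute cooking sessions described above

emitted = emission_per_kg_fuel * fuel_kg_per_min * cook_minutes
deposited_fraction = 1e-3        # assumed: most particles diffuse to surfaces instead
deposited = emitted * deposited_fraction

print(f"emitted   ~ {emitted:.0e} particles")    # ~6e14: "trillions" and then some
print(f"deposited ~ {deposited:.0e} particles")  # ~6e11: inside the 1e10-1e12 range above
```

Even with only a small fraction of emitted particles reaching the airways, the dose lands in the 10 billion to 1 trillion range the models produced.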

The nanocluster aerosol coming from the gas combustion also could easily mix with larger particles entering the air from butter, oil or whatever else is cooking on the gas stove, resulting in new particles with their own unique behaviors.

A gas stove’s exhaust fan would likely redirect these nanoparticles away from your respiratory system, but that remains to be tested.

“Since most people don’t turn on their exhaust fan while cooking, having kitchen hoods that activate automatically would be a logical solution,” Boor said. “Moving forward, we need to think about how to reduce our exposure to all types of indoor air pollutants. Based on our new data, we’d advise that nanocluster aerosol be considered as a distinct air pollutant category.”

This study was supported by a National Science Foundation CAREER award to Boor. Additional financial support was provided by the Alfred P. Sloan Foundation’s Chemistry of Indoor Environments program through an interdisciplinary collaboration with Philip Stevens, a professor in Indiana University’s Paul H. O’Neill School of Public and Environmental Affairs in Bloomington.

Here’s a link to and a citation for the paper,

Dynamics of nanocluster aerosol in the indoor atmosphere during gas cooking by Satya S Patra, Jinglin Jiang, Xiaosu Ding, Chunxu Huang, Emily K Reidy, Vinay Kumar, Paige Price, Connor Keech, Gerhard Steiner, Philip Stevens, Nusrat Jung, Brandon E Boor. PNAS Nexus, Volume 3, Issue 2, February 2024, pgae044, DOI: https://doi.org/10.1093/pnasnexus/pgae044 Published: 27 February 2024

This paper is open access.

Six months after the first one at Bletchley Park, the 2nd AI Safety Summit (May 21-22, 2024) convenes in Korea

This May 20, 2024 University of Oxford press release (also on EurekAlert) was under embargo until almost noon on May 20, 2024, which is a bit unusual in my experience (Note: I have more about the 1st summit and the interest in AI safety at the end of this posting),

Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago. 

Then, the world’s leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May [2024]) approaches, twenty-five of the world’s leading AI scientists say not enough is actually being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies. 

Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”

World’s response not on track in the face of potentially rapid AI progress

According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems—outperforming human abilities across many critical domains—will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts. 

Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.

World-leading AI experts issue call to action

In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, and the late Daniel Kahneman; in total, 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, EU, UK, and other AI powers, and include Turing award winners, Nobel laureates, and authors of standard AI textbooks.

This article marks the first time that such a large and international group of experts has agreed on priorities for global policy makers regarding the risks from advanced AI systems.

Urgent priorities for AI governance

The authors recommend that governments:

  • establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
  • mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
  • require AI companies to prioritise safety, and to demonstrate their systems cannot cause harm. This includes using “safety cases” (used for other safety-critical technologies such as aviation) which shifts the burden for demonstrating safety to AI developers.
  • implement mitigation standards commensurate to the risk-levels posed by AI systems. An urgent priority is to set in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.

According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.

AI impacts could be catastrophic

AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.

Stuart Russell OBE [Order of the British Empire], Professor of Computer Science at the University of California at Berkeley and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations—that “regulation stifles innovation.” That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

Notable co-authors:

  • The world’s most-cited computer scientist (Prof. Hinton), and the most-cited scholar in AI security and privacy (Prof. Dawn Song)
  • China’s first Turing Award winner (Andrew Yao).
  • The authors of the standard textbook on artificial intelligence (Prof. Stuart Russell) and machine learning theory (Prof. Shai Shalev-Schwartz)
  • One of the world’s most influential public intellectuals (Prof. Yuval Noah Harari)
  • A Nobel Laureate in economics, the world’s most-cited economist (Prof. Daniel Kahneman)
  • Department-leading AI legal scholars and social scientists (Lan Xue, Qiqi Gao, and Gillian Hadfield).
  • Some of the world’s most renowned AI researchers from subfields such as reinforcement learning (Pieter Abbeel, Jeff Clune, Anca Dragan), AI security and privacy (Dawn Song), AI vision (Trevor Darrell, Phil Torr, Ya-Qin Zhang), automated machine learning (Frank Hutter), and several researchers in AI safety.

Additional quotes from the authors:

Philip Torr, Professor in AI, University of Oxford:

  • I believe if we tread carefully the benefits of AI will outweigh the downsides, but for me one of the biggest immediate risks from AI is that we develop the ability to rapidly process data and control society, by government and industry. We could risk slipping into some Orwellian future with some form of totalitarian state having complete control.

Dawn Song: Professor in AI at UC Berkeley, most-cited researcher in AI security and privacy:

  •  “Explosive AI advancement is the biggest opportunity and at the same time the biggest risk for mankind. It is important to unite and reorient towards advancing AI responsibly, with dedicated resources and priority to ensure that the development of AI safety and risk mitigation capabilities can keep up with the pace of the development of AI capabilities and avoid any catastrophe”

Yuval Noah Harari, Professor of history at Hebrew University of Jerusalem, best-selling author of ‘Sapiens’ and ‘Homo Deus’, world leading public intellectual:

  • “In developing AI, humanity is creating something more powerful than itself, that may escape our control and endanger the survival of our species. Instead of uniting against this shared threat, we humans are fighting among ourselves. Humankind seems hell-bent on self-destruction. We pride ourselves on being the smartest animals on the planet. It seems then that evolution is switching from survival of the fittest, to extinction of the smartest.”

Jeff Clune, Professor in AI at University of British Columbia and one of the leading researchers in reinforcement learning:

  • “Technologies like spaceflight, nuclear weapons and the Internet moved from science fiction to reality in a matter of years. AI is no different. We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off.”
  • “The risks we describe are not necessarily long-term risks. AI is progressing extremely rapidly. Even just with current trends, it is difficult to predict how capable it will be in 2-3 years. But what very few realize is that AI is already dramatically speeding up AI development. What happens if there is a breakthrough for how to create a rapidly self-improving AI system? We are now in an era where that could happen any month. Moreover, the odds of that being possible go up each month as AI improves and as the resources we invest in improving AI continue to exponentially increase.”

Gillian Hadfield, CIFAR AI Chair and Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto:

  • “AI labs need to walk the walk when it comes to safety. But they’re spending far less on safety than they spend on creating more capable AI systems. Spending one-third on ensuring safety and ethical use should be the minimum.”

  • “This technology is powerful, and we’ve seen it is becoming more powerful, fast. What is powerful is dangerous, unless it is controlled. That is why we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to safety and ethical use, comparable to their funding for AI capabilities.”  

Sheila McIlrath, Professor in AI, University of Toronto, Vector Institute:

  • AI is software. Its reach is global and its governance needs to be as well.
  • Just as we’ve done with nuclear power, aviation, and with biological and nuclear weaponry, countries must establish agreements that restrict development and use of AI, and that enforce information sharing to monitor compliance. Countries must unite for the greater good of humanity.
  • Now is the time to act, before AI is integrated into our critical infrastructure. We need to protect and preserve the institutions that serve as the foundation of modern society.

Frank Hutter, Professor in AI at the University of Freiburg, Head of the ELLIS Unit Freiburg, 3x ERC grantee:

  • To be clear: we need more research on AI, not less. But we need to focus our efforts on making this technology safe. For industry, the right type of regulation will provide economic incentives to shift resources from making the most capable systems yet more powerful to making them safer. For academia, we need more public funding for trustworthy AI and maintain a low barrier to entry for research on less capable open-source AI systems. This is the most important research challenge of our time, and the right mechanism design will focus the community at large to work towards the right breakthroughs.

Here’s a link to and a citation for the paper,

Managing extreme AI risks amid rapid progress; Preparation requires technical research and development, as well as adaptive, proactive governance by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Science 20 May 2024 First Release DOI: 10.1126/science.adn0117

This paper appears to be open access.

For anyone who’s curious about the buildup to these safety summits, I have more in my October 18, 2023 “AI safety talks at Bletchley Park in November 2023” posting, which features excerpts from a number of articles on AI safety. There’s also my November 2, 2023 “UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes” posting, which offers excerpts from articles critiquing the AI safety summit.