Tag Archives: China

Nanomedicine: two stories about wound healing

Different strokes for different folks or, in this case, somewhat different approaches to healing different wounds.

Infected wounds

A July 17, 2024 news item on Nanowerk highlights work from China’s Research Center for Neutrophil Engineering Technology (affiliated with Suzhou Hospital of Nanjing Medical University), Note: A link has been removed,

Infectious wounds represent a critical challenge in healthcare, especially for diabetic patients grappling with ineffective antibiotics and escalating drug resistance. Conventional therapies often inadequately address deep tissue infections, highlighting the need for more innovative solutions. Engineered nanovesicles (NVs) from activated neutrophils provide a precise mechanism to combat pathogens deeply embedded in tissues, potentially revolutionizing the management of complex infectious wounds and boosting overall treatment efficacy.

Researchers at the Research Center for Neutrophil Engineering Technology have achieved a significant advancement in medical nanotechnology. Their findings, published in the journal Burns & Trauma (“Engineered nanovesicles from activated neutrophils with enriched bactericidal proteins have molecular debridement ability and promote infectious wound healing”), detail the creation of novel neutrophil-engineered NVs.

A July 17, 2024 Maximum Academic Press ‘press release’ on EurekAlert, which originated the news item, goes on to describe what the researchers discovered,

This study reveals that engineered NVs derived from activated neutrophils not only mimic the physical properties of exosomes but surpass them due to their rich content of bactericidal proteins. Extensively tested both in vitro and in vivo, these NVs effectively combat key pathogens like Staphylococcus aureus and Escherichia coli, which contribute to deep tissue infections. The NVs promote rapid debridement, significantly reduce bacterial populations, and boost collagen deposition, thus hastening the healing process. This research positions NVs as a formidable alternative to traditional antibiotics, introducing a novel method for treating resistant infections and advancing the field of wound care.

Dr. Bingwei Sun, the lead researcher, emphasized, “These engineered NVs mark a major advancement in the management of infectious diseases. By targeting the infection site with high levels of bactericidal proteins, we achieve swift and effective healing, thereby opening new paths for the treatment of chronic and resistant infections.”

The advent of activated neutrophil-derived NVs signifies a major leap in medical technology, potentially reducing healthcare costs and enhancing patient outcomes. This innovation not only promises to improve wound healing in diabetic and other chronic infection patients but also sets the stage for further development of biologically inspired therapeutic strategies.

Here’s a link to and a citation for the paper,

Engineered nanovesicles from activated neutrophils with enriched bactericidal proteins have molecular debridement ability and promote infectious wound healing by Hangfei Jin, Xiao Wen, Ran Sun, Yanzhen Yu, Zaiwen Guo, Yunxi Yang, Linbin Li, and Bingwei Sun. Burns & Trauma, Volume 12, 2024, tkae018, DOI: https://doi.org/10.1093/burnst/tkae018 Published: 20 June 2024

This paper is open access.

Diabetic wounds

A July 17, 2024 news item on phys.org announces work from another team developing its own approach to healing wounds, albeit, a different category of wounds,

Diabetic wounds are notoriously challenging to treat, due to prolonged inflammation and a high risk of infection. Traditional treatments generally offer only passive protection and fail to dynamically interact with the wound environment.

In a new article published in Burns & Trauma on June 5, 2024, a research team from Mudanjiang Medical University and allied institutions assesses the effectiveness of PLLA nanofibrous membranes.

Infused with curcumin and silver nanoparticles, these membranes are designed to substantially enhance the healing processes in diabetic wounds by targeting fundamental issues like excessive inflammation and infection.

This research centered on developing PLLA/C/Ag nanofibrous membranes through air-jet spinning, achieving a consistent fiber distribution essential for effective therapeutic delivery. The membranes boast dual benefits: antioxidant properties that reduce harmful reactive oxygen species in wound environments and potent antibacterial activity that decreases infection risks.

A July 17, 2024 Maximum Academic Press ‘press release’ on EurekAlert provides more information about the research, Note 1: This press release appears to have originated the news item, which was then edited and rewritten; Note 2: Links have been removed,

In a pioneering study, researchers have developed a poly (L-lactic acid) (PLLA) nanofibrous membrane enhanced with curcumin and silver nanoparticles (AgNPs), aimed at improving the healing of diabetic wounds. This advanced dressing targets critical barriers such as inflammation, oxidative stress, and bacterial infections, which hinder the recovery process in diabetic patients. The study’s results reveal a promising therapeutic strategy that could revolutionize care for diabetes-related wounds.

Diabetic wounds are notoriously challenging to heal, with prolonged inflammation and a high risk of infection. Traditional treatments generally offer only passive protection and fail to dynamically interact with the wound environment. The creation of bioactive dressings like the poly (L-lactic acid) (PLLA) nanofibrous membranes incorporated with AgNPs and curcumin (PLLA/C/Ag) membranes signifies a crucial shift towards therapies that actively correct imbalances in the wound healing process, offering a more effective solution for managing diabetic wounds.

Published (DOI: 10.1093/burnst/tkae009) in Burns & Trauma on June 5, 2024, this trailblazing research by a team from Mudanjiang Medical University and allied institutions assesses the effectiveness of PLLA nanofibrous membranes. Infused with curcumin and silver nanoparticles, these membranes are designed to substantially enhance the healing processes in diabetic wounds by targeting fundamental issues like excessive inflammation and infection.

This research centered on developing PLLA/C/Ag nanofibrous membranes through air-jet spinning, achieving a consistent fiber distribution essential for effective therapeutic delivery. The membranes boast dual benefits: antioxidant properties that reduce harmful reactive oxygen species in wound environments and potent antibacterial activity that decreases infection risks. In vivo tests on diabetic mice demonstrated the membranes’ capability to promote crucial healing processes such as angiogenesis and collagen deposition. These findings illustrate that PLLA/C/Ag membranes not only protect wounds but also actively support and expedite the healing process, marking them as a significant therapeutic innovation for diabetic wound management with potential for broader chronic wound care applications.

Dr. Yanhui Chu, a principal investigator of the study, highlighted the importance of these developments: “The PLLA/C/Ag membranes are a significant breakthrough in diabetic wound care. Their ability to effectively modulate the wound environment and enhance healing could establish a new standard in treatment, providing hope to millions affected by diabetes-related complications.”

The deployment of PLLA/C/Ag nanofibrous membranes in clinical environments could transform the treatment of diabetic wounds, offering a more active and effective approach. Beyond diabetes management, this technology has potential applications in various chronic wounds, paving the way for future breakthroughs in bioactive wound dressings. This study not only advances our understanding of wound management but also opens new paths for developing adaptive treatments for complex wound scenarios.

Here’s a link to and a citation for the paper,

Immunomodulatory poly(L-lactic acid) nanofibrous membranes promote diabetic wound healing by inhibiting inflammation, oxidation and bacterial infection by Yan Wu, Jin Zhang, Anqi Lin, Tinglin Zhang, Yong Liu, Chunlei Zhang, Yongkui Yin, Ran Guo, Jie Gao, Yulin Li, and Yanhui Chu. Burns & Trauma, Volume 12, 2024, tkae009, DOI: https://doi.org/10.1093/burnst/tkae009 Published: 05 June 2024

This paper is open access.

Science publishing

As I think most people know, publishing of any kind is a tough business, particularly these days. This instability has led to some interesting corporate relationships. For example, Springer Nature (a German-British academic publisher) is the outcome of several mergers, as the Springer Nature Wikipedia entry notes,

The company originates from several journals and publishing houses, notably Springer-Verlag, which was founded in 1842 by Julius Springer in Berlin[4] (the grandfather of Bernhard Springer who founded Springer Publishing in 1950 in New York),[5] Nature Publishing Group which has published Nature since 1869,[6] and Macmillan Education, which goes back to Macmillan Publishers founded in 1843.[7]

Springer Nature was formed in 2015 by the merger of Nature Publishing Group, Palgrave Macmillan, and Macmillan Education (held by Holtzbrinck Publishing Group) with Springer Science+Business Media (held by BC Partners). Plans for the merger were first announced on 15 January 2015.[8] The transaction was concluded in May 2015 with Holtzbrinck having the majority 53% share.[9]

Now Nature, once an independent science journal, is owned by Springer Nature. By the way, Springer Nature also acquired Scientific American, another major science publication.

Relatedly, seeing Maximum Academic Press as the issuer of the press releases mentioned here aroused my curiosity. I hadn’t stumbled across the company before but found this on the company’s About Us webpage, Note: Links have been removed,

Maximum Academic Press (MAP) is an independent publishing company focused on publishing gold open access academic journals. From 2020 to now, MAP has successfully launched 24 academic journals covering the research fields of agriculture, biology, environmental sciences, engineering, and the humanities and social sciences.

Professor Zong-Ming (Max) Cheng, chief editor and founder of MAP, earned his PhD from Cornell University in 1991 and served as an Assistant, Associate, and full Professor at North Dakota State University and the University of Tennessee for over 30 years. Prior to establishing MAP, Dr. Cheng launched Horticulture Research (initially published by Nature Publishing Group) in 2014, Plant Phenomics (published by the American Association for the Advancement of Science, AAAS) in 2019, and BioDesign Research (published by AAAS) in 2020, serving as Editor-in-Chief, Co-Editor-in-Chief, and executive editor, respectively. Dr. Cheng wishes to apply the successful experience of launching and managing these three high-quality journals to MAP-published journals with the highest quality and ethics standards.

It was a bit of a surprise to see that MAP doesn’t publish Burns & Trauma, the journal where the studies cited here were published. From the Burns & Trauma About the Journal webpage on the Oxford University Press website for Oxford Academic journals,

Aims and scope

Burns & Trauma is an open access, peer-reviewed journal publishing the latest developments in basic, clinical, and translational research related to burns and traumatic injuries, with a special focus on various aspects of biomaterials, tissue engineering, stem cells, critical care, immunobiology, skin transplantation, prevention, and regeneration of burns and trauma injury.

Society affiliations

Burns & Trauma is the official journal of Asia-Pacific Society of Scar Medicine, Chinese Burn Association, Chinese Burn Care and Rehabilitation Association and Chinese Society for Scar Medicine. It is sponsored by the Institute of Burn Research, Southwest Hospital (First Affiliated Hospital of Army Medical University), China.

I don’t know what to make of it all but I can safely say scientific publishing has gotten quite complicated since the days that Nature first published its own eponymous journal.

Better (safer, cheaper) battery invented for wearable tech

A June 5, 2024 news item on phys.org announces new research into ‘aqueous’ wearable batteries,

Researchers have developed a safer, cheaper, better performing and more flexible battery option for wearable devices. A paper describing the “recipe” for their new battery type was published in the journal Nano Research Energy on June 3 [2024].

Fitness trackers. Smart watches. Virtual-reality headsets. Even smart clothing and implants. Wearable smart devices are everywhere these days. But for greater comfort, reliability and longevity, these devices will require greater levels of flexibility and miniaturization of their energy storage mechanisms, which are often frustratingly bulky, heavy and fragile. On top of this, any improvements cannot come at the expense of safety.

As a result, in recent years, a great deal of battery research has focused on the development of “micro” flexible energy storage devices, or MFESDs. A range of different structures and electrochemical foundations have been explored, and among them, aqueous micro batteries offer many distinct advantages.

A June 5, 2024 Tsinghua University press release on EurekAlert, which originated the news item, provides more detail,

Aqueous batteries—those that use a water-based solution as an electrolyte (the medium that allows ions to move within the battery, thereby creating an electric circuit)—are nothing new. They have been around since the late 19th century. However, their energy density—the amount of energy contained in the battery per unit of volume—is too low for use in things like electric vehicles, as they would take up too much space. Lithium-ion batteries are far more appropriate for such uses.

At the same time, aqueous batteries are much less flammable, and thus safer, than lithium-ion batteries. They are also much cheaper. As a result of this more robust safety and low cost, aqueous options have increasingly been explored as one of the better options for MFESDs. These are termed aqueous micro batteries, or just AMBs.

“Up till now, sadly, AMBs have not lived up to their potential,” said Ke Niu, a materials scientist with the Guangxi Key Laboratory of Optical and Electronic Materials and Devices at the Guilin University of Technology—one of the lead researchers on the team. “To be able to be used in a wearable device, they need to withstand a certain degree of real-world bending and twisting. But most of those explored so far fail in the face of such stress.”

To overcome this, any fractures or failure points in an AMB would need to self-heal after such stress. Unfortunately, the self-healing AMBs developed so far have tended to depend on metallic compounds as the charge carriers in the battery’s electric circuit. This has the undesirable side effect of strong reactions between the metal ions and the materials that the electrodes (the battery’s positive and negative electrical conductors) are made of. This in turn reduces the battery’s reaction rate (the speed at which the electrochemical reactions at the heart of any battery take place), drastically limiting performance.

“So we started investigating the possibility of non-metallic charge carriers, as these would not suffer from the same difficulties from interaction with the electrodes,” added Junjie Shi, another leading member of the team and a researcher with the School of Physics and Center for Nanoscale Characterization & Devices (CNCD) at the Huazhong University of Science and Technology in Wuhan.

The research team alighted upon ammonium ions, derived from abundantly available ammonium salts, as the optimal charge carriers. They are far less corrosive than other options and have a wide electrochemical stability window.

“But ammonium ions are not the only ingredient in the recipe needed to make our batteries self-healing,” said Long Zhang, the third leading member of the research team, also at CNCD.

For that, the team incorporated the ammonium salts into a hydrogel—a polymer material that can absorb and retain a large amount of water without disturbing its structure. This gives hydrogels impressive flexibility—delivering precisely the sort of self-healing character needed. Gelatin is probably the most well-known hydrogel, although the researchers in this case opted for a polyvinyl alcohol hydrogel (PVA) for its great strength and low cost.

To optimize compatibility with the ammonium electrolyte, titanium carbide—a ‘2D’ nanomaterial only a single layer of atoms thick—was chosen as the anode (the negative electrode) material for its excellent conductivity. Meanwhile, manganese dioxide, already commonly used in dry cell batteries, was woven into a carbon nanotube matrix (again to improve conductivity) for the cathode (the positive electrode).

Testing of the prototype battery showed it exhibited excellent energy density, power density, cycle life, and flexibility, with performance retained even after ten self-healing cycles.

The team now aims to further develop and optimise their prototype in preparation for commercial production.


About Nano Research Energy

Nano Research Energy was launched by Tsinghua University Press and is exclusively available via SciOpen, aiming to be an international, open-access, interdisciplinary journal. It publishes research on cutting-edge advanced nanomaterials and nanotechnology for energy and is dedicated to exploring various aspects of energy-related research that utilizes nanomaterials and nanotechnology, including but not limited to energy generation, conversion, storage, conservation, and clean energy. Nano Research Energy publishes four types of manuscripts—Communications, Research Articles, Reviews, and Perspectives—in open-access form.

About SciOpen

SciOpen is a professional open access platform for the discovery of scientific and technical content published by Tsinghua University Press and its publishing partners, providing the scholarly publishing community with innovative technology and market-leading capabilities. SciOpen provides end-to-end services across manuscript submission, peer review, content hosting, analytics, and identity management, along with expert advice to support each journal’s development, offering a range of options across all functions, such as journal layout, production services, editorial services, marketing and promotions, and online functionality. By digitalizing the publishing process, SciOpen widens the reach, deepens the impact, and accelerates the exchange of ideas.

Here’s a link to and a citation for the paper,

A self-healing aqueous ammonium-ion micro batteries based on PVA-NH4Cl hydrogel electrolyte and MXene-integrated perylene anode by Ke Niu, Junjie Shi, Long Zhang, Yang Yue, Mengjie Wang, Qixiang Zhang, Yanan Ma, Shuyi Mo, Shaofei Li, Wenbiao Li, Li Wen, Yixin Hou, Fei Long, Yihua Gao. Nano Research Energy (2024) DOI: https://doi.org/10.26599/NRE.2024.9120127 Published: 03 June 2024

This paper is open access by means of a “Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.”

Corporate venture capital (CVC) and the nanotechnology market plus 2023’s top 10 countries’ nanotechnology patents

I have two brief nanotechnology commercialization stories from the same publication.

Corporate venture capital (CVC) and the nano market

From a March 23, 2024 article on statnano.com, Note: Links have been removed,

Nanotechnology’s enormous potential across various sectors has long attracted the eye of investors, keen to capitalise on its commercial potency.

Yet the initial propulsion provided by traditional venture capital avenues was reined back when the reality of long development timelines, regulatory hurdles, and difficulty in translating scientific advances into commercially viable products became apparent.

While the initial flurry of activity declined in the early part of the 21st century, a new kid on the investing block has proved an enticing option beyond traditional funding methods.

Corporate venture capital has, over the last 10 years, emerged as a key plank in turning ideas into commercial reality.

Simply put, corporate venture capital (CVC) has seen large corporations, recognising the strategic value of nanotechnology, establish their own VC arms to invest in promising start-ups.

The likes of Samsung, Johnson & Johnson and BASF have all sought to get an edge on their competition by sinking money into start-ups in nano and other technologies, which could deliver benefits to them in the long term.

Unlike traditional VC firms, CVCs invest with a strategic lens, aligning their investments with their core business goals. For instance, BASF’s venture capital arm, BASF Venture Capital, focuses on nanomaterials with applications in coatings, chemicals, and construction.

It has an evergreen EUR 250 million fund available and will consider everything from seed to Series B investment opportunities.

Samsung Ventures takes a similar approach, explaining: “Our major investment areas are in semiconductors, telecommunication, software, internet, bioengineering and the medical industry, from start-ups to established companies that are about to be listed on the stock market.”

While historically concentrated in North America and Europe, CVC activity in nanotechnology is expanding to Asia, with China being a major player.

China has, perhaps not surprisingly, seen considerable growth in nano over the last decade, and few will bet against it being the primary driver of innovation over the next 10 years.

As ever, the long development cycles of emerging nano breakthroughs can frequently deter some CVCs with shorter investment horizons.

2023 Nanotechnology patent applications: which countries top the list?

A March 28, 2024 article from statnano.com provides interesting data concerning patent applications,

In 2023, a total of 18,526 nanotechnology patent applications were published at the United States Patent and Trademark Office (USPTO) and the European Patent Office (EPO). The United States accounted for approximately 40% of these nanotechnology patent publications, followed by China, South Korea, and Japan in the next positions.

According to a statistical analysis conducted by StatNano using data from the Orbit database, the USPTO published 84% of the 18,526 nanotechnology patent applications in 2023, which is more than five times the number published by the EPO. However, the EPO saw a nearly 17% increase in nanotechnology patent publications compared to the previous year, while the USPTO’s growth was around 4%.

Nanotechnology patents are defined, based on the ISO/TS 18110 standard, as those having at least one claim related to nanotechnology or patents classified with an IPC classification code related to nanotechnology, such as B82.

From the March 28, 2024 article,

Top 10 Countries Based on Published Patent Applications in the Field of Nanotechnology in USPTO in 2023

Rank¹ | Country | Nanotechnology patent applications published in USPTO | Published in EPO | Growth rate in USPTO | Growth rate in EPO
1 | United States | 6,926 | 492 | 3.20% | 17.40%
2 | South Korea | 1,715 | 476 | 13.40% | 8.40%
3 | China | 1,627 | 569 | 4.20% | 47.40%
4 | Taiwan | 1,118 | 61 | 5.00% | -12.90%
5 | Japan | 1,113 | 445 | -1.20% | 9.30%
6 | Germany | 484 | 229 | -10.20% | 15.70%
7 | England | 331 | 50 | 5.10% | 16.30%
8 | France | 323 | 145 | -8.00% | 17.90%
9 | Canada | 290 | 12 | 5.10% | -14.30%
10 | Saudi Arabia | 268 | 3 | 22.40% | 0.00%

¹ Ranking based on the number of nanotechnology patent applications at the USPTO
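As a quick sanity check, the headline figures in the StatNano analysis hang together. A minimal sketch in Python (the 18,526 total, the 84% USPTO share, and the US counts are taken from the article and table above):

```python
# Sanity-check the headline figures from the StatNano analysis above.
total = 18_526                       # all nanotech patent applications, USPTO + EPO, 2023
uspto = round(total * 0.84)          # the USPTO published 84% of the total
epo = total - uspto

# "more than five times the number published by the EPO"
uspto_to_epo_ratio = uspto / epo     # ≈ 5.25

# "The United States accounted for approximately 40%" (USPTO + EPO filings, table row 1)
us_share = 100 * (6_926 + 492) / total

print(f"USPTO: {uspto}, EPO: {epo}, ratio: {uspto_to_epo_ratio:.2f}")  # ratio 5.25
print(f"US share: {us_share:.1f}%")                                    # 40.0%
```

The arithmetic confirms both claims: the USPTO-to-EPO ratio comes out just above 5, and the US share of all filings lands at roughly 40%.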

If you have a bit of time and interest, I suggest reading the March 28, 2024 article in its entirety.

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A very software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.
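The staggered applicability dates above are simple month offsets from whenever the act enters into force. A minimal sketch of the schedule (the entry-into-force date below is a placeholder chosen purely for illustration, not an official date):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for safety)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, min(d.day, 28))

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 7, 1)

# Offsets (in months) taken from the EU timeline quoted above.
milestones = {
    "ban on unacceptable-risk systems": add_months(entry_into_force, 6),
    "codes of practice": add_months(entry_into_force, 9),
    "general-purpose AI transparency rules": add_months(entry_into_force, 12),
    "fully applicable": add_months(entry_into_force, 24),
    "high-risk system obligations": add_months(entry_into_force, 36),
}

for item, when in milestones.items():
    print(f"{when}: {item}")
```

Swapping in the actual entry-into-force date, once known, gives the real compliance calendar.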

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. My January 20, 2024 posting, “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” includes information about legislative efforts, although my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US always has to be considered in these matters. A November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also the January 29, 2024 US White House “Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the website of the University of Cambridge’s Centre for the Study of Existential Risk.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

Who owns prehistory? The relationship between science and sovereignty

Brachiopod (photo taken in Alberta, Canada). Courtesy: AlbertaWow.com

This February 28, 2024 news item on phys.org takes the discussion about appropriating cultural artifacts out of the world of art and into museum fossil collections, Note: Links have been removed,

Many museums and other cultural institutions in the West have faced, in recent years, demands for artistic repatriation. The Elgin Marbles, currently housed in the British Museum, are perhaps the most prominent subject of this charge, with numerous appeals having been made for their return to their original home in Greece.

Taking up the issue of cultural imperialism is a new article in Isis [journal of the History of Science Society].

“Fossils and Sovereignty: Science Diplomacy and the Politics of Deep Time in the Sino-American Fossil Dispute of the 1920s,” by Hsiao-pei Yen, narrates the controversy surrounding paleontological excavation in the interwar period through a conflict between the American Museum of Natural History and the emerging Chinese scientific nationalist movement and, ultimately, examines the place of fossil ownership in global politics.

A February 28, 2024 (?) University of Chicago Press news release, which originated the news item, delves further into the topic,

In the early decades of the 20th century, many scientists were convinced that the key to understanding human origins, the so-called “missing link,” could be found in Central Asia. A delegation from the American Museum of Natural History (AMNH) was sent to the Gobi Desert in search of this great intellectual prize and failed to find any evidence of human ancestry in the region, but, over the course of the first half of the 1920s, sent many other valuable fossils and archaeological relics back to the United States. In 1928, however, amidst the changing political landscape of Chiang Kai-shek’s revolutionary reunification of China, the Americans were frustrated to discover that their findings had been detained under orders of the Beijing Society for the Preservation of Cultural Objects (SPCO). The resulting negotiations between the Americans and the Chinese inspired conflicting perspectives not only regarding the ownership of these prehistoric remains, but also the very nature of the relationship between fossils and sovereignty.

Nationalists in China were keen to correct the historical imbalance in treaties concerning trade between their country and rich Western nations. The debate over the fate of relics uncovered in China represented a unique opportunity to reclaim a measure of autonomy. As Yen writes, “The antiquities were deemed priceless national treasures not only because they were a link to China’s past but because … they were also resources of cultural capital with high academic value as research objects that would enable native scholars to establish and develop their own knowledge framework.” The representatives of the AMNH and those of the SPCO initially agreed to share botanical, zoological, and mineral specimens, while all archaeological materials and invertebrate fossils were to be kept in China, and all vertebrate fossils sent to America, with duplicates returning to their home country. The AMNH was insistent on this distinction between archaeological remains and fossils. Paleontological fossils, they claimed, “were formed in geological time and had no historical or cultural attachment to the people of the place where they were found.” As a result, argued the AMNH, they could be exported and retained by representatives of any country.

Following this agreement, however, the Chinese government called for a reclassification of fossils as sovereign property. This decision, part of a “vertical turn” in geopolitical history, was summarized by one government official: “’the territory of a nation-state is not limited to the surface. The terrain up to the sky and down to the subterranean should all be included in the national domain.’” As of 1930, China rejected the interpretation of fossils and the geological time they represented as universal, and therefore easily exploitable by more powerful countries, and claimed them instead as local, and contingent. The protections around Chinese fossils by no means limited the production of knowledge surrounding their discovery, but meant, instead, that the Chinese state had more control over their study and their diplomatic applications. The author concludes, “A vertical sensitivity enacted a new political and temporal imagination: geoscience and Earth history might be universal, but they should be explored within national boundaries.”

Since its inception in 1912, Isis has featured scholarly articles, research notes, and commentary on the history of science, medicine, and technology and their cultural influences. Review essays and book reviews on new contributions to the discipline are also included. An official publication of the History of Science Society, Isis is the oldest English-language journal in the field.

Founded in 1924, the History of Science Society is the world’s largest society dedicated to understanding science, technology, medicine, and their interactions with society in historical context.

Here’s a link to and a citation for the paper,

Fossils and Sovereignty: Science Diplomacy and the Politics of Deep Time in the Sino-American Fossil Dispute of the 1920s by Hsiao-pei Yen. Isis, Volume 115, Number 1, March 2024. DOI: https://doi.org/10.1086/729176

This paper is behind a paywall.

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here’s some of the latest research. If by ‘non-invasive’ you mean that electrodes are not being implanted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI). Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study, participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word-level segments without additional aids such as eye-tracking, which restricts the practical application of these systems. The new technology is able to be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation, however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
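For readers curious about what a BLEU-1 score of 40% actually measures, the arithmetic is simple enough to sketch. Here is a minimal unigram (BLEU-1) implementation; the function name and example sentences are my own illustrations, not from the study:

```python
from collections import Counter
import math

def bleu1(candidate: str, reference: str) -> float:
    """BLEU-1: clipped unigram precision times a brevity penalty."""
    cand = candidate.split()
    ref = reference.split()
    if not cand:
        return 0.0
    ref_counts = Counter(ref)
    # Each candidate word counts only up to the number of times
    # it appears in the reference ("clipping").
    overlap = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    precision = overlap / len(cand)
    # The brevity penalty stops trivially short candidates from
    # scoring a perfect precision.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# Four of five words match ("man" vs "author"), so the score is 0.8,
# echoing the press release's synonym example.
print(bleu1("the man reads a book", "the author reads a book"))
```

Real evaluations (and the 90% figure for mature translation systems) use standardized tokenization and corpus-level averaging, but the core idea is this word-overlap ratio.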

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on salon.com (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.
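The "blurring" trick Reveley describes is easy to picture: spatially smoothing a high-resolution volume throws away fine detail, mimicking a coarser sensor. A minimal sketch, assuming a synthetic 3D grid of blood-oxygenation values and an arbitrary smoothing width (none of these numbers come from the actual study):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical 3D volume of blood-oxygenation values (fMRI-like grid).
rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 32, 32))

# Gaussian smoothing mimics the coarser spatial resolution of an
# fNIRS measurement; sigma here is an assumed, illustrative value.
fnirs_like = gaussian_filter(volume, sigma=2.0)

# Blurring roughly preserves the overall signal level but shrinks
# fine-grained (voxel-to-voxel) variance.
print(round(float(volume.std()), 3), round(float(fnirs_like.std()), 3))
```

The decoder's robustness to this kind of degradation is what suggests it might eventually work with wearable, lower-resolution hardware.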

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article), Note: Links have been removed,

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant of Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley’s January 18, 2024 article on salon.com or as originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!

Photonic synapses with low power consumption (and a few observations)

This work on brainlike (neuromorphic) computing was announced in a June 30, 2022 Compuscript Ltd news release on EurekAlert,

Photonic synapses with low power consumption and high sensitivity are expected to integrate sensing-memory-preprocessing capabilities

A new publication in Opto-Electronic Advances (DOI: 10.29026/oea.2022.210069) discusses how photonic synapses with low power consumption and high sensitivity are expected to integrate sensing-memory-preprocessing capabilities.

Neuromorphic photonics/electronics is the future of ultralow-energy intelligent computing and artificial intelligence (AI). In recent years, inspired by the human brain, artificial neuromorphic devices have attracted extensive attention, especially for simulating visual perception and memory storage. Because of their advantages of high bandwidth, high interference immunity, ultrafast signal transmission and lower energy consumption, neuromorphic photonic devices are expected to realize real-time response to input data. In addition, photonic synapses can realize a non-contact writing strategy, which contributes to the development of wireless communication.

The use of low-dimensional materials provides an opportunity to develop complex brain-like systems and low-power memory logic computers. For example, large-scale, uniform and reproducible transition metal dichalcogenides (TMDs) show great potential for miniaturization and low-power biomimetic device applications due to their excellent charge-trapping properties and compatibility with traditional CMOS processes.

The von Neumann architecture, with its discrete memory and processor, leads to high power consumption and low efficiency in traditional computing. Therefore, sensor-memory fusion or sensor-memory-processor integrated neuromorphic architectures can meet the ever-growing demands of big data and AI for low-power, high-performance devices. Artificial synaptic devices are the most important components of neuromorphic systems, and evaluating their performance will help to further apply them to more complex artificial neural networks (ANNs).

Chemical vapor deposition (CVD)-grown TMDs inevitably contain defects or impurities, which give rise to a persistent photoconductivity (PPC) effect. TMD photonic synapses that integrate synaptic properties and optical detection capabilities show great advantages in neuromorphic systems for low-power visual information perception and processing as well as brain-like memory.

The research Group of Optical Detection and Sensing (GODS) has reported a three-terminal photonic synapse based on large-area, uniform multilayer MoS2 films. The reported device detects optical pulses as short as 5 μs with ultralow power consumption of about 40 aJ, far better than previously reported photonic synapses and several orders of magnitude lower than the corresponding parameters of biological synapses, indicating that it can be used in more complex ANNs.

The photoconductivity of the CVD-grown MoS2 channel is regulated by the photostimulation signal, which enables the device to simulate short-term synaptic plasticity (STP), long-term synaptic plasticity (LTP), paired-pulse facilitation (PPF) and other synaptic properties. The reported photonic synapse can therefore simulate human visual perception, and its detection wavelength can be extended to near-infrared light. As the most important human learning system, the visual perception system receives 80% of learning information from the outside world; with the continuous development of AI, there is an urgent need for low-power, high-sensitivity visual perception systems that can effectively receive external information.

In addition, with the assistance of a gate voltage, this photonic synapse can simulate classical Pavlovian conditioning and the regulation of memory by different emotions; for example, positive emotions enhance memory ability and negative emotions weaken it. Furthermore, the significant contrast in the strength of STP and LTP in the reported photonic synapse suggests that it can preprocess the input light signal. These results indicate that photostimulation and backgate control can effectively regulate the conductivity of the MoS2 channel layer by adjusting carrier trapping/detrapping processes.

Moreover, the photonic synapse presented in this paper is expected to integrate sensing-memory-preprocessing capabilities, which can be used for real-time image detection and in-situ storage, and also offers a possible route past the von Neumann bottleneck.
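For readers who want a feel for what those synaptic terms mean in practice, here is a toy leaky-integrator sketch of my own; the decay constants and amplitudes are invented for illustration and are not the device physics reported in the paper:

```python
# Toy model of photonic synaptic plasticity (illustrative only; not the
# MoS2 device physics from the paper). Each light pulse adds to the
# synaptic "weight" (photocurrent), which then decays: a fast-decaying
# component stands in for short-term plasticity (STP) and a slowly
# decaying residual for long-term plasticity (LTP).

import math

def response(pulse_times, t, a_fast=1.0, tau_fast=0.05,
             a_slow=0.2, tau_slow=5.0):
    """Synaptic weight at time t (s) after optical pulses at pulse_times (s)."""
    w = 0.0
    for tp in pulse_times:
        if t >= tp:
            dt = t - tp
            w += a_fast * math.exp(-dt / tau_fast)  # short-lived (STP-like)
            w += a_slow * math.exp(-dt / tau_slow)  # long-lived (LTP-like)
    return w

# Paired-pulse facilitation (PPF): the response to the second of two
# closely spaced pulses exceeds the response to a single pulse, because
# the residue of the first pulse has not yet decayed.
single = response([0.0], 0.02)               # one pulse, read 20 ms later
paired = response([0.0, 0.02], 0.04)         # second pulse 20 ms after the first
ppf_index = paired / single
print(f"PPF index: {ppf_index:.2f}")         # > 1 means facilitation
```

The point of the sketch is only that a fast-decaying component plus a slow residual reproduces the qualitative behaviours named in the press release: STP (the fast decay), LTP (the slow residual), and PPF (a second pulse arriving before the first has decayed produces a larger response).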

Here’s a link to and a citation for the paper,

Photonic synapses with ultralow energy consumption for artificial visual perception and brain storage by Caihong Li, Wen Du, Yixuan Huang, Jihua Zou, Lingzhi Luo, Song Sun, Alexander O. Govorov, Jiang Wu, Hongxing Xu, Zhiming Wang. Opto-Electron Adv Vol 5, No 9 210069 (2022). doi: 10.29026/oea.2022.210069

This paper is open access.

Observations

I don’t have much to say about the research itself other than that I believe this is the first time I’ve seen a news release about neuromorphic computing research from China.

It’s China that most interests me, especially these bits from the June 30, 2022 Compuscript Ltd news release on EurekAlert,

The Group of Optical Detection and Sensing (GODS) [emphasis mine] was established in 2019. It is a research group focusing on compound semiconductors, lasers, photodetectors, and optical sensors. GODS has established a well-equipped laboratory with research facilities such as a Molecular Beam Epitaxy system, an IR detector test system, etc. GODS is leading several research projects funded by the NSFC and National Key R&D Programmes. GODS has published more than 100 research articles in Nature Electronics, Light: Science and Applications, Advanced Materials and other well-known international journals, with total citations beyond 8,000.

Jiang Wu obtained his Ph.D. from the University of Arkansas Fayetteville in 2011. After his Ph.D., he joined UESTC as associate professor and later professor. He joined University College London [UCL] as a research associate in 2012 and then lecturer in the Department of Electronic and Electrical Engineering at UCL from 2015 to 2018. He is now a professor at UESTC [University of Electronic Science and Technology of China] [emphases mine]. His research interests include optoelectronic applications of semiconductor heterostructures. He is a Fellow of the Higher Education Academy and Senior Member of IEEE.

Opto-Electronic Advances (OEA) is a high-impact, open access, peer-reviewed monthly SCI journal with an impact factor of 9.682 (Journal Citation Reports for IF 2020). Since its launch in March 2018, OEA has been indexed in the SCI, EI, DOAJ, Scopus, CA and ICI databases and has expanded its Editorial Board to 36 members from 17 countries and regions (average h-index 49). [emphases mine]

The journal is published by The Institute of Optics and Electronics, Chinese Academy of Sciences, aiming at providing a platform for researchers, academicians, professionals, practitioners, and students to impart and share knowledge in the form of high quality empirical and theoretical research papers covering the topics of optics, photonics and optoelectronics.

The research group’s awkward name was almost certainly developed with the rather grandiose acronym, GODS, in mind. I don’t think you could get away with doing this in an English-speaking country as your colleagues would mock you mercilessly.

It’s Jiang Wu’s academic and work history that’s of most interest as it might provide insight into China’s Young Thousand Talents program. A January 5, 2023 American Association for the Advancement of Science (AAAS) news release describes the program,

In a systematic evaluation of China’s Young Thousand Talents (YTT) program, which was established in 2010, researchers find that China has been successful in recruiting and nurturing high-caliber Chinese scientists who received training abroad. Many of these individuals outperform overseas peers in publications and access to funding, the study shows, largely due to access to larger research teams and better research funding in China. Not only do the findings demonstrate the program’s relative success, but they also hold policy implications for the increasing number of governments pursuing means to tap expatriates for domestic knowledge production and talent development.

China is a top sender of international students to United States and European Union science and engineering programs. The YTT program was created to recruit and nurture the productivity of high-caliber, early-career, expatriate scientists who return to China after receiving Ph.D.s abroad. Although there has been a great deal of international attention on the YTT, some associated with the launch of the U.S.’s controversial China Initiative and federal investigations into academic researchers with ties to China, there has been little evidence-based research on the success, impact, and policy implications of the program itself.

Dongbo Shi and colleagues evaluated the YTT program’s first 4 cohorts of scholars and compared their research productivity to that of their peers who remained overseas. Shi et al. found that China’s YTT program successfully attracted high-caliber – but not top-caliber – scientists. However, those young scientists who did return outperformed others in publications across journal-quality tiers – particularly in last-authored publications. The authors suggest that this is due to YTT scholars’ greater access to larger research teams and better research funding in China. The authors say the dearth of such resources in the U.S. and E.U. “may not only expedite expatriates’ return decisions but also motivate young U.S.- and E.U.-born scientists to seek international research opportunities.” They say their findings underscore the need for policy adjustments to allocate more support for young scientists.

Here’s a link to and a citation for the paper,

Has China’s Young Thousand Talents program been successful in recruiting and nurturing top-caliber scientists? by Dongbo Shi, Weichen Liu, and Yanbo Wang. Science 5 Jan 2023 Vol 379, Issue 6627 pp. 62-65 DOI: 10.1126/science.abq1218

This paper is behind a paywall.

Kudos to the folks behind China’s Young Thousand Talents program! Jiang Wu’s career appears to be a prime example of the program’s success. Perhaps Canadian policy makers will be inspired.

China and nanotechnology

It’s been quite a while since I’ve come across any material about Nanopolis, a scientific complex in China devoted to nanotechnology (as described in my September 26, 2014 posting titled, More on Nanopolis in China’s Suzhou Industrial Park). Note: The most recent information about the complex, prior to now, is in my June 1, 2017 posting, which mentions China’s Nanopolis and Nano-X endeavours.

Dr. Mahbube K. Siddiki’s March 12, 2022 article about China’s nanotechnology work in the Small Wars Journal provides a situation overview and an update along with a tidbit about Nanopolis, Note: Footnotes for the article have not been included here,

The Nanotechnology industry in China is moving forward, with substantially high levels of funding, a growing talent pool, and robust international collaborations. The strong state commitment to support this field of science and technology is a key advantage for China to compete with leading forces like US, EU, Japan, and Russia. The Chinese government focuses on increasing competitiveness in nanotechnology by its inclusion as strategic industry in China’s 13th Five-Year Plan, reconfirming state funding, legislative and regulatory support. Research and development (R&D) in Nanoscience and Nanotechnology is a key component of the ambitious ‘Made in China 2025’ initiative aimed at turning China into a high-tech manufacturing powerhouse [1].

A bright example of Chinese nanotech success is the world’s largest nanotech industrial zone, called ‘Nanopolis’, located in the eastern city of Suzhou. This futuristic city houses several private multinationals and new Chinese startups across different fields of nanotechnology and nanoscience. Needless to say, China leads the world in nanotech startups. Involvement of the private sector opens new and unique pools of funding and talent, focusing on applied research. Thus, the private sector is leading R&D in China, where state-sponsored institutions still dominate all other sectors of rapid industrialization and modernization. From cloning to cancer research, from sea to space exploration, this massive and highly populated nation is using nanoscience and nanotechnology innovation to drive some of the world’s biggest breakthroughs, which is raising concerns in many competing countries [3].

China has established numerous nanotech research institutions throughout the country over the years. Prominent universities like Peking University, City University of Hong Kong, Nanjing University, Hong Kong University of Science and Technology, Soochow University, and the University of Science and Technology of China are the leading institutions that house state-of-the-art nanotech research labs to foster the study of nanoscience and nanotechnology [5]. The Chinese Academy of Sciences (CAS), the National Center for Nanoscience and Technology (NCNST) and the Suzhou Institute of Nano-Tech and Nano-Bionics (SINANO) are top among the state-sponsored specialized nanoscience and nanotechnology research centers, which have numerous labs and prominent researchers conducting cutting-edge research. Public-private collaboration with the above-mentioned research institutes gave birth to many nanotechnology companies, the most notable of them being Array Nano, Times Nano, Haizisi Nano Technology, Nano Medtech, Sun Nanotech, XP Nano, etc. [6]. These companies are thriving on the research breakthroughs China has achieved recently in this sector.

Here are some of China’s notable achievements in this sector. In June 2020, an international team of researchers led by Chinese scientists developed a new form of synthetic and biodegradable nanoparticle [7]. This modifiable lipid nanoparticle is capable of targeting, penetrating, and altering cells by delivering the CRISPR/Cas9 gene-editing tool into a cell. This novel nanoparticle can be used in the treatment of some gene-related disorders, as well as other diseases including some forms of cancer in the brain, liver, and lungs. At the State Key Laboratory of Robotics in the northeastern city of Shenyang, researchers have developed a laser that produces a tiny gas bubble [8]. This bubble can be used as a tiny “robot” to manipulate and move materials on a nanoscale with microscopic precision. The technology, termed a “bubble bot,” promises new possibilities in the field of artificial tissue creation and cloning [9].

In another report [13] it was shown that China surpassed the U.S. in chemistry in 2018 and now leads the latter by a significant gap, which might take years to overcome. In the meantime, the country is approaching the US in Earth & environmental sciences as well as the physical sciences. According to the trend, China may take five years or less to surpass the US. On the contrary, in life science research China lags behind the US quite significantly, which might be attributed to both countries’ funding priorities. In fact, in the time of the coronavirus pandemic, the US can use this gap for her strategic gain over China.

Outstanding economic growth and rapid technological advances of China over the last three decades have given her an unprecedented opportunity to play a leading role in contemporary geopolitical competition. The United States, and many of her partners and allies in the west as well as in Asia, have a range of concerns about how the authoritarian leadership in Beijing maneuver [sic] its recently gained power and position on the world stage. They are warily observing this regime’s deployment of sophisticated technology like “Nano” in ways that challenge many of their core interests and values all across the world. Though the U.S. is considered the only superpower in the world and has maintained its position as the dominant power of technological innovation for decades, China has made massive investments and swiftly implemented policies that have contributed significantly to its technological innovation, economic growth, military capability, and global influence. In some areas, China has eclipsed, or is on the verge of eclipsing, the United States — particularly in the rapid deployment of certain technologies, and nanoscience and nanotechnology appears to be the leading one. …

[About Dr. Siddiki]

Dr. Siddiki is an instructor in Robotics and Autonomous Systems in the Department of Multi-Domain Operations at the [US] Army Management Staff College, where he teaches and does research in that area. He was previously Assistant Teaching Professor of Electrical Engineering in the Department of Computer Science and Electrical Engineering in the School of Computing and Engineering at the University of Missouri Kansas City (UMKC). At UMKC, Dr. Siddiki designed, developed and taught undergraduate and graduate level courses, and supervised the research of Ph.D., Master’s and undergraduate students. Dr. Siddiki’s research interests lie in the areas of nano and quantum tech, robotics and autonomous systems, green energy & power, and their implications for geopolitics.

As you can see in the article, there are anxieties over China’s rising dominance with regard to scientific research and technology; these anxieties have become more visible since I started this blog in 2008.

My curiosity was piqued by the fact that Dr. Siddiki’s article appears in the Small Wars Journal and not in a journal focused on science, research, technology, and/or economics. I found this explanation for the term ‘small wars’ on the journal’s About page (Note: A link has been removed),

“Small Wars” is an imperfect term used to describe a broad spectrum of spirited continuation of politics by other means, falling somewhere in the middle bit of the continuum between feisty diplomatic words and global thermonuclear war. The Small Wars Journal embraces that imperfection.

Just as friendly fire isn’t, there isn’t necessarily anything small about a Small War.

The term “Small War” either encompasses or overlaps with a number of familiar terms such as counterinsurgency, foreign internal defense, support and stability operations, peacemaking, peacekeeping, and many flavors of intervention.  Operations such as noncombatant evacuation, disaster relief, and humanitarian assistance will often either be a part of a Small War, or have a Small Wars feel to them.  Small Wars involve a wide spectrum of specialized tactical, technical, social, and cultural skills and expertise, requiring great ingenuity from their practitioners.  The Small Wars Manual (a wonderful resource, unfortunately more often referred to than read) notes that:

Small Wars demand the highest type of leadership directed by intelligence, resourcefulness, and ingenuity. Small Wars are conceived in uncertainty, are conducted often with precarious responsibility and doubtful authority, under indeterminate orders lacking specific instructions.

The “three block war” construct employed by General Krulak is exceptionally useful in describing the tactical and operational challenges of a Small War and of many urban operations. Its only shortcoming is that it is so useful that it is often mistaken as a definition or as a type of operation.

Who Are Those Guys?

Small Wars Journal is NOT a government, official, or big corporate site. It is run by Small Wars Foundation, a non-profit corporation, for the benefit of the Small Wars community of interest. The site principals are Dave Dilegge (Editor-in-Chief) and Bill Nagle (Publisher), and it would not be possible without the support of myriad volunteers as well as authors who care about this field and contribute their original works to the community. We do this in our spare time, because we want to.  McDonald’s pays more.  But we’d rather work to advance our noble profession than watch TV, try to super-size your order, or interest you in a delicious hot apple pie.  If and when you’re not flipping burgers, please join us.

The overview and analysis provided by Dr. Siddiki is very interesting to me and absent any conflicting data, I’m assuming it’s solid work. As for the anxiety that permeates the article, this is standard. All countries are anxious about who’s winning the science and technology race. If memory serves, you can find an example of the anxiety in C.P. Snow’s classic lecture and book, Two Cultures (the book is “The Two Cultures and the Scientific Revolution”) given/published in 1959. The British scientific establishment was very concerned that it was being eclipsed by the US and by the Russians.

Windows and roofs ‘self-adapt’ to heating and cooling conditions

I have two items about thermochromic coatings. It’s a little confusing since the American Association for the Advancement of Science (AAAS), which publishes the journal featuring both papers, has issued a news release that seemingly refers to both papers as a single piece of research.

On to the press/news releases from the research institutions, to be followed by the AAAS news release.

Nanyang Technological University (NTU) does windows

A December 16, 2021 news item on Nanowerk announced work on energy-saving glass,

An international research team led by scientists from Nanyang Technological University, Singapore (NTU Singapore) has developed a material that, when coated on a glass window panel, can effectively self-adapt to heat or cool rooms across different climate zones in the world, helping to cut energy usage.

Developed by NTU researchers and reported in the journal Science (“Scalable thermochromic smart windows with passive radiative cooling regulation”), the first-of-its-kind glass automatically responds to changing temperatures by switching between heating and cooling.

The self-adaptive glass is developed using layers of a vanadium dioxide nanoparticle composite, poly(methyl methacrylate) (PMMA), and a low-emissivity coating to form a unique structure that can modulate heating and cooling simultaneously.

A December 17, 2021 NTU press release (PDF), also on EurekAlert but published December 16, 2021, which originated the news item, delves further into the research (Note: A link has been removed),

The newly developed glass, which has no electrical components, works by exploiting the spectrums of light responsible for heating and cooling.

During summer, the glass suppresses solar heating (near infrared light), while boosting radiative cooling (long-wave infrared) – a natural phenomenon where heat emits through surfaces towards the cold universe – to cool the room. In the winter, it does the opposite to warm up the room.
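The seasonal switching described above can be sketched as a simple two-state model. This is entirely my own illustration; the transition temperature and optical values are invented placeholders, not measurements from the paper (bulk vanadium dioxide switches near 68 °C, and the NTU composite is engineered to switch at more practical temperatures):

```python
# Toy two-state model of a thermochromic window layer (illustrative only;
# all numbers are invented and do not come from the NTU paper). Below the
# transition temperature the layer admits solar near-infrared and emits
# weakly (winter: keep heat in); above it, it blocks near-infrared and
# emits strongly in the long-wave infrared (summer: radiative cooling).

def window_state(temp_c, transition_c=30.0):
    """Return optical properties for a given surface temperature (deg C)."""
    if temp_c < transition_c:   # cold, "insulating" state
        return {"nir_transmittance": 0.8, "lwir_emissivity": 0.2}
    else:                       # hot, "metallic" state
        return {"nir_transmittance": 0.3, "lwir_emissivity": 0.9}

winter = window_state(10.0)
summer = window_state(35.0)
print("winter:", winter)  # admits solar heat, suppresses radiative loss
print("summer:", summer)  # rejects solar heat, boosts radiative cooling
```

The essential design idea is the anticorrelation: the cold state admits near-infrared solar heat while suppressing long-wave emission, and the hot state does the reverse, without any electrical control.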

In lab tests using an infrared camera to visualise results, the glass allowed a controlled amount of heat to emit under various conditions (from room temperature to above 70°C), proving its ability to react dynamically to changing weather conditions.

New glass regulates both heating and cooling

Windows are one of the key components in a building’s design, but they are also the least energy-efficient and most complicated part. In the United States alone, window-associated energy consumption (heating and cooling) in buildings accounts for approximately four per cent of total primary energy usage each year, according to an estimate based on data from the US Department of Energy.[1]

While scientists elsewhere have developed sustainable innovations to ease this energy demand – such as low-emissivity coatings that prevent heat transfer and electrochromic glass that becomes tinted to regulate the solar transmission entering a room – none of these solutions has been able to modulate both heating and cooling at the same time, until now.

The principal investigator of the study, Dr Long Yi of the NTU School of Materials Science and Engineering (MSE) said, “Most energy-saving windows today tackle the part of solar heat gain caused by visible and near infrared sunlight. However, researchers often overlook the radiative cooling in the long wavelength infrared. While innovations focusing on radiative cooling have been used on walls and roofs, this function becomes undesirable during winter. Our team has demonstrated for the first time a glass that can respond favourably to both wavelengths, meaning that it can continuously self-tune to react to a changing temperature across all seasons.”

As a result of these features, the NTU research team believes their innovation offers a convenient way to conserve energy in buildings, since it does not rely on moving components or electrical mechanisms and does not block views.

To improve the performance of windows, the simultaneous modulation of both solar transmission and radiative cooling is crucial, said co-authors Professor Gang Tan from The University of Wyoming, USA, and Professor Ronggui Yang from the Huazhong University of Science and Technology, Wuhan, China, who led the building energy-saving simulation.

“This innovation fills the missing gap between traditional smart windows and radiative cooling by paving a new research direction to minimise energy consumption,” said Prof Gang Tan.

The study is an example of groundbreaking research that supports the NTU 2025 strategic plan, which seeks to address humanity’s grand challenges on sustainability, and accelerate the translation of research discoveries into innovations that mitigate human impact on the environment.

Innovation useful for a wide range of climate types

As a proof of concept, the scientists tested the energy-saving performance of their invention using simulations of climate data covering all populated parts of the globe (seven climate zones).

The team found the glass they developed showed energy savings in both warm and cool seasons, with an overall energy-saving performance of up to 9.5%, or ~330,000 kWh per year (the estimated energy required to power 60 households in Singapore for a year), compared with commercially available low-emissivity glass in a simulated medium-sized office building.
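A quick back-of-envelope check of the figures quoted above: the 330,000 kWh and 60-household comparison come from the press release; the per-household division is simply arithmetic on those two numbers.

```python
# Sanity check on the press release's numbers: the ~330,000 kWh/year
# saving is said to power 60 Singapore households for a year.
annual_saving_kwh = 330_000
households = 60

per_household = annual_saving_kwh / households
print(f"Implied consumption per household: {per_household:.0f} kWh/year")
# 330,000 / 60 = 5,500 kWh/year per household
```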

First author of the study Wang Shancheng, who is a Research Fellow and former PhD student of Dr Long Yi, said, “The results prove the viability of applying our glass in all types of climates as it is able to help cut energy use regardless of hot and cold seasonal temperature fluctuations. This sets our invention apart from current energy-saving windows, which tend to find limited use in regions with less seasonal variation.”

Moreover, the heating and cooling performance of their glass can be customised to suit the needs of the market and region for which it is intended.

“We can do so by simply adjusting the structure and composition of the special nanocomposite coating layered onto the glass panel, allowing our innovation to be potentially used across a wide range of heat-regulating applications, and not limited to windows,” Dr Long Yi said.

Providing an independent view, Professor Liangbing Hu, Herbert Rabin Distinguished Professor, Director of the Center for Materials Innovation at the University of Maryland, USA, said, “Long and co-workers made the original development of smart windows that can regulate the near-infrared sunlight and the long-wave infrared heat. The use of this smart window could be highly important for building energy-saving and decarbonization.”  

A Singapore patent has been filed for the innovation. As the next steps, the research team is aiming to achieve even higher energy-saving performance by working on the design of their nanocomposite coating.

The international research team also includes scientists from Nanjing Tech University, China. The study is supported by the Singapore-HUJ Alliance for Research and Enterprise (SHARE), under the Campus for Research Excellence and Technological Enterprise (CREATE) programme, the Ministry of Education Research Fund Tier 1, and the Sino-Singapore International Joint Research Institute.

Here’s a link to and a citation for the paper,

Scalable thermochromic smart windows with passive radiative cooling regulation by Shancheng Wang, Tengyao Jiang, Yun Meng, Ronggui Yang, Gang Tan, and Yi Long. Science • 16 Dec 2021 • Vol 374, Issue 6574 • pp. 1501-1504 • DOI: 10.1126/science.abg0291

This paper is behind a paywall.

Lawrence Berkeley National Laboratory (Berkeley Lab; LBNL) does roofs

A December 16, 2021 Lawrence Berkeley National Laboratory news release (also on EurekAlert) announces an energy-saving coating for roofs (Note: Links have been removed),

Scientists have developed an all-season smart-roof coating that keeps homes warm during the winter and cool during the summer without consuming natural gas or electricity. Research findings reported in the journal Science point to a groundbreaking technology that outperforms commercial cool-roof systems in energy savings.

“Our all-season roof coating automatically switches from keeping you cool to warm, depending on outdoor air temperature. This is energy-free, emission-free air conditioning and heating, all in one device,” said Junqiao Wu, a faculty scientist in Berkeley Lab’s Materials Sciences Division and a UC Berkeley professor of materials science and engineering who led the study.

Today’s cool roof systems, such as reflective coatings, membranes, shingles, or tiles, have light-colored or darker “cool-colored” surfaces that cool homes by reflecting sunlight. These systems also emit some of the absorbed solar heat as thermal-infrared radiation; in this natural process known as radiative cooling, thermal-infrared light is radiated away from the surface.

The problem with many cool-roof systems currently on the market is that they continue to radiate heat in the winter, which drives up heating costs, Wu explained.

“Our new material – called a temperature-adaptive radiative coating or TARC – can enable energy savings by automatically turning off the radiative cooling in the winter, overcoming the problem of overcooling,” he said.

A roof for all seasons

Metals are typically good conductors of electricity and heat. In 2017, Wu and his research team discovered that electrons in vanadium dioxide behave like a metal to electricity but an insulator to heat – in other words, they conduct electricity well without conducting much heat. “This behavior contrasts with most other metals where electrons conduct heat and electricity proportionally,” Wu explained.

Vanadium dioxide below about 67 degrees Celsius (153 degrees Fahrenheit) is also transparent to (and hence not absorptive of) thermal-infrared light. But once vanadium dioxide reaches 67 degrees Celsius, it switches to a metal state, becoming absorptive of thermal-infrared light. This ability to switch from one phase to another – in this case, from an insulator to a metal – is characteristic of what’s known as a phase-change material.
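The insulator-to-metal transition described above can be reduced to a minimal sketch. The 67°C threshold is from the text; the idea that IR absorptivity (and hence, by Kirchhoff's law, emissivity) switches with the phase is the mechanism the researchers describe, though the boolean simplification here is mine.

```python
# Minimal sketch of VO2's phase change as described above: below ~67 deg C
# the material is transparent to thermal-infrared light; above, it switches
# to a metallic, thermal-IR-absorbing (and therefore IR-emitting) state.
VO2_TRANSITION_C = 67.0  # transition temperature given in the text

def vo2_ir_absorptive(temp_c: float) -> bool:
    """True when VO2 is in its metallic, thermal-IR-absorbing phase."""
    return temp_c >= VO2_TRANSITION_C

print(vo2_ir_absorptive(25.0))  # insulating phase: IR-transparent
print(vo2_ir_absorptive(80.0))  # metallic phase: IR-absorptive
```

In the actual TARC device, doping (the papers use tungsten-doped vanadium dioxide) shifts this transition down toward everyday ambient temperatures.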

To see how vanadium dioxide would perform in a roof system, Wu and his team engineered a 2-centimeter-by-2-centimeter TARC thin-film device.

TARC “looks like Scotch tape, and can be affixed to a solid surface like a rooftop,” Wu said.

In a key experiment, co-lead author Kechao Tang set up a rooftop experiment at Wu’s East Bay home last summer to demonstrate the technology’s viability in a real-world environment.

A wireless measurement device set up on Wu’s balcony continuously recorded responses to changes in direct sunlight and outdoor temperature from a TARC sample, a commercial dark roof sample, and a commercial white roof sample over multiple days.

How TARC outperforms in energy savings

The researchers then used data from the experiment to simulate how TARC would perform year-round in cities representing 15 different climate zones across the continental U.S.

Wu enlisted Ronnen Levinson, a co-author on the study who is a staff scientist and leader of the Heat Island Group in Berkeley Lab’s Energy Technologies Area, to help them refine their model of roof surface temperature. Levinson developed a method to estimate TARC energy savings from a set of more than 100,000 building energy simulations that the Heat Island Group previously performed to evaluate the benefits of cool roofs and cool walls across the United States.

Finnegan Reichertz, a 12th grade student at the East Bay Innovation Academy in Oakland who worked remotely as a summer intern for Wu last year, helped to simulate how TARC and the other roof materials would perform at specific times and on specific days throughout the year for each of the 15 cities or climate zones the researchers studied for the paper.

The researchers found that TARC outperforms existing roof coatings for energy saving in 12 of the 15 climate zones, particularly in regions with wide temperature variations between day and night, such as the San Francisco Bay Area, or between winter and summer, such as New York City.

“With TARC installed, the average household in the U.S. could save up to 10% electricity,” said Tang, who was a postdoctoral researcher in the Wu lab at the time of the study. He is now an assistant professor at Peking University in Beijing, China.

Standard cool roofs have high solar reflectance and high thermal emittance (the ability to release heat by emitting thermal-infrared radiation) even in cool weather.

According to the researchers’ measurements, TARC reflects around 75% of sunlight year-round, but its thermal emittance is high (about 90%) when the ambient temperature is warm (above 25 degrees Celsius or 77 degrees Fahrenheit), promoting heat loss to the sky. In cooler weather, TARC’s thermal emittance automatically switches to low, helping to retain heat from solar absorption and indoor heating, Levinson said.
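The effect of that emittance switch on radiated power can be estimated with the Stefan-Boltzmann law. The ~0.9 warm-weather emittance and the 25°C switch point follow the text; the 0.2 cool-weather emittance and the surface temperatures are assumptions for illustration.

```python
# Rough estimate of how a switching emittance changes radiated power,
# using the Stefan-Boltzmann law. Warm-weather emittance (~0.9) and the
# 25 deg C switch point are from the text; the 0.2 cool-weather value
# and the surface temperatures are assumed.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(surface_c: float, ambient_c: float) -> float:
    """Thermal power radiated per square metre of roof surface."""
    emittance = 0.9 if ambient_c > 25.0 else 0.2  # TARC-style switching
    t_k = surface_c + 273.15
    return emittance * SIGMA * t_k ** 4

summer = radiated_power(surface_c=35.0, ambient_c=30.0)  # sheds heat
winter = radiated_power(surface_c=5.0, ambient_c=0.0)    # retains heat
print(f"summer: {summer:.0f} W/m^2, winter: {winter:.0f} W/m^2")
```

Even ignoring the incoming sky radiation (a real energy balance would subtract it), the sketch shows why turning emittance down in winter matters: at a fixed surface temperature, the radiated loss scales linearly with emittance.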

Findings from infrared spectroscopy experiments using advanced tools at Berkeley Lab’s Molecular Foundry validated the simulations.

“Simple physics predicted TARC would work, but we were surprised it would work so well,” said Wu. “We originally thought the switch from warming to cooling wouldn’t be so dramatic. Our simulations, outdoor experiments, and lab experiments proved otherwise – it’s really exciting.”

The researchers plan to develop TARC prototypes on a larger scale to further test its performance as a practical roof coating. Wu said that TARC may also have potential as a thermally protective coating to prolong battery life in smartphones and laptops, and shield satellites and cars from extremely high or low temperatures. It could also be used to make temperature-regulating fabric for tents, greenhouse coverings, and even hats and jackets.

Co-lead authors on the study were Kaichen Dong and Jiachen Li.

The Molecular Foundry is a nanoscience user facility at Berkeley Lab.

This work was primarily supported by the DOE Office of Science and a Bakar Fellowship.

The technology is available for licensing and collaboration. If interested, please contact Berkeley Lab’s Intellectual Property Office, ipo@lbl.gov.

Here’s a link to and a citation for the paper,

Temperature-adaptive radiative coating for all-season household thermal regulation by Kechao Tang, Kaichen Dong, Jiachen Li, Madeleine P. Gordon, Finnegan G. Reichertz, Hyungjin Kim, Yoonsoo Rho, Qingjun Wang, Chang-Yu Lin, Costas P. Grigoropoulos, Ali Javey, Jeffrey J. Urban, Jie Yao, Ronnen Levinson, Junqiao Wu. Science • 16 Dec 2021 • Vol 374, Issue 6574 • pp. 1504-1509 • DOI: 10.1126/science.abf7136

This paper is behind a paywall.

An interesting news release from the AAAS

While it’s a little confusing as it cites only the ‘window’ research from NTU, the body of this news release offers some additional information about the usefulness of thermochromic materials and seemingly refers to both papers, from a December 16, 2021 AAAS news release,

Temperature-adaptive passive radiative cooling for roofs and windows

When it’s cold out, window glass and roof coatings that use passive radiative cooling to keep buildings cool can be designed to passively turn off radiative cooling to avoid heat loss, two new studies show.  Their proof-of-concept analyses demonstrate that passive radiative cooling can be expanded to warm and cold climate applications and regions, potentially providing all-season energy savings worldwide. Buildings consume roughly 40% of global energy, a large proportion of which is used to keep them cool in warmer climates. However, most temperature regulation systems commonly employed are not very energy efficient and require external power or resources. In contrast, passive radiative cooling technologies, which use outer space as a near-limitless natural heat sink, have been extensively examined as a means of energy-efficient cooling for buildings. This technology uses materials designed to selectively emit narrow-band radiation through the infrared atmospheric window to disperse heat energy into the coldness of space. However, while this approach has proven effective in cooling buildings to below ambient temperatures, it is only helpful during the warmer months or in regions that are perpetually hot. Furthermore, the inability to “turn off” passive cooling in cooler climes or in regions with large seasonal temperature variations means that continuous cooling during colder periods would exacerbate the energy costs of heating. In two different studies, by Shancheng Wang and colleagues and Kechao Tang and colleagues, researchers approach passive radiative cooling from an all-season perspective and present a new, scalable temperature-adaptive radiative technology that passively turns off radiative cooling at lower temperatures. Wang et al. and Tang et al. achieve this using a tungsten-doped vanadium dioxide and show how it can be applied to create both window glass and a flexible roof coating, respectively. 
Model simulations of the self-adapting materials suggest they could provide year-round energy savings across most climate zones, especially those with substantial seasonal temperature variations. 

I wish them all good luck with getting these materials to market.

Secure quantum communication network with 15 users

Things are moving quickly where quantum communication networks are concerned. Back in April 2021, Dutch scientists announced the first multi-node quantum network connecting three processors (see my July 8, 2021 posting with the news and an embedded video).

Less than six months later, Chinese scientists announced work on a 15-user quantum network. From a September 23, 2021 news item on phys.org,

Quantum secure direct communication (QSDC) based on entanglement can directly transmit confidential information. Scientists in China explored a QSDC network based on time-energy entanglement and sum-frequency generation. The results show that when any two users perform QSDC over 40 kilometers of optical fiber, the rate of information transmission can be maintained at 1 Kbp/s. This result lays the foundation for the realization of satellite-based long-distance and global QSDC in the future.

A September 23, 2021 Chinese Academy of Sciences (CAS) press release on EurekAlert, which seems to have originated the news item, provides additional detail,

Quantum communication has presented a revolutionary step in secure communication due to the high security of quantum information, and many communication protocols have been proposed, such as the quantum secure direct communication (QSDC) protocol. QSDC based on entanglement can directly transmit confidential information. Any attack on QSDC yields only random numbers, from which no useful information can be obtained. QSDC therefore has simple communication steps, reduces potential security loopholes, and offers high security guarantees, which underpins the security and the value proposition of quantum communications in general. However, the inability to simultaneously distinguish the four sets of encoded orthogonal entangled states in entanglement-based QSDC protocols has limited its practical application. Furthermore, constructing quantum networks is essential for wide application of quantum secure direct communication. An experimental demonstration of QSDC was badly needed.

In a new paper published in Light: Science & Applications, a team of scientists led by Professor Xianfeng Chen from the State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Physics and Astronomy, Shanghai Jiao Tong University, China, and Professor Yuanhua Li from the Department of Physics, Jiangxi Normal University, China, has explored a QSDC network based on time-energy entanglement and sum-frequency generation (SFG). They present a fully connected entanglement-based QSDC network comprising five subnets and 15 users. Using the frequency correlations of fifteen photon pairs via time-division multiplexing and dense wavelength-division multiplexing (DWDM), they performed a 40-kilometer fiber QSDC experiment by implementing two-step transmission between each pair of users. In this process, the network processor divides the spectrum of the single-photon source into 30 International Telecommunication Union (ITU) channels. With these channels, a coincidence event can be established between each pair of users by performing a Bell-state measurement based on SFG, which allows the four sets of encoded entangled states to be identified simultaneously without post-selection.
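The channel arithmetic above can be illustrated with a toy sketch. Note this is not the paper's actual channel plan: it only shows why cutting the source spectrum into 30 ITU channels yields 15 frequency-correlated channel pairs (energy-time entangled photons land in channels mirrored about the spectrum's centre), while full connectivity among 15 users must serve C(15,2) = 105 user pairs, which is where the time-division multiplexing comes in.

```python
from itertools import combinations

# Toy illustration (not the paper's actual channel plan): energy-time
# entangled photon pairs are frequency-correlated, so channel i pairs
# with its mirror channel (N - 1 - i) around the spectrum's centre.
N_CHANNELS = 30  # the release says the spectrum is cut into 30 ITU channels

channel_pairs = [(i, N_CHANNELS - 1 - i) for i in range(N_CHANNELS // 2)]
print(len(channel_pairs), "correlated channel pairs")

# A fully connected 15-user network must serve every pairing of users:
users = range(15)
user_pairs = list(combinations(users, 2))
print(len(user_pairs), "user pairs to connect")
```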

It is well known that the security and reliability of information transmission is an essential part of a QSDC quantum network. The researchers therefore implemented block-transmission and step-by-step-transmission methods in QSDC while estimating the secrecy capacity of the quantum channel. After confirming the security of the quantum channel, the legitimate user reliably performs encoding or decoding operations within these schemes.

These scientists summarize the experiment results of their network scheme:

“The results show that when any two users are performing QSDC over 40 kilometers of optical fiber, the fidelity of the entangled state shared by them is still greater than 95%, and the rate of information transmission can be maintained at 1 Kbp/s. Our result demonstrates the feasibility of a proposed QSDC network, and hence lays the foundation for the realization of satellite-based long-distance and global QSDC in the future.”

“With this scheme, each user interconnects with any other through shared pairs of entangled photons at different wavelengths. Moreover, it is possible to improve the information transmission rate to greater than 100 Kbp/s if high-performance detectors and high-speed control of the modulator are used,” they added.

“It is worth noting that the present work, which offers long-distance point-to-point QSDC connections, combined with the recently proposed secure-repeater quantum network of QSDC, which offers secure end-to-end communication throughout the quantum Internet, will enable the construction of secure quantum networks using present-day technology, realizing the great potential of QSDC in future communication,” the scientists forecast.

Here’s a link to and a citation for the paper,

A 15-user quantum secure direct communication network by Zhantong Qi, Yuanhua Li, Yiwen Huang, Juan Feng, Yuanlin Zheng & Xianfeng Chen. Light: Science & Applications volume 10, Article number: 183 (2021) DOI: https://doi.org/10.1038/s41377-021-00634-2 Published: 14 September 2021

This paper is open access.

For the profoundly curious, there is an earlier version of this paper on arXiv.org, the site run by Cornell University, where it was posted after moderation but prior to peer review for publication in a journal.