Tag Archives: China

Corporate venture capital (CVC) and the nanotechnology market plus 2023’s top 10 countries’ nanotechnology patents

I have two brief nanotechnology commercialization stories from the same publication.

Corporate venture capital (CVC) and the nano market

From a March 23, 2024 article on statnano.com, Note: Links have been removed,

Nanotechnology’s enormous potential across various sectors has long attracted the eye of investors, keen to capitalise on its commercial potency.

Yet the initial propulsion provided by traditional venture capital avenues was reined back when the reality of long development timelines, regulatory hurdles, and difficulty in translating scientific advances into commercially viable products became apparent.

While the initial flurry of activity declined in the early part of the 21st century, a new kid on the investing block has proved an enticing option beyond traditional funding methods.

Corporate venture capital has, over the last 10 years, emerged as a key plank in turning ideas into commercial reality.

Simply put, corporate venture capital (CVC) has seen large corporations, recognising the strategic value of nanotechnology, establish their own VC arms to invest in promising start-ups.

The likes of Samsung, Johnson & Johnson and BASF have all sought to get an edge on their competition by sinking money into start-ups in nano and other technologies, which could deliver benefits to them in the long term.

Unlike traditional VC firms, CVCs invest with a strategic lens, aligning their investments with their core business goals. For instance, BASF’s venture capital arm, BASF Venture Capital, focuses on nanomaterials with applications in coatings, chemicals, and construction.

It has an evergreen EUR 250 million fund available and will consider everything from seed to Series B investment opportunities.

Samsung Ventures takes a similar approach, explaining: “Our major investment areas are in semiconductors, telecommunication, software, internet, bioengineering and the medical industry from start-ups to established companies that are about to be listed on the stock market.”

While historically concentrated in North America and Europe, CVC activity in nanotechnology is expanding to Asia, with China being a major player.

China has, perhaps not surprisingly, seen considerable growth over the last decade in nano and few will bet against it being the primary driver of innovation over the next 10 years.

As ever, the long development cycles of emerging nano breakthroughs can frequently deter some CVCs with shorter investment horizons.

2023 Nanotechnology patent applications: which countries top the list?

A March 28, 2024 article from statnano.com provides interesting data concerning patent applications,

In 2023, a total of 18,526 nanotechnology patent applications were published at the United States Patent and Trademark Office (USPTO) and the European Patent Office (EPO). The United States accounted for approximately 40% of these nanotechnology patent publications, followed by China, South Korea, and Japan in the next positions.

According to a statistical analysis conducted by StatNano using data from the Orbit database, the USPTO published 84% of the 18,526 nanotechnology patent applications in 2023, which is more than five times the number published by the EPO. However, the EPO saw a nearly 17% increase in nanotechnology patent publications compared to the previous year, while the USPTO’s growth was around 4%.
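As a quick sanity check, the quoted percentages and totals can be reproduced with a few lines of arithmetic (a sketch using only the figures above; StatNano’s exact counts may differ slightly from the rounded percentages):

```python
# Figures quoted by StatNano for 2023
total = 18_526               # nanotechnology patent applications (USPTO + EPO)
uspto = round(total * 0.84)  # USPTO share: 84% -> about 15,562
epo = total - uspto          # EPO share: about 2,964

print(uspto / epo)           # about 5.25, i.e. "more than five times"
```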

Nanotechnology patents are defined, based on the ISO/TS 18110 standard, as those having at least one claim related to nanotechnology or patents classified with an IPC classification code related to nanotechnology, such as B82.

From the March 28, 2024 article,

Top 10 Countries Based on Published Patent Applications in the Field of Nanotechnology in USPTO in 2023

Rank¹ | Country | Patent applications (USPTO) | Patent applications (EPO) | Growth rate (USPTO) | Growth rate (EPO)
1 | United States | 6,926 | 492 | 3.20% | 17.40%
2 | South Korea | 1,715 | 476 | 13.40% | 8.40%
3 | China | 1,627 | 569 | 4.20% | 47.40%
4 | Taiwan | 1,118 | 61 | 5.00% | -12.90%
5 | Japan | 1,113 | 445 | -1.20% | 9.30%
6 | Germany | 484 | 229 | -10.20% | 15.70%
7 | England | 331 | 50 | 5.10% | 16.30%
8 | France | 323 | 145 | -8.00% | 17.90%
9 | Canada | 290 | 12 | 5.10% | -14.30%
10 | Saudi Arabia | 268 | 3 | 22.40% | 0.00%
¹ Ranking based on the number of nanotechnology patent applications at the USPTO

If you have a bit of time and interest, I suggest reading the March 28, 2024 article in its entirety.

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A very software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. My January 20, 2024 posting, “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” includes information about legislative efforts, although my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US must always be considered in these matters. A November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
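Those two growth figures can be cross-checked with simple exponential arithmetic. The sketch below uses only the numbers quoted above; it shows the 350-million-fold figure implies a doubling time slightly faster than six months:

```python
import math

# "Doubling around every six months" over roughly 13 years:
doublings = 13 * 2                    # 26 doublings
print(2 ** doublings)                 # 67,108,864 -- tens of millions of times more compute

# The quoted 350-million-fold increase implies a slightly faster pace:
implied_doublings = math.log2(350e6)  # about 28.4 doublings in the same period
print(13 * 12 / implied_doublings)    # about 5.5 months per doubling
```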

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the University of Cambridge’s Centre for the Study of Existential Risk website.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

Who owns prehistory? The relationship between science and sovereignty

Brachiopod (photo taken in Alberta, Canada). Courtesy: AlbertaWow.com

This February 28, 2024 news item on phys.org takes the discussion about appropriating cultural artifacts out of the world of art and into museum fossil collections, Note: Links have been removed,

Many museums and other cultural institutions in the West have faced, in recent years, demands for artistic repatriation. The Elgin Marbles, currently housed in the British Museum, are perhaps the most prominent subject of this charge, with numerous appeals having been made for their return to their original home in Greece.

Taking up the issue of cultural imperialism is a new article in Isis [journal of the History of Science Society].

“Fossils and Sovereignty: Science Diplomacy and the Politics of Deep Time in the Sino-American Fossil Dispute of the 1920s,” by Hsiao-pei Yen, narrates the controversy surrounding paleontological excavation in the interwar period through a conflict between the American Museum of Natural History and the emerging Chinese scientific nationalist movement and, ultimately, examines the place of fossil ownership in global politics.

A February 28, 2024 (?) University of Chicago Press news release, which originated the news item, delves further into the topic,

In the early decades of the 20th century, many scientists were convinced that the key to understanding human origins, the so-called “missing link,” could be found in Central Asia. A delegation from the American Museum of Natural History (AMNH) was sent to the Gobi Desert in search of this great intellectual prize and failed to find any evidence of human ancestry in the region, but, over the course of the first half of the 1920s, sent many other valuable fossils and archaeological relics back to the United States. In 1928, however, amidst the changing political landscape of Chiang Kai-shek’s revolutionary reunification of China, the Americans were frustrated to discover that their findings had been detained under orders of the Beijing Society for the Preservation of Cultural Objects (SPCO). The resulting negotiations between the Americans and the Chinese inspired conflicting perspectives not only regarding the ownership of these prehistoric remains, but also the very nature of the relationship between fossils and sovereignty.

Nationalists in China were keen to correct the historical imbalance in treaties concerning trade between their country and rich Western nations. The debate over the fate of relics uncovered in China represented a unique opportunity to reclaim a measure of autonomy. As Yen writes, “The antiquities were deemed priceless national treasures not only because they were a link to China’s past but because … they were also resources of cultural capital with high academic value as research objects that would enable native scholars to establish and develop their own knowledge framework.” The representatives of the AMNH and those of the SPCO initially agreed to share botanical, zoological, and mineral specimens, while all archaeological materials and invertebrate fossils were to be kept in China, and all vertebrate fossils sent to America, with duplicates returning to their home country. The AMNH was insistent on this distinction between archaeological remains and fossils. Paleontological fossils, they claimed, “were formed in geological time and had no historical or cultural attachment to the people of the place where they were found.” As a result, argued the AMNH, they could be exported and retained by representatives of any country.

Following this agreement, however, the Chinese government called for a reclassification of fossils as sovereign property. This decision, part of a “vertical turn” in geopolitical history, was summarized by one government official: “’the territory of a nation-state is not limited to the surface. The terrain up to the sky and down to the subterranean should all be included in the national domain.’” As of 1930, China rejected the interpretation of fossils and the geological time they represented as universal, and therefore easily exploitable by more powerful countries, and claimed them instead as local, and contingent. The protections around Chinese fossils by no means limited the production of knowledge surrounding their discovery, but meant, instead, that the Chinese state had more control over their study and their diplomatic applications. The author concludes, “A vertical sensitivity enacted a new political and temporal imagination: geoscience and Earth history might be universal, but they should be explored within national boundaries.”

Since its inception in 1912, Isis has featured scholarly articles, research notes, and commentary on the history of science, medicine, and technology and their cultural influences. Review essays and book reviews on new contributions to the discipline are also included. An official publication of the History of Science Society, Isis is the oldest English-language journal in the field.

Founded in 1924, the History of Science Society is the world’s largest society dedicated to understanding science, technology, medicine, and their interactions with society in historical context.

Here’s a link to and a citation for the paper,

Fossils and Sovereignty: Science Diplomacy and the Politics of Deep Time in the Sino-American Fossil Dispute of the 1920s by Hsiao-pei Yen. Isis, Volume 115, Number 1, March 2024. DOI: https://doi.org/10.1086/729176

This paper is behind a paywall.

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here’s some of the latest research. If by ‘non-invasive’ you mean that electrodes are not being implanted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI), Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word level segments without additional aids such as eye-tracking, which restrict the practical application of these systems. The new technology is able to be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation, however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
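For readers curious about the metric, BLEU-1 is essentially clipped unigram precision multiplied by a brevity penalty. Here is a minimal sketch (real evaluations typically use a library implementation, such as NLTK’s `sentence_bleu`, and score against multiple reference translations):

```python
from collections import Counter
import math

def bleu1(candidate: str, reference: str) -> float:
    """Minimal BLEU-1: clipped unigram precision times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    if not cand:
        return 0.0
    ref_counts = Counter(ref)
    # Each candidate word is credited at most as often as it appears in the reference
    clipped = sum(min(n, ref_counts[w]) for w, n in Counter(cand).items())
    precision = clipped / len(cand)
    # Penalize candidates shorter than the reference
    brevity = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return brevity * precision

# The press release's example: 'the man' decoded where 'the author' was meant
print(bleu1("the man", "the author"))     # 0.5 -- half the unigrams match
print(bleu1("the author", "the author"))  # 1.0 -- a perfect match
```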

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].
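As an aside, the “discrete encoding” Professor Lin mentions can be pictured as vector quantization: each short window of signal is snapped to the nearest entry in a learned codebook, and the resulting indices become token-like units a language model can consume. The sketch below is purely illustrative, not DeWave’s actual architecture; the codebook here is random rather than learned:

```python
import math
import random

random.seed(0)

# Hypothetical 'learned' codebook: 8 codewords, each describing a 4-sample window
codebook = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]

def nearest(window):
    """Index of the codeword closest (Euclidean distance) to this window."""
    return min(range(len(codebook)),
               key=lambda i: math.dist(window, codebook[i]))

def discretize(signal):
    """Split a 1-D signal into 4-sample windows and map each to a codeword index."""
    return [nearest(signal[i:i + 4]) for i in range(0, len(signal), 4)]

eeg_like = [random.gauss(0, 1) for _ in range(16)]  # stand-in for one EEG channel
print(discretize(eeg_like))  # four discrete 'tokens' a language model could consume
```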

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on salon.com (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.
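The blurring test mentioned above is conceptually simple: smooth the high-resolution fMRI data until its effective spatial resolution approximates fNIRS, then rerun the decoder. A toy version of the blurring step might look like the following; the data, kernel size, and blur method are all invented for illustration, and the researchers' actual procedure may differ,

```python
import numpy as np

rng = np.random.default_rng(1)
fmri = rng.normal(size=(32, 32))  # toy 2D "slice" of voxel activity

def box_blur(img, k=5):
    """Moving-average blur: mimics the coarser spatial resolution of
    fNIRS by averaging each voxel with its k-by-k neighbourhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

fnirs_like = box_blur(fmri, k=5)
# Smoothing keeps large-scale structure but suppresses fine detail,
# so the blurred data varies less voxel-to-voxel than the original.
print(float(fmri.std()), float(fnirs_like.std()))
```

Because the decoder relies on broad semantic patterns rather than fine voxel-level detail, degrading resolution this way "doesn't get that much worse," per Huth.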

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article), Note: Links have been removed,

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant of Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”
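To call the decoder "completely unusable," the researchers needed some way to score decoded text against the stimulus; the study used more refined language-similarity metrics, but a crude word-overlap score illustrates the idea. The example sentences below are invented or adapted for illustration,

```python
def overlap_score(decoded, reference):
    """Crude similarity: fraction of reference words that appear in
    the decoded text (the study used more refined metrics)."""
    d = set(decoded.lower().split())
    r = set(reference.lower().split())
    return len(d & r) / len(r) if r else 0.0

reference = ("although i'm twenty-three years old i don't have my "
             "driver's license yet")
cooperative = ("i am not finished yet to start my career at twenty "
               "without my license")
resisting = "elephant tiger zebra lion giraffe"  # naming random animals

print(overlap_score(cooperative, reference))  # partial overlap
print(overlap_score(resisting, reference))    # no overlap at all
```

A cooperative participant yields output that shares content words with the stimulus; a resisting participant, naming animals in their head, drives the score toward zero.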

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley’s January 18, 2024 article on salon.com or as originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!

Photonic synapses with low power consumption (and a few observations)

This work on brainlike (neuromorphic) computing was announced in a June 30, 2022 Compuscript Ltd news release on EurekAlert,

Photonic synapses with low power consumption and high sensitivity are expected to integrate sensing-memory-preprocessing capabilities

A new publication from Opto-Electronic Advances (DOI: 10.29026/oea.2022.210069) discusses how photonic synapses with low power consumption and high sensitivity are expected to integrate sensing-memory-preprocessing capabilities.

Neuromorphic photonics/electronics is the future of ultralow-energy intelligent computing and artificial intelligence (AI). In recent years, inspired by the human brain, artificial neuromorphic devices have attracted extensive attention, especially for simulating visual perception and memory storage. Because of their advantages of high bandwidth, high interference immunity, ultrafast signal transmission, and lower energy consumption, neuromorphic photonic devices are expected to realize real-time response to input data. In addition, photonic synapses can realize a non-contact writing strategy, which contributes to the development of wireless communication.

The use of low-dimensional materials provides an opportunity to develop complex brain-like systems and low-power memory logic computers. For example, large-scale, uniform, and reproducible transition metal dichalcogenides (TMDs) show great potential for miniaturization and low-power biomimetic device applications due to their excellent charge-trapping properties and compatibility with traditional CMOS processes.

The von Neumann architecture, with its discrete memory and processor, leads to high power consumption and low efficiency in traditional computing. Therefore, sensor-memory fusion or sensor-memory-processor integrated neuromorphic architectures can meet the growing demands of big data and AI for low-power, high-performance devices. Artificial synaptic devices are the most important components of neuromorphic systems, and evaluating their performance will help to further apply them in more complex artificial neural networks (ANNs).

Chemical vapor deposition (CVD)-grown TMDs inevitably contain defects or impurities, which give rise to a persistent photoconductivity (PPC) effect. TMD photonic synapses that integrate synaptic properties and optical detection capabilities show great advantages in neuromorphic systems for low-power visual information perception and processing, as well as brain-like memory.

The research Group of Optical Detection and Sensing (GODS) has reported a three-terminal photonic synapse based on large-area, uniform multilayer MoS2 films. The reported device detects ultrashort optical pulses within 5 μs and consumes only about 40 aJ per event, substantially better than previously reported photonic synapses and several orders of magnitude lower than the corresponding parameters of biological synapses, indicating that it could be used in more complex ANNs. The photoconductivity of the CVD-grown MoS2 channel is regulated by the photostimulation signal, which enables the device to simulate short-term synaptic plasticity (STP), long-term synaptic plasticity (LTP), paired-pulse facilitation (PPF), and other synaptic properties. The reported photonic synapse can therefore simulate human visual perception, and its detection wavelength can be extended to near-infrared light.

The visual system, the most important channel for human learning, receives about 80% of learning information from the outside world. With the continuous development of AI, there is an urgent need for low-power, high-sensitivity visual perception systems that can effectively receive external information. With the assistance of a gate voltage, this photonic synapse can also simulate classical Pavlovian conditioning and the regulation of memory by different emotions: positive emotions enhance memory ability, for example, while negative emotions weaken it. Furthermore, a significant contrast between the strength of STP and LTP in the reported photonic synapse suggests that it can preprocess input light signals. These results indicate that photostimulation and back-gate control can effectively regulate the conductivity of the MoS2 channel layer by adjusting carrier trapping/detrapping processes.

Moreover, the photonic synapse presented in this paper is expected to integrate sensing-memory-preprocessing capabilities, which can be used for real-time image detection and in-situ storage, and also offers the possibility of breaking the von Neumann bottleneck.
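Paired-pulse facilitation, one of the synaptic behaviours the device reproduces, is easy to illustrate with a toy model: each light pulse adds a photocurrent that decays exponentially, so a second pulse arriving before the first response has faded produces a larger combined peak. The amplitudes and time constants below are invented for illustration, not the paper's measured values,

```python
import math

def photocurrent(pulse_times, t, amp=1.0, tau=50.0):
    """Toy persistent-photoconductivity model: each light pulse adds a
    photocurrent that decays exponentially (arbitrary units)."""
    return sum(amp * math.exp(-(t - tp) / tau)
               for tp in pulse_times if t >= tp)

# Response to a lone pulse, measured 10 time units later:
single = photocurrent([0.0], t=10.0)
# Two pulses 10 units apart: the second arrives before the first
# response has decayed, so the combined peak is larger.
paired = photocurrent([0.0, 10.0], t=10.0)

ppf_ratio = paired / single  # > 1 indicates facilitation
print(round(ppf_ratio, 3))
```

The same decaying-response picture also hints at how a single device can show both short-term plasticity (responses that fade) and, with stronger or repeated stimulation, longer-lasting conductance changes.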

Here’s a link to and a citation for the paper,

Photonic synapses with ultralow energy consumption for artificial visual perception and brain storage by Caihong Li, Wen Du, Yixuan Huang, Jihua Zou, Lingzhi Luo, Song Sun, Alexander O. Govorov, Jiang Wu, Hongxing Xu, Zhiming Wang. Opto-Electron Adv Vol 5, No 9 210069 (2022). doi: 10.29026/oea.2022.210069

This paper is open access.

Observations

I don’t have much to say about the research itself other than that I believe this is the first time I’ve seen a news release about neuromorphic computing research from China.

It’s China that most interests me, especially these bits from the June 30, 2022 Compuscript Ltd news release on EurekAlert,

The Group of Optical Detection and Sensing (GODS) [emphasis mine] was established in 2019. It is a research group focusing on compound semiconductors, lasers, photodetectors, and optical sensors. GODS has established a well-equipped laboratory with research facilities such as a Molecular Beam Epitaxy system, an IR detector test system, etc. GODS is leading several research projects funded by the NSFC and National Key R&D Programmes. GODS has published more than 100 research articles in Nature Electronics, Light: Science and Applications, Advanced Materials, and other well-known international journals, with more than 8,000 total citations.

Jiang Wu obtained his Ph.D. from the University of Arkansas Fayetteville in 2011. After his Ph.D., he joined UESTC as associate professor and later professor. He joined University College London [UCL] as a research associate in 2012 and then lecturer in the Department of Electronic and Electrical Engineering at UCL from 2015 to 2018. He is now a professor at UESTC [University of Electronic Science and Technology of China] [emphases mine]. His research interests include optoelectronic applications of semiconductor heterostructures. He is a Fellow of the Higher Education Academy and Senior Member of IEEE.

Opto-Electronic Advances (OEA) is a high-impact, open access, peer reviewed monthly SCI journal with an impact factor of 9.682 (Journal Citation Reports for IF 2020). Since its launch in March 2018, OEA has been indexed in the SCI, EI, DOAJ, Scopus, CA, and ICI databases over time and has expanded its Editorial Board to 36 members from 17 countries and regions (average h-index 49). [emphases mine]

The journal is published by The Institute of Optics and Electronics, Chinese Academy of Sciences, aiming at providing a platform for researchers, academicians, professionals, practitioners, and students to impart and share knowledge in the form of high quality empirical and theoretical research papers covering the topics of optics, photonics and optoelectronics.

The research group’s awkward name was almost certainly developed with the rather grandiose acronym, GODS, in mind. I don’t think you could get away with doing this in an English-speaking country as your colleagues would mock you mercilessly.

It’s Jiang Wu’s academic and work history that’s of most interest as it might provide insight into China’s Young Thousand Talents program. A January 5, 2023 American Association for the Advancement of Science (AAAS) news release describes the program,

In a systematic evaluation of China’s Young Thousand Talents (YTT) program, which was established in 2010, researchers find that China has been successful in recruiting and nurturing high-caliber Chinese scientists who received training abroad. Many of these individuals outperform overseas peers in publications and access to funding, the study shows, largely due to access to larger research teams and better research funding in China. Not only do the findings demonstrate the program’s relative success, but they also hold policy implications for the increasing number of governments pursuing means to tap expatriates for domestic knowledge production and talent development.

China is a top sender of international students to United States and European Union science and engineering programs. The YTT program was created to recruit and nurture the productivity of high-caliber, early-career, expatriate scientists who return to China after receiving Ph.Ds. abroad. Although there has been a great deal of international attention on the YTT, some associated with the launch of the U.S.’s controversial China Initiative and federal investigations into academic researchers with ties to China, there has been little evidence-based research on the success, impact, and policy implications of the program itself.

Dongbo Shi and colleagues evaluated the YTT program’s first 4 cohorts of scholars and compared their research productivity to that of their peers that remained overseas. Shi et al. found that China’s YTT program successfully attracted high-caliber – but not top-caliber – scientists. However, those young scientists that did return outperformed others in publications across journal-quality tiers – particularly in last-authored publications. The authors suggest that this is due to YTT scholars’ greater access to larger research teams and better research funding in China. The authors say the dearth of such resources in the U.S. and E.U. “may not only expedite expatriates’ return decisions but also motivate young U.S.- and E.U.-born scientists to seek international research opportunities.” They say their findings underscore the need for policy adjustments to allocate more support for young scientists.

Here’s a link to and a citation for the paper,

Has China’s Young Thousand Talents program been successful in recruiting and nurturing top-caliber scientists? by Dongbo Shi, Weichen Liu, and Yanbo Wang. Science 5 Jan 2023 Vol 379, Issue 6627 pp. 62-65 DOI: 10.1126/science.abq1218

This paper is behind a paywall.

Kudos to the folks behind China’s Young Thousand Talents program! Jiang Wu’s career appears to be a prime example of the program’s success. Perhaps Canadian policy makers will be inspired.

China and nanotechnology

It’s been quite a while since I’ve come across any material about Nanopolis, a scientific complex in China devoted to nanotechnology (as described in my September 26, 2014 posting titled, More on Nanopolis in China’s Suzhou Industrial Park). Note: The most recent information about the complex, prior to now, is in my June 1, 2017 posting, which mentions China’s Nanopolis and Nano-X endeavours.

Dr. Mahbube K. Siddiki’s March 12, 2022 article about China’s nanotechnology work in the Small Wars Journal provides a situation overview and an update along with a tidbit about Nanopolis, Note: Footnotes for the article have not been included here,

The Nanotechnology industry in China is moving forward, with substantially high levels of funding, a growing talent pool, and robust international collaborations. The strong state commitment to support this field of science and technology is a key advantage for China to compete with leading forces like US, EU, Japan, and Russia. The Chinese government focuses on increasing competitiveness in nanotechnology by its inclusion as strategic industry in China’s 13th Five-Year Plan, reconfirming state funding, legislative and regulatory support. Research and development (R&D) in Nanoscience and Nanotechnology is a key component of the ambitious ‘Made in China 2025’ initiative aimed at turning China into a high-tech manufacturing powerhouse [1].

A bright example of Chinese nanotech success is the world’s largest nanotech industrial zone called ‘Nanopolis’, located in the eastern city of Suzhou. This futuristic city houses several private multinationals and new Chinese startups across different fields of nanotechnology and nanoscience. Needless to say, China leads the world’s nanotech startups. Involvement of private sector opens new and unique pools of funding and talent, focusing on applied research. Thus, private sector is leading in R&D in China, where state-sponsored institutions still dominate in all other sectors of rapid industrialization and modernization. From cloning to cancer research, from sea to space exploration, this massive and highly populated nation is using nanoscience and nanotechnology innovation to drive some of the world’s biggest breakthroughs, which is raising concerns in many other competing countries [3].

China has established numerous nanotech research institutions throughout the country over the years. Prominent universities like Peking University, City University of Hong Kong, Nanjing University, Hong Kong University of Science and Technology, Soochow University, University of Science and Technology of China are the leading institutions that house state of art nanotech research labs to foster study and research of nanoscience and nanotechnology [5]. Chinese Academy of Science (CAS), National Center for Nanoscience and Technology (NCNST) and Suzhou Institute of Nano-Tech and Nano-Bionics (SINANO) are top among the state sponsored specialized nanoscience and nanotechnology research centers, which have numerous labs and prominent researchers to conduct cutting edge research in the area of nanotechnology. Public-Private collaboration along with the above mentioned research institutes gave birth to many nanotechnology companies, most notable of them are Array Nano, Times Nano, Haizisi Nano Technology, Nano Medtech, Sun Nanotech, XP nano etc. [6]. These companies are thriving on the research breakthroughs China achieved recently in this sector. 

Here are some of the notable achievements in this sector by China. In June 2020, an international team of researchers led by Chinese scientists developed a new form of synthetic and  biodegradable nanoparticle [7]. This modifiable lipid nanoparticle is capable of targeting, penetrating, and altering cells by delivering the CRISPR/Cas9 gene-editing tool into a cell. This novel nanoparticle can be used in the treatment of some gene related disorders, as well as other diseases including some forms of cancer in the brain, liver, and lungs. At the State Key Laboratory of Robotics in the northeast city of Shenyang, researchers have developed a laser that produces a tiny gas bubble[8]. This bubble can be used as a tiny “robot” to manipulate and move materials on a nanoscale with microscopic precision. The technology termed as “Bubble bot” promises new possibilities in the field of artificial tissue creation and cloning [9].

In another report [13] it was shown that China surpassed the U.S. in chemistry in 2018 and now leading the later [sic] with a significant gap, which might take years to overcome. In the meantime, the country is approaching the US in Earth & Environmental sciences as well as physical sciences. According to the trend China may take five years or less to surpass US. On the contrary, in life science research China is lagging the US quite significantly, which might be attributed to both countries’ priority of sponsorship, in terms of funding. In fact, in the time of CORONA pandemic, US can use this gap for her strategic gain over China.

Outstanding economic growth and rapid technological advances of China over the last three decades have given her an unprecedented opportunity to play a leading role in contemporary geopolitical competition. The United States, and many of her partners and allies in the west as well as in Asia, have a range of concerns about how the authoritarian leadership in Beijing maneuver [sic] its recently gained power and position on the world stage. They are warily observing this regime’s deployment of sophisticated technology like “Nano” in ways that challenge many of their core interests and values all across the world. Though the U.S. is considered the only superpower in the world and has maintained its position as the dominant power of technological innovation for decades, China has made massive investments and swiftly implemented policies that have contributed significantly to its technological innovation, economic growth, military capability, and global influence. In some areas, China has eclipsed, or is on the verge of eclipsing, the United States — particularly in the rapid deployment of certain technologies, and nanoscience and nanotechnology appears to be the leading one. …

[About Dr. Siddiki]

Dr. Siddiki is an instructor of Robotic and Autonomous System in the Department of Multi-Domain Operations at the [US] Army Management Staff College where he teaches and does research in that area. He was Assistant Teaching Professor of Electrical Engineering at the Department of Computer Science and Electrical Engineering in the School of Computing and Engineering at University of Missouri Kansas City (UMKC). In UMKC, Dr. Siddiki designed, developed and taught undergraduate and graduate level courses, and supervised research works of Ph.D., Master and undergraduate students. Dr. Siddiki’s research interests lie in the area of nano and quantum tech, Robotic and Autonomous System, Green Energy & Power, and their implications in geopolitics.

As you can see in the article, there are anxieties over China’s rising dominance with regard to scientific research and technology; these anxieties have become more visible since I started this blog in 2008.

My curiosity was piqued when I saw that Dr. Siddiki’s article is in the Small Wars Journal and not in a journal focused on science, research, technology, and/or economics. I found this explanation for the term, ‘small wars’ on the journal’s About page (Note: A link has been removed),

“Small Wars” is an imperfect term used to describe a broad spectrum of spirited continuation of politics by other means, falling somewhere in the middle bit of the continuum between feisty diplomatic words and global thermonuclear war.  The Small Wars Journal embraces that imperfection.

Just as friendly fire isn’t, there isn’t necessarily anything small about a Small War.

The term “Small War” either encompasses or overlaps with a number of familiar terms such as counterinsurgency, foreign internal defense, support and stability operations, peacemaking, peacekeeping, and many flavors of intervention.  Operations such as noncombatant evacuation, disaster relief, and humanitarian assistance will often either be a part of a Small War, or have a Small Wars feel to them.  Small Wars involve a wide spectrum of specialized tactical, technical, social, and cultural skills and expertise, requiring great ingenuity from their practitioners.  The Small Wars Manual (a wonderful resource, unfortunately more often referred to than read) notes that:

Small Wars demand the highest type of leadership directed by intelligence, resourcefulness, and ingenuity. Small Wars are conceived in uncertainty, are conducted often with precarious responsibility and doubtful authority, under indeterminate orders lacking specific instructions.

The “three block war” construct employed by General Krulak is exceptionally useful in describing the tactical and operational challenges of a Small War and of many urban operations.  Its only shortcoming is that is so useful that it is often mistaken as a definition or as a type of operation.

Who Are Those Guys?

Small Wars Journal is NOT a government, official, or big corporate site. It is run by Small Wars Foundation, a non-profit corporation, for the benefit of the Small Wars community of interest. The site principals are Dave Dilegge (Editor-in-Chief) and Bill Nagle (Publisher), and it would not be possible without the support of myriad volunteers as well as authors who care about this field and contribute their original works to the community. We do this in our spare time, because we want to.  McDonald’s pays more.  But we’d rather work to advance our noble profession than watch TV, try to super-size your order, or interest you in a delicious hot apple pie.  If and when you’re not flipping burgers, please join us.

The overview and analysis provided by Dr. Siddiki is very interesting to me and absent any conflicting data, I’m assuming it’s solid work. As for the anxiety that permeates the article, this is standard. All countries are anxious about who’s winning the science and technology race. If memory serves, you can find an example of the anxiety in C.P. Snow’s classic lecture and book, Two Cultures (the book is “The Two Cultures and the Scientific Revolution”) given/published in 1959. The British scientific establishment was very concerned that it was being eclipsed by the US and by the Russians.

Windows and roofs ‘self-adapt’ to heating and cooling conditions

I have two items about thermochromic coatings. It’s a little confusing since the American Association for the Advancement of Science (AAAS), which publishes the journal featuring both papers, has issued a news release that seemingly refers to both papers as a single piece of research.

On to the press/news releases from the research institutions, to be followed by the AAAS news release.

Nanyang Technological University (NTU) does windows

A December 16, 2021 news item on Nanowerk announced work on energy-saving glass,

An international research team led by scientists from Nanyang Technological University, Singapore (NTU Singapore) has developed a material that, when coated on a glass window panel, can effectively self-adapt to heat or cool rooms across different climate zones in the world, helping to cut energy usage.

Developed by NTU researchers and reported in the journal Science (“Scalable thermochromic smart windows with passive radiative cooling regulation”), the first-of-its-kind glass automatically responds to changing temperatures by switching between heating and cooling.

The self-adaptive glass is developed using layers of vanadium dioxide nanoparticles composite, Poly(methyl methacrylate) (PMMA), and low-emissivity coating to form a unique structure which could modulate heating and cooling simultaneously.

A December 17, 2021 NTU press release (PDF), also on EurekAlert but published December 16, 2021, which originated the news item, delves further into the research (Note: A link has been removed),

The newly developed glass, which has no electrical components, works by exploiting the spectrums of light responsible for heating and cooling.

During summer, the glass suppresses solar heating (near infrared light), while boosting radiative cooling (long-wave infrared) – a natural phenomenon where heat emits through surfaces towards the cold universe – to cool the room. In the winter, it does the opposite to warm up the room.

In lab tests using an infrared camera to visualise results, the glass allowed a controlled amount of heat to emit in various conditions (room temperature – above 70°C), proving its ability to react dynamically to changing weather conditions.

New glass regulates both heating and cooling

Windows are one of the key components in a building’s design, but they are also the least energy-efficient and most complicated part. In the United States alone, window-associated energy consumption (heating and cooling) in buildings accounts for approximately four per cent of their total primary energy usage each year according to an estimation based on data available from the Department of Energy in US.[1]

While scientists elsewhere have developed sustainable innovations to ease this energy demand – such as using low emissivity coatings to prevent heat transfer and electrochromic glass that regulate solar transmission from entering the room by becoming tinted – none of the solutions have been able to modulate both heating and cooling at the same time, until now.

The principal investigator of the study, Dr Long Yi of the NTU School of Materials Science and Engineering (MSE) said, “Most energy-saving windows today tackle the part of solar heat gain caused by visible and near infrared sunlight. However, researchers often overlook the radiative cooling in the long wavelength infrared. While innovations focusing on radiative cooling have been used on walls and roofs, this function becomes undesirable during winter. Our team has demonstrated for the first time a glass that can respond favourably to both wavelengths, meaning that it can continuously self-tune to react to a changing temperature across all seasons.”

As a result of these features, the NTU research team believes their innovation offers a convenient way to conserve energy in buildings since it does not rely on any moving components, electrical mechanisms, or blocking views, to function.

To improve the performance of windows, the simultaneous modulation of both solar transmission and radiative cooling are crucial, said co-authors Professor Gang Tan from The University of Wyoming, USA, and Professor Ronggui Yang from the Huazhong University of Science and Technology, Wuhan, China, who led the building energy saving simulation.

“This innovation fills the missing gap between traditional smart windows and radiative cooling by paving a new research direction to minimise energy consumption,” said Prof Gang Tan.

The study is an example of groundbreaking research that supports the NTU 2025 strategic plan, which seeks to address humanity’s grand challenges on sustainability, and accelerate the translation of research discoveries into innovations that mitigate human impact on the environment.

Innovation useful for a wide range of climate types

As a proof of concept, the scientists tested the energy-saving performance of their invention using simulations of climate data covering all populated parts of the globe (seven climate zones).

The team found the glass they developed showed energy savings in both warm and cool seasons, with an overall energy saving performance of up to 9.5%, or ~330,000 kWh per year (estimated energy required to power 60 household in Singapore for a year) less than commercially available low emissivity glass in a simulated medium sized office building.
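As a quick back-of-envelope check of those press release figures: ~330,000 kWh per year spread across 60 households implies annual consumption of about 5,500 kWh per Singapore household (the per-household division is mine, not the press release’s),

```python
# Back-of-envelope check of the NTU press release's energy figures.
annual_saving_kwh = 330_000   # simulated yearly saving vs. low-emissivity glass
households_powered = 60       # Singapore households the saving could power

per_household_kwh = annual_saving_kwh / households_powered
print(f"Implied annual use per household: {per_household_kwh:.0f} kWh")
# prints: Implied annual use per household: 5500 kWh
```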

First author of the study Wang Shancheng, who is Research Fellow and former PhD student of Dr Long Yi, said, “The results prove the viability of applying our glass in all types of climates as it is able to help cut energy use regardless of hot and cold seasonal temperature fluctuations. This sets our invention apart from current energy-saving windows which tend to find limited use in regions with less seasonal variations.”

Moreover, the heating and cooling performance of their glass can be customised to suit the needs of the market and region for which it is intended.

“We can do so by simply adjusting the structure and composition of special nanocomposite coating layered onto the glass panel, allowing our innovation to be potentially used across a wide range of heat regulating applications, and not limited to windows,” Dr Long Yi said.

Providing an independent view, Professor Liangbing Hu, Herbert Rabin Distinguished Professor, Director of the Center for Materials Innovation at the University of Maryland, USA, said, “Long and co-workers made the original development of smart windows that can regulate the near-infrared sunlight and the long-wave infrared heat. The use of this smart window could be highly important for building energy-saving and decarbonization.”  

A Singapore patent has been filed for the innovation. As the next steps, the research team is aiming to achieve even higher energy-saving performance by working on the design of their nanocomposite coating.

The international research team also includes scientists from Nanjing Tech University, China. The study is supported by the Singapore-HUJ Alliance for Research and Enterprise (SHARE), under the Campus for Research Excellence and Technological Enterprise (CREATE) programme, Minster of Education Research Fund Tier 1, and the Sino-Singapore International Joint Research Institute.

Here’s a link to and a citation for the paper,

Scalable thermochromic smart windows with passive radiative cooling regulation by Shancheng Wang, Tengyao Jiang, Yun Meng, Ronggui Yang, Gang Tan, and Yi Long. Science • 16 Dec 2021 • Vol 374, Issue 6574 • pp. 1501-1504 • DOI: 10.1126/science.abg0291

This paper is behind a paywall.

Lawrence Berkeley National Laboratory (Berkeley Lab; LBNL) does roofs

A December 16, 2021 Lawrence Berkeley National Laboratory news release (also on EurekAlert) announces an energy-saving coating for roofs (Note: Links have been removed),

Scientists have developed an all-season smart-roof coating that keeps homes warm during the winter and cool during the summer without consuming natural gas or electricity. Research findings reported in the journal Science point to a groundbreaking technology that outperforms commercial cool-roof systems in energy savings.

“Our all-season roof coating automatically switches from keeping you cool to warm, depending on outdoor air temperature. This is energy-free, emission-free air conditioning and heating, all in one device,” said Junqiao Wu, a faculty scientist in Berkeley Lab’s Materials Sciences Division and a UC Berkeley professor of materials science and engineering who led the study.

Today’s cool roof systems, such as reflective coatings, membranes, shingles, or tiles, have light-colored or darker “cool-colored” surfaces that cool homes by reflecting sunlight. These systems also emit some of the absorbed solar heat as thermal-infrared radiation; in this natural process known as radiative cooling, thermal-infrared light is radiated away from the surface.

The problem with many cool-roof systems currently on the market is that they continue to radiate heat in the winter, which drives up heating costs, Wu explained.

“Our new material – called a temperature-adaptive radiative coating or TARC – can enable energy savings by automatically turning off the radiative cooling in the winter, overcoming the problem of overcooling,” he said.

A roof for all seasons

Metals are typically good conductors of electricity and heat. In 2017, Wu and his research team discovered that electrons in vanadium dioxide behave like a metal to electricity but an insulator to heat – in other words, they conduct electricity well without conducting much heat. “This behavior contrasts with most other metals where electrons conduct heat and electricity proportionally,” Wu explained.

Vanadium dioxide below about 67 degrees Celsius (153 degrees Fahrenheit) is also transparent to (and hence not absorptive of) thermal-infrared light. But once vanadium dioxide reaches 67 degrees Celsius, it switches to a metal state, becoming absorptive of thermal-infrared light. This ability to switch from one phase to another – in this case, from an insulator to a metal – is characteristic of what’s known as a phase-change material.
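The insulator-to-metal switch described here can be caricatured as a step change in thermal-infrared absorptivity at the transition temperature. A minimal sketch, with the 0.1/0.9 absorptivity values as illustrative placeholders of my own (the press release gives no numbers),

```python
VO2_TRANSITION_C = 67.0  # insulator-to-metal transition temperature (~67 °C)

def vo2_ir_absorptivity(temp_c: float) -> float:
    """Toy model of VO2's phase change: transparent to thermal-infrared
    light below the transition, absorptive (metallic) above it.
    The 0.1/0.9 values are illustrative placeholders, not measurements."""
    return 0.1 if temp_c < VO2_TRANSITION_C else 0.9

# Below the transition the film passes thermal-infrared light;
# above it, the metallic phase absorbs (and so can emit) that light.
print(vo2_ir_absorptivity(25.0))  # 0.1 (insulating, IR-transparent)
print(vo2_ir_absorptivity(80.0))  # 0.9 (metallic, IR-absorptive)
```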

To see how vanadium dioxide would perform in a roof system, Wu and his team engineered a 2-centimeter-by-2-centimeter TARC thin-film device.

TARC “looks like Scotch tape, and can be affixed to a solid surface like a rooftop,” Wu said.

In a key experiment, co-lead author Kechao Tang set up a rooftop experiment at Wu’s East Bay home last summer to demonstrate the technology’s viability in a real-world environment.

A wireless measurement device set up on Wu’s balcony continuously recorded responses to changes in direct sunlight and outdoor temperature from a TARC sample, a commercial dark roof sample, and a commercial white roof sample over multiple days.

How TARC outperforms in energy savings

The researchers then used data from the experiment to simulate how TARC would perform year-round in cities representing 15 different climate zones across the continental U.S.

Wu enlisted Ronnen Levinson, a co-author on the study who is a staff scientist and leader of the Heat Island Group in Berkeley Lab’s Energy Technologies Area, to help them refine their model of roof surface temperature. Levinson developed a method to estimate TARC energy savings from a set of more than 100,000 building energy simulations that the Heat Island Group previously performed to evaluate the benefits of cool roofs and cool walls across the United States.

Finnegan Reichertz, a 12th grade student at the East Bay Innovation Academy in Oakland who worked remotely as a summer intern for Wu last year, helped to simulate how TARC and the other roof materials would perform at specific times and on specific days throughout the year for each of the 15 cities or climate zones the researchers studied for the paper.

The researchers found that TARC outperforms existing roof coatings for energy saving in 12 of the 15 climate zones, particularly in regions with wide temperature variations between day and night, such as the San Francisco Bay Area, or between winter and summer, such as New York City.

“With TARC installed, the average household in the U.S. could save up to 10% electricity,” said Tang, who was a postdoctoral researcher in the Wu lab at the time of the study. He is now an assistant professor at Peking University in Beijing, China.

Standard cool roofs have high solar reflectance and high thermal emittance (the ability to release heat by emitting thermal-infrared radiation) even in cool weather.

According to the researchers’ measurements, TARC reflects around 75% of sunlight year-round, but its thermal emittance is high (about 90%) when the ambient temperature is warm (above 25 degrees Celsius or 77 degrees Fahrenheit), promoting heat loss to the sky. In cooler weather, TARC’s thermal emittance automatically switches to low, helping to retain heat from solar absorption and indoor heating, Levinson said.
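Those quoted numbers suggest a simple picture: solar reflectance stays fixed at roughly 0.75 year-round while thermal emittance switches with ambient temperature. A hedged sketch of that switching behaviour, assuming 0.2 as a placeholder for the cool-weather emittance (the release only says “low”),

```python
SOLAR_REFLECTANCE = 0.75   # "reflects around 75% of sunlight year-round"
SWITCH_TEMP_C = 25.0       # warm/cool threshold quoted in the news release

def tarc_thermal_emittance(ambient_c: float) -> float:
    """High emittance (~0.9) in warm weather promotes radiative heat loss
    to the sky; low emittance in cool weather retains heat. The 0.2
    cool-weather value is my placeholder -- the release only says 'low'."""
    return 0.9 if ambient_c > SWITCH_TEMP_C else 0.2

def absorbed_solar_fraction() -> float:
    """Fraction of incident sunlight absorbed, independent of season."""
    return 1.0 - SOLAR_REFLECTANCE

print(tarc_thermal_emittance(30.0))   # warm day: radiate heat away
print(tarc_thermal_emittance(10.0))   # cool day: hold on to heat
print(absorbed_solar_fraction())      # 0.25 of sunlight absorbed year-round
```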

Findings from infrared spectroscopy experiments using advanced tools at Berkeley Lab’s Molecular Foundry validated the simulations.

“Simple physics predicted TARC would work, but we were surprised it would work so well,” said Wu. “We originally thought the switch from warming to cooling wouldn’t be so dramatic. Our simulations, outdoor experiments, and lab experiments proved otherwise – it’s really exciting.”

The researchers plan to develop TARC prototypes on a larger scale to further test its performance as a practical roof coating. Wu said that TARC may also have potential as a thermally protective coating to prolong battery life in smartphones and laptops, and shield satellites and cars from extremely high or low temperatures. It could also be used to make temperature-regulating fabric for tents, greenhouse coverings, and even hats and jackets.

Co-lead authors on the study were Kaichen Dong and Jiachen Li.

The Molecular Foundry is a nanoscience user facility at Berkeley Lab.

This work was primarily supported by the DOE Office of Science and a Bakar Fellowship.

The technology is available for licensing and collaboration. If interested, please contact Berkeley Lab’s Intellectual Property Office, ipo@lbl.gov.

Here’s a link to and a citation for the paper,

Temperature-adaptive radiative coating for all-season household thermal regulation by Kechao Tang, Kaichen Dong, Jiachen Li, Madeleine P. Gordon, Finnegan G. Reichertz, Hyungjin Kim, Yoonsoo Rho, Qingjun Wang, Chang-Yu Lin, Costas P. Grigoropoulos, Ali Javey, Jeffrey J. Urban, Jie Yao, Ronnen Levinson, Junqiao Wu. Science • 16 Dec 2021 • Vol 374, Issue 6574 • pp. 1504-1509 • DOI: 10.1126/science.abf7136

This paper is behind a paywall.

An interesting news release from the AAAS

While it’s a little confusing, as the citation mentions only the ‘window’ research from NTU, the body of the news release offers some additional information about the usefulness of thermochromic materials and seemingly refers to both papers. From a December 16, 2021 AAAS news release,

Temperature-adaptive passive radiative cooling for roofs and windows

When it’s cold out, window glass and roof coatings that use passive radiative cooling to keep buildings cool can be designed to passively turn off radiative cooling to avoid heat loss, two new studies show.  Their proof-of-concept analyses demonstrate that passive radiative cooling can be expanded to warm and cold climate applications and regions, potentially providing all-season energy savings worldwide. Buildings consume roughly 40% of global energy, a large proportion of which is used to keep them cool in warmer climates. However, most temperature regulation systems commonly employed are not very energy efficient and require external power or resources. In contrast, passive radiative cooling technologies, which use outer space as a near-limitless natural heat sink, have been extensively examined as a means of energy-efficient cooling for buildings. This technology uses materials designed to selectively emit narrow-band radiation through the infrared atmospheric window to disperse heat energy into the coldness of space. However, while this approach has proven effective in cooling buildings to below ambient temperatures, it is only helpful during the warmer months or in regions that are perpetually hot. Furthermore, the inability to “turn off” passive cooling in cooler climes or in regions with large seasonal temperature variations means that continuous cooling during colder periods would exacerbate the energy costs of heating. In two different studies, by Shancheng Wang and colleagues and Kechao Tang and colleagues, researchers approach passive radiative cooling from an all-season perspective and present a new, scalable temperature-adaptive radiative technology that passively turns off radiative cooling at lower temperatures. Wang et al. and Tang et al. achieve this using a tungsten-doped vanadium dioxide and show how it can be applied to create both window glass and a flexible roof coating, respectively. 
Model simulations of the self-adapting materials suggest they could provide year-round energy savings across most climate zones, especially those with substantial seasonal temperature variations. 

I wish them all good luck with getting these materials to market.

Secure quantum communication network with 15 users

Things are moving quickly where quantum communication networks are concerned. Back in April 2021, Dutch scientists announced the first multi-node quantum network connecting three processors (see my July 8, 2021 posting with the news and an embedded video).

Less than six months later, Chinese scientists announced work on a 15-user quantum network. From a September 23, 2021 news item on phys.org,

Quantum secure direct communication (QSDC) based on entanglement can directly transmit confidential information. Scientist [sic] in China explored a QSDC network based on time-energy entanglement and sum-frequency generation. The results show that when any two users are performing QSDC over 40 kilometers of optical fiber, and the rate of information transmission can be maintained at 1Kbp/s. Our result lays the foundation for the realization of satellite-based long-distance and global QSDC in the future.

A September 23, 2021 Chinese Academy of Sciences (CAS) press release on EurekAlert, which seems to have originated the news item, provides additional detail,

Quantum communication has presented a revolutionary step in secure communication due to its high security of the quantum information, and many communication protocols have been proposed, such as the quantum secure direct communication (QSDC) protocol. QSDC based on entanglement can directly transmit confidential information. Any attack of QSDC results to only random number, and cannot obtain any useful information from it. Therefore, QSDC has simple communication steps and reduces potential security loopholes, and offers high security guarantees, which guarantees the security and the value propositions of quantum communications in general. However, the inability to simultaneously distinguish the four sets of encoded orthogonal entangled states in entanglement-based QSDC protocols limits its practical application. Furthermore, it is important to construct quantum network in order to make wide applications of quantum secure direct communication. Experimental demonstration of QSDC is badly required.

In a new paper published in Light Science & Application, a team of scientists, led by Professor Xianfeng Chen from State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Physics and Astronomy, Shanghai Jiao Tong University, China and Professor Yuanhua Li from Department of Physics, Jiangxi Normal University, China have explored a QSDC network based on time-energy entanglement and sum-frequency generation (SFG). They present a fully connected entanglement-based QSDC network including five subnets, with 15 users. Using the frequency correlations of the fifteen photon pairs via time division multiplexing and dense wavelength division multiplexing (DWDM), they perform a 40-kilometer fiber QSDC experiment by implying two-step transmission between each user. In this process, the network processor divides the spectrum of the single-photon source into 30 International Telecommunication Union (ITU) channels. With these channels, there will be a coincidence event between each user by performing a Bell-state measurement based on the SFG. This allows the four sets of encoded entangled states to be identified simultaneously without post-selection.
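To get a feel for the scale of a fully connected 15-user network: every pair of users needs its own entangled link, and 30 ITU channels pair up into 15 correlated signal/idler channel pairs. Here’s a small illustration; note that the symmetric pairing of channels about the source’s centre frequency is my inference from how energy-conserving entangled-photon sources are typically arranged (full connectivity then relies on the time-division multiplexing the authors mention),

```python
from itertools import combinations

NUM_USERS = 15
NUM_CHANNELS = 30  # ITU channels carved from the single-photon spectrum

# One entangled link per pair of users in a fully connected network.
links = list(combinations(range(1, NUM_USERS + 1), 2))
print(len(links))  # 105 pairwise links

# Channels pair symmetrically about the centre of the spectrum, yielding
# correlated signal/idler pairs (my inference -- see the note above).
channel_pairs = [(c, NUM_CHANNELS + 1 - c)
                 for c in range(1, NUM_CHANNELS // 2 + 1)]
print(len(channel_pairs))  # 15 correlated channel pairs
```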

It is well known that the security and reliability of the information transmission for QSDC is an essential part in the quantum network. Therefore, they implemented block transmission and step-by-step transmission methods in QSDC with estimating the secrecy capacity of the quantum channel. After confirming the security of the quantum channel, the legitimate user performs encoding or decoding operations within these schemes reliably.

These scientists summarize the experiment results of their network scheme:

“The results show that when any two users are performing QSDC over 40 kilometers of optical fiber, the fidelity of the entangled state shared by them is still greater than 95%, and the rate of information transmission can be maintained at 1 Kbp/s. Our result demonstrates the feasibility of a proposed QSDC network, and hence lays the foundation for the realization of satellite-based long-distance and global QSDC in the future.”

“With this scheme, each user interconnects with any others through shared pairs of entangled photons in different wavelength. Moreover, it is possible to improve the information transmission rate greater than 100 Kbp/s in the case of the high-performance detectors, as well as high-speed control in modulator being used” they added.

“It is worth noting the present-work, which offers long-distance point-to-point QSDC connection, combined with the recently proposed secure-repeater quantum network of QSDC, which offers secure end-to-end communication throughout the quantum Internet, will enable the construction of secure quantum network using present-day technology, realizing the great potential of QSDC in future communication.” the scientists forecast.

Here’s a link to and a citation for the paper,

A 15-user quantum secure direct communication network by Zhantong Qi, Yuanhua Li, Yiwen Huang, Juan Feng, Yuanlin Zheng & Xianfeng Chen. Light: Science & Applications volume 10, Article number: 183 (2021) DOI: https://doi.org/10.1038/s41377-021-00634-2 Published: 14 September 2021

This paper is open access.

For the profoundly curious, there is an earlier version of this paper on arXiv.org, the site run by Cornell University where it was posted after moderation but prior to peer-review for publication in a journal.

East/West collaboration on scholarship and imagination about humanity’s long-term future— six new fellows at Berggruen Research Center at Peking University

According to a January 4, 2022 Berggruen Institute (also received via email), they have appointed a new crop of fellows for their research center at Peking University,

The Berggruen Institute has announced six scientists and philosophers to serve as Fellows at the Berggruen Research Center at Peking University in Beijing, China. These eminent scholars will work together across disciplines to explore how the great transformations of our time may shift human experience and self-understanding in the decades and centuries to come.

The new Fellows are Chenjian Li, University Chair Professor at Peking University; Xianglong Zhang, professor of philosophy at Peking University; Xiaoli Liu, professor of philosophy at Renmin University of China; Jianqiao Ge, lecturer at the Academy for Advanced Interdisciplinary Studies (AAIS) at Peking University; Xiaoping Chen, Director of the Robotics Laboratory at the University of Science and Technology of China; and Haidan Chen, associate professor of medical ethics and law at the School of Health Humanities at Peking University.

“Amid the pandemic, climate change, and the rest of the severe challenges of today, our Fellows are surmounting linguistic and cultural barriers to imagine positive futures for all people,” said Bing Song, Director of the China Center and Vice President of the Berggruen Institute. “Dialogue and shared understanding are crucial if we are to understand what today’s breakthroughs in science and technology really mean for the human community and the planet we all share.”

The Fellows will investigate deep questions raised by new understandings and capabilities in science and technology, exploring their implications for philosophy and other areas of study. Chenjian Li is examining the philosophical and ethical considerations of gene-editing technology, Haidan Chen is exploring the social implications of brain/computer interface technologies in China, and Xiaoli Liu is studying philosophical issues arising from the intersections among psychology, neuroscience, artificial intelligence, and art.

Jianqiao Ge’s project considers the impact of artificial intelligence on the human brain, given the relative recency of its evolution into current form. Xianglong Zhang’s work explores the interplay between literary culture and the development of technology. Finally, Xiaoping Chen is developing a new concept for describing innovation that draws from Daoist, Confucianist, and ancient Greek philosophical traditions.

Fellows at the China Center meet monthly with the Institute’s Los Angeles-based Fellows. These fora provide an opportunity for all Fellows to share and discuss their work. Through this cross-cultural dialogue, the Institute is helping to ensure a continued high-level exchange of ideas among China, the United States, and the rest of the world about some of the deepest and most fundamental questions humanity faces today.

“Changes in our capability and understanding of the physical world affect all of humanity, and questions about their implications must be pondered at a cross-cultural level,” said Bing. “Through multidisciplinary dialogue that crosses the gulf between East and West, our Fellows are pioneering new thought about what it means to be human.”

Haidan Chen is associate professor of medical ethics and law at the School of Health Humanities at Peking University. She was a visiting postgraduate researcher at the Institute for the Study of Science, Technology and Innovation (ISSTI), the University of Edinburgh; a visiting scholar at the Brocher Foundation, Switzerland; and a Fulbright visiting scholar at the Center for Biomedical Ethics, Stanford University. Her research interests embrace the ethical, legal, and social implications (ELSI) of genetics and genomics, and the governance of emerging technologies, in particular stem cells, biobanks, precision medicine, and brain science. Her publications appear in Social Science & Medicine, Bioethics, and other journals.

Xiaoping Chen is the director of the Robotics Laboratory at the University of Science and Technology of China. He also currently serves as the director of the Robot Technical Standard Innovation Base, an executive member of the Global AI Council, Chair of the Chinese RoboCup Committee, and a member of the International RoboCup Federation’s Board of Trustees. He has received the USTC’s Distinguished Research Presidential Award and won Best Paper at IEEE ROBIO 2016. His projects have won the IJCAI’s Best Autonomous Robot and Best General-Purpose Robot awards as well as twelve world championships at RoboCup. He proposed an intelligent technology pathway for robots based on Open Knowledge and the Rong-Cha principle, which have been implemented and tested in the long-term research on the KeJia and JiaJia intelligent robot systems.

Jianqiao Ge is a lecturer at the Academy for Advanced Interdisciplinary Studies (AAIS) at Peking University. Previously, she was a postdoctoral fellow at the University of Chicago and the Principal Investigator or Co-Investigator of more than 10 research grants supported by the Ministry of Science and Technology of China, the National Natural Science Foundation of China, and the Beijing Municipal Science & Technology Commission. She has published more than 20 peer-reviewed articles in leading academic journals such as PNAS and the Journal of Neuroscience, and has been awarded two national patents. In 2008, by scanning the human brain with functional MRI, Ge and her collaborator were among the first to confirm that the human brain engages distinct neurocognitive strategies to comprehend human intelligence and artificial intelligence. Ge received her Ph.D. in psychology, a B.S. in physics, a double B.S. in mathematics and applied mathematics, and a double B.S. in economics from Peking University.

Chenjian Li is University Chair Professor at Peking University. He also serves on the China Advisory Board of Eli Lilly and Company, the China Advisory Board of Cornell University, and the Rhodes Scholar Selection Committee. He is an alumnus of Peking University’s Biology Department, Peking Union Medical College, and Purdue University. He is a former Vice Provost of Peking University, Executive Dean of Yuanpei College, and Associate Dean of the School of Life Sciences at Peking University. Prior to his return to China, he was an associate professor at Weill Medical College of Cornell University and the Aidekman Endowed Chair of Neurology at Mount Sinai School of Medicine. Dr. Li’s academic research focuses on the molecular and cellular mechanisms of neurological diseases, cancer drug development, and gene editing and its philosophical and ethical considerations. Li also writes as a public intellectual on science and humanity, and his Chinese translation of Richard Feynman’s book What Do You Care What Other People Think? received the 2001 National Publisher’s Book Award.

Xiaoli Liu is professor of philosophy at Renmin University. She is also a leader of the Chinese Society of Philosophy of Science. Her primary research interests are philosophy of mathematics, philosophy of science, and philosophy of cognitive science. Her main works are “Life of Reason: A Study of Gödel’s Thought,” “Challenges of Cognitive Science to Contemporary Philosophy,” and “Philosophical Issues in the Frontiers of Cognitive Science.” She edited “Symphony of Mind and Machine” and the book series “Mind and Cognition.” In 2003, she co-founded the “Mind and Machine Workshop” with interdisciplinary scholars; it has held 18 consecutive annual meetings. Liu received her Ph.D. from Peking University and was a senior visiting scholar at Harvard University.

Xianglong Zhang is a professor of philosophy at Peking University. His research areas include Confucian philosophy, phenomenology, and Western and Eastern comparative philosophy. His major works (in Chinese except where noted) include: Heidegger’s Thought and Chinese Tao of Heaven; Biography of Heidegger; From Phenomenology to Confucius; The Exposition and Comments of Contemporary Western Philosophy; The Exposition and Comments of Classic Western Philosophy; Thinking to Take Refuge: The Chinese Ancient Philosophies in the Globalization; Lectures on the History of Confucian Philosophy (four volumes); German Philosophy, German Culture and Chinese Philosophical Thinking; and Home and Filial Piety: From the View between the Chinese and the Western.

About the Berggruen China Center
Breakthroughs in artificial intelligence and life science have led to the fourth scientific and technological revolution. The Berggruen China Center is a hub for East-West research and dialogue dedicated to the cross-cultural and interdisciplinary study of the transformations affecting humanity. Intellectual themes for research programs are focused on frontier sciences, technologies, and philosophy, as well as issues involving digital governance and globalization.

About the Berggruen Institute:
The Berggruen Institute’s mission is to develop foundational ideas and shape political, economic, and social institutions for the 21st century. Providing critical analysis using an outwardly expansive and purposeful network, we bring together some of the best minds and most authoritative voices from across cultural and political boundaries to explore fundamental questions of our time. Our objective is enduring impact on the progress and direction of societies around the world. To date, projects inaugurated at the Berggruen Institute have helped develop a youth jobs plan for Europe, fostered a more open and constructive dialogue between Chinese leadership and the West, strengthened the ballot initiative process in California, and launched Noema, a new publication that brings thought leaders from around the world together to share ideas. In addition, the Berggruen Prize, a $1 million award, is conferred annually by an independent jury to a thinker whose ideas are shaping human self-understanding to advance humankind.

You can find out more about the Berggruen China Center here and you can access a list along with biographies of all the Berggruen Institute fellows here.

Getting ready

I look forward to hearing about the projects from these thinkers.

Gene editing and ethics

I may have to reread some books in anticipation of Chenjian Li’s philosophical work and ethical considerations of gene editing technology. I wonder if there’ll be any reference to the He Jiankui affair.

(Briefly for those who may not be familiar with the situation, He claimed to be the first to gene edit babies. In November 2018, news about the twins, Lulu and Nana, was a sensation and He was roundly criticized for his work. I have not seen any information about how many babies were gene edited for He’s research; there could be as many as six. My July 28, 2020 posting provided an update. I haven’t stumbled across anything substantive since then.)

There are two books I recommend should you be interested in gene editing, as told through the lens of the He Jiankui affair. If you can, read both as that will give you a more complete picture.

In no particular order: Kevin Davies’ 2020 book, “Editing Humanity: The CRISPR Revolution and the New Era of Genome Editing,” provides an extensive and accessible look at the science, the politics of scientific research, and some of the pressures on scientists in all countries. It is an excellent introduction from an insider. Here’s more from Davies’ biographical sketch,

Kevin Davies is the executive editor of The CRISPR Journal and the founding editor of Nature Genetics. He holds an MA in biochemistry from the University of Oxford and a PhD in molecular genetics from the University of London. He is the author of Cracking the Genome, The $1,000 Genome, and co-authored a new edition of DNA: The Story of the Genetic Revolution with Nobel Laureate James D. Watson and Andrew Berry. …

The other book is “The Mutant Project: Inside the Global Race to Genetically Modify Humans” (2020) by Eben Kirksey, an anthropologist with an undergraduate degree in one of the sciences. He too provides scientific grounding, but his focus is on the cultural and personal dimensions of the He Jiankui affair, on the culture of science research irrespective of where it’s practiced, and on the culture associated with the DIY (do-it-yourself) Biology community. Here’s more from Kirksey’s biographical sketch,

EBEN KIRKSEY is an American anthropologist and Member of the Institute for Advanced Study in Princeton, New Jersey. He has been published in Wired, The Atlantic, The Guardian and The Sunday Times. He is sought out as an expert on science in society by the Associated Press, The Wall Street Journal, The New York Times, Democracy Now, Time and the BBC, among other media outlets. He speaks widely at the world’s leading academic institutions including Oxford, Yale, Columbia, UCLA, and the International Summit of Human Genome Editing, plus music festivals, art exhibits, and community events. Professor Kirksey holds a long-term position at Deakin University in Melbourne, Australia.

Brain/computer interfaces (BCI)

I’m happy to see that Haidan Chen will be exploring the social implications of brain/computer interface technologies in China. I haven’t seen much being done here in Canada but my December 23, 2021 posting, Your cyborg future (brain-computer interface) is closer than you think, highlights work being done at the Imperial College London (ICL),

“For some of these patients, these devices become such an integrated part of themselves that they refuse to have them removed at the end of the clinical trial,” said Rylie Green, one of the authors. “It has become increasingly evident that neurotechnologies have the potential to profoundly shape our own human experience and sense of self.”

You might also find my September 17, 2020 posting has some useful information. Check under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead for another story about attachment to one’s brain implant and also the “Finally” subhead for more reading suggestions.

Artificial intelligence (AI), art, and the brain

I’ve lumped together three of the thinkers, Xiaoli Liu, Jianqiao Ge and Xianglong Zhang, as there is some overlap (in my mind, if nowhere else),

  • Liu’s work on philosophical issues as seen in the intersections of psychology, neuroscience, artificial intelligence, and art
  • Ge’s work on the evolution of the brain and the impact that artificial intelligence may have on it
  • Zhang’s work on the relationship between literary culture and the development of technology

A December 3, 2021 posting, True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read), is both a review of a recent episode of the Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, and a dive into a number of issues as can be seen under subheads such as “AI and Creativity,” “Kazuo Ishiguro?” and “Evolution.”

You may also want to check out my December 27, 2021 posting, Ai-Da (robot artist) writes and performs poem honouring Dante’s 700th anniversary, for an eye opening experience. If nothing else, just watch the embedded video.

This suggestion relates most closely to Ge’s and Zhang’s work. If you haven’t already come across it, there’s Walter J. Ong’s 1982 book, “Orality and Literacy: The Technologizing of the Word.” From the introductory page of the 2002 edition (PDF),

This classic work explores the vast differences between oral and literate cultures and offers a brilliantly lucid account of the intellectual, literary and social effects of writing, print and electronic technology. In the course of his study, Walter J. Ong offers fascinating insights into oral genres across the globe and through time and examines the rise of abstract philosophical and scientific thinking. He considers the impact of orality-literacy studies not only on literary criticism and theory but on our very understanding of what it is to be a human being, conscious of self and other.

In 2013, a 30th anniversary edition of the book was released and is still in print.

Philosophical traditions

I’m very excited to learn more about Xiaoping Chen’s work describing innovation that draws from Daoist, Confucianist, and ancient Greek philosophical traditions.

Should any of my readers have suggestions for introductory readings on these philosophical traditions, please do use the Comments option for this blog. In fact, if you have suggestions for other readings on these topics, I would be very happy to learn of them.

Congratulations to the six Fellows at the Berggruen Research Center at Peking University in Beijing, China. I look forward to reading articles about your work in the Berggruen Institute’s Noema magazine and, possibly, attending your online events.

Charles Lieber, nanoscientist, and the US Dept. of Justice

Charles Lieber, professor at Harvard University and one of the world’s leading researchers in nanotechnology, went on trial on Tuesday, December 14, 2021.

Accused of hiding his ties to a recruitment programme run by the People’s Republic of China (PRC), Lieber is probably the highest-profile academic, and one of the few neither born in China nor of Chinese familial origin, to be charged under the auspices of the US Department of Justice’s ‘China Initiative’.

This US National Public Radio (NPR) December 14, 2021 audio excerpt provides a brief summary of the situation by Ryan Lucas,

A December 14, 2021 article by Jess Aloe, Eileen Guo, and Antonio Regalado for the Massachusetts Institute of Technology (MIT) Technology Review lays out the situation in more detail (Note: A link has been removed),

In January of 2020, agents arrived at Harvard University looking for Charles Lieber, a renowned nanotechnology researcher who chaired the school’s department of chemistry and chemical biology. They were there to arrest him on charges of hiding his financial ties with a university in China. By arresting Lieber steps from Harvard Yard, authorities were sending a loud message to the academic community: failing to disclose such links is a serious crime.

Now Lieber is set to go on trial beginning December 14 [2021] in federal court in Boston. He has pleaded not guilty, and hundreds of academics have signed letters of support. In fact, some critics say it’s the Justice Department’s China Initiative—a far-reaching effort started in 2018 to combat Chinese economic espionage and trade-secret theft—that should be on trial, not Lieber. They are calling the prosecutions fundamentally flawed, a witch hunt that misunderstands the open-book nature of basic science and that is selectively destroying scientific careers over financial misdeeds and paperwork errors without proof of actual espionage or stolen technology.

For their part, prosecutors believe they have a tight case. They allege that Lieber was recruited into China’s Thousand Talents Plan—a program aimed at attracting top scientists—and paid handsomely to establish a research laboratory at the Wuhan University of Technology, but hid the affiliation from US grant agencies when asked about it (read a copy of the indictment here). Lieber is facing six felony charges: two counts of making false statements to investigators, two counts of filing a false tax return, and two counts of failing to report a foreign bank account. [emphases mine; Note: None of these charges have been proved in court]

The case against Lieber could be a bellwether for the government, which has several similar cases pending against US professors alleging that they didn’t disclose their China affiliations to granting agencies.

As for the China Initiative (from the MIT Technology Review December 14, 2021 article),

The China Initiative was announced in 2018 by Jeff Sessions, then the Trump administration’s attorney general, as a central component of the administration’s tough stance toward China.

An MIT Technology Review investigation published earlier this month [December 2021] found that the China Initiative is an umbrella for various types of prosecutions somehow connected to China, with targets ranging from a Chinese national who ran a turtle-smuggling ring to state-sponsored hackers believed to be behind some of the biggest data breaches in history. In total, MIT Technology Review identified 77 cases brought under the initiative; of those, a quarter have led to guilty pleas or convictions, but nearly two-thirds remain pending.

The government’s prosecution of researchers like Lieber for allegedly hiding ties to Chinese institutions has been the most controversial, and fastest-growing, aspect of the government’s efforts. In 2020, half of the 31 new cases brought under the China Initiative were cases against scientists or researchers. These cases largely did not accuse the defendants of violating the Economic Espionage Act.

… hundreds of academics across the country, from institutions including Stanford University and Princeton University, signed a letter calling on Attorney General Merrick Garland to end the China Initiative. The initiative, they wrote, has drifted from its original mission of combating Chinese intellectual-property theft and is instead harming American research competitiveness by discouraging scholars from coming to or staying in the US.

Lieber’s case is the second [emphasis mine] China Initiative prosecution of an academic to end up in the courtroom. The only previous person to face trial [emphasis mine] on research integrity charges, University of Tennessee–Knoxville professor Anming Hu, was acquitted of all charges [emphasis mine] by a judge in June [2021] after a deadlocked jury led to a mistrial.

Ken Dilanian wrote an October 19, 2021 article for (US) National Broadcasting Corporation’s (NBC) news online about Hu’s eventual acquittal and about the China Inititative (Note: Dilanian’s timeline for the acquittal differs from the timeline in the MIT Technology Review),

The federal government brought the full measure of its legal might against Anming Hu, a nanotechnology expert at the University of Tennessee.

But the Justice Department’s efforts to convict Hu as part of its program to crack down on illicit technology transfer to China failed — spectacularly. A judge acquitted him last month [September 2021] after a lengthy trial offered little evidence of anything other than a paperwork misunderstanding, according to local newspaper coverage. It was the second trial, after the first ended in a hung jury.

“The China Initiative has turned up very little by way of clear espionage and the transfer of genuinely strategic information to the PRC,” said Robert Daly, a China expert at the Wilson Center, referring to the country by its formal name, the People’s Republic of China. “They are mostly process crimes, disclosure issues. A growing number of voices are calling for an end to the China initiative because it’s seen as discriminatory.”

The China Initiative began under President Donald Trump’s attorney general, Jeff Sessions, in 2018. But concerns about Chinese espionage in the United States — and the transfer of technology to China through business and academic relationships — are bipartisan.

John Demers, who departed in June [2021] as head of the Justice Department’s National Security Division, said in an interview that the problem of technology transfer at universities is real. But he said he also believes conflict of interest and disclosure rules were not rigorously enforced for many years. For that reason, he recommended an amnesty program offering academics with undisclosed foreign ties a chance to come clean and avoid penalties. So far, the Biden administration has not implemented such a program.

When I first featured the Lieber case in a January 28, 2020 posting I was more focused on the financial elements,

ETA January 28, 2020 at 1645 hours: I found a January 28, 2020 article by Antonio Regalado for the MIT Technology Review which provides a few more details about Lieber’s situation,

“…

Big money: According to the charging document, Lieber, starting in 2011, agreed to help set up a research lab at the Wuhan University of Technology and “make strategic visionary and creative research proposals” so that China could do cutting-edge science.

He was well paid for it. Lieber earned a salary when he visited China worth up to $50,000 per month, as well as $150,000 a year in expenses in addition to research funds. According to the complaint, he got paid by way of a Chinese bank account but also was known to send emails asking for cash instead.

Harvard eventually wised up to the existence of a Wuhan lab using its name and logo, but when administrators confronted Lieber, he lied and said he didn’t know about a formal joint program, according to the government complaint.

This is messy not least because Lieber and the members of his Harvard lab have done some extraordinary work, as per my November 15, 2019 posting (Human-machine interfaces and ultra-small nanoprobes) about injectable electronics.