Tag Archives: Tsinghua University

New chip for neuromorphic computing runs at a fraction of the energy of today’s systems

An August 17, 2022 news item on Nanowerk announces big (so to speak) claims from a team researching neuromorphic (brainlike) computer chips,

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of artificial intelligence (AI) applications–all at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, and range from smart watches to VR headsets, smart earbuds, smart sensors in factories, and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as the state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that runs computations in memory, it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

…

An August 17, 2022 University of California at San Diego (UCSD) news release (also on EurekAlert), which originated the news item, provides more detail than usually found in a news release,

“The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility,” said Weier Wan, the paper’s first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering. 

The research team, co-led by bioengineers at the University of California San Diego, presents their results in the Aug. 17 [2022] issue of Nature.

Currently, AI computing is both power hungry and computationally expensive. Most AI applications on edge devices involve moving data from the devices to the cloud, where the AI processes and analyzes it. Then the results are moved back to the device. That’s because most edge devices are battery-powered and as a result only have a limited amount of power that can be dedicated to computing. 

By reducing power consumption needed for AI inference at the edge, this NeuRRAM chip could lead to more robust, smarter and accessible edge devices and smarter manufacturing. It could also lead to better data privacy as the transfer of data from devices to the cloud comes with increased security risks. 

On AI chips, moving data from memory to computing units is one major bottleneck. 

“It’s the equivalent of doing an eight-hour commute for a two-hour work day,” Wan said. 

To solve this data transfer issue, researchers used what is known as resistive random-access memory, a type of non-volatile memory that allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan’s advisor at Stanford and a main contributor to this work. Computation with RRAM chips is not necessarily new, but generally it leads to a decrease in the accuracy of the computations performed on the chip and a lack of flexibility in the chip’s architecture. 

“Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago,” Cauwenberghs said.  “What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms.”

A carefully crafted methodology was key to the work with multiple levels of “co-optimization” across the abstraction layers of hardware and software, from the design of the chip to its configuration to run various AI tasks. In addition, the team made sure to account for various constraints that span from memory device physics to circuits and network architecture. 

“This chip now provides us with a platform to address these problems across the stack from devices and circuits to algorithms,” said Siddharth Joshi, an assistant professor of computer science and engineering at the University of Notre Dame, who started working on the project as a Ph.D. student and postdoctoral researcher in Cauwenberghs’ lab at UC San Diego.

Chip performance

Researchers measured the chip’s energy efficiency using a measure known as energy-delay product, or EDP. EDP combines both the amount of energy consumed for every operation and the amount of time it takes to complete the operation. By this measure, the NeuRRAM chip achieves 1.6 to 2.3 times lower EDP (lower is better) and 7 to 13 times higher computational density than state-of-the-art chips.
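For readers who like to see the arithmetic, here is a minimal Python sketch of how an energy-delay product comparison works. The per-operation numbers are invented purely for illustration (the news release does not give them); only the formula, energy multiplied by delay, is the point.

```python
# Energy-delay product (EDP): lower is better, since it rewards chips that are
# both frugal (low energy per operation) and fast (low delay per operation).
def energy_delay_product(energy_per_op_joules, delay_per_op_seconds):
    return energy_per_op_joules * delay_per_op_seconds

# Hypothetical numbers, purely for illustration (not taken from the paper):
baseline_edp = energy_delay_product(2.0e-12, 10e-9)   # a reference compute-in-memory chip
neurram_edp = energy_delay_product(1.5e-12, 7e-9)     # a chip that is both leaner and quicker

print(f"Improvement: {baseline_edp / neurram_edp:.1f}x lower EDP")
```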

Researchers ran various AI tasks on the chip. It achieved 99% accuracy on a handwritten digit recognition task; 85.7% on an image classification task; and 84.7% on a Google speech command recognition task. In addition, the chip also achieved a 70% reduction in image-reconstruction error on an image-recovery task. These results are comparable to existing digital chips that perform computation under the same bit-precision, but with drastic savings in energy. 

Researchers point out that one key contribution of the paper is that all the results featured are obtained directly on the hardware. In many previous works on compute-in-memory chips, AI benchmark results were often obtained partially by software simulation.

Next steps include improving architectures and circuits and scaling the design to more advanced technology nodes. Researchers also plan to tackle other applications, such as spiking neural networks.

“We can do better at the device level, improve circuit design to implement additional features and address diverse applications with our dynamic NeuRRAM platform,” said Rajkumar Kubendran, an assistant professor at the University of Pittsburgh, who started work on the project while a Ph.D. student in Cauwenberghs’ research group at UC San Diego.

In addition, Wan is a founding member of a startup that works on productizing the compute-in-memory technology. “As a researcher and  an engineer, my ambition is to bring research innovations from labs into practical use,” Wan said. 

New architecture 

The key to NeuRRAM’s energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. 
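To make the “compute-in-memory” idea concrete, here is a toy numerical sketch, in Python, of the matrix-vector multiplication that an RRAM crossbar performs in the analog domain. The array sizes and values are invented; the actual NeuRRAM circuit details, including the voltage-mode sensing scheme described above, are in the paper.

```python
import numpy as np

# Toy sketch of compute-in-memory: an RRAM crossbar stores a weight matrix as
# device conductances, and applying the input signals across the array yields
# every dot product in a single analog step (emulated digitally here).
rng = np.random.default_rng(0)
conductances = rng.uniform(0.1, 1.0, size=(64, 64))  # hypothetical stored weights
inputs = rng.uniform(0.0, 0.5, size=64)              # hypothetical input activations

# All 64 outputs appear at once; in NeuRRAM the voltage-mode sensing lets every
# row and column of the array participate in a single computing cycle.
analog_outputs = conductances @ inputs

# The neuron circuit then digitizes each analog output (a crude 4-bit stand-in):
digitized = np.round(analog_outputs / analog_outputs.max() * 15).astype(int)
print(digitized[:8])
```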

In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of the RRAM weights. The neuron’s connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure.

To make sure that the accuracy of the AI computations can be preserved across various neural network architectures, researchers developed a set of hardware-algorithm co-optimization techniques. The techniques were verified on various neural networks including convolutional neural networks, long short-term memory, and restricted Boltzmann machines.

As a neuromorphic AI chip, NeuRRAM performs parallel distributed processing across 48 neurosynaptic cores. To simultaneously achieve high versatility and high efficiency, NeuRRAM supports data-parallelism by mapping a layer in the neural network model onto multiple cores for parallel inference on multiple data. Also, NeuRRAM offers model-parallelism by mapping different layers of a model onto different cores and performing inference in a pipelined fashion.
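As a rough, illustrative way to picture the two mapping strategies just described (this is not the chip’s actual scheduler, and the core numbering and layer names are invented), one might sketch them like this:

```python
# Illustration only: two ways of mapping a hypothetical 4-layer model onto
# a 48-core chip.
cores = list(range(48))
layers = ["conv1", "conv2", "fc1", "fc2"]

# Data parallelism: replicate one layer across several cores, each core
# running the same layer on a different slice of the input data.
def map_data_parallel(layer, core_subset):
    return {core: f"{layer}, data slice {i}" for i, core in enumerate(core_subset)}

# Model parallelism: assign different layers to different cores and pass
# intermediate results along the chain in a pipelined fashion.
def map_model_parallel(layer_list, core_subset):
    return dict(zip(core_subset, layer_list))

print(map_data_parallel("conv1", cores[:4]))
print(map_model_parallel(layers, cores[4:8]))
```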

An international research team

The work is the result of an international team of researchers. 

The UC San Diego team designed the CMOS circuits that implement the neural functions interfacing with the RRAM arrays to support the synaptic functions in the chip’s architecture, for high efficiency and versatility. Wan, working closely with the entire team, implemented the design; characterized the chip; trained the AI models; and executed the experiments. Wan also developed a software toolchain that maps AI applications onto the chip. 

The RRAM synapse array and its operating conditions were extensively characterized and optimized at Stanford University. 

The RRAM array was fabricated and integrated onto CMOS at Tsinghua University. 

The team at Notre Dame contributed to both the design and architecture of the chip and the subsequent machine learning model design and training.

The research started as part of the National Science Foundation funded Expeditions in Computing project on Visual Cortex on Silicon at Penn State University, with continued funding support from the Office of Naval Research Science of AI program, the Semiconductor Research Corporation and DARPA [US Defense Advanced Research Projects Agency] JUMP program, and Western Digital Corporation.

Here’s a link to and a citation for the paper,

A compute-in-memory chip based on resistive random-access memory by Weier Wan, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H.-S. Philip Wong & Gert Cauwenberghs. Nature volume 608, pages 504–512 (2022) DOI: https://doi.org/10.1038/s41586-022-04992-8 Published: 17 August 2022 Issue Date: 18 August 2022

This paper is open access.

Reconfiguring a LEGO-like AI chip with light

MIT engineers have created a reconfigurable AI chip that comprises alternating layers of sensing and processing elements that can communicate with each other. Credit: Figure courtesy of the researchers and edited by MIT News

This image certainly challenges any ideas I have about what Lego looks like. It seems they see things differently at the Massachusetts Institute of Technology (MIT). From a June 13, 2022 MIT news release (also on EurekAlert),

Imagine a more sustainable future, where cellphones, smartwatches, and other wearable devices don’t have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip — like LEGO bricks incorporated into an existing build. Such reconfigurable chipware could keep devices up to date while reducing our electronic waste. 

Now MIT engineers have taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip.

The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs employ conventional wiring to relay signals between layers. Such intricate connections are difficult if not impossible to sever and rewire, making such stackable designs not reconfigurable.

The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.

“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”

The researchers are eager to apply the design to edge computing devices — self-sufficient sensors and other electronics that work independently from any central or distributed resources such as supercomputers or cloud-based computing.

“As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”

The team’s results are published today in Nature Electronics. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere.

Lighting the way

The team’s design is currently configured to carry out basic image-recognition tasks. It does so via a layering of image sensors, LEDs, and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the team previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on a chip, without the need for external software or an Internet connection.

In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters — in this case, M, I, and T. While a conventional approach would be to relay a sensor’s signals to a processor via physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers, without requiring a physical connection. 

“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”

The team’s optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors constitute an image sensor for receiving data, and the LEDs transmit data to the next layer. As a signal (for instance an image of a letter) reaches the image sensor, the image’s light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.

Stacking up

The team fabricated a single chip, with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixellated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the larger the chance that the image is indeed the letter that the particular array is trained to recognize.)
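In other words, the readout is a simple “largest current wins” decision across the three blocks. A minimal sketch of that decision rule, with invented current values, might look like this:

```python
# Each block is trained to recognize one letter; the block producing the largest
# current is taken as the chip's answer. The current values here are invented.
measured_currents = {"M": 0.8e-6, "I": 2.4e-6, "T": 0.9e-6}  # amperes, illustrative

predicted_letter = max(measured_currents, key=measured_currents.get)
print(f"Predicted letter: {predicted_letter}")  # -> I
```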

The team found that the chip correctly classified clear images of each letter, but it was less able to distinguish between blurry images, for instance between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better “denoising” processor, and found the chip then accurately identified the images.

“We showed stackability, replaceability, and the ability to insert a new function into the chip,” notes MIT postdoc Min-Kyu Song.

The researchers plan to add more sensing and processing capabilities to the chip, and they envision the applications to be boundless.

“We can add layers to a cellphone’s camera so it could recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” offers Choi, who along with Kim previously developed a “smart” skin for monitoring vital signs.

Another idea, he adds, is for modular chips, built into electronics, that consumers can choose to build up with the latest sensor and processor “bricks.”

“We can make a general chip platform, and each layer could be sold separately like a video game,” Jeehwan Kim says. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”

This research was supported, in part, by the Ministry of Trade, Industry, and Energy (MOTIE) from South Korea; the Korea Institute of Science and Technology (KIST); and the Samsung Global Research Outreach Program.

Here’s a link to and a citation for the paper,

Reconfigurable heterogeneous integration using stackable chips with embedded artificial intelligence by Chanyeol Choi, Hyunseok Kim, Ji-Hoon Kang, Min-Kyu Song, Hanwool Yeon, Celesta S. Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Jaeyong Lee, Ikbeom Jang, Subeen Pang, Kanghyun Ryu, Sang-Hoon Bae, Yifan Nie, Hyun S. Kum, Min-Chul Park, Suyoun Lee, Hyung-Jun Kim, Huaqiang Wu, Peng Lin & Jeehwan Kim. Nature Electronics volume 5, pages 386–393 (2022) 05 May 2022 Issue Date: June 2022 Published: 13 June 2022 DOI: https://doi.org/10.1038/s41928-022-00778-y

This paper is behind a paywall.

Bruno Latour, science, and the 2021 Kyoto Prize in Arts and Philosophy: Commemorative Lecture

The Kyoto Prize (Wikipedia entry) was first given out in 1985. These days (I checked out a currency converter today, November 15, 2021), the Inamori Foundation, which administers the prize, gives out 100 million yen per prize, worth about $1,098,000 CAD or $876,800 USD.

Here’s more about the prize from the November 9, 2021 Inamori Foundation press release on EurekAlert,

The Kyoto Prize is an international award of Japanese origin, presented to individuals who have made significant contributions to the progress of science, the advancement of civilization, and the enrichment and elevation of the human spirit. The Prize is granted in the three categories of Advanced Technology, Basic Sciences, and Arts and Philosophy, each of which comprises four fields, making a total of 12 fields. Every year, one Prize is awarded in each of the three categories with prize money of 100 million yen per category.

One of the distinctive features of the Kyoto Prize is that it recognizes both “science” and “arts and philosophy” fields. This is because of its founder Kazuo Inamori’s conviction that the future of humanity can be assured only when there is a balance between scientific development and the enrichment of the human spirit.

The recipient for arts and philosophy, Bruno Latour has been mentioned here before (from a July 15, 2020 posting titled, ‘Architecture, the practice of science, and meaning’),

The 1979 book, Laboratory Life: the Social Construction of Scientific Facts by Bruno Latour and Steve Woolgar immediately came to mind on reading about a new book (The New Architecture of Science: Learning from Graphene) linking architecture to the practice of science (research on graphene). It turns out that one of the authors studied with Latour. (For more about Laboratory Life see: Bruno Latour’s Wikipedia entry; scroll down to Main Works)

Back to Latour and his prize from the November 9, 2021 Inamori Foundation press release,

Bruno Latour, Professor Emeritus at Paris Institute of Political Studies (Sciences Po), received the 2021 Kyoto Prize in Arts and Philosophy for radically re-examining “modernity” by developing a philosophy that focuses on interactions between technoscience and social structure. Latour’s Commemorative Lecture “How to React to a Change in Cosmology” will be released on November 10, 2021, 10:00 AM JST at the 2021 Kyoto Prize Special Website.

“Viruses–we don’t even know if viruses are our enemies or our friends!” says Latour in his lecture. By using the ongoing Covid epidemic as a sort of lead, Latour discusses the shift in cosmology, a structure that distributes agencies around. He then suggests a “new project” we have to work on now, which he assumes is very different from the modernist project.

Bruno Latour has revolutionized the conventional view of science by treating nature, humans, laboratory equipment, and other entities as equal actors, and describing technoscience as the hybrid network of these actors. His philosophy re-examines “modernity” based on the dualism of nature and society. He has a large influence across disciplines, with his multifaceted activities that include proposals regarding global environmental issues.

Latour and the other two 2021 Kyoto Prize laureates are introduced on the 2021 Kyoto Prize Special Website with information about their work, profiles, and three-minute introduction videos. This year’s Kyoto Prize in Advanced Technology went to Andrew Chi-Chih Yao, Professor at the Institute for Interdisciplinary Information Sciences at Tsinghua University, and the Prize in Basic Sciences to Robert G. Roeder, Arnold and Mabel Beckman Professor of Biochemistry and Molecular Biology at The Rockefeller University.

The folks at the Kyoto Prize have made a three-minute video introduction to Bruno Latour available,

For more information you can check out the Inamori Foundation website. There are two Kyoto Prize websites, the 2021 Kyoto Prize Special Website and the Kyoto Prize website. These are all English language websites and, if you have the language skills and the interest, it is possible to toggle (upper right hand side) and get the Japanese language version.

Finally, there’s a dedicated Bruno Latour webpage on the 2021 Kyoto Prize Special Website and Bruno Latour has his own website where French and English items are mixed together, but it seems the majority of the content is in English.

Memristor artificial neural network learning based on phase-change memory (PCM)

Caption: Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST. Credit: UNIST

I’m pretty sure that Professor Hongsik Jeong is the one on the right. He seems more relaxed, like he’s accustomed to posing for pictures highlighting his work.

Now on to the latest memristor news, which features the number 8.

For anyone unfamiliar with the term memristor, it’s a device (of sorts) which scientists, involved in neuromorphic computing (computers that operate like human brains), are researching as they attempt to replicate brainlike processes for computers.

From a January 22, 2021 Ulsan National Institute of Science and Technology (UNIST) press release (also on EurekAlert but published March 15, 2021),

An international team of researchers, affiliated with UNIST has unveiled a novel technology that could improve the learning ability of artificial neural networks (ANNs).

Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST, in collaboration with researchers from Tsinghua University in China, proposed a new learning method to improve the learning ability of ANN chips by challenging its instability.

Artificial neural network chips are capable of mimicking the structural, functional and biological features of human neural networks, and thus have been considered the technology of the future. In this study, the research team demonstrated the effectiveness of the proposed learning method by building phase change memory (PCM) memristor arrays that operate like ANNs. This learning method is also advantageous in that its learning ability can be improved without additional power consumption, since PCM undergoes a spontaneous resistance increase due to the structural relaxation after amorphization.

ANNs, like human brains, use less energy even when performing computation and memory tasks simultaneously. However, artificial neural network chips, in which a large number of physical devices are integrated, have the disadvantage that the devices are error-prone. Existing learning methods assume a perfect, error-free chip, so the learning ability of the resulting artificial neural network is poor.

The research team developed a memristor artificial neural network learning method based on phase-change memory, reasoning that the real human brain does not require near-perfect operation either. This learning method incorporates the “resistance drift” (a gradual increase in electrical resistance) of the phase-change material in the memory semiconductor into the learning itself. Because the information-update pattern is recorded in the form of increasing electrical resistance in the memristor, which serves as a synapse, the synapse additionally learns the association between the pattern of its changes and the data it is learning.
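To give a rough feel for the idea (this is a cartoon, not the authors’ published algorithm, and every value below is invented), one can imagine a weight update that folds the device’s spontaneous resistance increase into the learning step rather than assuming a perfect device:

```python
import numpy as np

# Cartoon of drift-aware learning (not the published algorithm): a synaptic
# weight stored as a PCM conductance decays on its own ("resistance drift"),
# and the update rule is applied to the drifted state instead of pretending
# the device is perfect.
rng = np.random.default_rng(1)
weights = rng.normal(0.0, 0.1, size=100)   # conductance-coded synaptic weights
drift_rate = 0.01                          # hypothetical per-step relative drift

def drift_aware_update(w, gradients, lr=0.05, drift=drift_rate):
    drifted = w * (1.0 - drift)            # spontaneous resistance increase -> conductance loss
    return drifted - lr * gradients        # ordinary gradient step on the drifted weights

gradients = rng.normal(0.0, 0.05, size=100)
weights = drift_aware_update(weights, gradients)
print(weights[:5])
```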

In an experiment classifying handwritten digits 0–9, the research team showed that the new learning method improves learning ability by about 3%. In particular, the accuracy on the number 8, which is difficult to classify, improved significantly. [emphasis mine] The learning ability improved thanks to the synaptic update pattern, which changes differently according to the difficulty of classifying each handwritten digit.

The researchers expect their findings to promote learning algorithms that exploit the intrinsic properties of memristor devices, opening a new direction for the development of neuromorphic computing chips.

Here’s a link to and a citation for the paper,

Spontaneous sparse learning for PCM-based memristor neural networks by Dong-Hyeok Lim, Shuang Wu, Rong Zhao, Jung-Hoon Lee, Hongsik Jeong & Luping Shi. Nature Communications volume 12, Article number: 319 (2021) DOI: https://doi.org/10.1038/s41467-020-20519-z Published 12 January 2021

This paper is open access.

New boron nanostructure—carbon, watch out!

Carbon nanotubes, buckminsterfullerenes (also known as buckyballs), and/or graphene are names for different carbon nanoscale structures and, as far as I’m aware, carbon is the only element that merits some distinct names at the nanoscale. By comparison, gold can be gold nanorods, gold nanostars, gold nanoparticles, and so on. In short, nanostructures made of gold (and most other elements) are always prefaced with the word ‘gold’ followed by a word with ‘nano’ in it.

Scientists naming a new boron nanoscale structure seem to have adopted both strategies for a hybrid name. Here’s more from a June 25, 2020 news item on phys.org,

The discovery of carbon nanostructures like two-dimensional graphene and soccer ball-shaped buckyballs helped to launch a nanotechnology revolution. In recent years, researchers from Brown University [located in Rhode Island, US] and elsewhere have shown that boron, carbon’s neighbor on the periodic table, can make interesting nanostructures too, including two-dimensional borophene and a buckyball-like hollow cage structure called borospherene.

Caption: The family of boron-based nanostructures has a new member: metallo-borospherenes, hollow cages made from 18 boron atoms and three atoms of lanthanide elements. Credit: Wang Lab / Brown University

A June 25, 2020 Brown University news release (also on EurekAlert), which originated the news item, describes these new structures in detail,

Now, researchers from Brown and Tsinghua University have added another boron nanostructure to the list. In a paper published in Nature Communications, they show that clusters of 18 boron atoms and three atoms of lanthanide elements form a bizarre cage-like structure unlike anything they’ve ever seen.

“This is just not a type of structure you expect to see in chemistry,” said Lai-Sheng Wang, a professor of chemistry at Brown and the study’s senior author. “When we wrote the paper we really struggled to describe it. It’s basically a spherical trihedron. Normally you can’t have a closed three-dimensional structure with only three sides, but since it’s spherical, it works.”

The researchers are hopeful that the nanostructure may shed light on the bulk structure and chemical bonding behavior of boron lanthanides, an important class of materials widely used in electronics and other applications. The nanostructure by itself may have interesting properties as well, the researchers say.

“Lanthanide elements are important magnetic materials, each with very different magnetic moments,” Wang said. “We think any of the lanthanides will make this structure, so they could have very interesting magnetic properties.”

Wang and his students created the lanthanide-boron clusters by focusing a powerful laser onto a solid target made of a mixture of boron and a lanthanide element. The clusters are formed upon cooling of the vaporized atoms. Then they used a technique called photoelectron spectroscopy to study the electronic properties of the clusters. The technique involves zapping clusters of atoms with another high-powered laser. Each zap knocks an electron out of the cluster. By measuring the kinetic energies of those freed electrons, researchers can create a spectrum of binding energies for the electrons that bond the cluster together.
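The bookkeeping behind such a spectrum is simply energy conservation: the binding energy of an electron equals the photon energy minus the electron’s measured kinetic energy. A quick illustration in Python (the laser wavelength and the measured value below are chosen only for the example, not taken from the Wang lab’s setup):

```python
# Photoelectron spectroscopy bookkeeping: E_binding = E_photon - E_kinetic.
PLANCK_EV_NM = 1239.84          # photon energy (eV) = 1239.84 / wavelength (nm)

laser_wavelength_nm = 193.0     # an illustrative deep-UV laser line
photon_energy_eV = PLANCK_EV_NM / laser_wavelength_nm   # about 6.4 eV

measured_kinetic_eV = 3.1       # invented measurement for the example
binding_energy_eV = photon_energy_eV - measured_kinetic_eV
print(f"Electron binding energy: {binding_energy_eV:.2f} eV")
```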

“When we see a simple, beautiful spectrum, we know there’s a beautiful structure behind it,” Wang said.

To figure out what that structure looks like, Wang compared the photoelectron spectra with theoretical calculations done by Professor Jun Li and his students from Tsinghua. Once they find a theoretical structure with a binding spectrum that matches the experiment, they know they’ve found the right structure.

“This structure was something we never would have predicted,” Wang said. “That’s the value of combining theoretical calculation with experimental data.”

Wang and his colleagues have dubbed the new structures metallo-borospherenes, and they’re hopeful that further research will reveal their properties.

Here’s a link to and a citation for the paper,

Spherical trihedral metallo-borospherenes by Teng-Teng Chen, Wan-Lu Li, Wei-Jia Chen, Xiao-Hu Yu, Xin-Ran Dong, Jun Li & Lai-Sheng Wang. Nature Communications volume 11, Article number: 2766 (2020) DOI: https://doi.org/10.1038/s41467-020-16532-x Published: 02 June 2020

This paper is open access.

China is world leader in nanotechnology and in other fields too?

State of Chinese nanoscience/nanotechnology

China claims to be the world leader in the field in a white paper announced in an August 29, 2017 Springer Nature press release,

Springer Nature, the National Center for Nanoscience and Technology, China and the National Science Library of the Chinese Academy of Sciences (CAS) released in both Chinese and English a white paper entitled “Small Science in Big China: An overview of the state of Chinese nanoscience and technology” at NanoChina 2017, an international conference on nanoscience and technology held August 28 and 29 in Beijing. The white paper looks at the rapid growth of China’s nanoscience research into its current role as the world’s leader [emphasis mine], examines China’s strengths and challenges, and makes some suggestions for how its contribution to the field can continue to thrive.

The white paper points out that China has become a strong contributor to nanoscience research in the world, and is a powerhouse of nanotechnology R&D. Some of China’s basic research is leading the world. China’s applied nanoscience research and the industrialization of nanotechnologies have also begun to take shape. These achievements are largely due to China’s strong investment in nanoscience and technology. China’s nanoscience research is also moving from quantitative increase to quality improvement and innovation, with greater emphasis on the applications of nanotechnologies.

“China took an initial step into nanoscience research some twenty years ago, and has since grown its commitment at an unprecedented rate, as it has for scientific research as a whole. Such a growth is reflected both in research quantity and, importantly, in quality. Therefore, I regard nanoscience as a window through which to observe the development of Chinese science, and through which we could analyze how that rapid growth has happened. Further, the experience China has gained in developing nanoscience and related technologies is a valuable resource for the other countries and other fields of research to dig deep into and draw on,” said Arnout Jacobs, President, Greater China, Springer Nature.

The white paper explores China’s research output relative to the rest of the world in terms of research paper output, research contribution contained in the Nano database, and finally patents, providing insight into China’s strengths and expertise in nano research. The white paper also presents the results of a survey of experts from the community discussing the outlook for and challenges to the future of China’s nanoscience research.

China nano research output: strong rise in quantity and quality

In 1997, around 13,000 nanoscience-related papers were published globally. By 2016, this number had risen to more than 154,000 nano-related research papers. This corresponds to a compound annual growth rate of 14% per annum, almost four times the growth in publications across all areas of research of 3.7%. Over the same period of time, the nano-related output from China grew from 820 papers in 1997 to over 52,000 papers in 2016, a compound annual growth rate of 24%.
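Those growth rates follow directly from the paper counts quoted above; here is a quick sanity check in Python:

```python
# Compound annual growth rate (CAGR) implied by the publication counts above.
def cagr(start_count, end_count, years):
    return (end_count / start_count) ** (1 / years) - 1

years = 2016 - 1997
print(f"Global nano papers: {cagr(13_000, 154_000, years):.1%} per year")  # ~14%
print(f"China nano papers:  {cagr(820, 52_000, years):.1%} per year")      # ~24%
```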

China’s contribution to the global total has been growing steadily. In 1997, Chinese researchers co-authored just 6% of the nano-related papers contained in the Science Citation Index (SCI). By 2010, this grew to match the output of the United States. They now contribute over a third of the world’s total nanoscience output — almost twice that of the United States.

Additionally, China’s share of the most cited nanoscience papers has kept increasing year on year, with a compound annual growth rate of 22% — more than three times the global rate. It overtook the United States in 2014 and its contribution is now many times greater than that of any other country in the world, manifesting an impressive progression in both quantity and quality.

The rapid growth of nanoscience in China has been enabled by consistent and strong financial support from the Chinese government. As early as 1990, the State Science and Technology Committee, the predecessor of the Ministry of Science and Technology (MOST), launched the Climbing Up project on nanomaterial science. During the 1990s, the National Natural Science Foundation of China (NSFC) also funded nearly 1,000 small-scale projects in nanoscience. In the National Guideline on Medium- and Long-Term Program for Science and Technology Development (for 2006−2020) issued in early 2006 by the Chinese central government, nanoscience was identified as one of four areas of basic research and received the largest proportion of research budget out of the four areas. The brain boomerang, with more and more foreign-trained Chinese researchers returning from overseas, is another contributor to China’s rapid rise in nanoscience.

The white paper clarifies the role of Chinese institutions, including CAS, in driving China’s rise to become the world’s leader in nanoscience. Currently, CAS is the world’s largest producer of high impact nano research, contributing more than twice as many papers to the 1% most-cited nanoscience literature as its closest competitors. In addition to CAS, five other Chinese institutions are ranked among the global top 20 in terms of output of top cited 1% nanoscience papers — Tsinghua University, Fudan University, Zhejiang University, University of Science and Technology of China and Peking University.

Nano database reveals advantages and focus of China’s nano research

The Nano database (http://nano.nature.com) is a comprehensive platform that has been recently developed by Nature Research – part of Springer Nature – which contains nanoscience-related papers published in 167 peer-reviewed journals including Advanced Materials, Nano Letters, Nature, Science and more. Analysis of the Nano database of nanomaterial-containing articles published in top 30 journals during 2014–2016 shows that Chinese scientists explore a wide range of nanomaterials, the five most common of which are nanostructured materials, nanoparticles, nanosheets, nanodevices and nanoporous materials.

In terms of the research of applications, China has a clear leading edge in catalysis research, which is the most popular area of the country’s quality nanoscience papers. Chinese nano researchers also contributed significantly to nanomedicine and energy-related applications. China is relatively weaker in nanomaterials for electronics applications, compared to other research powerhouses, but robotics and lasers are emerging applications areas of nanoscience in China, and nanoscience papers addressing photonics and data storage applications also see strong growth in China. Over 80% of research from China listed in the database explicitly mentions applications of the nanostructures and nanomaterials described, notably higher than from most other leading nations such as the United States, Germany, the UK, Japan and France.

Nano also reveals the extent of China’s international collaborations in nano research. China has seen the percentage of its internationally collaborated papers increasing from 36% in 2014 to 44% in 2016. This level of international collaboration, similar to that of South Korea, is still much lower than that of the western countries, and the rate of growth is also not as fast as those in the United States, France and Germany.

The United States is China’s biggest international collaborator, contributing to 55% of China’s internationally collaborated papers on nanoscience that are included in the top 30 journals in the Nano database. Germany, Australia and Japan follow in a descending order as China’s collaborators on nano-related quality papers.

China’s patent output: topping the world, mostly applied domestically

Analysis of the Derwent Innovation Index (DII) database of Clarivate Analytics shows that China’s cumulative total number of patent applications for the past 20 years, amounting to 209,344 applications, or 45% of the global total, is more than twice as many as that of the United States, the second largest contributor to nano-related patents. China surpassed the United States in 2008 and has ranked first in the world since.

Five Chinese institutions, including the CAS, Zhejiang University, Tsinghua University, Hon Hai Precision Industry Co., Ltd. and Tianjin University can be found among the global top 10 institutional contributors to nano-related patent applications. CAS has been at the top of the global rankings since 2008, with a total of 11,218 patent applications for the past 20 years. Interestingly, outside of China, most of the other big institutional contributors among the top 10 are commercial enterprises, while in China, research or academic institutions are leading in patent applications.

However, the number of nano-related patents China has filed overseas is still very low, accounting for only 2.61% of its cumulative patent applications over the last 20 years, whereas the proportion in the United States is nearly 50%. In some European countries, including the UK and France, more than 70% of patent applications are filed overseas.

China has high numbers of patent applications in several popular technical areas for nanotechnology use, and is strongest in patents for polymer compositions and macromolecular compounds. In comparison, nano-related patent applications in the United States, South Korea and Japan are mainly for electronics or semiconductor devices, with the United States leading the world in the cumulative number of patents for semiconductor devices.

Outlook, opportunities and challenges

The white paper highlights that the rapid rise of China’s research output and patent applications has painted a rosy picture for the development of Chinese nanoscience, and in both the traditionally strong subjects and newly emerging areas, Chinese nanoscience shows great potential.

Several interviewed experts in the survey identify catalysis and catalytic nanomaterials as the most promising nanoscience area for China. The use of nanotechnology in the energy and medical sectors was also considered very promising.

Some of the interviewed experts commented that the industrial impact of China’s nanotechnology is limited and there is still a gap between nanoscience research and the industrialization of nanotechnologies. Therefore, they recommended that the government invest more in applied research to drive the translation of nanoscience research and find ways to encourage enterprises to invest more in R&D.

As more and more young scientists enter the field, the competition for research funding is becoming more intense. However, this increasing competition for funding was not found to concern most interviewed young scientists; rather, they emphasized that the soft environment is more important. They recommended establishing channels that allow the suggestions or creative ideas of the young to be heard. Also, some interviewed young researchers commented that they felt that the current evaluation system was geared towards past achievements or favoured overseas experience, and recommended the development of an improved talent selection mechanism to ensure the sustainable growth of China’s nanoscience.

I have taken a look at the white paper and found it to be well written. It also provides a brief but thorough history of nanotechnology/nanoscience, even adding a bit of historical information that was new to me. As for the rest of the white paper, it relies on bibliometrics (number of published papers and number of citations) and number of patents filed to lay the groundwork for claiming Chinese leadership in nanotechnology. As I’ve stated many times before, these are problematic measures but as far as I can determine they are almost the only ones we have. Frankly, as a Canadian, it doesn’t much matter to me since Canada, no matter how you slice or dice it, is always in a lower tier relative to science leadership in major fields. It’s the Americans who might feel inclined to debate leadership with regard to nanotechnology and other major fields, and I leave it to US commentators to take up the cudgels should they be inclined. The big bonuses here are the history, the glimpse into the Chinese perspective on the field of nanotechnology/nanoscience, and the analysis of weaknesses and strengths.

Coming up fast on Google and Amazon

A November 16, 2017 article by Christina Bonnington for Slate explores the possibility that a Chinese tech giant, Baidu, will provide Google and Amazon serious competition in their quests to dominate world markets (Note: Links have been removed),

Caption: The company took a playful approach to the form—but it has functional reasons for the design, too. Credit: Baidu

One of the most interesting companies in tech right now isn’t based in Palo Alto, or San Francisco, or Seattle. Baidu, a Chinese company with headquarters in Beijing, is taking on America’s biggest and most innovative tech titans—with style.

Baidu, a titan in its own right, leapt onto the scene as a competitor to Google in the search engine space. Since then, the company, largely underappreciated here in the U.S., has focused on beefing up its artificial intelligence efforts. Former AI chief Andrew Ng, upon leaving the company in March, credited Baidu’s CEO Robin Li with being one of the first technology leaders to fully appreciate the value of deep learning. Baidu now has a 1,300-person AI group, and that investment in AI has helped the company catch up to older, more established companies like Google and Amazon—both in emerging spaces, such as autonomous vehicles, and in consumer tech, as its latest announcement shows.

On Thursday [November 16, 2017], Baidu debuted its entrants to the popular virtual assistant space: a connected speaker and two robots. Baidu aims for the speaker to compete against options such as Amazon’s Echo line, Google Home, and Apple HomePod. Inside, the $256 device will utilize Baidu’s DuerOS conversational artificial intelligence platform, which is already used in more than 100 different smart home brands’ products. DuerOS will let you use your voice to do things like ask the speaker for information, play music, or hail a cab. Called the Raven H, the speaker includes high-end audio components from Tymphany and a unique design jointly created by acquired startup Raven Tech and Swedish consumer electronics company Teenage Engineering.

While the focus is on exciting new technology products from Baidu, the subtext, such as it is, suggests US companies had best keep an eye on their Chinese competitor(s).

Dutch/Chinese partnership to produce nanoparticles at the touch of a button

Now back to China and nanotechnology leadership and the production of nanoparticles. This announcement was made in a November 17, 2017 news item on Azonano,

Delft University of Technology [Netherlands] spin-off VSPARTICLE enters the booming Chinese market with a radical technology that allows researchers to produce nanoparticles at the push of a button. VSPARTICLE’s nanoparticle generator uses atoms, the world’s smallest building blocks, to provide a controllable source of nanoparticles. The start-up from Delft signed a distribution agreement with Bio-Sun to make their VSP-G1 nanoparticle generator available in China.

A November 16, 2017 VSPARTICLE press release, which originated the news item, provides more detail,

“We are honoured to cooperate with VSPARTICLE and bring the innovative VSP-G1 nanoparticle generator into the Chinese market. The VSP-G1 will create new possibilities for researchers in catalysis, aerosol, healthcare and electronics,” says Yinghui Cai, CEO of Bio-Sun.

With an exponential growth in nanoparticle research in the last decade, China is one of the leading countries in the field of nanotechnology and its applications. Vincent Laban, CFO of VSPARTICLE, explains: “Due to its immense investments in IOT, sensors, semiconductor technology, renewable energy and healthcare applications, China will eventually become one of our biggest markets. The collaboration with Bio-Sun offers a valuable opportunity to enter the Chinese market at exactly the right time.”

NANOPARTICLES ARE THE BUILDING BLOCKS OF THE FUTURE

Increasingly, scientists are focusing on nanoparticles as a key technology in enabling the transition to a sustainable future. Nanoparticles are used to make new types of sensors and smart electronics; provide new imaging and treatment possibilities in healthcare; and reduce harmful waste in chemical processes.

CURRENT RESEARCH TOOLKIT LACKS A FAST WAY FOR MAKING SPECIFIC BUILDING BLOCKS

With the latest tools in nanotechnology, researchers are exploring the possibilities of building novel materials. This is, however, a trial-and-error method. Getting the right nanoparticles often is a slow struggle, as most production methods take a substantial amount of effort and time to develop.

VSPARTICLE’S VSP-G1 NANOPARTICLE GENERATOR

With the VSP-G1 nanoparticle generator, VSPARTICLE makes the production of nanoparticles as easy as pushing a button. Easy and fast iterations enable researchers to fast-forward their research cycle and verify their hypotheses.

VSPARTICLE

Born out of the research labs of Delft University of Technology, with over 20 years of experience in the synthesis of aerosol, VSPARTICLE believes there is a whole new world of possibilities and materials at the nanoscale. The company was founded in 2014 and has an international sales network in Europe, Japan and China.

BIO-SUN

Bio-Sun was founded in Beijing in 2010 and is a leader in promoting nanotechnology and biotechnology instruments in China. It serves many renowned customers in life science, drug discovery and material science. Bio-Sun has four branch offices in Qingdao, Shanghai, Guangzhou and Wuhan City, and a nationwide sales network.

That’s all folks!

The volatile lithium-ion battery

On the heels of Samsung’s Galaxy Note 7 recall due to fires (see Alex Fitzpatrick’s Sept. 9, 2016 article for Time magazine for a good description of lithium-ion batteries and why they catch fire; see my May 29, 2013 posting on lithium-ion batteries, fires [including the airplane fires], and nanotechnology risk assessments), there’s new research on lithium-ion batteries and fires from China. From an Oct. 21, 2016 news item on Nanotechnology Now,

Dozens of dangerous gases are produced by the batteries found in billions of consumer devices, like smartphones and tablets, according to a new study. The research, published in Nano Energy, identified more than 100 toxic gases released by lithium batteries, including carbon monoxide.

An Oct. 20, 2016 Elsevier Publishing press release (also on EurekAlert), which originated the news item, expands on the theme,

The gases are potentially fatal; they can cause strong irritation to the skin, eyes and nasal passages, and harm the wider environment. The researchers behind the study, from the Institute of NBC Defence and Tsinghua University in China, say many people may be unaware of the dangers of overheating, damaging or using a disreputable charger for their rechargeable devices.

In the new study, the researchers investigated a type of rechargeable battery, known as a “lithium-ion” battery, which is placed in two billion consumer devices every year.

“Nowadays, lithium-ion batteries are being actively promoted by many governments all over the world as a viable energy solution to power everything from electric vehicles to mobile devices. The lithium-ion battery is used by millions of families, so it is imperative that the general public understand the risks behind this energy source,” explained Dr. Jie Sun, lead author and professor at the Institute of NBC Defence.

The dangers of exploding batteries have led manufacturers to recall millions of devices: Dell recalled four million laptops in 2006 and millions of Samsung Galaxy Note 7 devices were recalled this month after reports of battery fires. But the threats posed by toxic gas emissions and the source of these emissions are not well understood.

Dr. Sun and her colleagues identified several factors that can cause an increase in the concentration of the toxic gases emitted. A fully charged battery will release more toxic gases than a battery with 50 percent charge, for example. The chemicals contained in the batteries and their capacity to release charge also affected the concentrations and types of toxic gases released.

Identifying the gases produced and the reasons for their emission gives manufacturers a better understanding of how to reduce toxic emissions and protect the wider public, as lithium-ion batteries are used in a wide range of environments.

“Such dangerous substances, in particular carbon monoxide, have the potential to cause serious harm within a short period of time if they leak inside a small, sealed environment, such as the interior of a car or an airplane compartment,” Dr. Sun said.

Almost 20,000 lithium-ion batteries were heated to the point of combustion in the study, causing most devices to explode and all to emit a range of toxic gases. Batteries can be exposed to such temperature extremes in the real world, for example, if the battery overheats or is damaged in some way.

The researchers now plan to develop this detection technique to improve the safety of lithium-ion batteries so they can be used to power the electric vehicles of the future safely.

“We hope this research will allow the lithium-ion battery industry and electric vehicle sector to continue to expand and develop with a greater understanding of the potential hazards and ways to combat these issues,” Sun concluded.

Here’s a link to and a citation for the paper,

Toxicity, a serious concern of thermal runaway from commercial Li-ion battery by Jie Sun, Jigang Li, Tian Zhou, Kai Yang, Shouping Wei, Na Tang, Nannan Dang, Hong Li, Xinping Qiu, Liquan Chen. Nano Energy Volume 27, September 2016, Pages 313–319 DOI: http://dx.doi.org/10.1016/j.nanoen.2016.06.031

This paper appears to be open access.

Feed your silkworms graphene or carbon nanotubes for stronger silk

This Oct. 11, 2016 news item on Nanowerk may make you wonder about a silkworm’s standard diet,

Researchers at Tsinghua University in Beijing, China, have demonstrated that mechanically enhanced silk fibers could be naturally produced by feeding silkworms with diets containing single-walled carbon nanotubes (SW[C]NTs) or graphene.

The as-spun silk fibers containing nanofillers showed evidently increased fracture strength and elongation-at-break, demonstrating the validity of SWNT or graphene incorporation into silkworm silk as reinforcement through an in situ functionalization approach.

The researchers conclude that “by analyzing the silk fibers and the excrement of silkworms, … parts of the fed carbon nanomaterials were incorporated into the as-spun silk fibers, while others went into excrement.”

Bob Yirka in an Oct. 11, 2016 article for phys.org provides a little information about silkworms and their eating habits,

In this new effort, the researchers sought to add new properties to silk by adding carbon nanotubes and graphene to their diet.

To add the materials, the researchers sprayed a water solution containing 0.2 percent carbon nanotubes or graphene onto mulberry leaves and then fed the leaves to the silkworms. They then allowed the silkworms to make their silk in the normal way. Testing of the silks that were produced showed they could withstand approximately 50 percent more stress than traditional silk. A closer look showed that the new silk had a more orderly crystal structure than normal silk. And taking their experiments one step further, the researchers cooked the new silk at 1,050 °C, carbonizing it—the carbonized silk conducted electricity.

Here’s a link to and a citation for the paper,

Feeding Single-Walled Carbon Nanotubes or Graphene to Silkworms for Reinforced Silk Fibers by Qi Wang, Chunya Wang, Mingchao Zhang, Muqiang Jian, and Yingying Zhang. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.6b03597 Publication Date (Web): September 13, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Powering up your graphene implants so you don’t get fried in the process

A Sept. 23, 2016 news item on phys.org describes a way of making graphene-based medical implants safer,

In the future, our health may be monitored and maintained by tiny sensors and drug dispensers, deployed within the body and made from graphene—one of the strongest, lightest materials in the world. Graphene is composed of a single sheet of carbon atoms, linked together like razor-thin chicken wire, and its properties may be tuned in countless ways, making it a versatile material for tiny, next-generation implants.

But graphene is incredibly stiff, whereas biological tissue is soft. Because of this, any power applied to operate a graphene implant could precipitously heat up and fry surrounding cells.

Now, engineers from MIT [Massachusetts Institute of Technology] and Tsinghua University in Beijing have precisely simulated how electrical power may generate heat between a single layer of graphene and a simple cell membrane. While direct contact between the two layers inevitably overheats and kills the cell, the researchers found they could prevent this effect with a very thin, in-between layer of water.

A Sept. 23, 2016 MIT news release by Emily Chu, which originated the news item, provides more technical details,

By tuning the thickness of this intermediate water layer, the researchers could carefully control the amount of heat transferred between graphene and biological tissue. They also identified the critical power to apply to the graphene layer, without frying the cell membrane. …

Co-author Zhao Qin, a research scientist in MIT’s Department of Civil and Environmental Engineering (CEE), says the team’s simulations may help guide the development of graphene implants and their optimal power requirements.

“We’ve provided a lot of insight, like what’s the critical power we can accept that will not fry the cell,” Qin says. “But sometimes we might want to intentionally increase the temperature, because for some biomedical applications, we want to kill cells like cancer cells. This work can also be used as guidance [for those efforts.]”

Sandwich model

Typically, heat travels between two materials via vibrations in each material’s atoms. These atoms are always vibrating, at frequencies that depend on the properties of their materials. As a surface heats up, its atoms vibrate even more, causing collisions with other atoms and transferring heat in the process.

The researchers sought to accurately characterize the way heat travels, at the level of individual atoms, between graphene and biological tissue. To do this, they considered the simplest interface, comprising a small, 500-nanometer-square sheet of graphene and a simple cell membrane, separated by a thin layer of water.

“In the body, water is everywhere, and the outer surface of membranes will always like to interact with water, so you cannot totally remove it,” Qin says. “So we came up with a sandwich model for graphene, water, and membrane, that is a crystal clear system for seeing the thermal conductance between these two materials.”

Qin’s colleagues at Tsinghua University had previously developed a model to precisely simulate the interactions between atoms in graphene and water, using density functional theory — a computational modeling technique that considers the structure of an atom’s electrons in determining how that atom will interact with other atoms.

However, to apply this modeling technique to the group’s sandwich model, which comprised about half a million atoms, would have required an incredible amount of computational power. Instead, Qin and his colleagues used classical molecular dynamics — a mathematical technique based on a “force field” potential function, or a simplified version of the interactions between atoms — that enabled them to efficiently calculate interactions within larger atomic systems.
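To give a flavour of what a “force field” potential function means in practice, here is a generic textbook example, the Lennard-Jones pair potential, written out in Python. It is a cheap analytic stand-in for the full quantum-mechanical interaction, which is why classical molecular dynamics can handle roughly half a million atoms where density functional theory cannot; the specific potentials the MIT-Tsinghua team fitted are described in their paper, not here, and the parameters below are illustrative only.

```python
import numpy as np

# Generic classical force-field example (not the team's actual potential):
# the Lennard-Jones pair potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)
# gives the interaction energy between two atoms a distance r apart.
def lennard_jones_energy(r_nm, epsilon_eV=0.01, sigma_nm=0.34):  # illustrative parameters
    sr6 = (sigma_nm / r_nm) ** 6
    return 4.0 * epsilon_eV * (sr6 ** 2 - sr6)

distances_nm = np.linspace(0.3, 1.0, 8)
print(lennard_jones_energy(distances_nm))   # energies in eV for each separation
```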

The researchers then built an atom-level sandwich model of graphene, water, and a cell membrane, based on the group’s simplified force field. They carried out molecular dynamics simulations in which they changed the amount of power applied to the graphene, as well as the thickness of the intermediate water layer, and observed the amount of heat that carried over from the graphene to the cell membrane.

Watery crystals

Because the stiffness of graphene and biological tissue is so different, Qin and his colleagues expected that heat would conduct rather poorly between the two materials, building up steeply in the graphene before flooding and overheating the cell membrane. However, the intermediate water layer helped dissipate this heat, easing its conduction and preventing a temperature spike in the cell membrane.

Looking more closely at the interactions within this interface, the researchers made a surprising discovery: Within the sandwich model, the water, pressed against graphene’s chicken-wire pattern, morphed into a similar crystal-like structure.

“Graphene’s lattice acts like a template to guide the water to form network structures,” Qin explains. “The water acts more like a solid material and makes the stiffness transition from graphene and membrane less abrupt. We think this helps heat to conduct from graphene to the membrane side.”

The group varied the thickness of the intermediate water layer in simulations, and found that a 1-nanometer-wide layer of water helped to dissipate heat very effectively. In terms of the power applied to the system, they calculated that about a megawatt of power per meter squared, applied in tiny, microsecond bursts, was the most power that could be applied to the interface without overheating the cell membrane.
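Put in plain numbers, and using the 500-nanometer-square graphene patch from the simulation described earlier, that operating point corresponds to a vanishingly small amount of energy per burst; here is a back-of-the-envelope check:

```python
# Back-of-the-envelope conversion of the reported safe operating point.
power_density = 1.0e6        # W/m^2, about a megawatt per square meter
burst_duration = 1.0e-6      # s, a microsecond burst
patch_side = 500e-9          # m, the 500-nm-square graphene sheet in the model

energy_per_burst = power_density * burst_duration * patch_side ** 2
print(f"Energy per burst over the patch: {energy_per_burst:.1e} J")  # ~2.5e-13 J
```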

Qin says going forward, implant designers can use the group’s model and simulations to determine the critical power requirements for graphene devices of different dimensions. As for how they might practically control the thickness of the intermediate water layer, he says graphene’s surface may be modified to attract a particular number of water molecules.

“I think graphene provides a very promising candidate for implantable devices,” Qin says. “Our calculations can provide knowledge for designing these devices in the future, for specific applications, like sensors, monitors, and other biomedical applications.”

This research was supported in part by the MIT International Science and Technology Initiative (MISTI): MIT-China Seed Fund, the National Natural Science Foundation of China, DARPA [US Defense Advanced Research Projects Agency], the Department of Defense (DoD) Office of Naval Research, the DoD Multidisciplinary Research Initiatives program, the MIT Energy Initiative, and the National Science Foundation.

Here’s a link to and a citation for the paper,

Intercalated water layers promote thermal dissipation at bio–nano interfaces by Yanlei Wang, Zhao Qin, Markus J. Buehler, & Zhiping Xu. Nature Communications 7, Article number: 12854 doi:10.1038/ncomms12854 Published 23 September 2016

This paper is open access.