Monthly Archives: December 2022

Nigeria and its nanotechnology research

Agbaje Lateef (Professor of Microbiology and Head of the Nanotechnology Research Group (NANO+) at Ladoke Akintola University of Technology) has written an April 20, 2022 essay on nanotechnology in Nigeria for The Conversation that offers an overview and a plea (Note: Links have been removed),

Egypt, South Africa, Tunisia, Nigeria and Algeria lead the field in Africa. Since 2006, South Africa has been developing scientists, providing infrastructure, establishing centres of excellence, developing national policy and setting regulatory standards for nanotechnology. Companies such as Mintek, Nano South Africa, SabiNano and Denel Dynamics are applying the science.

In contrast, Nigeria’s nanotechnology journey, which started with a national initiative in 2006, has been slow. It has been dogged by uncertainties, poor funding and lack of proper coordination. Still, scientists in Nigeria have continued to place the country on the map through publications.

In addition, research clusters at the University of Nigeria, Nsukka, Ladoke Akintola University of Technology and others have organised conferences. Our research group also founded an open access journal, Nano Plus: Science and Technology of Nanomaterials.

To get an idea of how well Nigeria was performing in nanotechnology research and development, we turned to SCOPUS, an academic database.

Our analysis shows that research in nanotechnology takes place in 71 Nigerian institutions in collaboration with 58 countries. South Africa, Malaysia, India, the US and China are the main collaborators. Nigeria ranked fourth in research articles published from 2010 to 2020 after Egypt, South Africa and Tunisia.

Five institutions contributed 43.88% of the nation’s articles in this period. They were the University of Nigeria, Nsukka; Covenant University, Ota; Ladoke Akintola University of Technology, Ogbomoso; University of Ilorin; and University of Lagos.

The number of articles published by Nigerian researchers in the same decade was 645. Annual output grew from five articles in 2010 to 137 in the first half of 2020. South Africa published 2,597 and Egypt 5,441 from 2010 to 2020. The global total was 414,526 articles.

The figures show steady growth in Nigeria’s publications. But the performance is low in view of the fact that the country has the most universities in Africa.

The research performance is also low in relation to population and economy size. Nigeria produced 1.58 articles per 2 million people and 1.09 articles per US$3 billion of GDP in 2019. South Africa recorded 14.58 articles per 2 million people and 3.65 per US$3 billion. Egypt published 18.51 per 2 million people and 9.20 per US$3 billion in the same period.

There is no nanotechnology patent of Nigerian origin in the US patents office. Standards don’t exist for nano-based products. South Africa had 23 patents in five years, from 2016 to 2020.

Nigerian nanotechnology research is limited by a lack of sophisticated instruments for analysis. It is impossible to conduct meaningful research locally without foreign collaboration on instrumentation. The absence of national policy on nanotechnology and of dedicated funds also hinder research.

In February 2018, Nigeria’s science and technology minister unveiled a national steering committee on nanotechnology policy. But the policy is yet to be approved by the federal government. In September 2021, I presented a memorandum to the national council on science, technology and innovation to stimulate national discourse on nanotechnology.

Given that this essay is dated more than six months after Professor Lateef’s memorandum to the national council, I’m assuming that no action has been taken as of yet.
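As an aside, the normalized publication figures quoted above (articles per 2 million people and per US$3 billion of GDP) are simple ratios, and it's easy to sanity-check them. Here is a minimal sketch of the arithmetic in Python; the inputs are my own rough, approximate values, supplied purely for illustration and not taken from the essay.

    def publication_metrics(articles, population, gdp_usd):
        """Return (articles per 2 million people, articles per US$3 billion of GDP)."""
        per_2m_people = articles / (population / 2_000_000)
        per_3b_gdp = articles / (gdp_usd / 3_000_000_000)
        return per_2m_people, per_3b_gdp

    # Rough illustrative 2019 inputs (approximate, not from the essay): ~200 million
    # people and ~US$450 billion GDP; ~160 articles is roughly what the essay's own
    # 1.58-per-2-million figure implies for a population of that size.
    per_people, per_gdp = publication_metrics(160, 200e6, 450e9)
    print(f"{per_people:.2f} articles per 2 million people")   # ~1.60 (essay reports 1.58)
    print(f"{per_gdp:.2f} articles per US$3 billion of GDP")   # ~1.07 (essay reports 1.09)

Plugging in those rough numbers lands close to the essay's reported 1.58 and 1.09, which suggests the metrics are exactly what they sound like: straightforward per-capita and per-GDP normalizations.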

A June 2022 addition to the Nigerian nanotechnology story

Agbaje Lateef has written a June 8, 2022 essay for The Conversation about nanotechnology and the Nigerian textile industry (Note: Links have been removed),

Nigeria’s cotton production has fallen steeply in recent years. It once supported the largest textile industry in Africa. The fall is due to weak demand for cotton and to poor yields resulting from planting low-quality cottonseeds. For these reasons, farmers switched from cotton to other crops.

Nigeria’s cotton output fell from 602,400 tonnes in 2010 to 51,000 tonnes in 2020. In the 1970s and early 1980s, the country’s textile industry had 180 textile mills employing over 450,000 people, supported by about 600,000 cotton farmers. By 2019, there were 25 textile mills and 25,000 workers.

Nowadays, textiles’ properties can be greatly improved through nanotechnology – the use of extremely small materials with special properties. Nanomaterials like graphene and silver nanoparticles make textiles stronger, durable, and resistant to germs, radiation, water and fire.

Adding nanomaterials to textiles produces nanotextiles. These are often “smart” because they respond to the external environment in different ways when combined with electronics. They can be used to harvest and store energy, to release drugs, and as sensors in different applications.

Nanotextiles are increasingly used in defence and healthcare. For hospitals, they are used to produce bandages, curtains, uniforms and bedsheets with the ability to kill pathogens. The market value of nanotextiles was US$5.1 billion in 2019 and could reach US$14.8 billion in 2024.

At the moment, Nigeria is not benefiting from nanotextiles’ economic potential as it produces none. With over 216 million people, the country should be able to support its textile industry. It could also explore trading opportunities in the African Continental Free Trade Agreement to market innovative nanotextiles.

Lateef goes on to describe his research (from his June 8, 2022 essay),

Our nanotechnology research group has made the first attempt to produce nanotextiles using cotton and silk in Nigeria. We used silver and silver-titanium oxide nanoparticles produced by locust beans’ wastewater. Locust bean is a multipurpose tree legume found in Nigeria and some other parts of Africa. The seeds, the fruit pulp and the leaves are used to prepare foods and drinks.

The seeds are used to produce a local condiment called “iru” in southwest Nigeria. The processing of iru generates a large quantity of wastewater that is not useful. We used the wastewater to reduce some compounds to produce silver and silver-titanium nanoparticles in the laboratory.

Fabrics were dipped into nanoparticle solutions to make nanotextiles. Thereafter, the nanotextiles were exposed to known bacteria and fungi. The growth of the organisms was monitored to determine the ability of the nanotextiles to kill them.

The nanotextiles prevented growth of several pathogenic bacteria and black mould, making them useful as antimicrobial materials. They were active against germs even after being washed five times with detergent. Textiles without nanoparticles did not prevent the growth of microorganisms.

These studies showed that nanotextiles can kill harmful microorganisms including those that are resistant to drugs. Materials such as air filters, sportswear, nose masks, and healthcare fabrics produced from nanotextiles possess excellent antimicrobial attributes. Nanotextiles can also promote wound healing and offer resistance to radiation, water and fire.

Our studies established the value that nanotechnology can add to textiles through hygiene and disease prevention. Using nanotextiles will promote good health and well-being for sustainable development. They will help to reduce infections that are caused by germs.

Despite these benefits, nanomaterials in textiles can have some unwanted effects on the environment, health and safety. Some nanomaterials can harm human health, causing irritation when they come in contact with skin or are inhaled. Also, their release to the environment in large quantities can harm lower organisms and reduce the growth of plants. We recommend that the impacts of nanotextiles should be evaluated case by case before use.

Dear Professor Lateef, I hope you see some action on your suggestions soon and thank you for the update. Also, good luck with your nanotextiles.

‘Ghost’ nannofossils and resilience

Here are the ‘ghosts’,

Microscopic plankton cell-wall coverings preserved as “ghost” fossil impressions, pressed into the surface of ancient organic matter (183 million years old). The images show the impressions of a collapsed cell-wall covering (a coccosphere) on the surface of a fragment of ancient organic matter (left) with the individual plates (coccoliths) enlarged to show the exquisite preservation of sub-micron-scale structures (right). The blue image is inverted to give a virtual fossil cast, i.e., to show the original three-dimensional form. The original plates have been removed from the sediment by dissolution, leaving behind only the ghost imprints. S.M. Slater, P. Bown et al / Science journal

A May 19, 2022 news item on phys.org makes the announcement (Note: A link has been removed),

An international team of scientists from UCL (University College London), the Swedish Museum of Natural History, Natural History Museum (London) and the University of Florence have found a remarkable type of fossilization that has remained almost entirely overlooked until now.

The fossils are microscopic imprints, or “ghosts”, of single-celled plankton, called coccolithophores, that lived in the seas millions of years ago, and their discovery is changing our understanding of how plankton in the oceans are affected by climate change.

Coccolithophores are important in today’s oceans, providing much of the oxygen we breathe, supporting marine food webs, and locking carbon away in seafloor sediments. They are a type of microscopic plankton that surround their cells with hard calcareous plates, called coccoliths, and these are what normally fossilize in rocks.

Declines in the abundance of these fossils have been documented from multiple past global warming events, suggesting that these plankton were severely affected by climate change and ocean acidification. However, a study published today in the journal Science presents new global records of abundant ghost fossils from three Jurassic and Cretaceous warming events (94, 120 and 183 million years ago), suggesting that coccolithophores were more resilient to past climate change than was previously thought.

….

A May 20, 2022 UCL press release (also on EurekAlert but published May 19, 2022), which originated the news item, provides more detail and quotes from some very excited academics,

“The discovery of these beautiful ghost fossils was completely unexpected”, says Dr. Sam Slater from the Swedish Museum of Natural History. “We initially found them preserved on the surfaces of fossilized pollen, and it quickly became apparent that they were abundant during intervals where normal coccolithophore fossils were rare or absent – this was a total surprise!”

Despite their microscopic size, coccolithophores can be hugely abundant in the present ocean, being visible from space as cloud-like blooms. After death, their calcareous exoskeletons sink to the seafloor, accumulating in vast numbers, forming rocks such as chalk.

“The preservation of these ghost nannofossils is truly remarkable,” says Professor Paul Bown (UCL). “The ghost fossils are extremely small ‒ their length is approximately five thousandths of a millimetre, 15 times narrower than the width of a human hair! ‒ but the detail of the original plates is still perfectly visible, pressed into the surfaces of ancient organic matter, even though the plates themselves have dissolved away”.

The ghost fossils formed while the sediments at the seafloor were being buried and turned into rock. As more mud was gradually deposited on top, the resulting pressure squashed the coccolith plates and other organic remains together, and the hard coccoliths were pressed into the surfaces of pollen, spores and other soft organic matter. Later, acidic waters within spaces in the rock dissolved away the coccoliths, leaving behind just their impressions – the ghosts.

“Normally, palaeontologists only search for the fossil coccoliths themselves, and if they don’t find any then they often assume that these ancient plankton communities collapsed,” explains Professor Vivi Vajda (Swedish Museum of Natural History). “These ghost fossils show us that sometimes the fossil record plays tricks on us and there are other ways that these calcareous nannoplankton may be preserved, which need to be taken into account when trying to understand responses to past climate change”.

Professor Silvia Danise (University of Florence) says: “Ghost nannofossils are likely common in the fossil record, but they have been overlooked due to their tiny size and cryptic mode of preservation. We think that this peculiar type of fossilization will be useful in the future, particularly when studying geological intervals where the original coccoliths are missing from the fossil record”.

The study focused on the Toarcian Oceanic Anoxic Event (T-OAE), an interval of rapid global warming in the Early Jurassic (183 million years ago), caused by an increase in CO2-levels in the atmosphere from massive volcanism in the Southern Hemisphere. The researchers found ghost nannofossils associated with the T-OAE from the UK, Germany, Japan and New Zealand, but also from two similar global warming events in the Cretaceous: Oceanic Anoxic Event 1a (120 million years ago) from Sweden, and Oceanic Anoxic Event 2 (94 million years ago) from Italy.

“The ghost fossils show that nannoplankton were abundant, diverse and thriving during past warming events in the Jurassic and Cretaceous, where previous records have assumed that plankton collapsed due to ocean acidification,” explains Professor Richard Twitchett (Natural History Museum, London). “These fossils are rewriting our understanding of how the calcareous nannoplankton respond to warming events.”

Finally, Dr. Sam Slater explains: “Our study shows that algal plankton were abundant during these past warming events and contributed to the expansion of marine dead zones, where seafloor oxygen-levels were too low for most species to survive. These conditions, with plankton blooms and dead zones, may become more widespread across our globally warming oceans.”

For the curious, there is also a May 19, 2022 American Association for the Advancement of Science (AAAS) news release about this discovery in Science, the journal they publish.

Here’s a link to and a citation for the paper,

Global record of “ghost” nannofossils reveals plankton resilience to high CO2 and warming by Sam M. Slater, Paul Bown, Richard J. Twitchett, Silvia Danise, and Vivi Vajda. Science 19 May 2022 Vol 376, Issue 6595 pp. 853-856 DOI: 10.1126/science.abm7330

This paper is behind a paywall.

Fast hydrogen separation with graphene-wrapped zeolite membranes for clean energy

A May 18, 2022 news item on phys.org highlights the problem with using hydrogen as an energy source,

The effects of global warming are becoming more serious, and there is a strong demand for technological advances to reduce carbon dioxide emissions. Hydrogen is an ideal clean energy which produces water when burned. To promote the use of hydrogen energy, it is essential to develop safe, energy-saving technologies for hydrogen production and storage. Currently, hydrogen is made from natural gas, so it is not appropriate for decarbonization. Using a lot of energy to separate hydrogen would not make it qualify as clean energy.

Polymer separation membranes have the great advantage that they can be made in large sizes and can achieve high separation coefficients. However, the speed of permeation through the membrane is extremely low, and high pressure must be applied to increase the permeation speed. Therefore, a large amount of energy is required for separation using a polymer separation membrane. The goal is to create a new kind of separation membrane technology that can achieve separation speeds 50 times faster than those of conventional separation membranes.

A May 18, 2022 Shinshu University (Japan) press release on EurekAlert, which originated the news item, describes a proposed solution to the hydrogen problem,

The graphene-wrapped molecular-sieving membrane prepared in this study has a separation factor of 245 and a permeation coefficient of 5.8 x 10^6 barrers, which is more than 100 times better than that of conventional polymer separation membranes. If the size of the separation membrane is increased in the future, it is very probable that an energy-saving separation process will be established for the separation of important gases such as carbon dioxide and oxygen as well as hydrogen.

As seen in the transmission electron microscope image in Figure 1 [not shown], graphene is wrapped around the hydrophobic MFI-type zeolite crystal. The wrapping uses the principles of colloidal science to keep the graphene and zeolite crystal planes close to each other by reducing the repulsive interaction between them. About 5 layers of graphene enclose the zeolite crystals in this figure. Around the red arrow, there is a narrow interface space through which only hydrogen can permeate. Because graphene covers the hydrophobic zeolite, the structure of the zeolite crystal cannot be seen here. Since a strong attractive force acts between graphene layers, the graphene-wrapped zeolite crystals come into close contact with each other through a simple compression treatment and do not let any gas through.

Figure 2 [not shown] shows a model in which zeolite crystals wrapped with graphene are in contact with each other. The surface of the zeolite crystal has grooves derived from its structure, and there is an interfacial channel between zeolite and graphene through which hydrogen molecules can selectively permeate. In the model, the connected black circles represent graphene, with nano-windows shown as blanks in some places. Any gas can freely permeate the nano-windows, but the very narrow channels between graphene and the zeolite crystal faces allow hydrogen to permeate preferentially. This structure allows efficient separation of hydrogen and methane. At the same time, the movement of hydrogen is rapid because there are many voids between the graphene-wrapped zeolite particles. For this reason, ultra-high-speed permeation is possible while maintaining a high separation factor of 200 or more.

Figure 3 [not shown] compares the hydrogen/methane separation factor and the gas permeation coefficient of this membrane with those of previously reported separation membranes, in what is called a Robeson plot. The plot shows that this separation membrane separates hydrogen about 100 times faster than conventional separation membranes while maintaining a higher separation coefficient. The farther a membrane sits in the direction of the arrow, the better its performance. This newly developed separation membrane has, for the first time, paved the way for energy-saving separation technologies.

In addition, this separation principle differs from the conventional dissolution mechanism of polymer membranes and the pore-size-based mechanism of zeolite separation membranes: the separation target can be changed by selecting the surface structure of the zeolite or another crystal. High-speed separation of any target gas is therefore possible in principle. For this reason, if an industrial manufacturing method is developed and the separation membrane becomes scalable, the chemical, combustion and other industries could significantly improve their energy consumption, leading to a significant reduction in carbon dioxide emissions. Currently, the group is conducting research toward establishing the basic technology for rapidly producing large amounts of enriched oxygen from air. The development of enriched-oxygen manufacturing technologies would revolutionize the steel and chemical industries and even medicine.
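For readers who don't work with membranes, the two figures of merit quoted at the top of the press release have standard definitions in membrane science. These are the generic textbook definitions, not equations taken from the paper:

    % Ideal separation factor (selectivity): the ratio of the two gases' permeabilities
    \alpha_{\mathrm{H_2/CH_4}} \;=\; \frac{P_{\mathrm{H_2}}}{P_{\mathrm{CH_4}}} \;\approx\; 245

    % Permeability is commonly reported in barrer:
    1\ \mathrm{barrer} \;=\; 10^{-10}\ \frac{\mathrm{cm^3(STP)\cdot cm}}{\mathrm{cm^2\cdot s\cdot cmHg}}

In other words, the membrane lets hydrogen through roughly 245 times more readily than methane, and the 5.8 x 10^6 barrer figure describes how quickly hydrogen passes through, normalized for membrane area, thickness and the pressure difference driving it.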

The figures referenced in the press release are best seen in the context of the paper. I can show you part of Figure 1,

Caption: The connected black circles are a one-layer graphene model, and the nano-windows are shown as blanks. Hydrogen (red) permeates the gap between the graphene and the surface of the zeolite crystal. By contrast, the larger CH4 molecules can barely permeate. Credit: Copyright©2022 The Authors, License 4.0 (CC BY-NC)

For the rest of Figure 1 and more figures, here’s a link to and a citation for the paper,

Ultrapermeable 2D-channeled graphene-wrapped zeolite molecular sieving membranes for hydrogen separation by Radovan Kukobat, Motomu Sakai, Hideki Tanaka, Hayato Otsuka, Fernando Vallejos-Burgos, Christian Lastoskie, Masahiko Matsukata, Yukichi Sasaki, Kaname Yoshida, Takuya Hayashi and Katsumi Kaneko. Science Advances 18 May 2022 Vol 8, Issue 20 DOI: 10.1126/sciadv.abl3521

This paper is open access.

Art and 5G at museums in Turin (Italy)

Caption: In the framework of EU-funded project 5GTours, R1 humanoid robot tested at GAM (Turin) its ability to navigate and interact with visitors at the 20th-century collections, accompanying them to explore a selection of the museum’s most representative works, such as Andy Warhol’s “Orange car crash”. The robot has been designed and developed by IIT, while the 5G connection was set up by TIM using Ericsson technology. Credit: IIT-Istituto Italiano di Tecnologia/GAM

This May 27, 2022 Istituto Italiano di Tecnologia (IIT) press release on EurekAlert offers an intriguing view into the potential for robots in art galleries,

Robotics, 5G and art: during the month of May, visitors to Turin’s art museums, the Turin Civic Gallery of Modern and Contemporary Art (GAM) and the Turin City Museum of Ancient Art (Palazzo Madama), had the opportunity to be part of various experiments based on 5G-network technology. Interactive technologies and robots were the focus of an innovative way of enjoying the art collections, one that was greatly appreciated by the public.

Visitors to the GAM and to Palazzo Madama were provided with a number of engaging interactive experiences made possible through a significant collaboration between public and private organisations, which have been working together for more than three years to explore the potential of new 5G technology in the framework of the EU-funded project 5GTours (https://5gtours.eu/).

The demonstrations set up in Turin led to the creation of innovative applications in the tourism and culture sectors that can easily be replicated in any artistic or museum context.

In both venues, visitors had the opportunity to meet R1, the humanoid robot designed by the IIT-Istituto Italiano di Tecnologia (Italian Institute of Technology) in Genova and created to operate in domestic and professional environments, whose autonomous and remote navigation system is well integrated with the bandwidth and latency offered by a 5G connection. R1, the robot – 1 metre 25 cm in height, weighing 50 kg, made 50% from plastic and 50% from carbon fibre and metal – is able to describe the works and answer questions regarding the artist or the period in history to which the work belongs. 5G connectivity is required in order to transmit the considerable quantity of data generated by the robot’s sensors and the algorithms that handle environmental perception, autonomous navigation and dialogue to external processing systems with extremely rapid response times.

At Palazzo Madama R1 humanoid robot led a guided tour of the Ceramics Room, while at GAM it was available to visitors of the twentieth-century collections, accompanying them to explore a selection of the museum’s most representative works. R1 robot explained and responded to questions about six relevant paintings: Felice Casorati’s “Daphne a Pavarolo”, Osvaldo Lucini’s “Uccello 2”, Marc Chagall’s “Dans mon pays”, Alberto Burri’s “Sacco”, Andy Warhol’s “Orange car crash” and Mario Merz’s “Che Fare?”.

Moreover, visitors – with the use of Meta Quest visors also connected to the 5G network – were required to solve a puzzle, putting the paintings in the Guards’ Room back into their frames. With these devices, the works in the hall, which in reality cannot be touched, can be handled and moved virtually. Lastly, the visitors involved had the opportunity to visit the underground spaces of Palazzo Madama with the mini-robot Double 3, which uses the 5G network to move reactively and precisely within the narrow spaces.

At GAM, a class of students from a local school were able to remotely connect to and manoeuvre the mini-robot Double 3, located in the rooms of the twentieth-century collections, directly from their classroom. A treasure hunt was held in the museum with the participants never leaving the school.

In the Educational Area, a group of youngsters had the opportunity of collaborating in the painting of a virtual work of art on a large technological wall, drawing inspiration from works by Nicola De Maria.

The 5G network solutions created at the GAM and at Palazzo Madama by TIM [Telecom Italia] with Ericsson technology in collaboration with the City of Turin and the Turin Museum Foundation, guarantee constant high-speed transmission and extremely low latency. These solutions, which comply with 3GPP standard, are extremely flexible in terms of setting up and use. In the case of Palazzo Madama, a UNESCO World Heritage Site, tailor-made installations were designed, using apparatus and solutions that perfectly integrate with the museum spaces, while at the same time guaranteeing extremely high performance. At the GAM, the Radio Dot System has been implemented, a new 5G solution from Ericsson that is small enough to be held in the palm of a hand, and that provides network coverage and performance required for busy indoor areas. Thanks to these activities, Turin is ever increasingly playing a role as an open-air laboratory for urban innovation; since 2021 it has been the location of the “House of Emerging Technology – CTE NEXT”, a veritable centre for technology transfer via 5G and for emerging technologies coordinated by the Municipality of Turin and financed by the Ministry for Economic Development.

Through these solutions, Palazzo Madama and the GAM are now unique examples of technology in Italy and a rare example on a European level of museum buildings with full 5G coverage.

The experience was the result of the project financed by the European Union, “5G-TOURS 5G smarT mObility, media and e-health for toURists and citizenS”, the city of Turin – Department and Directorate of Innovation, in collaboration with the Department of Culture – Ericsson, TIM [Telecom Italia], the Turin Museum Foundation and the IIT-Istituto Italiano di Tecnologia (Italian Institute of Technology) of Genova, with the contribution of the international partners Atos and Samsung. The 5G coverage within the two museums was set up by TIM using Ericsson technology, with solutions that integrated perfectly with the areas within the two museum structures.

Just in case you missed the link in the press release, you can find more information about this European Union Horizon 2020-funded 5G project here at 5G TOURS (SmarT mObility, media and e-health for toURists and citizenS). You can find out more about the grant (e.g., the project sunset in July 2022) here.

US announces fusion energy breakthrough

Nice to learn of this news, which is on the CBC (Canadian Broadcasting Corporation) news online website. From a December 13, 2022 news item provided by Associated Press (Note: the news item was updated to include a general description and some Canadian content at about 12 pm PT),

Researchers at the Lawrence Livermore National Laboratory in California for the first time produced more energy in a fusion reaction than was used to ignite it, [emphasis mine] something called net energy gain, the Energy Department said.

Peter Behr’s December 13, 2022 article on Politico.com about the US Department of Energy’s big announcement also breaks the news,

The Department of Energy announced Tuesday [December 12, 2022] that its scientists have produced the first ever fusion reaction that yielded more energy than the reaction required, an essential step in the long path toward commercial fusion power, officials said.

The experiment Dec. 5 [2022], at the Lawrence Livermore National Laboratory in California, took a few billionths of a second. But laboratory leaders said today that it demonstrated for the first time that sustained fusion power is possible.

Behr explains what nuclear fusion is but first he touches on why scientists are so interested in the process, from his December 13, 2022 article,

In theory, nuclear fusion could produce massive amounts of energy without producing long-lasting radioactive waste, or posing the risk of meltdowns. That’s unlike nuclear fission, which powers today’s reactors.

Fission results when radioactive atoms — most commonly uranium — are split by neutrons in controlled chain reactions, creating lighter atoms and large amounts of radiation and energy to produce electric power.

Fusion is the opposite process. In the most common approach, swirling hydrogen isotopes are forced together under tremendous heat to create helium and energy for power generation. This is the same process that powers the sun and other stars. But scientists have been trying since the mid-20th century to find a way to use it to generate power on Earth.

There are two main approaches to making fusion happen and I found a description for them in an October 2022 article about a local company, General Fusion, by Nelson Bennett for Business in Vancouver magazine (paper version),

Most fusion companies are pursuing one of two approaches: Magnet [sic] or inertial confinement. General Fusion is one of the few that is taking a more hybrid approach: magnetic confinement with pulse compression.

Fusion occurs when smaller nuclei are fused together under tremendous force into larger nuclei, with a release of energy occurring in the form of neutrons. It’s what happens to stars when gravitational force creates extreme heat that turns on the fusion engine.

Replicating that in a machine requires some form of confinement to squeeze plasma (a kind of super-hot fog of unbound positive and negative particles) to the point where nuclei fuse.

One approach is inertial confinement, in which lasers are focused on a small capsule of heavy hydrogen fuel (deuterium and tritium) to create ignition. This takes a tremendous amount of energy, and the challenge for all fusion efforts is to get a sustained ignition that produces more energy than it takes to get ignition, called net energy gain.

The other main approach is magnetic confinement, using powerful magnets in a machine called a tokamak to contain and squeeze plasma into a donut-shaped form called a torus.

General Fusion uses magnets to confine the plasma, but to get ignition it uses pistons arrayed around a spherical chamber to fire synchronously to essentially collapse the plasma on itself and spark ignition.

General Fusion’s machine uses liquid metal spinning inside a chamber that acts as a protective barrier between the hot plasma and the machine: basically a sphere of plasma contained within a sphere of liquid metal. This protects the machine from damage.

The temperatures generated in fusion (up to 150 million degrees Celsius) are five to six times hotter than the core of the sun, and can destroy the machines that produce them. This makes durability a big challenge in any machine.

The Lawrence Livermore National Laboratory (LLNL) issued a December 13, 2022 news release, which provides more detail about their pioneering work (Note: I have changed the order of the paragraphs, but all of this is from the news release),

Fusion is the process by which two light nuclei combine to form a single heavier nucleus, releasing a large amount of energy. In the 1960s, a group of pioneering scientists at LLNL hypothesized that lasers could be used to induce fusion in a laboratory setting. Led by physicist John Nuckolls, who later served as LLNL director from 1988 to 1994, this revolutionary idea became inertial confinement fusion, kicking off more than 60 years of research and development in lasers, optics, diagnostics, target fabrication, computer modeling and simulation and experimental design.

To pursue this concept, LLNL built a series of increasingly powerful laser systems, leading to the creation of NIF [National Ignition Facility], the world’s largest and most energetic laser system. NIF — located at LLNL in Livermore, California — is the size of a sports stadium and uses powerful laser beams to create temperatures and pressures like those in the cores of stars and giant planets, and inside exploding nuclear weapons.

LLNL’s experiment surpassed the fusion threshold by delivering 2.05 megajoules (MJ) of energy to the target, resulting in 3.15 MJ of fusion energy output, demonstrating for the first time a most fundamental science basis for inertial fusion energy (IFE). Many advanced science and technology developments are still needed to achieve simple, affordable IFE to power homes and businesses, and DOE is currently restarting a broad-based, coordinated IFE program in the United States. Combined with private-sector investment, there is a lot of momentum to drive rapid progress toward fusion commercialization.
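Put the two numbers from that last paragraph together and you get the target gain everyone is excited about; this is just my own arithmetic spelled out, not a figure from the news release:

    G_{\text{target}} \;=\; \frac{E_{\text{fusion out}}}{E_{\text{laser in}}} \;=\; \frac{3.15\ \mathrm{MJ}}{2.05\ \mathrm{MJ}} \;\approx\; 1.5

It's worth noting that this gain counts only the laser energy delivered to the target; the electricity drawn from the grid to power the laser system itself was far larger, which is one reason commercial fusion power remains a long way off.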

If you want to see some really excited comments from scientists just read the LLNL’s December 13, 2022 news release. Even the news release’s banner is exuberant,

Behr peers into the future of fusion energy, from his December 13, 2022 article,

Fearful that China might wind up dominating fusion energy in the second half of this century, Congress in 2020 told DOE [Department of Energy] to begin funding development of a utility-scale fusion pilot plant that could deliver at least 50 megawatts of power to the U.S. grid.

In September [2022], DOE invited private companies to apply for an initial $50 million in research grants to help fund development of detailed pilot plant plans.

“We’re seeking strong partnerships between DOE and the private sector,” a senior DOE official told POLITICO’s E&E News recently. The official was not willing to speak on the record, saying the grant process is ongoing and confidential.

As the competition proceeds, DOE will set technical milestones or requirements, challenging the teams to show how critical engineering challenges will be overcome. DOE’s goal is “hopefully to enable a fusion pilot to operate in the early 2030s,” the official added.

At least 15 U.S. and foreign fusion companies have submitted requests for an initial total of $50 million in pilot plant grants, and some of them are pursuing the laser-ignition fusion process that Lawrence Livermore has pioneered, said Holland. He did not name the companies because the competition is confidential.

I wonder if General Fusion, whose CEO (Chief Executive Officer) Greg Twinney declared, “Commercializing fusion energy is within reach, and General Fusion is ready to deliver it to the grid by the 2030s …” (in a December 12, 2022 company press release), is part of the US competition.

I noticed that General Fusion lists this at the end of the press release,

… Founded in 2002, we are headquartered in Vancouver, Canada, with additional centers co-located with internationally recognized fusion research laboratories near London, U.K., and Oak Ridge, Tennessee, U.S.A.

The Oak Ridge National Laboratory (ORNL), like the LLNL, is a US Department of Energy research facility.

As for General Fusion’s London connection, I have more about that in my October 28, 2022 posting “Overview of fusion energy scene,” which includes General Fusion’s then latest news about a commercialization agreement with the UKAEA (UK Atomic Energy Authority) and a ‘fusion’ video by rapper Baba Brinkman along with the overview.

400 nm thick glucose fuel cell uses body’s own sugar

This May 12, 2022 news item on Nanowerk reminds me of bioenergy harvesting (using the body’s own processes rather than batteries to power implants),

Glucose is the sugar we absorb from the foods we eat. It is the fuel that powers every cell in our bodies. Could glucose also power tomorrow’s medical implants?

Engineers at MIT [Massachusetts Institute of Technology] and the Technical University of Munich think so. They have designed a new kind of glucose fuel cell that converts glucose directly into electricity. The device is smaller than other proposed glucose fuel cells, measuring just 400 nanometers thick. The sugary power source generates about 43 microwatts per square centimeter of electricity, achieving the highest power density of any glucose fuel cell to date under ambient conditions.

Caption: Silicon chip with 30 individual glucose micro fuel cells, seen as small silver squares inside each gray rectangle. Credit Image: Kent Dayton

A May 12, 2022 MIT news release (also on EurekAlert) by Jennifer Chu, which originated the news item, describes the technology in more detail (Note: A link has been removed),

The new device is also resilient, able to withstand temperatures up to 600 degrees Celsius. If incorporated into a medical implant, the fuel cell could remain stable through the high-temperature sterilization process required for all implantable devices.

The heart of the new device is made from ceramic, a material that retains its electrochemical properties even at high temperatures and miniature scales. The researchers envision the new design could be made into ultrathin films or coatings and wrapped around implants to passively power electronics, using the body’s abundant glucose supply.

“Glucose is everywhere in the body, and the idea is to harvest this readily available energy and use it to power implantable devices,” says Philipp Simons, who developed the design as part of his PhD thesis in MIT’s Department of Materials Science and Engineering (DMSE). “In our work we show a new glucose fuel cell electrochemistry.”

“Instead of using a battery, which can take up 90 percent of an implant’s volume, you could make a device with a thin film, and you’d have a power source with no volumetric footprint,” says Jennifer L.M. Rupp, Simons’ thesis supervisor and a DMSE visiting professor, who is also an associate professor of solid-state electrolyte chemistry at Technical University Munich in Germany.

Simons and his colleagues detail their design today in the journal Advanced Materials. Co-authors of the study include Rupp, Steven Schenk, Marco Gysel, and Lorenz Olbrich.

A “hard” separation

The inspiration for the new fuel cell came in 2016, when Rupp, who specializes in ceramics and electrochemical devices, went to take a routine glucose test toward the end of her pregnancy.

“In the doctor’s office, I was a very bored electrochemist, thinking what you could do with sugar and electrochemistry,” Rupp recalls. “Then I realized, it would be good to have a glucose-powered solid state device. And Philipp and I met over coffee and wrote out on a napkin the first drawings.”

The team is not the first to conceive of a glucose fuel cell, which was initially introduced in the 1960s and showed potential for converting glucose’s chemical energy into electrical energy. But glucose fuel cells at the time were based on soft polymers and were quickly eclipsed by lithium-iodide batteries, which would become the standard power source for medical implants, most notably the cardiac pacemaker.

However, batteries have a limit to how small they can be made, as their design requires the physical capacity to store energy.

“Fuel cells directly convert energy rather than storing it in a device, so you don’t need all that volume that’s required to store energy in a battery,” Rupp says.

In recent years, scientists have taken another look at glucose fuel cells as potentially smaller power sources, fueled directly by the body’s abundant glucose.

A glucose fuel cell’s basic design consists of three layers: a top anode, a middle electrolyte, and a bottom cathode. The anode reacts with glucose in bodily fluids, transforming the sugar into gluconic acid. This electrochemical conversion releases a pair of protons and a pair of electrons. The middle electrolyte acts to separate the protons from the electrons, conducting the protons through the fuel cell, where they combine with air to form molecules of water — a harmless byproduct that flows away with the body’s fluid. Meanwhile, the isolated electrons flow to an external circuit, where they can be used to power an electronic device.

The team looked to improve on existing materials and designs by modifying the electrolyte layer, which is often made from polymers. But polymer properties, along with their ability to conduct protons, easily degrade at high temperatures, are difficult to retain when scaled down to the dimension of nanometers, and are hard to sterilize. The researchers wondered if a ceramic — a heat-resistant material which can naturally conduct protons — could be made into an electrolyte for glucose fuel cells.

“When you think of ceramics for such a glucose fuel cell, they have the advantage of long-term stability, small scalability, and silicon chip integration,” Rupp notes. “They’re hard and robust.”

Peak power

The researchers designed a glucose fuel cell with an electrolyte made from ceria, a ceramic material that possesses high ion conductivity, is mechanically robust, and as such, is widely used as an electrolyte in hydrogen fuel cells. It has also been shown to be biocompatible.

“Ceria is actively studied in the cancer research community,” Simons notes. “It’s also similar to zirconia, which is used in tooth implants, and is biocompatible and safe.”

The team sandwiched the electrolyte with an anode and cathode made of platinum, a stable material that readily reacts with glucose. They fabricated 150 individual glucose fuel cells on a chip, each about 400 nanometers thin, and about 300 micrometers wide (about the width of 30 human hairs). They patterned the cells onto silicon wafers, showing that the devices can be paired with a common semiconductor material. They then measured the current produced by each cell as they flowed a solution of glucose over each wafer in a custom-fabricated test station.

They found many cells produced a peak voltage of about 80 millivolts. Given the tiny size of each cell, this output is the highest power density of any existing glucose fuel cell design.

“Excitingly, we are able to draw power and current that’s sufficient to power implantable devices,” Simons says.

“It is the first time that proton conduction in electroceramic materials can be used for glucose-to-power conversion, defining a new type of electrochemistry,” Rupp says. “It extends the material use-cases from hydrogen fuel cells to new, exciting glucose-conversion modes.”
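For anyone who wants the electrochemistry described above spelled out, the scheme (glucose oxidized to gluconic acid at the anode, protons crossing the electrolyte, water forming at the cathode) corresponds to the textbook half-reactions for a direct glucose fuel cell. This is my sketch of that generic chemistry, not a set of equations taken from the paper:

    % Anode: glucose is oxidized to gluconic acid, releasing two protons and two electrons
    \mathrm{C_6H_{12}O_6 + H_2O \;\longrightarrow\; C_6H_{12}O_7 + 2\,H^+ + 2\,e^-}

    % Cathode: the protons that crossed the ceramic electrolyte combine with oxygen to form water
    \mathrm{\tfrac{1}{2}\,O_2 + 2\,H^+ + 2\,e^- \;\longrightarrow\; H_2O}

The electrons, which cannot pass through the electrolyte, travel around the external circuit instead, and that current is what would power an implant.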

Here’s a link to and a citation for the paper,

A Ceramic-Electrolyte Glucose Fuel Cell for Implantable Electronics by Philipp Simons, Steven A. Schenk, Marco A. Gysel, Lorenz F. Olbrich, Jennifer L. M. Rupp. Advanced Materials https://doi.org/10.1002/adma.202109075 First published: 05 April 2022

This paper is open access.

Building Transdisciplinary Research Paths [for a] Sustainable & Inclusive Future, a December 14, 2022 science policy event

I received (via email) a December 8, 2022 Canadian Science Policy Centre (CSPC) announcement about their various doings when this event, which seems a little short on information, caught my attention,

[Building Transdisciplinary Research Paths towards a more Sustainable and Inclusive Future]

Upcoming Virtual Event

With this workshop, Belmont Forum and IAI aim to open a collective reflection on the ideas and practices around ‘Transdisciplinarity’ (TD) to foster participatory knowledge production. Our goal is to create a safe environment for people to share their impressions about TD, as a form of experimental lab based on a culture of collaboration.

This CSPC event page cleared up a few questions,

Building Transdisciplinary Research Paths towards a more Sustainable and Inclusive Future

Global environmental change and sustainability require engagement with civil society and wide participation to gain social legitimacy; it is also necessary to open up cooperation among different scientific disciplines, borderless collaboration, and collaborative learning processes, among other crucial issues.

Those efforts have been recurrently encompassed by the idea of ‘Transdisciplinarity’ (TD), which is a fairly new word and evolving concept. Several of those characteristics are daily practices in academic and non-academic communities, sometimes under different words or conceptions.

With this workshop, Belmont Forum and IAI [Inter-American Institute for Global Change Research?] aim to open a collective reflection on the ideas and practices around ‘Transdisciplinarity’ (TD) to foster participatory knowledge production. Our goal is to create a safe environment for people to share their impressions about TD, as a form of experimental lab based on a culture of collaboration.

Date: Dec 14 [2022]

Time: 3:00 pm – 4:00 pm EST

Website [Register here]: https://us02web.zoom.us/meeting/register/tZArcOCupj4rHdBbwhSUpVhpvPuou5kNlZId

For the curious, here’s more about the Belmont Forum from their About page, Note: Links have been removed,

Established in 2009, the Belmont Forum is a partnership of funding organizations, international science councils, and regional consortia committed to the advancement of transdisciplinary science. Forum operations are guided by the Belmont Challenge, a vision document that encourages:

International transdisciplinary research providing knowledge for understanding, mitigating and adapting to global environmental change.

Forum members and partner organizations work collaboratively to meet this Challenge by issuing international calls for proposals, committing to best practices for open data access, and providing transdisciplinary training.  To that end, the Belmont Forum is also working to enhance the broader capacity to conduct transnational environmental change research through its e-Infrastructure and Data Management initiative.

Since its establishment, the Forum has successfully led 19 calls for proposals, supporting 134 projects and more than 1,000 scientists and stakeholders, representing over 90 countries.  Themes addressed by CRAs have included Freshwater Security, Coastal Vulnerability, Food Security and Land Use Change, Climate Predictability and Inter-Regional Linkages, Biodiversity and Ecosystem Services, Arctic Observing and Science for Sustainability, and Mountains as Sentinels of Change.  New themes are developed through a scoping process and made available for proposals through the Belmont Forum website and its BF Grant Operations site.

If you keep scrolling down the Belmont Forum’s About page, you’ll find an impressive list of partners including the Natural Sciences and Engineering Research Council of Canada (NSERC).

I’m pretty sure IAI is Inter-American Institute for Global Change Research, given that two of the panelists come from that organization. Here’s more about the IAI from their About Us page, Note: Links have been removed,

Humans have affected practically all ecosystems on earth. Over the past 200 years, mankind’s emissions of greenhouse gases into the Earth’s atmosphere have changed its radiative properties and are causing a rise in global temperatures which is now modifying Earth system functions globally. As a result, the 21st-century is faced with environmental changes from local to global scales that require large efforts of mitigation and adaptation by societies and ecosystems. The causes and effects, problems and solutions of global change interlink biogeochemistry, Earth system functions and socio-economic conditions in increasingly complex ways. To guide efforts of mitigation and adaptation to global change and aid policy decisions, scientific knowledge now needs to be generated in broad transdisciplinary ways that address the needs of knowledge users and also provide profound understanding of complex socio-environmental systems.

To address these knowledge needs, 12 nations of the American continent came together in Montevideo, Uruguay, in 1992 to establish the Inter-American Institute for Global Change Research (IAI). The 12 governments, in the Declaration of Montevideo, called for the Institute to develop the best possible international coordination of scientific and economic research on the extent, causes, and consequences of global change in the Americas.

Sixteen governments signed the resulting Agreement Establishing the IAI which laid the  foundation for the IAI’s function as a regional intergovernmental organization that promotes interdisciplinary scientific research and capacity building to inform decision-makers on the continent and beyond. Since the establishment of the Agreement in 1992, 3 additional nations have acceded the treaty, and the IAI has now 19 Parties in the Americas, which come together once every year in the Conference of the Parties to monitor and direct the IAI’s activities.

Now onto the best part, reading about the panelists (from CSPC event page, scroll down and click on the See bio button), Note: I have made some rough changes to the formatting so that the bios match each other more closely,

Dr. Lily House-Peters is Associate Professor in the Department of Geography at California State University, Long Beach. Dr. House-Peters is a broadly trained human-environment geographer with experience in qualitative and quantitative approaches to human dimensions of environmental change, water security, mining and extraction, and natural resource conservation policy. She has a decade of experience and expertise in transdisciplinary research for action-oriented solutions to global environmental change. She is currently part of a team creating a curriculum for global change researchers in the Americas focused on the drivers and barriers of effective transdisciplinary collaboration and processes of integration and convergence in diverse teams.

Dr. Gabriela Alonso Yanez, Associate Professor, Werklund School of Education University of Calgary. Learning and education in the context of sustainability and global change are the focus of my work. Over the last ten years, I have participated in several collaborative research projects with multiple aims, including building researchers’ and organizations’ capacity for collaboration and engaging networks that include knowledge keepers, local community members and academics in co-designing and co-producing solutions-oriented knowledge.

Marshalee Valentine, MSc, BTech. Marshalee Valentine is Co-founder and Vice President of the International Women’s Coffee Alliance Jamaica (IWCA), a charitable organization responsible for the development and implementation of social impact and community development projects geared towards improving the livelihoods of women along the coffee value chain in Jamaica. In addition, she also owns and operates a Quality, Food Safety and Environmental Management Systems Consultancy. Her areas of expertise include: process improvement, technology and innovation transfer methods, capacity building and community-based research.

With a background in Agriculture, she holds a Bachelor of Technology in Environmental Sciences and a Master’s Degree in Environmental Management. Marshalee offers a unique perspective for regional authenticity bringing deep sensibility to issues of gender, equity and inclusion, in particular related to GEC issues in small countries.

Fany Ramos Quispe, Science Technology and Policy Fellow, Inter-American Institute for Global Change Research. Fany Ramos Quispe holds a B.S. in Environmental Engineering from the Polytechnic Institute of Mexico, and an MSc. in Environmental Change and International Development from the University of Sheffield in the United Kingdom. She worked with a variety of private and public organizations at the national and international levels. She has experience on projects related to renewable energies, waste and water management, environmental education, climate change, and inter and transdisciplinary research, among others. After her postgraduate studies, she joined the Bolivian government mainly to support international affairs related to climate change at the Plurinational Authority of Mother Earth, afterwards, she joined the Centre for Social Research of the Vicepresidency as a Climate Change Specialist.

For several years now she combined academic and professional activities with side projects and activism for environmental and educational issues. She is a founder and former chair (2019-2020) of the environmental engineers’ society of La Paz and collaborates with different grassroots organizations.

Fany is a member of OWSD Bolivia [Organization for Women in Science for the Developing World {OWSD}] and current IAI Science, Technology and Policy fellow at the Belmont Forum.

Dr. Laila Sandroni, Science Technology and Policy Fellow, InterAmerican Institute for Global Change Research. Laila Sandroni is an Anthropologist and Geographer with experience in transdisciplinary research in social sciences. Her research interests lie in the field of transformations to sustainability and the role of different kinds of knowledge in defining the best paths to achieve biodiversity conservation and forest management. She has particular expertise in epistemology, power-knowledge relations, and evidence-based policy in environmental issues.

Laila has a longstanding involvement with stakeholders working on different paths towards biodiversity conservation. She has experience in transdisciplinary science and participatory methodologies to encompass plural knowledge on the management of protected areas in tropical rainforests in Brazil.

This event seems to be free and it looks like an exciting panel.

Unexpectedly, they don’t have a male participant amongst the panelists. Outside of groups that are explicitly exploring women’s issues in the sciences, I’ve never before seen a science panel composed entirely of women. As well, the organizers seem to have broadened the range of geographies represented at a Canadian event with a researcher who has experience in Brazil, another with experience in Bolivia, a panelist who works in Jamaica, and two academics who focus on the Americas (South, Central, and North).

Transdisciplinarity and other disciplinarities

There are so many (crossdisciplinarity, multidisciplinarity, interdisciplinarity, and transdisciplinarity) that the whole subject gets a little confusing. Jeffrey Evans’ July 29, 2014 post on the Purdue University (Indiana, US) Polytechnic Institute blog answers questions about three (trans-, multi-, and inter-) of the disciplinarities,

Learners entering the Polytechnic Incubator’s new program will no doubt hear the terms “multidisciplinary (arity)” and “interdisciplinary (arity)” thrown about somewhat indiscriminately. Interestingly, we administrators, faculty, and staff also use these terms rather loosely and too often without carefully considering their underlying meaning.

Recently I gave a talk about yet another disciplinarity: “transdisciplinarity.” The purpose of the talk was to share with colleagues from around the country the opportunities and challenges associated with developing a truly transdisciplinary environment in an institution of higher education. During a meeting after I returned, the terms “multi”, “inter”, and “trans” disciplinary(arity) were being thrown around, and it was clear that the meanings of the terms were not clearly understood. Hopefully this blog entry will help shed some light on the subject. …

First, I am not an expert in the various “disciplinarities.” The ideas and descriptions that follow are not mine and have been around for decades, with many books and articles written on the subject. Yet my Polytechnic Incubator colleagues and I believe in these ideas and in their advantages and constraints, and they serve to motivate the design of the Incubator’s transdisciplinary environment.

In 1992, Hugh G. Petrie wrote a seminal article1 for the American Educational Research Association that articulates the meaning of these ideas. Later, in 2007, A. Wendy Russell, Fern Wickson, and Anna L. Carew contributed an article2 discussing the context of transdisciplinarity, prescriptions for transdisciplinary knowledge production and the contradictions that arise, and suggestions for universities to develop capacity for transdisciplinarity, rather than simply investing in knowledge “products.” …

Multidisciplinarity

Petrie [1] discusses multidisciplinarity as “the idea of a number of disciplines working together on a problem, an educational program, or a research study. The effect is additive rather than integrative. The project is usually short-lived, and there is seldom any long-term change in the ways in which the disciplinary participants in a multidisciplinary project view their own work.”

Interdisciplinarity

Moving to extend the idea of multidisciplinarity to include more integration, rather than just addition, Petrie writes about interdisciplinarity in this way:

“Interdisciplinary research or education typically refers to those situations in which the integration of the work goes beyond the mere concatenation of disciplinary contributions. Some key elements of disciplinarians’ use of their concepts and tools change. There is a level of integration. Interdisciplinary subjects in university curricula such as physical chemistry or social psychology, which by now have, perhaps, themselves become disciplines, are good examples. A newer one might be the field of immunopharmacology, which combines the work of bacteriology, chemistry, physiology, and immunology. Another instance of interdisciplinarity might be the emerging notion of a core curriculum that goes considerably beyond simple distribution requirements in undergraduate programs of general education.”

Transdisciplinarity

Petrie [1] writes about transdisciplinarity in this way: “The notion of transdisciplinarity exemplifies one of the historically important driving forces in the area of interdisciplinarity, namely, the idea of the desirability of the integration of knowledge into some meaningful whole. The best example, perhaps, of the drive to transdisciplinarity might be the early discussions of general systems theory when it was being held forward as a grand synthesis of knowledge. Marxism, structuralism, and feminist theory are sometimes cited as examples of a transdisciplinary approach. Essentially, this kind of interdisciplinarity represents the impetus to integrate knowledge, and, hence, is often characterized by a denigration and repudiation of the disciplines and disciplinary work as essentially fragmented and incomplete.”

It seems multidisciplinarity could be viewed as an ad hoc approach, whereas interdisciplinarity and transdisciplinarity are intimately related, with ‘inter-’ being a subset of ‘trans-’.

I think that’s enough for now. Should I ever stumble across a definition for crossdisciplinarity, I will endeavour to add it here.

Why can’t they produce graphene at usable (industrial) scales?

Kevin Wyss, a PhD chemistry student at Rice University, has written an explanation of why graphene is not produced in quantities that make it usable in industry in his November 29, 2022 essay for The Conversation (h/t Nov. 29, 2022 phys.org news item), Note: Links have been removed from the following,

“Future chips may be 10 times faster, all thanks to graphene”; “Graphene may be used in COVID-19 detection”; and “Graphene allows batteries to charge 5x faster” – those are just a handful of recent dramatic headlines lauding the possibilities of graphene. Graphene is an incredibly light, strong and durable material made of a single layer of carbon atoms. With these properties, it is no wonder researchers have been studying ways that graphene could advance material science and technology for decades.

Graphene is a fascinating material, just as the sensational headlines suggest, but it is only just starting to be used in real-world applications. The problem lies not in graphene’s properties, but in the fact that it is still incredibly difficult and expensive to manufacture at commercial scales.

Wyss highlights the properties that make graphene so attractive, from the November 29, 2022 essay (Note: Links have been removed from the following),

… The material can be used to create flexible electronics and to purify or desalinate water. And adding just 0.03 ounces (1 gram) of graphene to 11.5 pounds (5 kilograms) of cement increases the strength of the cement by 35%.

As of late 2022, Ford Motor Co., with which I worked as part of my doctoral research, is one of the only companies to use graphene at industrial scales. Starting in 2018, Ford began making plastic for its vehicles that was 0.5% graphene – increasing the plastic’s strength by 20%.
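To put those loadings in perspective, here is a quick back-of-the-envelope calculation of my own (not part of Wyss’s essay) that works out the graphene weight fractions quoted above:

```python
# Back-of-the-envelope arithmetic for the graphene loadings quoted above.
# (My own illustration, not drawn from Wyss's essay.)

graphene_in_cement_g = 1.0      # 1 gram of graphene ...
cement_g = 5_000.0              # ... added to 5 kilograms of cement
cement_fraction = graphene_in_cement_g / (cement_g + graphene_in_cement_g)

ford_plastic_fraction = 0.005   # Ford's plastic is quoted as 0.5% graphene

print(f"Graphene in cement:  {cement_fraction:.3%} by weight (~35% stronger)")
print(f"Graphene in plastic: {ford_plastic_fraction:.1%} by weight (~20% stronger)")
```

In other words, a loading of roughly 0.02% by weight is enough to strengthen cement by about a third, which is part of why a little graphene goes such a long way whenever it can actually be made.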

There are two ways of producing graphene as Wyss notes in his November 29, 2022 essay (Note: Links have been removed from the following),

Top-down synthesis [emphasis mine], also known as graphene exfoliation, works by peeling off the thinnest possible layers of carbon from graphite. Some of the earliest graphene sheets were made by using cellophane tape to peel off layers of carbon from a larger piece of graphite.

The problem is that the molecular forces holding graphene sheets together in graphite are very strong, and it’s hard to pull sheets apart. Because of this, graphene produced using top-down methods is often many layers thick, has holes or deformations, and can contain impurities. Factories can produce a few tons of mechanically or chemically exfoliated graphene per year, and for many applications – like mixing it into plastic – the lower-quality graphene works well.

Bottom-up synthesis [emphasis mine] builds the carbon sheets one atom at a time over a few hours. This process – called vapor deposition – allows researchers to produce high-quality graphene that is one atom thick and up to 30 inches across. This yields graphene with the best possible mechanical and electrical properties. The problem is that with a bottom-up synthesis, it can take hours to make even 0.00001 gram – not nearly fast enough for any large scale uses like in flexible touch-screen electronics or solar panels, for example.

Current production methods of graphene, both top-down and bottom-up, are expensive as well as energy and resource intensive, and simply produce too little product, too slowly.
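The scale gap is easier to appreciate with a rough calculation. The following sketch is my own arithmetic, using the 0.00001 gram figure quoted above and assuming, purely for illustration, that one vapour deposition run takes about three hours:

```python
# Rough illustration of the bottom-up (vapour deposition) scale problem.
# My own arithmetic: the ~0.00001 g yield is quoted above; the 3-hour run
# length is an assumption made only for this example.

grams_per_run = 1e-5        # high-quality graphene produced per run
hours_per_run = 3.0         # assumed run length ("a few hours")

target_grams = 1_000.0      # how long to make just 1 kilogram?
runs_needed = target_grams / grams_per_run
years_needed = runs_needed * hours_per_run / (24 * 365)

print(f"Runs needed for 1 kg: {runs_needed:,.0f}")
print(f"Time on a single reactor: {years_needed:,.0f} years")
```

At that rate a single reactor would need tens of thousands of years to produce one kilogram, which is why bottom-up graphene remains a laboratory material while lower-quality exfoliated graphene is what actually reaches applications like Ford’s plastics.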

Wyss has written an informative essay and, for those who need it, he has included an explanation of the substance known as graphene.

Swiss researchers, memristors, perovskite crystals, and neuromorphic (brainlike) computing

A May 18, 2022 news item on Nanowerk highlights research into making memristors more ‘flexible’ (Note: There’s an almost identical May 18, 2022 news item on ScienceDaily but the issuing agency is listed as ETH Zurich rather than Empa as listed on Nanowerk),

Compared with computers, the human brain is incredibly energy-efficient. Scientists are therefore drawing on how the brain and its interconnected neurons function for inspiration in designing innovative computing technologies. They foresee that these brain-inspired computing systems will be more energy-efficient than conventional ones, as well as better at performing machine-learning tasks.

Much like neurons, which are responsible for both data storage and data processing in the brain, scientists want to combine storage and processing in a single type of electronic component, known as a memristor. Their hope is that this will help to achieve greater efficiency because moving data between the processor and the storage, as conventional computers do, is the main reason for the high energy consumption in machine-learning applications.

Researchers at ETH Zurich, Empa and the University of Zurich have now developed an innovative concept for a memristor that can be used in a far wider range of applications than existing memristors.

“There are different operation modes for memristors, and it is advantageous to be able to use all these modes depending on an artificial neural network’s architecture,” explains ETH Zurich postdoc Rohit John. “But previous conventional memristors had to be configured for one of these modes in advance.”

The new memristors can now easily switch between two operation modes while in use: a mode in which the signal grows weaker over time and dies (volatile mode), and one in which the signal remains constant (non-volatile mode).

Once you get past the first two paragraphs in the Nanowerk news item, you’ll find that the ETH Zurich and Empa May 18, 2022 press releases by Fabio Bergamin are identical (ETH is listed as the authoring agency on EurekAlert). Note: A link has been removed in the following,

Just like in the brain

“These two operation modes are also found in the human brain,” John says. On the one hand, stimuli at the synapses are transmitted from neuron to neuron with biochemical neurotransmitters. These stimuli start out strong and then gradually become weaker. On the other hand, new synaptic connections to other neurons form in the brain while we learn. These connections are longer-lasting.

John, who is a postdoc in the group headed by ETH Professor Maksym Kovalenko, was awarded an ETH fellowship for outstanding postdoctoral researchers in 2020. John conducted this research together with Yiğit Demirağ, a doctoral student in Professor Giacomo Indiveri’s group at the Institute for Neuroinformatics of the University of Zurich and ETH Zurich.

Semiconductor known from solar cells

The memristors the researchers have developed are made of halide perovskite nanocrystals, a semiconductor material known primarily from its use in photovoltaic cells. “The ‘nerve conduction’ in these new memristors is mediated by temporarily or permanently stringing together silver ions from an electrode to form a nanofilament penetrating the perovskite structure through which current can flow,” explains Kovalenko.

This process can be regulated to make the silver-ion filament either thin, so that it gradually breaks back down into individual silver ions (volatile mode), or thick and permanent (non-volatile mode). This is controlled by the intensity of the current conducted on the memristor: applying a weak current activates the volatile mode, while a strong current activates the non-volatile mode.
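The press release describes the switching qualitatively; the toy model below is my own sketch (not the ETH Zurich/Empa device physics, and with entirely made-up threshold and decay values) of what the two modes look like in practice: a weak programming current builds a thin filament whose conductance fades, while a strong current builds a thick filament that holds its conductance.

```python
import math

# Toy model of the two memristor modes described above. This is my own
# illustrative sketch; the threshold and decay constant are invented.

THRESHOLD_UA = 10.0   # assumed programming-current threshold (microamps)
DECAY_TAU_S = 0.5     # assumed decay time constant for a thin filament

def conductance(program_current_ua: float, t_seconds: float) -> float:
    """Conductance (arbitrary units) t seconds after a programming pulse."""
    g0 = program_current_ua                   # thicker filament, higher g0
    if program_current_ua < THRESHOLD_UA:     # weak pulse: volatile mode
        return g0 * math.exp(-t_seconds / DECAY_TAU_S)
    return g0                                 # strong pulse: non-volatile mode

for label, current in [("volatile (2 uA pulse)", 2.0),
                       ("non-volatile (50 uA pulse)", 50.0)]:
    print(label, [round(conductance(current, t), 3) for t in (0.0, 1.0, 5.0)])
```

The point of the sketch is simply that one device can behave like a fading synapse or a permanent one depending on how it is programmed, which is what makes a reconfigurable memristor useful across different neural network architectures.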

New toolkit for neuroinformaticians

“To our knowledge, this is the first memristor that can be reliably switched between volatile and non-volatile modes on demand,” Demirağ says. This means that in the future, computer chips can be manufactured with memristors that enable both modes. This is a significant advance because it is usually not possible to combine several different types of memristors on one chip.

Within the scope of the study, which they published in the journal Nature Communications, the researchers tested 25 of these new memristors and carried out 20,000 measurements with them. In this way, they were able to simulate a computational problem on a complex network. The problem involved classifying a number of different neuron spikes as one of four predefined patterns.

Before these memristors can be used in computer technology, they will need to undergo further optimisation. However, such components are also important for research in neuroinformatics, as Indiveri points out: “These components come closer to real neurons than previous ones. As a result, they help researchers to better test hypotheses in neuroinformatics and hopefully gain a better understanding of the computing principles of real neuronal circuits in humans and animals.”

Here’s a link to and a citation for the paper,

Reconfigurable halide perovskite nanocrystal memristors for neuromorphic computing by Rohit Abraham John, Yiğit Demirağ, Yevhen Shynkarenko, Yuliia Berezovska, Natacha Ohannessian, Melika Payvand, Peng Zeng, Maryna I. Bodnarchuk, Frank Krumeich, Gökhan Kara, Ivan Shorubalko, Manu V. Nair, Graham A. Cooke, Thomas Lippert, Giacomo Indiveri & Maksym V. Kovalenko. Nature Communications volume 13, Article number: 2074 (2022) DOI: https://doi.org/10.1038/s41467-022-29727-1 Published: 19 April 2022

This paper is open access.

How AI-designed fiction reading lists and self-publishing help nurture far-right and neo-Nazi novelists

Literary theorists Helen Young and Geoff M Boucher, both at Deakin University (Australia), have co-written a fascinating May 29, 2022 essay on The Conversation (and republished on phys.org) analyzing some of the reasons (e.g., novels) for the resurgence in neo-Nazi activity and far-right extremism, Note: Links have been removed,

Far-right extremists pose an increasing risk in Australia and around the world. In 2020, ASIO [Australian Security Intelligence Organisation] revealed that about 40% of its counter-terrorism work involved the far right.

The recent mass murder in Buffalo, U.S., and the attack in Christchurch, New Zealand, in 2019 are just two examples of many far-right extremist acts of terror.

Far-right extremists have complex and diverse methods for spreading their messages of hate. These can include social media, video games, wellness culture, interest in medieval European history, and fiction [emphasis mine]. Novels by both extremist and non-extremist authors feature on far-right “reading lists” designed to draw people into their beliefs and normalize hate.

Here’s more about how the books get published and distributed, from the May 29, 2022 essay, Note: Links have been removed,

Publishing houses once refused to print such books, but changes in technology have made traditional publishers less important. With self-publishing and e-books, it is easy for extremists to produce and distribute their fiction.

In this article, we have only given the titles and authors of those books that are already notorious, to avoid publicizing other dangerous hate-filled fictions.

Why would far-right extremists write novels?

Reading fiction is different to reading non-fiction. Fiction offers readers imaginative scenarios that can seem to be truthful, even though they are not fact-based. It can encourage readers to empathize with the emotions, thoughts and ethics of characters, particularly when they recognize those characters as being “like” them.

A novel featuring characters who become radicalized to far-right extremism, or who undertake violent terrorist acts, can help make those things seem justified and normal.

Novels that promote political violence, such as The Turner Diaries, are also ways for extremists to share plans and give readers who hold extreme views ideas about how to commit terrorist acts. …

In the late 20th century, far-right extremists without Pierce’s notoriety [American neo-Nazi William L. Pierce published The Turner Diaries (1978)] found it impossible to get their books published. One complained about this on his blog in 1999, blaming feminists and Jewish people. Just a few years later, print-on-demand and digital self-publishing made it possible to circumvent this difficulty.

The same neo-Nazi self-published what he termed “a lifetime of writing” in the space of a few years in the early 2000s. The company he paid to produce his books—iUniverse.com—helped get them onto the sales lists of major booksellers Barnes and Noble and Amazon in the early 2000s, making a huge difference to how easily they circulated outside extremist circles.

It still produces print-on-demand hard copies, even though the author has died. The same author’s books also circulate in digital versions, including on Google Play and Kindle, making them easily accessible.

Distributing extremist novels digitally

Far-right extremists use social media to spread their beliefs, but other digital platforms are also useful for them.

Seemingly innocent sites that host a wide range of mainstream material, such as Google Books, Project Gutenberg, and the Internet Archive, are open to exploitation. Extremists use them to share, for example, material denying the Holocaust alongside historical Nazi newspapers.

Amazon’s Kindle self-publishing service has been called “a haven for white supremacists” because of how easy it is for them to circulate political tracts there. The far-right extremist who committed the Oslo terrorist attacks in 2011 recommended in his manifesto that his followers use Kindle to spread his message.

Our research has shown that novels by known far-right extremists have been published and circulated through Kindle as well as other digital self-publishing services.

AI and its algorithms also play a role, from the May 29, 2022 essay,

Radicalising recommendations

As we researched how novels by known violent extremists circulate, we noticed that the sales algorithms of mainstream platforms were suggesting others that we might also be interested in. Sales algorithms work by recommending items that customers who purchased one book have also viewed or bought.

Those recommendations directed us to an array of novels that, when we investigated them, proved to resonate with far-right ideologies.

A significant number of them were by authors with far-right political views. Some had ties to US militia movements and the gun-obsessed “prepper” subculture. Almost all of the books were self-published as e-books and print-on-demand editions.

Without the marketing and distribution channels of established publishing houses, these books rely on digital circulation for sales, including sale recommendation algorithms.

The trail of sales recommendations led us, with just two clicks, to the novels of mainstream authors. They also led us back again, from mainstream authors’ books to extremist novels. This is deeply troubling. It risks unsuspecting readers being introduced to the ideologies, world-views and sometimes powerful emotional narratives of far-right extremist novels designed to radicalise.
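The mechanism Young and Boucher describe is mundane: recommend whatever other buyers of the same book also viewed or bought. A minimal sketch of that kind of item-to-item, co-purchase recommendation logic, written by me for illustration and not taken from any retailer’s actual system, might look like this:

```python
from collections import Counter

# Minimal sketch of co-purchase ("customers also bought") recommendations.
# My own illustration with hypothetical baskets, not any retailer's system.

purchase_histories = [
    {"Novel A", "Novel B"},
    {"Novel A", "Novel C"},
    {"Novel B", "Novel C", "Novel D"},
]

def also_bought(title: str, top_n: int = 3) -> list[str]:
    """Rank other titles by how often they appear in baskets with `title`."""
    co_counts = Counter()
    for basket in purchase_histories:
        if title in basket:
            co_counts.update(basket - {title})
    return [other for other, _ in co_counts.most_common(top_n)]

print(also_bought("Novel A"))   # titles most often bought alongside Novel A
```

Nothing in that loop knows or cares what the books are about; it only counts co-occurrences, which is exactly why a trail of such recommendations can connect mainstream thrillers to extremist fiction in a couple of clicks.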

It’s not always easy to tell right away if you’re reading fiction promoting far-right ideologies, from the May 29, 2022 essay,

Recognising far-right messages

Some extremist novels follow the lead of The Turner Diaries and represent the start of a racist, openly genocidal war alongside a call to bring one about. Others are less obvious about their violent messages.

Some are not easily distinguished from mainstream novels – for example, from political thrillers and dystopian adventure stories like those of Tom Clancy or Matthew Reilly – so what is different about them? Openly neo-Nazi authors, like Pierce, often use racist, homophobic and misogynist slurs, but many do not. This may be to help make their books more palatable to general readers, or to avoid digital moderation based on specific words.

Knowing more about far-right extremism can help. Researchers generally say that there are three main things that connect the spectrum of far-right extremist politics: acceptance of social inequality, authoritarianism, and embracing violence as a tool for political change. Willingness to commit or endorse violence is a key factor separating extremism from other radical politics.

It is very unlikely that anyone would become radicalised to violent extremism just by reading novels. Novels can, however, reinforce political messages heard elsewhere (such as on social media) and help make those messages and acts of hate feel justified.

With the growing threat of far-right extremism and deliberate recruitment strategies of extremists targeting unexpected places, it is well worth being informed enough to recognise the hate-filled stories they tell.

I recommend reading the essay as my excerpts don’t do justice to the ideas being presented. As Young and Boucher note, it’s “… unlikely that anyone would become radicalised to violent extremism …” by reading novels but far-right extremists and neo-Nazis write fiction because the tactic works at some level.