Tag Archives: Purdue University

A hardware (neuromorphic and quantum) proposal for handling increased AI workload

It’s been a while since I’ve featured anything from Purdue University (Indiana, US). From a November 7, 2023 news item on Nanowerk, Note: Links have been removed,

Technology is edging closer and closer to the super-speed world of computing with artificial intelligence. But is the world equipped with the proper hardware to be able to handle the workload of new AI technological breakthroughs?

Key Takeaways
Current AI technologies are strained by the limitations of silicon-based computing hardware, necessitating new solutions.

Research led by Erica Carlson [Purdue University] suggests that neuromorphic [brainlike] architectures, which replicate the brain’s neurons and synapses, could revolutionize computing efficiency and power.

Vanadium oxides have been identified as a promising material for creating artificial neurons and synapses, crucial for neuromorphic computing.

Innovative non-volatile memory, observed in vanadium oxides, could be the key to more energy-efficient and capable AI hardware.

Future research will explore how to optimize the synaptic behavior of neuromorphic materials by controlling their memory properties.

The colored landscape above shows a transition temperature map of VO2 (pink surface) as measured by optical microscopy. This reveals the unique way that this neuromorphic quantum material [emphasis mine] stores memory like a synapse. Image credit: Erica Carlson, Alexandre Zimmers, and Adobe Stock

An October 13, 2023 Purdue University news release (also on EurekAlert but published November 6, 2023) by Cheryl Pierce, which originated the news item, provides more detail about the work, Note: A link has been removed,

“The brain-inspired codes of the AI revolution are largely being run on conventional silicon computer architectures which were not designed for it,” explains Erica Carlson, 150th Anniversary Professor of Physics and Astronomy at Purdue University.

A joint team of physicists from Purdue University, the University of California San Diego (UCSD), and École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris, France, believes it may have discovered a way to rework the hardware by mimicking the synapses of the human brain. They published their findings, “Spatially Distributed Ramp Reversal Memory in VO2,” in Advanced Electronic Materials; the paper is featured on the back cover of the October 2023 edition.

New paradigms in hardware will be necessary to handle the complexity of tomorrow’s computational advances. According to Carlson, lead theoretical scientist of this research, “neuromorphic architectures hold promise for lower energy consumption processors, enhanced computation, fundamentally different computational modes, native learning and enhanced pattern recognition.”

Neuromorphic architecture basically boils down to computer chips mimicking brain behavior. Neurons are cells in the brain that transmit information. Small gaps at the ends of neurons, called synapses, allow signals to pass from one neuron to the next. In biological brains, these synapses encode memory. This team of scientists concludes that vanadium oxides show tremendous promise for neuromorphic computing because they can be used to make both artificial neurons and synapses.

“The dissonance between hardware and software is the origin of the enormously high energy cost of training, for example, large language models like ChatGPT,” explains Carlson. “By contrast, neuromorphic architectures hold promise for lower energy consumption by mimicking the basic components of a brain: neurons and synapses. Whereas silicon is good at memory storage, the material does not easily lend itself to neuron-like behavior. Ultimately, to provide efficient, feasible neuromorphic hardware solutions requires research into materials with radically different behavior from silicon – ones that can naturally mimic synapses and neurons. Unfortunately, the competing design needs of artificial synapses and neurons mean that most materials that make good synaptors fail as neuristors, and vice versa. Only a handful of materials, most of them quantum materials, have the demonstrated ability to do both.”

The team relied on a recently discovered type of non-volatile memory which is driven by repeated partial temperature cycling through the insulator-to-metal transition. This memory was discovered in vanadium oxides.

Alexandre Zimmers, lead experimental scientist from Sorbonne University and École Supérieure de Physique et de Chimie Industrielles, Paris, explains, “Only a few quantum materials are good candidates for future neuromorphic devices, i.e., mimicking artificial synapses and neurons. For the first time, in one of them, vanadium dioxide, we can see optically what is changing in the material as it operates as an artificial synapse. We find that memory accumulates throughout the entirety of the sample, opening new opportunities on how and where to control this property.”

“The microscopic videos show that, surprisingly, the repeated advance and retreat of metal and insulator domains causes memory to be accumulated throughout the entirety of the sample, rather than only at the boundaries of domains,” explains Carlson. “The memory appears as shifts in the local temperature at which the material transitions from insulator to metal upon heating, or from metal to insulator upon cooling. We propose that these changes in the local transition temperature accumulate due to the preferential diffusion of point defects into the metallic domains that are interwoven through the insulator as the material is cycled partway through the transition.”
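Carlson's proposed mechanism — local transition temperatures shifting as defects diffuse into metallic domains during partial cycling — can be illustrated with a deliberately crude toy model. This sketch is purely illustrative (the site count, temperatures, shift size, and the uniform-shift rule are my assumptions, not the paper's model); it only shows how repeated partial cycles can accumulate a distributed "memory" in the local transition temperatures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D sample: each site has a local insulator-to-metal transition
# temperature Tc (kelvin), initially spread around ~340 K (roughly VO2's).
tc = 340.0 + rng.normal(0.0, 2.0, size=100)

def partial_cycle(tc, t_max=341.0, shift=0.05):
    """Heat partway through the transition: sites with Tc below t_max turn
    metallic; pretend defect diffusion into those metallic domains nudges
    their local Tc upward. That shift is the accumulated 'memory'."""
    metallic = tc < t_max
    tc = tc.copy()
    tc[metallic] += shift
    return tc

history = [tc]
for _ in range(50):
    history.append(partial_cycle(history[-1]))

# Memory accumulates throughout the sample: the local Tc map drifts upward
# everywhere the metallic domains reached, not just at domain boundaries.
print(round(history[0].mean(), 2), round(history[-1].mean(), 2))
```

The point of the sketch is only the bookkeeping: memory lives in a spatial map of transition temperatures that every partial cycle perturbs, which is qualitatively what the team's optical microscopy maps show.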

Now that the team has established that vanadium oxides are possible candidates for future neuromorphic devices, they plan to move forward in the next phase of their research.

“Now that we have established a way to see inside this neuromorphic material, we can locally tweak and observe the effects of, for example, ion bombardment on the material’s surface,” explains Zimmers. “This could allow us to guide the electrical current through specific regions in the sample where the memory effect is at its maximum. This has the potential to significantly enhance the synaptic behavior of this neuromorphic material.”

There’s a very interesting video (16 mins. 52 secs.) embedded in the October 13, 2023 Purdue University news release. In an interview with Dr. Erica Carlson, who hosts The Quantum Age website and video interviews on its YouTube channel, Alexandre Zimmers takes you from an amusing phenomenon observed by 19th-century scientists, through the 20th century, when the nanoscale phenomenon became of greater interest and was exploited (sonar, scanning tunneling microscopes, singing birthday cards, etc.), to the 21st century, where this new information is being integrated into a quantum* material for neuromorphic hardware.

Here’s a link to and a citation for the paper,

Spatially Distributed Ramp Reversal Memory in VO2 by Sayan Basak, Yuxin Sun, Melissa Alzate Banguero, Pavel Salev, Ivan K. Schuller, Lionel Aigouy, Erica W. Carlson, Alexandre Zimmers. Advanced Electronic Materials, Volume 9, Issue 10, October 2023, 2300085. DOI: https://doi.org/10.1002/aelm.202300085 First published: 10 July 2023

This paper is open access.

There’s a lot of research into neuromorphic hardware; here’s a sampling of some of my most recent posts on the topic,

There’s more, just use ‘neuromorphic hardware’ for your search term.

*’meta’ changed to ‘quantum’ on January 8, 2024.

Building Transdisciplinary Research Paths [for a] Sustainable & Inclusive Future, a December 14, 2022 science policy event

I received (via email) a December 8, 2022 Canadian Science Policy Centre (CSPC) announcement about their various doings when this event, which seems a little short on information, caught my attention,

[Building Transdisciplinary Research Paths towards a more Sustainable and Inclusive Future]

Upcoming Virtual Event

With this workshop, Belmont Forum and IAI aim to open a collective reflection on the ideas and practices around ‘Transdisciplinarity’ (TD) to foster participatory knowledge production. Our goal is to create a safe environment for people to share their impressions about TD, as a form of experimental lab based on a culture of collaboration.

This CSPC event page cleared up a few questions,

Building Transdisciplinary Research Paths towards a more Sustainable and Inclusive Future

Global environmental change and sustainability require engagement with civil society and wide participation to gain social legitimacy; it is also necessary to open cooperation among different scientific disciplines, borderless collaboration, and collaborative learning processes, among other crucial issues.

Those efforts have been recurrently encompassed by the idea of ‘Transdisciplinarity’ (TD), which is a fairly new word and evolving concept. Several of those characteristics are daily practices in academic and non-academic communities, sometimes under different words or conceptions.

With this workshop, Belmont Forum and IAI [Inter-American Institute for Global Change Research?] aim to open a collective reflection on the ideas and practices around ‘Transdisciplinarity’ (TD) to foster participatory knowledge production. Our goal is to create a safe environment for people to share their impressions about TD, as a form of experimental lab based on a culture of collaboration.

Date: Dec 14 [2022]

Time: 3:00 pm – 4:00 pm EST

Website [Register here]: https://us02web.zoom.us/meeting/register/tZArcOCupj4rHdBbwhSUpVhpvPuou5kNlZId

For the curious, here’s more about the Belmont Forum from their About page, Note: Links have been removed,

Established in 2009, the Belmont Forum is a partnership of funding organizations, international science councils, and regional consortia committed to the advancement of transdisciplinary science. Forum operations are guided by the Belmont Challenge, a vision document that encourages:

International transdisciplinary research providing knowledge for understanding, mitigating and adapting to global environmental change.

Forum members and partner organizations work collaboratively to meet this Challenge by issuing international calls for proposals, committing to best practices for open data access, and providing transdisciplinary training.  To that end, the Belmont Forum is also working to enhance the broader capacity to conduct transnational environmental change research through its e-Infrastructure and Data Management initiative.

Since its establishment, the Forum has successfully led 19 calls for proposals, supporting 134 projects and more than 1,000 scientists and stakeholders, representing over 90 countries.  Themes addressed by CRAs have included Freshwater Security, Coastal Vulnerability, Food Security and Land Use Change, Climate Predictability and Inter-Regional Linkages, Biodiversity and Ecosystem Services, Arctic Observing and Science for Sustainability, and Mountains as Sentinels of Change.  New themes are developed through a scoping process and made available for proposals through the Belmont Forum website and its BF Grant Operations site.

If you keep scrolling down the Belmont Forum’s About page, you’ll find an impressive list of partners including the Natural Sciences and Engineering Research Council of Canada (NSERC).

I’m pretty sure IAI is Inter-American Institute for Global Change Research, given that two of the panelists come from that organization. Here’s more about the IAI from their About Us page, Note: Links have been removed,

Humans have affected practically all ecosystems on earth. Over the past 200 years, mankind’s emissions of greenhouse gases into the Earth’s atmosphere have changed its radiative properties and are causing a rise in global temperatures which is now modifying Earth system functions globally. As a result, the 21st-century is faced with environmental changes from local to global scales that require large efforts of mitigation and adaptation by societies and ecosystems. The causes and effects, problems and solutions of global change interlink biogeochemistry, Earth system functions and socio-economic conditions in increasingly complex ways. To guide efforts of mitigation and adaptation to global change and aid policy decisions, scientific knowledge now needs to be generated in broad transdisciplinary ways that address the needs of knowledge users and also provide profound understanding of complex socio-environmental systems.

To address these knowledge needs, 12 nations of the American continent came together in Montevideo, Uruguay, in 1992 to establish the Inter-American Institute for Global Change Research (IAI). The 12 governments, in the Declaration of Montevideo, called for the Institute to develop the best possible international coordination of scientific and economic research on the extent, causes, and consequences of global change in the Americas.

Sixteen governments signed the resulting Agreement Establishing the IAI, which laid the foundation for the IAI’s function as a regional intergovernmental organization that promotes interdisciplinary scientific research and capacity building to inform decision-makers on the continent and beyond. Since the establishment of the Agreement in 1992, 3 additional nations have acceded to the treaty, and the IAI now has 19 Parties in the Americas, which come together once every year in the Conference of the Parties to monitor and direct the IAI’s activities.

Now onto the best part, reading about the panelists (from CSPC event page, scroll down and click on the See bio button), Note: I have made some rough changes to the formatting so that the bios match each other more closely,

Dr. Lily House-Peters is Associate Professor in the Department of Geography at California State University, Long Beach. Dr. House-Peters is a broadly trained human-environment geographer with experience in qualitative and quantitative approaches to human dimensions of environmental change, water security, mining and extraction, and natural resource conservation policy. She has a decade of experience and expertise in transdisciplinary research for action-oriented solutions to global environmental change. She is currently part of a team creating a curriculum for global change researchers in the Americas focused on the drivers and barriers of effective transdisciplinary collaboration and processes of integration and convergence in diverse teams.

Dr. Gabriela Alonso Yanez, Associate Professor, Werklund School of Education University of Calgary. Learning and education in the context of sustainability and global change are the focus of my work. Over the last ten years, I have participated in several collaborative research projects with multiple aims, including building researchers’ and organizations’ capacity for collaboration and engaging networks that include knowledge keepers, local community members and academics in co-designing and co-producing solutions-oriented knowledge.

Marshalee Valentine, MSc, BTech. Marshalee Valentine is Co-founder and Vice President of the International Women’s Coffee Alliance Jamaica (IWCA), a charitable organization responsible for the development and implementation of social impact and community development projects geared towards improving the livelihoods of women along the coffee value chain in Jamaica. In addition, she also owns and operates a Quality, Food Safety and Environmental Management Systems Consultancy. Her areas of expertise include: process improvement, technology and innovation transfer methods, capacity building, and community-based research.

With a background in Agriculture, she holds a Bachelor of Technology in Environmental Sciences and a Master’s Degree in Environmental Management. Marshalee offers a unique perspective for regional authenticity bringing deep sensibility to issues of gender, equity and inclusion, in particular related to GEC issues in small countries.

Fany Ramos Quispe, Science Technology and Policy Fellow, Inter-American Institute for Global Change Research. Fany Ramos Quispe holds a B.S. in Environmental Engineering from the Polytechnic Institute of Mexico, and an MSc. in Environmental Change and International Development from the University of Sheffield in the United Kingdom. She has worked with a variety of private and public organizations at the national and international levels. She has experience on projects related to renewable energies, waste and water management, environmental education, climate change, and inter- and transdisciplinary research, among others. After her postgraduate studies, she joined the Bolivian government mainly to support international affairs related to climate change at the Plurinational Authority of Mother Earth; afterwards, she joined the Centre for Social Research of the Vicepresidency as a Climate Change Specialist.

For several years now she has combined academic and professional activities with side projects and activism on environmental and educational issues. She is a founder and former chair (2019-2020) of the environmental engineers’ society of La Paz and collaborates with different grassroots organizations.

Fany is a member of OWSD Bolivia [Organization for Women in Science for the Developing World {OWSD}] and current IAI Science, Technology and Policy fellow at the Belmont Forum.

Dr. Laila Sandroni, Science Technology and Policy Fellow, InterAmerican Institute for Global Change Research. Laila Sandroni is an Anthropologist and Geographer with experience in transdisciplinary research in social sciences. Her research interests lie in the field of transformations to sustainability and the role of different kinds of knowledge in defining the best paths to achieve biodiversity conservation and forest management. She has particular expertise in epistemology, power-knowledge relations, and evidence-based policy in environmental issues.

Laila has a longstanding involvement with stakeholders working on different paths towards biodiversity conservation. She has experience in transdisciplinary science and participatory methodologies to encompass plural knowledge on the management of protected areas in tropical rainforests in Brazil.

This event seems to be free and it looks like an exciting panel.

Unexpectedly, they don’t have a male participant amongst the panelists. Outside of groups that are explicitly exploring women’s issues in the sciences, I’ve never before seen a science panel composed entirely of women. As well, the organizers seem to have broadened the range of geographies represented at a Canadian event with a researcher who has experience in Brazil, another with experience in Bolivia, a panelist who works in Jamaica, and two academics who focus on the Americas (South, Central, and North).

Transdisciplinarity and other disciplinarities

There are so many (crossdisciplinarity, multidisciplinarity, interdisciplinarity, and transdisciplinarity) that the whole subject gets a little confusing. Jeffrey Evans’ July 29, 2014 post on the Purdue University (Indiana, US) Polytechnic Institute blog answers questions about three of the disciplinarities (trans-, multi-, and inter-),

Learners entering the Polytechnic Incubator’s new program will no doubt hear the terms “multidisciplinary (arity)” and “interdisciplinary (arity)” thrown about somewhat indiscriminately. Interestingly, we administrators, faculty, and staff also use these terms rather loosely and too often without carefully considering their underlying meaning.

Recently I gave a talk about yet another disciplinarity: “transdisciplinarity.” The purpose of the talk was to share with colleagues from around the country the opportunities and challenges associated with developing a truly transdisciplinary environment in an institution of higher education. During a meeting after I returned, the terms “multi”, “inter”, and “trans” disciplinary(arity) were being thrown around, and it was clear that the meanings of the terms were not clearly understood. Hopefully this blog entry will help shed some light on the subject. …

First, I am not an expert in the various “disciplinarities.” The ideas and descriptions that follow are not mine and have been around for decades, with many books and articles written on the subject. Yet my Polytechnic Incubator colleagues and I believe in these ideas and in their advantages and constraints, and they serve to motivate the design of the Incubator’s transdisciplinary environment.

In 1992, Hugh G. Petrie wrote a seminal article [1] for the American Educational Research Association that articulates the meaning of these ideas. Later, in 2007, A. Wendy Russell, Fern Wickson, and Anna L. Carew contributed an article [2] discussing the context of transdisciplinarity, prescriptions for transdisciplinary knowledge production and the contradictions that arise, and suggestions for universities to develop capacity for transdisciplinarity, rather than simply investing in knowledge “products.” …

Multidisciplinarity

Petrie [1] discusses multidisciplinarity as “the idea of a number of disciplines working together on a problem, an educational program, or a research study. The effect is additive rather than integrative. The project is usually short-lived, and there is seldom any long-term change in the ways in which the disciplinary participants in a multidisciplinary project view their own work.”

Interdisciplinarity

Moving to extend the idea of multidisciplinarity to include more integration, rather than just addition, Petrie writes about interdisciplinarity in this way:

“Interdisciplinary research or education typically refers to those situations in which the integration of the work goes beyond the mere concatenation of disciplinary contributions. Some key elements of disciplinarians’ use of their concepts and tools change. There is a level of integration. Interdisciplinary subjects in university curricula such as physical chemistry or social psychology, which by now have, perhaps, themselves become disciplines, are good examples. A newer one might be the field of immunopharmacology, which combines the work of bacteriology, chemistry, physiology, and immunology. Another instance of interdisciplinarity might be the emerging notion of a core curriculum that goes considerably beyond simple distribution requirements in undergraduate programs of general education.”

Transdisciplinarity

Petrie [1] writes about transdisciplinarity in this way: “The notion of transdisciplinarity exemplifies one of the historically important driving forces in the area of interdisciplinarity, namely, the idea of the desirability of the integration of knowledge into some meaningful whole. The best example, perhaps, of the drive to transdisciplinarity might be the early discussions of general systems theory when it was being held forward as a grand synthesis of knowledge. Marxism, structuralism, and feminist theory are sometimes cited as examples of a transdisciplinary approach. Essentially, this kind of interdisciplinarity represents the impetus to integrate knowledge, and, hence, is often characterized by a denigration and repudiation of the disciplines and disciplinary work as essentially fragmented and incomplete.”

It seems multidisciplinarity could be viewed as an ad hoc approach, whereas interdisciplinarity and transdisciplinarity are intimately related, with ‘inter-’ being a subset of ‘trans-’.

I think that’s enough for now. Should I ever stumble across a definition for crossdisciplinarity, I will endeavour to add it here.

An ‘artificial brain’ and life-long learning

Talk of artificial brains (also known as brainlike or neuromorphic computing) usually turns to memory fairly quickly. This February 3, 2022 news item on ScienceDaily does too, although the focus is on how memory and forgetting affect the ability to learn,

When the human brain learns something new, it adapts. But when artificial intelligence learns something new, it tends to forget information it already learned.

As companies use more and more data to improve how AI recognizes images, learns languages and carries out other complex tasks, a paper publishing in Science this week shows a way that computer chips could dynamically rewire themselves to take in new data like the brain does, helping AI to keep learning over time.

“The brains of living beings can continuously learn throughout their lifespan. We have now created an artificial platform for machines to learn throughout their lifespan,” said Shriram Ramanathan, a professor in Purdue University’s [Indiana, US] School of Materials Engineering who specializes in discovering how materials could mimic the brain to improve computing.

Unlike the brain, which constantly forms new connections between neurons to enable learning, the circuits on a computer chip don’t change. A circuit that a machine has been using for years isn’t any different than the circuit that was originally built for the machine in a factory.

This is a problem for making AI more portable, such as for autonomous vehicles or robots in space that would have to make decisions on their own in isolated environments. If AI could be embedded directly into hardware rather than just running on software as AI typically does, these machines would be able to operate more efficiently.

A February 3, 2022 Purdue University news release (also on EurekAlert), which originated the news item, provides more technical detail about the work (Note: Links have been removed),

In this study, Ramanathan and his team built a new piece of hardware that can be reprogrammed on demand through electrical pulses. Ramanathan believes that this adaptability would allow the device to take on all of the functions that are necessary to build a brain-inspired computer.

“If we want to build a computer or a machine that is inspired by the brain, then correspondingly, we want to have the ability to continuously program, reprogram and change the chip,” Ramanathan said.

Toward building a brain in chip form

The hardware is a small, rectangular device made of a material called perovskite nickelate,  which is very sensitive to hydrogen. Applying electrical pulses at different voltages allows the device to shuffle a concentration of hydrogen ions in a matter of nanoseconds, creating states that the researchers found could be mapped out to corresponding functions in the brain.

When the device has more hydrogen near its center, for example, it can act as a neuron, a single nerve cell. With less hydrogen at that location, the device serves as a synapse, a connection between neurons, which is what the brain uses to store memory in complex neural circuits.

Through simulations of the experimental data, the Purdue team’s collaborators at Santa Clara University and Portland State University showed that the internal physics of this device creates a dynamic structure for an artificial neural network that is able to more efficiently recognize electrocardiogram patterns and digits compared to static networks. This neural network uses “reservoir computing,” which explains how different parts of a brain communicate and transfer information.
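The press release invokes "reservoir computing" without unpacking it. For readers unfamiliar with the idea, here is a minimal echo-state-style reservoir sketch: a fixed random recurrent network whose states are read out linearly, with only the readout trained. This is a generic textbook sketch, not the paper's device-based network; the sizes, weight scales, and the one-step-memory task are my choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed random reservoir: only the linear readout will be trained.
n_in, n_res = 1, 50
w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
w = rng.uniform(-0.5, 0.5, (n_res, n_res))
w *= 0.9 / max(abs(np.linalg.eigvals(w)))  # spectral radius < 1: fading memory

def run_reservoir(inputs):
    """Drive the reservoir with a 1-D input sequence, collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(w_in @ np.atleast_1d(u) + w @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: recall the input from one step earlier (short-term memory).
u = rng.uniform(-1, 1, 300)
states = run_reservoir(u)
target = np.roll(u, 1)  # target[t] = u[t-1]

# Train the readout by ordinary least squares on the reservoir states.
w_out, *_ = np.linalg.lstsq(states[1:], target[1:], rcond=None)
pred = states[1:] @ w_out
print("mean squared error:", round(float(np.mean((pred - target[1:]) ** 2)), 4))
```

The appeal for neuromorphic hardware is that the reservoir itself never needs training, so a physical system with rich internal dynamics (like the nickelate device) can play that role while only a simple readout is learned.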

Researchers from The Pennsylvania State University also demonstrated in this study that as new problems are presented, a dynamic network can “pick and choose” which circuits are the best fit for addressing those problems.

Since the team was able to build the device using standard semiconductor-compatible fabrication techniques and operate the device at room temperature, Ramanathan believes that this technique can be readily adopted by the semiconductor industry.

“We demonstrated that this device is very robust,” said Michael Park, a Purdue Ph.D. student in materials engineering. “After programming the device over a million cycles, the reconfiguration of all functions is remarkably reproducible.”

The researchers are working to demonstrate these concepts on large-scale test chips that would be used to build a brain-inspired computer.

Experiments at Purdue were conducted at the FLEX Lab and Birck Nanotechnology Center of Purdue’s Discovery Park. The team’s collaborators at Argonne National Laboratory, the University of Illinois, Brookhaven National Laboratory and the University of Georgia conducted measurements of the device’s properties.

Here’s a link to and a citation for the paper,

Reconfigurable perovskite nickelate electronics for artificial intelligence by Hai-Tian Zhang, Tae Joon Park, A. N. M. Nafiul Islam, Dat S. J. Tran, Sukriti Manna, Qi Wang, Sandip Mondal, Haoming Yu, Suvo Banik, Shaobo Cheng, Hua Zhou, Sampath Gamage, Sayantan Mahapatra, Yimei Zhu, Yohannes Abate, Nan Jiang, Subramanian K. R. S. Sankaranarayanan, Abhronil Sengupta, Christof Teuscher, Shriram Ramanathan. Science, 3 Feb 2022, Vol. 375, Issue 6580, pp. 533-539. DOI: 10.1126/science.abj7943

This paper is behind a paywall.

Neuromorphic (brainlike) computing inspired by sea slugs

The sea slug has taught neuroscientists the intelligence features that any creature in the animal kingdom needs to survive. Now, the sea slug is teaching artificial intelligence how to use those strategies. Pictured: Aplysia californica. (Image by NOAA Monterey Bay National Marine Sanctuary/Chad King.)

I don’t think I’ve ever seen a picture of a sea slug before. Its appearance reminds me of its terrestrial cousin.

As for some of the latest news on brainlike computing, a December 7, 2021 news item on Nanowerk makes an announcement from the Argonne National Laboratory (a US Department of Energy laboratory), Note: Links have been removed,

A team of scientists has discovered a new material that points the way toward more efficient artificial intelligence hardware for everything from self-driving cars to surgical robots.

For artificial intelligence (AI) to get any smarter, it needs first to be as intelligent as one of the simplest creatures in the animal kingdom: the sea slug.

A new study has found that a material can mimic the sea slug’s most essential intelligence features. The discovery is a step toward building hardware that could help make AI more efficient and reliable for technology ranging from self-driving cars and surgical robots to social media algorithms.

The study, published in the Proceedings of the National Academy of Sciences [PNAS] (“Neuromorphic learning with Mott insulator NiO”), was conducted by a team of researchers from Purdue University, Rutgers University, the University of Georgia and the U.S. Department of Energy’s (DOE) Argonne National Laboratory. The team used the resources of the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne.

A December 6, 2021 Argonne National Laboratory news release (also on EurekAlert) by Kayla Wiles and Andre Salles, which originated the news item, provides more detail,

“Through studying sea slugs, neuroscientists discovered the hallmarks of intelligence that are fundamental to any organism’s survival,” said Shriram Ramanathan, a Purdue professor of Materials Engineering. “We want to take advantage of that mature intelligence in animals to accelerate the development of AI.”

Two main signs of intelligence that neuroscientists have learned from sea slugs are habituation and sensitization. Habituation is getting used to a stimulus over time, such as tuning out noises when driving the same route to work every day. Sensitization is the opposite — it’s reacting strongly to a new stimulus, like avoiding bad food from a restaurant.

AI has a really hard time learning and storing new information without overwriting information it has already learned and stored, a problem that researchers studying brain-inspired computing call the “stability-plasticity dilemma.” Habituation would allow AI to “forget” unneeded information (achieving more stability) while sensitization could help with retaining new and important information (enabling plasticity).
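Habituation and sensitization, as described above, lend themselves to a very small behavioral sketch. The rule below is purely illustrative (the decay and boost factors, and the "repeat vs. novel" logic, are my invention, not the nickel-oxide physics): the response to a repeated stimulus fades, while a novel stimulus provokes an amplified response.

```python
def respond(stimuli, habituate=0.8, sensitize=1.5):
    """Return a response strength per stimulus: repeats of the same
    stimulus habituate (response shrinks); a never-seen stimulus
    sensitizes (response is amplified)."""
    weights, responses, last = {}, [], None
    for s in stimuli:
        w = weights.get(s, 1.0)
        if s == last:
            w *= habituate       # repeated stimulus: tune it out
        elif s not in weights:
            w *= sensitize       # novel stimulus: react strongly
        weights[s] = w
        responses.append(w)
        last = s
    return responses

# Five taps in a row, then an electric shock, echoing the sea-slug experiment:
# the tap response decays, and the novel shock provokes a strong response.
r = respond(["tap"] * 5 + ["shock"])
print([round(x, 3) for x in r])
```

Mapped onto the study, the fading "tap" response plays the role of nickel oxide's shrinking resistance change under repeated hydrogen exposure, and the "shock" plays the role of the ozone stimulus.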

In this study, the researchers found a way to demonstrate both habituation and sensitization in nickel oxide, a quantum material. Quantum materials are engineered to take advantage of features available only at nature’s smallest scales, and useful for information processing. If a quantum material could reliably mimic these forms of learning, then it may be possible to build AI directly into hardware. And if AI could operate both through hardware and software, it might be able to perform more complex tasks using less energy.

“We basically emulated experiments done on sea slugs in quantum materials toward understanding how these materials can be of interest for AI,” Ramanathan said.

Neuroscience studies have shown that the sea slug demonstrates habituation when it stops withdrawing its gill as much in response to tapping. But an electric shock to its tail causes its gill to withdraw much more dramatically, showing sensitization.

For nickel oxide, the equivalent of a “gill withdrawal” is an increased change in electrical resistance. The researchers found that repeatedly exposing the material to hydrogen gas causes nickel oxide’s change in electrical resistance to decrease over time, but introducing a new stimulus like ozone greatly increases the change in electrical resistance.

Ramanathan and his colleagues used two experimental stations at the APS to test this theory, using X-ray absorption spectroscopy. A sample of nickel oxide was exposed to hydrogen and oxygen, and the ultrabright X-rays of the APS were used to see changes in the material at the atomic level over time.

“Nickel oxide is a relatively simple material,” said Argonne physicist Hua Zhou, a co-author on the paper who worked with the team at beamline 33-ID. “The goal was to use something easy to manufacture, and see if it would mimic this behavior. We looked at whether the material gained or lost a single electron after exposure to the gas.”

The research team also conducted scans at beamline 29-ID, which uses softer X-rays to probe different energy ranges. While the harder X-rays of 33-ID are more sensitive to the “core” electrons, those closer to the nucleus of the nickel oxide’s atoms, the softer X-rays can more readily observe the electrons on the outer shell. These are the electrons that define whether a material is conductive or resistive to electricity.

“We’re very sensitive to the change of resistivity in these samples,” said Argonne physicist Fanny Rodolakis, a co-author on the paper who led the work at beamline 29-ID. “We can directly probe how the electronic states of oxygen and nickel evolve under different treatments.”

Physicist Zhan Zhang and postdoctoral researcher Hui Cao, both of Argonne, contributed to the work, and are listed as co-authors on the paper. Zhang said the APS is well suited for research like this, due to its bright beam that can be tuned over different energy ranges.

For practical use of quantum materials as AI hardware, researchers will need to figure out how to apply habituation and sensitization in large-scale systems. They also would have to determine how a material could respond to stimuli while integrated into a computer chip.

This study is a starting place for guiding those next steps, the researchers said. Meanwhile, the APS is undergoing a massive upgrade that will not only increase the brightness of its beams by up to 500 times, but will allow for those beams to be focused much smaller than they are today. And this, Zhou said, will prove useful once this technology does find its way into electronic devices.

“If we want to test the properties of microelectronics,” he said, “the smaller beam that the upgraded APS will give us will be essential.”

In addition to the experiments performed at Purdue and Argonne, a team at Rutgers University performed detailed theory calculations to understand what was happening within nickel oxide at a microscopic level to mimic the sea slug’s intelligence features. The University of Georgia measured conductivity to further analyze the material’s behavior.

A version of this story was originally published by Purdue University

About the Advanced Photon Source

The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

You can find the September 24, 2021 Purdue University story, Taking lessons from a sea slug, study points to better hardware for artificial intelligence here.

Here’s a link to and a citation for the paper,

Neuromorphic learning with Mott insulator NiO by Zhen Zhang, Sandip Mondal, Subhasish Mandal, Jason M. Allred, Neda Alsadat Aghamiri, Alireza Fali, Zhan Zhang, Hua Zhou, Hui Cao, Fanny Rodolakis, Jessica L. McChesney, Qi Wang, Yifei Sun, Yohannes Abate, Kaushik Roy, Karin M. Rabe, and Shriram Ramanathan. PNAS September 28, 2021 118 (39) e2017239118 DOI: https://doi.org/10.1073/pnas.2017239118

This paper is behind a paywall.

Pandemic science breakthroughs: combining supercomputing materials with specialized oxides to mimic brain function

This breakthrough in neuromorphic (brainlike) computing is being attributed to the pandemic (COVID-19) according to a September 3, 2021 news item on phys.org,

Isaac Newton’s groundbreaking scientific productivity while isolated from the spread of bubonic plague is legendary. University of California San Diego physicists can now claim a stake in the annals of pandemic-driven science.

A team of UC San Diego [University of California San Diego] researchers and colleagues at Purdue University have now simulated the foundation of new types of artificial intelligence computing devices that mimic brain functions, an achievement that resulted from the COVID-19 pandemic lockdown. By combining new supercomputing materials with specialized oxides, the researchers successfully demonstrated the backbone of networks of circuits and devices that mirror the connectivity of neurons and synapses in biologically based neural networks.

A September 3, 2021 UC San Diego news release by Mario Aguilera, which originated the news item, delves further into the topic of neuromorphic computing,

As bandwidth demands on today’s computers and other devices reach their technological limit, scientists are working towards a future in which new materials can be orchestrated to mimic the speed and precision of animal-like nervous systems. Neuromorphic computing based on quantum materials, which display quantum-mechanics-based properties, allow scientists the ability to move beyond the limits of traditional semiconductor materials. This advanced versatility opens the door to new-age devices that are far more flexible with lower energy demands than today’s devices. Some of these efforts are being led by Department of Physics Assistant Professor Alex Frañó and other researchers in UC San Diego’s Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C), a Department of Energy-supported Energy Frontier Research Center.

“In the past 50 years we’ve seen incredible technological achievements that resulted in computers that were progressively smaller and faster—but even these devices have limits for data storage and energy consumption,” said Frañó, who served as one of the PNAS paper’s authors, along with former UC San Diego chancellor, UC president and physicist Robert Dynes. “Neuromorphic computing is inspired by the emergent processes of the millions of neurons, axons and dendrites that are connected all over our body in an extremely complex nervous system.”

As experimental physicists, Frañó and Dynes are typically busy in their laboratories using state-of-the-art instruments to explore new materials. But with the onset of the pandemic, Frañó and his colleagues were forced into isolation with concerns about how they would keep their research moving forward. They eventually came to the realization that they could advance their science from the perspective of simulations of quantum materials.

“This is a pandemic paper,” said Frañó. “My co-authors and I decided to study this issue from a more theoretical perspective so we sat down and started having weekly (Zoom-based) meetings. Eventually the idea developed and took off.”

The researchers’ innovation was based on joining two types of quantum substances—superconducting materials based on copper oxide and metal insulator transition materials that are based on nickel oxide. They created basic “loop devices” that could be precisely controlled at the nano-scale with helium and hydrogen, reflecting the way neurons and synapses are connected. Adding more of these devices that link and exchange information with each other, the simulations showed that eventually they would allow the creation of an array of networked devices that display emergent properties like an animal’s brain.

Like the brain, neuromorphic devices are being designed to enhance connections that are more important than others, similar to the way synapses weigh more important messages than others.

“It’s surprising that when you start to put in more loops, you start to see behavior that you did not expect,” said Frañó. “From this paper we can imagine doing this with six, 20 or a hundred of these devices—then it gets exponentially rich from there. Ultimately the goal is to create a very large and complex network of these devices that will have the ability to learn and adapt.”

With eased pandemic restrictions, Frañó and his colleagues are back in the laboratory, testing the theoretical simulations described in the PNAS [Proceedings of the National Academy of Sciences] paper with real-world instruments.

Here’s a link to and a citation for the paper,

Low-temperature emergent neuromorphic networks with correlated oxide devices by Uday S. Goteti, Ivan A. Zaluzhnyy, Shriram Ramanathan, Robert C. Dynes, and Alex Frano. PNAS August 31, 2021 118 (35) e2103934118; DOI: https://doi.org/10.1073/pnas.2103934118

This paper is open access.

The coolest paint

It’s the ‘est’ of it all. The coolest, the whitest, the blackest … Scientists and artists are both pursuing the ‘est’. (More about the pursuit later in this posting.)

In this case, scientists have developed the coolest, whitest paint yet. From an April 16, 2021 news item on Nanowerk,

In an effort to curb global warming, Purdue University engineers have created the whitest paint yet. Coating buildings with this paint may one day cool them off enough to reduce the need for air conditioning, the researchers say.

In October [2020], the team created an ultra-white paint that pushed limits on how white paint can be. Now they’ve outdone that. The newer paint not only is whiter but also can keep surfaces cooler than the formulation that the researchers had previously demonstrated.

“If you were to use this paint to cover a roof area of about 1,000 square feet, we estimate that you could get a cooling power of 10 kilowatts. That’s more powerful than the central air conditioners used by most houses,” said Xiulin Ruan, a Purdue professor of mechanical engineering.
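Ruan's 10-kilowatt figure can be sanity-checked with a quick back-of-envelope calculation. The square-foot conversion is standard; the implied per-area cooling flux of roughly 100 watts per square metre is consistent with values reported for daytime radiative coolers, though the release itself doesn't state one.

```python
# Back-of-envelope check of the quoted whole-roof cooling power.
FT2_TO_M2 = 0.0929                    # square feet to square metres
roof_area_m2 = 1000 * FT2_TO_M2       # ~92.9 m^2
cooling_power_w = 10_000              # 10 kW, as quoted by Ruan
flux_w_per_m2 = cooling_power_w / roof_area_m2
print(round(flux_w_per_m2))           # ~108 W per square metre of roof
```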

Caption: Xiulin Ruan, a Purdue University professor of mechanical engineering, holds up his lab’s sample of the whitest paint on record. Credit: Purdue University/Jared Pike

This is nicely done. Researcher Xiulin Ruan is standing close to a structure that could be said to resemble the sun while in shirtsleeves and sunglasses and holding up a sample of his whitest paint in April (not usually a warm month in Indiana).

An April 15, 2021 Purdue University news release (also on EurekAlert), which originated the news item, provides more detail about the work and hints about its commercial applications, both civilian and military,

The researchers believe that this white may be the closest equivalent of the blackest black, “Vantablack,” [emphasis mine; see comments later in this post] which absorbs up to 99.9% of visible light. The new whitest paint formulation reflects up to 98.1% of sunlight – compared with the 95.5% of sunlight reflected by the researchers’ previous ultra-white paint – and sends infrared heat away from a surface at the same time.

Typical commercial white paint gets warmer rather than cooler. Paints on the market that are designed to reject heat reflect only 80%-90% of sunlight and can’t make surfaces cooler than their surroundings.
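The jump from 95.5% to 98.1% reflectance matters more than it might sound. Assuming a nominal clear-sky solar irradiance of 1,000 watts per square metre (a standard reference value, not a figure from the release), the absorbed solar power drops by more than half:

```python
# Absorbed solar flux at each reflectance, assuming 1,000 W/m^2 irradiance.
SOLAR = 1000.0  # W/m^2, nominal clear-sky value (assumption)
absorbed = {name: SOLAR * (1 - r) for name, r in [
    ("new BaSO4 paint", 0.981),
    ("previous ultra-white paint", 0.955),
    ("commercial heat-rejecting paint", 0.85),
]}
for name, watts in absorbed.items():
    print(f"{name}: {watts:.0f} W/m^2 absorbed")
```

At this nominal irradiance the new paint absorbs roughly 19 W/m² versus 45 W/m² for the earlier formulation, which is why a 2.6-point reflectance gain translates into a noticeably cooler surface.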

The team’s research paper showing how the paint works publishes Thursday (April 15 [2021]) as the cover of the journal ACS Applied Materials & Interfaces.

What makes the whitest paint so white

Two features give the paint its extreme whiteness. One is the paint’s very high concentration of a chemical compound called barium sulfate [emphasis mine], which is also used to make photo paper and cosmetics white.

“We looked at various commercial products, basically anything that’s white,” said Xiangyu Li, a postdoctoral researcher at the Massachusetts Institute of Technology who worked on this project as a Purdue Ph.D. student in Ruan’s lab. “We found that using barium sulfate, you can theoretically make things really, really reflective, which means that they’re really, really white.”

The second feature is that the barium sulfate particles are all different sizes in the paint. How much each particle scatters light depends on its size, so a wider range of particle sizes allows the paint to scatter more of the light spectrum from the sun.

“A high concentration of particles that are also different sizes gives the paint the broadest spectral scattering, which contributes to the highest reflectance,” said Joseph Peoples, a Purdue Ph.D. student in mechanical engineering.

There is a little bit of room to make the paint whiter, but not much without compromising the paint. “Although a higher particle concentration is better for making something white, you can’t increase the concentration too much. The higher the concentration, the easier it is for the paint to break or peel off,” Li said.

How the whitest paint is also the coolest

The paint’s whiteness also means that the paint is the coolest on record. Using high-accuracy temperature reading equipment called thermocouples, the researchers demonstrated outdoors that the paint can keep surfaces 19 degrees Fahrenheit cooler than their ambient surroundings at night. It can also cool surfaces 8 degrees Fahrenheit below their surroundings under strong sunlight during noon hours.

The paint’s solar reflectance is so effective, it even worked in the middle of winter. During an outdoor test with an ambient temperature of 43 degrees Fahrenheit, the paint still managed to lower the sample temperature by 18 degrees Fahrenheit.

This white paint is the result of six years of research building on attempts going back to the 1970s to develop radiative cooling paint as a feasible alternative to traditional air conditioners.

Ruan’s lab had considered over 100 different materials, narrowed them down to 10 and tested about 50 different formulations for each material. Their previous whitest paint was a formulation made of calcium carbonate, an earth-abundant compound commonly found in rocks and seashells.

The researchers showed in their study that like commercial paint, their barium sulfate-based paint can potentially handle outdoor conditions. The technique that the researchers used to create the paint also is compatible with the commercial paint fabrication process.

Patent applications for this paint formulation have been filed through the Purdue Research Foundation Office of Technology Commercialization. This research was supported by the Cooling Technologies Research Center at Purdue University and the Air Force Office of Scientific Research [emphasis mine] through the Defense University Research Instrumentation Program (Grant No.427 FA9550-17-1-0368). The research was performed at Purdue’s FLEX Lab and Ray W. Herrick Laboratories and the Birck Nanotechnology Center of Purdue’s Discovery Park.

Here’s a link to and a citation for the paper,

Ultrawhite BaSO4 Paints and Films for Remarkable Daytime Subambient Radiative Cooling by Xiangyu Li, Joseph Peoples, Peiyan Yao, and Xiulin Ruan. ACS Appl. Mater. Interfaces 2021, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acsami.1c02368 Publication Date:April 15, 2021 © 2021 American Chemical Society

This paper is behind a paywall.

Vantablack and the ongoing ‘est’ of blackest

Vantablack’s 99.9% light absorption no longer qualifies it for the ‘blackest black’. A newer standard for the ‘blackest black’ was set by the US National Institute of Standards and Technology at 99.99% light absorption with its N.I.S.T. ultra-black in 2019, although that too seems to have been bested.

I have three postings covering the Vantablack and blackest black story,

The third posting (December 2019) provides a brief summary of the story along with what was the latest from the US National Institute of Standards and Technology. There’s also a little bit about the ‘The Redemption of Vanity’ an art piece demonstrating the blackest black material from the Massachusetts Institute of Technology, which they state has 99.995% (at least) absorption of light.

From a science perspective, the blackest black would be useful for space exploration.

I am surprised there doesn’t seem to have been an artistic rush to work with the whitest white. That impression may be due to the fact that the feuds get more attention than quiet work.

Dark side to the whitest white?

Andrew Parnell, research fellow in physics and astronomy at the University of Sheffield (UK), mentions a downside to obtaining the material needed to produce this cooling white paint in a June 10, 2021 essay on The Conversation (h/t Fast Company), Note: Links have been removed,

… this whiter-than-white paint has a darker side. The energy required to dig up raw barite ore to produce and process the barium sulphate that makes up nearly 60% of the paint means it has a huge carbon footprint. And using the paint widely would mean a dramatic increase in the mining of barium.

Parnell ends his essay with this (Note: Links have been removed),

Barium sulphate-based paint is just one way to improve the reflectivity of buildings. I’ve spent the last few years researching the colour white in the natural world, from white surfaces to white animals. Animal hairs, feathers and butterfly wings provide different examples of how nature regulates temperature within a structure. Mimicking these natural techniques could help to keep our cities cooler with less cost to the environment.

The wings of one intensely white beetle species called Lepidiota stigma appear a strikingly bright white thanks to nanostructures in their scales, which are very good at scattering incoming light. This natural light-scattering property can be used to design even better paints: for example, by using recycled plastic to create white paint containing similar nanostructures with a far lower carbon footprint. When it comes to taking inspiration from nature, the sky’s the limit.

Spider web-like electronics with graphene

A spiderweb-inspired fractal design is used for hemispherical 3D photodetection to replicate the vision system of arthropods. (Sena Huh image)

This image is pretty, and I’m fairly certain it’s an illustration rather than a real photodetection system. Regardless, an Oct. 21, 2020 news item on Nanowerk describes the research into producing a real 3D hemispheric photodetector for biomedical imaging (Note: A link has been removed),

Purdue University innovators are taking cues from nature to develop 3D photodetectors for biomedical imaging.

The researchers used some architectural features from spider webs to develop the technology. Spider webs typically provide excellent mechanical adaptability and damage-tolerance against various mechanical loads such as storms.

“We employed the unique fractal design of a spider web for the development of deformable and reliable electronics that can seamlessly interface with any 3D curvilinear surface,” said Chi Hwan Lee, a Purdue assistant professor of biomedical engineering and mechanical engineering. “For example, we demonstrated a hemispherical, or dome-shaped, photodetector array that can detect both direction and intensity of incident light at the same time, like the vision system of arthropods such as insects and crustaceans.”

The Purdue technology uses the structural architecture of a spider web that exhibits a repeating pattern. This work is published in Advanced Materials (“Fractal Web Design of a Hemispherical Photodetector Array with Organic-Dye-Sensitized Graphene Hybrid Composites”).

An Oct. 21, 2020 Purdue University news release by Chris Adam, which originated the news item, delves further into the work,

Lee said this provides unique capabilities to distribute externally induced stress throughout the threads according to the effective ratio of spiral and radial dimensions and provides greater extensibility to better dissipate force under stretching. Lee said it also can tolerate minor cuts of the threads while maintaining overall strength and function of the entire web architecture.

“The resulting 3D optoelectronic architectures are particularly attractive for photodetection systems that require a large field of view and wide-angle antireflection, which will be useful for many biomedical and military imaging purposes,” said Muhammad Ashraful Alam, the Jai N. Gupta Professor of Electrical and Computer Engineering.

Alam said the work establishes a platform technology that can integrate a fractal web design with system-level hemispherical electronics and sensors, thereby offering excellent mechanical adaptability and damage tolerance against various mechanical loads.

“The assembly technique presented in this work enables deploying 2D deformable electronics in 3D architectures, which may foreshadow new opportunities to better advance the field of 3D electronic and optoelectronic devices,” Lee said.

Here’s a link to and a citation for the paper,

Fractal Web Design of a Hemispherical Photodetector Array with Organic‐Dye‐Sensitized Graphene Hybrid Composites by Eun Kwang Lee, Ratul Kumar Baruah, Jung Woo Leem, Woohyun Park, Bong Hoon Kim, Augustine Urbas, Zahyun Ku, Young L. Kim, Muhammad Ashraful Alam, Chi Hwan Lee. Advanced Materials Volume 32, Issue 46 November 19, 2020 2004456 DOI: https://doi.org/10.1002/adma.202004456 First published online: 12 October 2020

This paper is behind a paywall.

Artificial intelligence (AI) consumes a lot of energy but tree-like memory may help conserve it

A simulation of a quantum material’s properties reveals its ability to learn numbers, a test of artificial intelligence. (Purdue University image/Shakti Wadekar)

A May 7, 2020 Purdue University news release (also on EurekAlert) describes a new approach for energy-efficient hardware in support of artificial intelligence (AI) systems,

To just solve a puzzle or play a game, artificial intelligence can require software running on thousands of computers, consuming as much energy as three nuclear plants produce in one hour.

A team of engineers has created hardware that can learn skills using a type of AI that currently runs on software platforms. Sharing intelligence features between hardware and software would offset the energy needed for using AI in more advanced applications such as self-driving cars or discovering drugs.

“Software is taking on most of the challenges in AI. If you could incorporate intelligence into the circuit components in addition to what is happening in software, you could do things that simply cannot be done today,” said Shriram Ramanathan, a professor of materials engineering at Purdue University.

AI hardware development is still in early research stages. Researchers have demonstrated AI in pieces of potential hardware, but haven’t yet addressed AI’s large energy demand.

As AI penetrates more of daily life, a heavy reliance on software with massive energy needs is not sustainable, Ramanathan said. If hardware and software could share intelligence features, an area of silicon might be able to achieve more with a given input of energy.

Ramanathan’s team is the first to demonstrate artificial “tree-like” memory in a piece of potential hardware at room temperature. Researchers in the past have only been able to observe this kind of memory in hardware at temperatures that are too low for electronic devices.

The results of this study are published in the journal Nature Communications.

The hardware that Ramanathan’s team developed is made of a so-called quantum material. These materials are known for having properties that cannot be explained by classical physics. Ramanathan’s lab has been working to better understand these materials and how they might be used to solve problems in electronics.

Software uses tree-like memory to organize information into various “branches,” making that information easier to retrieve when learning new skills or tasks.

The strategy is inspired by how the human brain categorizes information and makes decisions.

“Humans memorize things in a tree structure of categories. We memorize ‘apple’ under the category of ‘fruit’ and ‘elephant’ under the category of ‘animal,’ for example,” said Hai-Tian Zhang, a Lillian Gilbreth postdoctoral fellow in Purdue’s College of Engineering. “Mimicking these features in hardware is potentially interesting for brain-inspired computing.”

The team introduced a proton to a quantum material called neodymium nickel oxide. They discovered that applying an electric pulse to the material moves around the proton. Each new position of the proton creates a different resistance state, which creates an information storage site called a memory state. Multiple electric pulses create a branch made up of memory states.

“We can build up many thousands of memory states in the material by taking advantage of quantum mechanical effects. The material stays the same. We are simply shuffling around protons,” Ramanathan said.

Through simulations of the properties discovered in this material, the team showed that the material is capable of learning the numbers 0 through 9. The ability to learn numbers is a baseline test of artificial intelligence.
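The “tree-like” organization Zhang describes (apple under fruit, elephant under animal) is, in software terms, a branching category structure. The sketch below is a hypothetical illustration of that software concept only; it says nothing about the proton-shuffling device physics.

```python
# Minimal tree-structured memory: items stored under branching categories.
memory = {}

def store(path, item):
    """Store an item under a branch path, e.g. ('fruit',) -> 'apple'."""
    node = memory
    for category in path:
        node = node.setdefault(category, {})  # grow the branch as needed
    node.setdefault("_items", []).append(item)

def retrieve(path):
    """Return the items stored at the end of a branch path."""
    node = memory
    for category in path:
        node = node.get(category, {})
    return node.get("_items", [])

store(("fruit",), "apple")
store(("animal",), "elephant")
store(("animal", "mammal"), "elephant seal")
```

Organizing memory this way means a lookup only walks one branch of categories rather than scanning everything stored, which is the retrieval advantage the release alludes to.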

The demonstration of these trees at room temperature in a material is a step toward showing that hardware could offload tasks from software.

“This discovery opens up new frontiers for AI that have been largely ignored because implementing this kind of intelligence into electronic hardware didn’t exist,” Ramanathan said.

The material might also help create a way for humans to more naturally communicate with AI.

“Protons also are natural information transporters in human beings. A device enabled by proton transport may be a key component for eventually achieving direct communication with organisms, such as through a brain implant,” Zhang said.

Here’s a link to and a citation for the published study,

Perovskite neural trees by Hai-Tian Zhang, Tae Joon Park, Shriram Ramanathan. Nature Communications volume 11, Article number: 2245 (2020) DOI: https://doi.org/10.1038/s41467-020-16105-y Published: 07 May 2020

This paper is open access.

Control your electronics devices with your clothing while protecting yourself from bacteria

Purdue University researchers have developed a new fabric innovation that allows the wearer to control electronic devices through the clothing. Courtesy: Purdue University

I like the image but do they really want someone pressing a cufflink? Anyway, being able to turn on your house lights and music system with your clothing would certainly be convenient. From an August 8, 2019 Purdue University (Indiana, US) news release (also on EurekAlert) by Chris Adam,

A new addition to your wardrobe may soon help you turn on the lights and music – while also keeping you fresh, dry, fashionable, clean and safe from the latest virus that’s going around.

Purdue University researchers have developed a new fabric innovation that allows wearers to control electronic devices through clothing.

“It is the first time there is a technique capable to transform any existing cloth item or textile into a self-powered e-textile containing sensors, music players or simple illumination displays using simple embroidery without the need for expensive fabrication processes requiring complex steps or expensive equipment,” said Ramses Martinez, an assistant professor in the School of Industrial Engineering and in the Weldon School of Biomedical Engineering in Purdue’s College of Engineering.

The technology is featured in the July 25 [2019] edition of Advanced Functional Materials.

“For the first time, it is possible to fabricate textiles that can protect you from rain, stains, and bacteria while they harvest the energy of the user to power textile-based electronics,” Martinez said. “These self-powered e-textiles also constitute an important advancement in the development of wearable machine-human interfaces, which now can be washed many times in a conventional washing machine without apparent degradation.”

Martinez said the Purdue waterproof, breathable and antibacterial self-powered clothing is based on omniphobic triboelectric nanogenerators (RF-TENGs) – which use simple embroidery and fluorinated molecules to embed small electronic components and turn a piece of clothing into a mechanism for powering devices. The Purdue team says the RF-TENG technology is like having a wearable remote control that also keeps odors, rain, stains and bacteria away from the user.

“While fashion has evolved significantly during the last centuries and has easily adopted recently developed high-performance materials, there are very few examples of clothes on the market that interact with the user,” Martinez said. “Having an interface with a machine that we are constantly wearing sounds like the most convenient approach for a seamless communication with machines and the Internet of Things.”

The technology is being patented through the Purdue Research Foundation Office of Technology Commercialization. The researchers are looking for partners to test and commercialize their technology.

Their work aligns with Purdue’s Giant Leaps celebration of the university’s global advancements in artificial intelligence and health as part of Purdue’s 150th anniversary. It is one of the four themes of the yearlong celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.

Here’s a link to and a citation for the paper,

Waterproof, Breathable, and Antibacterial Self‐Powered e‐Textiles Based on Omniphobic Triboelectric Nanogenerators by Marina Sala de Medeiros, Daniela Chanci, Carolina Moreno, Debkalpa Goswami, Ramses V. Martinez. Advanced Functional Materials. DOI: https://doi.org/10.1002/adfm.201904350 First published online: 25 July 2019

This paper is behind a paywall.

New ingredient for computers: water!

A July 25, 2019 news item on Nanowerk provides a description of Moore’s Law and some ‘watery’ research that may upend it,

Moore’s law – which says the number of components that can be etched onto the surface of a silicon wafer will double every two years – has been the subject of recent debate. The pace of computing advancements in the past decade has led some experts to say Moore’s law, the brainchild of Intel co-founder Gordon Moore in the 1960s, no longer applies. Of particular concern, next-generation computing devices require features smaller than 10 nanometers – driving unsustainable increases in fabrication costs.
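For readers who like to see the arithmetic, Moore’s law is just repeated doubling: over n years, the component count grows by a factor of 2 raised to the power of n divided by 2. A minimal sketch (my own illustration, not from the article, using the Intel 4004’s roughly 2,300 transistors as a starting point):

```python
def moore_projection(initial_count: int, years: int) -> int:
    """Project component count assuming a doubling every two years."""
    doublings = years // 2  # one doubling per two-year period
    return initial_count * 2 ** doublings

# Starting from ~2,300 transistors (Intel 4004, 1971), a 20-year
# projection gives 10 doublings: 2300 * 2**10 = 2,355,200.
print(moore_projection(2300, 20))
```

The exponential curve is exactly why feature sizes have been pushed below 10 nanometers, and why the fabrication costs mentioned above climb so steeply.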

Biology creates features at sub-10nm scales routinely, but they are often structured in ways that are not useful for applications like computing. A Purdue University group has found ways of transforming structures that occur naturally in cell membranes to create other architectures, like parallel 1nm-wide line segments, more applicable to computing.

Inspired by biological cell membranes, Purdue researchers in the Claridge Research Group have developed surfaces that act as molecular-scale blueprints for unpacking and aligning nanoscale components for next-generation computers. The secret ingredient? Water, in tiny amounts.

A July 25, 2019 Purdue University news release (also on EurekAlert), expands on the theme,

“Biology has an amazing tool kit for embedding chemical information in a surface,” said Shelley Claridge, a recently tenured faculty member in chemistry and biomedical engineering at Purdue, who leads a group of nanomaterials researchers. “What we’re finding is that these instructions can become even more powerful in nonbiological settings, where water is scarce.”

In work just published in Chem, sister journal to Cell, the group has found that stripes of lipids can unpack and order flexible gold nanowires with diameters of just 2 nm, over areas corresponding to many millions of molecules in the template surface.

“The real surprise was the importance of water,” Claridge said. “Your body is mostly water, so the molecules in your cell membranes depend on it to function. Even after we transform the membrane structure in a way that’s very nonbiological and dry it out, these molecules can pull enough water out of dry winter air to do their job.”

Their work aligns with Purdue’s Giant Leaps celebration, celebrating the global advancements in sustainability as part of Purdue’s 150th anniversary. Sustainability is one of the four themes of the yearlong celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.

The research team is working with the Purdue Research Foundation Office of Technology Commercialization to patent their work. They are looking for partners for continued research and to take the technology to market. [emphasis mine]

I wonder how close they are to taking this work to market. Usually they say it will be five to 10 years but perhaps we’ll see water-based computers in the near future. In the meantime, here’s a link to and a citation for the paper,

1-nm-Wide Hydrated Dipole Arrays Regulate AuNW Assembly on Striped Monolayers in Nonpolar Solvent by Ashlin G. Porter, Tianhong Ouyang, Tyler R. Hayes, John Biechele-Speziale, Shane R. Russell, Shelley A. Claridge. Chem. DOI: https://doi.org/10.1016/j.chempr.2019.07.002 Published online: July 25, 2019

This paper is behind a paywall.