Tag Archives: University of Oklahoma

Device with brainlike plasticity

A September 1, 2021 news item on ScienceDaily announces a new type of memristor from Texas A&M University (Texas A&M or TAMU) and the National University of Singapore (NUS),

In a discovery published in the journal Nature, an international team of researchers has described a novel molecular device with exceptional computing prowess.

Reminiscent of the plasticity of connections in the human brain, the device can be reconfigured on the fly for different computational tasks by simply changing applied voltages. Furthermore, like nerve cells can store memories, the same device can also retain information for future retrieval and processing.

Two of the universities involved in the research have issued news/press releases. I’m going to start with the September 1, 2021 Texas A&M University news release (also on EurekAlert), which originated the news item on ScienceDaily,

“The brain has the remarkable ability to change its wiring around by making and breaking connections between nerve cells. Achieving something comparable in a physical system has been extremely challenging,” said Dr. R. Stanley Williams [emphasis mine], professor in the Department of Electrical and Computer Engineering at Texas A&M University. “We have now created a molecular device with dramatic reconfigurability, which is achieved not by changing physical connections like in the brain, but by reprogramming its logic.”

Dr. T. Venkatesan, director of the Center for Quantum Research and Technology (CQRT) at the University of Oklahoma, Scientific Affiliate at the National Institute of Standards and Technology, Gaithersburg, and adjunct professor of electrical and computer engineering at the National University of Singapore, added that their molecular device might in the future help design next-generation processing chips with enhanced computational power and speed while consuming significantly less energy.

Whether it is the familiar laptop or a sophisticated supercomputer, digital technologies face a common nemesis, the von Neumann bottleneck. This delay in computational processing is a consequence of current computer architectures, wherein the memory, containing data and programs, is physically separated from the processor. As a result, computers spend a significant amount of time shuttling information between the two systems, causing the bottleneck. Also, despite extremely fast processor speeds, these units can be idling for extended amounts of time during periods of information exchange.

As an alternative to conventional electronic parts used for designing memory units and processors, devices called memristors offer a way to circumvent the von Neumann bottleneck. Memristors, such as those made of niobium dioxide and vanadium dioxide, transition from being an insulator to a conductor at a set temperature. This property gives these types of memristors the ability to perform computations and store data.

However, despite their many advantages, these metal oxide memristors are made of rare-earth elements and can operate only in restrictive temperature regimes. Hence, there has been an ongoing search for promising organic molecules that can perform a comparable memristive function, said Williams.

Dr. Sreebrata Goswami, a professor at the Indian Association for the Cultivation of Science, designed the material used in this work. The compound has a central metal atom (iron) bound to three phenyl azo pyridine organic molecules called ligands.

“This behaves like an electron sponge that can absorb as many as six electrons reversibly, resulting in seven different redox states,” said Sreebrata. “The interconnectivity between these states is the key behind the reconfigurability shown in this work.”

Dr. Sreetosh Goswami, a researcher at the National University of Singapore, devised this project by creating a tiny electrical circuit consisting of a 40-nanometer layer of molecular film sandwiched between a layer of gold on top and gold-infused nanodisc and indium tin oxide at the bottom.

On applying a negative voltage to the device, Sreetosh observed a current-voltage profile unlike anything anyone had seen before. Unlike metal-oxide memristors, which can switch from metal to insulator at only one fixed voltage, the organic molecular devices could switch back and forth from insulator to conductor at several discrete sequential voltages.

“So, if you think of the device as an on-off switch, as we were sweeping the voltage more negative, the device first switched from on to off, then off to on, then on to off and then back to on. I’ll say that we were just blown out of our seat,” said Venkatesan. “We had to convince ourselves that what we were seeing was real.”

Sreetosh and Sreebrata investigated the molecular mechanisms underlying the curious switching behavior using an imaging technique called Raman spectroscopy. In particular, they looked for spectral signatures in the vibrational motion of the organic molecule that could explain the multiple transitions. Their investigation revealed that sweeping the voltage negative triggered the ligands on the molecule to undergo a series of reduction, or electron-gaining, events that caused the molecule to transition between off and on states.

Next, to describe the extremely complex current-voltage profile of the molecular device mathematically, Williams deviated from the conventional approach of basic physics-based equations. Instead, he described the behavior of the molecules using a decision tree algorithm with “if-then-else” statements, a commonplace line of code in several computer programs, particularly digital games.

“Video games have a structure where you have a character that does something, and then something occurs as a result. And so, if you write that out in a computer algorithm, they are if-then-else statements,” said Williams. “Here, the molecule is switching from on to off as a consequence of applied voltage, and that’s when I had the eureka moment to use decision trees to describe these devices, and it worked very well.” 
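To make that concrete, here is a minimal sketch, in Python, of how a chain of “if-then-else” statements can describe a device that flips between on and off at several discrete negative voltages. The threshold voltages and the on/off sequence below are invented for illustration; they are not the values measured in the paper.

```python
# Minimal sketch: an "if-then-else" decision tree mapping applied voltage to a
# device state, in the spirit of the description above. The threshold voltages
# and the on/off pattern are invented for illustration, not measured values.

def device_state(voltage: float) -> str:
    """Return 'on' or 'off' for a device that flips at several discrete voltages."""
    if voltage > -1.0:          # before the first transition
        return "on"
    elif voltage > -2.0:        # first switch: on -> off
        return "off"
    elif voltage > -3.0:        # second switch: off -> on
        return "on"
    elif voltage > -4.0:        # third switch: on -> off
        return "off"
    else:                       # fourth switch: back on
        return "on"

# Sweeping the voltage more negative walks the device through on/off/on/off/on,
# matching the qualitative behaviour Venkatesan describes above.
for v in [-0.5, -1.5, -2.5, -3.5, -4.5]:
    print(v, device_state(v))
```

Loosely speaking, each branch of the tree stands in for one voltage-programmed state of the device, which is what lets the same element be reused for different logic functions.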

But the researchers went a step further to exploit these molecular devices to run programs for different real-world computational tasks. Sreetosh showed experimentally that their devices could perform fairly complex computations in a single time step and then be reprogrammed to perform another task in the next instant.

“It was quite extraordinary; our device was doing something like what the brain does, but in a very different way,” said Sreetosh. “When you’re learning something new or when you’re deciding, the brain can actually reconfigure and change physical wiring around. Similarly, we can logically reprogram or reconfigure our devices by giving them a different voltage pulse than they’ve seen before.”

Venkatesan noted that it would take thousands of transistors to perform the same computational functions as one of their molecular devices with its different decision trees. Hence, he said their technology might first be used in handheld devices, like cell phones and sensors, and other applications where power is limited.

Other contributors to the research include Dr. Abhijeet Patra and Dr. Ariando from the National University of Singapore; Dr. Rajib Pramanick and Dr. Santi Prasad Rath from the Indian Association for the Cultivation of Science; Dr. Martin Foltin from Hewlett Packard Enterprise, Colorado; and Dr. Damien Thompson from the University of Limerick, Ireland.

Venkatesan said that this research is indicative of future discoveries from this collaborative team, which will include the center of nanoscience and engineering at the Indian Institute of Science and the Microsystems and Nanotechnology Division at NIST.

I’ve highlighted R. Stanley Williams because he and his team at HP [Hewlett Packard] Labs helped to kick off current memristor research in 2008 with the publication of two papers as per my April 5, 2010 posting,

In 2008, two memristor papers were published in Nature and Nature Nanotechnology, respectively. In the first (Nature, May 2008 [article still behind a paywall]), a team at HP Labs claimed they had proved the existence of memristors (a fourth member of electrical engineering’s ‘Holy Trinity of the capacitor, resistor, and inductor’). In the second paper (Nature Nanotechnology, July 2008 [article still behind a paywall]) the team reported that they had achieved engineering control.
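For readers who want a little more context on that ‘fourth element’ claim: Leon Chua’s original 1971 argument was that the resistor, capacitor and inductor each link a pair of the four basic electrical quantities (voltage, current, charge, and magnetic flux linkage), leaving the charge-flux pairing without an element of its own. In standard textbook notation (not taken from either 2008 paper), the four defining relations are,

```latex
% Chua's classification of the four basic two-terminal circuit elements:
% each relates a pair of the four quantities voltage v, current i,
% charge q, and flux linkage \varphi.
% (Requires amsmath for the aligned environment.)
\[
\begin{aligned}
\text{resistor:}  \quad & \mathrm{d}v       = R\,\mathrm{d}i \\
\text{capacitor:} \quad & \mathrm{d}q       = C\,\mathrm{d}v \\
\text{inductor:}  \quad & \mathrm{d}\varphi = L\,\mathrm{d}i \\
\text{memristor:} \quad & \mathrm{d}\varphi = M(q)\,\mathrm{d}q
\end{aligned}
\]
```

The 2008 HP Labs work was presented as the first physical device whose measured behaviour fit that fourth relation.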

The novel memory device is based on a molecular system that can transition between on and off states at several discrete sequential voltages. Courtesy: National University of Singapore

There is more technical detail in the September 2, 2021 NUS press release (also on EurekAlert),

Many electronic devices today are dependent on semiconductor logic circuits based on switches hard-wired to perform predefined logic functions. Physicists from the National University of Singapore (NUS), together with an international team of researchers, have developed a novel molecular memristor, or an electronic memory device, that has exceptional memory reconfigurability. 

Unlike hard-wired standard circuits, the molecular device can be reconfigured using voltage to embed different computational tasks. The energy-efficient new technology, which is capable of enhanced computational power and speed, can potentially be used in edge computing, as well as in handheld devices and applications with limited power resources.

“This work is a significant breakthrough in our quest to design low-energy computing. The idea of using multiple switching in a single element draws inspiration from how the brain works and fundamentally reimagines the design strategy of a logic circuit,” said Associate Professor Ariando from the NUS Department of Physics who led the research.

The research was first published in the journal Nature on 1 September 2021, and carried out in collaboration with the Indian Association for the Cultivation of Science, Hewlett Packard Enterprise, the University of Limerick, the University of Oklahoma, and Texas A&M University.

Brain-inspired technology

“This new discovery can contribute to developments in edge computing as a sophisticated in-memory computing approach to overcome the von Neumann bottleneck, a delay in computational processing seen in many digital technologies due to the physical separation of memory storage from a device’s processor,” said Assoc Prof Ariando. The new molecular device also has the potential to contribute to designing next generation processing chips with enhanced computational power and speed.

“Similar to the flexibility and adaptability of connections in the human brain, our memory device can be reconfigured on the fly for different computational tasks by simply changing applied voltages. Furthermore, like how nerve cells can store memories, the same device can also retain information for future retrieval and processing,” said first author Dr Sreetosh Goswami, Research Fellow from the Department of Physics at NUS.

Research team member Dr Sreebrata Goswami, who was a Senior Research Scientist at NUS and previously Professor at the Indian Association for the Cultivation of Science, conceptualised and designed a molecular system belonging to the chemical family of phenyl azo pyridines that have a central metal atom bound to organic molecules called ligands. “These molecules are like electron sponges that can offer as many as six electron transfers resulting in five different molecular states. The interconnectivity between these states is the key behind the device’s reconfigurability,” explained Dr Sreebrata Goswami.

Dr Sreetosh Goswami created a tiny electrical circuit consisting of a 40-nanometer layer of molecular film sandwiched between a top layer of gold, and a bottom layer of gold-infused nanodisc and indium tin oxide. He observed an unprecedented current-voltage profile upon applying a negative voltage to the device. Unlike conventional metal-oxide memristors that are switched on and off at only one fixed voltage, these organic molecular devices could switch between on-off states at several discrete sequential voltages.

Using an imaging technique called Raman spectroscopy, spectral signatures in the vibrational motion of the organic molecule were observed to explain the multiple transitions. Dr Sreebrata Goswami explained, “Sweeping the negative voltage triggered the ligands on the molecule to undergo a series of reduction, or electron-gaining, events which caused the molecule to transition between off and on states.”

Rather than taking the conventional approach of basic physics-based equations, the researchers described the behavior of the molecules using a decision tree algorithm with “if-then-else” statements, a construct common in many computer programs, particularly digital games.

New possibilities for energy-efficient devices

Building on their research, the team used the molecular memory devices to run programs for different real-world computational tasks. As a proof of concept, the team demonstrated that their technology could perform complex computations in a single step, and could be reprogrammed to perform another task in the next instant. An individual molecular memory device could perform the same computational functions as thousands of transistors, making the technology a more powerful and energy-efficient memory option.

“The technology might first be used in handheld devices, like cell phones and sensors, and other applications where power is limited,” added Assoc Prof Ariando.

The team is in the midst of building new electronic devices incorporating their innovation and is working with collaborators to conduct simulations and benchmarking against existing technologies.

Other contributors to the research paper include Abhijeet Patra and Santi Prasad Rath from NUS, Rajib Pramanick from the Indian Association for the Cultivation of Science, Martin Foltin from Hewlett Packard Enterprise, Damien Thompson from the University of Limerick, T. Venkatesan from the University of Oklahoma, and R. Stanley Williams from Texas A&M University.

Here’s a link to and a citation for the paper,

Decision trees within a molecular memristor by Sreetosh Goswami, Rajib Pramanick, Abhijeet Patra, Santi Prasad Rath, Martin Foltin, A. Ariando, Damien Thompson, T. Venkatesan, Sreebrata Goswami & R. Stanley Williams. Nature volume 597, pages 51–56 (2021) DOI: https://doi.org/10.1038/s41586-021-03748-0 Published 01 September 2021 Issue Date 02 September 2021

This paper is behind a paywall.

Nanoscale measurements for osteoarthritis biomarker

There’s a new technique for measuring hyaluronic acid (HA), which appears to be associated with osteoarthritis. A March 12, 2018 news item on ScienceDaily makes the announcement,

For the first time, scientists at Wake Forest Baptist Medical Center have been able to measure a specific molecule indicative of osteoarthritis and a number of other inflammatory diseases using a newly developed technology.

This preclinical [emphasis mine] study used a solid-state nanopore sensor as a tool for the analysis of hyaluronic acid (HA).

I looked at the abstract for the paper (citation and link follow at end of this post) and found that it has been tested on ‘equine models’. Presumably they mean horses or, more accurately, members of the horse family. The next step is likely to be testing on humans, i.e., clinical trials.

A March 12, 2018 Wake Forest Baptist Medical Center news release (also on EurekAlert), which originated the news item, provides more details,

HA is a naturally occurring molecule that is involved in tissue hydration, inflammation and joint lubrication in the body. The abundance and size distribution of HA in biological fluids is recognized as an indicator of inflammation, leading to osteoarthritis and other chronic inflammatory diseases. It can also serve as an indicator of how far the disease has progressed.

“Our results established a new, quantitative method for the assessment of a significant molecular biomarker that bridges a gap in the conventional technology,” said lead author Adam R. Hall, Ph.D., assistant professor of biomedical engineering at Wake Forest School of Medicine, part of Wake Forest Baptist.

“The sensitivity, speed and small sample requirements of this approach make it attractive as the basis for a powerful analytic tool with distinct advantages over current assessment technologies.”

The most widely used method is gel electrophoresis, which is slow, messy, semi-quantitative, and requires a lot of starting material, Hall said. Other technologies include mass spectrometry and size-exclusion chromatography, which are expensive and limited in range, and multi-angle light scattering, which is non-quantitative and has limited precision.

The study, which is published in the current issue of Nature Communications, was led by Hall and Elaheh Rahbar, Ph.D., of Wake Forest Baptist, and conducted in collaboration with scientists at Cornell University and the University of Oklahoma.

In the study, Hall, Rahbar and their team first employed synthetic HA polymers to validate the measurement approach. They then used the platform to determine the size distribution of as little as 10 nanograms (one-billionth of a gram) of HA extracted from the synovial fluid of a horse model of osteoarthritis.

The measurement approach consists of a microchip with a single hole or pore in it that is a few nanometers wide – about 5,000 times smaller than a human hair. This is small enough that only individual molecules can pass through the opening, and as they do, each can be detected and analyzed. By applying the approach to HA molecules, the researchers were able to determine their size one-by-one. HA size distribution changes over time in osteoarthritis, so this technology could help better assess disease progression, Hall said.
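To give a sense of how one-by-one molecule detection turns into a size distribution, here’s a minimal, hypothetical Python sketch. It assumes each translocation event has already been reduced to a single size estimate (in nanopore sensing this is typically derived from the depth and duration of the current blockade, calibrated against standards such as the synthetic HA polymers mentioned above) and then simply bins those estimates into a histogram. The calibration constant, bin width, and event values are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch: turning per-molecule nanopore events into a size distribution.
# Each translocation event is assumed to have been reduced to an event duration;
# the linear calibration used here is invented purely for illustration.

from collections import Counter

def estimated_size_kda(event_duration_us: float, calibration_kda_per_us: float = 2.0) -> float:
    """Toy calibration: assume estimated size scales linearly with event duration.
    A real analysis would calibrate against HA standards of known size."""
    return event_duration_us * calibration_kda_per_us

def size_distribution(event_durations_us, bin_width_kda: float = 100.0) -> dict:
    """Bin per-event size estimates into a histogram (bin start in kDa -> count)."""
    counts = Counter()
    for duration in event_durations_us:
        size = estimated_size_kda(duration)
        bin_start = int(size // bin_width_kda) * bin_width_kda
        counts[bin_start] += 1
    return dict(sorted(counts.items()))

# Example: a handful of made-up event durations (microseconds)
events = [120, 340, 95, 410, 255, 130, 380]
print(size_distribution(events))
```

In practice the per-event analysis is more involved, but the final step, aggregating single-molecule estimates into a distribution whose shape can be tracked over time, is the part that matters for assessing disease progression.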

“By using a minimally invasive procedure to extract a tiny amount of fluid – in this case synovial fluid from the knee – we may be able to identify the disease or determine how far it has progressed, which is valuable information for doctors in determining appropriate treatments,” he said.

Hall, Rahbar and their team hope to conduct their next study in humans, and then extend the technology to other diseases where HA and similar molecules play a role, including traumatic injuries and cancer.

Here’s a link to and a citation for the paper,

Label-free analysis of physiological hyaluronan size distribution with a solid-state nanopore sensor by Felipe Rivas, Osama K. Zahid, Heidi L. Reesink, Bridgette T. Peal, Alan J. Nixon, Paul L. DeAngelis, Aleksander Skardal, Elaheh Rahbar, & Adam R. Hall. Nature Communications volume 9, Article number: 1037 (2018) doi:10.1038/s41467-018-03439-x
Published online: 12 March 2018

This paper is open access.

Earthquakes, deep and shallow, and their nanocrystals

Those of us who live in this region are warned on a regular basis that a ‘big’ one is overdue somewhere along the West Coast of Canada and the US. It gives me an interest in the geological side of things. While the May 19, 2015 news items on Azonano, featuring the research story as told by the University of Oklahoma and the University of California at Riverside, don’t fall directly under my purview, they’re close enough.

Here’s the lead researcher, Harry W. Green II, from the University of California at Riverside, explaining the work,

The May 18, 2015 University of Oklahoma news release on EurekAlert offers a succinct summary,

A University of Oklahoma structural geologist and collaborators are studying earthquake instability and the mechanisms associated with fault weakening during slip. The mechanism of this weakening is central to understanding earthquake sliding.

Ze’ev Reches, professor in the OU School of Geology and Geophysics, is using electron microscopy to examine velocity and temperature in two key observations: (1) a high-speed friction experiment on carbonate at conditions of shallow earthquakes, and (2) a high-pressure/high-temperature faulting experiment at conditions of very deep earthquakes.

Reches and his collaborators have shown that phase transformation and the formation of nano-size (millionth of a millimeter) grains are associated with profound weakening, and that fluid is not necessary for such weakening. If this mechanism operates in major earthquakes, it resolves two major conflicts between laboratory results and natural faulting: the lack of a thermal zone around major faults and the rarity of glassy rocks along faults.

The May 18, 2015 University of California at Riverside (UCR) news release provides more detail about earthquakes,

Earthquakes are labeled “shallow” if they occur at less than 50 kilometers depth.  They are labeled “deep” if they occur at 300-700 kilometers depth.  When slippage occurs during these earthquakes, the faults weaken.  How this fault weakening takes place is central to understanding earthquake sliding.

A new study published online in Nature Geoscience today by a research team led by University of California, Riverside geologists now reports that a universal sliding mechanism operates for earthquakes of all depths – from the deep ones all the way up to the crustal ones.

“Although shallow earthquakes – the kind that threaten California – must initiate differently from the very deep ones, our new work shows that, once started, they both slide by the same physics,” said deep-earthquake expert Harry W. Green II, a distinguished professor of the Graduate Division in UC Riverside’s Department of Earth Sciences, who led the research project. “Our research paper presents a new, unifying model of how earthquakes work. Our results provide a more accurate understanding of what happens during earthquake sliding that can lead to better computer models and could lead to better predictions of seismic shaking danger.”

The UCR news release goes on to describe the physics of sliding and a controversy concerning shallow and deep earthquakes,

The physics of the sliding is the self-lubrication of the earthquake fault by flow of a new material consisting of tiny new crystals, the study reports. Both shallow earthquakes and deep ones involve phase transformations of rocks that produce tiny crystals of new phases on which sliding occurs.

“Other researchers have suggested that fluids are present in the fault zones or generated there,” Green said. “Our study shows fluids are not necessary for fault weakening. As earthquakes get started, local extreme heating takes place in the fault zone. The result of that heating in shallow earthquakes is to initiate reactions like the ones that take place in deep earthquakes so they both end up lubricated in the same way.”

Green explained that at 300-700 kilometers depth, the pressure and temperature are so high that rocks in this deep interior of the planet cannot break by the brittle processes seen on Earth’s surface. In the case of shallow earthquakes, stresses on the fault increase slowly in response to slow movement of tectonic plates, with sliding beginning when these stresses exceed static friction. While deep earthquakes also get started in response to increasing stresses, the rocks there flow rather than break, except under special conditions.

“Those special conditions of temperature and pressure induce minerals in the rock to break down to other minerals, and in the process of this phase transformation a fault can form and suddenly move, radiating the shaking – just like at shallow depths,” Green said.

The research explains why large faults like the San Andreas Fault in California do not have a heat-flow anomaly around them. Were shallow earthquakes to slide by the grinding and crunching of rock, as geologists once imagined, the process would generate enough heat so that major faults like the San Andreas would be a little warmer along their length than they would be otherwise.

“But such a predicted warm region along such faults has never been found,” Green said.  “The logical conclusion is that the fault must move more easily than we thought.  Extreme heating in a very thin zone along the fault produces the very weak lubricant.  The volume of material that is heated is very small and survives for a very short time – seconds, perhaps – followed by very little heat generation during sliding because the lubricant is very weak.”

The new research also explains why faults with glass on them (reflecting the fact that during the earthquake the fault zone melted) are rare. As shallow earthquakes start, the temperature rises locally until it is hot enough to start a chemical reaction – usually the breakdown of clays or carbonates or other hydrous phases in the fault zone.  The reactions that break down the clays or carbonates stop the temperature from climbing higher, with heat being used up in the reactions that produce the nanocrystalline lubricant.

If the fault zone does not have hydrous phases or carbonates, the sudden heating that begins when sliding starts raises the local temperature on the fault all the way to the melting temperature of the rock.  In such cases, the melt behaves like a lubricant and the sliding surface ends up covered with melt (that would quench to a glass) instead of the nanocrystalline lubricant.

“The reason this does not happen often, that is, the reason we do not see lots of faults with glass on them, is that the Earth’s crust is made up to a large degree of hydrous and carbonate phases, and even the rocks that don’t have such phases usually have feldspars that get crushed up in the fault zone,” Green explained. “The feldspars will ‘rot’ to clays during the hundred years or so between earthquakes as water moves along the fault zone. In that case, when the next earthquake comes, the fault zone is ready with clays and other phases that can break down, and the process repeats itself.”

The research involved the study of laboratory earthquakes – high-pressure earthquakes as well as high-speed ones – using electron microscopy in friction and faulting experiments. It was Green’s laboratory that first conducted a serendipitous series of experiments, in 1989, on the right kind of mantle rocks that give geologists insight into how deep earthquakes work. In the new work, Green and his team also investigated the Punchbowl Fault, an ancestral branch of the San Andreas Fault that has been exhumed by erosion from several kilometers depth, and found nanometric materials within the fault – as predicted by their model.

Here’s a link to and a citation for the paper,

Phase transformation and nanometric flow cause extreme weakening during fault slip by H. W. Green II, F. Shi, K. Bozhilov, G. Xia, & Z. Reches. Nature Geoscience (2015) doi:10.1038/ngeo2436 Published online 18 May 2015

This paper is behind a paywall.