Scientific studies describing the most basic processes often have the greatest impact in the long run. A new work by Rice University engineers could be one such, and it’s a gas, gas, gas for nanomaterials.
Yes, I ‘stole’ the phrase from the news item/release for my headline. For anyone unfamiliar with the word ‘gas’ used as slang, it means something is good or wonderful (see Urban Dictionary).
Getting back to science, gas, and nanomaterials, a June 11, 2021 Rice University news release (also on EurekAlert), which originated the news item, answers some questions about how a nanomaterial used in electronics could be manufactured more easily,
Rice materials theorist Boris Yakobson, graduate student Jincheng Lei and alumnus Yu Xie of Rice’s Brown School of Engineering have unveiled how a popular 2D material, molybdenum disulfide (MoS2), flashes into existence during chemical vapor deposition (CVD).
Knowing how the process works will give scientists and engineers a way to optimize the bulk manufacture of MoS2 and other valuable materials classed as transition metal dichalcogenides (TMDs), semiconducting crystals that are good bets to find a home in next-generation electronics.
Their study in the American Chemical Society journal ACS Nano focuses on MoS2’s “pre-history,” specifically what happens in a CVD furnace once all the solid ingredients are in place. CVD, often associated with graphene and carbon nanotubes, has been exploited to make a variety of 2D materials by providing solid precursors and catalysts that sublimate into gas and react. The chemistry dictates which molecules fall out of the gas and settle on a substrate, like copper or silicon, and assemble into a 2D crystal.
The problem has been that once the furnace cranks up, it’s impossible to see or measure the complicated chain of reactions in the chemical stew in real time.
“Hundreds of labs are cooking these TMDs, quite oblivious to the intricate transformations occurring in the dark oven,” said Yakobson, the Karl F. Hasselmann Professor of Materials Science and NanoEngineering and a professor of chemistry. “Here, we’re using quantum-chemical simulations and analysis to reveal what’s there, in the dark, that leads to synthesis.”
Yakobson’s theories often lead experimentalists to make his predictions come true. (For example, boron buckyballs.) This time, the Rice lab determined the path molybdenum oxide (MoO3) and sulfur powder take to deposit an atomically thin lattice onto a surface.
The short answer is that it takes three steps. First, the solids are sublimated through heating to change them from solid to gas, including what Yakobson called a “beautiful” ring-molecule, trimolybdenum nonaoxide (Mo3O9). Second, the molybdenum-containing gases react with sulfur atoms under high heat, up to 4,040 degrees Fahrenheit (about 2,227 degrees Celsius). Third, molybdenum and sulfur molecules fall to the surface, where they crystallize into the jacks-like lattice that is characteristic of TMDs.
What happens in the middle step was of most interest to the researchers. The lab’s simulations showed that a trio of main gas-phase reactants are the prime suspects in making MoS2: sulfur, the ring-like Mo3O9 molecules that form in sulfur’s presence and the subsequent MoS6 hybrid that forms the crystal, releasing excess sulfur atoms in the process.
Lei said the molecular dynamics simulations showed the activation barriers that must be overcome to move the process along, usually in picoseconds.
“In our molecular dynamics simulation, we find that this ring is opened by its interaction with sulfur, which attacks oxygen connected to the molybdenum atoms,” he said. “The ring becomes a chain, and further interactions with the sulfur molecules separate this chain into molybdenum sulfide monomers. The most important part is the chain breaking, which overcomes the highest energy barrier.”
That realization could help labs streamline the process, Lei said. “If we can find precursor molecules with only one molybdenum atom, we would not need to overcome the high barrier of breaking the chain,” he said.
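For readers who like numbers: the reason the chain-breaking step dominates is the exponential sensitivity of reaction rates to their activation barriers (the Arrhenius relation). Here is a minimal Python sketch; the barrier heights, temperature, and prefactor are hypothetical values chosen purely to illustrate the scale of the effect, not the paper’s computed energetics,

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_rate(barrier_ev, temp_k, prefactor=1e13):
    """Estimate a reaction rate (1/s) from an activation barrier
    via the Arrhenius relation: rate = A * exp(-Ea / (kB * T))."""
    return prefactor * math.exp(-barrier_ev / (K_B * temp_k))

temp = 2500.0  # K, roughly the furnace regime described above
# Hypothetical barriers: an easy ring-opening step vs. a higher chain-breaking step
for step, barrier in [("ring opening", 1.0), ("chain breaking", 2.5)]:
    print(f"{step}: Ea = {barrier} eV, rate ≈ {arrhenius_rate(barrier, temp):.2e} /s")
```

Even at furnace temperatures, an extra 1.5 eV of barrier slows the step by roughly three orders of magnitude, which is why eliminating the chain-breaking step (as Lei suggests) would matter.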
Yakobson said the study could apply to other TMDs.
“The findings raise oftentimes empirical nanoengineering to become a basic science-guided endeavor, where processes can be predicted and optimized,” he said, noting that while the chemistry has been generally known since the discovery of TMD fullerenes in the early ’90s, understanding the specifics will further the development of 2D synthesis.
“Only now can we ‘sequence’ the step-by-step chemistry involved,” Yakobson said. “That will allow us to improve the quality of 2D material, and also see which gas side-products might be useful and captured on the way, opening opportunities for chemical engineering.”
Now for a different story, this one about using graphene as a ‘camera’ to image the electrical activity of a beating heart. Comparing it to a ‘camera’, even with the quotes, is a bit of a stretch for my taste but I can’t come up with a better comparison. Here’s a video so you can judge for yourself,
Caption: This video repeats three times the graphene camera images of a single beat of an embryonic chicken heart. The images, separated by 5 milliseconds, were measured by a laser bouncing off a graphene sheet lying beneath the heart. The images are about 2 millimeters on a side. Credit: UC Berkeley images by Halleh Balch, Allister McGuire and Jason Horng
Bay Area [San Francisco, California] scientists have captured the real-time electrical activity of a beating heart, using a sheet of graphene to record an optical image — almost like a video camera — of the faint electric fields generated by the rhythmic firing of the heart’s muscle cells.
The graphene camera represents a new type of sensor useful for studying cells and tissues that generate electrical voltages, including groups of neurons or cardiac muscle cells. To date, electrodes or chemical dyes have been used to measure electrical firing in these cells. But electrodes and dyes measure the voltage at one point only; a graphene sheet measures the voltage continuously over all the tissue it touches.
The development, published online last week in the journal Nano Letters, comes from a collaboration between two teams of quantum physicists at the University of California, Berkeley, and physical chemists at Stanford University.
“Because we are imaging all cells simultaneously onto a camera, we don’t have to scan, and we don’t have just a point measurement. We can image the entire network of cells at the same time,” said Halleh Balch, one of three first authors of the paper and a recent Ph.D. recipient in UC Berkeley’s Department of Physics.
While the graphene sensor works without having to label cells with dyes or tracers, it can easily be combined with standard microscopy to image fluorescently labeled nerve or muscle tissue while simultaneously recording the electrical signals the cells use to communicate.
“The ease with which you can image an entire region of a sample could be especially useful in the study of neural networks that have all sorts of cell types involved,” said another first author of the study, Allister McGuire, who recently received a Ph.D. from Stanford. “If you have a fluorescently labeled cell system, you might only be targeting a certain type of neuron. Our system would allow you to capture electrical activity in all neurons and their support cells with very high integrity, which could really impact the way that people do these network level studies.”
Graphene is a one-atom thick sheet of carbon atoms arranged in a two-dimensional hexagonal pattern reminiscent of honeycomb. The 2D structure has captured the interest of physicists for several decades because of its unique electrical properties and robustness and its interesting optical and optoelectronic properties.
“This is maybe the first example where you can use an optical readout of 2D materials to measure biological electrical fields,” said senior author Feng Wang, UC Berkeley professor of physics. “People have used 2D materials to do some sensing with pure electrical readout before, but this is unique in that it works with microscopy so that you can do parallel detection.”
The team calls the tool a critically coupled waveguide-amplified graphene electric field sensor, or CAGE sensor.
“This study is just a preliminary one; we want to showcase to biologists that there is such a tool you can use, and you can do great imaging. It has fast time resolution and great electric field sensitivity,” said the third first author, Jason Horng, a UC Berkeley Ph.D. recipient who is now a postdoctoral fellow at the National Institute of Standards and Technology. “Right now, it is just a prototype, but in the future, I think we can improve the device.”
Graphene is sensitive to electric fields
Ten years ago, Wang discovered that an electric field affects how graphene reflects or absorbs light. Balch and Horng exploited this discovery in designing the graphene camera. They obtained a sheet of graphene about 1 centimeter on a side produced by chemical vapor deposition in the lab of UC Berkeley physics professor Michael Crommie and placed on it a live heart from a chicken embryo, freshly extracted from a fertilized egg. These experiments were performed in the Stanford lab of Bianxiao Cui, who develops nanoscale tools to study electrical signaling in neurons and cardiac cells.
The team showed that when the graphene was tuned properly, the electrical signals that flowed along the surface of the heart during a beat were sufficient to change the reflectance of the graphene sheet.
“When cells contract, they fire action potentials that generate a small electric field outside of the cell,” Balch said. “The absorption of graphene right under that cell is modified, so we will see a change in the amount of light that comes back from that position on the large area of graphene.”
In initial studies, however, Horng found that the change in reflectance was too small to detect easily. An electric field reduces the reflectance of graphene by at most 2%, and the changes produced by the fields of heart muscle cells firing an action potential were much smaller still.
Together, Balch, Horng and Wang found a way to amplify this signal by adding a thin waveguide below graphene, forcing the reflected laser light to bounce internally about 100 times before escaping. This made the change in reflectance detectable by a normal optical video camera.
“One way of thinking about it is that the more times that light bounces off of graphene as it propagates through this little cavity, the more effects that light feels from graphene’s response, and that allows us to obtain very, very high sensitivity to electric fields and voltages down to microvolts,” Balch said.
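Balch’s description can be put into toy numbers: if a single pass changes the reflected intensity by a tiny fraction, roughly 100 internal bounces compound that into a detectable contrast. A small Python sketch, with an assumed single-pass modulation well below the 2% maximum,

```python
def multi_pass_contrast(delta_single, n_passes):
    """Fractional change in detected intensity after the light interacts
    with graphene n_passes times: 1 - (1 - delta)^n."""
    return 1.0 - (1.0 - delta_single) ** n_passes

# Hypothetical single-pass modulation from a weak cellular field
delta = 2e-4
for n in (1, 10, 100):
    print(f"{n:>3} passes -> contrast {multi_pass_contrast(delta, n):.4%}")
```

With these assumed numbers, a 0.02% single-pass effect becomes a contrast near 2% after 100 bounces, which an ordinary video camera can resolve.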
The increased amplification necessarily lowers the spatial resolution of the image, but at 10 microns it is more than enough to study cardiac cells that are several tens of microns across, she said.
Another application, McGuire said, is to test the effect of drug candidates on heart muscle before these drugs go into clinical trials to see whether, for example, they induce an unwanted arrhythmia. To demonstrate this, he and his colleagues observed the beating chicken heart with CAGE and an optical microscope while infusing it with a drug, blebbistatin, that inhibits the muscle protein myosin. They observed the heart stop beating, but CAGE showed that the electrical signals were unaffected.
Because graphene sheets are mechanically tough, they could also be placed directly on the surface of the brain to get a continuous measure of electrical activity — for example, to monitor neuron firing in the brains of those with epilepsy or to study fundamental brain activity. Today’s electrode arrays measure activity at a few hundred points, not continuously over the brain surface.
“One of the things that is amazing to me about this project is that electric fields mediate chemical interactions, mediate biophysical interactions — they mediate all sorts of processes in the natural world — but we never measure them. We measure current, and we measure voltage,” Balch said. “The ability to actually image electric fields gives you a look at a modality that you previously had little insight into.”
I have one research announcement from China and another from the Netherlands, both of which concern memristors and oxides.
A May 17, 2021 news item on Nanowerk announces work suggesting that memristors may not need to rely solely on oxides but could instead use light to better advantage,
Scientists are getting better at making neuron-like junctions for computers that mimic the human brain’s random information processing, storage and recall. Fei Zhuge of the Chinese Academy of Sciences and colleagues reviewed the latest developments in the design of these ‘memristors’ for the journal Science and Technology of Advanced Materials …
Computers apply artificial intelligence programs to recall previously learned information and make predictions. These programs are extremely energy- and time-intensive: typically, vast volumes of data must be transferred between separate memory and processing units. To solve this issue, researchers have been developing computer hardware that allows for more random and simultaneous information transfer and storage, much like the human brain.
Electronic circuits in these ‘neuromorphic’ computers include memristors that resemble the junctions between neurons called synapses. Energy flows through a material from one electrode to another, much like a neuron firing a signal across the synapse to the next neuron. Scientists are now finding ways to better tune this intermediate material so the information flow is more stable and reliable.
I had no success locating the original news release, which originated the news item, but have found this May 17, 2021 news item on eedesignit.com, which provides the remaining portion of the news release.
“Oxides are the most widely used materials in memristors,” said Zhuge. “But oxide memristors have unsatisfactory stability and reliability. Oxide-based hybrid structures can effectively improve this.”
Memristors are usually made of an oxide-based material sandwiched between two electrodes. Researchers are getting better results when they combine two or more layers of different oxide-based materials between the electrodes. When an electrical current flows through the network, it induces ions to drift within the layers. The ions’ movements ultimately change the memristor’s resistance, which is necessary to send or stop a signal through the junction.
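The ion-drift picture described above is often illustrated with the simple linear-drift memristor model, in which the resistance interpolates between a low (R_on) and high (R_off) value as ions shift a doped/undoped boundary. Here is a toy Python sketch with assumed, illustrative parameter values; it is not the device physics of any specific oxide stack,

```python
def memristor_step(w, voltage, dt, mu=1e-14, D=1e-8, R_on=100.0, R_off=16000.0):
    """One explicit-Euler step of the linear ion-drift memristor model:
    the doped-region fraction w (0..1) drifts under the applied voltage,
    moving the device resistance between R_on and R_off."""
    R = R_on * w + R_off * (1.0 - w)            # current resistance
    current = voltage / R
    w = w + mu * R_on / D**2 * current * dt     # ion drift moves the boundary
    w = min(max(w, 0.0), 1.0)                   # boundary stays inside the film
    return w, R

w = 0.1
for _ in range(1000):
    w, R = memristor_step(w, voltage=1.0, dt=1e-3)
# a sustained positive bias drives the device toward its low-resistance state
print(f"final state w = {w:.3f}, resistance ≈ {R:.0f} ohms")
```

Reversing the voltage polarity drifts the boundary back, which is the sense in which a memristor “remembers” the history of the current through it.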
Memristors can be tuned further by changing the compounds used for electrodes or by adjusting the intermediate oxide-based materials. Zhuge and his team are currently developing optoelectronic neuromorphic computers based on optically-controlled oxide memristors. Compared to electronic memristors, photonic ones are expected to have higher operation speeds and lower energy consumption. They could be used to construct next generation artificial visual systems with high computing efficiency.
Now for a picture that accompanied the news release, which follows,
A research group led by Prof. ZHUGE Fei at the Ningbo Institute of Materials Technology and Engineering (NIMTE) of the Chinese Academy of Sciences (CAS) developed an all-optically controlled (AOC) analog memristor, whose memconductance can be reversibly tuned by varying only the wavelength of the controlling light.
As the next generation of artificial intelligence (AI), neuromorphic computing (NC) emulates the neural structure and operation of the human brain at the physical level, and thus can efficiently perform multiple advanced computing tasks such as learning, recognition and cognition.
Memristors are promising candidates for NC thanks to the feasibility of high-density 3D integration and low energy consumption. Among them, the emerging optoelectronic memristors are competitive by virtue of combining the advantages of both photonics and electronics. However, the reversible tuning of memconductance depends highly on electric excitation, which has severely limited the development and application of optoelectronic NC.
To address this issue, researchers at NIMTE proposed a bilayered oxide AOC memristor, based on the relatively mature semiconductor material InGaZnO and a memconductance tuning mechanism of light-induced electron trapping and detrapping.
The traditional electrical memristors require strong electrical stimuli to tune their memconductance, leading to high power consumption, a large amount of Joule heat, microstructural change triggered by the Joule heat, and even high crosstalk in memristor crossbars.
By contrast, the developed AOC memristor does not involve microstructural changes and can operate under weak light irradiation with a light power density of only 20 μW/cm², which provides a new approach to overcoming the instability of memristors.
Specifically, the AOC memristor can serve as an excellent synaptic emulator and thus mimic spike-timing-dependent plasticity (STDP) which is an important learning rule in the brain, indicating its potential applications in AOC spiking neural networks for high-efficiency optoelectronic NC.
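STDP, the learning rule mentioned in the release, is commonly written as an exponential weight update that depends on the time difference between pre- and postsynaptic spikes. A minimal Python sketch with conventional, assumed parameter values (not the memristor’s measured response),

```python
import math

def stdp_delta_w(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Weight change under a standard pair-based STDP rule:
    potentiation when the presynaptic spike precedes the postsynaptic
    spike (dt > 0), depression when it follows (dt < 0); the effect
    decays exponentially with the spike-timing gap."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

for dt in (-40, -10, 5, 30):
    print(f"Δt = {dt:+d} ms -> Δw = {stdp_delta_w(dt):+.4f}")
```

In the AOC device, light pulses at different wavelengths play the role of the spikes that raise or lower the memconductance.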
Moreover, compared to purely optical computing, the optoelectronic computing using our AOC memristor showed higher practical feasibility, on account of the simple structure and fabrication process of the device.
The study may shed light on the in-depth research and practical application of optoelectronic NC, and thus promote the development of the new generation of AI.
This work was supported by the National Natural Science Foundation of China (No. 61674156 and 61874125), the Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32050204), and the Zhejiang Provincial Natural Science Foundation of China (No. LD19E020001).
Classic computers use binary values (0/1) to perform calculations. By contrast, our brain cells can use a range of values to operate, making them more energy-efficient than computers. This is why scientists are interested in neuromorphic (brain-like) computing.
Physicists from the University of Groningen (the Netherlands) have used a complex oxide to create elements comparable to the neurons and synapses in the brain using spins, a magnetic property of electrons.
The press release, which follows, was accompanied by this image illustrating the work,
Although computers can do straightforward calculations much faster than humans, our brains outperform silicon machines in tasks like object recognition. Furthermore, our brain uses less energy than computers. Part of this can be explained by the way our brain operates: whereas a computer uses a binary system (with values 0 or 1), brain cells can provide more analogue signals with a range of values.
The operation of our brains can be simulated in computers, but the basic architecture still relies on a binary system. That is why scientists look for ways to expand this, creating hardware that is more brain-like, but will also interface with normal computers. ‘One idea is to create magnetic bits that can have intermediate states’, says Tamalika Banerjee, Professor of Spintronics of Functional Materials at the Zernike Institute for Advanced Materials, University of Groningen. She works on spintronics, which uses a magnetic property of electrons called ‘spin’ to transport, manipulate and store information.
In this study, her PhD student Anouk Goossens, first author of the paper, created thin films of a ferromagnetic metal (strontium ruthenate, SRO) grown on a substrate of strontium titanate. The resulting thin film contained magnetic domains that were perpendicular to the plane of the film. ‘These can be switched more efficiently than in-plane magnetic domains’, explains Goossens. By adapting the growth conditions, it is possible to control the crystal orientation in the SRO. Previously, out-of-plane magnetic domains have been made using other techniques, but these typically require complex layer structures.
The magnetic domains can be switched using a current through a platinum electrode on top of the SRO. Goossens: ‘When the magnetic domains are oriented perfectly perpendicular to the film, this switching is deterministic: the entire domain will switch.’ However, when the magnetic domains are slightly tilted, the response is probabilistic: not all the domains are the same, and intermediate values occur when only part of the crystals in the domain have switched.
By choosing variants of the substrate on which the SRO is grown, the scientists can control its magnetic anisotropy. This allows them to produce two different spintronic devices. ‘This magnetic anisotropy is exactly what we wanted’, says Goossens. ‘Probabilistic switching compares to how neurons function, while the deterministic switching is more like a synapse.’
The scientists expect that in the future, brain-like computer hardware can be created by combining these different domains in a spintronic device that can be connected to standard silicon-based circuits. Furthermore, probabilistic switching would also allow for stochastic computing, a promising technology which represents continuous values by streams of random bits. Banerjee: ‘We have found a way to control intermediate states, not just for memory but also for computing.’
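Stochastic computing, mentioned by Banerjee, really is as simple as it sounds: a value between 0 and 1 becomes the probability of a 1 in a random bit stream, and multiplication reduces to a bitwise AND of two streams. A minimal Python sketch (the stream length and seed are arbitrary choices of mine),

```python
import random

def to_stream(p, n, rng):
    """Encode a probability p in [0, 1] as a stream of n random bits."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(p, q, n=100_000, seed=42):
    """Multiply two values by ANDing their bit streams bitwise;
    the fraction of 1s in the result approximates p * q."""
    rng = random.Random(seed)
    a, b = to_stream(p, n, rng), to_stream(q, n, rng)
    return sum(x & y for x, y in zip(a, b)) / n

print(stochastic_multiply(0.5, 0.8))  # close to 0.4
```

A probabilistically switching magnetic domain is a natural hardware source for such bit streams, which is why the Groningen team highlights the connection.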
Northwestern University researchers are building social bonds with beams of light.
For the first time ever, Northwestern engineers and neurobiologists have wirelessly programmed — and then deprogrammed — mice to socially interact with one another in real time. The advancement is thanks to a first-of-its-kind ultraminiature, wireless, battery-free and fully implantable device that uses light to activate neurons.
This study is the first optogenetics paper (optogenetics is a method for controlling neurons with light) to explore social interactions within groups of animals, something that was previously impossible with existing technologies.
The research was published May 10, 2021 in the journal Nature Neuroscience.
The thin, flexible, wireless nature of the implant allows the mice to look normal and behave normally in realistic environments, enabling researchers to observe them under natural conditions. Previous research using optogenetics required fiberoptic wires, which restrained mouse movements and caused them to become entangled during social interactions or in complex environments.
“With previous technologies, we were unable to observe multiple animals socially interacting in complex environments because they were tethered,” said Northwestern neurobiologist Yevgenia Kozorovitskiy, who designed the experiment. “The fibers would break or the animals would become entangled. In order to ask more complex questions about animal behavior in realistic environments, we needed this innovative wireless technology. It’s tremendous to get away from the tethers.”
“This paper represents the first time we’ve been able to achieve wireless, battery-free implants for optogenetics with full, independent digital control over multiple devices simultaneously in a given environment,” said Northwestern bioelectronics pioneer John A. Rogers, who led the technology development. “Brain activity in an isolated animal is interesting, but going beyond research on individuals to studies of complex, socially interacting groups is one of the most important and exciting frontiers in neuroscience. We now have the technology to investigate how bonds form and break between individuals in these groups and to examine how social hierarchies arise from these interactions.”
Kozorovitskiy is the Soretta and Henry Shapiro Research Professor of Molecular Biology and associate professor of neurobiology in Northwestern’s Weinberg College of Arts and Sciences. She also is a member of the Chemistry of Life Processes Institute. Rogers is the Louis Simpson and Kimberly Querrey Professor of Materials Science and Engineering, Biomedical Engineering and Neurological Surgery in the McCormick School of Engineering and Northwestern University Feinberg School of Medicine and the director of the Querrey Simpson Institute for Bioelectronics.
Kozorovitskiy and Rogers led the work with Yonggang Huang, the Jan and Marcia Achenbach Professor in Mechanical Engineering at McCormick, and Zhaoqian Xie, a professor of engineering mechanics at Dalian University of Technology in China. The paper’s co-first authors are Yiyuan Yang, Mingzheng Wu and Abraham Vázquez-Guardado — all at Northwestern.
Promise and problems of optogenetics
Because the human brain is a system of nearly 100 billion intertwined neurons, it’s extremely difficult to probe single — or even groups of — neurons. Introduced in animal models around 2005, optogenetics offers control of specific, genetically targeted neurons in order to probe them in unprecedented detail to study their connectivity or neurotransmitter release. Researchers first modify neurons in living mice to express a modified gene from light-sensitive algae. Then they can use external light to specifically control and monitor brain activity. Because of the genetic engineering involved, the method is not yet approved in humans.
“It sounds like sci-fi, but it’s an incredibly useful technique,” Kozorovitskiy said. “Optogenetics could someday soon be used to fix blindness or reverse paralysis.”
Previous optogenetics studies, however, were limited by the available technology to deliver light. Although researchers could easily probe one animal in isolation, it was challenging to simultaneously control neural activity in flexible patterns within groups of animals interacting socially. Fiberoptic wires typically emerged from an animal’s head, connecting to an external light source. Then a software program could be used to turn the light off and on, while monitoring the animal’s behavior.
“As the animals moved around, the fibers tugged in different ways,” Rogers said. “As expected, these effects changed the animal’s patterns of motion. One, therefore, has to wonder: What behavior are you actually studying? Are you studying natural behaviors or behaviors associated with a physical constraint?”
Wireless control in real time
A world-renowned leader in wireless, wearable technology, Rogers and his team developed a tiny, wireless device that gently rests on the skull’s outer surface but beneath the skin and fur of a small animal. The half-millimeter-thick device connects to a fine, flexible filamentary probe with LEDs on the tip, which extends down into the brain through a tiny cranial defect.
The miniature device leverages near-field communication protocols, the same technology used in smartphones for electronic payments. Researchers wirelessly operate the light in real time with a user interface on a computer. An antenna surrounding the animals’ enclosure delivers power to the wireless device, thereby eliminating the need for a bulky, heavy battery.
Activating social connections
To establish proof of principle for Rogers’ technology, Kozorovitskiy and colleagues designed an experiment to explore an optogenetics approach to remote-control social interactions among pairs or groups of mice.
When mice were physically near one another in an enclosed environment, Kozorovitskiy’s team wirelessly and synchronously activated a set of neurons in a brain region related to higher order executive function, causing the mice to increase the frequency and duration of social interactions. Desynchronizing the stimulation promptly decreased social interactions in the same pair of mice. In a group setting, researchers could bias an arbitrarily chosen pair to interact more than others.
“We didn’t actually think this would work,” Kozorovitskiy said. “To our knowledge, this is the first direct evaluation of a major long-standing hypothesis about neural synchrony in social behavior.”
Here’s a citation and a link to the paper,
Wireless multilateral devices for optogenetic studies of individual and social behaviors by Yiyuan Yang, Mingzheng Wu, Amy J. Wegener, Jose G. Grajales-Reyes, Yujun Deng, Taoyi Wang, Raudel Avila, Justin A. Moreno, Samuel Minkowicz, Vasin Dumrongprechachan, Jungyup Lee, Shuangyang Zhang, Alex A. Legaria, Yuhang Ma, Sunita Mehta, Daniel Franklin, Layne Hartman, Wubin Bai, Mengdi Han, Hangbo Zhao, Wei Lu, Yongjoon Yu, Xing Sheng, Anthony Banks, Xinge Yu, Zoe R. Donaldson, Robert W. Gereau IV, Cameron H. Good, Zhaoqian Xie, Yonggang Huang, Yevgenia Kozorovitskiy and John A. Rogers. Nature Neuroscience (2021) DOI: https://doi.org/10.1038/s41593-021-00849-x Published 10 May 2021
This paper is behind a paywall.
This latest research seems to be the continuation of research featured here in a July 16, 2019 posting: “Controlling neurons with light: no batteries or wires needed.”
Researchers in the Nanoscience Center of the University of Jyväskylä in Finland and at the University of Guadalajara in Mexico developed a method that allows for simulation and visualization of magnetic-field-induced electron currents inside gold nanoparticles. The method facilitates accurate analysis of magnetic field effects inside complex nanostructures in nuclear magnetic resonance measurements and establishes quantitative criteria for the aromaticity of nanoparticles. The work was published April 30, 2021 as an Open Access article in Nature Communications.
According to classical electromagnetism, a charged particle moving in an external magnetic field experiences a force that makes the particle’s path circular. This basic law of physics is used, e.g., in designing cyclotrons that work as particle accelerators. When nanometer-size metal particles are placed in a magnetic field, the field induces a circulating electron current inside the particle. The circulating current in turn creates an internal magnetic field that opposes the external field. This physical effect is called magnetic shielding.
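The cyclotron example lends itself to a quick calculation: for a particle of mass m and charge q at speed v in a field B, the circular path has radius r = mv/(qB) and frequency f = qB/(2πm). A small Python sketch with illustrative numbers for an electron,

```python
import math

ELECTRON_MASS = 9.109e-31    # kg
ELECTRON_CHARGE = 1.602e-19  # C

def cyclotron(speed_m_s, b_tesla, mass=ELECTRON_MASS, charge=ELECTRON_CHARGE):
    """Radius and frequency of a charged particle's circular path
    in a uniform magnetic field: r = m*v/(q*B), f = q*B/(2*pi*m)."""
    radius = mass * speed_m_s / (charge * b_tesla)
    freq = charge * b_tesla / (2 * math.pi * mass)
    return radius, freq

# An electron at 1% of light speed in a 1 T field (illustrative numbers only)
r, f = cyclotron(3e6, 1.0)
print(f"radius ≈ {r * 1e6:.1f} micrometers, frequency ≈ {f / 1e9:.1f} GHz")
```

Inside a one-nanometer gold particle the electrons are of course quantum mechanical, which is exactly why the Jyväskylä group needed full quantum-chemical methods rather than this classical picture.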
The strength of the shielding can be investigated by using nuclear magnetic resonance (NMR) spectroscopy. The internal magnetic shielding varies strongly on an atomic length scale even inside a nanometer-size particle. Understanding these atom-scale variations is possible only by employing a quantum mechanical theory of the electronic properties of each atom making up the nanoparticle.
Now, the research group of Professor Hannu Häkkinen at the University of Jyväskylä, in collaboration with the University of Guadalajara in Mexico, developed a method to compute, visualize and analyze the circulating electron currents inside complex 3D nanostructures. The method was applied to gold nanoparticles with a diameter of only about one nanometer. The calculations shed light on unexplained experimental results from previous NMR measurements in the literature regarding how the magnetic shielding inside the particle changes when one gold atom is replaced by a platinum atom.
A new quantitative measure to characterize aromaticity inside metal nanoparticles was also developed based on the total integrated strength of the shielding electron current.
“Aromaticity of molecules is one of the oldest concepts in chemistry, and it has been traditionally connected to ring-like organic molecules and to their delocalized valence electron density that can develop circulating currents in an external magnetic field. However, generally accepted quantitative criteria for the degree of aromaticity have been lacking. Our method yields now a new tool to study and analyze electron currents at the resolution of one atom inside any nanostructure, in principle. The peer reviewers of our work considered this as a significant advancement in the field”, says Professor Häkkinen who coordinated the research.
An April 30, 2021 news item on Nanowerk announced research in the field of neuromorphic (brainlike) computing from a joint team of researchers at Northwestern University (Evanston, Illinois, US) and the University of Hong Kong,
Researchers have developed a brain-like computing device that is capable of learning by association.
Similar to how famed physiologist Ivan Pavlov conditioned dogs to associate a bell with food, researchers at Northwestern University and the University of Hong Kong successfully conditioned their circuit to associate light with pressure.
The device’s secret lies within its novel organic, electrochemical “synaptic transistors,” which simultaneously process and store information just like the human brain. The researchers demonstrated that the transistor can mimic the short-term and long-term plasticity of synapses in the human brain, building on memories to learn over time.
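The short-term/long-term distinction can be pictured with a toy model: a synaptic weight made of a fast-decaying transient component plus a slowly accumulating persistent one. The sketch below is my own illustration in Python; the class name, constants, and decay rule are invented for the example and are not from the paper.

```python
# Toy model of short-term vs. long-term synaptic plasticity
# (names and constants are illustrative, not from the paper).
# Short-term plasticity: a transient boost that decays between stimuli.
# Long-term plasticity: a small residual change that accumulates with use,
# loosely analogous to the trapped ions described for the device.

class ToySynapse:
    def __init__(self, decay=0.5, retention=0.05):
        self.short_term = 0.0   # decays quickly between stimuli
        self.long_term = 0.0    # persists across rest periods
        self.decay = decay
        self.retention = retention

    def stimulate(self, strength=1.0):
        self.short_term += strength
        self.long_term += self.retention * strength  # a fraction is retained

    def rest(self):
        self.short_term *= self.decay  # transient part fades

    @property
    def weight(self):
        return self.short_term + self.long_term

syn = ToySynapse()
for _ in range(5):        # repeated stimulation builds up both components
    syn.stimulate()
    syn.rest()
for _ in range(20):       # a long rest: only the long-term part survives
    syn.rest()
print(round(syn.weight, 3))  # dominated by the accumulated long-term change
```

The point of the analogy: after training stops, the transient component decays away, but the accumulated change remains, which is the "building on memories to learn over time" behavior described above.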
With its brain-like ability, the novel transistor and circuit could potentially overcome the limitations of traditional computing, including energy-sapping hardware and a limited ability to perform multiple tasks at the same time. The brain-like device also has higher fault tolerance, continuing to operate smoothly even when some components fail.
“Although the modern computer is outstanding, the human brain can easily outperform it in some complex and unstructured tasks, such as pattern recognition, motor control and multisensory integration,” said Northwestern’s Jonathan Rivnay, a senior author of the study. “This is thanks to the plasticity of the synapse, which is the basic building block of the brain’s computational power. These synapses enable the brain to work in a highly parallel, fault tolerant and energy-efficient manner. In our work, we demonstrate an organic, plastic transistor that mimics key functions of a biological synapse.”
Rivnay is an assistant professor of biomedical engineering at Northwestern’s McCormick School of Engineering. He co-led the study with Paddy Chan, an associate professor of mechanical engineering at the University of Hong Kong. Xudong Ji, a postdoctoral researcher in Rivnay’s group, is the paper’s first author.
Conventional, digital computing systems have separate processing and storage units, causing data-intensive tasks to consume large amounts of energy. Inspired by the combined computing and storage process in the human brain, researchers, in recent years, have sought to develop computers that operate more like the human brain, with arrays of devices that function like a network of neurons.
“The way our current computer systems work is that memory and logic are physically separated,” Ji said. “You perform computation and send that information to a memory unit. Then every time you want to retrieve that information, you have to recall it. If we can bring those two separate functions together, we can save space and save on energy costs.”
Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function, but memristors suffer from energy-costly switching and lower biocompatibility. These drawbacks led researchers to the synaptic transistor — especially the organic electrochemical synaptic transistor, which operates with low voltages, continuously tunable memory and high compatibility for biological applications. Still, challenges exist.
“Even high-performing organic electrochemical synaptic transistors require the write operation to be decoupled from the read operation,” Rivnay said. “So if you want to retain memory, you have to disconnect it from the write process, which can further complicate integration into circuits or systems.”
How the synaptic transistor works
To overcome these challenges, the Northwestern and University of Hong Kong team optimized a conductive, plastic material within the organic, electrochemical transistor that can trap ions. In the brain, a synapse is a structure through which a neuron can transmit signals to another neuron, using small molecules called neurotransmitters. In the synaptic transistor, ions behave similarly to neurotransmitters, sending signals between terminals to form an artificial synapse. By retaining stored data from trapped ions, the transistor remembers previous activities, developing long-term plasticity.
The researchers demonstrated their device’s synaptic behavior by connecting single synaptic transistors into a neuromorphic circuit to simulate associative learning. They integrated pressure and light sensors into the circuit and trained the circuit to associate the two unrelated physical inputs (pressure and light) with one another.
Perhaps the most famous example of associative learning is Pavlov’s dog, which naturally drooled when it encountered food. After conditioning the dog to associate a bell ring with food, the dog also began drooling when it heard the sound of a bell. For the neuromorphic circuit, the researchers activated a voltage by applying pressure with a finger press. To condition the circuit to associate light with pressure, the researchers first applied pulsed light from an LED lightbulb and then immediately applied pressure. In this scenario, the pressure is the food and the light is the bell. The device’s corresponding sensors detected both inputs.
After one training cycle, the circuit made an initial connection between light and pressure. After five training cycles, the circuit significantly associated light with pressure. Light, alone, was able to trigger a signal, the “conditioned response.”
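The training loop described above maps neatly onto a Hebbian-style update: strengthen the light pathway whenever light co-occurs with an active output. This pure-software analogue is my own sketch, not the paper's circuit; the threshold, learning rate, and variable names are invented for illustration.

```python
# A minimal sketch of the Pavlovian training loop described above,
# using a Hebbian-style weight update (all parameters illustrative).

THRESHOLD = 0.5       # output "fires" when total drive exceeds this
LEARNING_RATE = 0.15  # how strongly co-activation strengthens the link

w_pressure = 1.0  # unconditioned stimulus: pressure always triggers output
w_light = 0.0     # conditioned stimulus: light starts out ineffective

def response(pressure, light):
    """True when the combined stimulus drive exceeds the firing threshold."""
    return w_pressure * pressure + w_light * light > THRESHOLD

# Training: present light and pressure together; the Hebbian rule
# strengthens the light pathway whenever light accompanies a firing output.
for cycle in range(5):
    if response(pressure=1, light=1):
        w_light += LEARNING_RATE

print(response(pressure=0, light=1))  # True: light alone now triggers output
```

Before training, `response(pressure=0, light=1)` is False; after five paired presentations the light weight alone clears the threshold, mirroring the five training cycles reported for the circuit.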
Because the synaptic circuit is made of soft polymers, like a plastic, it can be readily fabricated on flexible sheets and easily integrated into soft, wearable electronics, smart robotics and implantable devices that directly interface with living tissue and even the brain [emphasis mine].
“While our application is a proof of concept, our proposed circuit can be further extended to include more sensory inputs and integrated with other electronics to enable on-site, low-power computation,” Rivnay said. “Because it is compatible with biological environments, the device can directly interface with living tissue, which is critical for next-generation bioelectronics.”
I love the video (wish the narrator had a more conversational style rather than the ‘read aloud’ style so many of us adopted in school),
Joel Goldberg’s April 28, 2021 news article (short read) in Science magazine online describes the research (Note: A link has been removed),
Behold the salt monsters. These twisted mineral crystals—formed from the buildup of slightly salty water in power plant pipes—come in many shapes and sizes. But the tiny monsters are a big problem: Each year, they cost the world’s power plants at least $100 billion because workers have to purge the pipes and scrub the crystals from filters.
Now, a solution may be at hand. Engineers can reduce the damage by coating the insides of the pipes with textured, water-repellant [hydrophobic] surfaces …
This paper is open access. As research papers go, this is quite readable, from the Introduction (Note: Links have been removed),
Many of the uses for water are intimately familiar to us. Drinking water, wash water, water for agriculture, and even water used for recreation have an omnipresent and essential impact on our lives. However, water’s impact and importance extend far beyond these everyday uses. In many developed countries, thermoelectric power production is one of the largest sources of water consumption (1), where it is used to cool reactors and transport heat. In 2015, 41% of all surface water withdrawals in the United States went toward cooling in thermoelectric power plants (2). Thermoelectric power accounts for 90% of all electricity generated within the United States and encompasses many forms of power production, including nuclear, coal, natural gas, and oil.
Rebecca Trager in a March 5, 2021 news article for Chemistry World highlights support for Charles M. Lieber (Harvard professor and chair of the chemistry department) from his colleagues (Note: Links have been removed),
More than a year after the chair of Harvard University’s chemistry department was arrested for allegedly hiding his receipt of millions of dollars in research funding from China from his university and the US government, dozens of prominent researchers – including many Nobel Prize winners – are coming to Charles Lieber’s defence. They are calling the US Department of Justice (DOJ) case against him ‘unjust’ and urging the agency to drop it.
Following his January 2020 arrest, Lieber was placed on ‘indefinite’ paid administrative leave. The nanoscience pioneer was indicted in June on charges of making false statements to federal authorities regarding his participation in China’s Thousand Talents plan – the country’s programme to attract, recruit and cultivate high-level scientific talent from abroad. Lieber faces up to five years in prison and a fine of $250,000 (£179,000) if convicted.
A 1 March open letter, drafted and coordinated by Harvard chemist Stuart Schreiber, co-founder of the Broad Institute, and professor emeritus Elias Corey, winner of the 1990 chemistry Nobel prize, says Lieber became the target of a ‘tragically misguided government campaign’. The letter refers to Lieber as ‘one of the great scientists of his generation’ and warns such government actions are discouraging US scientists from collaborating with peers in other countries, particularly China. The open letter also notes that Lieber is fighting to salvage his reputation while suffering from incurable lymphoma.
Trager goes on to contrast Lieber’s treatment by Harvard to another embattled colleague’s treatment by his home institution (Note: Links have been removed),
Harvard’s treatment of Lieber stands in contrast to how the Massachusetts Institute of Technology (MIT) handled the more recent case of nanotechnologist Gang Chen, who was arrested in January for failing to report his ties to the Chinese government. MIT agreed to cover his legal fees, and more than 100 faculty members signed a letter to their university’s president that picked apart the DOJ’s allegations against Chen.
The word ‘memristor’ usually pops up when there’s research into artificial synapses but not in this new piece of research. I didn’t see any mention of the memristor in the paper’s references either but I did find James Gimzewski from the University of California at Los Angeles (UCLA) whose research into brainlike computing (neuromorphic computing) is running parallel but separately to the memristor research.
Dr. Thamarasee Jeewandara has written a March 25, 2021 article for phys.org about the latest neuromorphic computing research (Note: Links have been removed)
Multifunctional and diverse artificial neural systems can incorporate multimodal plasticity, memory and supervised learning functions to assist neuromorphic computation. In a new report, Jinran Yu and a research team in nanoenergy, nanoscience and materials science in China and the US presented a bioinspired mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The team used an optoelectronic transistor made of a graphene/molybdenum disulphide (MoS2) heterostructure and an integrated triboelectric nanogenerator to compose the artificial synapse. They controlled the charge transfer/exchange in the heterostructure with triboelectric potential and readily modulated the optoelectronic synapse behaviors, including postsynaptic photocurrents, photosensitivity and photoconductivity. The mechano-photonic artificial synapse is a promising implementation to mimic the complex biological nervous system and promote the development of interactive artificial intelligence. The work is now published in Science Advances.
The human brain can integrate cognition, learning and memory tasks via auditory, visual, olfactory and somatosensory interactions. This process is difficult to mimic using conventional von Neumann architectures, which require additional sophisticated functions. Brain-inspired neural networks are made of various synaptic devices that transmit and process information using synaptic weights. Emerging photonic synapses combine optical and electric neuromorphic modulation and computation to offer a favorable option with high bandwidth, fast speed and low cross-talk to significantly reduce power consumption. Biomechanical motions including touch, eye blinking and arm waving are other ubiquitous triggers or interactive signals to operate electronics during artificial synapse plasticization. In this work, Yu et al. presented a mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The device contained an optoelectronic transistor and an integrated triboelectric nanogenerator (TENG) in contact-separation mode. The mechano-optical artificial synapses have huge functional potential as interactive optoelectronic interfaces, synthetic retinas and intelligent robots. [emphasis mine]
As you can see Jeewandara has written quite a technical summary of the work. Here’s an image from the Science Advances paper,
A team of scientists, led by researchers at Northwestern University, Shirley Ryan AbilityLab and the University of Illinois at Chicago (UIC), has developed novel technology promising to increase understanding of how brains develop, and offer answers on repairing brains in the wake of neurotrauma and neurodegenerative diseases.
Their research is the first to combine the most sophisticated 3-D bioelectronic systems with highly advanced 3-D human neural cultures. The goal is to enable precise studies of how human brain circuits develop and repair themselves in vitro. The study is the cover story for the March 19 [March 17, 2021 according to the citation] issue of Science Advances.
The cortical spheroids used in the study, akin to “mini-brains,” were derived from human-induced pluripotent stem cells. Leveraging a 3-D neural interface system that the team developed, scientists were able to create a “mini laboratory in a dish” specifically tailored to study the mini-brains and collect different types of data simultaneously. Scientists incorporated electrodes to record electrical activity. They added tiny heating elements to either keep the brain cultures warm or, in some cases, intentionally overheat the cultures to stress them. They also incorporated tiny probes — such as oxygen sensors and small LED lights — to perform optogenetic experiments. For instance, they introduced genes into the cells that allowed them to control the neural activity using different-colored light pulses.
This platform then enabled scientists to perform complex studies of human tissue without directly involving humans or performing invasive testing. In theory, any person could donate a limited number of their cells (e.g., blood sample, skin biopsy). Scientists can then reprogram these cells to produce a tiny brain spheroid that shares the person’s genetic identity. The authors believe that, by combining this technology with a personalized medicine approach using human stem cell-derived brain cultures, they will be able to glean insights faster and generate better, novel interventions.
“The advances spurred by this research will offer a new frontier in the way we study and understand the brain,” said Shirley Ryan AbilityLab’s Dr. Colin Franz, co-lead author on the paper who led the testing of the cortical spheroids. “Now that the 3-D platform has been developed and validated, we will be able to perform more targeted studies on our patients recovering from neurological injury or battling a neurodegenerative disease.”
Yoonseok Park, postdoctoral fellow at Northwestern University and co-lead author, added, “This is just the beginning of an entirely new class of miniaturized, 3-D bioelectronic systems that we can construct to expand the capacity of the regenerative medicine field. For example, our next generation of device will support the formation of even more complex neural circuits from brain to muscle, and increasingly dynamic tissues like a beating heart.”
Current electrode arrays for tissue cultures are 2-D, flat and unable to match the complex structural designs found throughout nature, such as those found in the human brain. Moreover, even when a system is 3-D, it is extremely challenging to incorporate more than one type of material into a small 3-D structure. With this advance, however, an entire class of 3-D bioelectronics devices has been tailored for the field of regenerative medicine.
“Now, with our small, soft 3-D electronics, the capacity to build devices that mimic the complex biological shapes found in the human body is finally possible, providing a much more holistic understanding of a culture,” said Northwestern’s John Rogers, who led the technology development using technology similar to that found in phones and computers. “We no longer have to compromise function to achieve the optimal form for interfacing with our biology.”
As a next step, scientists will use the devices to better understand neurological disease, test drugs and therapies that have clinical potential, and compare different patient-derived cell models. This understanding will then enable a better grasp of individual differences that may account for the wide variation of outcomes seen in neurological rehabilitation.
“As scientists, our goal is to make laboratory research as clinically relevant as possible,” said Kristen Cotton, research assistant in Dr. Franz’s lab. “This 3-D platform opens the door to new experiments, discovery and scientific advances in regenerative neurorehabilitation medicine that have never been possible.”
As for what brain organoids might be, Carl Zimmer in an Aug. 29, 2019 article for the New York Times provides an explanation,
Organoids Are Not Brains. How Are They Making Brain Waves?
Two hundred and fifty miles over Alysson Muotri’s head, a thousand tiny spheres of brain cells were sailing through space.
The clusters, called brain organoids, had been grown a few weeks earlier in the biologist’s lab here at the University of California, San Diego. He and his colleagues altered human skin cells into stem cells, then coaxed them to develop as brain cells do in an embryo.
The organoids grew into balls about the size of a pinhead, each containing hundreds of thousands of cells in a variety of types, each type producing the same chemicals and electrical signals as those cells do in our own brains.
In July, NASA packed the organoids aboard a rocket and sent them to the International Space Station to see how they develop in zero gravity.
Now the organoids were stowed inside a metal box, fed by bags of nutritious broth. “I think they are replicating like crazy at this stage, and so we’re going to have bigger organoids,” Dr. Muotri said in a recent interview in his office overlooking the Pacific.
What, exactly, are they growing into? That’s a question that has scientists and philosophers alike scratching their heads.
On Thursday, Dr. Muotri and his colleagues reported that they have recorded simple brain waves in these organoids. In mature human brains, such waves are produced by widespread networks of neurons firing in synchrony. Particular wave patterns are linked to particular forms of brain activity, like retrieving memories and dreaming.
As the organoids mature, the researchers also found, the waves change in ways that resemble the changes in the developing brains of premature babies.
“It’s pretty amazing,” said Giorgia Quadrato, a neurobiologist at the University of Southern California who was not involved in the new study. “No one really knew if that was possible.”
But Dr. Quadrato stressed it was important not to read too much into the parallels. What she, Dr. Muotri and other brain organoid experts build are clusters of replicating brain cells, not actual brains.
Cleber Trujillo led me to a windowless room banked with refrigerators, incubators, and microscopes. He extended his blue-gloved hands to either side and nearly touched the walls. “This is where we spend half our day,” he said.
In that room Trujillo and a team of graduate students raised a special kind of life. He opened an incubator and picked out a clear plastic box. Raising it above his head, he had me look up at it through its base. Inside the box were six circular wells, each the width of a cookie and filled with what looked like watered-down grape juice. In each well 100 pale globes floated, each the size of a housefly head.
Getting back to the research about monitoring brain organoids, here’s a link to and a citation for the paper about cortical spheroids,
Three-dimensional, multifunctional neural interfaces for cortical spheroids and engineered assembloids by Yoonseok Park, Colin K. Franz, Hanjun Ryu, Haiwen Luan, Kristen Y. Cotton, Jong Uk Kim, Ted S. Chung, Shiwei Zhao, Abraham Vazquez-Guardado, Da Som Yang, Kan Li, Raudel Avila, Jack K. Phillips, Maria J. Quezada, Hokyung Jang, Sung Soo Kwak, Sang Min Won, Kyeongha Kwon, Hyoyoung Jeong, Amay J. Bandodkar, Mengdi Han, Hangbo Zhao, Gabrielle R. Osher, Heling Wang, KunHyuck Lee, Yihui Zhang, Yonggang Huang, John D. Finan and John A. Rogers. Science Advances 17 Mar 2021: Vol. 7, no. 12, eabf9153 DOI: 10.1126/sciadv.abf9153
This paper appears to be open access.
According to a March 22, 2021 posting on the Shirley Ryan AbilityLab website, the paper is featured on the front cover of Science Advances (vol. 7 no. 12).