Here’s what makes a small synchrotron ‘big’ news. Synchrotrons (also called synchrotron light sources) are very expensive and very large. Most countries have one at most, if they have any synchrotrons at all. (The news release that follows puts the number at about 70 worldwide.) For anyone who doesn’t know what a synchrotron is, there’s an explanation from the Canadian Light Source’s What is a synchrotron? webpage,
Overview
A synchrotron produces different kinds of light in order to study the structural and chemical properties of materials at the molecular level. This is possible by looking at the ways light interacts with the individual molecules of a material.
The CLS synchrotron produces light by accelerating electrons to nearly the speed of light and directing the electrons around a ring. The electrons are directed around the ring by a combination of radio frequency waves and powerful electromagnets. When the electrons go around the bends, they give off energy in the form of incredibly bright and highly focused light. Different types of light, primarily infrared and X-ray, are directed down to the end of beamlines, where researchers use the light for their experiments at endstations. Each beamline and endstation at the CLS is designed for a specific type of experiment.
For the first time, researchers can study the microstructures inside metals, ceramics and rocks with X-rays in a standard laboratory without needing to travel to a particle accelerator, according to a study led by University of Michigan engineers.
The new technique makes 3D X-ray diffraction — known as 3DXRD — more readily accessible, potentially enabling quick analysis of samples and prototypes in academia and industry, as well as providing more opportunities for students.
…
Once only possible in specialized shared-use facilities, the newly developed laboratory scale three-dimensional x-ray diffraction (Lab-3DXRD) opens up more opportunities for student use. Yuefeng Jin, a doctoral student of mechanical engineering at U-M, carefully positions a metal sample for measurement. Image credit: Marcin Szczepanski, Michigan Engineering
Synchrotron in a closet: Bringing powerful 3D X-ray microscopy to smaller labs
…
3DXRD reconstructs 3D images using X-rays taken at multiple angles, similar to a CT scan. Instead of the imaging device rotating about a patient, a few-millimeters-wide material sample rotates on a stand in front of a powerful beam with about a million times more X-rays than a medical X-ray.
The huge X-ray concentration produces a micro-scale image of the tiny fused crystals that make up most metals, ceramics and rocks—known as polycrystalline materials.
Results help researchers understand how materials react to mechanical stresses by measuring thousands of individual crystals’ volume, position, orientation and strain. For example, imaging a sample from a steel beam under compression can show how crystals respond to bearing the weight of a building, helping researchers understand large-scale wear.
Synchrotrons were once the only facilities able to produce enough X-rays for 3DXRD: electrons traveling through circular particle accelerators spit off scads of X-rays, which can then be directed into a sample.
While synchrotron X-ray beams produce state-of-the-art detail, there are only about 70 facilities worldwide. Research teams must put together project proposals for “beam time.” Accepted projects often must wait six months to two years to run their experiments, which are limited to a maximum of six days.
In an effort to make this technique more widely available, the research team worked with PROTO Manufacturing to custom build the first laboratory-scale 3DXRD. As a whole, the instrument is about the size of a residential bathroom, but could be scaled down to the size of a broom closet.
“This technique gives us such interesting data that I wanted to create the opportunity to try new things that are high risk, high reward and allow teachable moments for students without the wait-time and pressure of synchrotron beam time,” said Ashley Bucsek, U-M assistant professor of mechanical engineering and materials science and engineering and co-corresponding author of the study published in Nature Communications.
Previously, small-scale devices could not produce enough X-rays for 3DXRD because at a certain point, the electron beam pumps so much power into the anode—the solid metal surface that the electrons strike to make X-rays—that it would melt. Lab-3DXRD leverages a liquid-metal-jet anode that is already liquid at room temperature, allowing it to take in more power and produce more X-rays than once possible at this scale.
The researchers put the design to the test by scanning the same titanium alloy sample using three methods: lab-3DXRD, synchrotron-3DXRD and laboratory diffraction contrast tomography or LabDCT—a technique used to map out crystal structures in 3D without strain information.
Lab-3DXRD was highly accurate, with 96% of the crystals it picked up overlapping with the other two methods. It did particularly well with larger crystals over 60 micrometers, but missed some of the smaller crystals. The researchers note that adding a more sensitive photon-counting detector, which detects the X-rays that are used to build the images, could help catch the finest-grained crystals.
With this technique available in-house, Bucsek’s research team can try new experiments, honing parameters to prepare for a larger experiment at a synchrotron.
“Lab-3DXRD is like a nice backyard telescope while synchrotron-3DXRD is the Hubble Telescope. There are still certain situations where you need the Hubble, but we are now well prepared for those big experiments because we can try everything out beforehand,” Bucsek said.
Beyond enabling more accessible experiments, lab-3DXRD allows researchers to extend projects past the synchrotron six-day limit, which is particularly helpful when studying cyclic loading—how a material responds to repeated stresses over thousands of cycles.
First author and co-corresponding author Seunghee Oh, a research fellow in mechanical engineering at the time of the study, now works in the X-ray Science Division at Argonne National Laboratory.
The research is funded by the National Science Foundation (CMMI-2142302; DMR-1829070) and the U.S. Department of Energy (Award DE-SC0008637).
Researchers from PROTO Manufacturing also contributed to the study.
Apparently, trees are ‘roughly’ fractal. As for fractals themselves, there’s this from the Fractal Foundation’s What are Fractals? webpage,
…
[downloaded from https://fractalfoundation.org/resources/what-are-fractals/]
A fractal is a never-ending pattern. Fractals are infinitely complex patterns that are self-similar across different scales. They are created by repeating a simple process over and over in an ongoing feedback loop. Driven by recursion, fractals are images of dynamic systems – the pictures of Chaos. Geometrically, they exist in between our familiar dimensions. Fractal patterns are extremely familiar, since nature is full of fractals. For instance: trees, rivers, coastlines, mountains, clouds, seashells, hurricanes, etc. Abstract fractals – such as the Mandelbrot Set – can be generated by a computer calculating a simple equation over and over.
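The “simple equation over and over” behind the Mandelbrot set mentioned above can be sketched in a few lines of Python. This is my own minimal illustration, not code from the quoted page: a point c belongs to the set if iterating z → z² + c from z = 0 never escapes.

```python
# Minimal Mandelbrot iteration: repeat z -> z*z + c and count how many
# steps it takes |z| to exceed 2 (points that never escape are "in" the set).
def escape_count(c: complex, max_iter: int = 100) -> int:
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter  # never escaped within the budget: treat as in the set

# c = 0 and c = -1 stay bounded forever; c = 2 escapes almost immediately.
print(escape_count(0), escape_count(-1), escape_count(2))
```

Coloring each pixel of the complex plane by its escape count is all it takes to draw the familiar Mandelbrot images.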
…
Caption: Piet Mondrian painted the same tree in “The gray tree” (left) and “Blooming apple tree” (right). Viewers can readily discern the tree in “The gray tree” with a branch diameter scaling exponent of 2.8. In “Blooming apple tree,” all the brush strokes have roughly the same thickness and viewers report seeing fish, water and other non-tree things. Credit: Kunstmuseum Den Haag
While artistic beauty may be a matter of taste, our ability to identify trees in works of art may be connected to objective—and relatively simple—mathematics, according to a new study.
Led by researchers from the University of Michigan and the University of New Mexico, the study investigated how the relative thickness of a tree’s branching boughs affected its tree-like appearance.
This idea has been studied for centuries by artists, including Leonardo DaVinci [Leonardo da Vinci], but the researchers brought a newer branch of math into the equation to reveal deeper insights.
“There are some characteristics of the art that feel like they’re aesthetic or subjective, but we can use math to describe it,” said Jingyi Gao, lead author of the study. “I think that’s pretty cool.”
Gao performed the research as an undergraduate in the U-M Department of Mathematics, working with Mitchell Newberry, now a research assistant professor at UNM and an affiliate of the U-M Center for the Study of Complex Systems. Gao is now a doctoral student at the University of Wisconsin.
In particular, the researchers revealed one quantity related to the complexity and proportions of a tree’s branches that artists have preserved and played with to affect if and how viewers perceive a tree.
“We’ve come up with something universal here that kind of applies to all trees in art and in nature,” said Newberry, senior author of the study. “It’s at the core of a lot of different depictions of trees, even if they’re in different styles and different cultures or centuries.”
The work is published in the journal PNAS [Proceedings of the National Academy of Sciences] Nexus.
As a matter of fractals
The math the duo used to approach their question of proportions is rooted in fractals. Geometrically speaking, fractals are structures that repeat the same motifs across different scales.
Fractals are name-dropped in the Oscar-winning smash hit “Let it Go” from Disney’s “Frozen,” making it hard to argue there’s a more popular physical example than the self-repeating crystal geometries of snowflakes. But biology is also full of important fractals, including the branching structures of lungs, blood vessels and, of course, trees.
“Fractals are just figures that repeat themselves,” Gao said. “If you look at a tree, its branches are branching. Then the child branches repeat the figure of the parent branch.”
In the latter half of the 20th century, mathematicians introduced a number that is referred to as a fractal dimension to quantify the complexity of a fractal. In their study, Gao and Newberry analyzed an analogous number for tree branches, which they called the branch diameter scaling exponent. Branch diameter scaling describes the variation in branch diameter in terms of how many smaller branches there are per larger branch.
“We measure branch diameter scaling in trees and it plays the same role as fractal dimension,” Newberry said. “It shows how many more tiny branches there are as you zoom in.”
While bridging art and mathematics, Gao and Newberry worked to keep their study as accessible as possible to folks from both realms and beyond. Its mathematical complexity maxes out with the famous—or infamous, depending on how you felt about middle school geometry—Pythagorean theorem: a² + b² = c².
Roughly speaking, a and b can be thought of as the diameter of smaller branches stemming from a larger branch with diameter c. The exponent 2 corresponds to the branch diameter scaling exponent, but for real trees its value can be between about 1.5 and 3.
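As a rough numerical sketch (my own illustration, not code from the study): generalizing the Pythagorean relation to cᵅ = aᵅ + bᵅ, the scaling exponent α at a single branching junction can be recovered from measured diameters by bisection, since the child-to-parent diameter ratios are less than one and their sum of powers shrinks monotonically as α grows.

```python
# Fit the branch diameter scaling exponent alpha at one junction:
# find alpha such that parent**alpha == sum(child**alpha for each child).
def fit_alpha(parent: float, children: list[float],
              lo: float = 0.1, hi: float = 6.0) -> float:
    for _ in range(100):  # bisection: sum((c/parent)**a) decreases as a grows
        mid = (lo + hi) / 2
        if sum((c / parent) ** mid for c in children) > 1:
            lo = mid  # sum still too big -> need a larger exponent
        else:
            hi = mid
    return (lo + hi) / 2

# Da Vinci's rule: two children of diameter 1/sqrt(2) from a unit parent
# preserve total cross-section, giving alpha = 2.
print(fit_alpha(1.0, [2 ** -0.5, 2 ** -0.5]))
```

For real junctions the diameters would come from measurements, and per the study one would expect fitted values roughly between 1.5 and 3.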
The researchers found that, in works of art that preserved that factor, viewers were able to easily recognize trees—even if they had been stripped of other distinguishing features.
Artistic experimentation
For their study, Gao and Newberry analyzed artwork from around the world, including 16th century stone window carvings from the Sidi Saiyyed Mosque in India, an 18th century painting called “Cherry Blossoms” by Japanese artist Matsumura Goshun and two early 20th century works by Dutch painter Piet Mondrian.
It was the mosque carvings in India that initially inspired the study. Despite their highly stylized curvy, almost serpentine branches, these trees have a beautiful, natural sense of proportion to them, Newberry said. That got him wondering if there might be a more universal factor in how we recognize trees. The researchers took a clue from DaVinci’s [sic] analysis of trees to understand that branch thickness was important.
Looking at the branch diameter scaling factor, Gao and Newberry found that some of the carvings had values closer to real trees than the tree in “Cherry Blossoms,” which appears more natural.
“That was actually quite surprising for me because Goshun’s painting is more realistic,” Gao said.
Newberry shared that sentiment and hypothesized that having a more realistic branch diameter scaling factor enables artists to take trees in more creative directions and have them still appear as trees.
“As you abstract away details and still want viewers to recognize this as a beautiful tree, then you may have to be closer to reality in some other aspects,” Newberry said.
Mondrian’s work provided a serendipitous experiment to test this thinking. He painted a series of pieces depicting the same tree, but in different, increasingly abstract ways. For his 1911 work “De grijze boom” (“The gray tree”), Mondrian had reached a point in the series where he was representing the tree with just a series of black lines against a gray background.
“If you show this painting to anyone, it’s obviously a tree,” Newberry said. “But there’s no color, no leaves and not even branching, really.”
The researchers found that Mondrian’s branch scaling exponent fell in the real tree range at 2.8. For Mondrian’s 1912 “Bloeiende appelboom” (“Blooming apple tree”), however, that scaling is gone, as is the consensus that the object is a tree.
“People see dancers, fish scales, water, boats, all kinds of things,” Newberry said. “The only difference between these two paintings—they’re both black strokes on a basically gray background—is whether there is branch diameter scaling.”
Gao designed the study and measured the first trees as part of her U-M Math Research Experience for Undergraduates project supported by the James Van Loo Applied Mathematics and Physics Undergraduate Support Fund. Newberry undertook the project as a junior fellow of the Michigan Society of Fellows. Both researchers acknowledged how important interdisciplinary spaces at Michigan were to the study.
“We could not have done this research without interaction between the Center for the Study of Complex Systems and the math department. This center is a very special thing about U of M, where math flourishes as a common language to talk across disciplinary divides,” Newberry said. “And I have been really inspired by conversations that put mathematicians and art historians at the same table as part of the Society of Fellows.”
Caption: Leonardo da Vinci’s sketch of a tree illustrates the principle that combined thickness is preserved at different stages of ramification. Credit: Institut de France Manuscript M, p. 78v.
The math that describes the branching pattern of trees in nature also holds for trees depicted in art—and may even underlie our ability to recognize artworks as depictions of trees.
Trees are loosely fractal, branching forms that repeat the same patterns at smaller and smaller scales from trunk to branch tip. Jingyi Gao and Mitchell Newberry examine scaling of branch thickness in depictions of trees and derive mathematical rules for proportions among branch diameters and for the approximate number of branches of different diameters. The authors begin with Leonardo da Vinci’s observation that tree limbs preserve their thickness as they branch. The parameter α, known as the radius scaling exponent in self-similar branching, determines the relationships between the diameters of the various branches. If the thickness of a branch is always the same as the summed thickness of the two smaller branches, as da Vinci asserts, then the parameter α would be 2. The authors surveyed trees in art, selected to cover a broad geographical range and also for their subjective beauty, and found values from 1.5 to 2.8, which correspond to the range of natural trees. Even abstract works of art that don’t visually show branch junctions or treelike colors, such as Piet Mondrian’s cubist Gray Tree, can be visually identified as trees if a realistic value for α is used. By contrast, Mondrian’s later painting, Blooming Apple Tree, which sets aside scaling in branch diameter, is not recognizable as a tree. According to the authors, art and science provide complementary lenses on the natural and human worlds.
A September 10, 2024 news item on ScienceDaily provides a technical explanation of how memristors, without a power source, can retain information,
Phase separation, when molecules part like oil and water, works alongside oxygen diffusion to help memristors — electrical components that store information using electrical resistance — retain information even after the power is shut off, according to a University of Michigan-led study recently published in Matter.
Up to this point, explanations of how memristors retain information without a power source—known as nonvolatile memory—have fallen short, because models and experiments do not match up.
“While experiments have shown devices can retain information for over 10 years, the models used in the community show that information can only be retained for a few hours,” said Jingxian Li, U-M doctoral graduate of materials science and engineering and first author of the study.
To better understand the underlying phenomenon driving nonvolatile memristor memory, the researchers focused on a device known as resistive random access memory, or RRAM, an alternative to the volatile RAM used in classical computing that is particularly promising for energy-efficient artificial intelligence applications.
The specific RRAM studied, a filament-type valence change memory (VCM), sandwiches an insulating tantalum oxide layer between two platinum electrodes. When a certain voltage is applied to the platinum electrodes, a conductive filament of tantalum ions forms a bridge through the insulator between the electrodes, allowing electricity to flow and putting the cell in a low resistance state that represents a “1” in binary code. If a different voltage is applied, the filament dissolves as returning oxygen atoms react with the tantalum ions, “rusting” the conductive bridge and returning the cell to a high resistance state that represents a binary “0”.
It was once thought that RRAM retains information over time because oxygen is too slow to diffuse back. However, a series of experiments revealed that previous models have neglected the role of phase separation.
“In these devices, oxygen ions prefer to be away from the filament and will never diffuse back, even after an indefinite period of time. This process is analogous to how a mixture of water and oil will not mix, no matter how much time we wait, because they have lower energy in a de-mixed state,” said Yiyang Li, U-M assistant professor of materials science and engineering and senior author of the study.
To test retention time, the researchers sped up experiments by increasing the temperature. One hour at 250°C is equivalent to about 100 years at 85°C—the typical temperature of a computer chip.
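The release doesn’t state the model behind that equivalence, but the numbers are consistent with standard Arrhenius accelerated-aging arithmetic if one assumes an activation energy of roughly 1.3 eV. That activation energy is my assumption, purely for illustration:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Arrhenius ratio of degradation rates at a stress vs. a use temperature."""
    t_use = t_use_c + 273.15      # convert Celsius to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

# With an assumed Ea of ~1.34 eV, one hour at 250 C maps to roughly
# 100 years at 85 C, matching the figures in the release.
hours_per_stress_hour = acceleration_factor(1.34, 85.0, 250.0)
print(hours_per_stress_hour / (24 * 365))  # equivalent use-time in years
```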
Using the extremely high-resolution imaging of atomic force microscopy, the researchers imaged filaments, which measure only about five nanometers or 20 atoms wide, forming within the one micron wide RRAM device.
“We were surprised that we could find the filament in the device. It’s like finding a needle in a haystack,” Li said.
The research team found that different sized filaments yielded different retention behavior. Filaments smaller than about 5 nanometers dissolved over time, whereas filaments larger than 5 nanometers strengthened over time. The size-based difference cannot be explained by diffusion alone.
Together, experimental results and models incorporating thermodynamic principles showed the formation and stability of conductive filaments depend on phase separation.
The research team leveraged phase separation to extend memory retention from one day to well over 10 years in a rad-hard memory chip—a memory device built to withstand radiation exposure for use in space exploration.
Other applications include in-memory computing for more energy efficient AI applications or memory devices for electronic skin—a stretchable electronic interface designed to mimic the sensory capabilities of human skin. Also known as e-skin, this material could be used to provide sensory feedback to prosthetic limbs, create new wearable fitness trackers or help robots develop tactile sensing for delicate tasks.
“We hope that our findings can inspire new ways to use phase separation to create information storage devices,” Li said.
Researchers at Ford Research, Dearborn; Oak Ridge National Laboratory; University at Albany; NY CREATES; Sandia National Laboratories; and Arizona State University, Tempe contributed to this study.
…
Here’s a link to and a citation for the paper,
Thermodynamic origin of nonvolatility in resistive memory by Jingxian Li, Anirudh Appachar, Sabrina L. Peczonczyk, Elisa T. Harrison, Anton V. Ievlev, Ryan Hood, Dongjae Shin, Sangmin Yoo, Brianna Roest, Kai Sun, Karsten Beckmann, Olya Popova, Tony Chiang, William S. Wahby, Robin B. Jacobs-Godrim, Matthew J. Marinella, Petro Maksymovych, John T. Heron, Nathaniel Cady, Wei D. Lu, Suhas Kumar, A. Alec Talin, Wenhao Sun, Yiyang Li. Matter DOI: https://doi.org/10.1016/j.matt.2024.07.018 Published online: August 26, 2024
An air curtain shooting down from the brim of a hard hat can prevent 99.8% of aerosols from reaching a worker’s face. The technology, created by University of Michigan startup Taza Aya, potentially offers a new protection option for workers in industries where respiratory disease transmission is a concern.
Independent, third-party testing of Taza Aya’s device showed the effectiveness of the air curtain, curved to encircle the face, coming from nozzles at the hat’s brim. But for the air curtain to effectively protect against pathogens in the room, it must first be cleansed of pathogens itself. Previous research by the group of Taza Aya co-founder Herek Clack, U-M associate professor of civil and environmental engineering, showed that their method can remove and kill 99% of airborne viruses in farm and laboratory settings.
“Our air curtain technology is precisely designed to protect wearers from airborne infectious pathogens, using treated air as a barrier in which any pathogens present have been inactivated so that they are no longer able to infect you if you breathe them in,” Clack said. “It’s virtually unheard of—our level of protection against airborne germs, especially when combined with the improved ergonomics it also provides.”
Fire has been used throughout history for sterilization, and while we might not usually think of it this way, it’s what’s known as a thermal plasma. Nonthermal, or cold, plasmas are made of highly energetic, electrically charged molecules and molecular fragments that achieve a similar effect without the heat. Those ions and molecules stabilize quickly, becoming ordinary air before reaching the curtain nozzles.
Taza Aya’s prototype features a backpack, weighing roughly 10 pounds, that houses the nonthermal plasma module, air handler, electronics and the unit’s battery pack. The handler draws air into the module, where it’s treated before flowing to the air curtain’s nozzle array.
Taza Aya’s progress comes in the wake of the COVID-19 pandemic and in the midst of a summer when the U.S. Centers for Disease Control and Prevention have reported four cases of humans testing positive for bird flu. During the pandemic, agriculture suffered disruptions in meat production due to shortages in labor, which had a direct impact on prices, the availability of some products and the extended supply chain.
In recent months, Taza Aya has conducted user experience testing with workers at Michigan Turkey Producers in Wyoming, Michigan, a processing plant that practices the humane handling of birds. The plant is home to hundreds of workers, many of them coming into direct contact with turkeys during their work day.
To date, paper masks have been the main strategy for protecting employees in such large-scale agricultural operations. But on a noisy production line, where many workers speak English as a second language, masks further reduce the ability of workers to communicate by muffling voices and hiding facial cues.
“During COVID, it was a problem for many plants—the masks were needed, but they prevented good communication with our associates,” said Tina Conklin, Michigan Turkey’s vice president of technical services.
In addition, the effectiveness of masks relies on a tight seal over the mouth and nose to ensure proper filtration, which can change minute to minute during a workday. Masks can also fog up safety goggles, and they have to be removed for workers to eat. Taza Aya’s technology avoids all of those problems.
As a researcher at U-M, Clack spent years exploring the use of nonthermal plasma to protect livestock. With the arrival of COVID-19 in early 2020, he quickly pivoted to how the technology might be used for personal protection from airborne pathogens.
In October of that year, Taza Aya was named an awardee in the Invisible Shield QuickFire Challenge—a competition created by Johnson & Johnson Innovation in cooperation with the U.S. Department of Health and Human Services. The program sought to encourage the development of technologies that could protect people from airborne viruses while having a minimal impact on daily life.
“We are pleased with the study results as we embark on this journey,” said Alberto Elli, Taza Aya’s CEO. “This real-world product and user testing experience will help us successfully launch the Worker Wearable [Protection] in 2025.”
There’s a bit more information about the third-party testing mentioned at the start of the news release in a June 26, 2024 posting by Herek Clack on the Taza Aya company blog. You can find out more about Worker and Individual Wearable Protection on Taza Aya’s The Solution webpage; scroll down about 55% of the way.
Antibiotics at the nanoscale = nanobiotics. For a more complete explanation, there’s this (Note: the video runs a little longer than most of the others embedded on this blog),
Before pushing further into this research, a note about antibiotic resistance. In a sense, we’ve created the problem we (those scientists in particular) are trying to solve.
Antibiotics and cleaning products kill 99.9% of the bacteria, leaving the 0.1% that are resistant. As so many living things on earth do, bacteria reproduce. Now a new antibiotic is needed and discovered; it too kills 99.9% of the bacteria. The 0.1% left are resistant to two antibiotics. And so it goes.
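That selection cycle is easy to put in numbers. Here’s a deliberately crude toy model (my own, with illustrative figures, not from the research) showing how quickly a resistant remnant can rebound:

```python
import math

# Toy model: each antibiotic kills 99.9% of the population; the 0.1%
# survivors are resistant and simply double each generation until the
# niche refills. All numbers are illustrative.
def survivors_after_treatment(population: int, kill_fraction: float = 0.999) -> int:
    return round(population * (1 - kill_fraction))

def doublings_to_recover(survivors: int, target: int) -> int:
    return math.ceil(math.log2(target / survivors))

pop = 1_000_000
left = survivors_after_treatment(pop)         # 1,000 resistant cells remain
print(left, doublings_to_recover(left, pop))  # back to a million in 10 doublings
```

With bacterial doubling times measured in minutes to hours, the fully resistant population is back within days, which is why each new antibiotic buys only temporary relief.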
As the scientists have made clear, we’re running out of options using standard methods and they’re hoping this ‘nanoparticle approach’ as described in a June 5, 2023 news item on Nanowerk will work, Note: A link has been removed,
Identifying whether and how a nanoparticle and protein will bind with one another is an important step toward being able to design antibiotics and antivirals on demand, and a computer model developed at the University of Michigan can do it.
The new tool could help find ways to stop antibiotic-resistant infections and new viruses—and aid in the design of nanoparticles for different purposes.
“Just in 2019, the number of people who died of antimicrobial resistance was 4.95 million. Even before COVID, which worsened the problem, studies showed that by 2050, the number of deaths by antibiotic resistance will be 10 million,” said Angela Violi, an Arthur F. Thurnau Professor of mechanical engineering, and corresponding author of the study that made the cover of Nature Computational Science (“Domain-agnostic predictions of nanoscale interactions in proteins and nanoparticles”).
“In my ideal scenario, 20 or 30 years from now, I would like—given any superbug—to be able to quickly produce the best nanoparticles that can treat it.”
Much of the work within cells is done by proteins. Interaction sites on their surfaces can stitch molecules together, break them apart and perform other modifications—opening doorways into cells, breaking sugars down to release energy, building structures to support groups of cells and more. If we could design medicines that target crucial proteins in bacteria and viruses without harming our own cells, that would enable humans to fight new and changing diseases quickly.
The new [computer] model, named NeCLAS [Nanoparticle-Computed Ligand Affinity Scoring], uses machine learning—the AI technique that powers the virtual assistant on your smartphone and ChatGPT. But instead of learning to process language, it absorbs structural models of proteins and their known interaction sites. From this information, it learns to extrapolate how proteins and nanoparticles might interact, predicting binding sites and the likelihood of binding between them—as well as interactions between two proteins or two nanoparticles.
“Other models exist, but ours is the best for predicting interactions between proteins and nanoparticles,” said Paolo Elvati, U-M associate research scientist in mechanical engineering.
AlphaFold, for example, is a widely used tool for predicting the 3D structure of a protein based on its building blocks, called amino acids. While this capacity is crucial, this is only the beginning: Discovering how these proteins assemble into larger structures and designing practical nanoscale systems are the next steps.
“That’s where NeCLAS comes in,” said Jacob Saldinger, U-M doctoral student in chemical engineering and first author of the study. “It goes beyond AlphaFold by showing how nanostructures will interact with one another, and it’s not limited to proteins. This enables researchers to understand the potential applications of nanoparticles and optimize their designs.”
The team tested three case studies for which they had additional data:
Molecular tweezers, in which a molecule binds to a particular site on another molecule. This approach can stop harmful biological processes, such as the aggregation of protein plaques in diseases of the brain like Alzheimer’s.
How graphene quantum dots break up the biofilm produced by staph bacteria. These nanoparticles are flakes of carbon, no more than a few atomic layers thick and 0.0001 millimeters to a side. Breaking up biofilms is likely a crucial tool in fighting antibiotic-resistant infections—including the superbug methicillin-resistant Staphylococcus aureus (MRSA), commonly acquired at hospitals.
Whether graphene quantum dots would disperse in water, demonstrating the model’s ability to predict nanoparticle-nanoparticle binding even though it had been trained exclusively on protein-protein data.
While many protein-protein models set amino acids as the smallest unit that the model must consider, this doesn’t work for nanoparticles. Instead, the team set the size of that smallest feature to be roughly the size of the amino acid but then let the computer model decide where the boundaries between these minimum features were. The result is representations of proteins and nanoparticles that look a bit like collections of interconnected beads, providing more flexibility in exploring small scale interactions.
“Besides being more general, NeCLAS also uses way less training data than AlphaFold. We only have 21 nanoparticles to look at, so we have to use protein data in a clever way,” said Matt Raymond, U-M doctoral student in electrical and computer engineering and study co-author.
Next, the team intends to explore other biofilms and microorganisms, including viruses.
The Nature Computational Science study was funded by the University of Michigan Blue Sky Initiative, the Army Research Office and the National Science Foundation.
There’s a good (brief) description of how these fibres become organoids in the photo caption,
Engineered extracellular matrices composed of fibrillar fibronectin are suspended over a porous polymer framework and provide the niche for stem cells to attach, differentiate, and mature into organoids. Credit: Ayse Muñiz Courtesy: Michigan Medicine – University of Michigan
Researchers at the University of Michigan developed a method to produce artificially grown miniature brains — called human brain organoids — free of animal cells that could greatly improve the way neurodegenerative conditions are studied and, eventually, treated.
Over the last decade of researching neurologic diseases, scientists have explored the use of human brain organoids as an alternative to mouse models. These self-assembled, 3D tissues derived from embryonic or pluripotent stem cells more closely model the complex brain structure compared to conventional two-dimensional cultures.
Until now, the extracellular matrices, the engineered networks of proteins and molecules that give structure to the cells in brain organoids, were typically made with Matrigel, a substance derived from mouse sarcomas. That approach suffers significant disadvantages, including a relatively undefined composition and batch-to-batch variability.
The latest U-M research, published in Annals of Clinical and Translational Neurology, offers a solution to overcome Matrigel’s weaknesses. Investigators created a novel culture method that uses an engineered extracellular matrix, free of animal components, for human brain organoids and enhanced their neurogenesis compared to previous studies.
“This advancement in the development of human brain organoids free of animal components will allow for significant strides in the understanding of neurodevelopmental biology,” said senior author Joerg Lahann, Ph.D., director of the U-M Biointerfaces Institute and Wolfgang Pauli Collegiate Professor of Chemical Engineering at U-M.
“Scientists have long struggled to translate animal research into the clinical world, and this novel method will make it easier for translational research to make its way from the lab to the clinic.”
The foundational extracellular matrices of the research team’s brain organoids were composed of human fibronectin, a protein that serves as a native structure for stem cells to adhere, differentiate and mature. They were supported by a highly porous polymer scaffold.
The organoids were cultured for months while lab staff were unable to enter the building due to the COVID-19 pandemic.
Using proteomics, researchers found their brain organoids developed cerebrospinal fluid (CSF), a clear liquid that flows around healthy brains and spinal cords. This fluid matched human adult CSF more closely than that reported in a landmark study of human brain organoids developed in Matrigel.
“When our brains are naturally developing in utero, they are of course not growing on a bed of extracellular matrix produced by mouse cancer cells,” said first author Ayşe Muñiz, Ph.D., who was a graduate student in the U-M Macromolecular Science and Engineering Program at the time of the work.
“By putting cells in an engineered niche that more closely resembles their natural environment, we predicted we would observe differences in organoid development that more faithfully mimics what we see in nature.”
The success of these xenogeneic-free human brain organoids opens the door for reprogramming with cells from patients with neurodegenerative diseases, says co-author Eva Feldman, M.D., Ph.D., director of the ALS Center of Excellence at U-M and James W. Albers Distinguished Professor of Neurology at U-M Medical School.
“There is a possibility to take the stem cells from a patient with a condition such as ALS or Alzheimer’s and, essentially, build an avatar mini brain of that patient to investigate possible treatments or model how their disease will progress,” Feldman said. “These models would create another avenue to predict disease and study treatment on a personalized level for conditions that often vary greatly from person to person.”
Here’s a link to and a citation for the paper,
Engineered extracellular matrices facilitate brain organoids from human pluripotent stem cells by Ayşe J. Muñiz, Tuğba Topal, Michael D. Brooks, Angela Sze, Do Hoon Kim, Jacob Jordahl, Joe Nguyen, Paul H. Krebsbach, Masha G. Savelieff, Eva L. Feldman, Joerg Lahann. Annals of Clinical and Translational Neurology DOI: https://doi.org/10.1002/acn3.51820 First published: 07 June 2023
A May 16, 2022 news item on phys.org announces work on a new machine learning model that could be useful in the research into engineered nanoparticles for medical purposes (Note: Links have been removed),
With antibiotic-resistant infections on the rise and a continually morphing pandemic virus, it’s easy to see why researchers want to be able to design engineered nanoparticles that can shut down these infections.
A new machine learning model that predicts interactions between nanoparticles and proteins, developed at the University of Michigan, brings us a step closer to that reality.
“We have reimagined nanoparticles to be more than mere drug delivery vehicles. We consider them to be active drugs in and of themselves,” said J. Scott VanEpps, an assistant professor of emergency medicine and an author of the study in Nature Computational Science.
Discovering drugs is a slow and unpredictable process, which is why so many antibiotics are variations on a previous drug. Drug developers would like to design medicines that can attack bacteria and viruses in ways that they choose, taking advantage of the “lock-and-key” mechanisms that dominate interactions between biological molecules. But it was unclear how to transition from the abstract idea of using nanoparticles to disrupt infections to practical implementation of the concept.
“By applying mathematical methods to protein-protein interactions, we have streamlined the design of nanoparticles that mimic one of the proteins in these pairs,” said Nicholas Kotov, the Irving Langmuir Distinguished University Professor of Chemical Sciences and Engineering and corresponding author of the study.
“Nanoparticles are more stable than biomolecules and can lead to entirely new classes of antibacterial and antiviral agents.”
The new machine learning algorithm compares nanoparticles to proteins using three different ways to describe them. While the first was a conventional chemical description, the two that concerned structure turned out to be most important for making predictions about whether a nanoparticle would be a lock-and-key match with a specific protein.
Between them, these two structural descriptions captured the protein’s complex surface and how it might reconfigure itself to enable lock-and-key fits. This includes pockets that a nanoparticle could fit into, along with the size such a nanoparticle would need to be. The descriptions also included chirality, a clockwise or counterclockwise twist that is important for predicting how a protein and nanoparticle will lock in.
“There are many proteins outside and inside bacteria that we can target. We can use this model as a first screening to discover which nanoparticles will bind with which proteins,” said Emine Sumeyra Turali Emre, a postdoctoral researcher in chemical engineering and co-first author of the paper, along with Minjeong Cha, a PhD student in materials science and engineering.
Emre and Cha explained that researchers could follow up on matches identified by their algorithm with more detailed simulations and experiments. One such match could stop the spread of MRSA, a common antibiotic-resistant strain, using zinc oxide nanopyramids that block metabolic enzymes in the bacteria.
“Machine learning algorithms like ours will provide a design tool for nanoparticles that can be used in many biological processes. Inhibition of the virus that causes COVID-19 is one good example,” said Cha. “We can use this algorithm to efficiently design nanoparticles that have broad-spectrum antiviral activity against all variants.”
This breakthrough was enabled by the Blue Sky Initiative at the University of Michigan College of Engineering. It provided $1.5 million to support the interdisciplinary team carrying out the fundamental exploration of whether a machine learning approach could be effective when data on the biological activity of nanoparticles is so sparse.
“The core of the Blue Sky idea is exactly what this work covers: finding a way to represent proteins and nanoparticles in a unified approach to understand and design new classes of drugs that have multiple ways of working against bacteria,” said Angela Violi, an Arthur F. Thurnau Professor, a professor of mechanical engineering and leader of the nanobiotics Blue Sky project.
Emre led the building of a database of interactions between proteins that could help to predict nanoparticle and protein interaction. Cha then identified structural descriptors that would serve equally well for nanoparticles and proteins, working with collaborators at the University of Southern California to develop a machine learning algorithm that combed through the database and used the patterns it found to predict how proteins and nanoparticles would interact with one another. Finally, the team compared these predictions for lock-and-key matches with the results from experiments and detailed simulations, finding that they closely matched.
Additional collaborators on the project include Ji-Young Kim, a postdoctoral researcher in chemical engineering at U-M, who helped calculate chirality in the proteins and nanoparticles. Paul Bogdan and Xiongye Xiao, a professor and PhD student, respectively, in electrical and computer engineering at USC [University of Southern California] contributed to the graph theory descriptors. Cha then worked with them to design and train the neural network, comparing different machine learning models. All authors helped analyze the data.
…
Here are links to and a citation for the research briefing and paper, respectively,
Unifying structural descriptors for biological and bioinspired nanoscale complexes by Minjeong Cha, Emine Sumeyra Turali Emre, Xiongye Xiao, Ji-Young Kim, Paul Bogdan, J. Scott VanEpps, Angela Violi & Nicholas A. Kotov. Nature Computational Science volume 2, pages 243–252 (2022) Issue Date: April 2022 DOI: https://doi.org/10.1038/s43588-022-00229-w Published: 28 April 2022
Turns out entropy binds nanoparticles a lot like electrons bind atoms in chemical crystals
ANN ARBOR—Entropy, a physical property often explained as “disorder,” is revealed as a creator of order with a new bonding theory developed at the University of Michigan and published in the Proceedings of the National Academy of Sciences [PNAS].
Engineers dream of using nanoparticles to build designer materials, and the new theory can help guide efforts to make nanoparticles assemble into useful structures. The theory explains earlier results exploring the formation of crystal structures by space-restricted nanoparticles, enabling entropy to be quantified and harnessed in future efforts.
And curiously, the set of equations that govern nanoparticle interactions due to entropy mirror those that describe chemical bonding. Sharon Glotzer, the Anthony C. Lembke Department Chair of Chemical Engineering, and Thi Vo, a postdoctoral researcher in chemical engineering, answered some questions about their new theory.
What is entropic bonding?
Glotzer: Entropic bonding is a way of explaining how nanoparticles interact to form crystal structures. It’s analogous to the chemical bonds formed by atoms. But unlike atoms, there aren’t electron interactions holding these nanoparticles together. Instead, the attraction arises because of entropy.
Oftentimes, entropy is associated with disorder, but it’s really about options. When nanoparticles are crowded together and options are limited, it turns out that the most likely arrangement of nanoparticles can be a particular crystal structure. That structure gives the system the most options, and thus the highest entropy. Large entropic forces arise when the particles become close to one another.
By doing the most extensive studies of particle shapes and the crystals they form, my group found that as you change the shape, you change the directionality of those entropic forces that guide the formation of these crystal structures. That directionality simulates a bond, and since it’s driven by entropy, we call it entropic bonding.
Why is this important?
Glotzer: Entropy’s contribution to creating order is often overlooked when designing nanoparticles for self-assembly, but that’s a mistake. If entropy is helping your system organize itself, you may not need to engineer explicit attraction between particles—for example, using DNA or other sticky molecules—with as strong an interaction as you thought. With our new theory, we can calculate the strength of those entropic bonds.
While we’ve known that entropic interactions can be directional like bonds, our breakthrough is that we can describe those bonds with a theory that line-for-line matches the theory that you would write down for electron interactions in actual chemical bonds. That’s profound. I’m amazed that it’s even possible to do that. Mathematically speaking, it puts chemical bonds and entropic bonds on the same footing. This is both fundamentally important for our understanding of matter and practically important for making new materials.
Electrons are the key to those chemical equations though. How did you do this when no particles mediate the interactions between your nanoparticles?
Glotzer: Entropy is related to the free space in the system, but for years I didn’t know how to count that space. Thi’s big insight was that we could count that space using fictitious point particles. And that gave us the mathematical analogue of the electrons.
Vo: The pseudoparticles move around the system and fill in the spaces that are hard for another nanoparticle to fill—we call this the excluded volume around each nanoparticle. As the nanoparticles become more ordered, the excluded volume around them becomes smaller, and the concentration of pseudoparticles in those regions increases. The entropic bonds are where that concentration is highest.
In crowded conditions, the entropy lost by increasing the order is outweighed by the entropy gained by shrinking the excluded volume. As a result, the configuration with the highest entropy will be the one where pseudoparticles occupy the least space.
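Here’s a back-of-the-envelope sketch of the pseudoparticle idea: scatter random sample points in a box of hard disks and count how many land outside every particle’s exclusion zone. This is a dilute 2D toy of my own devising, not the theory’s actual machinery; the lattice is spaced so the exclusion zones don’t overlap, which lets the Monte Carlo estimate be checked against simple geometry,

```python
import numpy as np

rng = np.random.default_rng(0)

def accessible_fraction(centers, excl_radius, box, n_samples=200_000):
    """Estimate the fraction of the box a point-sized 'pseudoparticle' can occupy,
    i.e. the space outside every particle's exclusion zone, by random sampling."""
    pts = rng.uniform(0.0, box, size=(n_samples, 2))
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) > excl_radius))

# 4x4 square lattice of disks in a 10x10 box (no periodic boundaries, for simplicity)
xs = np.arange(1.25, 10.0, 2.5)
centers = np.array([(x, y) for x in xs for y in xs])
excl_radius = 0.5  # for hard disks of radius 0.25, another disk's center is excluded within 2r

frac = accessible_fraction(centers, excl_radius, box=10.0)
exact = 1.0 - len(centers) * np.pi * excl_radius**2 / 100.0  # zones don't overlap here
print(f"sampled {frac:.4f} vs exact {exact:.4f}")
```

In the crowded regimes the theory addresses, the exclusion zones overlap heavily and the sampled concentration of pseudoparticles varies from place to place; the entropic bonds sit where that concentration peaks.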
The research is funded by the Simons Foundation, Office of Naval Research, and the Office of the Undersecretary of Defense for Research and Engineering. It relied on the computing resources of the National Science Foundation’s Extreme Science and Engineering Discovery Environment. Glotzer is also the John Werner Cahn Distinguished University Professor of Engineering, the Stuart W. Churchill Collegiate Professor of Chemical Engineering, and a professor of materials science and engineering, macromolecular science and engineering, and physics at U-M.
Here’s a link to and a citation for the paper,
A theory of entropic bonding by Thi Vo and Sharon C. Glotzer. PNAS January 25, 2022 119 (4) e2116414119 DOI: https://doi.org/10.1073/pnas.2116414119
An October 21, 2021 news item on phys.org features a quote about nothingness and symmetry (Note: A link has been removed),
In research that could inform future high-performance nanomaterials, a University of Michigan-led team has uncovered for the first time how mollusks build ultradurable structures with a level of symmetry that outstrips everything else in the natural world, with the exception of individual atoms.
“We humans, with all our access to technology, can’t make something with a nanoscale architecture as intricate as a pearl,” said Robert Hovden, U-M assistant professor of materials science and engineering and an author on the paper. “So we can learn a lot by studying how pearls go from disordered nothingness to this remarkably symmetrical structure.” [emphasis mine]
The analysis was done in collaboration with researchers at the Australian National University, Lawrence Berkeley National Laboratory, Western Norway University [of Applied Sciences] and Cornell University.
…
a. A Keshi pearl that has been sliced into pieces for study. b. A magnified cross-section of the pearl shows its transition from its disorderly center to thousands of layers of finely matched nacre. c. A magnification of the nacre layers shows their self-correction—when one layer is thicker, the next is thinner to compensate, and vice versa. d, e: Atomic scale images of the nacre layers. f, g, h, i: Microscopy images detail the transitions between the pearl’s layers. Credit: University of Michigan
Published in the Proceedings of the National Academy of Sciences [PNAS], the study found that a pearl’s symmetry becomes more and more precise as it builds, answering centuries-old questions about how the disorder at its center becomes a sort of perfection.
Layers of nacre, the iridescent and extremely durable organic-inorganic composite that also makes up the shells of oysters and other mollusks, build on a shard of aragonite that surrounds an organic center. The layers, which make up more than 90% of a pearl’s volume, become progressively thinner and more closely matched as they build outward from the center.
Perhaps the most surprising finding is that mollusks maintain the symmetry of their pearls by adjusting the thickness of each layer of nacre. If one layer is thicker, the next tends to be thinner, and vice versa. The pearl pictured in the study contains 2,615 finely matched layers of nacre, deposited over 548 days.
“These thin, smooth layers of nacre look a little like bed sheets, with organic matter in between,” Hovden said. “There’s interaction between each layer, and we hypothesize that that interaction is what enables the system to correct as it goes along.”
The team also uncovered details about how the interaction between layers works. A mathematical analysis of the pearl’s layers show that they follow a phenomenon known as “1/f noise,” where a series of events that seem to be random are connected, with each new event influenced by the one before it. 1/f noise has been shown to govern a wide variety of natural and human-made processes including seismic activity, economic markets, electricity, physics and even classical music.
“When you roll dice, for example, every roll is completely independent and disconnected from every other roll. But 1/f noise is different in that each event is linked,” Hovden said. “We can’t predict it, but we can see a structure in the chaos. And within that structure are complex mechanisms that enable a pearl’s thousands of layers of nacre to coalesce toward order and precision.”
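For the curious, 1/f noise is easy to generate and verify numerically. Here’s a short sketch (my own, unrelated to the team’s analysis of the pearl data) that builds a 1/f series by giving each frequency an amplitude proportional to 1/√f with a random phase, then fits the power spectrum’s log-log slope, which should come out near -1,

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**14

# Build a spectrum with amplitude ∝ 1/sqrt(f) (so power ∝ 1/f) and random phases
freqs = np.fft.rfftfreq(n, d=1.0)
amps = np.zeros_like(freqs)
amps[1:] = 1.0 / np.sqrt(freqs[1:])  # skip the zero-frequency (DC) bin
phases = rng.uniform(0, 2 * np.pi, size=freqs.shape)
spectrum = amps * np.exp(1j * phases)
signal = np.fft.irfft(spectrum, n=n)

# Recover the power spectrum of the generated series and fit its log-log slope
power = np.abs(np.fft.rfft(signal))**2
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
print(f"fitted spectral slope: {slope:.2f}")
```

A slope of 0 would be uncorrelated white noise (the dice rolls in Hovden’s example); the slope near -1 is the signature of events that are linked, each influenced by the ones before it.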
The team found that pearls lack true long-range order—the kind of carefully planned symmetry that keeps the hundreds of layers in brick buildings consistent. Instead, pearls exhibit medium-range order, maintaining symmetry for around 20 layers at a time. This is enough to maintain consistency and durability over the thousands of layers that make up a pearl.
The team gathered their observations by studying Akoya “keshi” pearls, produced by the Pinctada imbricata fucata oyster near the eastern shoreline of Australia. They selected these particular pearls, which measure around 50 millimeters in diameter, because they form naturally, as opposed to bead-cultured pearls, which have an artificial center. Each pearl was cut with a diamond wire saw into sections measuring three to five millimeters in diameter, then polished and examined under an electron microscope.
Hovden says the study’s findings could help inform next-generation materials with precisely layered nanoscale architecture.
“When we build something like a brick building, we can build in periodicity through careful planning and measuring and templating,” he said. “Mollusks can achieve similar results on the nanoscale by using a different strategy. So we have a lot to learn from them, and that knowledge could help us make stronger, lighter materials in the future.”
Here’s a link to and a citation for the paper,
The mesoscale order of nacreous pearls by Jiseok Gim, Alden Koch, Laura M. Otter, Benjamin H. Savitzky, Sveinung Erland, Lara A. Estroff, Dorrit E. Jacob, and Robert Hovden. PNAS vol. 118 no. 42 e2107477118 DOI: https://doi.org/10.1073/pnas.2107477118 Published in issue October 19, 2021 Published online October 18, 2021
They’re usually called apostates: people who switch from one belief to its opposite. In this case, an advocate who opposed genetically modified foods switched sides, as a January 17, 2019 news item on ScienceDaily explains,
What happens when a strong advocate for one side of a controversial issue in science publicly announces that he or she now believes the opposite? Does the message affect the views of those who witness it — and if so, how?
Although past research suggests that such “conversion messages” may be an effective persuasion technique, the actual effect of such messages has been unknown.
Now, a new study from researchers at the Annenberg Public Policy Center shows that such a conversion message can influence public attitudes toward genetically modified (GM) foods.
Using video of a talk by the British environmentalist Mark Lynas about his transformation from an opponent of GM crops to an advocate, researchers found that Lynas’ conversion narrative had a greater impact on the attitudes of people who viewed it than a direct advocacy message.
“People exposed to the conversion message rather than a simple pro-GM message had a more favorable attitude toward GM foods,” said Benjamin A. Lyons, a former postdoctoral fellow at the Annenberg Public Policy Center (APPC) of the University of Pennsylvania. “The two-sided nature of the conversion message – presenting old beliefs and then refuting them – was more effective than a straightforward argument in favor of GM crops.”
“Conversion messages and attitude change: Strong arguments, not costly signals” was published in January 2019 in the journal Public Understanding of Science. The study was done by Lyons, now a research fellow at the University of Exeter, U.K., with two other former APPC postdoctoral fellows – Ariel Hasell, a research fellow at the University of Michigan, and Meghnaa Tallapragada, an assistant professor of strategic communication at Clemson University – and APPC Director Kathleen Hall Jamieson.
How the study worked
In 2013, Lynas, a journalist and activist who had opposed GM crops, spoke at the Oxford Farming Conference about his change of belief. In the current experiment, APPC researchers showed video excerpts from Lynas’ talk to more than 650 U.S. adult participants, who completed a survey about it.
The respondents each were shown one of three video clips: 1) Lynas explaining the benefits of GM crops; 2) Lynas discussing his prior beliefs and changing his mind about GM crops; and 3) Lynas explaining why his beliefs changed, including the realization that the anti-GM movement he helped to lead was a form of anti-science environmentalism.
The researchers found that both forms of the conversion message (2 and 3) were more influential than the simple advocacy message. There was no difference in impact between the basic conversion message and the more elaborate one.
Measuring how the conversion narrative worked, the researchers found that it enhanced Lynas’ “perceived argument strength,” rather than bolstering his personal credibility, a distinction they considered important. The fact that argument strength served as a mediator on GM attitudes supports the idea that “the unexpected shift in the position of the speaker … prompted central or systematic processing of the argument,” which, in turn, implies a more durable change in attitudes.
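For readers wondering what “served as a mediator” means in practice, here’s a toy mediation analysis on simulated data (all numbers invented; this is the textbook linear-regression decomposition, not the study’s actual model): the total effect of the message splits exactly into a direct effect plus an indirect effect routed through argument strength,

```python
import numpy as np

rng = np.random.default_rng(42)
n = 650  # roughly the study's sample size

# Simulated data: condition (0 = advocacy, 1 = conversion) raises perceived
# argument strength, which in turn raises GM attitude. Coefficients invented.
condition = rng.integers(0, 2, size=n).astype(float)
strength = 0.5 * condition + rng.normal(size=n)
attitude = 0.6 * strength + 0.05 * condition + rng.normal(size=n)

def ols(y, X):
    """Ordinary least squares with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X)])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(strength, condition)[1]  # condition -> mediator path
b, c_direct = ols(attitude, np.column_stack([strength, condition]))[1:]
c_total = ols(attitude, condition)[1]

# For linear OLS the decomposition is exact: total = direct + a*b
print(f"total {c_total:.3f} = direct {c_direct:.3f} + indirect {a * b:.3f}")
```

When most of the total effect flows through the indirect a*b path, as in the simulation above, the mediator (here, argument strength) is doing the persuasive work rather than the speaker’s credibility.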
GM foods: A low-profile issue on which minds may be changed?
Unlike other controversial issues in science such as evolution or climate change, Americans’ views on GM crops do not seem to be related to political ideology or religious beliefs. Nor are Americans especially knowledgeable about GM foods – one prior study found that only 43 percent of Americans know that GM foods are available for human consumption and only 26 percent believe that they have eaten food that was genetically modified. In another earlier study, 71 percent of Americans say they have heard little or nothing about GM foods – yet 39 percent think GM foods present a risk to human health.
Given that many Americans’ views on genetically modified foods aren’t yet fixed by group values and motivated reasoning [emphasis mine], their minds may be more easily changeable on this issue. Lyons said it may be possible to present scientific evidence through a conversion narrative to people on such low-knowledge, lower-profile issues and affect their views.
“After completing this study, I’m more optimistic about our ability to change minds on the issues that haven’t been totally polluted by ideology,” Lyons said.
The researchers cautioned that the findings may not extend beyond an American audience, and said that their audience included many who did not have strong pro- or anti-GM attitudes. They said conversion messaging should be tested with people who do have strong pre-existing views on GM foods. They also noted that this research tested a conversion in only one direction – from anti-GM to pro-GM foods – and said it would be valuable to explore the opposite case.