Tag Archives: Matt Raymond

Nanobiotics and artificial intelligence (AI)

Antibiotics at the nanoscale = nanobiotics. For a more complete explanation, there’s this (Note: the video runs a little longer than most of the others embedded on this blog),

Before pushing further into this research, a note about antibiotic resistance. In a sense, we’ve created the problem we (those scientists in particular) are trying to solve.

Antibiotics and cleaning products kill 99.9% of the bacteria, leaving the 0.1% that happen to be resistant. Like so many living things on earth, bacteria reproduce, and the survivors pass that resistance on. A new antibiotic is then needed and discovered; it too kills 99.9% of the bacteria, and the 0.1% left are now resistant to two antibiotics. And so it goes.
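The selection dynamic described above can be sketched with a toy simulation. This is illustrative only (the population sizes, kill rate, and regrowth rule are invented for the example; real resistance dynamics involve mutation, fitness costs, and gene transfer), but it shows how quickly a 0.1% resistant minority comes to dominate under repeated treatment:

```python
# Toy model of antibiotic selection pressure (illustrative only).

def apply_antibiotic(susceptible, resistant, kill_rate=0.999):
    """Kill a fraction of susceptible bacteria; resistant ones survive."""
    return susceptible * (1 - kill_rate), resistant

def regrow(susceptible, resistant, total=1_000_000):
    """Let survivors repopulate to a fixed carrying capacity,
    preserving their relative proportions."""
    pop = susceptible + resistant
    return susceptible / pop * total, resistant / pop * total

susceptible, resistant = 999_000.0, 1_000.0  # 0.1% resistant at the start
for treatment in range(3):
    susceptible, resistant = apply_antibiotic(susceptible, resistant)
    susceptible, resistant = regrow(susceptible, resistant)
    frac = resistant / (susceptible + resistant)
    print(f"after treatment {treatment + 1}: {frac:.1%} resistant")
```

After just two rounds of treatment and regrowth, the once-rare resistant strain makes up essentially the whole population.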

As the scientists have made clear, we're running out of options using standard methods, and they're hoping the 'nanoparticle approach' described in a June 5, 2023 news item on Nanowerk will work. Note: A link has been removed,

Identifying whether and how a nanoparticle and protein will bind with one another is an important step toward being able to design antibiotics and antivirals on demand, and a computer model developed at the University of Michigan can do it.

The new tool could help find ways to stop antibiotic-resistant infections and new viruses—and aid in the design of nanoparticles for different purposes.

“Just in 2019, the number of people who died of antimicrobial resistance was 4.95 million. Even before COVID, which worsened the problem, studies showed that by 2050, the number of deaths by antibiotic resistance will be 10 million,” said Angela Violi, an Arthur F. Thurnau Professor of mechanical engineering, and corresponding author of the study that made the cover of Nature Computational Science (“Domain-agnostic predictions of nanoscale interactions in proteins and nanoparticles”).

“In my ideal scenario, 20 or 30 years from now, I would like—given any superbug—to be able to quickly produce the best nanoparticles that can treat it.”

A June 5, 2023 University of Michigan news release (also on EurekAlert), which originated the news item, provides more technical details, Note: A link has been removed,

Much of the work within cells is done by proteins. Interaction sites on their surfaces can stitch molecules together, break them apart and perform other modifications—opening doorways into cells, breaking sugars down to release energy, building structures to support groups of cells and more. If we could design medicines that target crucial proteins in bacteria and viruses without harming our own cells, that would enable humans to fight new and changing diseases quickly.

The new [computer] model, named NeCLAS (Nanoparticle-Computed Ligand Affinity Scoring), uses machine learning—the AI technique that powers the virtual assistant on your smartphone and ChatGPT. But instead of learning to process language, it absorbs structural models of proteins and their known interaction sites. From this information, it learns to extrapolate how proteins and nanoparticles might interact, predict binding sites and the likelihood of binding between them—as well as predicting interactions between two proteins or two nanoparticles.
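To make the prediction task concrete, here is a drastically simplified sketch of what "scoring the likelihood of binding" means. The feature names, weights, and the logistic form are all invented for illustration; NeCLAS itself is a trained neural network over structural data, not this hand-set formula:

```python
# Highly simplified illustration of a binding-likelihood score:
# map a feature vector describing a candidate pairing to a probability.
import math

def binding_probability(features, weights, bias=-1.0):
    """Logistic score: larger weighted feature sums give a higher
    predicted likelihood of binding."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical features: [charge complementarity, hydrophobic contact,
# surface-shape match], each scaled to 0..1.
weights = [2.0, 1.5, 1.0]
good_pair = [0.9, 0.8, 0.7]
poor_pair = [0.1, 0.2, 0.1]
print(f"good pair: {binding_probability(good_pair, weights):.2f}")
print(f"poor pair: {binding_probability(poor_pair, weights):.2f}")
```

The real model learns both the features and the weights from known protein interaction sites, which is what lets it generalize to pairings it has never seen.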

“Other models exist, but ours is the best for predicting interactions between proteins and nanoparticles,” said Paolo Elvati, U-M associate research scientist in mechanical engineering.

AlphaFold, for example, is a widely used tool for predicting the 3D structure of a protein based on its building blocks, called amino acids. While this capacity is crucial, this is only the beginning: Discovering how these proteins assemble into larger structures and designing practical nanoscale systems are the next steps.

“That’s where NeCLAS comes in,” said Jacob Saldinger, U-M doctoral student in chemical engineering and first author of the study. “It goes beyond AlphaFold by showing how nanostructures will interact with one another, and it’s not limited to proteins. This enables researchers to understand the potential applications of nanoparticles and optimize their designs.”

The team tested three case studies for which they had additional data: 

  • Molecular tweezers, in which a molecule binds to a particular site on another molecule. This approach can stop harmful biological processes, such as the aggregation of protein plaques in diseases of the brain like Alzheimer’s.
  • How graphene quantum dots break up the biofilm produced by staph bacteria. These nanoparticles are flakes of carbon, no more than a few atomic layers thick and 0.0001 millimeters to a side. Breaking up biofilms is likely a crucial tool in fighting antibiotic-resistant infections—including the superbug methicillin-resistant Staphylococcus aureus (MRSA), commonly acquired at hospitals.
  • Whether graphene quantum dots would disperse in water, demonstrating the model’s ability to predict nanoparticle-nanoparticle binding even though it had been trained exclusively on protein-protein data.

While many protein-protein models set amino acids as the smallest unit that the model must consider, this doesn’t work for nanoparticles. Instead, the team set the size of that smallest feature to be roughly the size of the amino acid but then let the computer model decide where the boundaries between these minimum features were. The result is representations of proteins and nanoparticles that look a bit like collections of interconnected beads, providing more flexibility in exploring small scale interactions.
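The bead idea in the paragraph above can be sketched as a simple spatial clustering. This is a hypothetical stand-in (the greedy rule and radius here are invented for illustration; in NeCLAS the bead boundaries are decided by the model itself), but it shows how atoms collapse into amino-acid-sized beads:

```python
# Illustrative coarse-graining: group nearby 3D atom coordinates into
# "beads" of a chosen size instead of fixing amino acids as the unit.
import math

def coarse_grain(atoms, bead_radius=3.0):
    """Greedily cluster atoms into beads: each atom joins the first
    bead whose center lies within bead_radius, else it seeds a new bead."""
    beads = []  # each bead: {"center": (x, y, z), "members": [...]}
    for atom in atoms:
        for bead in beads:
            if math.dist(atom, bead["center"]) <= bead_radius:
                bead["members"].append(atom)
                # recenter the bead on the mean of its member coordinates
                n = len(bead["members"])
                bead["center"] = tuple(
                    sum(m[i] for m in bead["members"]) / n for i in range(3)
                )
                break
        else:
            beads.append({"center": atom, "members": [atom]})
    return beads

atoms = [(0, 0, 0), (1, 0, 0), (0.5, 1, 0), (10, 10, 10), (11, 10, 10)]
beads = coarse_grain(atoms)
print(f"{len(atoms)} atoms -> {len(beads)} beads")
```

The resulting "collections of interconnected beads" are what the model actually reasons over, which is why the same machinery works for both proteins and nanoparticles.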

“Besides being more general, NeCLAS also uses way less training data than AlphaFold. We only have 21 nanoparticles to look at, so we have to use protein data in a clever way,” said Matt Raymond, U-M doctoral student in electrical and computer engineering and study co-author.  

Next, the team intends to explore other biofilms and microorganisms, including viruses.

The Nature Computational Science study was funded by the University of Michigan Blue Sky Initiative, the Army Research Office and the National Science Foundation. 

Here’s a link to and a citation for the paper,

Domain-agnostic predictions of nanoscale interactions in proteins and nanoparticles by Jacob Charles Saldinger, Matt Raymond, Paolo Elvati & Angela Violi. Nature Computational Science 3, 393–402 (2023). Published: 01 May 2023. DOI: https://doi.org/10.1038/s43588-023-00438-x

This paper is behind a paywall.

Dr. Wei Lu, the memristor, and the cat brain; military surveillance takes a Star Trek: Next Generation turn with a medieval twist; archiving tweets; patents and innovation

Last week I featured the ‘memristor’ story, mentioning that much of the latest excitement was set off by Dr. Wei Lu’s work at the University of Michigan (U-M). While HP Labs was the center for much of the interest, it was Dr. Lu’s work (published in Nano Letters, behind a paywall) that provoked the renewed interest. Thanks to this news item on Nanowerk, I’ve now found more details about Dr. Lu and his team’s work,

U-M computer engineer Wei Lu has taken a step toward developing this revolutionary type of machine that could be capable of learning and recognizing, as well as making more complex decisions and performing more tasks simultaneously than conventional computers can.

Lu previously built a “memristor,” a device that replaces a traditional transistor and acts like a biological synapse, remembering past voltages it was subjected to. Now, he has demonstrated that this memristor can connect conventional circuits and support a process that is the basis for memory and learning in biological systems.

Here’s where it gets interesting,

In a conventional computer, logic and memory functions are located at different parts of the circuit and each computing unit is only connected to a handful of neighbors in the circuit. As a result, conventional computers execute code in a linear fashion, line by line, Lu said. They are excellent at performing relatively simple tasks with limited variables.

But a brain can perform many operations simultaneously, or in parallel. That’s how we can recognize a face in an instant, but even a supercomputer would take much, much longer and consume much more energy in doing so.

So far, Lu has connected two electronic circuits with one memristor. He has demonstrated that this system is capable of a memory and learning process called “spike timing dependent plasticity.” This type of plasticity refers to the ability of connections between neurons to become stronger based on when they are stimulated in relation to each other. Spike timing dependent plasticity is thought to be the basis for memory and learning in mammalian brains.

“We show that we can use voltage timing to gradually increase or decrease the electrical conductance in this memristor-based system. In our brains, similar changes in synapse conductance essentially give rise to long term memory,” Lu said.
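The voltage-timing rule Lu describes can be sketched with the textbook form of spike timing dependent plasticity. This is an illustrative model, not Lu's actual device equations (the amplitudes and time constant here are invented): conductance rises when the pre-synaptic spike precedes the post-synaptic one, and falls when the order is reversed.

```python
# Textbook-style STDP update: conductance change depends on the
# relative timing of pre- and post-synaptic spikes.
import math

def stdp_delta_g(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Return the conductance change for one spike pair (times in ms).
    Pre before post (dt > 0) potentiates; post before pre depresses."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # strengthen the synapse
    return -a_minus * math.exp(dt / tau)      # weaken the synapse

g = 1.0  # initial conductance (arbitrary units)
for t_pre, t_post in [(0, 5), (30, 33), (60, 55)]:
    g += stdp_delta_g(t_pre, t_post)
    print(f"dt = {t_post - t_pre:+d} ms -> g = {g:.3f}")
```

In the memristor version, the device's conductance itself plays the role of `g`, gradually strengthening or weakening with each timed voltage pair, which is what makes it behave like a biological synapse.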

Do visit Nanowerk for the full explanation provided by Dr. Lu, if you’re so inclined. In one of my earlier posts about this, I speculated that the work was being funded by DARPA (Defense Advanced Research Projects Agency), which is part of the US Department of Defense. Happily, I found this at the end of today’s news item,

Lu said an electronic analog of a cat brain would be able to think intelligently at the cat level. For example, if the task were to find the shortest route from the front door to the sofa in a house full of furniture, and the computer knows only the shape of the sofa, a conventional machine could accomplish this. But if you moved the sofa, it wouldn’t realize the adjustment and find a new path. That’s what engineers hope the cat brain computer would be capable of. The project’s major funder, the Defense Advanced Research Projects Agency [emphasis mine], isn’t interested in sofas. But this illustrates the type of learning the machine is being designed for.

I previously mentioned the story here on April 8, 2010 and provided links that led to other aspects of the story as I and others have covered it.

Military surveillance

Named after Argus Panoptes, the hundred-eyed sentry of Greek mythology, the ‘Panoptes’ camera platform has spawned two new applications, announced by researchers in a news item on Azonano,

Researchers are expanding new miniature camera technology for military and security uses so soldiers can track combatants in dark caves or urban alleys, and security officials can unobtrusively identify a subject from an iris scan.

The two new surveillance applications both build on “Panoptes,” a platform technology developed under a project led by Marc Christensen at Southern Methodist University in Dallas. The Department of Defense is funding development of the technology’s first two extension applications with a $1.6 million grant.

The following image, which accompanies the article at the Southern Methodist University (SMU) website, features an individual who suggests a combination of the Geordi character in Star Trek: The Next Generation, with his ‘sensing visor’, and a medieval knight in full armour wearing his helmet with the visor down.

Soldier wearing helmet with hi-res "eyes" courtesy of Southern Methodist University Research

From the article on the SMU site,

“The Panoptes technology is sufficiently mature that it can now leave our lab, and we’re finding lots of applications for it,” said ‘Marc’ Christensen [project leader], an expert in computational imaging and optical interconnections. “This new money will allow us to explore Panoptes’ use for non-cooperative iris recognition systems for Homeland Security and other defense applications. And it will allow us to enhance the camera system to make it capable of active illumination so it can travel into dark places — like caves and urban areas.”

Well, there’s nothing like some non-cooperative iris scanning. In fact, you won’t know the scanning is taking place if they’re successful with their newest research, which suggests the panopticon, Jeremy Bentham’s 18th-century concept of a prison in which surveillance takes place without the prisoners being aware of it (Wikipedia essay here).

Archiving tweets

The US Library of Congress has just announced that it will be saving (archiving) all the ‘tweets’ that have been sent since Twitter launched four years ago. From the news item on physorg.com,

“Library to acquire ENTIRE Twitter archive — ALL public tweets, ever, since March 2006!” the Washington-based library, the world’s largest, announced in a message on its Twitter account at Twitter.com/librarycongress.

“That’s a LOT of tweets, by the way: Twitter processes more than 50 million tweets every day, with the total numbering in the billions,” Matt Raymond of the Library of Congress added in a blog post.

Raymond highlighted the “scholarly and research implications” of acquiring the micro-blogging service’s archive.

He said the messages being archived include the first-ever “tweet,” sent by Twitter co-founder Jack Dorsey, and the one that ran on Barack Obama’s Twitter feed when he was elected president.

Meanwhile, Google made an announcement about another Twitter-related development: Google Replay, its real-time search function, which will give you data about the specific tweets made on a particular date. Dave Bruggeman at the Pasco Phronesis blog offers more information and a link to the beta version of Google Replay.

Patents and innovation

I find it interesting that countries and international organizations use the number of patents filed as an indicator of scientific progress while studies indicate that the opposite may be true. This news item on Science Daily strongly suggests that there are significant problems with the current system. From the news item,

As single-gene tests give way to multi-gene or even whole-genome scans, exclusive patent rights could slow promising new technologies and business models for genetic testing even further, the Duke [Institute for Genome Sciences and Policy] researchers say.

The findings emerge from a series of case studies that examined genetic risk testing for 10 clinical conditions, including breast and colon cancer, cystic fibrosis and hearing loss. …

In seven of the conditions, exclusive licenses have been a source of controversy. But in no case was the holder of exclusive patent rights the first to market with a test.

“That finding suggests that while exclusive licenses have proven valuable for developing drugs and biologics that might not otherwise be developed, in the world of gene testing they are mainly a tool for clearing the field of competition [emphasis mine], and that is a sure-fire way to irritate your customers, both doctors and patients,” said Robert Cook-Deegan, director of the IGSP Center for Genome Ethics, Law & Policy.

This isn’t an argument against the entire patenting system but rather the use of exclusive licenses.