Category Archives: robots

A newsletter from the Pan-Canadian AI strategy folks

The AICan (Artificial Intelligence Canada) Bulletin is published by CIFAR (Canadian Institute For Advanced Research) and is the official newsletter for the Pan-Canadian AI Strategy. It is a joint production of CIFAR, Amii (Alberta Machine Intelligence Institute), Mila (Quebec’s Artificial Intelligence research institute) and the Vector Institute for Artificial Intelligence (Toronto, Ontario).

For anyone curious about the Pan-Canadian Artificial Intelligence Strategy, first announced in the 2017 federal budget, I have a March 31, 2017 post that focuses heavily on the then-new Vector Institute but also contains information about the artificial intelligence scene in Canada at the time, which is at least in part still relevant today.

The AICan Bulletin October 2021 issue number 16 (The Energy and Environment Issue) is available for viewing here and includes these articles,

Equity, diversity and inclusion in AI climate change research

The effects of climate change significantly impact our most vulnerable populations. Canada CIFAR AI Chair David Rolnick (Mila) and Tami Vasanthakumaran (Girls Belong Here) share their insights and call to action for the AI research community.

Predicting the perfect storm

Canada CIFAR AI Chair Samira Kahou (Mila) is using AI to detect and predict extreme weather events to aid in disaster management and raise awareness for the climate crisis.

AI in biodiversity is crucial to our survival

Graham Taylor, a Canada CIFAR AI Chair at the Vector Institute, is using machine learning to build an inventory of life on Earth with DNA barcoding.

ISL Adapt uses ML to make water treatment cleaner & greener

Amii, the University of Alberta, and ISL Engineering explore how machine learning can make water treatment more environmentally friendly and cost-effective with the support of Amii Fellows and Canada CIFAR AI Chairs — Adam White, Martha White and Csaba Szepesvári.

This climate does not exist: Picturing impacts of the climate crisis with AI, one address at a time

Immerse yourself in this AI-driven virtual experience based on empathy to visualize the impacts of climate change on places you hold dear with Mila.

The bulletin also features AI stories from Canada and the US, as well as events and job postings.

I found two different pages where you can subscribe. First, there’s this subscription page (which is at the bottom of the October 2021 bulletin) and then there’s this page, which requires more details from you.

I’ve taken a look at the CIFAR website and can’t find any of the previous bulletins on it, which would seem to make subscription the only means of access.

Autonopia will pilot automated window cleaning in Vancouver (Canada) in 2022

Construction worker working outdoors with the project. Courtesy: Autonopia

Kenneth Chan in a June 10, 2021 article for the Daily Hive describes a startup company in Vancouver (Canada), which hopes to run a pilot project in 2022 for its “HŌMĀN, a highly capable, fast and efficient autonomous machine, designed specifically for cleaning the glasses [windows] perfectly and quickly.” (The description is from Autonopia’s homepage.)

Chan’s June 10, 2021 article describes the new automated window washer as a Roomba-like robot,

The business of washing windows on a tower with human labour is a dangerous, inefficient, and costly practice, but a Vancouver innovator’s robotic solution could potentially disrupt this service globally.

Researchers with robotic systems startup Autonopia have come up with a robot that can mimic the behaviour of human window washers, including getting into the nooks and crannies of all types of complicated building facades — any surface structure.

It is also far more efficient than humans, cleaning windows three to four times faster, and can withstand wind and cold temperatures. According to a [news?] release, the robot is described as a modular device with a plug-and-play design [emphasis mine] that allows it to work on any building without requiring any additional infrastructure to be installed.

While artificial intelligence and the robotic device replaces manual work, it still requires a skilled operator to oversee the cleaning.

“It’s intimidating, hard work that most workers don’t want to do, [emphasis mine]” said Autonopia co-founder Mohammad Dabiri, who came up with the idea after witnessing an accident in Southeast Asia [emphasis mine].

“There’s high overhead to manage the hiring, allocation and training of workers, and sometimes they quit as soon as it comes time to go on a high rise.”

“We realized this problem has existed for a while, and yet none of the available solutions has managed to scale,” said Kamali Hossein, the co-founder and CTO of Autonopia, and a Mitacs postdoctoral research [sic] in mechatronic systems engineering at Simon Fraser University.

To clarify, the company is Autonopia and the product the company is promoting is HŌMĀN, an automated or robotic window washer for tall buildings (towers).

HŌMĀN (as it’s written in the Encyclopedia Iranica) or Houmān (as it’s written in Wikipedia) seems to be a literary hero or, perhaps, superhero,

… is one of the most famous Turanian heroes in Shahnameh, the national epic of Greater Iran. Houmān is famous for his bravery, loyalty, and chivalry, such that even Iranians who are longtime enemies of Turanians admire his personality. He is a descendant of Tur, a son of Viseh and brother of Piran. Houmān is the highest ranking Turanian commander and after Piran, he is the second leading member of Viseh clan. Houman first appears in the story of Rostam and Sohrab, …

Autonopia’s website is very attractive and weirdly uninformative. I looked for a more in-depth description of ‘plug and play’ and found this,

Modular and Maintainable

The design of simple, but highly capable and modular components, along with the overall simplicity of the robot structure allows for a shorter build time and maintenance turnover. …

Cleans any tower

The flexible and capable design of the robot allows it to adjust to the complexities of the structures and it can maneuver uneven surfaces of different buildings very quickly and safely. No tower is off-limits for HŌMĀN. It is designed to cater to the specific requirements of each high-rise

I wish there were more details about the hardware and the software; e.g., there’s no mention of the artificial intelligence cited in Chan’s article.

As for whether or not this is “intimidating, hard work that most workers don’t want to do,” I wonder how Mohammad Dabiri can be so certain. If this product is successful, it will have an impact on people who rely on this work for their livelihoods. Possibly adding some insult to injury, Dabiri and Hossein claim their product is better at the job than humans are.

Nobody can argue about making work safer, but it would be nice if some of these eager, entrepreneurial types put some thought into the impact, both positive and negative, that their bright ideas can have on other people.

As for whether HŌMĀN can work on any tower, photographs like the one at the beginning of this posting feature modern office buildings which look like glass sheets held together with steel and concrete. So, it doesn’t look likely to work (and it’s probably not feasible from a business perspective) on older buildings with fewer stories, stone ornamentation, and even more nooks and crannies. As for some of the newer buildings which feature odd shapes and are reintroducing ornamentation, I’d imagine that will be problematic. But perhaps the market is overseas where tall buildings can range from 65 stories to over 100 stories (Wikipedia ‘List of tallest buildings‘). After all, the genesis for this project was an incident in Southeast Asia. Vancouver doesn’t have 65-story buildings—yet. But, I’m sure there’s a developer or two out there with some plans.

An algorithm for modern quilting

Caption: Each of the blocks in this quilt were designed using an algorithm-based tool developed by Stanford researchers. Credit: Mackenzie Leake

I love the colours. This research into quilting and artificial intelligence (AI) was presented at SIGGRAPH 2021 in August. (SIGGRAPH, also known as ACM SIGGRAPH, is the ‘Association for Computing Machinery’s Special Interest Group on Computer Graphics and Interactive Techniques’.)

A June 3, 2021 news item on ScienceDaily announced the presentation,

Stanford University computer science graduate student Mackenzie Leake has been quilting since age 10, but she never imagined the craft would be the focus of her doctoral dissertation. Included in that work is new prototype software that can facilitate pattern-making for a form of quilting called foundation paper piecing, which involves using a backing made of foundation paper to lay out and sew a quilted design.

Developing a foundation paper piece quilt pattern — which looks similar to a paint-by-numbers outline — is often non-intuitive. There are few formal guidelines for patterning and those that do exist are insufficient to assure a successful result.

“Quilting has this rich tradition and people make these very personal, cherished heirlooms but paper piece quilting often requires that people work from patterns that other people designed,” said Leake, who is a member of the lab of Maneesh Agrawala, the Forest Baskett Professor of Computer Science and director of the Brown Institute for Media Innovation at Stanford. “So, we wanted to produce a digital tool that lets people design the patterns that they want to design without having to think through all of the geometry, ordering and constraints.”

A paper describing this work is published and will be presented at the computer graphics conference SIGGRAPH 2021 in August.

A June 2, 2021 Stanford University news release (also on EurekAlert), which originated the news item, provides more detail,

Respecting the craft

In describing the allure of paper piece quilts, Leake cites the modern aesthetic and high level of control and precision. The seams of the quilt are sewn through the paper pattern and, as the seaming process proceeds, the individual pieces of fabric are flipped over to form the final design. All of this “sew and flip” action means the pattern must be produced in a careful order.

Poorly executed patterns can lead to loose pieces, holes, misplaced seams and designs that are simply impossible to complete. When quilters create their own paper piecing designs, figuring out the order of the seams can take considerable time – and still lead to unsatisfactory results.

“The biggest challenge that we’re tackling is letting people focus on the creative part and offload the mental energy of figuring out whether they can use this technique or not,” said Leake, who is lead author of the SIGGRAPH paper. “It’s important to me that we’re really aware and respectful of the way that people like to create and that we aren’t over-automating that process.”

This isn’t Leake’s first foray into computer-aided quilting. She previously designed a tool for improvisational quilting, which she presented [PatchProv: Supporting Improvisational Design Practices for Modern Quilting by Mackenzie Leake, Frances Lai, Tovi Grossman, Daniel Wigdor, and Ben Lafreniere] at the human-computer interaction conference CHI in May [2021]. [Note: Links to the May 2021 conference and paper added by me.]

Quilting theory

Developing the algorithm at the heart of this latest quilting software required a substantial theoretical foundation. With few existing guidelines to go on, the researchers had to first gain a more formal understanding of what makes a quilt paper piece-able, and then represent that mathematically.

They eventually found what they needed in a particular graph structure, called a hypergraph. While so-called “simple” graphs can only connect data points by lines, a hypergraph can accommodate overlapping relationships between many data points. (A Venn diagram is a type of hypergraph.) The researchers found that a pattern will be paper piece-able if it can be depicted by a hypergraph whose edges can be removed one at a time in a specific order – which would correspond to how the seams are sewn in the pattern.

The prototype software allows users to sketch out a design and the underlying hypergraph-based algorithm determines what paper foundation patterns could make it possible – if any. Many designs result in multiple pattern options and users can adjust their sketch until they get a pattern they like. The researchers hope to make a version of their software publicly available this summer.
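For readers who like to see an idea in code, here’s a toy sketch (mine, not the Stanford team’s software) of the ‘peeling’ search suggested by the hypergraph criterion: seams are hyperedges over fabric patches, and we look for an order in which they can be removed one at a time. The patch names, the seams and the simple removability test are all hypothetical stand-ins; the actual paper-pieceability condition in the paper is more involved.

```python
# Toy hypergraph: vertices are fabric patches, hyperedges are the seams
# joining the patches they touch. (Hypothetical example data.)
seams = {
    "s1": {"A", "B"},
    "s2": {"B", "C"},
    "s3": {"C", "D"},
}

def removable(seam, remaining):
    """Placeholder removability test: here a seam may be 'un-sewn' when at
    most one of its patches is still touched by another remaining seam.
    This stand-in only shows the shape of the search."""
    others = set().union(*(seams[s] for s in remaining if s != seam))
    return len(seams[seam] & others) <= 1

def peel_order(remaining):
    """Backtracking search for an order in which every seam can be removed
    one at a time; returns the order, or None if no such order exists."""
    if not remaining:
        return []
    for seam in sorted(remaining):
        if removable(seam, remaining):
            rest = peel_order(remaining - {seam})
            if rest is not None:
                return [seam] + rest
    return None

order = peel_order(set(seams))
print("piece-able" if order is not None else "not piece-able", order)
```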

“I didn’t expect to be writing my computer science dissertation on quilting when I started,” said Leake. “But I found this really rich space of problems involving design and computation and traditional crafts, so there have been lots of different pieces we’ve been able to pull off and examine in that space.”

###

Researchers from University of California, Berkeley and Cornell University are co-authors of this paper. Agrawala is also an affiliate of the Institute for Human-Centered Artificial Intelligence (HAI).

An abstract for the paper “A Mathematical Foundation for Foundation Paper Pieceable Quilts” by Mackenzie Leake, Gilbert Bernstein, Abe Davis and Maneesh Agrawala can be found here along with links to a PDF of the full paper and video on YouTube.

Afterthought: I noticed that all of the co-authors for the May 2021 paper are from the University of Toronto and most of them, including Mackenzie Leake, are associated with that university’s Chatham Labs.

Finishing Beethoven’s unfinished 10th Symphony

Throughout the project, Beethoven’s genius loomed. Circe Denyer

This is an artificial intelligence (AI) story set to music. Professor Ahmed Elgammal (Director of the Art & AI Lab at Rutgers University located in New Jersey, US) has a September 24, 2021 essay posted on The Conversation (and, later, in the Smithsonian Magazine online) describing the AI project and upcoming album release and performance (Note: A link has been removed),

When Ludwig van Beethoven died in 1827, he was three years removed from the completion of his Ninth Symphony, a work heralded by many as his magnum opus. He had started work on his 10th Symphony but, due to deteriorating health, wasn’t able to make much headway: All he left behind were some musical sketches.

A full recording of Beethoven’s 10th Symphony is set to be released on Oct. 9, 2021, the same day as the world premiere performance scheduled to take place in Bonn, Germany – the culmination of a two-year-plus effort.

These excerpts from Elgammal’s September 24, 2021 essay on The Conversation provide a summarized view of events. By the way, this isn’t the first time an attempt has been made to finish Beethoven’s 10th Symphony (Note: Links have been removed),

Around 1817, the Royal Philharmonic Society in London commissioned Beethoven to write his Ninth and 10th symphonies. Written for an orchestra, symphonies often contain four movements: the first is performed at a fast tempo, the second at a slower one, the third at a medium or fast tempo, and the last at a fast tempo.

Beethoven completed his Ninth Symphony in 1824, which concludes with the timeless “Ode to Joy.”

But when it came to the 10th Symphony, Beethoven didn’t leave much behind, other than some musical notes and a handful of ideas he had jotted down.

There have been some past attempts to reconstruct parts of Beethoven’s 10th Symphony. Most famously, in 1988, musicologist Barry Cooper ventured to complete the first and second movements. He wove together 250 bars of music from the sketches to create what was, in his view, a production of the first movement that was faithful to Beethoven’s vision.

Yet the sparseness of Beethoven’s sketches made it impossible for symphony experts to go beyond that first movement.

In early 2019, Dr. Matthias Röder, the director of the Karajan Institute, an organization in Salzburg, Austria, that promotes music technology, contacted me. He explained that he was putting together a team to complete Beethoven’s 10th Symphony in celebration of the composer’s 250th birthday. Aware of my work on AI-generated art, he wanted to know if AI would be able to help fill in the blanks left by Beethoven.

Röder then compiled a team that included Austrian composer Walter Werzowa. Famous for writing Intel’s signature bong jingle, Werzowa was tasked with putting together a new kind of composition that would integrate what Beethoven left behind with what the AI would generate. Mark Gotham, a computational music expert, led the effort to transcribe Beethoven’s sketches and process his entire body of work so the AI could be properly trained.

The team also included Robert Levin, a musicologist at Harvard University who also happens to be an incredible pianist. Levin had previously finished a number of incomplete 18th-century works by Mozart and Johann Sebastian Bach.

… We didn’t have a machine that we could feed sketches to, push a button and have it spit out a symphony. Most AI available at the time couldn’t continue an uncompleted piece of music beyond a few additional seconds.

We would need to push the boundaries of what creative AI could do by teaching the machine Beethoven’s creative process – how he would take a few bars of music and painstakingly develop them into stirring symphonies, quartets and sonatas.

Here’s Elgammal’s description of the difficulties from an AI perspective, from the September 24, 2021 essay (Note: Links have been removed),

First, and most fundamentally, we needed to figure out how to take a short phrase, or even just a motif, and use it to develop a longer, more complicated musical structure, just as Beethoven would have done. For example, the machine had to learn how Beethoven constructed the Fifth Symphony out of a basic four-note motif.

Next, because the continuation of a phrase also needs to follow a certain musical form, whether it’s a scherzo, trio or fugue, the AI needed to learn Beethoven’s process for developing these forms.

The to-do list grew: We had to teach the AI how to take a melodic line and harmonize it. The AI needed to learn how to bridge two sections of music together. And we realized the AI had to be able to compose a coda, which is a segment that brings a section of a piece of music to its conclusion.

Finally, once we had a full composition, the AI was going to have to figure out how to orchestrate it, which involves assigning different instruments for different parts.

And it had to pull off these tasks in the way Beethoven might do so.
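The essay doesn’t spell out the models the team used, so, purely as an illustration of what ‘continuing a phrase’ means computationally, here’s a toy first-order Markov chain over note names. The training snippet is made up for the example; only the four-note motif is Beethoven’s, and a real sequence model would be vastly more capable than this.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy "corpus": a made-up sequence of note names standing in for
# transcribed sketches. (Illustrative only; not Beethoven data.)
corpus = "G G G Eb F F F D G G G Eb Bb Bb Bb G".split()

# Build first-order transition counts: which note tends to follow which.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def continue_motif(motif, length):
    """Extend a motif by sampling each next note from the notes that
    followed the current note in the corpus (a crude stand-in for a
    real sequence model)."""
    out = list(motif)
    for _ in range(length):
        choices = transitions.get(out[-1])
        if not choices:           # dead end: no observed continuation
            break
        out.append(random.choice(choices))
    return out

# The famous four-note opening motif of the Fifth Symphony, extended.
print(continue_motif(["G", "G", "G", "Eb"], 8))
```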

The team tested its work, as described in the September 24, 2021 essay (Note: A link has been removed),

In November 2019, the team met in person again – this time, in Bonn, at the Beethoven House Museum, where the composer was born and raised.

This meeting was the litmus test for determining whether AI could complete this project. We printed musical scores that had been developed by AI and built off the sketches from Beethoven’s 10th. A pianist performed in a small concert hall in the museum before a group of journalists, music scholars and Beethoven experts.

We challenged the audience to determine where Beethoven’s phrases ended and where the AI extrapolation began. They couldn’t.

A few days later, one of these AI-generated scores was played by a string quartet in a news conference. Only those who intimately knew Beethoven’s sketches for the 10th Symphony could determine when the AI-generated parts came in.

The success of these tests told us we were on the right track. But these were just a couple of minutes of music. There was still much more work to do.

There is a preview of the finished 10th symphony,

Beethoven X: The AI Project: III Scherzo. Allegro – Trio (Official Video) | Beethoven Orchestra Bonn

Modern Recordings / BMG present as a foretaste of the album “Beethoven X – The AI Project” (release: 8.10.) the edit of the 3rd movement “Scherzo. Allegro – Trio” as a classical music video. Listen now: https://lnk.to/BeethovenX-Scherzo

Album pre-order link: https://lnk.to/BeethovenX

The Beethoven Orchestra Bonn performing with Dirk Kaftan and Walter Werzowa a great recording of world-premiere Beethoven pieces. Developed by AI and music scientists as well as composers, Beethoven’s once unfinished 10th symphony now surprises with beautiful Beethoven-like harmonics and dynamics.

For anyone who’d like to hear the October 9, 2021 performance, Sharon Kelly included some details in her August 16, 2021 article for DiscoverMusic,

The world premiere of Beethoven’s 10th Symphony on 9 October 2021 at the Telekom Forum in Bonn, performed by the Beethoven Orchestra Bonn conducted by Dirk Kaftan, will be broadcast live and free of charge on MagentaMusik 360.

Sadly, the time is not listed but MagentaMusik 360 is fairly easy to find online.

You can find out more about Professor Elgammal on his Rutgers University profile page. Elgammal has graced this blog before in an August 16, 2019 posting “AI (artificial intelligence) artist got a show at a New York City art gallery“. He’s mentioned in an excerpt about 20% of the way down the page,

Ahmed Elgammal thinks AI art can be much more than that. A Rutgers University professor of computer science, Elgammal runs an art-and-artificial-intelligence lab, where he and his colleagues develop technologies that try to understand and generate new “art” (the scare quotes are Elgammal’s) with AI—not just credible copies of existing work, like GANs do. “That’s not art, that’s just repainting,” Elgammal says of GAN-made images. “It’s what a bad artist would do.”

Elgammal calls his approach a “creative adversarial network,” or CAN. It swaps a GAN’s discerner—the part that ensures similarity—for one that introduces novelty instead. The system amounts to a theory of how art evolves: through small alterations to a known style that produce a new one. That’s a convenient take, given that any machine-learning technique has to base its work on a specific training set.
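To make that swap a little more concrete, here’s a minimal sketch (my own reading of Elgammal’s published description, not his lab’s code) of the style-ambiguity idea: the generator is penalized whenever a style classifier can confidently assign its output to one known style, which pushes it toward images that sit between styles.

```python
import numpy as np

def style_ambiguity_loss(style_logits):
    """Cross-entropy between the style classifier's predicted distribution
    and the uniform distribution over K known styles. A generator trained
    to minimize this is rewarded for output that is hard to pin to any one
    style -- the 'novelty' pressure described above."""
    logits = np.asarray(style_logits, dtype=float)
    probs = np.exp(logits - logits.max())        # numerically stable softmax
    probs /= probs.sum()
    k = len(probs)
    uniform = np.full(k, 1.0 / k)
    return -(uniform * np.log(probs + 1e-12)).sum()

# An image confidently classified as one style scores worse (higher loss)
# than one the classifier finds ambiguous.
print(style_ambiguity_loss([9.0, 0.1, 0.1, 0.1]))   # confidently one style
print(style_ambiguity_loss([1.0, 1.0, 1.0, 1.0]))   # maximally ambiguous
```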

Finally, thank you to @winsontang whose tweet led me to this story.

Carbon nanotubes can scavenge energy from environment to generate electricity

A June 7, 2021 news item on phys.org announces research into a new method for generating electricity (Note: A link has been removed),

MIT [Massachusetts Institute of Technology] engineers have discovered a new way of generating electricity using tiny carbon particles that can create a current simply by interacting with liquid surrounding them.

The liquid, an organic solvent, draws electrons out of the particles, generating a current that could be used to drive chemical reactions or to power micro- or nanoscale robots, the researchers say.

“This mechanism is new, and this way of generating energy is completely new,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT. “This technology is intriguing because all you have to do is flow a solvent through a bed of these particles. This allows you to do electrochemistry, but with no wires.”

A June 7, 2021 MIT news release (also on EurekAlert), which generated the news item, delves further into the research,

In a new study describing this phenomenon, the researchers showed that they could use this electric current to drive a reaction known as alcohol oxidation — an organic chemical reaction that is important in the chemical industry.

Strano is the senior author of the paper, which appears today [June 7, 2021] in Nature Communications. The lead authors of the study are MIT graduate student Albert Tianxiang Liu and former MIT researcher Yuichiro Kunai. Other authors include former graduate student Anton Cottrill, postdocs Amir Kaplan and Hyunah Kim, graduate student Ge Zhang, and recent MIT graduates Rafid Mollah and Yannick Eatmon.

Unique properties

The new discovery grew out of Strano’s research on carbon nanotubes — hollow tubes made of a lattice of carbon atoms, which have unique electrical properties. In 2010, Strano demonstrated, for the first time, that carbon nanotubes can generate “thermopower waves.” When a carbon nanotube is coated with a layer of fuel, moving pulses of heat, or thermopower waves, travel along the tube, creating an electrical current.

That work led Strano and his students to uncover a related feature of carbon nanotubes. They found that when part of a nanotube is coated with a Teflon-like polymer, it creates an asymmetry that makes it possible for electrons to flow from the coated to the uncoated part of the tube, generating an electrical current. Those electrons can be drawn out by submerging the particles in a solvent that is hungry for electrons.

To harness this special capability, the researchers created electricity-generating particles by grinding up carbon nanotubes and forming them into a sheet of paper-like material. One side of each sheet was coated with a Teflon-like polymer, and the researchers then cut out small particles, which can be any shape or size. For this study, they made particles that were 250 microns by 250 microns.

When these particles are submerged in an organic solvent such as acetonitrile, the solvent adheres to the uncoated surface of the particles and begins pulling electrons out of them.

“The solvent takes electrons away, and the system tries to equilibrate by moving electrons,” Strano says. “There’s no sophisticated battery chemistry inside. It’s just a particle and you put it into solvent and it starts generating an electric field.”

Particle power

The current version of the particles can generate about 0.7 volts of electricity per particle. In this study, the researchers also showed that they can form arrays of hundreds of particles in a small test tube. This “packed bed” reactor generates enough energy to power a chemical reaction called an alcohol oxidation, in which an alcohol is converted to an aldehyde or a ketone. Usually, this reaction is not performed using electrochemistry because it would require too much external current.

“Because the packed bed reactor is compact, it has more flexibility in terms of applications than a large electrochemical reactor,” Zhang says. “The particles can be made very small, and they don’t require any external wires in order to drive the electrochemical reaction.”

In future work, Strano hopes to use this kind of energy generation to build polymers using only carbon dioxide as a starting material. In a related project, he has already created polymers that can regenerate themselves using carbon dioxide as a building material, in a process powered by solar energy. This work is inspired by carbon fixation, the set of chemical reactions that plants use to build sugars from carbon dioxide, using energy from the sun.

In the longer term, this approach could also be used to power micro- or nanoscale robots. Strano’s lab has already begun building robots at that scale, which could one day be used as diagnostic or environmental sensors. The idea of being able to scavenge energy from the environment to power these kinds of robots is appealing, he says.

“It means you don’t have to put the energy storage on board,” he says. “What we like about this mechanism is that you can take the energy, at least in part, from the environment.”

Here’s a link to and a citation for the paper,

Solvent-induced electrochemistry at an electrically asymmetric carbon Janus particle by Albert Tianxiang Liu, Yuichiro Kunai, Anton L. Cottrill, Amir Kaplan, Ge Zhang, Hyunah Kim, Rafid S. Mollah, Yannick L. Eatmon & Michael S. Strano. Nature Communications volume 12, Article number: 3415 (2021) DOI: https://doi.org/10.1038/s41467-021-23038-7 Published: 07 June 2021

This paper is open access.

Nanosensors use AI to explore the biomolecular world

EPFL scientists have developed AI-powered nanosensors that let researchers track various kinds of biological molecules without disturbing them. Courtesy: École polytechnique fédérale de Lausanne (EPFL)

If you look at the big orange dot (representing the nanosensors?), you’ll see those purplish/fuchsia objects resemble musical notes (biological molecules?). I think that brainlike object to the left and in light blue is the artificial intelligence (AI) component. (If anyone wants to correct my guesses or identify the bits I can’t, please feel free to add to the Comments for this blog.)

Getting back to my topic, keep the ‘musical notes’ in mind as you read about some of the latest research from l’École polytechnique fédérale de Lausanne (EPFL) in an April 7, 2021 news item on Nanowerk,

The tiny world of biomolecules is rich in fascinating interactions between a plethora of different agents such as intricate nanomachines (proteins), shape-shifting vessels (lipid complexes), chains of vital information (DNA) and energy fuel (carbohydrates). Yet the ways in which biomolecules meet and interact to define the symphony of life is exceedingly complex.

Scientists at the Bionanophotonic Systems Laboratory in EPFL’s School of Engineering have now developed a new biosensor that can be used to observe all major biomolecule classes of the nanoworld without disturbing them. Their innovative technique uses nanotechnology, metasurfaces, infrared light and artificial intelligence.

To each molecule its own melody

In this nano-sized symphony, perfect orchestration makes physiological wonders such as vision and taste possible, while slight dissonances can amplify into horrendous cacophonies leading to pathologies such as cancer and neurodegeneration.

An April 7, 2021 EPFL press release, which originated the news item, provides more detail,

“Tuning into this tiny world and being able to differentiate between proteins, lipids, nucleic acids and carbohydrates without disturbing their interactions is of fundamental importance for understanding life processes and disease mechanisms,” says Hatice Altug, the head of the Bionanophotonic Systems Laboratory. 

Light, and more specifically infrared light, is at the core of the biosensor developed by Altug’s team. Humans cannot see infrared light, which is beyond the visible light spectrum that ranges from blue to red. However, we can feel it in the form of heat in our bodies, as our molecules vibrate under the infrared light excitation.

Molecules consist of atoms bonded to each other and – depending on the mass of the atoms and the arrangement and stiffness of their bonds – vibrate at specific frequencies. This is similar to the strings on a musical instrument that vibrate at specific frequencies depending on their length. These resonant frequencies are molecule-specific, and they mostly occur in the infrared frequency range of the electromagnetic spectrum. 

“If you imagine audio frequencies instead of infrared frequencies, it’s as if each molecule has its own characteristic melody,” says Aurélian John-Herpin, a doctoral assistant at Altug’s lab and the first author of the publication. “However, tuning into these melodies is very challenging because without amplification, they are mere whispers in a sea of sounds. To make matters worse, their melodies can present very similar motifs making it hard to tell them apart.” 

Metasurfaces and artificial intelligence

The scientists solved these two issues using metasurfaces and AI. Metasurfaces are man-made materials with outstanding light manipulation capabilities at the nano scale, thereby enabling functions beyond what is otherwise seen in nature. Here, their precisely engineered meta-atoms made out of gold nanorods act like amplifiers of light-matter interactions by tapping into the plasmonic excitations resulting from the collective oscillations of free electrons in metals. “In our analogy, these enhanced interactions make the whispered molecule melodies more audible,” says John-Herpin.

AI is a powerful tool that can be fed with more data than humans can handle in the same amount of time and that can quickly develop the ability to recognize complex patterns from the data. John-Herpin explains, “AI can be imagined as a complete beginner musician who listens to the different amplified melodies and develops a perfect ear after just a few minutes and can tell the melodies apart, even when they are played together – like in an orchestra featuring many instruments simultaneously.” 
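The press release doesn’t detail the deep-learning model, but the basic task — pulling several known molecular ‘melodies’ out of one measured spectrum — can be illustrated with a much simpler tool. Here’s a toy least-squares unmixing sketch; the Gaussian absorption bands, band positions and mixture fractions are all made up for the example, and least squares stands in for the paper’s deep-learning approach.

```python
import numpy as np

wavenumbers = np.linspace(1000, 1800, 400)       # cm^-1, made-up IR window

def band(center, width):
    """A single Gaussian absorption band (toy stand-in for a real spectrum)."""
    return np.exp(-((wavenumbers - center) ** 2) / (2 * width ** 2))

# Hypothetical reference spectra for four biomolecule classes.
references = {
    "protein":      band(1650, 20) + 0.6 * band(1550, 25),
    "lipid":        band(1740, 15),
    "nucleic acid": band(1240, 25) + 0.5 * band(1090, 20),
    "carbohydrate": band(1050, 30),
}
names = list(references)
A = np.column_stack([references[n] for n in names])

# Simulate a measured spectrum: a mixture of the references plus noise.
true_fractions = np.array([0.5, 0.2, 0.2, 0.1])
rng = np.random.default_rng(1)
measured = A @ true_fractions + rng.normal(0, 0.01, wavenumbers.size)

# Recover the composition by least squares (the real sensor's model handles
# far messier, time-resolved data; this only illustrates the concept).
estimate, *_ = np.linalg.lstsq(A, measured, rcond=None)
for n, f in zip(names, estimate):
    print(f"{n:>13s}: {f:.2f}")
```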

The first biosensor of its kind

When the scientists’ infrared metasurfaces are augmented with AI, the new sensor can be used to analyze biological assays featuring multiple analytes simultaneously from the major biomolecule classes and resolving their dynamic interactions. 

“We looked in particular at lipid vesicle-based nanoparticles and monitored their breakage through the insertion of a toxin peptide and the subsequent release of vesicle cargos of nucleotides and carbohydrates, as well as the formation of supported lipid bilayer patches on the metasurface,” says Altug.

This pioneering AI-powered, metasurface-based biosensor will open up exciting perspectives for studying and unraveling inherently complex biological processes, such as intercellular communication via exosomes and the interaction of nucleic acids and carbohydrates with proteins in gene regulation and neurodegeneration.

“We imagine that our technology will have applications in the fields of biology, bioanalytics and pharmacology – from fundamental research and disease diagnostics to drug development,” says Altug. 

Here’s a link to and a citation for the paper,

Infrared Metasurface Augmented by Deep Learning for Monitoring Dynamics between All Major Classes of Biomolecules by Aurelian John‐Herpin, Deepthy Kavungal, Lea von Mücke, Hatice Altug. Advanced Materials Volume 33, Issue 14, April 8, 2021, 2006054 DOI: https://doi.org/10.1002/adma.202006054 First published: 22 February 2021

This paper is open access.

Mechano-photonic artificial synapse is bio-inspired

The word ‘memristor’ usually pops up when there’s research into artificial synapses, but not in this new piece of research. I didn’t see any mention of memristors in the paper’s references either, but I did find James Gimzewski from the University of California at Los Angeles (UCLA), whose research into brainlike computing (neuromorphic computing) runs parallel to, but separately from, the memristor research.

Dr. Thamarasee Jeewandara has written a March 25, 2021 article for phys.org about the latest neuromorphic computing research (Note: Links have been removed),

Multifunctional and diverse artificial neural systems can incorporate multimodal plasticity, memory and supervised learning functions to assist neuromorphic computation. In a new report, Jinran Yu and a research team in nanoenergy, nanoscience and materials science in China and the US, presented a bioinspired mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The team used an optoelectronic transistor made of graphene/molybdenum disulphide (MoS2) heterostructure and an integrated triboelectric nanogenerator to compose the artificial synapse. They controlled the charge transfer/exchange in the heterostructure with triboelectric potential and modulated the optoelectronic synapse behaviors readily, including postsynaptic photocurrents, photosensitivity and photoconductivity. The mechano-photonic artificial synapse is a promising implementation to mimic the complex biological nervous system and promote the development of interactive artificial intelligence. The work is now published on Science Advances.

The human brain can integrate cognition, learning and memory tasks via auditory, visual, olfactory and somatosensory interactions. This process is difficult to be mimicked using conventional von Neumann architectures that require additional sophisticated functions. Brain-inspired neural networks are made of various synaptic devices to transmit information and process using the synaptic weight. Emerging photonic synapse combine the optical and electric neuromorphic modulation and computation to offer a favorable option with high bandwidth, fast speed and low cross-talk to significantly reduce power consumption. Biomechanical motions including touch, eye blinking and arm waving are other ubiquitous triggers or interactive signals to operate electronics during artificial synapse plasticization. In this work, Yu et al. presented a mechano-photonic artificial synapse with synergistic mechanical and optical plasticity. The device contained an optoelectronic transistor and an integrated triboelectric nanogenerator (TENG) in contact-separation mode. The mechano-optical artificial synapses have huge functional potential as interactive optoelectronic interfaces, synthetic retinas and intelligent robots. [emphasis mine]

As you can see Jeewandara has written quite a technical summary of the work. Here’s an image from the Science Advances paper,

Fig. 1 Biological tactile/visual neurons and mechano-photonic artificial synapse. (A) Schematic illustrations of biological tactile/visual sensory system. (B) Schematic diagram of the mechano-photonic artificial synapse based on graphene/MoS2 (Gr/MoS2) heterostructure. (i) Top-view scanning electron microscope (SEM) image of the optoelectronic transistor; scale bar, 5 μm. The cyan area indicates the MoS2 flake, while the white strip is graphene. (ii) Illustration of charge transfer/exchange for Gr/MoS2 heterostructure. (iii) Output mechano-photonic signals from the artificial synapse for image recognition.

You can find the paper here,

Bioinspired mechano-photonic artificial synapse based on graphene/MoS2 heterostructure by Jinran Yu, Xixi Yang, Guoyun Gao, Yao Xiong, Yifei Wang, Jing Han, Youhui Chen, Huai Zhang, Qijun Sun and Zhong Lin Wang. Science Advances 17 Mar 2021: Vol. 7, no. 12, eabd9117 DOI: 10.1126/sciadv.abd9117

This appears to be open access.

A new generation of xenobots made with frog cells

I meant to feature this work last year when it was first announced so I’m delighted a second chance has come around so soon after. From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Here’s a video of the Xenobot 2.0. It’s amazing but, for anyone who has problems with animal experimentation, this may be disturbing,


The next version of Xenobots have been created – they’re faster, live longer, and can now record information. (Source: Doug Blackiston & Emma Lederer)

A March 31, 2021 Tufts University news release by Mike Silver (also on EurekAlert and adapted and published as Scientists Create the Next Generation of Living Robots on the University of Vermont website as a UVM Today story) provides more detail,

The same team has now created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory. The new generation Xenobots also move faster, navigate different environments, and have longer lifespans than the first edition, and they still have the ability to work together in groups and heal themselves if damaged. The results of the new research were published today [March 31, 2021] in Science Robotics.

Compared to Xenobots 1.0, in which the millimeter-sized automatons were constructed in a “top down” approach by manual placement of tissue and surgical shaping of frog skin and cardiac cells to produce motion, the next version of Xenobots takes a “bottom up” approach. The biologists at Tufts took stem cells from embryos of the African frog Xenopus laevis (hence the name “Xenobots”) and allowed them to self-assemble and grow into spheroids, where some of the cells after a few days differentiated to produce cilia – tiny hair-like projections that move back and forth or rotate in a specific way. Instead of using manually sculpted cardiac cells whose natural rhythmic contractions allowed the original Xenobots to scuttle around, cilia give the new spheroidal bots “legs” to move them rapidly across a surface. In a frog, or human for that matter, cilia would normally be found on mucous surfaces, like in the lungs, to help push out pathogens and other foreign material. On the Xenobots, they are repurposed to provide rapid locomotion. 

“We are witnessing the remarkable plasticity of cellular collectives, which build a rudimentary new ‘body’ that is quite distinct from their default – in this case, a frog – despite having a completely normal genome,” said Michael Levin, Distinguished Professor of Biology and director of the Allen Discovery Center at Tufts University, and corresponding author of the study. “In a frog embryo, cells cooperate to create a tadpole. Here, removed from that context, we see that cells can re-purpose their genetically encoded hardware, like cilia, for new functions such as locomotion. It is amazing that cells can spontaneously take on new roles and create new body plans and behaviors without long periods of evolutionary selection for those features.”

“In a way, the Xenobots are constructed much like a traditional robot.  Only we use cells and tissues rather than artificial components to build the shape and create predictable behavior.” said senior scientist Doug Blackiston, who co-first authored the study with research technician Emma Lederer. “On the biology end, this approach is helping us understand how cells communicate as they interact with one another during development, and how we might better control those interactions.”

While the Tufts scientists created the physical organisms, scientists at UVM were busy running computer simulations that modeled different shapes of the Xenobots to see if they might exhibit different behaviors, both individually and in groups. Using the Deep Green supercomputer cluster at UVM’s Vermont Advanced Computing Core, the team, led by computer scientists and robotics experts Josh Bongard and Sam Kriegman, simulated the Xenobots under hundreds of thousands of random environmental conditions using an evolutionary algorithm. These simulations were used to identify Xenobots most able to work together in swarms to gather large piles of debris in a field of particles.

“We know the task, but it’s not at all obvious — for people — what a successful design should look like. That’s where the supercomputer comes in and searches over the space of all possible Xenobot swarms to find the swarm that does the job best,” says Bongard. “We want Xenobots to do useful work. Right now we’re giving them simple tasks, but ultimately we’re aiming for a new kind of living tool that could, for example, clean up microplastics in the ocean or contaminants in soil.” 
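As a rough illustration of what an evolutionary algorithm does in this kind of design search (and emphatically not the UVM pipeline, which evaluates candidate body plans in physics-based swarm simulations on a supercomputer), here’s a toy select-mutate-evaluate loop; the design vector and the fitness function are pure placeholders.

```python
import random

random.seed(42)

# Each candidate "design" is a vector of parameters (a stand-in for a
# Xenobot body plan); fitness is a made-up score standing in for how much
# debris a simulated swarm of that design piles up.
def random_design():
    return [random.uniform(0.0, 1.0) for _ in range(8)]

def fitness(design):
    # Hypothetical objective: prefer parameters near 0.7 (a placeholder
    # for the real physics-based swarm simulation).
    return -sum((x - 0.7) ** 2 for x in design)

def mutate(design, rate=0.2):
    # Nudge a few parameters, keeping them inside [0, 1].
    return [min(1.0, max(0.0, x + random.gauss(0, 0.1))) if random.random() < rate else x
            for x in design]

# A minimal (mu + lambda)-style evolutionary loop.
population = [random_design() for _ in range(20)]
for generation in range(50):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:5]                       # keep the best designs
    children = [mutate(random.choice(parents)) for _ in range(15)]
    population = parents + children

best = max(population, key=fitness)
print(round(fitness(best), 4), [round(x, 2) for x in best])
```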

It turns out, the new Xenobots are much faster and better at tasks such as garbage collection than last year’s model, working together in a swarm to sweep through a petri dish and gather larger piles of iron oxide particles. They can also cover large flat surfaces, or travel through narrow capillary tubes.

These studies also suggest that the in silico [computer] simulations could in the future optimize additional features of biological bots for more complex behaviors. One important feature added in the Xenobot upgrade is the ability to record information.

Now with memory

A central feature of robotics is the ability to record memory and use that information to modify the robot’s actions and behavior. With that in mind, the Tufts scientists engineered the Xenobots with a read/write capability to record one bit of information, using a fluorescent reporter protein called EosFP, which normally glows green. However, when exposed to light at 390nm wavelength, the protein emits red light instead. 

The cells of the frog embryos were injected with messenger RNA coding for the EosFP protein before stem cells were excised to create the Xenobots. The mature Xenobots now have a built-in fluorescent switch which can record exposure to blue light around 390nm.

The researchers tested the memory function by allowing 10 Xenobots to swim around a surface on which one spot is illuminated with a beam of 390nm light. After two hours, they found that three bots emitted red light. The rest remained their original green, effectively recording the “travel experience” of the bots.
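Just to illustrate the one-bit ‘travel log’ idea, here’s a toy simulation of mine (not the biology or the team’s analysis): bots do a random walk in a dish, and a bot’s bit flips, irreversibly, the first time it wanders through an illuminated spot. The dish size, spot position and step size are all arbitrary.

```python
import random

random.seed(7)

DISH = 1.0                      # the dish is the unit square (arbitrary units)
SPOT = (0.7, 0.7, 0.15)         # hypothetical lit spot: x, y, radius

def exposed(x, y):
    sx, sy, r = SPOT
    return (x - sx) ** 2 + (y - sy) ** 2 <= r ** 2

# Ten bots, each carrying one bit that starts "green" and flips to "red"
# the first time the bot passes through the illuminated spot.
bots = [{"x": random.random(), "y": random.random(), "red": False} for _ in range(10)]

for step in range(2000):        # a crude random walk standing in for swimming
    for b in bots:
        b["x"] = min(DISH, max(0.0, b["x"] + random.gauss(0, 0.02)))
        b["y"] = min(DISH, max(0.0, b["y"] + random.gauss(0, 0.02)))
        if exposed(b["x"], b["y"]):
            b["red"] = True     # one-way switch, like the photoconverted EosFP

print(sum(b["red"] for b in bots), "of", len(bots), "bots recorded an exposure")
```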

This proof of principle of molecular memory could be extended in the future to detect and record not only light but also the presence of radioactive contamination, chemical pollutants, drugs, or a disease condition. Further engineering of the memory function could enable the recording of multiple stimuli (more bits of information) or allow the bots to release compounds or change behavior upon sensation of stimuli. 

“When we bring in more capabilities to the bots, we can use the computer simulations to design them with more complex behaviors and the ability to carry out more elaborate tasks,” said Bongard. “We could potentially design them not only to report conditions in their environment but also to modify and repair conditions in their environment.”

Xenobot, heal thyself

“The biological materials we are using have many features we would like to someday implement in the bots – cells can act like sensors, motors for movement, communication and computation networks, and recording devices to store information,” said Levin. “One thing the Xenobots and future versions of biological bots can do that their metal and plastic counterparts have difficulty doing is constructing their own body plan as the cells grow and mature, and then repairing and restoring themselves if they become damaged. Healing is a natural feature of living organisms, and it is preserved in Xenobot biology.” 

The new Xenobots were remarkably adept at healing and would close the majority of a severe full-length laceration half their thickness within 5 minutes of the injury. All injured bots were able to ultimately heal the wound, restore their shape and continue their work as before. 

Another advantage of a biological robot, Levin adds, is metabolism. Unlike metal and plastic robots, the cells in a biological robot can absorb and break down chemicals and work like tiny factories synthesizing and excreting chemicals and proteins. The whole field of synthetic biology – which has largely focused on reprogramming single-celled organisms to produce useful molecules – can now be exploited in these multicellular creatures.

Like the original Xenobots, the upgraded bots can survive up to ten days on their embryonic energy stores and run their tasks without additional energy sources, but they can also carry on at full speed for many months if kept in a “soup” of nutrients. 

What the scientists are really after

An engaging description of the biological bots and what we can learn from them is presented in a TED talk by Michael Levin. In it, Professor Levin describes not only the remarkable potential for tiny biological robots to carry out useful tasks in the environment or potentially in therapeutic applications, but he also points out what may be the most valuable benefit of this research – using the bots to understand how individual cells come together, communicate, and specialize to create a larger organism, as they do in nature to create a frog or human. It’s a new model system that can provide a foundation for regenerative medicine.

Xenobots and their successors may also provide insight into how multicellular organisms arose from ancient single celled organisms, and the origins of information processing, decision making and cognition in biological organisms. 

Recognizing the tremendous future for this technology, Tufts University and the University of Vermont have established the Institute for Computer Designed Organisms (ICDO), to be formally launched in the coming months, which will pull together resources from each university and outside sources to create living robots with increasingly sophisticated capabilities.

The ultimate goal for the Tufts and UVM researchers is not only to explore the full scope of biological robots they can make; it is also to understand the relationship between the ‘hardware’ of the genome and the ‘software’ of cellular communications that go into creating organized tissues, organs and limbs. Then we can gain greater control of that morphogenesis for regenerative medicine, and the treatment of cancer and diseases of aging.

Here’s a link to and a citation for the paper,

A cellular platform for the development of synthetic living machines by Douglas Blackiston, Emma Lederer, Sam Kriegman, Simon Garnier, Joshua Bongard, and Michael Levin. Science Robotics 31 Mar 2021: Vol. 6, Issue 52, eabf1571 DOI: 10.1126/scirobotics.abf1571

This paper is behind a paywall.

An electronics-free, soft robotic dragonfly

From the description on YouTube,

With the ability to sense changes in pH, temperature and oil, this completely soft, electronics-free robot dubbed “DraBot” could be the prototype for future environmental sentinels. …

Music: Joneve by Mello C from the Free Music Archive

A favourite motif in the Art Nouveau movement (more about that later in the post), dragonflies, or a facsimile thereof, feature in a March 25, 2021 Duke University news release (also on EurekAlert) by Ken Kingery,

Engineers at Duke University have developed an electronics-free, entirely soft robot shaped like a dragonfly that can skim across water and react to environmental conditions such as pH, temperature or the presence of oil. The proof-of-principle demonstration could be the precursor to more advanced, autonomous, long-range environmental sentinels for monitoring a wide range of potential telltale signs of problems.

The soft robot is described online March 25 [2021] in the journal Advanced Intelligent Systems.

Soft robots are a growing trend in the industry due to their versatility. Soft parts can handle delicate objects such as biological tissues that metal or ceramic components would damage. Soft bodies can help robots float or squeeze into tight spaces where rigid frames would get stuck.

The expanding field was on the mind of Shyni Varghese, professor of biomedical engineering, mechanical engineering and materials science, and orthopaedic surgery at Duke, when inspiration struck.

“I got an email from Shyni from the airport saying she had an idea for a soft robot that uses a self-healing hydrogel that her group has invented in the past to react and move autonomously,” said Vardhman Kumar, a PhD student in Varghese’s laboratory and first author of the paper. “But that was the extent of the email, and I didn’t hear from her again for days. So the idea sort of sat in limbo for a little while until I had enough free time to pursue it, and Shyni said to go for it.”

In 2012, Varghese and her laboratory created a self-healing hydrogel that reacts to changes in pH in a matter of seconds. Whether it be a crack in the hydrogel or two adjoining pieces “painted” with it, a change in acidity causes the hydrogel to form new bonds, which are completely reversible when the pH returns to its original levels.

Varghese’s hastily written idea was to find a way to use this hydrogel on a soft robot that could travel across water and indicate places where the pH changes. Along with a few other innovations to signal changes in its surroundings, she figured her lab could design such a robot as a sort of autonomous environmental sensor.

With the help of Ung Hyun Ko, a postdoctoral fellow also in Varghese’s laboratory, Kumar began designing a soft robot based on a fly. After several iterations, the pair settled on the shape of a dragonfly engineered with a network of interior microchannels that allow it to be controlled with air pressure.

They created the body–about 2.25 inches long with a 1.4-inch wingspan–by pouring silicone into an aluminum mold and baking it. The team used soft lithography to create interior channels and connected them with flexible silicone tubing.

DraBot was born.

“Getting DraBot to respond to air pressure controls over long distances using only self-actuators without any electronics was difficult,” said Ko. “That was definitely the most challenging part.”

DraBot works by controlling the air pressure coming into its wings. Microchannels carry the air into the front wings, where it escapes through a series of holes pointed directly into the back wings. If both back wings are down, the airflow is blocked, and DraBot goes nowhere. But if both wings are up, DraBot goes forward.

To add an element of control, the team also designed balloon actuators under each of the back wings close to DraBot’s body. When inflated, the balloons cause the wings to curl upward. By changing which wings are up or down, the researchers tell DraBot where to go.

“We were happy when we were able to control DraBot, but it’s based on living things,” said Kumar. “And living things don’t just move around on their own, they react to their environment.”

That’s where self-healing hydrogel comes in. By painting one set of wings with the hydrogel, the researchers were able to make DraBot responsive to changes in the surrounding water’s pH. If the water becomes acidic, one side’s front wing fuses with the back wing. Instead of traveling in a straight line as instructed, the imbalance causes the robot to spin in a circle. Once the pH returns to a normal level, the hydrogel “un-heals,” the fused wings separate, and DraBot once again becomes fully responsive to commands.
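The control logic, as I read it from the release, can be summed up in a few lines. This is a toy truth table of my own; the real robot is driven pneumatically rather than by software, the pH threshold is a guess, and the release doesn’t say which side carries the hydrogel.

```python
def drabot_motion(left_wing_up, right_wing_up, ph):
    """Toy logic table for DraBot's behaviour as described in the release:
    airflow only propels the robot when the back wings are raised, and
    acidic water fuses one side's wings so the robot circles instead of
    going straight. (Illustrative only.)"""
    acidic = ph < 7.0                 # hypothetical threshold for the hydrogel
    if acidic:
        left_wing_up = False          # assume the painted (left) side fuses shut
    if left_wing_up and right_wing_up:
        return "forward"
    if left_wing_up != right_wing_up:
        return "spin"                 # imbalanced thrust turns the robot in circles
    return "stationary"               # both wings down block the airflow

for state in [(True, True, 7.4), (True, True, 5.0), (False, False, 7.4)]:
    print(state, "->", drabot_motion(*state))
```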

To beef up its environmental awareness, the researchers also leveraged the sponges under the wings and doped the wings with temperature-responsive materials. When DraBot skims over water with oil floating on the surface, the sponges will soak it up and change color to the corresponding color of oil. And when the water becomes overly warm, DraBot’s wings change from red to yellow.

The researchers believe these types of measurements could play an important part in an environmental robotic sensor in the future. Responsiveness to pH can detect freshwater acidification, which is a serious environmental problem affecting several geologically-sensitive regions. The ability to soak up oils makes such long-distance skimming robots an ideal candidate for early detection of oil spills. Changing colors due to temperatures could help spot signs of red tide and the bleaching of coral reefs, which leads to decline in the population of aquatic life.

The team also sees many ways that they could improve on their proof-of-concept. Wireless cameras or solid-state sensors could enhance the capabilities of DraBot. And creating a form of onboard propellant would help similar bots break free of their tubing.

“Instead of using air pressure to control the wings, I could envision using some sort of synthetic biology that generates energy,” said Varghese. “That’s a totally different field than I work in, so we’ll have to have a conversation with some potential collaborators to see what’s possible. But that’s part of the fun of working on an interdisciplinary project like this.”

Here’s a link to and a citation for the paper,

Microengineered Materials with Self‐Healing Features for Soft Robotics by Vardhman Kumar, Ung Hyun Ko, Yilong Zhou, Jiaul Hoque, Gaurav Arya, Shyni Varghese. Advanced Intelligent Systems DOI: https://doi.org/10.1002/aisy.202100005 First published: 25 March 2021

This paper is open access.

The earlier reference to Art Nouveau gives me an excuse to introduce this March 7, 2020 (?) essay by Bex Simon (artist blacksmith) on her eponymous website.

Dragonflies, in particular, are a very popular subject matter in the Art Nouveau movement. Art Nouveau, with its wonderful flowing lines and hidden fantasies, is full of symbolism. The movement was a response to the profound social changes and industrialization of everyday life and the style of the moment was, in part, inspired by Japanese art.

Simon features examples of Art Nouveau dragonfly art along with examples of her own take on the subject. She also has this,

[downloaded from https://www.bexsimon.com/dragonflies-and-butterflies-in-art-nouveau/]

This is a closeup of a real dragonfly as seen on Simon’s website. If you have an interest, reading her March 7, 2020 (?) essay and gazing at the images won’t take much time.