Tag Archives: Cornell University

Metal oxide nanoparticles and negative effects on gut health?

A May 4, 2023 Binghamton University news release by Jillian McCarthy (also on EurekAlert but published on May 9, 2023) announces the research (Note: A link has been removed),

Common food additives known as metal oxide nanoparticles may have negative effects on your gut health, according to new research from Binghamton University, State University of New York and Cornell University. 

Gretchen Mahler, professor of biomedical engineering and interim vice provost and dean of the Graduate School, worked in collaboration with Cornell researchers to study five of these nanoparticles. Their findings were recently published in the journal Antioxidants.

“They’re all actual food additives,” said Mahler. “Titanium dioxide tends to show up as a whitening and brightening agent. Silicon dioxide tends to be added to foods to prevent it from clumping. Iron oxide tends to be added to meats, for example, to keep that red color. And zinc oxide can be used as a preservative because it’s antimicrobial.” [emphases mine]

In order to test these nanoparticles, Mahler and Elad Tako, senior author and associate professor of food science in the College of Agriculture and Life Sciences at Cornell, used the intestinal tract of chickens. A chicken’s intestinal tract is comparable to a human’s; the microbiota that they have and the bacterial components have a lot of overlap with the microbiota that you see in the human digestive system, said Mahler. 

“We’ve been testing a series of nanomaterials here at Binghamton, and we’ve been looking at things like nutrient absorption, enzyme expression and some of the more subtle, functional markers,” said Mahler.

The doses of nanoparticles that were tested reflect what is typically consumed by humans. The nanoparticles were injected into the amniotic sac of broiler chicken eggs; broiler chickens are specifically bred and raised for their meat. These chickens get larger faster, so the effects of the nanoparticles are more obvious earlier in development. At a certain stage of development, the contents of the amniotic sac flow through the chicken's intestine.

“When they hatched, we harvested tissue from the small intestine, the microbiota and the liver,” said Mahler. “We looked at gene expression, microbiota composition and the structure of the small intestine.”

The researchers found more negative effects with silicon dioxide and titanium dioxide. They also found that the nanoparticles had affected the functioning of the chickens’ intestinal lining (called the brush border membrane), the balance of bacteria in their intestinal tract and the chickens’ ability to absorb minerals.

The other nanoparticles had more neutral, or even positive, effects. Zinc oxide appeared to support intestinal development or compensatory mechanisms in response to intestinal damage. Iron oxide could potentially be used for iron fortification, though possibly with alterations in intestinal functionality and health.

Mahler doesn’t want to suggest that these nanoparticles need to be removed from our diets completely. Their research is meant to provide some information, and allow people to have a better understanding of what’s really in the food they consume.

“We’re eating these things, so it’s important to consider what some of the more subtle effects could be,” said Mahler. “We develop these gut models around this problem to try to understand it, and this collaboration, where we have these complementary methods to try to look at the problem, has been successful.”

Here’s a link to and a citation for the paper,

Food-Grade Metal Oxide Nanoparticles Exposure Alters Intestinal Microbial Populations, Brush Border Membrane Functionality and Morphology, In Vivo (Gallus gallus) by Jacquelyn Cheng, Nikolai Kolba, Alba García-Rodríguez, Cláudia N. H. Marques, Gretchen J. Mahler and Elad Tako. Antioxidants 2023, 12(2), 431, DOI: https://doi.org/10.3390/antiox12020431 Published online: 9 February 2023 (This article belongs to the Special Issue Dietary Supplements and Oxidative Stress)

This paper is open access.

SkinKit: smart tattoo provides on-skin computing

The SkinKit wearable sensing interface, developed in the Hybrid Body Lab, can be used for health and wellness, personal safety, as assistive technology and for athletic training, among many applications. Hybrid Body Lab/Provided

A November 3, 2022 Cornell University news release on EurekAlert announces a computer you can attach to your skin (Note: Links have been removed),

Researchers at Cornell University have come up with a reliable, skin-tight computing system that’s easy to attach and detach, and can be used for a variety of purposes – from health monitoring to fashion.

On-skin interfaces – sometimes known as “smart tattoos” – have the potential to outperform the sensing capabilities of current wearable technologies, but combining comfort and durability has proven challenging.

“We’ve been working on this for years,” said Cindy (Hsin-Liu) Kao, assistant professor of human centered design, and the study’s senior author, “and I think we’ve finally figured out a lot of the technical challenges. We wanted to create a modular approach to smart tattoos, to make them as straightforward as building Legos.”

SkinKit – a plug-and-play system that aims to “lower the floor for entry” to on-skin interfaces for those with little or no technical expertise – is the product of countless hours of development, testing and redevelopment, Kao said. Fabrication is done with temporary tattoo paper, silicone textile stabilizer and water, creating a multi-layer thin film structure they call “skin cloth.” The layered material can be cut into desired shapes and fitted with electronics hardware to perform a range of tasks.

“The wearer can easily attach them together and also detach them,” said Pin-Sung Ku, lead author of the paper and Hybrid Body Lab member. “Let’s say that today you want to use one of the sensors for certain purposes, but tomorrow you want it for something different. You can easily just detach them and reuse some of the modules to make a new device in minutes.”

The paper “SkinKit: Construction Kit for On-Skin Interface Prototyping” was presented at UbiComp ’22, the Association for Computing Machinery’s international joint conference on pervasive and ubiquitous computing.

Here’s a SkinKit video provided by Cornell University’s Hybrid Body Lab,

Tom Fleischman’s November 3, 2022 story for the Cornell Chronicle provides more details about SkinKit (Note: Links have been removed),

SkinKit – a plug-and-play system that aims to “lower the floor for entry” to on-skin interfaces for those with little or no technical expertise – is the product of countless hours of development, testing and redevelopment, Kao said.

Kao’s lab is also very conscious of cultural differences generally, and she thinks it’s important to bring these devices to diverse populations.

“People from different cultures, backgrounds and ethnicities can have very different perceptions toward these devices,” she said. “We felt it’s actually very important to let more people have a voice in saying what they want these smart tattoos to do.”

To test SkinKit, the researchers first recruited nine participants with both STEM and design backgrounds to build and wear the devices. Their input from the 90-minute workshop helped inform further modifications, which the group performed before conducting a larger, two-day study involving 25 participants with both STEM and design backgrounds.

Devices designed by the 25 study participants addressed: health and wellness, including temperature sensors to detect fever due to COVID-19; personal safety, including a device that would help the wearer maintain social distance during the pandemic; notification, including an arm-worn device that a runner could wear that would vibrate when a vehicle was near; and assistive technology, such as a wrist-worn sensor for the blind that would vibrate when the wearer was about to bump into an object.

Kao said members of her lab, including Ku, took part in the 4-H Career Explorations Conference over the summer, and had approximately 10 middle-schoolers from upstate New York build their own SkinKit devices.

“I think it just shows us a lot of potential for STEM [science, technology, engineering, and mathematics] learning, and especially to be able to engage people who maybe originally wouldn’t have interest in STEM,” Kao said. “But by combining it with body art and fashion, I think there’s a lot of potential for it to engage the next generation and broader populations to explore the future of smart tattoos.”

Here’s a citation for the paper,

“SkinKit: Construction Kit for On-Skin Interface Prototyping” by Pin-Sung Ku, Md. Tahmidul Islam Molla, Kunpeng Huang, Priya Kattappurath, Krithik Ranjan, Hsin-Liu Cindy Kao. Proceedings of the ACM [Association for Computing Machinery] on Interactive, Mobile, Wearable and Ubiquitous Technologies Volume 5 Issue 4 Dec 2021 Article No.: 165, pp. 1–23 DOI: https://doi.org/10.1145/3494989 Published: 30 December 2021

This paper is behind a paywall.

The Hybrid Body Lab can be found here (the pictures are fascinating). Here’s more from their About page,

The Hybrid Body Lab at Cornell University, founded and directed by Prof. Cindy Hsin-Liu Kao, focuses on the invention of culturally-inspired materials, processes, and tools for crafting technology on the body surface. Designing across scales, we explore how body scale interfaces can enhance our relations with everyday products and both natural and man-made environments. We conduct research at the intersection of Human-Computer Interaction, Wearable & Ubiquitous Computing, Digital Fabrication, Interaction Design, and Fashion & Body Art. We synthesize this knowledge to contribute a culturally-sensitive lens to the future of designs that interface the body and the environment. Our current investigations include:

Wearable Technology & On-Skin Interfaces
We develop novel wearable interfaces and fabrication processes, with a focus on skin-conformable or textile-based form factors. By hybridizing miniaturized robotics, machines, and materials with cultural body decoration practices, we investigate how technology can be situated as a culturally meaningful material for crafting our identities.

Designing Skins Across Scales
‘Many different types of machines that were parts of architecture have become parts of our bodies.’ —Bill Mitchell, Me++

We design “skins” that can be adapted across scales, from the architectural to the body scale. We investigate the interactions of a wearer’s body-borne interface with its surrounding ecology. This includes its interaction with other people, objects and environments. We are also interested in developing skins that can be deployed across scales — from the body to the architectural scale.

Understanding Social Perceptions Towards On-Body Technologies
Wearable devices have evolved towards intrinsic human augmentation, unlocking the human skin as an interface for seamless interaction. However, the non-traditional form factor of these on-skin interfaces may raise concerns for public wear. These perceptions will influence whether a new form of technology will eventually be accepted or rejected by society. We investigate the cultural and social concerns that need to be considered when generating on-body technologies for inclusive design.

Spiders can outsource hearing to their webs

A March 29, 2022 news item on ScienceDaily highlights research into how spiders hear,

Everyone knows that humans and most other vertebrate species hear using eardrums that turn soundwave pressure into signals for our brains. But what about smaller animals like insects and arthropods? Can they detect sounds? And if so, how?

Distinguished Professor Ron Miles, a Department of Mechanical Engineering faculty member at Binghamton University’s Thomas J. Watson College of Engineering and Applied Science, has been exploring that question for more than three decades, in a quest to revolutionize microphone technology.

A newly published study of orb-weaving spiders — the species featured in the classic children’s book “Charlotte’s Web” — has yielded some extraordinary results: The spiders are using their webs as extended auditory arrays to capture sounds, possibly giving spiders advanced warning of incoming prey or predators.

Binghamton University (formal name: State University of New York at Binghamton) has made this fascinating (to me anyway) video available,

Binghamton University and Cornell University (also in New York state) researchers worked collaboratively on this project. Consequently, there are two news releases and there is some redundancy but I always find that information repeated in different ways is helpful for learning.

A March 29, 2022 Binghamton University news release (also on EurekAlert) by Chris Kocher gives more detail about the work (Note: Links have been removed),

It is well-known that spiders respond when something vibrates their webs, such as potential prey. In these new experiments, researchers for the first time show that spiders turned, crouched or flattened out in response to sounds in the air.

The study is the latest collaboration between Miles and Ron Hoy, a biology professor from Cornell, and it has implications for designing extremely sensitive bio-inspired microphones for use in hearing aids and cell phones.

Jian Zhou, who earned his PhD in Miles’ lab and is doing postdoctoral research at the Argonne National Laboratory, and Junpeng Lai, a current PhD student in Miles’ lab, are co-first authors. Miles, Hoy and Associate Professor Carol I. Miles from the Harpur College of Arts and Sciences’ Department of Biological Sciences at Binghamton are also authors for this study. Grants from the National Institutes of Health to Ron Miles funded the research.

A single strand of spider silk is so thin and sensitive that it can detect the movement of vibrating air particles that make up a soundwave, which is different from how eardrums work. Ron Miles’ previous research has led to the invention of novel microphone designs that are based on hearing in insects.

“The spider is really a natural demonstration that this is a viable way to sense sound using viscous forces in the air on thin fibers,” he said. “If it works in nature, maybe we should have a closer look at it.”

Spiders can detect minuscule movements and vibrations through sensory organs on their tarsal claws at the tips of their legs, which they use to grasp their webs. Orb-weaver spiders are known to make large webs, creating a kind of acoustic antenna with a sound-sensitive surface area that is up to 10,000 times greater than the spider itself.

In the study, the researchers used Binghamton University’s anechoic chamber, a completely soundproof room under the Innovative Technologies Complex. Collecting orb-weavers from windows around campus, they had the spiders spin a web inside a rectangular frame so they could position it where they wanted.

The team began by playing a pure tone from 3 meters away at different sound levels to see if the spiders responded or not. Surprisingly, they found spiders can respond to sound levels as low as 68 decibels. At louder levels, they observed even more types of behaviors.

They then placed the sound source at a 45-degree angle to see if the spiders behaved differently. They found that the spiders not only localized the sound source but could tell the sound’s incoming direction with 100% accuracy.

To better understand the spider-hearing mechanism, the researchers used laser vibrometry and measured over one thousand locations on a natural spider web, with the spider sitting in the center under the sound field. The result showed that the web moves with sound almost at maximum physical efficiency across an ultra-wide frequency range.

“Of course, the real question is, if the web is moving like that, does the spider hear using it?” Miles said. “That’s a hard question to answer.”

Lai added: “There could even be a hidden ear within the spider body that we don’t know about.”

So the team placed a mini-speaker 5 centimeters away from the center of the web where the spider sits, and 2 millimeters away from the web plane — close but not touching the web. This allowed the sound to travel to the spider both through the air and through the web. The researchers found that the soundwave from the mini-speaker died out significantly as it traveled through the air, but it propagated readily through the web with little attenuation. The sound level was still at around 68 decibels when it reached the spider. The behavior data showed that four out of 12 spiders responded to this web-borne signal.

Those reactions proved that the spiders could hear through the webs, and Lai was thrilled when that happened: “I’ve been working on this research for five years. That’s a long time, and it’s great to see all these efforts will become something that everybody can read.”

The researchers also found that, by crouching and stretching, spiders may be changing the tension of the silk strands, thereby tuning them to pick up different frequencies. By using this external structure to hear, the spider may be able to customize it to hear different sorts of sounds.

Future experiments may investigate how spiders make use of the sound they can detect using their web. Additionally, the team would like to test whether other types of web-weaving spiders also use their silk to outsource their hearing.

“It’s reasonable to guess that a similar spider on a similar web would respond in a similar way,” Ron Miles said. “But we can’t draw any conclusions about that, since we tested a certain kind of spider that happens to be pretty common.”

Lai admitted he had no idea he would be working with spiders when he came to Binghamton as a mechanical engineering PhD student.

“I’ve been afraid of spiders all my life, because of their alien looks and hairy legs!” he said with a laugh. “But the more I worked with spiders, the more amazing I found them. I’m really starting to appreciate them.”
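For anyone who wants a feel for the numbers in that mini-speaker experiment, here's a back-of-envelope sketch (my own, not from the paper) of how much an airborne signal from a small source would be expected to weaken by spherical spreading alone, using the geometry described above. It assumes an idealized point source in a free field, so it's only a rough illustration of why the airborne path fades while the web-borne path doesn't.

```python
# Back-of-envelope sketch (not the paper's analysis): free-field spherical
# spreading loss for an airborne signal, using the geometry in the news
# release (speaker ~2 mm from the web plane, ~5 cm from the spider).
# Assumes an idealized point source; a real mini-speaker's near field is
# more complicated.
import math

def spreading_loss_db(r_near_m: float, r_far_m: float) -> float:
    """Level drop (dB) from r_near to r_far for a point source in a free field."""
    return 20 * math.log10(r_far_m / r_near_m)

loss = spreading_loss_db(0.002, 0.05)  # 2 mm -> 5 cm
print(f"Expected airborne spreading loss: ~{loss:.0f} dB")
# Roughly 28 dB of loss over the airborne path, consistent with the release's
# observation that the airborne signal "died out significantly" while the
# web-borne signal reached the spider with little attenuation.
```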

A March 29, 2022 Cornell University news release (also on EurekAlert but published March 30, 2022) by Krishna Ramanujan offers a somewhat different perspective on the work (Note: Links have been removed),

Charlotte’s web is made for more than just trapping prey.

A study of orb weaver spiders finds their massive webs also act as auditory arrays that capture sounds, possibly giving spiders advanced warning of incoming prey or predators.

In experiments, the researchers found the spiders turned, crouched or flattened out in response to sounds, behaviors that spiders have been known to exhibit when something vibrates their webs.

The paper, “Outsourced Hearing in an Orb-weaving Spider That Uses its Web as an Auditory Sensor,” published March 29 [2022] in the Proceedings of the National Academy of Sciences, provides the first behavioral evidence that a spider can outsource hearing to its web.

The findings have implications for designing bio-inspired extremely sensitive microphones for use in hearing aids and cell phones.

A single strand of spider silk is so thin and sensitive it can detect the movement of vibrating air particles that make up a sound wave. This is different from how eardrums work, which sense pressure from sound waves; spider silk detects sound from nanoscale air particles that are excited by sound waves.

“The individual [silk] strands are so thin that they’re essentially wafting with the air itself, jostled around by the local air molecules,” said Ron Hoy, the Merksamer Professor of Biological Science, Emeritus, in the College of Arts and Sciences and one of the paper’s senior authors, along with Ronald Miles, professor of mechanical engineering at Binghamton University.

Spiders can detect minuscule movements and vibrations via sensory organs in their tarsi – claws at the tips of their legs they use to grasp their webs, Hoy said. Orb weaver spiders are known to make large webs, creating a kind of acoustic antenna with a sound-sensitive surface area that is up to 10,000 times greater than the spider itself.

In the study, the researchers used a special quiet room without vibrations or air flows at Binghamton University. They had an orb-weaver build a web inside a rectangular frame, so they could position it where they wanted. The team began by putting a mini-speaker within millimeters of the web without actually touching it, where sound operates as a mechanical vibration. They found the spider detected the mechanical vibration and moved in response.

They then placed a large speaker 3 meters away on the other side of the room from the frame with the web and spider, beyond the range where mechanical vibration could affect the web. A laser vibrometer was able to show the vibrations of the web from excited air particles.

The team then placed the speaker in different locations, to the right, left and center with respect to the frame. They found that the spider not only detected the sound, it turned in the direction of the speaker when it was moved. Also, it behaved differently based on the volume, by crouching or flattening out.

Future experiments may investigate whether spiders rebuild their webs, sometimes daily, in part to alter their acoustic capabilities, by varying a web’s geometry or where it is anchored. Also, by crouching and stretching, spiders may be changing the tension of the silk strands, thereby tuning them to pick up different frequencies, Hoy said.

Additionally, the team would like to test if other types of web-weaving spiders also use their silk to outsource their hearing. “The potential is there,” Hoy said.

Miles’ lab is using tiny fiber strands bio-inspired by spider silk to design highly sensitive microphones that – unlike conventional pressure-based microphones – pick up all frequencies and cancel out background noise, a boon for hearing aids.  

Here’s a link to and a citation for the paper,

Outsourced hearing in an orb-weaving spider that uses its web as an auditory sensor by Jian Zhou, Junpeng Lai, Gil Menda, Jay A. Stafstrom, Carol I. Miles, Ronald R. Hoy, and Ronald N. Miles. Proceedings of the National Academy of Sciences (PNAS) DOI: https://doi.org/10.1073/pnas.2122789119 Published March 29, 2022 | 119 (14) e2122789119

This paper appears to be open access and video/audio files are included (you can hear the sound and watch the spider respond).

The nanoscale precision of pearls

An October 21, 2021 news item on phys.org features a quote about nothingness and symmetry (Note: A link has been removed),

In research that could inform future high-performance nanomaterials, a University of Michigan-led team has uncovered for the first time how mollusks build ultradurable structures with a level of symmetry that outstrips everything else in the natural world, with the exception of individual atoms.

“We humans, with all our access to technology, can’t make something with a nanoscale architecture as intricate as a pearl,” said Robert Hovden, U-M assistant professor of materials science and engineering and an author on the paper. “So we can learn a lot by studying how pearls go from disordered nothingness to this remarkably symmetrical structure.” [emphasis mine]

The analysis was done in collaboration with researchers at the Australian National University, Lawrence Berkeley National Laboratory, Western Norway University [of Applied Sciences] and Cornell University.

a. A Keshi pearl that has been sliced into pieces for study. b. A magnified cross-section of the pearl shows its transition from its disorderly center to thousands of layers of finely matched nacre. c. A magnification of the nacre layers shows their self-correction—when one layer is thicker, the next is thinner to compensate, and vice-versa. d, e: Atomic scale images of the nacre layers. f, g, h, i: Microscopy images detail the transitions between the pearl’s layers. Credit: University of Michigan

An October 21, 2021 University of Michigan news release (also on EurekAlert), which originated the news item, reveals a surprise,

Published in the Proceedings of the National Academy of Sciences [PNAS], the study found that a pearl’s symmetry becomes more and more precise as it builds, answering centuries-old questions about how the disorder at its center becomes a sort of perfection. 

Layers of nacre, the iridescent and extremely durable organic-inorganic composite that also makes up the shells of oysters and other mollusks, build on a shard of aragonite that surrounds an organic center. The layers, which make up more than 90% of a pearl’s volume, become progressively thinner and more closely matched as they build outward from the center.

Perhaps the most surprising finding is that mollusks maintain the symmetry of their pearls by adjusting the thickness of each layer of nacre. If one layer is thicker, the next tends to be thinner, and vice versa. The pearl pictured in the study contains 2,615 finely matched layers of nacre, deposited over 548 days.

“These thin, smooth layers of nacre look a little like bed sheets, with organic matter in between,” Hovden said. “There’s interaction between each layer, and we hypothesize that that interaction is what enables the system to correct as it goes along.”

The team also uncovered details about how the interaction between layers works. A mathematical analysis of the pearl’s layers shows that they follow a phenomenon known as “1/f noise,” where a series of events that seem to be random are connected, with each new event influenced by the one before it. 1/f noise has been shown to govern a wide variety of natural and human-made processes including seismic activity, economic markets, electricity, physics and even classical music.

“When you roll dice, for example, every roll is completely independent and disconnected from every other roll. But 1/f noise is different in that each event is linked,” Hovden said. “We can’t predict it, but we can see a structure in the chaos. And within that structure are complex mechanisms that enable a pearl’s thousands of layers of nacre to coalesce toward order and precision.”
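For readers who'd like to see what checking a series for 1/f-type behaviour can look like, here's a minimal sketch of the general approach (not the authors' actual analysis): take a sequence of layer thicknesses, estimate its power spectrum, and fit the slope of log power versus log frequency. The thickness values below are synthetic placeholders, since the paper's data aren't reproduced here.

```python
# Minimal sketch (not the paper's method): test whether a series of nacre
# layer thicknesses shows 1/f-like behaviour by estimating the slope of its
# power spectrum on a log-log plot. A slope near -1 suggests 1/f noise;
# a slope near 0 would mean uncorrelated (white) noise.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: a stand-in for ~2,615 measured layer thicknesses (nm).
# A cumulative sum of random steps gives a correlated, 1/f^2-like series,
# used here purely to show the mechanics of the fit.
thickness = 300 + np.cumsum(rng.normal(0, 5, 2615))

series = thickness - thickness.mean()          # remove the mean
power = np.abs(np.fft.rfft(series)) ** 2       # periodogram
freq = np.fft.rfftfreq(series.size, d=1.0)     # "frequency" in cycles per layer

mask = freq > 0                                # drop the zero-frequency bin
slope, _ = np.polyfit(np.log10(freq[mask]), np.log10(power[mask]), 1)
print(f"Estimated spectral exponent: {slope:.2f} (1/f noise corresponds to ~ -1)")
```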

The team found that pearls lack true long-range order—the kind of carefully planned symmetry that keeps the hundreds of layers in brick buildings consistent. Instead, pearls exhibit medium-range order, maintaining symmetry for around 20 layers at a time. This is enough to maintain consistency and durability over the thousands of layers that make up a pearl.

The team gathered their observations by studying Akoya “keshi” pearls, produced by the Pinctada imbricata fucata oyster near the Eastern shoreline of Australia. They selected these particular pearls, which measure around 50 millimeters in diameter, because they form naturally, as opposed to bead-cultured pearls, which have an artificial center. Each pearl was cut with a diamond wire saw into sections measuring three to five millimeters in diameter, then polished and examined under an electron microscope.

Hovden says the study’s findings could help inform next-generation materials with precisely layered nanoscale architecture.

“When we build something like a brick building, we can build in periodicity through careful planning and measuring and templating,” he said. “Mollusks can achieve similar results on the nanoscale by using a different strategy. So we have a lot to learn from them, and that knowledge could help us make stronger, lighter materials in the future.”

Here’s a link to and a citation for the paper,

The mesoscale order of nacreous pearls by Jiseok Gim, Alden Koch, Laura M. Otter, Benjamin H. Savitzky, Sveinung Erland, Lara A. Estroff, Dorrit E. Jacob, and Robert Hovden. PNAS vol. 118 no. 42 e2107477118 DOI: https://doi.org/10.1073/pnas.2107477118 Published in issue October 19, 2021 Published online October 18, 2021

This paper appears to be open access.

Ever heard a bird singing and wondered what kind of bird?

The Cornell University Lab of Ornithology’s sound recognition feature in its Merlin birding app(lication) can answer that question for you according to a July 14, 2021 article by Steven Melendez for Fast Company (Note: Links have been removed),

The lab recently upgraded its Merlin smartphone app, designed for both new and experienced birdwatchers. It now features an AI-infused “Sound ID” feature that can capture bird sounds and compare them to crowdsourced samples to figure out just what bird is making that sound. … people have used it to identify more than 1 million birds. New user counts are also up 58% since the two weeks before launch, and up 44% over the same period last year, according to Drew Weber, Merlin’s project coordinator.

Even when it’s listening to bird sounds, the app still relies on recent advances in image recognition, says project research engineer Grant Van Horn. …, it actually transforms the sound into a visual graph called a spectrogram, similar to what you might see in an audio editing program. Then, it analyzes that spectrogram to look for similarities to known bird calls, which come from the Cornell Lab’s eBird citizen science project.

There’s more detail about Merlin in Marc Devokaitis’ June 23, 2021 article for the Cornell Chronicle,

… Merlin can recognize the sounds of more than 400 species from the U.S. and Canada, with that number set to expand rapidly in future updates.

As Merlin listens, it uses artificial intelligence (AI) technology to identify each species, displaying in real time a list and photos of the birds that are singing or calling.

Automatic song ID has been a dream for decades, but analyzing sound has always been extremely difficult. The breakthrough came when researchers, including Merlin lead researcher Grant Van Horn, began treating the sounds as images and applying new and powerful image classification algorithms like the ones that already power Merlin’s Photo ID feature.

“Each sound recording a user makes gets converted from a waveform to a spectrogram – a way to visualize the amplitude [volume], frequency [pitch] and duration of the sound,” Van Horn said. “So just like Merlin can identify a picture of a bird, it can now use this picture of a bird’s sound to make an ID.”

Merlin’s pioneering approach to sound identification is powered by tens of thousands of citizen scientists who contributed their bird observations and sound recordings to eBird, the Cornell Lab’s global database.

“Thousands of sound recordings train Merlin to recognize each bird species, and more than a billion bird observations in eBird tell Merlin which birds are likely to be present at a particular place and time,” said Drew Weber, Merlin project coordinator. “Having this incredibly robust bird dataset – and feeding that into faster and more powerful machine-learning tools – enables Merlin to identify birds by sound now, when doing so seemed like a daunting challenge just a few years ago.”

The Merlin Bird ID app with the new Sound ID feature is available for free on iOS and Android devices. Click here to download the Merlin Bird ID app and follow the prompts. If you already have Merlin installed on your phone, tap “Get Sound ID.”
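For the technically curious, the waveform-to-spectrogram step Van Horn describes can be sketched in a few lines of Python. This is not Merlin's actual pipeline — the file name and parameters below are placeholders — but it shows how a recording becomes the kind of 2-D "picture of sound" that standard image-classification models can be trained on.

```python
# Minimal sketch: turn a bird recording into a spectrogram, the kind of
# image-like representation an image classifier can be trained on.
# Not Merlin's code; "bird_recording.wav" and the parameters are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("bird_recording.wav")   # hypothetical input file
if samples.ndim > 1:                                 # fold stereo to mono
    samples = samples.mean(axis=1)

# Short-time Fourier transform: frequency (pitch) vs. time, with
# amplitude (volume) encoded as intensity.
freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024, noverlap=512)

# Convert to decibels so quiet and loud sounds are both visible,
# much like the display in an audio-editing program.
power_db = 10 * np.log10(power + 1e-12)

# 'power_db' is a 2-D array (frequency x time) that can be saved as an
# image and fed to a standard image-classification model.
print(power_db.shape)
```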

Do take a look at Devokaitis’ June 23, 2021 article for more about how the Merlin app provides four ways to identify birds.

For anyone who likes to listen to the news, there’s an August 26, 2021 podcast (The Warblers by Birds Canada) featuring Drew Weber, Merlin project coordinator, and Jody Allair, Birds Canada Director of Community Engagement, discussing Merlin,

It’s a dream come true – there’s finally an app for identifying bird sounds. In the next episode of The Warblers podcast, we’ll explore the Merlin Bird ID app’s new Sound ID feature and how artificial intelligence is redefining birding. We talk with Drew Weber and Jody Allair and go deep into the implications and opportunities that this technology will bring for birds, and new as well as experienced birders.

The Warblers is hosted by Andrea Gress and Andrés Jiménez.

Sounds of Central African Landscapes; a Cornell (University) Elephant Listening Project

This September 13, 2021 news item about sound recordings taken in a rainforest (on phys.org) is downright fascinating,

More than a million hours of sound recordings are available from the Elephant Listening Project (ELP) in the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology—a rainforest residing in the cloud.

ELP researchers, in collaboration with the Wildlife Conservation Society, use remote recording units to capture the entire soundscape of a Congolese rainforest. Their targets are vocalizations from endangered African forest elephants, but they also capture tropical parrots shrieking, chimps chattering and rainfall spattering on leaves to the beat of grumbling thunder.

For someone who suffers from acrophobia (fear of heights), this is a disturbing picture (how tall is that tree? is the rope reinforced? who or what is holding him up? where is the photographer perched?),

Frelcia Bambi is a member of the Congolese team that deploys sound recorders in the rainforest and analyzes the data. Photo by Sebastien Assoignons, courtesy of the Wildlife Conservation Society.

A September 13, 2021 Cornell University (NY state, US) news release by Pat Leonard, which originated the news item, provides more details about the sounds themselves and the Elephant Listening Project,

“Scientists can use these soundscapes to monitor biodiversity,” said ELP director Peter Wrege. “You could measure overall sound levels before, during and after logging operations, for example. Or hone in on certain frequencies where insects may vocalize. Sound is increasingly being used as a conservation tool, especially for establishing the presence or absence of a species.”

For the past four years, 50 tree-mounted recording units have been collecting data continuously, covering a region that encompasses old logging sites, recent logging sites and part of the Nouabalé-Ndoki National Park in the Republic of the Congo. The sensors sometimes capture the booming guns of poachers, alerting rangers who then head out to track down the illegal activity.

But everyday nature lovers can tune in rainforest sounds, too.

“We’ve had requests to use some of the files for meditation or for yoga,” Wrege said. “It is very soothing to listen to rainforest sounds—you hear the sounds of insects, birds, frogs, chimps, wind and rain all blended together.”

But, as Wrege and others have learned, big data can also be a big problem. The Sounds of Central African Landscapes recordings would gobble up nearly 100 terabytes of computer space, and ELP takes in another eight terabytes every four months. But now, Amazon Web Services is storing the jungle sounds for free under its Open Data Sponsorship Program, which preserves valuable scientific data for public use.

This makes it possible for Wrege to share the jungle sounds and easier for users to analyze them with Amazon tools so they don’t have to move the massive files or try to download them.

Searching for individual species amid the wealth of data is a bit more daunting. ELP uses computer algorithms to search through the recordings for elephant sounds. Wrege has created a detector for the sounds of gorillas beating their chests. There are software platforms that help users create detectors for specific sounds, including Raven Pro 1.6, created by the Cornell Lab’s bioacoustics engineers. Wrege says the next iteration, Raven 2.0, will make this process even easier.

Wrege is also eyeing future educational uses for the recordings which he says could help train in-country biologists to not only collect the data but do the analyses. This is gradually happening now in the Republic of the Congo—ELP’s team of Congolese researchers does all the analysis for gunshot detection, though the elephant analyses are still done at ELP.

“We could use these recordings for internships and student training in Congo and other countries where we work, such as Gabon,” Wrege said. “We can excite young people about conservation in Central Africa. It would be a huge benefit to everyone living there.”

To listen or download clips from Sounds of the Central African Landscape, go to ELP’s data page on Amazon Web Services. You’ll need to create an account with AWS (choose the free option). Then sign in with your username and password. Click on the “recordings” item in the list you see, then “wav/” on the next page. From there you can click on any item in the list to play or download clips that are each 1.3 GB and 24 hours long.
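For anyone who'd rather script the process than click through the console, here's a hedged sketch using boto3, the standard AWS library for Python. The bucket name is a placeholder (use the one on ELP's data page), the recordings/wav/ prefix simply mirrors the folder names in the instructions above, and it assumes the bucket allows anonymous reads, which is common for Open Data Sponsorship Program datasets but not confirmed in the news release.

```python
# Hedged sketch: list and download Elephant Listening Project clips from the
# AWS Open Data bucket programmatically. The bucket name is a PLACEHOLDER --
# substitute the real one from ELP's data page -- and anonymous (unsigned)
# access is an assumption, not something stated in the news release.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

BUCKET = "example-elp-congo-soundscapes"   # placeholder, not the real bucket name

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List a few objects under the recordings/wav/ prefix described above.
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="recordings/wav/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], f'{obj["Size"] / 1e9:.2f} GB')

# Download one 24-hour clip (~1.3 GB each, per the news release):
# s3.download_file(BUCKET, "recordings/wav/<some_clip>.wav", "clip.wav")
```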

Scientists looking to use sounds for research and analysis should start here.

Wildlife Conservation Society Forest Elephant Congo [downloaded from https://congo.wcs.org/Wildlife/Forest-Elephant.aspx]

What follows may be a little cynical but I can’t help noticing that this worthwhile and fascinating project will result in more personal and/or professional data for Amazon since you have to sign up even if all you’re doing is reading or listening to a few files that they’ve made available for the general public. In a sense, Amazon gets ‘paid’ when you give up an email address to them. Plus, Amazon gets to look like a good world citizen.

Let’s hope something greater than one company’s reputation as a world citizen comes out of this.

Fishes ‘talk’ and ‘sing’

This posting started out with two items and then, it became more. If you’re interested in marine bioacoustics especially the work that’s been announced in the last four months, read on.

Fish songs

This item, about how fish sounds (songs) signify successful coral reef restoration, got coverage on the BBC (British Broadcasting Corporation), the CBC (Canadian Broadcasting Corporation) and elsewhere. This video is courtesy of the Guardian newspaper,

Whoops and grunts: ‘bizarre’ fish songs raise hopes for coral reef recovery https://www.theguardian.com/environme…

A December 8, 2021 University of Exeter press release (also on EurekAlert) explains why the sounds give hope (Note: Links have been removed),

Newly discovered fish songs demonstrate reef restoration success

Whoops, croaks, growls, raspberries and foghorns are among the sounds that demonstrate the success of a coral reef restoration project.

Thousands of square metres of coral are being grown on previously destroyed reefs in Indonesia, but previously it was unclear whether these new corals would revive the entire reef ecosystem.

Now a new study, led by researchers from the University of Exeter and the University of Bristol, finds a healthy, diverse soundscape on the restored reefs.

These sounds – many of which have never been recorded before – can be used alongside visual observations to monitor these vital ecosystems.

“Restoration projects can be successful at growing coral, but that’s only part of the ecosystem,” said lead author Dr Tim Lamont, of the University of Exeter and the Mars Coral Reef Restoration Project, which is restoring the reefs in central Indonesia.

“This study provides exciting evidence that restoration really works for the other reef creatures too – by listening to the reefs, we’ve documented the return of a diverse range of animals.”

Professor Steve Simpson, from the University of Bristol, added: “Some of the sounds we recorded are really bizarre, and new to us as scientists.  

“We have a lot still to learn about what they all mean and the animals that are making them. But for now, it’s amazing to be able to hear the ecosystem come back to life.”

The soundscapes of the restored reefs are not identical to those of existing healthy reefs – but the diversity of sounds is similar, suggesting a healthy and functioning ecosystem.

There were significantly more fish sounds recorded on both healthy and restored reefs than on degraded reefs.

This study used acoustic recordings taken in 2018 and 2019 as part of the monitoring programme for the Mars Coral Reef Restoration Project.

The results are positive for the project’s approach, in which hexagonal metal frames called ‘Reef Stars’ are seeded with coral and laid over a large area. The Reef Stars stabilise loose rubble and kickstart rapid coral growth, leading to the revival of the wider ecosystem.  

Mochyudho Prasetya, of the Mars Coral Reef Restoration Project, said: “We have been restoring and monitoring these reefs here in Indonesia for many years. Now it is amazing to see more and more evidence that our work is helping the reefs come back to life.”

Professor David Smith, Chief Marine Scientist for Mars Incorporated, added: “When the soundscape comes back like this, the reef has a better chance of becoming self-sustaining because those sounds attract more animals that maintain and diversify reef populations.”

Asked about the multiple threats facing coral reefs, including climate change and water pollution, Dr Lamont said: “If we don’t address these wider problems, conditions for reefs will get more and more hostile, and eventually restoration will become impossible.

“Our study shows that reef restoration can really work, but it’s only part of a solution that must also include rapid action on climate change and other threats to reefs worldwide.”

The study was partly funded by the Natural Environment Research Council and the Swiss National Science Foundation.

Here’s a link to and a citation for the paper,

The sound of recovery: Coral reef restoration success is detectable in the soundscape by Timothy A. C. Lamont, Ben Williams, Lucille Chapuis, Mochyudho E. Prasetya, Marie J. Seraphim, Harry R. Harding, Eleanor B. May, Noel Janetski, Jamaluddin Jompa, David J. Smith, Andrew N. Radford, Stephen D. Simpson. Journal of Applied Ecology DOI: https://doi.org/10.1111/1365-2664.14089 First published: 07 December 2021

This paper is open access.

You can find the Mars Coral Reef Restoration Project here.

Fish talk

There is one item here. This research from Cornell University also features the sounds fish make. It’s no surprise given the attention being given to sound that the Cornell Lab of Ornithology is involved. In addition to the lab’s main focus, birds, many other animal sounds are gathered too.

A January 27, 2022 Cornell University news release (also on EurekAlert) describes ‘fish talk’,

There’s a whole lot of talking going on beneath the waves. A new study from Cornell University finds that fish are far more likely to communicate with sound than generally thought—and some fish have been doing this for at least 155 million years. These findings were just published in the journal Ichthyology & Herpetology.

“We’ve known for a long time that some fish make sounds,” said lead author Aaron Rice, a researcher at the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology [emphasis mine]. “But fish sounds were always perceived as rare oddities. We wanted to know if these were one-offs or if there was a broader pattern for acoustic communication in fishes.”

The authors looked at a branch of fishes called the ray-finned fishes. These are vertebrates (having a backbone) that comprise 99% of the world’s known species of fishes. They found 175 families that contain two-thirds of fish species that do, or are likely to, communicate with sound. By examining the fish family tree, study authors found that sound was so important, it evolved at least 33 separate times over millions of years.

“Thanks to decades of basic research on the evolutionary relationships of fishes, we can now explore many questions about how different functions and behaviors evolved in the approximately 35,000 known species of fishes,” said co-author William E. Bemis ’76, Cornell professor of ecology and evolutionary biology in the College of Agriculture and Life Sciences. “We’re getting away from a strictly human-centric way of thinking. What we learn could give us some insight on the drivers of sound communication and how it continues to evolve.”

The scientists used three sources of information: existing recordings and scientific papers describing fish sounds; the known anatomy of a fish—whether they have the right tools for making sounds, such as certain bones, an air bladder, and sound-specific muscles; and references in 19th century literature before underwater microphones were invented.
 
“Sound communication is often overlooked within fishes, yet they make up more than half of all living vertebrate species,” said Andrew Bass, co-lead author and the Horace White Professor of Neurobiology and Behavior in the College of Arts and Sciences. “They’ve probably been overlooked because fishes are not easily heard or seen, and the science of underwater acoustic communication has primarily focused on whales and dolphins. But fishes have voices, too!”
 
Listen:

Oyster Toadfish, William Tavolga, Macaulay Library

Longspine squirrelfish, Howard Winn, Macaulay Library

Banded drum, Donald Batz, Macaulay Library

Midshipman, Andrew Bass, Macaulay Library

What are the fish talking about? Pretty much the same things we all talk about—sex and food. Rice says the fish are either trying to attract a mate, defend a food source or territory, or let others know where they are. Even some of the common names for fish are based on the sounds they make, such as grunts, croakers, hog fish, squeaking catfish, trumpeters, and many more.
 
Rice intends to keep tracking the discovery of sound in fish species and add them to his growing database (see supplemental material, Table S1)—a project he began 20 years ago with study co-authors Ingrid Kaatz ’85, MS ’92, and Philip Lobel, a professor of biology at Boston University. Their collaboration has continued and expanded since Rice came to Cornell.
 
“This introduces sound communication to so many more groups than we ever thought,” said Rice. “Fish do everything. They breathe air, they fly, they eat anything and everything—at this point, nothing would surprise me about fishes and the sounds that they can make.”

The research was partly funded by the National Science Foundation, the U.S. Bureau of Ocean Energy Management, the Tontogany Creek Fund, and the Cornell Lab of Ornithology.

I’ve embedded one of the audio files, Oyster Toadfish (William Tavolga) here,

Here’s a link to and a citation for the paper,

Evolutionary Patterns in Sound Production across Fishes by Aaron N. Rice, Stacy C. Farina, Andrea J. Makowski, Ingrid M. Kaatz, Phillip S. Lobel, William E. Bemis, Andrew H. Bass. Ichthyology & Herpetology, 110(1):1-12 (2022) DOI: https://doi.org/10.1643/i2020172 Published: 20 January 2022

This paper is open access.

Marine sound libraries

Thanks to Aly Laube’s March 2, 2022 article on the DailyHive.com, I learned of Kieran Cox’s work at the University of Victoria and FishSounds (Note: Links have been removed),

Fish have conversations and a group of researchers made a website to document them. 

It’s so much fun to peruse and probably the good news you need. Listen to a Bocon toadfish “boop” or this sablefish tick, which is slightly creepier, but still pretty cool. This streaked gurnard can growl, and this grumpy Atlantic cod can grunt.

The technical term for “fishy conversations” is “marine bioacoustics,” which is what Kieran Cox specializes in. They can be used to track, monitor, and learn more about aquatic wildlife.

The doctor of marine biology at the University of Victoria co-authored an article about fish sounds in Reviews in Fish Biology and Fisheries called “A Quantitative Inventory of Global Soniferous Fish Diversity.”

It presents findings from the process of helping create FishSounds.net. He and his team looked over more than 3,000 documents from 834 studies to put together the library of 989 fish species.

A March 2, 2022 University of Victoria news release provides more information about the work and the research team (Note: Links have been removed),

Fascinating soundscapes exist beneath rivers, lakes and oceans. An unexpected sound source is fish, which make their own unique and entertaining noises, from guttural grunts to high-pitched squeals. Underwater noise is a vital part of marine ecosystems, and thanks to almost 150 years of researchers documenting those sounds, we know hundreds of fish species contribute their distinctive sounds. Although fish are the largest and most diverse group of sound-producing vertebrates in water, there was no record of which fish species make sound and the sounds they produce. For the very first time, there is now a digital place where that data can be freely accessed or contributed to: an online repository, a global inventory of fish sounds.

Kieran Cox co-authored the published article about fish sounds and their value in Reviews in Fish Biology and Fisheries while completing his Ph.D. in marine biology at the University of Victoria. Cox recently began a Liber Ero post-doctoral collaboration with Francis Juanes that aims to integrate marine bioacoustics into the conservation of Canada’s oceans. The Liber Ero program is devoted to promoting applied and evidence-based conservation in Canada.

The international group of researchers — which includes UVic, the University of Florida, Universidade de São Paulo, and the Marine Environmental Research Infrastructure for Data Integration and Application Network (MERIDIAN) [emphasis mine] — has launched the first-ever dedicated website focused on fish and their sounds: FishSounds.net. …

According to Cox, “This data is absolutely critical to our efforts. Without it, we were having a one-sided conversation about how noise impacts marine life. Now we can better understand the contributions fish make to soundscapes and examine which species may be most impacted by noise pollution.” Cox, an avid scuba diver, remembers his first dive when the distinct sound of parrotfish eating coral resonated over the reef, “It’s thrilling to know we are now archiving vital ecological information and making it freely available to the public, I feel like my younger self would be very proud of this effort.” …

There’s also a March 2, 2022 University of Florida news release on EurekAlert about FishSounds which adds more details about the work (Note: Links have been removed),

Cows moo. Wolves howl. Birds tweet. And fish, it turns out, make all sorts of ruckus.

“People are often surprised to learn that fish make sounds,” said Audrey Looby, a doctoral candidate at the University of Florida. “But you could make the case that they are as important for understanding fish as bird sounds are for studying birds.”

The sounds of many animals are well documented. Go online, and you’ll find plenty of resources for bird calls and whale songs. However, a global library for fish sounds used to be unheard of.

That’s why Looby, University of Victoria collaborator Kieran Cox and an international team of researchers created FishSounds.net, the first online, interactive fish sounds repository of its kind.

“There’s no standard system yet for naming fish sounds, so our project uses the sound names researchers have come up with,” Looby said. “And who doesn’t love a fish that boops?”

The library’s creators hope to add a feature that will allow people to submit their own fish sound recordings. Other interactive features, such as a world map with clickable fish sound data points, are also in the works.

Fish make sound in many ways. Some, like the toadfish, have evolved organs or other structures in their bodies that produce what scientists call active sounds. Other fish produce incidental or passive sounds, like chewing or splashing, but even passive sounds can still convey information.

Scientists think fish evolved to make sound because sound is an effective way to communicate underwater. Sound travels faster under water than it does through air, and in low visibility settings, it ensures the message still reaches an audience.

“Fish sounds contain a lot of important information,” said Looby, who is pursuing a doctorate in fisheries and aquatic sciences at the UF/IFAS College of Agricultural and Life Sciences. “Fish may communicate about territory, predators, food and reproduction. And when we can match fish sounds to fish species, their sounds are a kind of calling card that can tell us what kinds of fish are in an area and what they are doing.”

Knowing the location and movements of fish species is critical for environmental monitoring, fisheries management and conservation efforts. In the future, marine, estuarine or freshwater ecologists could use hydrophones — special underwater microphones — to gather data on fish species’ whereabouts. But first, they will need to be able to identify which fish they are hearing, and that’s where the fish sounds database can assist.

FishSounds.net emerged from the research team’s efforts to gather and review the existing scientific literature on fish sounds. An article synthesizing that literature has just been published in Reviews in Fish Biology and Fisheries.

In the article, the researchers reviewed scientific reports of fish sounds going back almost 150 years. They found that a little under a thousand fish species are known to make active sounds, and several hundred species were studied for their passive sounds. However, these are probably both underestimates, Cox explained.

Here’s a link to and a citation for the paper,

A quantitative inventory of global soniferous fish diversity by Audrey Looby, Kieran Cox, Santiago Bravo, Rodney Rountree, Francis Juanes, Laura K. Reynolds & Charles W. Martin. Reviews in Fish Biology and Fisheries (2022) DOI: https://doi.org/10.1007/s11160-022-09702-1 Published 18 February 2022

This paper is behind a paywall.

Finally, there’s GLUBS. A comprehensive February 27, 2022 Rockefeller University news release on EurekAlert announces a proposal for the Global Library of Underwater Biological Sounds (GLUBS). Note 1: Links have been removed; Note 2: If you’re interested in the topic, I recommend reading the original February 27, 2022 Rockefeller University news release with its numerous embedded images, audio files, and links to marine audio libraries,

Of the roughly 250,000 known marine species, scientists think all ~126 marine mammals emit sounds – the ‘thwop’, ‘muah’, and ‘boop’s of a humpback whale, for example, or the boing of a minke whale. Audible too are at least 100 invertebrates, 1,000 of the world’s 34,000 known fish species, and likely many thousands more.

Now a team of 17 experts from nine countries has set a goal [emphasis mine] of gathering on a single platform huge collections of aquatic life’s tell-tale sounds, and expanding it using new enabling technologies – from highly sophisticated ocean hydrophones and artificial intelligence learning systems to phone apps and underwater GoPros used by citizen scientists.

The Global Library of Underwater Biological Sounds, “GLUBS,” will underpin a novel non-invasive, affordable way for scientists to listen in on life in marine, brackish and freshwaters, monitor its changing diversity, distribution and abundance, and identify new species. Using the acoustic properties of underwater soundscapes can also characterize an ecosystem’s type and condition.

“A database of unidentified sounds is, in some ways, as important as one for known sources,” the scientists say. “As the field progresses, new unidentified sounds will be collected, and more unidentified sounds can be matched to species.”

This can be “particularly important for high-biodiversity systems such as coral reefs, where even a short recording can pick up multiple animal sounds.”

Existing libraries of undersea sounds (several of which are listed with hyperlinks below) “often focus on species of interest that are targeted by the host institute’s researchers,” the paper says, and several are nationally-focussed. Few libraries identify what is missing from their catalogs, which the proposed global library would.

“A global reference library of underwater biological sounds would increase the ability for more researchers in more locations to broaden the number of species assessed within their datasets and to identify sounds they personally do not recognize,” the paper says.

The scientists note that listening to the sea has revealed great whales swimming in unexpected places, new species and new sounds.

With sound, “biologically important areas can be mapped; spawning grounds, essential fish habitat, and migration pathways can be delineated…These and other questions can be queried on broader scales if we have a global catalog of sounds.”

Meanwhile, comparing sounds from a single species across broad areas and times helps understand their diversity and evolution.

Numerous marine animals are cosmopolitan, the paper says, “either as wide-roaming individuals, such as the great whales, or as broadly distributed species, such as many fishes.”

Fin whale calls, for example, can differ among populations in the Northern and Southern hemispheres, and over seasons, whereas the call of pilot whales is similar worldwide, even though their home ranges do not (or no longer) cross the equator.

Some fishes even seem to develop geographic ‘dialects’ or completely different signal structures among regions, several of which evolve over time.

Madagascar’s skunk anemonefish … , for example, produces different agonistic (fight-related) sounds than those in Indonesia, while differences in the song of humpback whales have been observed across ocean basins.

Phone apps, underwater GoPros and citizen science

Much like BirdNet and FrogID, a library of underwater biological sounds and automated detection algorithms would be useful not only for the scientific, industry and marine management communities but also for users with a general interest.

“Acoustic technology has reached the stage where a hydrophone can be connected to a mobile phone so people can listen to fishes and whales in the rivers and seas around them. Therefore, sound libraries are becoming invaluable to citizen scientists and the general public,” the paper adds.

And citizen scientists could be of great help to the library by uploading the results of, for example, the River Listening app (www.riverlistening.com), which encourages the public to listen to and record fish sounds in rivers and coastal waters.

Low-cost hydrophones and recording systems (such as the Hydromoth) are increasingly available and waterproof recreational recording systems (such as GoPros) can also collect underwater biological sounds.
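
The core technical idea behind a reference library like GLUBS, comparing an unidentified recording against recordings from known sound producers, can be sketched in a few lines. To be clear, this is not GLUBS software (the paper is a proposal, not a code release); it’s a toy example of mine using NumPy and SciPy, and the file names are hypothetical.

```python
# Toy illustration of matching an unidentified underwater recording against a
# small reference library of known species sounds. Not GLUBS software; just a
# sketch. Naive assumptions: WAV files, similar sample rates, non-silent clips.

import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def spectral_fingerprint(path, nperseg=1024):
    """Return a normalized average power spectrum for a WAV file."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                      # mix stereo down to mono
        samples = samples.mean(axis=1)
    _, _, sxx = spectrogram(samples, fs=rate, nperseg=nperseg)
    profile = sxx.mean(axis=1)                # average power in each frequency bin
    return profile / np.linalg.norm(profile)

def rank_matches(unknown_path, reference_paths):
    """Rank reference recordings by cosine similarity to the unknown clip."""
    unknown = spectral_fingerprint(unknown_path)
    scores = {}
    for ref in reference_paths:
        fingerprint = spectral_fingerprint(ref)
        n = min(len(unknown), len(fingerprint))   # crude guard for unequal lengths
        scores[ref] = float(np.dot(unknown[:n], fingerprint[:n]))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical usage:
# print(rank_matches("mystery_clip.wav", ["humpback.wav", "minke.wav", "toadfish.wav"]))
```

A real system would rely on far more robust acoustic features and trained classifiers, which is where the automated detection algorithms mentioned above come in.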

Here’s a link to and a citation for the paper,

Sounding the Call for a Global Library of Underwater Biological Sounds by Miles J. G. Parsons, Tzu-Hao Lin, T. Aran Mooney, Christine Erbe, Francis Juanes, Marc Lammers, Songhai Li, Simon Linke, Audrey Looby, Sophie L. Nedelec, Ilse Van Opzeeland, Craig Radford, Aaron N. Rice, Laela Sayigh, Jenni Stanley, Edward Urban and Lucia Di Iorio. Front. Ecol. Evol., 08 February 2022 DOI: https://doi.org/10.3389/fevo.2022.810156 Published: 08 February 2022.

This paper appears to be open access.

Internet of living things (IoLT)?

It’s not here yet but there are scientists working on an internet of living things (IoLT). There are some details (see the fourth paragraph from the bottom of the news release excerpt) about how an IoLT would be achieved but it seems these are early days. From a September 9, 2021 University of Illinois news release (also on EurekAlert), Note: Links have been removed,

The National Science Foundation (NSF) announced today an investment of $25 million to launch the Center for Research on Programmable Plant Systems (CROPPS). The center, a partnership among the University of Illinois at Urbana-Champaign, Cornell University, the Boyce Thompson Institute, and the University of Arizona, aims to develop tools to listen and talk to plants and their associated organisms.

“CROPPS will create systems where plants communicate their hidden biology to sensors, optimizing plant growth to the local environment. This Internet of Living Things (IoLT) will enable breakthrough discoveries, offer new educational opportunities, and open transformative opportunities for productive, sustainable, and profitable management of crops,” says Steve Moose (BSD/CABBI/GEGC), the grant’s principal investigator at Illinois. Moose is a genomics professor in the Department of Crop Sciences, part of the College of Agricultural, Consumer and Environmental Sciences (ACES). 

As an example of what’s possible, CROPPS scientists could deploy armies of autonomous rovers to monitor and modify crop growth in real time. The researchers created leaf sensors to report on belowground processes in roots. This combination of machine and living sensors will enable completely new ways of decoding the language of plants, allowing researchers to teach plants how to better handle environmental challenges. 

“Right now, we’re working to program a circuit that responds to low-nitrogen stress, where the plant growth rate is ‘slowed down’ to give farmers more time to apply fertilizer during the window that is the most efficient at increasing yield,” Moose explains.

With 150+ years of global leadership in crop sciences and agricultural engineering, along with newer transdisciplinary research units such as the National Center for Supercomputing Applications (NCSA) and the Center for Digital Agriculture (CDA), Illinois is uniquely positioned to take on the technical challenges associated with CROPPS.

But U of I scientists aren’t working alone. For years, they’ve collaborated with partner institutions to conceptualize the future of digital agriculture and bring it into reality. For example, researchers at Illinois’ CDA and Cornell’s Initiative for Digital Agriculture jointly proposed the first IoLT for agriculture, laying the foundation for CROPPS.

“CROPPS represents a significant win from having worked closely with our partners at Cornell and other institutions. We’re thrilled to move forward with our colleagues to shift paradigms in agriculture,” says Vikram Adve, Donald B. Gillies Professor in computer science at Illinois and co-director of the CDA.

CROPPS research may sound futuristic, and that’s the point.

The researchers say new tools are needed to make crops productive, flexible, and sustainable enough to feed our growing global population under a changing climate. Many of the tools under development – biotransducers small enough to fit between soil particles, dexterous and highly autonomous field robots, field-applied gene editing nanoparticles, IoLT clouds, and more – have been studied in the proof-of-concept phase, and are ready to be scaled up.

“One of the most exciting goals of CROPPS is to apply recent advances in sensing and data analytics to understand the rules of life, where plants have much to teach us. What we learn will bring a stronger biological dimension to the next phase of digital agriculture,” Moose says. 

CROPPS will also foster innovations in STEM [science, technology, engineering, and mathematics] education through programs that involve students at all levels, and each partner institution will share courses in digital agriculture topics. CROPPS also aims to engage professionals in digital agriculture at any career stage, and learn how the public views innovations in this emerging technology area.

“Along with cutting-edge research, CROPPS coordinated educational programs will address the future of work in plant sciences and agriculture,” says Germán Bollero, associate dean for research in the College of ACES.

I look forward to hearing more about IoLT.

Attosecond imaging technology with record high-harmonic generation

This July 21, 2021 news item on Nanowerk is all about laser pulses and tiny timescales.

Cornell researchers have developed nanostructures that enable record-breaking conversion of laser pulses into high-harmonic generation, paving the way for new scientific tools for high-resolution imaging and studying physical processes that occur at the scale of an attosecond – one quintillionth of a second [emphasis mine].

High-harmonic generation has long been used to merge photons from a pulsing laser into one, ultrashort photon with much higher energy, producing extreme ultraviolet light and X-rays used for a variety of scientific purposes. Traditionally, gases have been used as sources of harmonics, but a research team led by Gennady Shvets, professor of applied and engineering physics in the College of Engineering, has shown that engineered nanostructures have a bright future for this application.

Illustration of an infrared laser hitting a gallium-phosphide metasurface, which efficiently produces even and odd high-harmonic generation. Credit: Daniil Shilkin/Provided

A July 21, 2021 Cornell University news release by Syl Kacapyr (also on EurekAlert), which originated the news item, provides more detail about the nanostructures,

The nanostructures created by the team make up an ultrathin resonant gallium-phosphide metasurface that overcomes many of the usual problems associated with high-harmonic generation in gases and other solids. The gallium-phosphide material permits harmonics of all orders without reabsorbing them, and the specialized structure can interact with the laser pulse’s entire light spectrum.

“Achieving this required engineering of the metasurface’s structure using full-wave simulations,” Shcherbakov [Maxim Shcherbakov] said. “We carefully selected the parameters of the gallium-phosphide particles to fulfill this condition, and then it took a custom nanofabrication flow to bring it to light.”

The result is nanostructures capable of generating both even and odd harmonics – a limitation of most other harmonic materials – covering a wide range of photon energies between 1.3 and 3 electron volts. The record-breaking conversion efficiency enables scientists to observe molecular and electronic dynamics within a material with just one laser shot, helping to preserve samples that may otherwise be degraded by multiple high-powered shots.

The study is the first to observe high-harmonic generated radiation from a single laser pulse, which allowed the metasurface to withstand high powers – five to 10 times higher than previously shown in other metasurfaces.

“It opens up new opportunities to study matter at ultrahigh fields, a regime not readily accessible before,” Shcherbakov said. “With our method, we envision that people can study materials beyond metasurfaces, including but not limited to crystals, 2D materials, single atoms, artificial atomic lattices and other quantum systems.”

Now that the research team has demonstrated the advantages of using nanostructures for high-harmonic generation, it hopes to improve high-harmonic devices and facilities by stacking the nanostructures together to replace a solid-state source, such as crystals.
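
For anyone wondering how even and odd harmonics relate to the 1.3 to 3 electron volt range mentioned above: the nth harmonic carries n times the pump photon’s energy (E = hc/wavelength). Here’s a quick back-of-the-envelope calculation; the 3-micrometre pump wavelength is my own illustrative assumption, not a figure taken from the paper.

```python
# Back-of-the-envelope harmonic photon energies, E_n = n * h * c / wavelength.
# The 3 um pump wavelength is an illustrative assumption, not taken from the paper.

H = 6.62607015e-34       # Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
EV = 1.602176634e-19     # joules per electron volt

pump_wavelength_m = 3.0e-6                          # assumed mid-infrared pump
pump_energy_ev = H * C / (pump_wavelength_m * EV)   # about 0.41 eV

for n in range(2, 9):                               # even and odd orders alike
    print(f"harmonic {n}: {n * pump_energy_ev:.2f} eV")
```

With that assumed pump, the third through seventh harmonics land roughly between 1.2 and 2.9 eV, which is the kind of spread the quoted 1.3 to 3 eV range describes.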

Here’s a link to and a citation for the paper,

Generation of even and odd high harmonics in resonant metasurfaces using single and multiple ultra-intense laser pulses by Maxim R. Shcherbakov, Haizhong Zhang, Michael Tripepi, Giovanni Sartorello, Noah Talisa, Abdallah AlShafey, Zhiyuan Fan, Justin Twardowski, Leonid A. Krivitsky, Arseniy I. Kuznetsov, Enam Chowdhury & Gennady Shvets. Nature Communications volume 12, Article number: 4185 DOI: https://doi.org/10.1038/s41467-021-24450-9 Published: 07 July 2021

This paper is open access.

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44 min. programme. As you’d expect, a significant chunk of time was devoted to research being done in the US, but Poland and Japan also featured, and the Canadian content was substantive. A number of tricky topics were covered, and transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.

David Suzuki’s (programme host) script was well written and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage on the CBC),

In The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko to be quite moving when he described his relationship, which is not unique. It seems some 4,000 men have ‘wed’ their digital companions; you can read about that and more on the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It’s an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed. For example, one woman who has an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head’: in one chat, her ‘friend’ told her that all of a woman’s worth is based on her body. She pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted, Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored, these relationships could be said to resemble slavery. After all, you pay for these friends over which you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for more information from Ahmed Elgammal’s (Director of the Art & AI Lab at Rutgers University) technical perspective on the project.

Briefly, Beethoven died before completing his 10th symphony and a number of computer scientists, musicologists, AI, and musicians collaborated to finish the symphony.)

The one listener (Felix Mayer, music professor at the Technical University Munich) in the hall during a performance doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ‘10th’ is at least partly mathematical guesswork: an algorithm chooses which note comes next based on probabilities derived from Beethoven’s existing music.

There was another artist also represented in the programme. Puzzlingly, it was the still living Douglas Coupland. In my opinion, he’s better known as a visual artist than a writer (his Wikipedia entry lists him as a novelist first) but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things programme ‘The Machine That Feels’ puzzling is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s written work and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 slogans for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
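
For readers curious about what ‘starting a sentence and letting the AI complete it’ looks like in practice, here is a minimal sketch using an off-the-shelf GPT-2 model via the Hugging Face transformers library (with PyTorch installed). It is not the model Google tuned on Coupland’s corpus; the prompt and settings are my own illustrative choices.

```python
# Minimal sentence-completion sketch with an off-the-shelf GPT-2 model.
# Not the custom model tuned on Coupland's writing; just the same basic mechanic.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The class of 2030 will"   # hypothetical opening words supplied by the human
completions = generator(
    prompt,
    max_new_tokens=20,              # keep the outputs slogan-length
    num_return_sequences=5,
    do_sample=True,                 # sample rather than always taking the likeliest words
)

for c in completions:
    print(c["generated_text"])
# A human still combs through the outputs and keeps only the ones that read like 'gems'.
```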

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”
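
The cut-up really is a mechanical method, which is easy to see when you write it down as code. A toy version (my own illustration, nothing to do with Google’s or Coupland’s tooling) just slices a text into fragments and shuffles them:

```python
# A toy Burroughs-style cut-up: slice a text into fragments, shuffle them,
# and paste them back together at random.

import random

def cut_up(text, fragment_words=4, seed=None):
    """Cut `text` into fragments of roughly `fragment_words` words and shuffle them."""
    rng = random.Random(seed)
    words = text.split()
    fragments = [
        " ".join(words[i:i + fragment_words])
        for i in range(0, len(words), fragment_words)
    ]
    rng.shuffle(fragments)
    return " / ".join(fragments)

sample = ("All writing is in fact cut-ups. A collage of words read heard overheard. "
          "Use of scissors renders the process explicit.")
print(cut_up(sample, seed=42))
```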

First, John von Neumann (1902 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by Softbank Robotics, part of Softbank, a multinational Japanese conglomerate, [see a June 28, 2021 article by Ian Carlos Campbell for The Verge] whose entire management team is male according to their About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation: did science have any morality? (I said, no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much, if any, thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values, e.g., if your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google, but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s artificial intelligence research institute) & IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.

As for Mila, the Canada Google blog in a November 21, 2016 posting notes a $4.5M grant to the institution,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3,95 million funding grant until 22.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, an AI language model. He seems to be acting as an advocate for AI, although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence, and this work introduces the notion of ‘living’ robots, which raises the question of what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development, but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

As the story about the xenobots doesn’t say, we could also take the evolution of another species into our hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that, as an environmentalist, he’d point out that the huge amounts of computing power needed for artificial intelligence, as mentioned in the programme, constitute an environmental issue. I also would have expected a geneticist like Suzuki to have some concerns with regard to xenobots, but perhaps that’s being saved for the next episode (The New Human) of The Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.

Finally

The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well-researched piece of infotainment.

To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, where the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite Joseph Weizenbaum (the programme’s creator) insisting otherwise.
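
ELIZA worked by pattern matching and canned reflection rather than by understanding anything. A stripped-down sketch of the idea (these rules are my own, far simpler than Weizenbaum’s DOCTOR script):

```python
# A stripped-down, ELIZA-style responder: pattern matching plus canned templates.
# These rules are illustrative only, not Weizenbaum's original DOCTOR script.

import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I),     "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I feel anxious about artificial intelligence."))
# -> Why do you feel anxious about artificial intelligence?
```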