Tag Archives: Washington University in St. Louis

Electro-agriculture and uncoupling from nature?

An October 23, 2024 news item on ScienceDaily announces a radical (by my standards) new technology for agriculture,

Photosynthesis, the chemical reaction that enables almost all life on Earth, is extremely inefficient at capturing energy — only around 1% of light energy that a plant absorbs is converted into chemical energy within the plant. Bioengineers propose a radical new method of food production that they call ‘electro-agriculture.’ The method essentially replaces photosynthesis with a solar-powered chemical reaction that more efficiently converts CO2 into an organic molecule that plants would be genetically engineered to ‘eat.’ The researchers estimate that if all food in the US were produced using electro-agriculture, it would reduce the amount of land needed for agriculture by 94%. The method could also be used to grow food in space.

An October 23, 2024 Cell Press news release, which originated the news item, offers more information about this new technique,

“If we don’t need to grow plants with sunlight anymore, then we can decouple agriculture from the environment [emphasis mine] and grow food in indoor, controlled environments,” says corresponding author and biological engineer Robert Jinkerson (@JinkersonLab) of University of California, Riverside. “I think that we need to move agriculture into the next phase of technology, and producing it in a controlled way that is decoupled from nature has to be the next step [emphasis mine].”

Electro-agriculture would mean replacing agricultural fields with multi-story buildings. Solar panels on or near the buildings would absorb the sun’s radiation, and this energy would power a chemical reaction between CO2 and water to produce acetate—a molecule similar to acetic acid, the main component in vinegar. The acetate would then be used to feed plants that are grown hydroponically. The method could also be used to grow other food-producing organisms, since acetate is naturally used by mushrooms, yeast, and algae.

“The whole point of this new process is to try to boost the efficiency of photosynthesis,” says senior author Feng Jiao (@Jiao_Lab), an electrochemist at Washington University in St. Louis. “Right now, we are at about 4% efficiency, which is already four times higher than for photosynthesis, and because everything is more efficient with this method, the CO2 footprint associated with the production of the food becomes much smaller.”
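For a rough sense of what those percentages mean, here is a minimal back-of-envelope sketch (in Python, not from the paper) comparing the two conversion efficiencies. It only captures the efficiency ratio; the 94% land-reduction estimate quoted earlier comes from the paper’s own modelling, which this snippet does not reproduce.

```python
# Illustrative comparison only -- not the paper's model.
photosynthesis_eff = 0.01  # ~1% of absorbed light stored as chemical energy
electro_ag_eff = 0.04      # ~4% solar-to-food efficiency cited for the acetate route

efficiency_gain = electro_ag_eff / photosynthesis_eff
naive_land_fraction = photosynthesis_eff / electro_ag_eff  # if land scaled inversely with efficiency alone

print(f"Efficiency gain: {efficiency_gain:.0f}x")
print(f"Naive land requirement: {naive_land_fraction:.0%} of current farmland")
# The paper's 94% reduction figure rests on additional assumptions
# (e.g., stacked indoor growing) beyond this simple ratio.
```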

To genetically engineer acetate-eating plants, the researchers are taking advantage of a metabolic pathway that germinating plants use to break down food stored in their seeds. This pathway is switched off once plants become capable of photosynthesis, but switching it back on would enable them to use acetate as a source of energy and carbon.

“We’re trying to turn this pathway back on in adult plants and reawaken their native ability to utilize acetate,” says Jinkerson. “It’s analogous to lactose intolerance in humans—as babies we can digest lactose in milk, but for many people that pathway is turned off when they grow up. It’s kind of the same idea, only for plants.”

The team is focusing its initial research on tomatoes and lettuce but plans to move on to high-calorie staple crops such as cassava, sweet potatoes, and grain crops in the future. Currently, they’ve managed to engineer plants that can use acetate in addition to photosynthesis, but they ultimately aim to engineer plants that can obtain all of their necessary energy from acetate, meaning that they would not need any light themselves.

“For plants, we’re still in the research-and-development phase of trying to get them to utilize acetate as their carbon source, because plants have not evolved to grow this way, but we’re making progress,” says Jinkerson. “Mushrooms and yeast and algae, however, can be grown like this today, so I think that those applications could be commercialized first, and plants will come later down the line.”

The researchers also plan to continue refining their method of acetate production to make the carbon-fixation system even more efficient.

“This is just the first step for this research, and I think there’s a hope that its efficiency and cost will be significantly improved in the near future,” says Jiao.

Here’s a link to and a citation for the paper,

Electro-agriculture: Revolutionizing farming for a sustainable future by Bradie S. Crandall, Marcus Harland-Dunaway, Robert E. Jinkerson, Feng Jiao. Joule, Volume 8, Issue 11, pages 2974–2991, November 20, 2024. DOI: 10.1016/j.joule.2024.09.011 Published online October 23, 2024. Copyright: © 2024 The Author(s). Published by Elsevier Inc.

This paper is open access under a Creative Commons Attribution (CC BY 4.0) licence.

Audio map of 24 emotions

Caption: Audio map of vocal bursts across 24 emotions. To visit the online map and hear the sounds, go to https://s3-us-west-1.amazonaws.com/vocs/map.html# and move the cursor across the map. Credit: Courtesy of Alan Cowen

The real map, not the image of the map you see above, offers a disconcerting (for me, anyway) experience. Especially since I’ve just finished reading Lisa Feldman Barrett’s 2017 book, How Emotions are Made, where she presents her theory of ‘constructed emotion’. (There’s more about ‘constructed emotion’ later in this post.)

Moving on to the story about the ‘auditory emotion map’ in the headline, a February 4, 2019 University of California at Berkeley news release by Yasmin Anwar (also on EurekAlert but published on Feb. 5, 2019) describes the work,

Ooh, surprise! Those spontaneous sounds we make to express everything from elation (woohoo) to embarrassment (oops) say a lot more about what we’re feeling than previously understood, according to new research from the University of California, Berkeley.

Proving that a sigh is not just a sigh [a reference to the song, As Time Goes By? The lyric is “a kiss is still a kiss, a sigh is just a sigh …”], UC Berkeley scientists conducted a statistical analysis of listener responses to more than 2,000 nonverbal exclamations known as “vocal bursts” and found they convey at least 24 kinds of emotion. Previous studies of vocal bursts set the number of recognizable emotions closer to 13.

The results, recently published online in the American Psychologist journal, are demonstrated in vivid sound and color on the first-ever interactive audio map of nonverbal vocal communication.

“This study is the most extensive demonstration of our rich emotional vocal repertoire, involving brief signals of upwards of two dozen emotions as intriguing as awe, adoration, interest, sympathy and embarrassment,” said study senior author Dacher Keltner, a psychology professor at UC Berkeley and faculty director of the Greater Good Science Center, which helped support the research.

For millions of years, humans have used wordless vocalizations to communicate feelings that can be decoded in a matter of seconds, as this latest study demonstrates.

“Our findings show that the voice is a much more powerful tool for expressing emotion than previously assumed,” said study lead author Alan Cowen, a Ph.D. student in psychology at UC Berkeley.

On Cowen’s audio map, one can slide one’s cursor across the emotional topography and hover over fear (scream), then surprise (gasp), then awe (woah), realization (ohhh), interest (ah?) and finally confusion (huh?).

Among other applications, the map can be used to help teach voice-controlled digital assistants and other robotic devices to better recognize human emotions based on the sounds we make, he said.

As for clinical uses, the map could theoretically guide medical professionals and researchers working with people with dementia, autism and other emotional processing disorders to zero in on specific emotion-related deficits.

“It lays out the different vocal emotions that someone with a disorder might have difficulty understanding,” Cowen said. “For example, you might want to sample the sounds to see if the patient is recognizing nuanced differences between, say, awe and confusion.”

Though limited to U.S. responses, the study suggests humans are so keenly attuned to nonverbal signals – such as the bonding “coos” between parents and infants – that we can pick up on the subtle differences between surprise and alarm, or an amused laugh versus an embarrassed laugh.

For example, by placing the cursor in the embarrassment region of the map, you might find a vocalization that is recognized as a mix of amusement, embarrassment and positive surprise.

“A tour through amusement reveals the rich vocabulary of laughter, and a spin through the sounds of adoration, sympathy, ecstasy and desire may tell you more about romantic life than you might expect,” said Keltner.

Researchers recorded more than 2,000 vocal bursts from 56 male and female professional actors and non-actors from the United States, India, Kenya and Singapore by asking them to respond to emotionally evocative scenarios.

Next, more than 1,000 adults recruited via Amazon’s Mechanical Turk online marketplace listened to the vocal bursts and evaluated them based on the emotions and meaning they conveyed and whether the tone was positive or negative, among several other characteristics.

A statistical analysis of their responses found that the vocal bursts fit into at least two dozen distinct categories including amusement, anger, awe, confusion, contempt, contentment, desire, disappointment, disgust, distress, ecstasy, elation, embarrassment, fear, interest, pain, realization, relief, sadness, surprise (positive), surprise (negative), sympathy and triumph.

For the second part of the study, researchers sought to present real-world contexts for the vocal bursts. They did this by sampling YouTube video clips that would evoke the 24 emotions established in the first part of the study, such as babies falling, puppies being hugged and spellbinding magic tricks.

This time, 88 adults of all ages judged the vocal bursts extracted from YouTube videos. Again, the researchers were able to categorize their responses into 24 shades of emotion. The full set of data was then organized into a semantic space and presented on an interactive map.
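For readers curious how roughly 2,000 rated clips become a two-dimensional map, here is a minimal sketch of one plausible pipeline. The news release does not spell out the authors’ exact statistics; t-SNE is used below purely as a stand-in for whatever dimensionality-reduction step produced the published semantic space, and the ratings are random placeholders rather than the study’s data.

```python
# Hypothetical sketch: averaged listener ratings per clip -> 2-D layout.
import numpy as np
from sklearn.manifold import TSNE

n_bursts, n_emotions = 2000, 24                    # ~2,000 vocal bursts, 24 emotion categories
rng = np.random.default_rng(0)
mean_ratings = rng.random((n_bursts, n_emotions))  # placeholder for averaged judgments

coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(mean_ratings)

# Each clip now has an (x, y) position; colouring points by their
# top-rated emotion yields the kind of gradient map linked above.
top_emotion = mean_ratings.argmax(axis=1)
print(coords.shape, top_emotion[:5])
```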

“These results show that emotional expressions color our social interactions with spirited declarations of our inner feelings that are difficult to fake, and that our friends, co-workers, and loved ones rely on to decipher our true commitments,” Cowen said.

The writer assumes that emotions are pre-existing. Somewhere, there’s happiness, sadness, anger, etc. It’s the pre-existence that Lisa Feldman Barrett challenges with her theory that we construct our emotions (from her Wikipedia entry),

She highlights differences in emotions between different cultures, and says that emotions “are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment.”

You can find Barrett’s December 6, 2017 TED talk here where she explains her theory in greater detail. One final note about Barrett: she was born and educated in Canada and now works as a Professor of Psychology at Northeastern University in Boston, Massachusetts, US, with appointments at Harvard Medical School and Massachusetts General Hospital.

A February 7, 2019 article by Mark Wilson for Fast Company delves further into the 24 emotion audio map mentioned at the outset of this posting (Note: Links have been removed),

Fear, surprise, awe. Desire, ecstasy, relief.

These emotions are not distinct, but interconnected, across the gradient of human experience. At least that’s what a new paper from researchers at the University of California, Berkeley, Washington University, and Stockholm University proposes. The accompanying interactive map, which charts the sounds we make and how we feel about them, will likely persuade you to agree.

At the end of his article, Wilson also mentions the Dalai Lama and his Atlas of Emotions, a data visualization project, (featured in Mark Wilson’s May 13, 2016 article for Fast Company). It seems humans of all stripes are interested in emotions.

Here’s a link to and a citation for the paper about the audio map,

Mapping 24 emotions conveyed by brief human vocalization by Cowen, Alan S.; Elfenbein, Hillary Anger; Laukka, Petri; Keltner, Dacher. American Psychologist, Dec 20, 2018, No Pagination Specified. DOI: 10.1037/amp0000399


This paper is behind a paywall.

On the verge of controlling neurons by wireless?

Scientists have controlled a mouse’s neurons with a wireless device (and unleashed some paranoid fantasies? well, mine if no one else’s) according to a July 16, 2015 news item on Nanowerk (Note: A link has been removed),

A study showed that scientists can wirelessly determine the path a mouse walks with a press of a button. Researchers at the Washington University School of Medicine, St. Louis, and University of Illinois, Urbana-Champaign, created a remote controlled, next-generation tissue implant that allows neuroscientists to inject drugs and shine lights on neurons deep inside the brains of mice. The revolutionary device is described online in the journal Cell (“Wireless Optofluidic Systems for Programmable In Vivo Pharmacology and Optogenetics”). Its development was partially funded by the [US] National Institutes of Health [NIH].

The researchers have made an image/illustration of the probe available,

Caption: Mind Bending Probe. Scientists used soft materials to create a brain implant a tenth the width of a human hair that can wirelessly control neurons with lights and drugs. Credit: Courtesy of Jeong lab, University of Colorado Boulder.

A July 16, 2015 US NIH National Institute of Neurological Disorders and Stroke news release, which originated the news item, describes the study and notes that instructions for building the implant are included in the published study,

“It unplugs a world of possibilities for scientists to learn how brain circuits work in a more natural setting,” said Michael R. Bruchas, Ph.D., associate professor of anesthesiology and neurobiology at Washington University School of Medicine and a senior author of the study.

The Bruchas lab studies circuits that control a variety of disorders including stress, depression, addiction, and pain. Typically, scientists who study these circuits have to choose between injecting drugs through bulky metal tubes and delivering lights through fiber optic cables. Both options require surgery that can damage parts of the brain and introduce experimental conditions that hinder animals’ natural movements.

To address these issues, Jae-Woong Jeong, Ph.D., a bioengineer formerly at the University of Illinois at Urbana-Champaign, worked with Jordan G. McCall, Ph.D., a graduate student in the Bruchas lab, to construct a remote controlled, optofluidic implant. The device is made out of soft materials that are a tenth the diameter of a human hair and can simultaneously deliver drugs and lights.

“We used powerful nano-manufacturing strategies to fabricate an implant that lets us penetrate deep inside the brain with minimal damage,” said John A. Rogers, Ph.D., professor of materials science and engineering, University of Illinois at Urbana-Champaign and a senior author. “Ultra-miniaturized devices like this have tremendous potential for science and medicine.”

With a thickness of 80 micrometers and a width of 500 micrometers, the optofluidic implant is thinner than the metal tubes, or cannulas, scientists typically use to inject drugs. When the scientists compared the implant with a typical cannula they found that the implant damaged and displaced much less brain tissue.

The scientists tested the device’s drug delivery potential by surgically placing it into the brains of mice. In some experiments, they showed that they could precisely map circuits by using the implant to inject viruses that label cells with genetic dyes. In other experiments, they made mice walk in circles by injecting a drug that mimics morphine into the ventral tegmental area (VTA), a region that controls motivation and addiction.

The researchers also tested the device’s combined light and drug delivery potential when they made mice that have light-sensitive VTA neurons stay on one side of a cage by commanding the implant to shine laser pulses on the cells. The mice lost the preference when the scientists directed the device to simultaneously inject a drug that blocks neuronal communication. In all of the experiments, the mice were about three feet away from the command antenna.

“This is the kind of revolutionary tool development that neuroscientists need to map out brain circuit activity,” said James Gnadt, Ph.D., program director at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS).  “It’s in line with the goals of the NIH’s BRAIN Initiative.”

The researchers fabricated the implant using semiconductor computer chip manufacturing techniques. It has room for up to four drugs and has four microscale inorganic light-emitting diodes. They installed an expandable material at the bottom of the drug reservoirs to control delivery. When the temperature on an electric heater beneath a reservoir rose, the bottom rapidly expanded and pushed the drug out into the brain.
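To make that division of labour concrete, here is a toy sketch of the control logic the release describes: four drug reservoirs, each emptied by heating the expandable layer beneath it (treated here as single-use per fill, an assumption), plus four microscale LEDs for light delivery. All names are hypothetical; the real implant is driven wirelessly by dedicated hardware, not by Python.

```python
# Hypothetical toy model of the implant's command interface.
from dataclasses import dataclass, field

@dataclass
class OptofluidicImplant:
    reservoirs_full: list = field(default_factory=lambda: [True] * 4)  # four drug reservoirs
    led_on: list = field(default_factory=lambda: [False] * 4)          # four micro-LEDs

    def inject(self, reservoir: int) -> None:
        """Heat the expandable layer under one reservoir, pushing its drug out."""
        if not self.reservoirs_full[reservoir]:
            raise RuntimeError("reservoir already emptied (assumed single-use per fill)")
        self.reservoirs_full[reservoir] = False

    def pulse_led(self, channel: int, duration_ms: int) -> None:
        """Drive one micro-LED for a light pulse of the given duration."""
        self.led_on[channel] = True
        # ... real hardware would hold the LED on for duration_ms here ...
        self.led_on[channel] = False

# Example: a remote command might inject drug 0 and pulse LED 2.
implant = OptofluidicImplant()
implant.inject(0)
implant.pulse_led(2, duration_ms=20)
```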

“We tried at least 30 different prototypes before one finally worked,” said Dr. McCall.

“This was truly an interdisciplinary effort,” said Dr. Jeong, who is now an assistant professor of electrical, computer, and energy engineering at University of Colorado Boulder. “We tried to engineer the implant to meet some of neuroscience’s greatest unmet needs.”

In the study, the scientists provide detailed instructions for manufacturing the implant.

“A tool is only good if it’s used,” said Dr. Bruchas. “We believe an open, crowdsourcing approach to neuroscience is a great way to understand normal and healthy brain circuitry.”

Here’s a link to and a citation for the paper,

Wireless Optofluidic Systems for Programmable In Vivo Pharmacology and Optogenetics by Jae-Woong Jeong, Jordan G. McCall, Gunchul Shin, Yihui Zhang, Ream Al-Hasani, Minku Kim, Shuo Li, Joo Yong Sim, Kyung-In Jang, Yan Shi, Daniel Y. Hong, Yuhao Liu, Gavin P. Schmitz, Li Xia, Zhubin He, Paul Gamble, Wilson Z. Ray, Yonggang Huang, Michael R. Bruchas, and John A. Rogers.  Cell, July 16, 2015. DOI: 10.1016/j.cell.2015.06.058

This paper is behind a paywall.

I last wrote about wireless activation of neurons in a May 28, 2014 posting which featured research at the University of Massachusetts Medical School.