
Art and science: Salk Institute and Los Angeles County Museum of Art (LACMA) to study museum visitor behaviour

The Salk Institute wouldn’t have been my first guess for the science partner in this art and science project, which will be examining museum visitor behaviour. From the September 28, 2022 Salk Institute news release (also on EurekAlert and a copy received via email) announcing the project grant,

Clay vessels of innumerable shapes and sizes come to life as they illuminate a rich history of symbolic meanings and identity. Some museum visitors may lean in to get a better view, while others converse with their friends over the rich hues. Exhibition designers have long wondered how the human brain senses, perceives, and learns in the rich environment of a museum gallery.

In a synthesis of science and art, Salk scientists have teamed up with curators and design experts at the Los Angeles County Museum of Art (LACMA) to study how nearly 100,000 museum visitors respond to exhibition design. The goal of the project, funded by a $900,000 grant from the National Science Foundation, is to better understand how people perceive, make choices in, interact with, and learn from a complex environment, and to further enhance the educational mission of museums through evidence-based design strategies.   

The Salk team is led by Professor Thomas Albright, Salk Fellow Talmo Pereira, and Staff Scientist Sergei Gepshtein.

The experimental exhibition at LACMA—called “Conversing in Clay: Ceramics from the LACMA Collection”—is open until May 21, 2023.

“LACMA is one of the world’s greatest art museums, so it is wonderful to be able to combine its expertise with our knowledge of brain function and behavior,” says Albright, director of Salk’s Vision Center Laboratory and Conrad T. Prebys Chair in Vision Research. “The beauty of this project is that it extends our laboratory research on perception, memory, and decision-making into the real world.”

Albright and Gepshtein study the visual system and how it informs decisions and behaviors. A major focus of their work is uncovering how perception guides movement in space. Pereira’s expertise lies in measuring and quantifying behaviors. He invented a deep learning technique called SLEAP [Social LEAP Estimates Animal Poses], which precisely captures the movements of organisms, from single cells to whales, using conventional videography. This technology has enabled scientists to describe behaviors with unprecedented precision.
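(For readers who'd like a sense of what SLEAP-style tracking looks like in practice, here is a minimal sketch using SLEAP's high-level Python API as I understand it from the project's documentation at sleap.ai; the video file and model folder names are hypothetical, and this is my own illustration rather than anything from the Salk/LACMA study.)

    # Minimal sketch of video-based pose tracking with SLEAP (sleap.ai).
    # Assumes SLEAP is installed and a trained model is available;
    # "gallery_cam1.mp4" and "models/visitor_pose" are hypothetical names.
    import sleap

    video = sleap.load_video("gallery_cam1.mp4")           # conventional videography
    predictor = sleap.load_model("models/visitor_pose")    # previously trained SLEAP model
    labels = predictor.predict(video)                       # pose estimates for every frame

    # Each labeled frame holds the tracked instances found in that frame;
    # instance.numpy() returns an array of (x, y) body-part coordinates.
    for labeled_frame in labels:
        for instance in labeled_frame.instances:
            points = instance.numpy()
            print(labeled_frame.frame_idx, points.shape)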

For this project, the scientists have placed 10 video cameras throughout a LACMA gallery. The researchers will record how the museum environment shapes behaviors as visitors move through the space, including preferred viewing locations, paths and rates of movement, postures, social interactions, gestures, and expressions. Those behaviors will, in turn, provide novel insights into the underlying perceptual and cognitive processes that guide our engagement with works of art. The scientists will also test strategic modifications to gallery design to produce the most rewarding experience.

“We plan to capture every behavior that every person does while visiting the exhibit,” Pereira says. “For example, how long they stand in front of an object, whether they’re talking to a friend or even scratching their head. Then we can use this data to predict how the visitor will act next, such as if they will visit another object in the exhibit or if they leave instead.”
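The release doesn't say how that prediction will be made. The simplest way to 'predict what a visitor does next' from logged behaviours is a first-order Markov model over behaviour labels; the sketch below uses made-up labels and is purely illustrative, not the team's method.

    from collections import Counter, defaultdict

    # Hypothetical behaviour sequences, one per visitor, as might be distilled
    # from tracking data (the labels are made up for illustration).
    sequences = [
        ["enter", "view_object", "talk", "view_object", "leave"],
        ["enter", "view_object", "view_object", "leave"],
        ["enter", "talk", "view_object", "leave"],
    ]

    # Count transitions between consecutive behaviours.
    transitions = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            transitions[current][nxt] += 1

    def predict_next(behaviour):
        """Return the most frequently observed behaviour following `behaviour`."""
        counts = transitions[behaviour]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("view_object"))  # -> 'leave' with these toy sequences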

Results from the study will help inform future exhibit design and visitor experience and provide an unprecedented quantitative look at how human systems for perception and memory lead to predictable decisions and actions in a rich sensory environment.

“As a museum that has a long history of melding art with science and technology, we are thrilled to partner with the Salk Institute for this study,” says Michael Govan, LACMA CEO and Wallis Annenberg director. “LACMA is always striving to create accessible, engaging gallery environments for all visitors. We look forward to applying what we learn to our approach to gallery design and to enhance visitor experience.” 

Next, the scientists plan to employ this experimental approach to gain a better understanding of how the design of environments for people with specific needs, like school-age children or patients with dementia, might improve cognitive processes and behaviors.

Several members of the research team are also members of the Academy of Neuroscience for Architecture, which seeks to promote and advance knowledge that links neuroscience research to a growing understanding of human responses to the built environment.

Gepshtein is also a member of Salk’s Center for the Neurobiology of Vision and director of the Collaboratory for Adaptive Sensory Technologies. Additionally, he serves as the director of the Center for Spatial Perception & Concrete Experience at the University of Southern California.

About the Los Angeles County Museum of Art:

LACMA is the largest art museum in the western United States, with a collection of more than 149,000 objects that illuminate 6,000 years of artistic expression across the globe. Committed to showcasing a multitude of art histories, LACMA exhibits and interprets works of art from new and unexpected points of view that are informed by the region’s rich cultural heritage and diverse population. LACMA’s spirit of experimentation is reflected in its work with artists, technologists, and thought leaders as well as in its regional, national, and global partnerships to share collections and programs, create pioneering initiatives, and engage new audiences.

About the Salk Institute for Biological Studies:

Every cure has a starting point. The Salk Institute embodies Jonas Salk’s mission to dare to make dreams into reality. Its internationally renowned and award-winning scientists explore the very foundations of life, seeking new understandings in neuroscience, genetics, immunology, plant biology, and more. The Institute is an independent nonprofit organization and architectural landmark: small by choice, intimate by nature, and fearless in the face of any challenge. Be it cancer or Alzheimer’s, aging or diabetes, Salk is where cures begin. Learn more at: salk.edu.

I find this image quite intriguing,

Caption: Motion capture technology is used to classify human behavior in an art exhibition. Credit: Salk Institute

I’m trying to figure out how they’ll do this. Will each visitor be ‘tagged’ as they enter the LACMA gallery so they can be ‘followed’ individually as they respond (or don’t respond) to the exhibits? Will they be notified that they are participating in a study?

I was tracked without my knowledge or consent at the Vancouver (Canada) Art Gallery’s (VAG) exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” (March 5, 2022 – October 23, 2022). It was disconcerting to find out that my ‘tracks’ had become part of a real-time installation. (The result of my trip to the VAG was a two-part commentary: “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver [Canada] Art Gallery [1 of 2]: The Objects” and “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver [Canada] Art Gallery [2 of 2]: Meditations”. My response to the experience can be found under the ‘Eeek’ subhead of part 2: Meditations. For the curious, part 1: The Objects is here.)

Internet of living things (IoLT)?

It’s not here yet, but there are scientists working on an internet of living things (IoLT). There are some details (see the fourth paragraph from the bottom of the news release excerpt) about how an IoLT would be achieved, but it seems these are early days. From a September 9, 2021 University of Illinois news release (also on EurekAlert), Note: Links have been removed,

The National Science Foundation (NSF) announced today an investment of $25 million to launch the Center for Research on Programmable Plant Systems (CROPPS). The center, a partnership among the University of Illinois at Urbana-Champaign, Cornell University, the Boyce Thompson Institute, and the University of Arizona, aims to develop tools to listen and talk to plants and their associated organisms.

“CROPPS will create systems where plants communicate their hidden biology to sensors, optimizing plant growth to the local environment. This Internet of Living Things (IoLT) will enable breakthrough discoveries, offer new educational opportunities, and open transformative opportunities for productive, sustainable, and profitable management of crops,” says Steve Moose (BSD/CABBI/GEGC), the grant’s principal investigator at Illinois. Moose is a genomics professor in the Department of Crop Sciences, part of the College of Agricultural, Consumer and Environmental Sciences (ACES). 

As an example of what’s possible, CROPPS scientists could deploy armies of autonomous rovers to monitor and modify crop growth in real time. The researchers created leaf sensors to report on belowground processes in roots. This combination of machine and living sensors will enable completely new ways of decoding the language of plants, allowing researchers to teach plants how to better handle environmental challenges. 

“Right now, we’re working to program a circuit that responds to low-nitrogen stress, where the plant growth rate is ‘slowed down’ to give farmers more time to apply fertilizer during the window that is the most efficient at increasing yield,” Moose explains.

With 150+ years of global leadership in crop sciences and agricultural engineering, along with newer transdisciplinary research units such as the National Center for Supercomputing Applications (NCSA) and the Center for Digital Agriculture (CDA), Illinois is uniquely positioned to take on the technical challenges associated with CROPPS.

But U of I scientists aren’t working alone. For years, they’ve collaborated with partner institutions to conceptualize the future of digital agriculture and bring it into reality. For example, researchers at Illinois’ CDA and Cornell’s Initiative for Digital Agriculture jointly proposed the first IoLT for agriculture, laying the foundation for CROPPS.

“CROPPS represents a significant win from having worked closely with our partners at Cornell and other institutions. We’re thrilled to move forward with our colleagues to shift paradigms in agriculture,” says Vikram Adve, Donald B. Gillies Professor in computer science at Illinois and co-director of the CDA.

CROPPS research may sound futuristic, and that’s the point.

The researchers say new tools are needed to make crops productive, flexible, and sustainable enough to feed our growing global population under a changing climate. Many of the tools under development – biotransducers small enough to fit between soil particles, dexterous and highly autonomous field robots, field-applied gene editing nanoparticles, IoLT clouds, and more – have been studied in the proof-of-concept phase, and are ready to be scaled up.

“One of the most exciting goals of CROPPS is to apply recent advances in sensing and data analytics to understand the rules of life, where plants have much to teach us. What we learn will bring a stronger biological dimension to the next phase of digital agriculture,” Moose says. 

CROPPS will also foster innovations in STEM [science, technology, engineering, and mathematics] education through programs that involve students at all levels, and each partner institution will share courses in digital agriculture topics. CROPPS also aims to engage professionals in digital agriculture at any career stage, and learn how the public views innovations in this emerging technology area.

“Along with cutting-edge research, CROPPS coordinated educational programs will address the future of work in plant sciences and agriculture,” says Germán Bollero, associate dean for research in the College of ACES.

I look forward to hearing more about IoLT.

Filmmaking beetles wearing teeny, tiny wireless cameras

Researchers at the University of Washington have developed a tiny camera that can ride aboard an insect. Here a Pinacate beetle explores the UW campus with the camera on its back. Credit: Mark Stone/University of Washington

Scientists at the University of Washington have created a removable wireless camera backpack for beetles and for tiny robots resembling beetles. Later in this post, I’m embedding a video shot by a beetle; near the end, you’ll find a citation and link for the paper, along with links to my other posts on insects and technology.

As for the latest on insects and technology, there’s a July 15, 2020 news item on ScienceDaily,

In the movie “Ant-Man,” the title character can shrink in size and travel by soaring on the back of an insect. Now researchers at the University of Washington have developed a tiny wireless steerable camera that can also ride aboard an insect, giving everyone a chance to see an Ant-Man view of the world.

The camera, which streams video to a smartphone at 1 to 5 frames per second, sits on a mechanical arm that can pivot 60 degrees. This allows a viewer to capture a high-resolution, panoramic shot or track a moving object while expending a minimal amount of energy. To demonstrate the versatility of this system, which weighs about 250 milligrams — about one-tenth the weight of a playing card — the team mounted it on top of live beetles and insect-sized robots.

A July 15, 2020 University of Washington news release (also on EurekAlert), which originated the news item, provides more technical detail (although I still have a few questions) about the work,

“We have created a low-power, low-weight, wireless camera system that can capture a first-person view of what’s happening from an actual live insect or create vision for small robots,” said senior author Shyam Gollakota, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering. “Vision is so important for communication and for navigation, but it’s extremely challenging to do it at such a small scale. As a result, prior to our work, wireless vision has not been possible for small robots or insects.”

Typical small cameras, such as those used in smartphones, use a lot of power to capture wide-angle, high-resolution photos, and that doesn’t work at the insect scale. While the cameras themselves are lightweight, the batteries they need to support them make the overall system too big and heavy for insects — or insect-sized robots — to lug around. So the team took a lesson from biology.

“Similar to cameras, vision in animals requires a lot of power,” said co-author Sawyer Fuller, a UW assistant professor of mechanical engineering. “It’s less of a big deal in larger creatures like humans, but flies are using 10 to 20% of their resting energy just to power their brains, most of which is devoted to visual processing. To help cut the cost, some flies have a small, high-resolution region of their compound eyes. They turn their heads to steer where they want to see with extra clarity, such as for chasing prey or a mate. This saves power over having high resolution over their entire visual field.”

To mimic an animal’s vision, the researchers used a tiny, ultra-low-power black-and-white camera that can sweep across a field of view with the help of a mechanical arm. The arm moves when the team applies a high voltage, which makes the material bend and move the camera to the desired position. Unless the team applies more power, the arm stays at that angle for about a minute before relaxing back to its original position. This is similar to how people can keep their head turned in one direction for only a short period of time before returning to a more neutral position.

“One advantage to being able to move the camera is that you can get a wide-angle view of what’s happening without consuming a huge amount of power,” said co-lead author Vikram Iyer, a UW doctoral student in electrical and computer engineering. “We can track a moving object without having to spend the energy to move a whole robot. These images are also at a higher resolution than if we used a wide-angle lens, which would create an image with the same number of pixels divided up over a much larger area.”

The camera and arm are controlled via Bluetooth from a smartphone from a distance up to 120 meters away, just a little longer than a football field.

The researchers attached their removable system to the backs of two different types of beetles — a death-feigning beetle and a Pinacate beetle. Similar beetles have been known to be able to carry loads heavier than half a gram, the researchers said.

“We made sure the beetles could still move properly when they were carrying our system,” said co-lead author Ali Najafi, a UW doctoral student in electrical and computer engineering. “They were able to navigate freely across gravel, up a slope and even climb trees.”

The beetles also lived for at least a year after the experiment ended. [emphasis mine]

“We added a small accelerometer to our system to be able to detect when the beetle moves. Then it only captures images during that time,” Iyer said. “If the camera is just continuously streaming without this accelerometer, we could record one to two hours before the battery died. With the accelerometer, we could record for six hours or more, depending on the beetle’s activity level.”
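In other words, the capture loop is gated on motion: frames are only taken while the accelerometer says the beetle is moving, and the camera idles otherwise to save the battery. Here's a rough sketch of that logic; the sensor and camera functions are hypothetical placeholders, not the UW team's firmware.

    import random
    import time

    MOTION_THRESHOLD = 0.05  # in g; a hypothetical "the beetle is moving" cutoff

    def read_acceleration():
        # Stand-in for reading the real accelerometer (hypothetical hardware call).
        return random.uniform(0.0, 0.1)

    def capture_and_stream_frame():
        # Stand-in for grabbing a frame and streaming it over Bluetooth (hypothetical).
        print("frame captured")

    for _ in range(20):  # short demo loop instead of an endless firmware loop
        if read_acceleration() > MOTION_THRESHOLD:
            capture_and_stream_frame()  # beetle moving: spend power on a frame
        else:
            time.sleep(0.1)             # beetle still: skip capture to save battery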

The researchers also used their camera system to design the world’s smallest terrestrial, power-autonomous robot with wireless vision. This insect-sized robot uses vibrations to move and consumes almost the same power as low-power Bluetooth radios need to operate.

The team found, however, that the vibrations shook the camera and produced distorted images. The researchers solved this issue by having the robot stop momentarily, take a picture and then resume its journey. With this strategy, the system was still able to move about 2 to 3 centimeters per second — faster than any other tiny robot that uses vibrations to move — and had a battery life of about 90 minutes.
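That workaround is essentially a stop-shoot-go loop: switch the vibration actuator off, take the frame while the robot is stationary, then resume moving. Here's a hedged sketch of the control flow with hypothetical hardware calls; it is not the actual robot firmware.

    import time

    def set_vibration(on):
        # Stand-in for switching the vibration actuator on or off (hypothetical).
        print("vibration", "on" if on else "off")

    def capture_frame():
        # Stand-in for taking a single still image (hypothetical).
        print("frame captured")

    def step_with_photo(move_seconds=1.0, settle_seconds=0.1):
        """Move, then pause briefly so the camera isn't shaken during capture."""
        set_vibration(True)
        time.sleep(move_seconds)    # travel at roughly 2 to 3 cm/s while vibrating
        set_vibration(False)
        time.sleep(settle_seconds)  # let the chassis settle to avoid blurred images
        capture_frame()

    for _ in range(3):
        step_with_photo()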

While the team is excited about the potential for lightweight and low-power mobile cameras, the researchers acknowledge that this technology comes with a new set of privacy risks.

“As researchers we strongly believe that it’s really important to put things in the public domain so people are aware of the risks and so people can start coming up with solutions to address them,” Gollakota said.

Applications could range from biology to exploring novel environments, the researchers said. The team hopes that future versions of the camera will require even less power and be battery free, potentially solar-powered.

“This is the first time that we’ve had a first-person view from the back of a beetle while it’s walking around. There are so many questions you could explore, such as how does the beetle respond to different stimuli that it sees in the environment?” Iyer said. “But also, insects can traverse rocky environments, which is really challenging for robots to do at this scale. So this system can also help us out by letting us see or collect samples from hard-to-navigate spaces.”

###

Johannes James, a UW mechanical engineering doctoral student, is also a co-author on this paper. This research was funded by a Microsoft fellowship and the National Science Foundation.

I’m surprised there’s no funding from a military agency, as military and covert-operation applications seem like an obvious fit. In any event, here’s a link to and a citation for the paper,

Wireless steerable vision for live insects and insect-scale robots by Vikram Iyer, Ali Najafi, Johannes James, Sawyer Fuller, and Shyamnath Gollakota. Science Robotics 15 Jul 2020: Vol. 5, Issue 44, eabb0839 DOI: 10.1126/scirobotics.abb0839

This paper is behind a paywall.

Video and links

As promised, here’s the video the scientists have released,

These posts feature some fairly ruthless uses of the insects.

  1. The first mention of insects and technology here is in a July 27, 2009 posting titled: Nanotechnology enables robots and human enhancement: part 4. The mention is in the second to last paragraph of the post.
  2. A November 23, 2011 post titled: Cyborg insects and trust,
  3. A January 9, 2012 post titled: Controlling cyborg insects,
  4. A June 26, 2013 post titled: Steering cockroaches in the lab and in your backyard—cutting edge neuroscience, and, finally,
  5. An April 11, 2014 post titled: Computerized cockroaches as precursors to new healing techniques.

As for my questions (how do you put the backpacks on the beetles? is there a strap, is it glue, is it something else? how heavy is the backpack and camera? how old are the beetles you use for this experiment? where did you get the beetles from? do you have your own beetle farm where you breed them?), I’ll see if I can get some answers.