Tag Archives: Stanford University

15th-century Inca building constructed for sound

Carpa uasi. The carpa uasi was the bottom level of this building; it originally ended to the left of the arch (near the right side of the floor level). The 15th-century structure survived because the church built over and around it lent stability. Credit: Stella Nair. Courtesy: University of California at Los Angeles (UCLA)

This October 21, 2025 University of California at Los Angeles (UCLA) news release by Sean Brenner tells a fascinating story about sound and architecture, Note: Links have been removed,

Key takeaways

  • UCLA art history professor Stella Nair is collaborating with an interdisciplinary team analyzing a unique Inca building that dates to the mid-15th century.
  • The building, in the remote town of Huaytará, Peru, appears to have been constructed specifically for the purpose of amplifying music and sound, with three walls and an opening at one end.
  • The study is important in part because scholars tend to focus on visual evidence when analyzing cultures of the past, but understanding the role of sound can create a more three-dimensional picture.

The Inca empire is renowned for its architecture; its buildings were intricately designed and extraordinarily durable.

But this summer, it was another aspect of Inca construction that captured the attention of Stella Nair, a UCLA associate professor of art history whose expertise is Indigenous arts and architecture of the Americas.

Nair spent three weeks in the remote town of Huaytará, Peru, studying a single Inca building that appears to have been created primarily to amplify sound and music. Known as a carpa uasi, the structure was likely built in the mid-15th century.

“We’re learning that sound was incredibly important from the earliest cities on, dating back several thousand years B.C.,” said Nair, who is working on her third book about Andean (in and around the Andes mountains) architecture. “Builders were incredibly sophisticated with their aural architecture, and the Incas are one part of this long, sophisticated tradition of sonic engineering.”

One of a kind

Nair said the structure is the only known carpa uasi in existence, and although scholars have known about it for many years, the building hasn’t been extensively researched — and no previous studies had identified its potential for amplifying sound.

One of its distinctive characteristics is that, because of its intended use, the carpa uasi was built with only three walls, with an opening at one of the gable ends. (The phrase carpa uasi means “tent house,” a reference to that open-ended structure.) Nair and her colleagues theorize that the design would have made it possible for sound — such as drums being used to announce the beginning or end of a battle — to be focused toward the building’s open end and then out to the surrounding environment.

“Many people look at Inca architecture and are impressed with the stonework, but that’s just the tip of the iceberg,” Nair said. “They were also concerned with the ephemeral, temporary and impermanent, and sound was one of those things.

“Sound was deeply valued and an incredibly important part of Andean and Inca architecture — so much so that the builders allowed some instability in this structure just because of its acoustic potential.” [emphasis mine]

The partially open structure would have made such buildings significantly less stable than most other Inca buildings. Ironically, Nair said, this carpa uasi has survived for centuries because, perhaps at the direction of Spanish settlers, a church was later built on top of it, stabilizing the structure below.

Nair is collaborating on the project with a team of acoustic experts led by Stanford University music professor Jonathan Berger. Nair primarily studied the carpa uasi’s architecture, taking measurements and making drawings and photographs. Next, she will use hand drawings and 3-D modeling to determine what the roof may have looked like and how the building’s overall form influenced its function. Together, the researchers expect to produce a model for how sound would have traveled through and outside the building.

Toward a more complete understanding

“We’re exploring the possibility that the carpa uasi may have amplified low-frequency sounds, such as drumming, with minimal reverberation,” Nair said. “With this research, for the first time, we’ll be able to tell what the Incas valued sonically in this building.”

Investigating the sonic properties of a 600-year-old building in the Andes is much more than an academic exercise for Nair and her collaborators — and not only because it is the only surviving example of its kind.

“Sound studies are really critical, because we tend to emphasize the visual in how we understand the world around us, including our past,” Nair said. “But that’s not how we experience life — all of our senses are critical. So how we understand ourselves and our history changes if you put sound back into the conversation.”

Nair said the project reflects the importance of collaboration across disciplines, institutions and borders. The American scholars also benefited from the cooperation of partners in Peru, including the priest who oversees the Church of San Juan Bautista, the building whose architecture incorporates the carpa uasi, and a local archaeologist.

Nair’s work was funded in part by a grant from the UCLA College Division of Humanities; Berger received funding from the Templeton Religion Trust.

Ella Feldman’s October 30, 2025 article for the Smithsonian magazine enhances the ‘sound’ story with a few more details about the Inca empire. There’s also more about Stella Nair and her work on her UCLA bio webpage.

A couple of proposed solutions to AI’s insatiable need for power?

I have two stories about research into making artificial intelligence (AI) less wasteful of power. One is from the International Society for Optics and Photonics (SPIE) and the other from the Politecnico di Milano (Polytechnic of Milan).

International Society for Optics and Photonics (SPIE)

A September 9, 2025 news item on ScienceDaily announced a more energy efficient AI chip,

Artificial intelligence (AI) systems are increasingly central to technology, powering everything from facial recognition to language translation. But as AI models grow more complex, they consume vast amounts of electricity — posing challenges for energy efficiency and sustainability. A new chip developed by researchers at the University of Florida could help address this issue by using light, rather than just electricity, to perform one of AI’s most power-hungry tasks. Their research is reported in Advanced Photonics.

A September 8, 2025 SPIE (International Society for Optics and Photonics) press release, which originated the news item, provides more detail about the work, Note: Links have been removed,

The chip is designed to carry out convolution operations, a core function in machine learning that enables AI systems to detect patterns in images, video, and text. These operations typically require significant computing power. By integrating optical components directly onto a silicon chip, the researchers have created a system that performs convolutions using laser light and microscopic lenses—dramatically reducing energy consumption and speeding up processing.

“Performing a key machine learning computation at near zero energy is a leap forward for future AI systems,” said study leader Volker J. Sorger, the Rhines Endowed Professor in Semiconductor Photonics at the University of Florida. “This is critical to keep scaling up AI capabilities in years to come.”

In tests, the prototype chip classified handwritten digits with about 98 percent accuracy, comparable to traditional electronic chips. The system uses two sets of miniature Fresnel lenses—flat, ultrathin versions of the lenses found in lighthouses—fabricated using standard semiconductor manufacturing techniques. These lenses are narrower than a human hair and are etched directly onto the chip.

To perform a convolution, machine learning data is first converted into laser light on the chip. The light passes through the Fresnel lenses, which carry out the mathematical transformation. The result is then converted back into a digital signal to complete the AI task.
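For the technically curious: the “mathematical transformation” the lenses perform is a Fourier transform, and the chip’s shortcut rests on a classical identity, the convolution theorem, which says that convolution in ordinary space becomes simple pointwise multiplication in Fourier space. Here’s a tiny Python sketch illustrating the identity (my own illustration of the principle, not the researchers’ code),

```python
import numpy as np

# Convolution theorem: conv(a, b) == IFFT(FFT(a) * FFT(b)).
# A lens performs the Fourier transform optically; here we
# verify the underlying identity digitally.
rng = np.random.default_rng(0)
a = rng.random(64)   # e.g. a row of image data
b = rng.random(64)   # e.g. a convolution kernel

# Direct circular convolution, O(n^2) multiply-adds.
direct = np.array([sum(a[j] * b[(i - j) % 64] for j in range(64))
                   for i in range(64)])

# Fourier-space route: transform, multiply pointwise, transform
# back. Digitally this is O(n log n); optically the transforms
# come nearly free as the light propagates through the lenses.
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

print(np.allclose(direct, via_fft))  # → True
```

The chip's claimed energy savings come from doing the two transforms in glass rather than in transistors; only the pointwise step and the conversions at the edges remain electronic.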

“This is the first time anyone has put this type of optical computation on a chip and applied it to an AI neural network,” said Hangbo Yang, a research associate professor in Sorger’s group at UF and co-author of the study.

The team also demonstrated that the chip could process multiple data streams simultaneously by using lasers of different colors—a technique known as wavelength multiplexing. “We can have multiple wavelengths, or colors, of light passing through the lens at the same time,” Yang said. “That’s a key advantage of photonics.”
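Wavelength multiplexing is the optical cousin of frequency-division multiplexing: several channels ride on different carrier frequencies through one medium yet remain cleanly separable. A toy Python illustration of why that works (mine, not the researchers’),

```python
import numpy as np

# Two data channels on distinct "colors" (carrier frequencies)
# share one medium, and a spectral analysis pulls them apart.
n = 1024
t = np.arange(n)
ch1 = np.cos(2 * np.pi * 50 * t / n)    # channel on carrier bin 50
ch2 = np.cos(2 * np.pi * 120 * t / n)   # channel on carrier bin 120
combined = ch1 + ch2                    # both in one "waveguide"

spectrum = np.abs(np.fft.rfft(combined))
# The two dominant spectral peaks recover the two carriers.
peaks = sorted(int(i) for i in np.argsort(spectrum)[-2:])
print(peaks)  # → [50, 120]
```

On the photonic chip the separation is done by the optics themselves rather than by an FFT, but the reason multiple colors can compute in parallel without interfering is the same.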

The research was conducted in collaboration with the Florida Semiconductor Institute, UCLA [University of California at Los Angeles], and George Washington University. Sorger noted that chip manufacturers such as NVIDIA already use optical elements in some parts of their AI systems, which could make it easier to integrate this new technology.

“In the near future, chip-based optics will become a key part of every AI chip we use daily,” Sorger said. “And optical AI computing is next.”

There’s also a September 8, 2025 University of Florida news release (also on EurekAlert), which is similar to the one issued by SPIE.

The paper has been published on two different sites; the citation below is the same for both, and each site hosts the paper,

Near-energy-free photonic Fourier transformation for convolution operation acceleration by Hangbo Yang, Nicola Peserico, Shurui Li, Xiaoxuan Ma, Russell L. T. Schwartz, Mostafa Hosseini, Aydin Babakhani, Chee Wei Wong, Puneet Gupta, and Volker J. Sorger. Advanced Photonics Vol. 7, Issue 5, 056007 (2025) DOI: 10.1117/1.AP.7.5.056007 (available via the SPIE Digital Library or the Advanced Photonics journal site)

Both sites offer open access to the paper.

Politecnico di Milano (Polytechnic of Milan)

Caption: The photonic microchip (below) developed for the study on physical neural networks, along with the electronic chip (above, the yellow one) of control. Credit: Politecnico di Milano, DEIB – Department of Electronics, Information and Bioengineering

A September 12, 2025 Politecnico di Milano (Polytechnic of Milan) press release (also on EurekAlert but published September 9, 2025) announces work into a more energy efficient way to train artificial intelligence, specifically physical neural networks,

Artificial intelligence is now part of our daily lives, with the subsequent pressing need for larger, more complex models. However, the demand for ever-increasing power and computing capacity is rising faster than the performance traditional computers can provide.

To overcome these limitations, research is moving towards innovative technologies such as physical neural networks, analogue circuits that directly exploit the laws of physics (properties of light beams, quantum phenomena) to process information. Their potential is at the heart of the study published in the prestigious journal Nature. It is the outcome of collaboration between several international institutes, including the Politecnico di Milano, the École Polytechnique Fédérale de Lausanne, Stanford University, the University of Cambridge, and the Max Planck Institute.

The article entitled “Training of Physical Neural Networks” discusses the steps of research on training physical neural networks, carried out with the collaboration of Francesco Morichetti, professor at DEIB – Department of Electronics, Information and Bioengineering, and head of the university’s Photonic Devices Lab.

Politecnico di Milano contributed to this study by developing photonic chips for the creation of neural networks, exploiting integrated photonic technologies. Mathematical operations, such as sums and multiplications, can now be performed through light interference mechanisms on silicon microchips barely a few square millimetres in size.

“By eliminating the operations required for the digitisation of information, our photonic chips allow calculations to be carried out with a significant reduction in both energy consumption and processing time,” says Francesco Morichetti. A step forward to make artificial intelligence (which relies on extremely energy-intensive data centres) more sustainable.

The study published in Nature addresses the theme of training, namely the phase in which the network learns to perform certain tasks. “With our research within the Department of Electronics, Information and Bioengineering, we have helped develop an ‘in-situ’ training technique for photonic neural networks, i.e. without going through digital models. The procedure is carried out entirely using light signals. Hence, network training will not only be faster, but also more robust and efficient,” adds Morichetti.
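For readers wondering what ‘training without digital models’ can look like in principle: one broad family of in-situ approaches treats the physical device as a black box that can only be measured, and nudges its tunable parameters based on those measurements alone. Here’s a toy, zeroth-order Python sketch of that general idea (my own illustration of the family, not the specific techniques surveyed in the Nature paper),

```python
import numpy as np

rng = np.random.default_rng(1)

# Black-box "device": training may only evaluate it, never
# inspect its internals (a stand-in for a photonic mesh).
true_w = np.array([0.7, -0.3, 0.5])
def device(params, x):
    return x @ params

X = rng.normal(size=(32, 3))
y = X @ true_w                   # target behaviour to learn

def loss(params):
    # A "measurement" of how wrong the device currently is.
    return float(np.mean((device(params, X) - y) ** 2))

# Zeroth-order, measurement-only training loop: estimate each
# gradient component by physically perturbing one parameter.
w = np.zeros(3)
eps, lr = 1e-3, 0.1
for _ in range(300):
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (loss(w + d) - loss(w - d)) / (2 * eps)
    w -= lr * g

print(loss(w) < 1e-6)  # → True: converged with no analytic gradients
```

The point of the sketch is only that a device can be trained using forward measurements alone; a real photonic implementation replaces the perturb-and-measure loop with light signals propagating through the chip.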

The use of photonic chips will allow the development of more sophisticated models for artificial intelligence, or devices capable of processing real-time data directly on site – such as autonomous cars or intelligent sensors integrated into portable devices – without requiring remote processing.

Here’s a link to and a citation for the paper,

Training of physical neural networks by Ali Momeni, Babak Rahmani, Benjamin Scellier, Logan G. Wright, Peter L. McMahon, Clara C. Wanjura, Yuhang Li, Anas Skalli, Natalia G. Berloff, Tatsuhiro Onodera, Ilker Oguz, Francesco Morichetti, Philipp del Hougne, Manuel Le Gallo, Abu Sebastian, Azalia Mirhoseini, Cheng Zhang, Danijela Marković, Daniel Brunner, Christophe Moser, Sylvain Gigan, Florian Marquardt, Aydogan Ozcan, Julie Grollier, Andrea J. Liu, Demetri Psaltis, Andrea Alù, Romain Fleury. Nature volume 645, pages 53–61 (2025) Published: 03 September 2025 DOI: https://doi.org/10.1038/s41586-025-09384-2

This paper is behind a paywall.

Using sound to sculpt light for better displays and imaging

A July 31, 2025 Stanford University news release (also on EurekAlert) describes a nanodevice that can sculpt light, Note: Links have been removed,

Light can behave in very unexpected ways when you squeeze it into small spaces. In a new paper in the journal Science, Mark Brongersma, a professor of materials science and engineering at Stanford University, and doctoral candidate Skyler Selvin describe the novel way they have used sound to manipulate light that has been confined to gaps only a few nanometers across, giving the researchers exquisite mechanical control over the color and intensity of light.

The findings could have broad implications in fields ranging from computer and virtual reality displays to 3D holographic imagery, optical communications, and even new ultrafast, light-based neural networks.

The new device is not the first to manipulate light with sound, but it is smaller and potentially more practical and powerful than conventional methods. From an engineering standpoint, acoustic waves are attractive because they can vibrate very fast, billions of times per second. Unfortunately, the atomic displacements produced by acoustic waves are extremely small – about 1,000 times smaller than the wavelength of light. Thus, acousto-optical devices have had to be larger and thicker to amplify sound’s tiny effect – too big for today’s nanoscale world.

“In optics, big equals slow,” Brongersma said. “So, this device’s small scale makes it very fast.”

Simplicity from the start

The new device is deceptively simple. A thin gold mirror is coated with an ultrathin layer of a rubbery silicone-based polymer only a few nanometers thick. The research team could fabricate the silicone layer to desired thicknesses – anywhere between 2 and 10 nanometers. For comparison, the wavelength of light is almost 500 nanometers tip to tail.

The researchers then deposit an array of 100-nanometer gold nanoparticles across the silicone. The nanoparticles float like golden beach balls on an ocean of polymer atop a mirrored sea floor. Light is gathered by the nanoparticles and mirror and focused into the silicone between – shrinking the light to the nanoscale.

To the side, they attach a special kind of ultrasound speaker – an interdigitated transducer, IDT – that sends high-frequency sound waves rippling across the film at nearly a billion times a second. The high‑frequency sound waves (surface acoustic waves, SAWs) surf along the surface of the gold mirror beneath the nanoparticles. The elastic polymer acts like a spring, stretching and compressing as the nanoparticles bob up and down as the sound waves course by.

The researchers then shine light into the system. The light gets squeezed into the oscillating gaps between the gold nanoparticles and the gold film. The gaps change in size by the mere width of a few atoms, but it is enough to produce an outsized effect on the light.

The size of the gaps determines the color of the light resonating from each nanoparticle. The researchers can control the gaps by modulating the acoustic wave and therefore control the color and intensity of each particle.

“In this narrow gap, the light is squeezed so tightly that even the smallest movement significantly affects it,” Selvin said. “We are controlling the light with lengths on the nanometer scale, where typically millimeters have been required to modulate light acoustically.”
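Selvin’s point about tight confinement rewarding tiny motions can be made with back-of-the-envelope arithmetic: if the resonance depends inversely on the gap, the same half-nanometre wiggle shifts the colour far more across a 5-nanometre gap than across a larger one. A toy Python calculation (the inverse-gap scaling and every number here are my illustrative assumptions, not figures from the paper),

```python
# Toy gap-plasmon picture: assume the resonant wavelength scales
# as lam(g) = lam_inf + k / g (illustrative only; the real
# dispersion of a nanoparticle-on-mirror cavity is more complex).
lam_inf, k = 550.0, 500.0   # nm and nm^2, assumed constants

def resonance(gap_nm):
    return lam_inf + k / gap_nm

# The same +/-0.5 nm acoustic wiggle, applied to gaps of
# different sizes: sensitivity collapses as the gap grows.
for gap in (5.0, 50.0, 500.0):
    shift = resonance(gap - 0.5) - resonance(gap + 0.5)
    print(f"gap {gap:5.1f} nm: +/-0.5 nm motion sweeps "
          f"{shift:.3f} nm of resonance")
```

Under these assumed numbers the 5 nm gap sweeps roughly 20 nm of resonance, a visible colour change, while the 500 nm gap sweeps thousandths of a nanometre, which is why conventional acousto-optic devices needed millimetres of material to accumulate a comparable effect.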

Starry, starry sky

When white light is shined from the side and the sound wave is turned on, the result is a series of flickering, multicolored nanoparticles against a black background, like stars twinkling in the night sky. Any light that does not strike a nanoparticle is bounced out of the field of view by the mirror, and only the light that is scattered by the particles is directed outward toward the human eye. Thus, the gold mirror appears black and each gold nanoparticle shines like a star.

The degree of optical modulation caught the researchers off guard. “I was rolling on the floor laughing,” Brongersma said of his reaction when Selvin showed him the results of his first experiments. “I thought it would be a very subtle effect, but I was amazed how much nanometer changes in distance can change the light scattering properties so dramatically.”

The exceptional tunability, small form factor, and efficiency of the new device could transform any number of commercial fields. One can imagine ultrathin video displays, ultra-fast optical communications based on acousto-optics’ high-frequency capabilities, or perhaps new holographic virtual reality headsets that are much smaller than the bulky displays of today, among other applications.

“When we can control the light so effectively and dynamically,” Brongersma said, “we can do everything with light that we could want – holography, beam steering, 3D displays – anything.”


Here’s a link to and a citation for the paper,

Acoustic wave modulation of gap plasmon cavities by Skyler P. Selvin, Majid Esfandyarpour, Anqi Ji, Yan Joe Lee, Colin Yule, Jung-Hwan Song, Mohammad Taghinejad and Mark L. Brongersma. Science 31 Jul 2025 Vol 389, Issue 6759 pp. 516-520 DOI: 10.1126/science.adv1728

This paper is behind a paywall.

The subhead ‘Starry, starry sky’ reminded me of Don McLean’s song ‘Vincent’ (often known as ‘Starry, Starry Night’), a lyrical tribute to Vincent van Gogh and his painting, ‘The Starry Night’. First, ‘Starry, starry sky’,

How the nanoparticles look with and without the surface acoustic wave (SAW) activation. Brongersma compared it to a starry night sky. | Selvin et al., Supplementary Movie 1 from “Acoustic wave modulation of gap plasmon cavities,” Science (2025), ©2025 AAAS; courtesy of the authors [downloaded from https://news.stanford.edu/stories/2025/07/nanoscale-device-control-light-sound-acoustic-waves-imaging-communications]

Next, ‘The Starry Night’,

By Vincent van Gogh – Google Arts & Culture — bgEuwDxel93-Pg, Public Domain, https://commons.wikimedia.org/w/index.php?curid=25498286

As for Don McLean’s song ‘Starry, Starry Night’, I leave that to you. In days gone by, I would have embedded a YouTube version of the song but the owners have turned that site into one long commercial occasionally interrupted by content.

Merry 2025 Christmas: sea slugs, the Nicholas Brothers, ethnomathematics, three new frogs (hidden gems), gifted dogs, and more

What you’re looking at is a sea slug:

Anatomy of the sacoglossan mollusc Elysia chlorotica. Sea slug consuming its obligate algal food Vaucheria litorea. Small, punctate green circles are the plastids located within the extensive digestive diverticula of the animal. By Karen N. Pelletreau et al. – http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0097477, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=38619279 [image downloaded from the Elysia chlorotica Wikipedia entry]

This is one of my favourite stories. From a November 27, 2025 article “A rare photosynthesizing sea slug has been found off N.S. Here’s why scientists are excited” by Frances Willick for the Canadian Broadcasting Corporation (CBC) news online website,

When she made the discovery that would thrill her fellow snorkellers and excite researchers across North America, she didn’t think much of it at first.

“I just thought, oh, that’s a rotten leaf, keep going,” says Elli Ofthenorth.

The avid snorkeller passed by this “black gunk” once, twice, but it wasn’t until her third pass that something caught her eye enough to take a closer look, and she realized it was a living creature.

“I just started yelling, there’s a sea slug here!”

Ofthenorth’s mother, who was on the shore at Rainbow Haven Provincial Park near Dartmouth, N.S., lit up the snorkel group chat, and within minutes, members identified it as Elysia chlorotica, or Eastern emerald elysia.

This unassuming creature could almost pass for your garden-variety slug — the kind that decimates your lettuce every summer. That is, until its crinkly-looking back unfurls a stunning, emerald green “leaf,” complete with pale “veins” branching outward from the centre.

It’s this “leaf,” and what it does for the sea slug, that holds so much promise for research in medicine, clean energy and other fields. 

But it’s so elusive that researchers are having a hard time studying it.

E. chlorotica can photosynthesize, stealing the chloroplasts — the photosynthesizing organelles — of the algae it eats, keeping them alive in its body, and using them to get energy from the sun. The sea slugs can then subsist for months at a time without consuming food.

“It’s like if I ate a whole bunch of spinach and then I just woke up this morning and I just sunbathed for an hour and then I wouldn’t need to eat for the rest of the week,” says Hunter Stevens, a biologist with the Canadian Parks and Wilderness Society’s Nova Scotia chapter. “These slugs are essentially doing the same thing.”

Elusive and ephemeral

However, this coveted slug excels at eluding researchers.

Historically, known populations have existed in the Minas Basin area of Nova Scotia and in Martha’s Vineyard in Massachusetts — and theoretically their habitat exists all along the Eastern Seaboard of the U.S. — but recent efforts to find them have been unsuccessful.

“For so long it just seemed like nobody had seen them,” says Krug [Patrick Krug, a professor of biological sciences at California State University, Los Angeles]. “It was such a shot in the dark, it wasn’t even worth going to look.”

Dylan Gagler, a PhD student at the American Museum of Natural History in New York, has searched the slug’s favoured habitat off Martha’s Vineyard repeatedly this year, but without luck so far.

When Stevens’s Instagram post about the Rainbow Haven discovery popped up in Gagler’s feed, he says he was “having like a freak-out, FOMO [fear of missing out] moment of, like, I got to get up to Nova Scotia. Like, this is clearly where all the action is.”

Gagler contacted Stevens to get information about the conditions at the Rainbow Haven location, such as the air and water temperature and the depth they were found at, in order to fine-tune his own searches. He’s also exploring the permitting process to collect specimens from Nova Scotia to raise in a lab.

Though E. chlorotica has been hard to find, Krug says there have been a few sightings in recent months, including the one in Nova Scotia, as well as in the Carolinas and in Tampa Bay, Fla.

He says the populations are “ephemeral,” seeming to go through cycles of boom and bust — sometimes abundant, but then vanishing suddenly.

The fact that the recent discovery of this thriving population was made within the bounds of a provincial park in Nova Scotia underscores how important protected areas are to biodiversity, Stevens says.

“As coastal development proliferates and continues to advance, some of these populations, we may not even know about them, and they’ll disappear,” he says. “And so these slugs will probably get rarer as time goes on.”

Stevens says Ofthenorth’s discovery highlights the importance of citizen science.

“It just shows the power of curiosity and how anybody here can go into the water and there’s still that potential to find this really scientifically significant observation.”

If you have the time, Willick’s November 27, 2025 article “A rare photosynthesizing sea slug has been found off N.S. Here’s why scientists are excited” includes pictures and a video.

Introduction

I always look forward to this Christmas posting as it’s an opportunity to publish some stories that wouldn’t ordinarily be featured here, like the sea slug/leaf that opened this post. The focus will be (mostly) on animals. Note: I will not be removing links from the news/press releases nor will I be providing separate citations and links.

On to the rest of the programme: there should always be a little dance in one’s life.

Nicholas Brothers

[downloaded from https://mymodernmet.com/cab-calloway-jumpin-jive-nicholas-brothers/]

Watch their feet. There’s a reason dancers still talk about those two; the synchronization is something. A link to the Nicholas Brothers’ routine in Stormy Weather (1943 film) was in my December 24, 2025 posting but …

Given the past year, I think it’s time to revisit the brothers and Emma Taggart’s October 4, 2019 article on My Modern Met (scroll down to see Cab Calloway and his band performing ‘Jumpin Jive’ as an introduction to what is one of the most lauded tap routines ever recorded on film).

What makes it jaw-dropping is that it was done in one take and with no rehearsal. Do read Taggart’s October 4, 2019 article for a detail I found a little mystifying (what was the director thinking?).

I have a little more about ‘Stormy Weather’ and the dance. Nick Castle was the dance director (uncredited) for the movie and the Nicholas Brothers’ routine was something he worked out with the brothers. I found that bit of information in “C’mon, Get Happy: The Making of Summer Stock,” a 2023 book by David Fantle and Tom Johnson. The book has a number of dancers and choreographers commenting on the movie’s (Summer Stock) dance routines (Gene Kelly was in the movie) in detail. Castle was the ‘dance stager’ for “Summer Stock,” which is why the routine and the Nicholas Brothers are mentioned.

8,000 years ago, before numbers existed, art demonstrated early mathematical thinking (ethnomathematics)

A December 16, 2025 news item on ScienceDaily makes an extraordinary statement about art, mathematics, and prehistoric civilization,

A study published in the Journal of World Prehistory suggests that some of the earliest known images of plants created by humans served a deeper purpose than decoration. According to the researchers, these ancient designs also reveal early mathematical thinking.

By closely examining prehistoric pottery, Prof. Yosef Garfinkel and Sarah Krulwich of the Hebrew University traced the oldest consistent use of plant imagery in human art to more than 8,000 years ago. The pottery comes from the Halafian culture of northern Mesopotamia (c. 6200-5500 BCE). Their findings show that early farming communities carefully painted flowers, shrubs, branches, and trees, arranging them in ways that reflect deliberate geometric structure and numerical order.

A meticulously executed drawing of a single large flower, depicted in a symmetrical arrangement with 16 or 32 petals, and a bowl with 64 (+ 12) flowers. Courtesy: Canadian Friends of the Hebrew University of Jerusalem

This undated news release on the Canadian Friends of the Hebrew University of Jerusalem website provides more detail about the work, Note: There are lots of images accompanying this story that are not included here,

A new study reveals that the Halafian culture of northern Mesopotamia (c. 6200–5500 BCE) produced the earliest systematic plant imagery in prehistoric art: flowers, shrubs, branches, and trees painted on fine pottery, arranged with precise symmetry and numerical sequences, especially petal and flower counts of 4, 8, 16, 32, and 64. This suggests that early farming villages in the Near East already possessed sophisticated, practical mathematical thinking about dividing space and quantities, likely tied to everyday needs such as fairly sharing crops from collectively worked fields, long before writing or formal number systems existed.

A new study published in the Journal of World Prehistory reveals that some of humanity’s earliest artistic representations of botanical figures were far more than decorative: they were mathematical.

In an extensive analysis of ancient pottery, Prof. Yosef Garfinkel and Sarah Krulwich of the Hebrew University have identified the earliest systematic depictions of vegetal motifs in human history, dating back over 8,000 years to the Halafian culture of northern Mesopotamia (c. 6200–5500 BCE). Their research shows that these early agricultural communities painted flowers, shrubs, branches, and trees with remarkable care, and embedded within them evidence of complex geometric and arithmetic thinking. 

A New Understanding of Prehistoric Art

Earlier prehistoric art focused primarily on humans and animals. Halafian pottery, however, marks the moment when the plant world entered human artistic expression in a systematic and visually sophisticated way.

Across 29 archaeological sites, Garfinkel and Krulwich documented hundreds of carefully rendered vegetal motifs, some naturalistic, others abstract, all reflecting conscious artistic choice.

“These vessels represent the first moment in history when people chose to portray the botanical world as a subject worthy of artistic attention,” the authors note. “It reflects a cognitive shift tied to village life and a growing awareness of symmetry and aesthetics.” 

Among the study’s most striking insights is the precise numerical patterning in Halafian floral designs. Many bowls feature flowers with petal counts that follow geometric progression: 4, 8, 16, 32, and even arrangements of 64 flowers.

These sequences, the researchers argue, are intentional and demonstrate a sophisticated grasp of spatial division long before the appearance of written numbers.
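It’s worth noticing that 4, 8, 16, 32, 64 is exactly the sequence that repeated halving produces: start with two crossed diameters (four petals) and bisect every arc to double the count, with no number words required. A small Python sketch of that geometric procedure (my own illustration, not the authors’),

```python
import math

def bisect_circle(angles):
    """Insert a new petal midway between each adjacent pair,
    doubling the count: pure geometric halving, no numerals."""
    doubled = []
    n = len(angles)
    for i in range(n):
        a, b = angles[i], angles[(i + 1) % n]
        mid = a + ((b - a) % (2 * math.pi)) / 2
        doubled += [a, mid % (2 * math.pi)]
    return sorted(doubled)

# Start from two crossed diameters: 4 evenly spaced petals.
petals = [i * math.pi / 2 for i in range(4)]
counts = [len(petals)]
for _ in range(3):          # halve every arc three more times
    petals = bisect_circle(petals)
    counts.append(len(petals))
print(counts)  # → [4, 8, 16, 32]
```

The spacing stays perfectly even at every step, which is the researchers’ point: a potter who can fold or bisect can produce these counts, and the counts in turn record that act of even spatial division.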

“The ability to divide space evenly, reflected in these floral motifs, likely had practical roots in daily life, such as sharing harvests or allocating communal fields,” Garfinkel explains. 

This work contributes to the field of ethnomathematics, which identifies mathematical knowledge embedded in cultural expression.

The motifs documented span the full botanical spectrum:

  • Flowers with meticulously balanced petals
  • Seedlings and shrubs, rendered with botanical clarity
  • Branches, arranged in rhythmic, repeating bands
  • Tall, imposing trees, sometimes shown alongside animals or architecture

Notably, none of the images depict edible crops, suggesting that the purpose was aesthetic rather than agricultural or ritualistic. Flowers, the authors note, are associated with positive emotional responses, which may explain their prominence. 

Revising the History of Mathematics

While written mathematical texts appear millennia later in Sumer, Halafian pottery reveals an earlier, intuitive form of mathematical reasoning, rooted in symmetry, repetition, and geometric organization.

“These patterns show that mathematical thinking began long before writing,” Krulwich says. “People visualized divisions, sequences, and balance through their art.”

By cataloguing these vegetal motifs and revealing their mathematical foundations, the study offers a new perspective on how early communities understood the natural world, organized their environments, and expressed cognitive complexity.

The research paper titled “The Earliest Vegetal Motifs in Prehistoric Art: Painted Halafian Pottery of Mesopotamia and Prehistoric Mathematical Thinking” is now available in the Journal of World Prehistory and can be accessed at https://doi.org/10.1007/s10963-025-09200-9

And now, back to the animal stories

Frogs! There must always be at least one story here at Christmas about these critters.

Caption: Pristimantis chinguelas. Credit: Germán Chávez

According to a June 25, 2025 Pensoft (publisher) news release on EurekAlert, three new species have been found in northern Peru, Note: The link to the study has been retained while one other link has been removed,

High in the cloud-wrapped peaks of the Cordillera de Huancabamba, where the Andes dip and twist into isolated ridges, a team of Peruvian scientists has brought three secretive frogs out of obscurity and into the scientific record. The study [appears to be open access], led by herpetologist Germán Chávez and published in Evolutionary Systematics, describes Pristimantis chinguelas, P. nunezcortezi, and P. yonke—three new species discovered in the rugged, misty highlands of northwestern Peru.

“They’re small and unassuming,” Chávez says, “but these frogs are powerful reminders of how much we still don’t know about the Andes.”

Between 2021 and 2024, the team carried out a series of tough expeditions, hiking steep trails and combing mossy forests and wet páramo for signs of amphibian life. It was in this setting—both harsh and enchanting—that they encountered the new species.

Each frog tells a different story:

P. chinguelas, discovered on a cliffside of Cerro Chinguelas, has a body dotted with prominent large tubercles on both sides. Its high-pitched “peep” can be heard on humid nights.

P. nunezcortezi lives near a cool mountain stream in a regenerating forest. With large black blotches on axillae and groins, it was named in honour of ornithologist Elio Nuñez-Cortez, a conservation trailblazer in the region.

P. yonke, the smallest of the three, was found nestled in bromeliads at nearly 3,000 meters. Its name nods to “yonque,” a sugarcane spirit consumed by locals to brave the highland chill.

“Exploring this area is more than fieldwork—it’s an immersion into wilderness, culture, and resilience,” says co-author Karen Victoriano-Cigüeñas.

“Many of these mountain ridges are isolated, with no roads and extreme terrain,” adds Ivan Wong. “The weather shifts within minutes, and the steep cliffs make every step a challenge. It’s no wonder so few scientists have worked here before. But that’s exactly why there’s still so much to find.”

Despite the thrill of discovery, the frogs’ future is uncertain. The team observed signs of habitat degradation, fire damage, and expanding farmland. For now, the species are listed as Data Deficient under IUCN criteria, but the call to action is clear.

“The Cordillera de Huancabamba is not just a remote range—it’s a living archive of biodiversity and cultural legacy,” says co-author Wilmar Aznaran. “And we’ve barely scratched the surface.”

Caption: A new study publishing in Current Biology on September 18 reveals that dogs with a vocabulary of toy names—known as Gifted Word Learners—can extend learned labels to entirely new objects, not because the objects look similar, but because they are used in the same way. Credit: Department of Ethology / Eötvös Loránd University

Hungarian scientists have furthered the research into dogs and learning. From a September 18, 2025 Eötvös Loránd University press release (also on EurekAlert),

BUDAPEST, Hungary — A new study publishing in Current Biology on September 18 by the Department of Ethology at Eötvös Loránd University reveals that dogs with a vocabulary of toy names—known as Gifted Word Learners—can extend learned labels to entirely new objects, not because the objects look similar, but because they are used in the same way.

In humans, “label extension” is a cornerstone of early language development. In non-humans, until now, it had been documented only in a few so-called language-trained individual animals, after years of intensive training in captivity.

But learning to extend labels to objects that share the same function, rather than visual similarities, is considered an even more complex skill. A toddler learns that the word “cup” can apply to mugs, tumblers, and sippy cups, or that both a spoon and a ladle are “for scooping.” While individuals of many animal species can group items by appearance, extending a learned label to a functionally similar but visually different object has long been considered an advanced skill.

Video abstract at this link: https://youtu.be/8_NbCYAWSfU

The time and efforts needed to train animals in captivity to learn verbal labels, as well as the very limited number of subjects that successfully acquired such vocabulary, have until now limited the feasibility of this type of research.

But here comes the twist! “Gifted Word Learner dogs offer a unique possibility to study this phenomenon because they rapidly learn verbal labels – the names of toys – during natural interactions in their human families” said Dr. Claudia Fugazza, lead author of the study.

“Our results show that these dogs do not just memorize object names,” continues Dr. Fugazza. “They understand the meaning behind those labels well enough to apply them to new, very different-looking toys— by recognizing what the toys were for.”

Link to the social media of the Gifted Word Learner dogs project: https://linktr.ee/geniusdogchallenge

A Play-Based Experiment

Researchers at the Department of Ethology, Eötvös Loránd University, tested seven Gifted Word Learner dogs (six Border collies and a Blue heeler) known for their unusual ability to learn the names of dozens of toys naturally, through everyday play.

The experiment had four stages, all of them conducted in a natural setup, at the house of each dog owner, during playful interactions:

  1. First, in the Learning Phase, dogs learned two new labels, such as “Pull” and “Fetch,” each referring not to a single item, but to a group of toys that looked completely different but were used in the same way during play (tug or retrieve).
  2. Second, during a formal Assessment, the dogs showed that they had successfully learned those labels and could appropriately choose the “Pulls” and “Fetches” when asked.
  3. The crucial part of the experiment was carried out after this Assessment: in the Generalization Phase, the dogs were introduced to new toys, also with diverse physical features, and the owner played in the same two ways as before, but this time saying no labels.
  4. Test – When asked for a “Pull” or “Fetch,” the dogs selected the correct unlabelled toy significantly above chance, indicating they had generalized the labels to a functional category.
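What “significantly above chance” means in the final Test stage can be illustrated with a one-sided binomial test, the standard check for two-alternative choice tasks. The trial counts below are hypothetical, purely for illustration; the study’s actual statistics are in the Current Biology paper.

```python
from math import comb

def binomial_p_value(successes, trials, p_chance=0.5):
    """One-sided probability of seeing at least `successes` correct
    choices in `trials` two-alternative trials by chance alone."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical numbers (not the study's actual trial counts): a dog
# picking the right toy 14 times out of 16 two-choice trials.
p = binomial_p_value(14, 16)
print(f"p = {p:.4f}")  # well below 0.05, i.e. above chance
```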

Why This Matters

The study provides the first evidence that dogs can generalize verbal labels to functional categories during natural-like playful interactions in their human families—mirroring, in functional terms, the natural context of human language development.

“This ability shows that classification linked to verbal labels can emerge in non-human, non-linguistic species living in natural settings,” said Dr. Adam Miklosi, coauthor of the study. “It opens exciting new avenues for studying how language-related skills may evolve and function beyond our own species.”

Key Points

  • Dogs extended verbal labels to objects that shared only functional properties, not appearance.
  • The skill emerged naturally through play with owners—no formal training required.
  • While the mechanisms of such learning are not known, the context in which it happens presents a striking parallel with that of human infants: daily life in a human family.
  • The study of these skills in a non-human species in its natural environment paves the way for understanding how language-related skills evolved and function.

Journal

Current Biology

DOI

10.1016/j.cub.2025.08.013

Article Title

Dogs extend verbal labels for functional classification of objects

Caption: Red-cheeked Cordonbleu. Credit: Çağan Şekercioğlu, University of Utah

A September 30, 2025 University of Utah news release (also on EurekAlert) announced the BIRDBASE dataset has tracked (and continues to track) ecological traits for over 11,000 birds,

Çağan Şekercioğlu was an ambitious, but perhaps naive graduate student when, 26 years ago, he embarked on a simple data-compilation project that would soon evolve into a massive career-defining achievement.

With the help of countless students and volunteers, the University of Utah conservation biologist has finally released BIRDBASE, an encyclopedic dataset of traits covering all the bird species recognized by the world’s four major avian taxonomies.

Described this week in a study published in the journal Scientific Data, the dataset covers 78 ecological traits, including conservation status, for 11,589 species of birds in 254 families. The main trait categories tracked are body mass, habitat, diet, nest type, clutch size, life history, elevational range and movement strategy, that is, whether and how the birds migrate.

While some little-known species still have incomplete data, the dataset provides a foundation for ornithologists around the world to conduct new global analyses in ornithology, conservation biology and macroecology, including the links between bird species’ ecological traits and their risk of extinction, according to Şekercioğlu, a professor in the university’s School of Biological Sciences. He also hopes BIRDBASE will help other biologists win support for studying avian conservation.

“To get funding you have to have a big question, but without data, how are you going to answer those big questions?” Şekercioğlu posed. “It also shows we still have ways to go. Birds are the best-known class of organism, but even though they are the best known, we still have big data gaps.”

BIRDBASE’s public launch coincides with the release of the first unified global checklist for birds, known as AviList, a grand taxonomy under one cover.

The BIRDBASE project started in 1999 when Şekercioğlu was a graduate student at Stanford University, spending field seasons in Costa Rica. While writing the first chapter of his Ph.D. thesis, he needed to know the percentage of tropical forest understory bug-eating birds, technically known as insectivores, that are threatened with extinction. He was perplexed to discover that information had yet to be determined.

“I realized that statistic doesn’t exist because nobody had analyzed all the birds of the world and their threat status based on diet,” he said. “I’m like, this is unbelievable. There’s no global database on birds. I’m lucky that I was in grad school because I was naive and I love birds.”

In other words, he set out to figure it out himself. That meant gathering and organizing life history traits for all such bird species, including their diets, habitats and conservation status. For a keen birder like Şekercioğlu, it seemed like a simple task that would be fun, compiling data found on thousands of bird species published in huge beautifully illustrated volumes. It turned out to be tedious and seriously time consuming, but worthwhile.

Thanks to a cadre of volunteers in the Stanford Volunteer Program and undergraduates, whose labors were compensated by the Stanford Center for Conservation Biology, Şekercioğlu answered his question within a couple years. Twenty-seven percent of tropical understory insectivores were threatened or near threatened with extinction. This finding wound up not supporting the hypothesis of his research, but that’s science.
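The query that launched BIRDBASE, i.e. what share of tropical understory insectivores are threatened, becomes a few lines of code once the traits sit in a table. A toy sketch in Python; the field names and records below are invented for illustration, not BIRDBASE's actual schema or data.

```python
# Toy records mimicking a trait table like BIRDBASE's; species, field names
# and values are illustrative only.
birds = [
    {"species": "A", "diet": "insectivore", "habitat": "tropical understory", "status": "VU"},
    {"species": "B", "diet": "insectivore", "habitat": "tropical understory", "status": "LC"},
    {"species": "C", "diet": "frugivore",   "habitat": "canopy",              "status": "NT"},
    {"species": "D", "diet": "insectivore", "habitat": "tropical understory", "status": "EN"},
]

# IUCN codes counted as threatened or near threatened.
THREATENED = {"NT", "VU", "EN", "CR"}

group = [b for b in birds
         if b["diet"] == "insectivore" and b["habitat"] == "tropical understory"]
share = sum(b["status"] in THREATENED for b in group) / len(group)
print(f"{share:.0%} of tropical understory insectivores threatened")  # 67% on this toy data
```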

Yet the dataset was so helpful that he labored on with the data-compiling project, eventually covering all bird species and expanding the number of traits included. “What started as this little specialized question turned into this global database, the first of its kind,” he said.

BIRDBASE has proven a boon to many other avian researchers who have tapped it to support dozens of papers, most of them listing Şekercioğlu as co-author. The tally of Şekercioğlu’s papers that have used BIRDBASE currently stands at 98, accounting for 14,000 of Şekercioğlu’s 24,000-plus citations.

Among the conclusions the dataset has enabled is that a majority of the world’s bird species, or 54%, are insectivores, and many species in this group are under pressure.

“Most of them are tropical forest species. It is a very important group and they’re declining,” he said. “They’re sensitive even though they’re not hunted. They are small, so they don’t need a big area. You wouldn’t expect them to be the most sensitive group to habitat fragmentation but they are highly specialized.”

The dataset also showed that fish-eating seabirds are at elevated risk of extinction as well, and fruit-eating birds are vital to the survival of tropical rain forests.

“The most important seed dispersers in the tropics are frugivorous birds,” Şekercioğlu said. “In some tropical forests, over 90% of all woody plants’ seeds are dispersed by fruit-eating birds who eat them and then defecate the seeds somewhere else and they germinate.”

Now for the first time BIRDBASE is publicly available to all researchers online, “no strings attached.” It can be found as an Excel spreadsheet on a site hosted by Figshare, with separate worksheet tabs for trait values, trait definitions, nest details and data sources, with one row per species.

Şekercioğlu emphasized that BIRDBASE remains a work in progress that will be continuously updated. Kind of like a medieval cathedral that is open for worship, but never really finished. He estimated that nearly 30 person-years of labor have gone into the project, work that entails entering data collected from various authoritative sources, such as BirdLife International, Birds of the World, hundreds of bird books and ornithological papers, and Şekercioğlu’s field observations of more than 9,400 bird species.

“Thanks to my being naïve, something that started with just a little question in grad school led to the foundation of my career. Right now, if one of my students came to me and said, ‘Hey, as part of my PhD I want to enter the world’s birds into a dataset,’ I’m like, ‘No, you’re not doing that. You’ll never finish your Ph.D.’ Fortunately I finished my Ph.D., but think about it, 1999 is when I had the idea and we are still putting finishing touches in 2025.”

downloaded from bumblebeeconservation.org

Bumblebees can read Morse code? Apparently, the answer is yes. From a November 13, 2025 Queen Mary University of London (QMUL) press release (also on EurekAlert but published on November 12, 2025),

Researchers at Queen Mary University of London have shown for the first time that an insect – the bumblebee Bombus terrestris – can decide where to forage for food based on different durations of visual cues.  

In Morse code, a short duration flash or ‘dot’ denotes a letter ‘E’ and a long duration flash, or ‘dash’, means letter ‘T’. Until now, the ability to discriminate between ‘dot’ and ‘dash’ has been seen only in humans and other vertebrates such as macaques or pigeons.  

PhD student Alex Davidson and his supervisor Dr Elisabetta Versace, Senior Lecturer in Psychology at Queen Mary, led a team that studied this ability in bees. They built a special maze to train individual bees to find a sugar reward at one of two flashing circles, shown with either a long or short flash duration. For instance, when the short flash, or ‘dot’, was associated with sugar, then the long flash, or ‘dash’, was instead associated with a bitter substance that bees dislike.  

At each room in the maze, the position of the ‘dot’ and ‘dash’ stimulus was changed, so that bees could not rely on spatial cues to orient their choices. After bees learned to go straight to the flashing circle paired with the sugar, they were tested with flashing lights but no sugar present, to check whether bees’ choices were driven by the flashing light, rather than by olfactory or visual cues present in the sugar.   

It was clear the bees had learnt to tell the lights apart based on their duration, as most of them went straight to the flash duration previously associated with sugar, irrespective of the spatial location of the stimulus.
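At its core, the task the bees solved is a duration-threshold decision: call a flash a ‘dot’ if it is short, a ‘dash’ if it is long. A toy sketch; the durations below are made up, since the press release does not give the actual flash lengths used in the experiment.

```python
# Morse-style duration discrimination as a simple threshold rule.
# DOT_MS and DASH_MS are hypothetical flash lengths for illustration only.
DOT_MS, DASH_MS = 200, 600
THRESHOLD_MS = (DOT_MS + DASH_MS) / 2  # midpoint between the two trained durations

def classify(duration_ms):
    """Label a flash as 'dot' (short) or 'dash' (long) by its duration."""
    return "dot" if duration_ms < THRESHOLD_MS else "dash"

print(classify(180))  # dot
print(classify(640))  # dash
```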

Alex Davidson said: “We wanted to find out if bumblebees could learn the difference between these durations, and it was so exciting to see them do it”. 

“Since bees don’t encounter flashing stimuli in their natural environment, it’s remarkable that they could succeed at this task. The fact that they could track the duration of visual stimuli might suggest an extension of a time processing capacity that has evolved for different purposes, such as keeping track of movement in space or communication”. 

“Alternatively, this surprising ability to encode and process time duration might be a fundamental component of the nervous system that is intrinsic in the properties of neurons. Only further research will be able to address this issue.” 

The neural mechanisms involved in the ability to keep track of time for these durations remain mostly unknown, as the mechanisms discovered for entraining with the daylight cycle (circadian rhythms) and seasonal changes are too slow to explain the ability to differentiate between a ‘dash’ and a ‘dot’ with different duration.  

Various theories have been put forward, suggesting the presence of a single or multiple internal clocks. Now that the ability to differentiate between durations of flashing lights has been discovered in insects, researchers will be able to test different models in these ‘miniature brains’ smaller than one cubic millimetre. 

Elisabetta Versace continued: “Many complex animal behaviours, such as navigation and communication, depend on time processing abilities. It will be important to use a broad comparative approach across different species, including insects, to shed light on the evolution of those abilities. Processing durations in insects is evidence of a complex task solution using minimal neural substrate. This has implications for complex cognitive-like traits in artificial neural networks, which should seek to be as efficient as possible to be scalable, taking inspiration from biological intelligence.” 


Journal

Biology Letters

DOI

10.1098/rsbl.2025.0440

Article Title

Duration discrimination in the bumblebee Bombus terrestris

The bumblebee image at the start of this news bit is from Bumblebee Conservation Trust in the UK; their website can be found here.

Joyeux Noël!

We live in such an extraordinary world: able to watch the Nicholas Brothers give a performance that is decades old, observe a leaf that’s really a sea slug, discover that bumblebees can learn Morse code, etc.

I’m ‘wrapping’ this up with two more items.

The mathematics of gift wrapping

Credit: Krysten Casumpang. Courtesy: University of British Columbia (UBC)

A December 18, 2025 University of British Columbia (UBC) Question & Answer (Q&A) interview (also received via email) features mathematician Adam Martens,

UBC Mathematics postdoctoral fellow Adam Martens talks about the geometry of gift wrapping—and why you can’t wrap a ball perfectly (so don’t even bother!). 

From Christmas to Hanukkah to Kwanzaa, the gift-giving season is upon us. After we track down the perfect items for our favourite people, another task awaits us: gift wrapping. It’s not just an art—it’s math in disguise. 

We spoke to Dr. Adam Martens, UBC mathematics postdoctoral fellow and differential geometer, about the best shapes to reduce waste—and why a donut-shaped object can be wrapped perfectly, but only if you work in four dimensions. 

What is a differential geometer? 

 A geometer is a specialist in geometry, or the study of points, lines, angles, surfaces, and solids. A differential geometer studies smooth objects called ‘manifolds’, for example, a flat piece of paper or the surface of a ball. We also think about higher-dimensional objects, like the space-time of the universe. 

What is the easiest shape to wrap? 

No surprises here, but a box. The nice thing about wrapping a box is that each side is flat, and the flat edges meet at simple creases. Wrapping paper can be easily folded over the edges—mathematicians call this a manifold with corners. 

Wrapping paper is inherently flat and rigid. It can be folded, but from a mathematical point of view, it cannot be warped so that it lies flat on a curved surface. 

This means it’s mathematically impossible to wrap a sphere perfectly, i.e. without any creases or folds. The only way to effectively wrap a ball is to put the ball in a box. 
A closely related mathematical result is the “hairy ball theorem,” which says you can’t comb a hairy ball flat without creating a cowlick or hair swirl. 
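The impossibility Martens describes follows from Gauss's Theorema Egregium: bending paper without stretching or tearing it (an isometry) preserves Gaussian curvature, and flat paper and a sphere have different curvatures.

```latex
% Flat paper has Gaussian curvature K = 0 everywhere; a sphere of radius r
% has K = 1/r^2 > 0. Since any crease-free wrapping would be an isometry,
% and isometries preserve K (Theorema Egregium), no such wrapping exists:
K_{\text{paper}} = 0 \;\neq\; \frac{1}{r^{2}} = K_{\text{sphere}}.
```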

What is the most difficult shape to wrap? 

Technically, any shape that is not flat is equally difficult, because wrapping any of them smoothly is impossible: you cannot bend the wrapping paper to fit a non-flat surface. You could work around this by cutting and taping, but wherever the surface is curved, you cannot avoid creasing the wrapping paper. 

That being said, there are shapes that seem impossible to wrap but are technically doable. Take a donut shape, what we call a “torus” in math. This object can sit inside four-dimensional space where, if you were a 4D creature, you could make a torus flat and wrap it. That’s potentially not very helpful for your holiday shopping, since we’re 3D beings and can’t visualize what is going on.  

We can see this by taking a flat piece of paper. If you glue the long sides together, you get a cylinder. To finish the torus, you would then glue the two ends of the cylinder together. You can’t do this in 3D because the paper would crinkle, but in four dimensions you could close it up while keeping the paper perfectly flat. 
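The 4D claim has a concrete form known as the flat (Clifford) torus:

```latex
% In \mathbb{R}^4 the map
(u, v) \;\mapsto\; \tfrac{1}{\sqrt{2}}\,\bigl(\cos u,\; \sin u,\; \cos v,\; \sin v\bigr)
% embeds the torus with Gaussian curvature K = 0 everywhere, i.e. it is
% locally indistinguishable from a flat sheet of paper, so wrapping paper
% could cover it without creases -- but only in four dimensions.
```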

What gift-container shape minimizes the amount of wrapping needed? 

In geometry, the isoperimetric inequality is a principle that tells us that a sphere is the most efficient shape for enclosing an item. An example of this is when we blow bubbles in a glass of water—the bubbles form as spheres because the air inside of them wants to take up as little space as possible due to the air pressure they face on the outside. In this sense, a sphere would be your most optimal shape for minimizing wrapping, except it wouldn’t really because, as we know, you can’t really wrap a sphere very well. 

The next best option would be a cube—not an arbitrary rectangular box—where all sides are equal in length. For a fixed volume, a cube minimizes the surface area that needs to be covered in wrapping paper. 
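The cube-versus-sphere comparison can be checked numerically: for a fixed volume V, a sphere's surface area is about 4.84·V^(2/3), a cube's is 6·V^(2/3), and elongated boxes do worse still. A quick sketch:

```python
import math

def sphere_area(volume):
    # Surface area of the sphere enclosing the given volume.
    r = (3 * volume / (4 * math.pi)) ** (1 / 3)
    return 4 * math.pi * r ** 2

def box_area(volume, aspect=(1, 1, 1)):
    # Surface area of a rectangular box with the given side ratios and volume.
    a, b, c = aspect
    s = (volume / (a * b * c)) ** (1 / 3)  # scale so the box has `volume`
    x, y, z = a * s, b * s, c * s
    return 2 * (x * y + y * z + x * z)

V = 1000.0  # any fixed volume, e.g. 1000 cm^3
print(round(sphere_area(V), 1))          # 483.6, the theoretical minimum
print(round(box_area(V), 1))             # 600.0 for a cube
print(round(box_area(V, (1, 2, 4)), 1))  # 700.0 for an elongated box
```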

How about gift bags? 

It’s not always about optimization. As human beings, we tend to find things aesthetically pleasing when they’re not square. Gift bags, for example, are elongated in one direction. We like the look of this. A lot of it has to do with the golden ratio—1.618, also known as Phi—which we can find in nature, including in the radial spiral of pinecones or sunflower seeds, in art in the proportions of the Mona Lisa’s face and torso, and architecture, in the proportions of the Parthenon. I even have it tattooed on my arm. Many people think that some of these appearances in nature are just a coincidence or selection bias, but something about this ratio is very pleasing to the eye. 
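The golden ratio Martens mentions, φ = (1 + √5)/2, is also the limit of ratios of consecutive Fibonacci numbers, the counting pattern behind those pinecone and sunflower spirals. A quick check:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden ratio, ~1.618

# Ratios of consecutive Fibonacci numbers converge to phi.
a, b = 1, 1
for _ in range(20):  # twenty Fibonacci steps are plenty to converge
    a, b = b, a + b

print(round(b / a, 6))  # 1.618034
print(round(phi, 6))    # 1.618034
```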

3D Printed Ice Christmas Tree Image: University of Amsterdam [downloaded from https://www.homecrux.com/3d-printed-ice-christmas-tree/353009/]

A tree made entirely of ice, with not a freezer, piece of refrigeration equipment, chainsaw, or block of ice in sight. You might call it a physics miracle.

A thank you to Nanowerk where I found the December 17, 2025 news item.

You can also read more about the icy Christmas tree in a December 17, 2025 University of Amsterdam (Netherlands) press release or in a December 19, 2025 article by Happy Jasta for homecrux.com

I wish you all the best of celebrations.

Toronto’s ArtSci Salon is hosting a couple more October 2025 events

I have two art/science events and one art/science conference/festival (IRL [in real life or in person] and Zoom) taking place in Toronto, Ontario.

October 16, 2025

There is a closing event for the “I don’t do Math” series mentioned in my September 8, 2025 posting,

ABOUT
“I don’t do math” is a photographic series referencing dyscalculia, a learning difference affecting a person’s ability to understand and manipulate number-based information.

This initiative seeks to raise awareness about the challenges posed by dyscalculia with educators, fellow mathematicians, and parents, and to normalize its existence, leading to early detection and augmented support. In addition, it seeks to reflect on and question broader issues and assumptions about the role and significance of Mathematics and Math education in today’s changing socio-cultural and economic contexts. 

The exhibition will contain pedagogical information and activities for visitors and students. The artist will also address the extensive research that led to the exhibition. The exhibition will feature two panel discussions, one following the opening and one to conclude the exhibition.

I have some information from an October 12, 2025 ArtSci Salon announcement (received via email) about the “I don’t do math” closing event,

Join us for 

Closing Exhibition Panel Discussion
Thursday, October 16 2025
10:00 am to 12:00 pm, Room 309
The Fields Institute for Research in Mathematical Sciences (or online)

Artist Ann Piché will be in conversation with
Andrew Fiss, Jacqueline Wernimont, Amenda Chow, Ellen Abrams, Michael Barany and JP Ascher

RSVP here

October 21, 2025

The second event mentioned in the October 12, 2025 ArtSci Salon announcement, Note 1: A link has been removed, Note 2: This event is part of a larger series,

Marco Donnarumma 
Monsters of Grace: bodies, sounds, and machines

Tuesday, October 21, 2025
3:30-4:30 PM
Sensorium Research Loft 
4th floor
Goldfarb Centre for Fine Arts
York University

About the talk
What is sound to those who do not hear it? How does one listen to something that cannot be heard? What kind of sensory gaps are created by aiding technologies such as prostheses and artificial intelligence (AI)? As a matter of fact, the majority of non-deaf people hear only partially due to age and personal experience. Still, sound is most often considered through the normalizing viewpoint of the non-deaf. If I become your body, what does sound become for me? Join us to welcome Marco Donnarumma ahead of his new installation/performance at Paul Cadario Conference Room (Oct 22, 8-10 PM University College [University of Toronto] – 15 King’s College Circle). His talk will focus on this latest work in the context of a larger body of work titled “I Am Your Body,” an ongoing project investigating how normative power is enforced through the technological mediation of the senses.

About the artist:
Marco Donnarumma is an artist, inventor and theorist. His oeuvre confronts normative body politics with uncompromising counter-narratives, where bodies are in tension between control and agency, presence and absence, grace and monstrosity. He is best known for using sound, AI, biosensors, and robotics to turn the body into a site of resistance and transformation. He has presented his work in thirty-seven countries across Asia, Europe, North and South America and is the recipient of numerous accolades, most notably the German Federal Ministry of Research and Education’s Artist of the Science Year 2018, and the Prix Ars Electronica’s Award of Distinction in Sound Art 2017. Donnarumma received a ZER01NE Creator grant in 2024 and was named a pioneer of performing arts with advanced technologies by the major national newspaper Der Standard, Austria. His writings are published in Frontiers in Computer Science, Computer Music Journal and Performance Research, among others, and his newest book chapter, co-authored with Elizabeth Jochum, will appear in Robot Theaters by Routledge. Together with Margherita Pevere he runs the performance group Fronte Vacuo.


I wonder if Donnarumma’s “Monsters of Grace: bodies, sounds, and machines” received any inspiration from “Monsters of Grace” (Wikipedia entry) or if it’s just happenstance, Note: Links have been removed,

Monsters of Grace is a multimedia chamber opera in 13 short acts directed by Robert Wilson, with music by Philip Glass and libretto from the works of 13th-century Sufi mystic Jalaluddin Rumi. The title is said to be a reference to Wilson’s corruption of a line from Hamlet: “Angels and ministers of grace defend us!” (1.4.39).

So, the October 21, 2025 event is a talk at York University taking place before the “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence” (more below).

“Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference and arts festival at the University of Toronto

The conference (October 23 – 24, 2025) is concurrent with the arts festival (October 19 – 25, 2025) at the University of Toronto. Here’s more from the event homepage on the https://bmolab.artsci.utoronto.ca/ website, Note: BMO stands for Bank of Montreal, Note: No mention of Edward Albee and “Who’s afraid of Virginia Woolf?,”

2025 marks an inflection point in our technological landscape, driven by seismic shifts in AI innovation.

Who’s Afraid of AI? Arts, Science, and the Futures of Intelligence is a week-long inquiry into the implications and future directions of AI for our creative and collective imaginings, and the many possible futures of intelligence. The complexity of this immediate future calls for interdisciplinary dialogue, bringing together artists, AI researchers, and humanities scholars.

In this volatile domain, the question of who envisions our futures is vital. Artists explore with complexity and humanity, while the humanities reveal the histories of intelligence and the often-overlooked ways knowledge and decision-making have been shaped. By placing these voices in dialogue with AI researchers and technologists, Who’s Afraid of AI? examines the social dimensions of technology, questions tech solutionism from a social-impact perspective, and challenges profit-driven AI with innovation guided by public values.

The two-day conference at the University of Toronto’s University College anchors the week and features panels and debates with leading figures in these disciplines, including a keynote by 2025 Nobel Laureate in Physics Geoffrey Hinton, the “Godfather of AI” and 2025 Neil Graham Lecturer in Science, Fei-Fei Li, an AI pioneer.

Throughout the week, the conversation continues across the city with:

  • AI-themed and AI powered art shows and exhibitions
  • Film screenings
  • Innovative theatre
  • Experimental music

Who’s Afraid of AI? demonstrates that Toronto has not only shaped the history of AI but continues to prepare its future. Step into this changing landscape and be part of this transformative dialogue — register today!

Organizing Committee:

Pia Kleber, Professor-Emerita, Comparative Literature, and Drama, U of T
Dirk Bernhardt-Walther, Department of Psychology, Program Director, Cognitive Science, U of T
David Rokeby, Director, BMO Lab, Centre for Drama, Theatre and Performance Studies, U of T
Rayyan Dabbous, PhD candidate, Centre for Comparative Literature, U of T

This looks like a pretty interesting programme (if you’re mainly focused on AI and the creative arts), from the event homepage on the https://bmolab.artsci.utoronto.ca/ website, Note 1: All times are ET, Note 2: I have not included speakers’ photos,

The conference will explore core questions about AI, such as its capabilities, possibilities and challenges, with speakers bringing their unique research, creative practice, scholarship and experience to the discussion. Speakers will also engage in an interdisciplinary conversation on topics including AI’s implications for theories of mind and embodiment, its influence on creation, innovation, and discovery, its recognition of diverse perspectives, and its transformation of artistic, cultural, political and everyday practices.

Thursday, October 23, 2025

Mind the World

9 AM | Clark Reading Room, University College – 15 King’s College Circle

What are the merits and limits of artificial intelligence within the larger debate on embodiment? This session brings together an artist who has given AI a physical dimension, a neuroscientist who reckons with the biological neural networks inspiring AI, and a humanist knowledgeable of the longer history in which the human has tried to decouple itself from its bodily needs and wants.

Suzanne Kite
Director, The Wihanble S’a Center for Indigenous AI

James DiCarlo
Director, MIT Quest for Intelligence

N. Katherine Hayles
James B. Duke Distinguished Professor Emerita of Literature

Staging AI

11 AM | Clark Reading Room, University College – 15 King’s College Circle

How is AI changing the arts? To answer this question, we bring together theatre directors and artists who have made AI the main driving plot of their stories and those who opted to keep technology secondary in their productions.

Kay Voges
Artistic Director, Schauspiel Köln

Roland Schimmelpfennig
Playwright and Director, Berlin

Hito Steyerl
Artist, Filmmaker and Writer, Berlin

Recognizing ‘Noise’

2 PM | Clark Reading Room, University College – 15 King’s College Circle

How can we design a more inclusive AI? This session brings together an artist who has worked with AI and has been sensitive to groups who may be excluded by its practice, an inclusive design scholar who has grappled with AI’s potential for personalized accessibility, and a humanist who understands the longer history of pattern recognition from which AI emerged.

Marco Donnarumma
Artist, Inventor, Theorist, Berlin

Jutta Treviranus
Director, OCADU [Ontario College of Art & Design University],
Inclusive Design Research Centre

Eryk Salvaggio
Media Artist and Tech Policy Press Fellow, Rochester

Art, Design, and Application are the Solution to AI’s Charlie Chaplin Problem

4 PM | Hart House Theatre – 7 Hart House Circle

Daniel Wigdor
Co-Founder and Chief Executive Officer, AXL

Keynote and Neil Graham Lecture in Science

4:15 PM | Hart House Theatre – 7 Hart House Circle

Fei-Fei Li
Sequoia Professor in Computer Science, Stanford Institute for Human-Centered AI

Geoffrey Hinton
2024 Nobel Laureate in Physics, Professor Emeritus in Computer Science

Friday, October 24, 2025

Life with AI

9 AM | Clark Reading Room, University College – 15 King’s College Circle

How do machine minds relate to human minds? What can we learn from one about the other? In this session we interrogate the impact of AI on our understanding of human knowledge and tool-making, from the perspective of philosophy, computer science, as well as the arts.

Jeanette Winterson
Author, Fellow of the Royal Society of Literature, Great Britain

Leif Weatherby
Professor of German and Director of Digital Theory Lab at
New York University

Jennifer Nagel
Professor, Philosophy, University of Toronto Mississauga

Discovery & In/Sight

11 AM | Clark Reading Room, University College – 15 King’s College Circle

This session explores creative practice through the lens of innovation and cultural/scientific advancement. An artist who creates with critical inspiration from AI joins forces with an innovation scholar who investigates the effects of AI on our decision making, as well as a philosopher of science who understands scientific discovery and inference as well as their limits.

Vladan Joler
Visual Artist and Professor of
New Media, University of Novi Sad [Serbia]

Alán Aspuru-Guzik
Professor of Chemistry and Computer Science, University of Toronto

Brian Baigrie
Professor, Institute for the History and Philosophy of Science & Technology, University of Toronto

Social history & Possible Futures

2 PM | Clark Reading Room, University College – 15 King’s College Circle

How do AI ownership and its private uses coexist within a framework of public good? This session brings together an artist who has created AI tools to be used by others, an AI ethics researcher who has turned algorithmic bias into collective insight, and a philosopher who understands the connection between AI and the longer history of automation and work from which AI emerged.

Memo Akten
Artist working with Code, Data and AI, UC San Diego

Beth Coleman
Professor, Institute of Communication, Culture, Information and Technology, University of Toronto

Matteo Pasquinelli
Professor, Philosophy and Cultural Heritage Università Ca’ Foscari Venezia [Italy]

A Theory of Latent Spaces | Conclusion: Where do we go from here?

4 PM | Clark Reading Room, University College – 15 King’s College Circle

Antonio Somaini, curator of the remarkable ‘World through AI’ exhibition at the Musée du Jeu de Paume in Paris, will discuss the way in which ‘latent spaces’, a core characteristic of current AI models, act as “meta-archives” that profoundly shape our relation with the past.

Following this, we will engage in a larger discussion amongst the various conference speakers and attendees on how we can, as artists, humanities scholars, scientists and the general public, collectively imagine and cultivate a future where AI serves the public good and enhances our individual and collective lives.

Antonio Somaini
Curator and Professor, Sorbonne Nouvelle [Université Sorbonne Nouvelle]

You can register here for this free conference, although there’s now a waitlist for in-person attendance. Do not despair; there’s access by Zoom,

In case you can’t make it in person, join us by Zoom:

Link: https://utoronto.zoom.us/j/82603012955

Webinar ID: 826 0301 2955

Passcode: 512183

I have not forgotten the festival, from the event homepage on the https://bmolab.artsci.utoronto.ca/ website,

Events Also Happening

October 22 | 2 PM | Student Forum and AI Commentary Contest Award | Paul Cadario Conference Room, University College – 15 King’s College Circle

October 22 | 8 – 10 PM | Marco Donnarumma, world première of a new performance installation | Paul Cadario Conference Room, University College – 15 King’s College Circle

October 23 | 2 PM | Jeanette Winterson: Arts & AI Talk | Paul Cadario Conference Room, University College – 15 King’s College Circle

October 24 | 7 PM | The Kiss by Roland Schimmelpfennig | The BMO Lab, University College – 15 King’s College Circle (Note: we are scheduling more performances. Check back for more info soon!)

October 25 | 8 PM | AI Cabaret featuring Jason Sherman, Rick Miller, Cole Lewis, BMO Lab projects and more | Crow’s Theatre, Nada Ristich Studio-Gallery – 345 Carlaw Avenue.

Get tickets for the AI Cabaret

(Use promo code AICAB for 100% discount)

Enjoy!

Brain-machine interface on a chip

Caption: An entire brain-machine interface on a chip: Converting brain activity to text on one extremely small integrated system. Credit: © 2024 EPFL / Lundi13 – CC-BY-SA 4.0

News about an entire brain-machine interface (BMI) on a chip comes from an August 26, 2024 École Polytechnique Fédérale de Lausanne (EPFL) press release (also on EurekAlert), Note: Links have been removed,

Brain-machine interfaces (BMIs) have emerged as a promising solution for restoring communication and control to individuals with severe motor impairments. Traditionally, these systems have been bulky, power-intensive, and limited in their practical applications. Researchers at EPFL have developed the first high-performance, Miniaturized Brain-Machine Interface (MiBMI), offering an extremely small, low-power, highly accurate, and versatile solution. Published in the latest issue of the IEEE Journal of Solid-State Circuits and presented at the International Solid-State Circuits Conference, the MiBMI not only enhances the efficiency and scalability of brain-machine interfaces but also paves the way for practical, fully implantable devices. This technology holds the potential to significantly improve the quality of life for patients with conditions such as amyotrophic lateral sclerosis (ALS) and spinal cord injuries.

The MiBMI’s small size and low power are key features, making the system suitable for implantable applications. Its minimal invasiveness ensures safety and practicality for use in clinical and real-life settings. It is also a fully integrated system, meaning that the recording and processing are done on two extremely small chips with a total area of 8 mm². This is the latest in a new class of low-power BMI devices developed at Mahsa Shoaran’s Integrated Neurotechnologies Laboratory (INL) at EPFL’s IEM and Neuro X institutes.

“MiBMI allows us to convert intricate neural activity into readable text with high accuracy and low power consumption. This advancement brings us closer to practical, implantable solutions that can significantly enhance communication abilities for individuals with severe motor impairments,” says Shoaran.

Brain-to-text conversion involves decoding neural signals generated when a person imagines writing letters or words. In this process, electrodes implanted in the brain record neural activity associated with the motor actions of handwriting. The MiBMI chipset then processes these signals in real-time, translating the brain’s intended hand movements into corresponding digital text. This technology allows individuals, especially those with locked-in syndrome and other severe motor impairments, to communicate by simply thinking about writing, with the interface converting their thoughts into readable text on a screen.

“While the chip has not yet been integrated into a working BMI, it has processed data from previous live recordings, such as those from the Shenoy lab at Stanford [Stanford University in California, US], converting handwriting activity into text with an impressive 91% accuracy,” says lead author Mohammed Ali Shaeri. The chip can currently decode up to 31 different characters, an achievement unmatched by any other integrated systems. “We are confident that we can decode up to 100 characters, but a handwriting dataset with more characters is not yet available,” adds Shaeri.

Current BMIs record the data from electrodes implanted in the brain and then send these signals to a separate computer to do the decoding. The MiBMI chip records the data but also processes the information in real time—integrating a 192-channel neural recording system with a 512-channel neural decoder. This neurotechnological breakthrough is a feat of extreme miniaturization that combines expertise in integrated circuits, neural engineering, and artificial intelligence. This innovation is particularly exciting in the emerging era of neurotech startups in the BMI domain, where integration and miniaturization are key focuses. EPFL’s MiBMI offers promising insights and potential for the future of the field.

To be able to process the massive amount of information picked up by the electrodes on the miniaturized BMI, the researchers had to take a completely different approach to data analysis. They discovered that the brain activity for each letter, when the patient imagines writing it by hand, contains very specific markers, which the researchers have named distinctive neural codes (DNCs). Instead of processing thousands of bytes of data for each letter, the microchip only needs to process the DNCs, which are around a hundred bytes. This makes the system fast and accurate, with low power consumption. This breakthrough also allows for faster training times, making learning how to use the BMI easier and more accessible.
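The basic idea behind decoding with compact codes — matching a ~100-byte signature per letter against stored per-letter templates rather than crunching the raw neural stream — can be sketched as a nearest-centroid lookup. This is a toy illustration under assumed sizes and random data, not the chip’s actual algorithm; all names and dimensions here are illustrative.

```python
import numpy as np

N_CLASSES = 31   # characters the MiBMI can currently decode (per the release)
CODE_LEN = 100   # ~100-byte distinctive neural code per letter (illustrative)

rng = np.random.default_rng(0)

# Hypothetical stored templates: one compact code per character class,
# standing in for the per-letter DNCs learned during training.
templates = rng.normal(size=(N_CLASSES, CODE_LEN))

def decode_letter(dnc: np.ndarray) -> int:
    """Return the index of the stored template nearest the incoming code."""
    dists = np.linalg.norm(templates - dnc, axis=1)
    return int(np.argmin(dists))

# Simulate a slightly noisy observation of character class 7 and decode it.
observed = templates[7] + 0.1 * rng.normal(size=CODE_LEN)
print(decode_letter(observed))  # recovers class 7 given the small noise
```

The point of the sketch is the cost model: each classification touches only 31 × 100 numbers, which is why operating on compact codes instead of raw multi-channel recordings keeps the computation small enough for an implantable chip.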

Collaborations with other teams at EPFL’s Neuro-X and IEM Institutes, such as with the laboratories of Grégoire Courtine, Silvestro Micera, Stéphanie Lacour, and David Atienza promise to create the next generation of integrated BMI systems. Shoaran, Shaeri and their team are exploring various applications for the MiBMI system beyond handwriting recognition. “We are collaborating with other research groups to test the system in different contexts, such as speech decoding and movement control. Our goal is to develop a versatile BMI that can be tailored to various neurological disorders, providing a broader range of solutions for patients,” says Shoaran.

Here’s a link to and a citation for the paper,

A 2.46-mm² Miniaturized Brain-Machine Interface (MiBMI) Enabling 31-Class Brain-to-Text Decoding by MohammadAli Shaeri, Uisub Shin, Amitabh Yadav, Riccardo Caramellino, Gregor Rainer, Mahsa Shoaran. IEEE Journal of Solid-State Circuits, Volume 59, Issue 11, pp. 3566-3579, Nov. 2024. DOI: 10.1109/JSSC.2024.3443254

This paper is behind a paywall.

Sprayable gels could protect buildings during wildfires

This seems like a good idea especially for those of us who live in areas where wildfires have become commonplace, from an August 22, 2024 news item on ScienceDaily,

As climate change creates hotter, drier conditions, we are seeing longer fire seasons with larger, more frequent wildfires. In recent years, catastrophic wildfires have destroyed homes and infrastructure, caused devastating losses in lives and livelihoods of people living in affected areas, and damaged wildland resources and the economy. We need new solutions to fight wildfires and protect areas from damage.

Researchers at Stanford have developed a water-enhancing gel that could be sprayed on homes and critical infrastructure to help keep them from burning during wildfires [emphasis mine]. The research, published Aug. 21 [2024] in Advanced Materials, shows that the new gels last longer and are significantly more effective [emphasis mine] than existing commercial gels.

An August 21, 2024 Stanford University news release (also on EurekAlert but published August 22, 2024), which originated the news item, delves further into the research, Note: Links have been removed,

“Under typical wildfire conditions, current water-enhancing gels dry out in 45 minutes,” said Eric Appel, associate professor of materials science and engineering in the School of Engineering, who is senior author of the paper. “We’ve developed a gel that would have a broader application window – you can spray it further in advance of the fire and still get the benefit of the protection – and it will work better when the fire comes.”

Long-lasting protection

Water-enhancing gels are made of super-absorbent polymers – similar to the absorbent powder found in disposable diapers. Mixed with water and sprayed on a building, they swell into a gelatinous substance that clings to the outside of the structure, creating a thick, wet shield. But the conditions in the vicinity of a wildfire are extremely dry – temperatures can be near 100 degrees, with high winds and zero percent humidity – and even water locked in a gel evaporates fairly quickly.

In the gel designed by Appel and his colleagues, the water is just the first layer of protection. In addition to a cellulose-based polymer, the gel contains silica particles, which get left behind when the gels are subjected to heat. “We have discovered a unique phenomenon where a soft, squishy hydrogel seamlessly transitions into a robust aerogel shield under heat, offering enhanced and long-lasting wildfire protection. This environmentally conscious breakthrough surpasses current commercial solutions, offering a superior and scalable defense against wildfires,” said the lead author of the study, Changxin “Lyla” Dong.

“When the water boils off and all of the cellulose burns off, we’re left with the silica particles assembled into a foam,” Appel said. “That foam is highly insulative and ends up scattering all of the heat, completely protecting the substrate underneath it.”

The silica forms an aerogel – a solid, porous structure that is a particularly good insulator. Similar silica aerogels are used in space applications because they are extremely lightweight and can prevent most methods of heat transfer.

The researchers tested several formulations of their new gel by applying them to pieces of plywood and exposing them to direct flame from a gas hand-torch, which burns at a considerably higher temperature than a wildfire. Their most effective formulation lasted for more than 7 minutes before the board began to char. When they tested a commercially available water-enhancing gel in the same way, it protected the plywood for less than 90 seconds.

“Traditional gels don’t work once they dry out,” Appel said. “Our materials form this silica aerogel when exposed to fire that continues to protect the treated substrates after all the water has evaporated. These materials can be easily washed away once the fire is gone.”

A serendipitous discovery

The new gels build off of Appel’s previous wildfire prevention work. In 2019, Appel and his colleagues used these same gels as a vehicle to hold wildland fire retardants on vegetation for months at a time. The formulation was intended to help prevent ignition in wildfire-prone areas.

“We’ve been working with this platform for years now,” Appel said. “This new development was somewhat serendipitous – we were wondering how these gels would behave on their own, so we just smushed some on a piece of wood and exposed it to flames from a torch we had laying around the lab. What we observed was this super cool outcome where the gels puffed up into an aerogel foam.”

After that initial success, it took several years of additional engineering to optimize the formulation. It is now stable in storage, easily sprayable with standard equipment, and adheres well to all kinds of surfaces. The gels are made of nontoxic components that have already been approved for use by the U.S. Forest Service, and the researchers conducted studies to show that they are easily broken down by soil microbes.

“They’re safe for both people and the environment,” Appel said. “There may need to be additional optimization, but my hope is that we can do pilot-scale application and evaluation of these gels so we can use them to help protect critical infrastructure when a fire comes through.”


Here’s a link to and a citation for the paper,

Water-Enhancing Gels Exhibiting Heat-Activated Formation of Silica Aerogels for Protection of Critical Infrastructure During Catastrophic Wildfire by Changxin Dong, Andrea I. d’Aquino, Samya Sen, Ian A. Hall, Anthony C. Yu, Gabriel B. Crane, Jesse D. Acosta, Eric A. Appel. Advanced Materials DOI: https://doi.org/10.1002/adma.202407375 First published online: 21 August 2024

This paper is open access.

Back to school: Stanford University (California) brings nanoscience to teachers and Ingenium brings STEAM to school

I have two stories that fit into the ‘back to school’ theme, one from Stanford University and one from Ingenium (Canada’s Museums of Science and Innovation).

Stanford, nanoscience, and middle school teachers

h/t to Google Alert of August 27, 2024 (received via email) for information about a Stanford University programme for middle school teachers. From an August 27, 2024 article in the Stanford Report, Note: Links have been removed,

Crafting holographic chocolate, printing with the power of the sun, and seeing behind the scenes of cutting-edge research at the scale of one-billionth of a meter, educators participating in the Nanoscience Summer Institute for Middle School Teachers (NanoSIMST) got to play the role of students, for a change.

Teachers hailed from the Bay Area and Southern California – one had even come all the way from Arkansas – for the professional development program. NanoSIMST, run by nano@stanford, is designed to connect middle school teachers with activities, skills, and knowledge about science at the scale of molecules and atoms so they can incorporate it into their curriculum. NanoSIMST also prioritizes teachers from Title I schools, which are schools with high proportions of low-income students that receive federal funding to improve academic achievement.

Debbie Senesky, the site investigator and principal researcher on the nano@stanford project, highlighted the importance of nanoscience at the university. “It’s not just about focusing on research – we also have bigger impacts on entrepreneurs, start-ups, community colleges, and other educators who can use these facilities,” said Senesky, who is also an associate professor of aeronautics and astronautics and of electrical engineering. “We’re helping to train the next generation of people who can be a workforce in the nanotechnology and semiconductor industry.”

The program also supports education and outreach, including through NanoSIMST, which uniquely reaches out to middle school teachers due to the STEM education outcomes that occur at that age. According to a 2009 report by the Lemelson-MIT InvenTeam Initiative, even among teens who were interested in and felt academically prepared in their STEM studies, “nearly two-thirds of teens indicated that they may be discouraged from pursuing a career in science, technology, engineering or mathematics because they do not know anyone who works in these fields (31%) or understand what people in these fields do (28%).”

A teacher from the Oakland Unified School District, Thuon Chen, connected several other teachers from OUSD to attend NanoSIMST as a first-time group. He emphasized that young kids, especially in middle school, have a unique way of approaching new technologies. “Kids have this sense where they’re always pushing things and coming up with completely new uses, so introducing them to a new technology can give them a lot to work with.”

Over the course of four days in the summer, NanoSIMST provides teachers with an understanding of extremely small science and technology: they go through tours of the nano facilities, speak with scientists, perform experiments that can be conducted in the classroom, and learn about careers in nanotechnology and the semiconductor industry.

Tara Hodge, the teacher who flew all the way from Arkansas, was thrilled about bringing what she learned back with her. “I’m not a good virtual learner, honestly. That’s why I came here. And I’m really excited to learn about different hands-on activities. Anything I can get excited about, I know I can get my students excited about.”

They have provided a video,

One comment regarding the host, Daniella Duran, the director of education and outreach for nano@stanford: she comments that nano is everywhere and then says “… everything has a microchip in it.” I wish she’d been a little more careful with the wording. Granted, those microchips likely have nanoscale structures.

Ingenium’s STEAM (science, technology, engineering, arts, and mathematics) programmes for teachers across Canada

An August 27, 2024 Ingenium newsletter (received via email) lists STEAM resources being made available for teachers across the country.

There appears to be a temporary copy of the August 27, 2024 Ingenium newsletter here,

STEAM lessons made simple!

Another school year is about to begin, and whether you’re an experienced teacher or leading your first class, Ingenium has what you need to make your STEAM (science, technology, engineering, arts and math) lessons fun! With three museums of science and innovation – the Canada Agriculture and Food Museum, the Canada Aviation and Space Museum and the Canada Science and Technology Museum – under one umbrella, we are uniquely positioned to help your STEAM lessons come to life.

Embark on an exciting adventure with our bilingual virtual field trips and meet the animals in our barns, explore aviation technology, and conduct amazing science experiments.

Or take advantage of our FREE lesson plans, activities and resources to simplify and animate your classroom, all available in English and French. With Ingenium, innovation is at your fingertips!

Bring the museum to your classroom with a virtual field trip!

Can’t visit in person? Don’t worry, Ingenium will bring the museum to you! All of our virtual field trips are led by engaging guides who will animate each subject with an entertaining and educational approach. Choose from an array of bilingual programs designed for all learners that cover the spectrum of STEAM subjects, including the importance of healthy soil, the genetic considerations of a dairy farm operation, the science of flight, simple machines, climate change and the various states of matter. There is so much to discover with Ingenium. Book your virtual field trip today!

Here’s a video introduction to Ingenium’s offerings,

To get a look at all the resources, check out this temporary copy of the August 27, 2024 Ingenium newsletter here.

Interweave: A multi-sensory show (March 21, 2024 in Vancouver, Canada) where fashion, movement, & music come together through wearable instruments.

Interweave is a free show at The Kent in the gallery in downtown Vancouver, Canada. Here’s more from a Simon Fraser University (SFU) announcement (received via email),

SFU School for the Contemporary Arts (SCA) alumnus, Kimia Koochakzadeh-Yazdi, is hosting Interweave, a multi-sensory show where fashion, movement, and music come together through wearable instruments.

Embrace the fusion of creativity and expression alongside your fellow alumni in a setting that celebrates innovation and the uncharted synergy between fashion, music, and movement. This is a great opportunity to mingle and reconnect with your peers.

Event Details:

Date: March 21, 2024
Time: Doors 7:30pm, Show 8:00pm
Location: The Kent Vancouver, 534 Cambie Street
Free Entry, RSVP required

Interweave is the first event from Fashion x Electronics (FXE), a collective created by Kimia Koochakzadeh-Yazdi, SCA alumnus, composer, and performer, and designer Kayla Yazdi. FXE is an interdisciplinary collective that is building multi-sensory experiences for their community, bridging together a diverse range of disciplines.

This is a 19+ event. ID will be checked at the door.

RSVP Now!

I wasn’t able to discern much more about the event or the Yazdi sisters from their Fashion x Electronics (FXE) website but there is this about Kayla Yazdi on her FXE profile,

Kayla Yazdi

Designer / Co-Producer

Kayla Yazdi is an Iranian-Canadian designer based in Vancouver, Canada. Her upbringing in Iran immersed her in a world of culture, art, and color. Holding a diploma in painting and a bachelor’s degree in design with a specialization in fashion and technology, Kayla has cultivated the skill set that merges her artistic sensibilities with innovative design concepts.

Kayla is dedicated to the creation of “almost” zero-waste garments. With design, technology, and experimentation, Kayla seeks to minimize environmental impacts while delivering unique styles.

Kimia Koochakzadeh-Yazdi’s FXE profile has this,

Kimia Koochakzadeh-Yazdi

Sound Artist / Co-Producer

Kimia Koochakzadeh-Yazdi (b. 1997, Tehran, Iran) is a California/Vancouver-based composer and performer. She writes for hybrid instrumental/electronic ensembles, creates electroacoustic and audiovisual works, and performs electronic music. Kimia explores the unfamiliar familiar while constantly being driven by the concepts of motion, interaction, and growth in both human life and in the sonic world. Being a cross-disciplinary artist, she has actively collaborated on projects evolving around dance, film, and theatre. Kimia’s work has been showcased by organizations such as Iranian Female Composer Association, Music on Main, Western Front, Vancouver New Music, and Media Arts Committee. She has been featured in The New York Times, Georgia Straight, MusicWorks Magazine, Vancouver Sun, and Sequenza 21. Her work has been performed at festivals around the world including Ars Electronica Festival, Festival Ecos Urbanos, Tehran Contemporary Sounds, AudioVisual Frontiers Virtual Exhibition, The New York City Electroacoustic Music Festival, Yarn/Wire Institute, Ensemble Evolution, New Music on the Point, wasteLAnd Summer Academy, EQ: Evolution of the String Quartet, Modulus Festival, and SALT New Music Festival. She holds a BFA in Music Composition from Simon Fraser University’s Interdisciplinary School for the Contemporary Arts, having studied with Sabrina Schroeder and Mauricio Pauly. Kimia is currently pursuing her DMA in Music Composition at Stanford University.

For more details about the sisters and the performance, Marilyn R. Wilson has written up a February 21, 2024 interview with both sisters for her Olio blog,

Can you share a little bit about your background, the life, work, experiences that led you to who you are today?
Kayla: I’m a visual artist with a focus on fashion design, and textile development. I like to explore ways to create wearable art with minimal waste produced in the process. I studied painting at Azadehgan School of Art in Iran and fashion design & technology at Wilson School of Design in Vancouver. My interest in fashion is rooted in creating functional art. I enjoy the business aspect of fashion however, I want to push boundaries of how fashion can be seen as art rather than solely as production.

Kimia: I’m a composer of acoustic and electronic music, I perform and build instruments, and a lot of times I combine these components together. Working with various disciplines is also an important part of my practice. I studied piano performance at Tehran Music School before moving to Vancouver to study composition at Simon Fraser University. I am currently a doctorate candidate in music composition at Stanford University. I love electronic music, food, and sports! My family, partner, and friends are a huge part of my life!

You have your premier event called “Interweave” coming up on March 21st at The Kent Gallery in Vancouver. What can guests attending expect this evening?

Kayla & Kimia: Interweave is a multidisciplinary performance that bridges fashion, music, technology, and dance. Our dancers will be performing in garments designed by Kayla, that are embedded with microcontrollers and sensors developed by Kimia. The dancers control various musical parameters through their movements and their interaction with the sensors that are incorporated within the garments. Along with works for movement and dance, there will be a live electronic music performance made for custom-made instruments. So far we have received an amazing amount of support and RSVPs from the art industry in Vancouver and look forward to welcoming many local creative individuals.

We’d love to know about the team of professionals who are working hard to create this unique experience. 

Kayla & Kimia: We are working with the amazing choreographers/dancers Anya Saugstad and Daria Mikhailiuk. We are thankful for Laleh Zandi’s help for creating a sculpture for one of our instruments which will be performed by Kimia. Celeste Betancur and Richard Lee have been our amazing audio tech assistants. We are very appreciative of everyone involved in FXE’s premiere and can’t wait to showcase our hard work.

I have a bit more about Kimia Koochakzadeh-Yazdi and her work in music from a February 27, 2024 profile on the SFU School for the Contemporary Arts website, Note: Links have been removed,

Please introduce yourself.

I’m a composer of acoustic and electronic music, I perform and build instruments, and a lot of times, I combine these components together. Working with various disciplines is also an important part of my practice. I studied piano performance at Tehran Music School before moving to Vancouver to study composition at Simon Fraser University, graduating from the SCA in 2020. I am currently a doctoral student in music composition at Stanford University, where I spend most of my time.

Tell us about your current studies.

I’m in the third year of the DMA (Doctor of Musical Arts) program at Stanford University. I do the majority of my work at the Center for Computer Research in Music and Acoustics (CCRMA). I’m currently trying to learn and to experiment as much as possible! The amount of resources and ideas that I have been exposed to during the last couple of years has been quite significant and wonderful. I have been taking courses in subjects that I never thought I would study, from classes in the computer science and the mechanical engineering departments, to ones in education and theatre. I’m grateful to have been given a supportive platform to truly experiment and to learn.

As for my compositions, they are more melodic than before, and that currently makes me happy. I have started to perform more again (piano and electronics), and it makes me question: why did I ever stop…?

Koochakzadeh-Yazdi’s mention of building instruments reminded me of Icelandic musician Björk and Biophilia, an album, various art projects, and a film (Biophilia Live) featuring a number of musical instruments she created.

Getting back to Interweave, it’s on March 21, 2024 at The Kent, specifically the gallery, which has,

… 14 foot ceilings boasts 50 track lights with the ability to transform the vacuous hall from candlelight to daylight. The lights are fully dimmable in an array of playful hues, according to your whim.   A full array of DMX Lighting and control systems live alongside the track light system and our recently installed (Vancouvers only) immersive projection system [emphasis mine] is ready for your vision.  This is your show.

I wonder if ‘multi-sensory’ includes an immersive experience.

Don’t forget, you have to RSVP for Interweave, which is free.