Shaving the ‘hairs’ off nanocrystals for more efficient electronics

A March 24, 2022 news item on phys.org announced research into nanoscale crystals and how they might be integrated into electronic devices, Note: A link has been removed,

You can carry an entire computer in your pocket today because the technological building blocks have been getting smaller and smaller since the 1950s. But in order to create future generations of electronics—such as more powerful phones, more efficient solar cells, or even quantum computers—scientists will need to come up with entirely new technology at the tiniest scales.

One area of interest is nanocrystals. These tiny crystals can assemble themselves into many configurations, but scientists have had trouble figuring out how to make them talk to each other.  

A new study introduces a breakthrough in making nanocrystals function together electronically. Published March 25 [2022] in Science, the research may open the doors to future devices with new abilities. 

A March 25, 2022 University of Chicago news release (also on EurekAlert but published on March 24, 2022), which originated the news item, expands on the possibilities the research makes possible, Note: Links have been removed,

“We call these super atomic building blocks, because they can grant new abilities—for example, letting cameras see in the infrared range,” said University of Chicago Prof. Dmitri Talapin, the corresponding author of the paper. “But until now, it has been very difficult to both assemble them into structures and have them talk to each other. Now for the first time, we don’t have to choose. This is a transformative improvement.”  

In their paper, the scientists lay out design rules which should allow for the creation of many different types of materials, said Josh Portner, a Ph.D. student in chemistry and one of the first authors of the study. 

A tiny problem

Scientists can grow nanocrystals out of many different materials: metals, semiconductors, and magnets will each yield different properties. But the trouble was that whenever they tried to assemble these nanocrystals together into arrays, the new supercrystals would grow with long “hairs” around them. 

These hairs made it difficult for electrons to jump from one nanocrystal to another. Electrons are the messengers of electronic communication; their ability to move easily along is a key part of any electronic device. 

The researchers needed a method to reduce the hairs around each nanocrystal, so they could pack them in more tightly and reduce the gaps in between. “When these gaps are smaller by just a factor of three, the probability for electrons to jump across is about a billion times higher,” said Talapin, the Ernest DeWitt Burton Distinguished Service Professor of Chemistry and Molecular Engineering at UChicago and a senior scientist at Argonne National Laboratory. “It changes very strongly with distance.”
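
For readers who want a feel for why distance matters so much here, the quoted claim is consistent with quantum tunneling, where the probability of an electron crossing a gap falls off exponentially with the gap's width. Here is a rough back-of-the-envelope sketch; the barrier height and gap sizes are my own illustrative assumptions, not numbers from the paper:

```python
# Rough tunneling estimate: T ~ exp(-2 * kappa * d) for a rectangular barrier (WKB-style).
# The 1 eV barrier height and the 3 nm -> 1 nm gap are illustrative assumptions,
# not values from the Science paper.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def tunneling_probability(gap_m, barrier_ev=1.0):
    """Transmission through a rectangular barrier of width gap_m and height barrier_ev."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * gap_m)

wide = tunneling_probability(3e-9)     # ~3 nm gap (long organic "hairs")
narrow = tunneling_probability(1e-9)   # gap shrunk by a factor of three
print(f"narrow/wide probability ratio: {narrow / wide:.1e}")  # ~8e+08 with these numbers
```

With those made-up numbers, shrinking the gap from 3 nm to 1 nm boosts the tunneling probability by roughly nine orders of magnitude, in line with the "about a billion times" figure quoted above.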

To shave off the hairs, they sought to understand what was going on at the atomic level. For this, they needed the aid of powerful X-rays at the Center for Nanoscale Materials at Argonne and the Stanford Synchrotron Radiation Lightsource at SLAC National Accelerator Laboratory, as well as powerful simulations and models of the chemistry and physics at play. All these allowed them to understand what was happening at the surface—and find the key to harnessing their production.

Part of the process to grow supercrystals is done in solution—that is, in liquid. It turns out that as the crystals grow, they undergo an unusual transformation in which gas, liquid and solid phases all coexist. By precisely controlling the chemistry of that stage, they could create crystals with harder, slimmer exteriors which could be packed in together much more closely. “Understanding their phase behavior was a massive leap forward for us,” said Portner. 

The full range of applications remains unclear, but the scientists can think of multiple areas where the technique could lead. “For example, perhaps each crystal could be a qubit in a quantum computer; coupling qubits into arrays is one of the fundamental challenges of quantum technology right now,” said Talapin. 

Portner is also interested in exploring the unusual intermediate state of matter seen during supercrystal growth: “Triple phase coexistence like this is rare enough that it’s intriguing to think about how to take advantage of this chemistry and build new materials.”

The study included scientists with the University of Chicago, Technische Universität Dresden, Northwestern University, Arizona State University, SLAC, Lawrence Berkeley National Laboratory, and the University of California, Berkeley.

Here’s a link to and a citation for the paper,

Self-assembly of nanocrystals into strongly electronically coupled all-inorganic supercrystals by Igor Coropceanu, Eric M. Janke, Joshua Portner, Danny Haubold, Trung Dac Nguyen, Avishek Das, Christian P. N. Tanner, James K. Utterback, Samuel W. Teitelbaum, Margaret H. Hudson, Nivedina A. Sarma, Alex M. Hinkle, Christopher J. Tassone, Alexander Eychmüller, David T. Limmer, Monica Olvera de la Cruz, Naomi S. Ginsberg and Dmitri V. Talapin. Science • 24 Mar 2022 • Vol 375, Issue 6587 • pp. 1422-1426 • DOI: 10.1126/science.abm6753

This paper is behind a paywall.

Honey-based neuromorphic chips for brainlike computers?

Photo by Mariana Ibanez on Unsplash. Courtesy: Washington State University

An April 5, 2022 news item on Nanowerk explains the connection between honey and a neuromorphic (brainlike) computer chip, Note: Links have been removed,

Honey might be a sweet solution for developing environmentally friendly components for neuromorphic computers, systems designed to mimic the neurons and synapses found in the human brain.

Hailed by some as the future of computing, neuromorphic systems are much faster and use much less power than traditional computers. Washington State University engineers have demonstrated one way to make them more organic too.

In a study published in Journal of Physics D (“Memristive synaptic device based on a natural organic material—honey for spiking neural network in biodegradable neuromorphic systems”), the researchers show that honey can be used to make a memristor, a component similar to a transistor that can not only process but also store data in memory.

An April 5, 2022 Washington State University (WSU) news release (also on EurekAlert) by Sara Zaske, which originated the news item, describes the purpose for the work and details about making chips from honey,

“This is a very small device with a simple structure, but it has very similar functionalities to a human neuron,” said Feng Zhao, associate professor of WSU’s School of Engineering and Computer Science and corresponding author on the study. “This means if we can integrate millions or billions of these honey memristors together, then they can be made into a neuromorphic system that functions much like a human brain.”

For the study, Zhao and first author Brandon Sueoka, a WSU graduate student in Zhao’s lab, created memristors by processing honey into a solid form and sandwiching it between two metal electrodes, making a structure similar to a human synapse. They then tested the honey memristors’ ability to mimic the work of synapses with high switching on and off speeds of 100 and 500 nanoseconds respectively. The memristors also emulated the synapse functions known as spike-timing dependent plasticity and spike-rate dependent plasticity, which are responsible for learning processes in human brains and retaining new information in neurons.
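
The release names spike-timing dependent plasticity (STDP) without explaining it. In generic terms, a synapse's weight is nudged up when a presynaptic spike arrives just before a postsynaptic one and nudged down when the order is reversed, with the size of the change decaying exponentially in the timing difference. The sketch below is a textbook pair-based STDP rule, not code from the WSU study, and the time constants and learning rates are illustrative:

```python
# Generic pair-based STDP rule (illustrative; not from the WSU honey-memristor study).
# dw = +A_plus * exp(-dt/tau_plus) when the presynaptic spike precedes the postsynaptic
# spike (dt > 0), and -A_minus * exp(dt/tau_minus) when it follows (dt < 0).
import math

def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair separated by dt_ms = t_post - t_pre."""
    if dt_ms >= 0:   # pre before post: potentiation
        return a_plus * math.exp(-dt_ms / tau_plus)
    return -a_minus * math.exp(dt_ms / tau_minus)   # post before pre: depression

weight = 0.5
for dt in (5.0, 15.0, -5.0, -15.0):                 # spike-time differences in ms
    weight = min(1.0, max(0.0, weight + stdp_delta_w(dt)))
    print(f"dt = {dt:+.0f} ms -> weight = {weight:.4f}")
```

A memristor emulates this behaviour when its conductance shifts in the same direction, with a comparable dependence on pulse timing.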

The WSU engineers created the honey memristors on a micro-scale, so they are about the size of a human hair. The research team led by Zhao plans to develop them on a nanoscale, about 1/1000 of a human hair, and bundle many millions or even billions together to make a full neuromorphic computing system.

Currently, conventional computer systems are based on what’s called the von Neumann architecture. Named after its creator, this architecture involves an input, usually from a keyboard and mouse, and an output, such as the monitor. It also has a CPU, or central processing unit, and RAM, or memory storage. Transferring data through all these mechanisms from input to processing to memory to output takes a lot of power, at least compared to the human brain, Zhao said. For instance, the Fugaku supercomputer uses upwards of 28 megawatts, roughly equivalent to 28 million watts, to run, while the brain uses only around 10 to 20 watts.
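
Taking the figures in the release at face value, the gap works out to roughly six orders of magnitude; a quick check of the arithmetic:

```python
# Quick arithmetic check using the figures quoted in the release.
fugaku_watts = 28e6                       # ~28 megawatts
brain_watts_low, brain_watts_high = 10, 20
print(f"{fugaku_watts / brain_watts_high:,.0f}x to {fugaku_watts / brain_watts_low:,.0f}x")
# -> roughly 1,400,000x to 2,800,000x the power draw of a human brain
```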

The human brain has more than 100 billion neurons with more than 1,000 trillion synapses, or connections, among them. Each neuron can both process and store data, which makes the brain much more efficient than a traditional computer, and developers of neuromorphic computing systems aim to mimic that structure.

Several companies, including Intel and IBM, have released neuromorphic chips which have the equivalent of more than 100 million “neurons” per chip, but this is not yet near the number in the brain. Many developers are also still using the same nonrenewable and toxic materials that are currently used in conventional computer chips.

Many researchers, including Zhao’s team, are searching for biodegradable and renewable solutions for use in this promising new type of computing. Zhao is also leading investigations into using proteins and other sugars such as those found in Aloe vera leaves in this capacity, but he sees strong potential in honey.

“Honey does not spoil,” he said. “It has a very low moisture concentration, so bacteria cannot survive in it. This means these computer chips will be very stable and reliable for a very long time.”

The honey memristor chips developed at WSU should tolerate the lower levels of heat generated by neuromorphic systems, which do not get as hot as traditional computers. The honey memristors will also cut down on electronic waste.

“When we want to dispose of devices using computer chips made of honey, we can easily dissolve them in water,” he said. “Because of these special properties, honey is very useful for creating renewable and biodegradable neuromorphic systems.”

This also means, Zhao cautioned, that just like conventional computers, users will still have to avoid spilling their coffee on them.

Nice note of humour at the end. There are a few questions: I wonder if the variety of honey (clover, orange blossom, blackberry, etc.) has an impact on the chip’s speed and/or longevity. Also, if someone spilled coffee and the chip melted and a child decided to lap it up, what would happen?

Here’s a link to and a citation for the paper,

Memristive synaptic device based on a natural organic material—honey for spiking neural network in biodegradable neuromorphic systems. Brandon Sueoka and Feng Zhao. Journal of Physics D: Applied Physics, Volume 55, Number 22 (225105) Published 7 March 2022 • © 2022 IOP Publishing Ltd

This paper is behind a paywall.

Spiders can outsource hearing to their webs

A March 29, 2022 news item on ScienceDaily highlights research into how spiders hear,

Everyone knows that humans and most other vertebrate species hear using eardrums that turn soundwave pressure into signals for our brains. But what about smaller animals like insects and arthropods? Can they detect sounds? And if so, how?

Distinguished Professor Ron Miles, a Department of Mechanical Engineering faculty member at Binghamton University’s Thomas J. Watson College of Engineering and Applied Science, has been exploring that question for more than three decades, in a quest to revolutionize microphone technology.

A newly published study of orb-weaving spiders — the species featured in the classic children’s book “Charlotte’s Web” — has yielded some extraordinary results: The spiders are using their webs as extended auditory arrays to capture sounds, possibly giving spiders advanced warning of incoming prey or predators.

Binghamton University (formal name: State University of New York at Binghamton) has made this fascinating (to me anyway) video available,

Binghamton University and Cornell University (also in New York state) researchers worked collaboratively on this project. Consequently, there are two news releases and there is some redundancy but I always find that information repeated in different ways is helpful for learning.

A March 29, 2022 Binghamton University news release (also on EurekAlert) by Chris Kocher gives more detail about the work (Note: Links have been removed),

It is well-known that spiders respond when something vibrates their webs, such as potential prey. In these new experiments, researchers for the first time show that spiders turned, crouched or flattened out in response to sounds in the air.

The study is the latest collaboration between Miles and Ron Hoy, a biology professor from Cornell, and it has implications for designing extremely sensitive bio-inspired microphones for use in hearing aids and cell phones.

Jian Zhou, who earned his PhD in Miles’ lab and is doing postdoctoral research at the Argonne National Laboratory, and Junpeng Lai, a current PhD student in Miles’ lab, are co-first authors. Miles, Hoy and Associate Professor Carol I. Miles from the Harpur College of Arts and Sciences’ Department of Biological Sciences at Binghamton are also authors for this study. Grants from the National Institutes of Health to Ron Miles funded the research.

A single strand of spider silk is so thin and sensitive that it can detect the movement of vibrating air particles that make up a soundwave, which is different from how eardrums work. Ron Miles’ previous research has led to the invention of novel microphone designs that are based on hearing in insects.

“The spider is really a natural demonstration that this is a viable way to sense sound using viscous forces in the air on thin fibers,” he said. “If it works in nature, maybe we should have a closer look at it.”

Spiders can detect miniscule movements and vibrations through sensory organs on their tarsal claws at the tips of their legs, which they use to grasp their webs. Orb-weaver spiders are known to make large webs, creating a kind of acoustic antennae with a sound-sensitive surface area that is up to 10,000 times greater than the spider itself.

In the study, the researchers used Binghamton University’s anechoic chamber, a completely soundproof room under the Innovative Technologies Complex. Collecting orb-weavers from windows around campus, they had the spiders spin a web inside a rectangular frame so they could position it where they wanted.

The team began by playing pure tones from 3 meters away at different sound levels to see whether the spiders responded. Surprisingly, they found spiders can respond to sound levels as low as 68 decibels. For louder sounds, they observed even more types of behavior.

They then placed the sound source at a 45-degree angle to see if the spiders behaved differently. They found that the spiders not only localize the sound source but can also tell the direction of the incoming sound with 100% accuracy.

To better understand the spider-hearing mechanism, the researchers used laser vibrometry and measured over one thousand locations on a natural spider web, with the spider sitting in the center under the sound field. The result showed that the web moves with sound almost at maximum physical efficiency across an ultra-wide frequency range.

“Of course, the real question is, if the web is moving like that, does the spider hear using it?” Miles said. “That’s a hard question to answer.”

Lai added: “There could even be a hidden ear within the spider body that we don’t know about.”

So the team placed a mini-speaker 5 centimeters away from the center of the web where the spider sits, and 2 millimeters away from the web plane — close but not touching the web. This allows the sound to travel to the spider both through air and through the web. The researchers found that the soundwave from the mini-speaker died out significantly as it traveled through the air, but it propagated readily through the web with little attenuation. The sound level was still at around 68 decibels when it reached the spider. The behavior data showed that four out of 12 spiders responded to this web-borne signal.
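
The contrast the researchers describe, with sound spreading out and weakening quickly in air while travelling along the silk with little loss, can be sketched with a simple model. The numbers below are my own illustrative assumptions (a hypothetical source level, the standard spherical-spreading law for air, and an assumed small per-centimetre loss along the silk), not measurements from the paper:

```python
# Toy comparison of air-borne vs. web-borne sound reaching the spider.
# All numbers are illustrative assumptions, not measurements from the PNAS paper:
# air follows spherical spreading (L2 = L1 - 20*log10(r2/r1)); the silk is assumed
# to lose only a small, fixed amount per centimetre.
import math

def air_level_db(level_at_ref_db, ref_cm, distance_cm):
    """Free-field level after spherical spreading from ref_cm out to distance_cm."""
    return level_at_ref_db - 20 * math.log10(distance_cm / ref_cm)

def web_level_db(level_at_source_db, distance_cm, loss_db_per_cm=0.2):
    """Assumed weak, distance-proportional attenuation along the silk."""
    return level_at_source_db - loss_db_per_cm * distance_cm

source_db, ref_cm, spider_cm = 80.0, 0.5, 5.0  # hypothetical level and geometry
print(f"through air: {air_level_db(source_db, ref_cm, spider_cm):.0f} dB at the spider")
print(f"along the web: {web_level_db(source_db, spider_cm):.0f} dB at the spider")
```

Even this toy comparison shows why a web-borne signal can arrive at the spider much louder than the same sound travelling the same distance through open air.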

Those reactions proved that the spiders could hear through the webs, and Lai was thrilled when that happened: “I’ve been working on this research for five years. That’s a long time, and it’s great to see all these efforts will become something that everybody can read.”

The researchers also found that, by crouching and stretching, spiders may be changing the tension of the silk strands, thereby tuning them to pick up different frequencies. By using this external structure to hear, the spider could be able to customize it to hear different sorts of sounds.
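
The tuning idea maps onto the familiar physics of a vibrating string, where the fundamental frequency grows with the square root of tension. Here is a minimal sketch with hypothetical numbers; the strand length, linear density and tensions are placeholders, not values from the study:

```python
# Ideal-string approximation (a simplification of real silk and web mechanics):
# fundamental frequency f0 = (1 / (2 * L)) * sqrt(T / mu), so f0 scales with sqrt(T).
# The strand length, linear density and tensions below are hypothetical placeholders.
import math

def fundamental_hz(length_m, tension_n, mu_kg_per_m):
    """Fundamental frequency of an ideal taut string."""
    return (1.0 / (2.0 * length_m)) * math.sqrt(tension_n / mu_kg_per_m)

length_m = 0.05       # hypothetical 5 cm silk strand
mu = 1e-9             # hypothetical linear density, kg/m
for tension_n in (1e-6, 2e-6, 4e-6):
    f0 = fundamental_hz(length_m, tension_n, mu)
    print(f"T = {tension_n:.0e} N -> f0 = {f0:.0f} Hz")
```

Doubling the tension raises the fundamental by a factor of about 1.4, which is the kind of lever a crouching or stretching spider could in principle pull.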

Future experiments may investigate how spiders make use of the sound they can detect using their web. Additionally, the team would like to test whether other types of web-weaving spiders also use their silk to outsource their hearing.

“It’s reasonable to guess that a similar spider on a similar web would respond in a similar way,” Ron Miles said. “But we can’t draw any conclusions about that, since we tested a certain kind of spider that happens to be pretty common.”

Lai admitted he had no idea he would be working with spiders when he came to Binghamton as a mechanical engineering PhD student.

“I’ve been afraid of spiders all my life, because of their alien looks and hairy legs!” he said with a laugh. “But the more I worked with spiders, the more amazing I found them. I’m really starting to appreciate them.”

A March 29, 2022 Cornell University news release (also on EurekAlert but published March 30, 2022) by Krishna Ramanujan offers a somewhat different perspective on the work (Note: Links have been removed),

Charlotte’s web is made for more than just trapping prey.

A study of orb weaver spiders finds their massive webs also act as auditory arrays that capture sounds, possibly giving spiders advanced warning of incoming prey or predators.

In experiments, the researchers found the spiders turned, crouched or flattened out in response to sounds, behaviors that spiders have been known to exhibit when something vibrates their webs.

The paper, “Outsourced Hearing in an Orb-weaving Spider That Uses its Web as an Auditory Sensor,” published March 29 [2022] in the Proceedings of the National Academy of Sciences, provides the first behavioral evidence that a spider can outsource hearing to its web.

The findings have implications for designing bio-inspired extremely sensitive microphones for use in hearing aids and cell phones.

A single strand of spider silk is so thin and sensitive it can detect the movement of vibrating air particles that make up a sound wave. This is different from how ear drums work, by sensing pressure from sound waves; spider silk detects sound from nanoscale air particles that become excited from sound waves.

“The individual [silk] strands are so thin that they’re essentially wafting with the air itself, jostled around by the local air molecules,” said Ron Hoy, the Merksamer Professor of Biological Science, Emeritus, in the College of Arts and Sciences and one of the paper’s senior authors, along with Ronald Miles, professor of mechanical engineering at Binghamton University.

Spiders can detect miniscule movements and vibrations via sensory organs in their tarsi – claws at the tips of their legs they use to grasp their webs, Hoy said. Orb weaver spiders are known to make large webs, creating a kind of acoustic antennae with a sound-sensitive surface area that is up to 10,000 times greater than the spider itself.

In the study, the researchers used a special quiet room without vibrations or air flows at Binghamton University. They had an orb-weaver build a web inside a rectangular frame, so they could position it where they wanted. The team began by putting a mini-speaker within millimeters of the web without actually touching it, where sound operates as a mechanical vibration. They found the spider detected the mechanical vibration and moved in response.

They then placed a large speaker 3 meters away on the other side of the room from the frame with the web and spider, beyond the range where mechanical vibration could affect the web. A laser vibrometer was able to show the vibrations of the web from excited air particles.

The team then placed the speaker in different locations, to the right, left and center with respect to the frame. They found that the spider not only detected the sound, it turned in the direction of the speaker when it was moved. Also, it behaved differently based on the volume, by crouching or flattening out.

Future experiments may investigate whether spiders rebuild their webs, sometimes daily, in part to alter their acoustic capabilities, by varying a web’s geometry or where it is anchored. Also, by crouching and stretching, spiders may be changing the tension of the silk strands, thereby tuning them to pick up different frequencies, Hoy said.

Additionally, the team would like to test if other types of web-weaving spiders also use their silk to outsource their hearing. “The potential is there,” Hoy said.

Miles’ lab is using tiny fiber strands bio-inspired by spider silk to design highly sensitive microphones that – unlike conventional pressure-based microphones – pick up all frequencies and cancel out background noise, a boon for hearing aids.  

Here’s a link to and a citation for the paper,

Outsourced hearing in an orb-weaving spider that uses its web as an auditory sensor by Jian Zhou, Junpeng Lai, Gil Menda, Jay A. Stafstrom, Carol I. Miles, Ronald R. Hoy, and Ronald N. Miles. Proceedings of the National Academy of Sciences (PNAS) DOI: https://doi.org/10.1073/pnas.2122789119 Published March 29, 2022 | 119 (14) e2122789119

This paper appears to be open access and video/audio files are included (you can hear the sound and watch the spider respond).

ArtSci Salon hosts Basic Necessities—connectivity and cultural creativity in Cuba, a talk on Oct 3, 4:30-6:00 pm ET at York University

It’s like the flood gates have opened and I am being inundated with event notices. The latest is from Toronto’s (Canada) ArtSci Salon (again). From a September 21, 2022 notice (received via email),

Basic Necessities
Connectivity and cultural creativity in Cuba

A public lecture by Nestor Siré
With online participation by Steffen Köhn

Join me in welcoming Nestor Siré.
Nestor Siré is a multimedia artist based in Cuba. His projects and collaborations explore unofficial methods for circulating information and goods, such as alternative forms of economic production, and phenomena resulting from social creativity and recycling, piracy, as well as a-legal activities benefitting from loopholes. Siré will discuss some of his recent creative works in the Cuban context.
His “Paquete Semanal” is an offline digital media circulation system based on in-person file sharing to provide a solution to connectivity and infrastructure failure in Cuba. “Basic Necessities”, a recent collaboration with Steffen Köhn, portrays the dynamics of the informal economy in Cuba as it unfolds in Telegram groups and analyses the eclectic and creative uses of product photography within this digital context.
Köhn will join him in conversation via Zoom.

October 3, 2022
4:30-6:00 pm [ET]
Room YH 245
Glendon Campus [York University]
2275 Bayview Ave
North York,
ON M4N 3M6
Directions

Nestor Siré
(*1988), lives and works in Havana, Cuba.
www.nestorsire.com
Nestor Siré’s artistic practice intervenes directly in social contexts in order to analyze specific cultural phenomena, often engaging with the particular idiosyncrasies of digital culture in the Cuban context.
His works have been shown in the Museo Nacional de Bellas Artes (Havana), Queens Museum (New York), Rhizome (New York), New Museum (New York), Hong-Gah Museum (Taipei), Museo de Arte Contemporáneo (Mexico City), Museo de Arte Contemporáneo, Santa Fe (Argentina), The Photographers’ Gallery (London), among other places. He has participated in events such as the Manifesta 13 Biennial (France), Gwangju Biennale (South Korea), Curitiba Biennial (Brazil), the Havana Biennial (Cuba) and the Asunción International Biennale (Paraguay), the Festival of New Latin American Cinema in Cuba and the Oberhausen International Festival of Short Film (Germany).

Steffen Köhn
(*1980), lives and works in Berlin.
www.steffenkoehn.com

Steffen Köhn is a filmmaker, anthropologist and video artist who uses ethnography to understand contemporary sociotechnical landscapes. For his video and installation works he engages in local collaborations with gig workers, software developers, or science fiction writers to explore viable alternatives to current distributions of technological access and arrangements of power.
His works have been shown at the Academy of the Arts Berlin, Kunsthaus Graz, Vienna Art Week, Hong Gah Museum Taipei, Lulea Biennial, The Photographers’ Gallery and the ethnographic museums of Copenhagen and Dresden. His films have been screened (among others) at the Berlinale, Rotterdam International Film Festival, and the World Film Festival Montreal.

I tried to find out if this event will be webcast or streamed but was unsuccessful. You can check the ArtSci Salon website, perhaps they’ll post something closer to the event date.

Canadian Forum on Innovation and Societal Impact launches at McMaster University on Oct. 12th and 13th 2022

The Canadian Science Policy Centre’s September 22, 2022 announcement (received via email) includes this nugget of information,

The Canadian Forum on Innovation and Societal Impact [CFSI] will launch in the fall of 2022 with a first series of catalyst roundtables, deliberative dialogues and concertation workshops at McMaster University on October 12th and 13th, 2022. A joint venture of the Canadian Science Policy Centre and The/La Collaborative, CFSI will convene social research and innovation stakeholders across sectors with the purpose of exploring alignment on policies and practices that leverage impact-first training and knowledge mobilisation in the Social Sciences, Humanities and Arts (SSHA) to foster innovation and build capacity in the social and municipal sectors. For more information, please click here.

I went to the CFSI webpage on the McMaster University website and found this,

The objective of the inaugural meeting is to understand how universities can better utilize and mobilize social and human knowledge into their communities, in particular into social sector organizations (non-profit, charities, funders) and municipal governments. What are the knowledge gaps? What are the needs and priorities in social sector organizations and the social innovation ecosystem? Which approaches to cross-sectoral and interdisciplinary collaborations can help address the biggest social and policy challenges?

The event is by invitation only [emphasis mine], and will proceed under the Chatham House rules. Sessions will leverage strategic visioning and deliberative dialogue to create stakeholder alignment and deliver a first series of action plans.

Participants will bring individual and organisational perspectives on a range of issues including:

The nature and structure of innovation in the social sector

Campus-community relationships and the challenges of knowledge mobilisation in the social innovation ecosystem

The role of municipal governments in fostering innovation in the social space

Needs around policy innovation and talent in the social and municipal sectors

The contribution of Indigenous knowledge to the social sector and local policy making

Who are the Participants?

Social sector organisation leaders

Social Innovation stakeholders

Research and higher education policy stakeholders

Decision- and policy-makers from municipal governments

Social and human sciences researchers

Concertation and Action Plan

The event is intended as a concertation and consultation. Sense-making workshops and consultative roundtables will aim at building consensus around key concepts and best practices. Catalyst panels and reporting sessions will aim to establish a shared vision for an action plan around cross-sectoral strategies for innovation in the social impact ecosystem.

The genuinely cross-sectoral setting will provide an opportunity to learn from a variety of perspectives in an effort to reduce barriers to knowledge-driven collaboration and partnerships and increase the social capital of research institutions to streamline impact.

This is where it got interesting,

SUBMIT A LETTER OF INTENT TO REQUEST PARTICIPATION

The event is by invitation only and will proceed under the Chatham House rules. Those with a demonstrated interest in the theme of the Forum can submit a letter of intent to request participation in the meeting. The number of places is limited.

Please fill out the participation request FORM and return it to:

forum22@mcmaster.ca

The deadline for submitting the form is September 15th [2022; emphases mine].
Applicants will be notified shortly thereafter.

It’s a bit late but perhaps there’s a little space left for more participants?

I’m not able to confirm whether this event is in person (in Hamilton, Ontario), online, or hybrid (in person and online).

US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs)

If you’ve been longing for an opportunity to discover more and to engage in discussion about brain-machine interfaces (BMIs) and their legal, technical, and ethical issues, an opportunity is just a day away. From a September 20, 2022 (US) National Academies of Sciences, Engineering, and Medicine (NAS/NASEM or National Academies) notice (received via email),

Sept. 22-23 [2022] Workshop Explores Technical, Legal, Ethical Issues Raised by Brain-Machine Interfaces [official title: Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop]

Technological developments and advances in understanding of the human brain have led to the development of new Brain-Machine Interface technologies. These include technologies that “read” the brain to record brain activity and decode its meaning, and those that “write” to the brain to manipulate activity in specific brain regions. Right now, most of these interface technologies are medical devices placed inside the brain or other parts of the nervous system – for example, devices that use deep brain stimulation to modulate the tremors of Parkinson’s disease.

But tech companies are developing mass-market wearable devices that focus on understanding emotional states or intended movements, such as devices used to detect fatigue, boost alertness, or enable thoughts to control gaming and other digital-mechanical systems. Such applications raise ethical and legal issues, including risks that thoughts or mood might be accessed or manipulated by companies, governments, or others; risks to privacy; and risks related to a widening of social inequalities.

A virtual workshop [emphasis mine] hosted by the National Academies of Sciences, Engineering, and Medicine on Sept. 22-23 [2022] will explore the present and future of these technologies and the ethical, legal, and regulatory issues they raise.

The workshop will run from 12:15 p.m. to 4:25 p.m. ET on Sept. 22 and from noon to 4:30 p.m. ET on Sept. 23. View agenda and register.

For those who might want a peek at the agenda before downloading it, I have listed the titles for the sessions (from my downloaded Agenda, Note: I’ve reformatted the information; there are no breaks, discussion periods, or Q&As included),

Sept. 22, 2022 Draft Agenda

12:30 pm ET Brain-Machine and Related Neural Interface Technologies: The State and Limitations of the Technology

2:30 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Movement

Sept. 23, 2022 Draft Agenda

12:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Mood and Affect

2:05 pm ET Brain-Machine and Related Neural Interface Technologies: Reading and Writing the Brain for Thought, Communication, and Memory

4:00 pm ET Concluding Thoughts from Workshop Planning Committee

Regarding terminology, there’s brain-machine interface (BMI), which I think is a more generic term that includes: brain-computer interface (BCI), neural interface and/or neural implant. There are other terms as well, including this one in the title of my September 17, 2020 posting, “Turning brain-controlled wireless electronic prostheses [emphasis mine] into reality plus some ethical points.” I have a more recent April 5, 2022 posting, which is a very deep dive, “Going blind when your neural implant company flirts with bankruptcy (long read).” As you can see, various social issues associated with these devices have been of interest to me.

I’m not sure quite what to make of the session titles. There doesn’t seem to be all that much emphasis on ethical and legal issues but perhaps that’s the role the various speakers will play.

Two art/sci exhibitions, one name: Sensoria from Sept. 16 – Oct. 30, 2022 in Gdansk, Poland and in Toronto, Canada

I got a notice (via email) from Toronto’s ArtSci Salon about Sensoria: The Art and Science of Our Senses 2022. This looks interesting and it is confusing as to which site is hosting which installations/art pieces. It starts nice and easy and then … Here’s more from the notice,

Sensoria: the Art & Science of Our Senses is a multi-site exhibition and symposium that bridges LAZNIA Centre for Contemporary Art (LCCA) in Gdansk, Poland and Sensorium: Centre for Digital Arts & Technology at York University in Toronto, Canada.

Held simultaneously in both locations, the exhibition and symposium will engage multi-sensory research that revitalizes our sensory connections to our surroundings, through and despite technological tools, networks and latencies.

The exhibition component is co-curated by distinguished curator Nina Czegledy (Agents for Change: Facing the Anthropocene, 2020 & Leonardo/ ISAST 50th Celebrations, 2018) and Sensorium director Joel Ong.  Czegledy brings together an international network of artists and scholars who explore the intersection of art, science and the senses. Sited concurrently in both Poland and Toronto, the exhibition will explore the dissociative potential of contemporary technologies on the senses, treating it not only as a social crisis but also an opportunity for creative play and experimentation. It aims to engage a conversation about the senses from the perspective of art, but also science, incorporating artists that straddle the boundaries of knowledge production in a variety of ways.

The event will be complemented by a workshop by Csenge Kolozsvari.

Kolozsvari brings together somatic practices (crawling side by side, drawing, moving with bags full of water, walking backwards, playing with breath, touching textures, voicing etc.) with the concept of the schiz, cut, or interval, following philosophers Deleuze and Guattari in their book Anti-Oedipus. The aim is to build practices that do not presuppose where bodies begin and end, and to agitate the habitual narratives of bodily borders and edges as solid and knowable.

The symposium leverages the exhibition content as the starting point for more in-depth conversation about the connective aesthetics of everyday sensing and the knowledge-creation potential of artists and scientists collaborating in innovative ways. The socio-political turbulences we have experienced worldwide during the last decade have created unprecedented social and personal strife. While connections are sustained now amongst virtual networks that straddle vast spaces, how might we consider the sharing of intimate senses through smell, touch, and bodily movement as a form of mutual support? The symposium explores questions such as these with keynote presentations by Ryszard Khuszcynski [I believe this is the correct spelling: Ryszard Kluszczyński], Chris Salter and David Howse, as well as roundtables between artists and scientists, and performances by Csenge Kolozsvari and York University’s DisPerSions Lab (led by Doug Van Nort). All aspects of the symposium will be presented with virtual components, so as to allow both in-person engagement in Toronto and virtual presence in Gdansk and elsewhere.

Now for details about the Gdansk portion, from the LAZNIA Centre for Contemporary Art (LCCA) event page, (Note 1: This is quite lengthy. Note 2: If you follow the link to the LCCA event page, you may need to click the English language option [upper right hand corner of the screen] and, then, scroll down to click MORE at the bottom of the left text column.)

Dates of the exhibition: 16 September–30 October 2022
Location: CCA Laznia 1 and CCA Laznia 2
Curator: Nina Czegledy

Exhibition: September 16-October 30, 2022
Places: Laznia 1 ( Jaskółcza 1) and Laznia 2 (Strajku Dokerów 5), Gdańsk

Opening: September 16, 2022
– 19:00 (Laznia 1, Dolne Miasto)
– 20:30 (Laznia 2, Nowy Port)

During the vernissage, we provide transport by bus from Łaźnia 1 to Łaźnia 2 and back.

Artists:
Guy van Belle | Karolina Hałatek | Csenge Kolozsvari | Hilda Kozari | Agnes Meyer-Brandis | Gayil Nalls | Raewyn Turner and Brian Harris | Artur Żmijewski

Sensoria, The Art & Science of Our Senses

Curatorial Statement

Nina Czegledy

Introduction

Sensoria, The Art & Science of Our Senses, a multi-site project, is focused on multisensory perception in the arts and the sciences. The cross-disciplinary initiative explores our sensory world through scientific, social, cultural and scholastic interpretations. The exhibitions, performances and the symposium link LAZNIA Centre for Contemporary Art (LCCA) in Gdansk, Poland (1) and Sensorium: Centre for Digital Art and Technology at York University, Toronto, Canada (2) in a cross-institutional and inter-cultural collaboration. The international artists participating in the exhibition and symposium span the globe from New Zealand to Finland to the Czech Republic and reflect on the effects of recent ecological and socio-cultural alterations on sensory organisms in humans and other species.

We perceive the world through our senses, yet for a long time the senses were treated as independent perceptual modules. Contemporary research has confirmed that our senses are fundamentally interrelated and interact with each other (3). Moreover, our perception of visual, auditory or tactile events changes as a result of information exchange between receptors (4). Radical changes such as the constraints of the COVID-19 pandemic have caused extensive psycho-emotional stress and affected every aspect of our lives, from geopolitics to economies to the arts and sciences, including sensory awareness (5). Considering the implications of COVID-19 for the human senses, Derek Victor Byrne noted that initial work has shown short- and likely longer-term negative effects on the human senses (6). Curatorial reflection on these issues has become essential in recent years.

The way that we perceive our environment via our sensory systems has frequently been a source of controversy concerning one of the basic characteristics of our existence (7).

As David Howes observed, “The perceptual is cultural and political, and not simply (as psychologists and neuroscientists would have it) a matter of cognitive processes or neurological mechanisms located in the individual subject” (8).

With changing notions of the constitution of sentient beings, a revision of knowledge has led to a closer engagement with the traditional experience of Indigenous peoples. The benefits of Nature for our sensorial being are well known; however, it is important to remember that our attitude to, and representation of, Nature is always closely linked to political, religious, environmental and social considerations. In investigating sensory awareness, the impact of the geographical, cultural and social context on individual sensory perception should not be underestimated (9).

Curatorial research and development of the Sensoria project since 2019 has aimed to present the theme in an unconventional way. International artist residencies, workshops, presentations and thematically related round table discussions in collaboration with local Polish academic and corporate research institutions were offered in 2019 and 2020, before the pandemic. Strategically, the exhibitions now focus on a “return” to the sensory capacity of the body after the last two and a half years of telematic and virtual modes of communication that have biased the audio-visual spectrums of sensory experience.

While the estrangement of the senses has been exacerbated by technologies, in the way media elements have contributed to the dissociation of the senses from one another and a subsequent bias toward audio-visual content in our digital and virtual environments, the SENSORIA exhibition adapts what Caroline Jones (10) has described as the “creatively dissociated self”. In her landmark 2006 exhibition “Sensorium”, she considers the dissociative potential of contemporary technologies on the senses as an invitation to engage in creative play and experimentation around this prospect. In this way, SENSORIA builds on the unique interests of the artists curated around the olfactory, tactile and sonic senses, and explores the tensions of telematic/virtual co-presence over two geographically separate galleries.

The exhibition’s primary goal is to create broad visibility for the wide variety of art projects concerning sensory perception. It aims to engage a conversation about the senses from the perspective of art, but also science, incorporating artists that straddle the boundaries of knowledge production in a variety of ways. In Poland, the exhibition links established European artists with local Polish ones; the Toronto hub similarly links international artists in the main hubs with local artists. In this way, the exhibition forges networks across continents and ideas, bringing a range of different perspectives together to explore how our globalized world has both linked and disconnected us from one another. In addition, being situated simultaneously in both sites, Sensoria also builds on the unique interests of the artists curated around the olfactory, tactile and sonic senses, and explores the tensions of telematic/virtual co-presence over two geographically separate galleries. Sensoria artists, curated through a collaborative process with the project’s lead curators and team members, have been invited to consider site-specific adaptations of their internationally renowned artworks. In this way, the goal of the project is to revitalize our sensory connections to our immediate surroundings, through and despite technological tools, networks and latencies, and to share in a collective experience and discussion of them. In addition, the symposium component hosted by Sensorium at York University focuses on a “return” to the sensory capacity of the body after the last two and a half years of telematic and virtual modes of communication that have biased the audio-visual spectrums of sensory experience. The constraints of the pandemic have precipitated our current estrangement from our sensuous surroundings, and with the gradual and tentative reopening of regulations in North America, Europe and the world this spring, we expect a resurgence in the desire for people to engage once again with the multi-sensory sensorium, prioritizing the senses of smell, touch and taste that have broadly been neglected in collective experience. The Sensoria symposium will feature artists, curators and theorists through a series of keynote lectures, performances and artist panels.

Sincere thanks to the LAZNIA Team, especially Lila Bosowska and Aleksandra Ksiezopolska for our curatorial collaboration in the difficult times of the last three years. Sincere thanks to Ryszard Kluszczyński for advising the Sensoria project.

Respectful acknowledgements to Jadwiga Charzynska Director of Laznia.

Last but not least deepest thanks to Prof. Yu-Zhi Joel Ong for his role in expanding Sensoria into an international cross-institutional collaboration.

References

1. LAZNIA Centre for Contemporary Art (LCCA), Gdansk, Poland.

2. Sensorium: Centre for Digital Art and Technology at York University, Toronto, Canada. https://sensorium.ampd.yorku.ca/

3. Burston D and Cohen J (2015) “Perceptual Integration, Modularity, and Cognitive Penetration,” in Cognitive Influences on Perception: Implications for Philosophy of Mind, Epistemology, and Philosophy of Action, pp. 123-143. Oxford University Press.

4. Masrour F, Nirshberg G, Schon NM, Leardi J and Barrett E (2015) “Revisiting the empirical case against perceptual modularity.” Front Psychol 6:1676. Published online 4 Nov 2015. doi: 10.3389/fpsyg.2015.01676

5. Stanton TR and Spence C (2020) “The Influence of Auditory Cues on Bodily and Movement Perception.” Front Psychol, 17 January 2020, Sec. Perception Science. https://doi.org/10.3389/fpsyg.2019.03001

6. Byrne DV (2022) “Effects and Implications of COVID-19 for the Human Senses, Consumer Preferences, Appetite and Eating Behaviour: Volume I.” Foods 11(12):1738. Published online 14 June 2022. doi: 10.3390/foods11121738

7. McCann H. “Our sensory experience of the pandemic.” https://pursuit.unimelb.edu.au/

8. Howes D. “Architecture of the Senses.” https://www.david-howes.com/DH-research-sampler-arch-senses.htm

9. Rose DB. “Val Plumwood’s Philosophical Animism: Attentive Inter-actions in the Sentient World.” Environmental Humanities 3(1):93-109.

10. Jones C. “The Mediated Sensorium.” https://citythroughthebody.files.wordpress.com/2013/08/sensorium.pdf

Descriptions of the artworks presented at Sensoria:

Berlin-based artist Agnes Meyer-Brandis contributes “One Tree ID” and “Have a tea with a Tree” to the Sensoria exhibition. One Tree ID is a biochemical and biopoetic odour communication installation. The project transforms the ID of a specific tree into a perfume that can then be applied to the human body. By applying it, a person can invisibly wear not just characteristics of the tree he/she is standing next to, but also use parts of its communication system and potentially have a conversation that – although invisible and inaudible by nature – might still take place on the biochemical level (VOCs) plants use for information exchange. Have a tea with a Tree provides a booking link to a personal video conference with up to 16 trees. The trees will participate in real time. Address for conference booking: www.teawithatree.com. The internet protocol is secured.

Polish artist Karolina Hałatek will present “Ascent” – a large-scale site-specific light installation that embodies a variety of archetypical and physical associations – from microscopic observations, electromagnetic wave dynamics, and the atmospheric phenomenon of a whirlwind to a spiritual epiphany. Most importantly, Ascent offers a unique immersive experience that invites the viewer to become its central point, and transforms the perception of the viewer on a sensual level. The light and the fog create a monumental dynamic space that is participatory, a space that opens up a new dimension and directs attention toward bodily sensations in the explicit environment. The viewer is free to approach the work according to their own sensual response, but direct interaction can offer the potential to evoke a new perceptual imagination.

Bodylandscapes by Csenge Kolozsvari is a single channel video piece feeling-with the fascial planes (connective tissues) of bodies; thinking them beyond human scales and temporalities, as constantly emerging fields. The camera is a listening device for the softness of skin-talk; a composition of detailed skin-textures and close-ups of body parts that are imperceptibly transitioning into one another, following creases and swellings, creating landscapes in-the-making. The video is a proposition for remembering the ecological ways of our belonging, of other ways of knowing, connecting into the vastness that surrounds us and moves across us, of becoming-environment once again.

Artur Żmijewski, a Polish artist, asked a group of visually impaired people to paint the world as they see it. The result is compiled in Blindly, a video with sound. Some of the volunteers were congenitally disabled; others became blind in their lifetime. In the film they draw self-portraits and landscapes, occasionally asking the artist for instructions or giving verbal explanations for their decisions. Their paintings are clumsy and abstract. It is however not the resulting works but the process of making them that is at the core of the film.

Hilda Kozari leads a three-hour memory workshop with visually impaired participants together with Emilia Leszkowicz, a local neuroscientist, coordinated with the Education Department of LAZNIA. The workshop focuses on triggering smell memories and on discussions of the scents and the memories they evoke. Tactility is also a theme of this workshop for the visually impaired participants, conveyed via felt discs in various sizes. From the different sizes of the discs it is possible to form Braille words and messages.

The findings and results of the workshop are to be transferred onto the Sensoria exhibition walls. The multisensory installation is accessible to visually impaired visitors during the exhibition. Other visitors are invited to rethink perception, enjoy the smell and touch of the installation, and see the Braille signs as a spatial, visually fascinating structure. It is hoped that this will be an opportunity to recognise the visually impaired as active members of the community.

Gayil Nalls from New York City brings her World Sensorium project to Sensoria. World Sensorium was officially part of New York City’s millennium event “Times Square 2000: The Global Celebration at the Crossroads of the World,” where for 24 hours around New Year’s Eve, the peoples and cultures of nations around the world were celebrated through sight, sound, and – with World Sensorium – scent. World Sensorium is a large-scale, transdisciplinary, olfactory artwork comprised of botanical substances formulated by country population percentages into a single global essence. The phytoconstituents are those most valued by humanity since ancient times, plants established through ethnobotanical research and a global survey process with world governments. A discussion of the World Sensorium link between psychology and olfaction, and the phenomenon of odor-evoked memory, follows. Individuals attending are invited to participate in “Experience World Sensorium: Poland” and will have a chance to delve into a fragmentary memoir of their own experience at a future date.

Raewyn Turner and Brian Harris, New Zealand-based artists, present Read Reed at Sensoria. Read Reed proceeds from the mythological story of the discretion of Midas’s hairdresser who, feeling that he might betray Midas’s trust, dug a hole in the earth and spoke his secret into it, only to have the secret broadcast to the world via the whispering reeds which grew over the hole. ReedRead relates to data misinterpretation, hidden secrets and the desire for vast wealth. The artists are using the story of secrets whispered into a hole in the earth and the inevitable leakage and exposure of secrets as a starting point. Data from any source, including reeds swishing in the wind, may be formed into letters and words that relate to digital capitalism and the obscuring of knowledge through the unknowns of ambiguity, uncertainty and risk. Both the clandestine nature of pervasive monitoring and the authorization for increasing the scope and breadth of collected information originate with the NSA’s aspiration to sniff it all, know it all, exploit it all, etc., and are part of creating the conditions for digital capitalism.

Guy Van Belle, in collaboration with Krzysztof Topolski and the Gdansk University Choir, presents the Fanfara Gdansk performance, using a simple and open setup for the participatory visitors/performers. For centuries the arts have been rather interested in the non-human expressions around us, or in communication and phenomena that we faintly or hardly understand. To quote Paul Demarinis, “Music is sound to my ears”. The sound score gives an indication of discrete and continuous time, pitches and amplitudes, complexities and silences, some combinatory ideas, etc., in the form of sounds you can listen to, sing/play along with or counter, imitate and enrich… The expressivity and performativity aim at providing a real-time interpretation of the sound score.

The Fanfara Gdansk performance consists of a backing track of recorded and computer-generated birdsongs, which is transmitted over local FM and received by the musicians on headsets from their phones, tablets or portable radio receivers. All musicians ‘sing’ along with the birdsongs, but they can also bring additional small handheld objects that produce sound: battery-operated electronics, resonating objects… Some megaphones and small amplifiers will be available, all of them wearable.
The singers from the choir move slowly in formation together with the additional musicians and participatory audience, towards the entrance of the exhibition. Any single movement from the musicians and the audience influences the position of the others.

There’s more about the Toronto portion of the exhibitions, etc. on York University’s Sensorium Centre for Digital Arts and Technologies’ events page, Note: This is where it gets a little confusing as it seems that some of these artists are displaying the same pieces in two different cities at the same time: World Sensorium has a version in Poland and a version in Toronto; Read Reed is in Poland and ReedRead is in Toronto; I’m not sure about One Tree ID, which seems to be in two places at once,

Courtesy: York University

SENSORIA: the Art & Science of Our Senses is a multi-site exhibition and symposium that bridges LAZNIA Centre for Contemporary Art (LCCA) in Gdansk, Poland and Sensorium: Centre for Digital Arts & Technology at York University in Toronto, Canada. Held simultaneously in both locations, the exhibition and symposium will engage multi-sensory research that revitalizes our sensory connections to our surroundings, through and despite technological tools, networks and latencies.

Register for the symposium Oct. 4–5, 2022: https://www.eventbrite.ca/e/symposium-sensoria-the-art-and-science-of-our-senses-tickets-418241681127.   

The symposium will also feature 2 keynote performances from 12:45pm EST each day:  The Power of the Spill by Csenge Kolozsvari Oct. 4, 2022, and Doug Van Nort Electro-Acoustic Orchestra Oct. 5, 2022. 

EXHIBITION

September 26 – October 14, 2022
Gales Gallery, York University
105 Accolade West Building,
86 Fine Arts Road, Toronto, ON 

Held at the Gales Gallery, the Sensoria exhibition will feature the works:

One Tree ID. Agnes Meyer-Brandis
SunEaters. Grace Grothaus
World Sensorium. Gayil Nalls
Emergent: A Mobile Gallery featuring “The Connection”. Michaela Pňaček, Roberta Buiani, Lorella Di Cintio and Kavi
ReedRead. Raewyn Turner / Brian Harris
Kinetic Shadows. Hrysovalanti Maheras
Marching Choir. Guy Van Belle

The exhibition is also open during Nuit Blanche as part of the AGYU’s “Streams” project.

In addition, Csenge Kolozsvari will be leading the Schizo-Somatic Workshop on Oct. 3, 2022. Please click on the hyperlinks for separate registration. 

SYMPOSIUM

SENSORIA: The Art and Science of Our Senses symposium presents keynote lectures, discussions and performances around the connective aesthetics of everyday sensing and the knowledge-creation potential of artists and scientists collaborations. Registration link : https://www.eventbrite.ca/e/symposium-sensoria-the-art-and-science-of-our-senses-tickets-418241681127

Running from Oct. 4–5 (9am – 12noon EST), the symposium will feature keynote lectures by Ryszard Kluszcynski, Chris Salter and David Howse; roundtable discussions by the artists/theorists/scientists Agnes Meyer-Brandis, Gayil Nalls, Rasa Smite, Katarzyna Pastuszak, Grace Grothaus, Katarzyna Sloboda, Raewyn Turner/Brian Harris, Hilda Kosari [a web search suggests that Kozari is a more correct spelling] and Agnieszka Sorokowska.

The symposium will also feature 2 keynote performances from 12:45pm EST each day:  The Power of the Spill by Csenge Kolozsvari Oct. 4, 2022, and Doug Van Nort Electro-Acoustic Orchestra Oct. 5, 2022. 

In addition, Csenge Kolozsvari will be leading the Schizo-Somatic Workshop on Oct. 3, 2022. Please click on the hyperlinks for separate registration. 

Symposium Schedule:

Tuesday, Oct. 4: 9am – 1:30pm EST
9:00 : Introductions and land acknowledgement (Joel Ong)
9:05 : Introduction from Sensoria Curator (Nina Czegledy)
9:10 : Introduction from LAZNIA (Jadwiga Charzynska, Director)
9:30 : Keynote 1 —Professor Ryszard Kluszcynski
10:30 : Sensoria Panel 1 — Agnes Meyer-Brandis, Gayil Nalls, Rasa Smite, Katarzyna Pastuszak, Grace Grothaus (Discussant)
12:00 :  Lunch Break
12:30 : Keynote Performance 1 — Csenge Kolozsvari [Sensorium Flex Space] + Q&A
1:30 : End

Wednesday, Oct. 5: 9am – 1:30pm EST

9:00 : Introductions and land acknowledgement
9:10 : Curatorial presentation  (Toronto curatorial team)
9:30 : Keynote 2 — Professors Chris Salter and David Howse
10:30 :  Sensoria Panel 2 — Katarzyna Sloboda, Raewyn Turner/Brian Harris, Hilda Kosari, Agnieszka Sorokowska, Hrysovalanti Maheras (Discussant)
12:00 : Lunch Break
12:30 : Keynote Performance 2 — Doug Van Nort Telematic Orchestra  [DisPerSions Lab] + Q&A
1:30 : Ending Notes

Description: 

SENSORIA: the Art & Science of Our Senses is a multi-site exhibition and symposium that bridges LAZNIA Centre for Contemporary Art (LCCA) in Gdansk, Poland and Sensorium: Centre for Digital Arts & Technology at York University in Toronto, Canada. Held simultaneously in both locations, the exhibition and symposium will engage multi-sensory research that revitalizes our sensory connections to our surroundings, through and despite technological tools, networks and latencies.

The exhibition is co-curated by distinguished curator Nina Czegledy (Agents for Change: Facing the Anthropocene, 2020 & Leonardo/ISAST 50th Celebrations, 2018) and Sensorium director Joel Ong, with the support of assistant curators Eva Lu and Cleo Sallis-Parchet. Sensoria explores the intersection of art, science and the senses, bringing together an international network of artists: Guy van Belle, Roberta Buiani, Lorella Di Cintio, Grace Grothaus, Kavi, Hrysovalanti Maheras, Agnes Meyer-Brandis, Gayil Nalls, Michael Palumbo, Michaela Pnacekova, Raewyn Turner and Brian Harris. Sited concurrently in both Poland and Toronto, the exhibition will explore the dissociative potential of contemporary technologies on the senses, treating it not only as a social crisis but also as an opportunity for creative play and experimentation. It aims to engage a conversation about the senses from the perspective of art, but also of science, incorporating artists who straddle the boundaries of knowledge production in a variety of ways.

The symposium leverages the exhibition content as the starting point for more in-depth conversation about the connective aesthetics of everyday sensing and the knowledge-creation potential of artists and scientists collaborating in innovative ways. The socio-political turbulences we have experienced worldwide during the last decade have created unprecedented social and personal strife. While connections are sustained now amongst virtual networks that straddle vast spaces, how might we consider the sharing of intimate senses through smell, touch, and bodily movement as a form of mutual support? The symposium explores questions such as these with keynote presentations by Ryszard Kluszcynski, Chris Salter and David Howse, as well as roundtables between artists and scientists: Agnes Meyer-Brandis, Gayil Nalls, Rasa Smite, Katarzyna Pastuszak, Grace Grothaus, Katarzyna Sloboda, Hilda Kosari [Kozari], Agnieszka Sorokowska, Hrysovalanti Maheras, Raewyn Turner and Brian Harris. All aspects of the symposium will be presented with virtual components, so as to allow both in-person engagement in Toronto and virtual presence in Gdansk and elsewhere.

The event will be complemented by a workshop by Csenge Kolozsvari. Kolozsvari’s Schizo-Somatic Session brings together somatic practices (crawling side by side, drawing, moving with bags full of water, walking backwards, playing with breath, touching textures, voicing etc.) with the concept of the schiz, cut, or interval, following philosophers Deleuze and Guattari in their book Anti-Oedipus. The aim is to build practices that do not presuppose where bodies begin and end, and to agitate the habitual narratives of bodily borders and edges as solid and knowable.

Csenge Kolozsvari’s performance The Power of the Spill is a multidisciplinary live performance working at the intersection of digital and imaginary technologies. It uses live video feedback, algorithmic image processes (Hydra) and sound, as well as a movement-choreography informed by somatic practices. This project is a study of visual perception and how it affects our ways of making sense of the world, aiming to create an alternative lens that acknowledges the vitality of objects, a topology that is cross-species, and the ways seemingly separate entities are in constant exchange, towards a more ecological way of being. The performance is in collaboration with Kieran Maraj, with original live coding by Rodrigo Velasco, and will be followed by a Q&A with the artist.

Doug Van Nort’s performance The Telematic Orchestra  

The sense of touch (or tactility) is not highlighted in the image for the poster but there are some workshops which incorporate that sense.

I apologize for the redundancies and for not correcting or noting the errors in the various texts and with people’s names.

One final note, York University’s Sensorium Centre for Digital Arts and Technologies was last mentioned here in an October 26, 2020 posting about an ArtSci Salon event.

Science Summit at the 77th United Nations General Assembly (Science Summit UNGA77) from September 13 – 30, 2022

Late last week (at the end of Friday, Sept. 16, 2022) I saw a notice about a Science Summit at the 77th United Nations (UN) General Assembly. (BTW, Canadians may want to check out the Special note further down this posting.) Here’s more about the 8th edition of the Science Summit from the UN Science Summit webpage (Note: I have made some formatting changes),

ISC [International Science Council] and its partners will organise the 8th edition of the Science Summit around the 77th United Nations General Assembly (UNGA77) on 13-30 September 2022.

The role and contribution of science to attaining the United Nations Sustainable Development Goals (SDGs) will be the central theme of the Summit. The objective is to develop and launch science collaborations to demonstrate global science mechanisms and activities to support the attainment of the UN SDGs, Agenda 2030 and Local2030. The meeting will also prepare input for the United Nations Summit of the Future, which will take place during UNGA78 beginning on 12 September 2023.

The UN General Assembly (UNGA) has elected, by acclamation, Csaba Kőrösi, Director of Environmental Sustainability at the Office of the President of Hungary, to serve as President of its 77th session. In his acceptance speech, Kőrösi said his presidency’s efforts will be guided by the motto, ‘Solutions through Solidarity, Sustainability and Science.’ He will succeed Abdulla Shahid of Maldives, current UNGA President, assuming the presidency on 13 September 2022.

The Summit will examine what enabling policy, regulatory and financial environments are needed to implement and sustain the science mechanisms required to support genuinely global scientific collaborations across continents, nations and themes. Scientific discovery through the analysis of massive data sets is at hand. This data-enabled approach to science, research and development will be necessary if the SDGs are to be achieved.

SSUNGA77 builds on the successful Science Summit at UNGA76, which brought together over 460 speakers from all continents in more than 80 sessions.

SSUNGA77 will bring together thought leaders, scientists, technologists, innovators, policymakers, decision-makers, regulators, financiers, philanthropists, journalists and editors, and community leaders to increase health science and citizen collaborations across a broad spectrum of themes, including ICT, nutrition, agriculture and the environment.

Objectives

Present key science initiatives in a series of workshops, presentations, seminars, roundtables and plenary sessions addressing each UN SDG.

Promote collaboration by enabling researchers, scientists and civil society organisations to become aware of each other and work to understand and address critical challenges.

Promote inclusive science, including increasing access to scientific data by lower and middle-income countries.

Focus meetings will be organised around each of the UN SDGs, bringing key stakeholders together to understand and advance global approaches.

Priority will be given to developing science capacity globally to implement the SDGs.

Demonstrate how research infrastructures work as a driver for international cooperation.

Promote awareness of data-enabled science and related capacities and infrastructures.

Understand how key UN initiatives, including The Age of Digital Interdependence, LOCAL 2030, and the Summit of the Future, can provide a basis for increasing science cooperation globally to address global challenges.

Highlights

Two days of meetings on Wall Street at the New York Stock Exchange, highlighting the theme of science’s contribution to the SDGs and launching a series of meetings with corporate financiers on science funding.

Science and ICT [Information and Communications Technology]/Digital ministers around the world will be approached for their engagement and support, to have their respective missions at the United Nations host individual meetings and to request the participation of their Prime Ministers.

A powerful youth programme for children, teens and students. This includes a space-related initiative currently involving some 60 countries, and this number is very likely to increase. The aim is to inspire the world’s youth to come together and lead regional inter-generational projects to attain the “moonshots” of the 21st century – the first in this series would be the 2030 SDGs.

13-30 September 2022: Thematic Sessions and Scientific Sessions: approximately 400 sessions are planned: approximately 100 hybrid events will take place in New York City, with the remainder taking place online;

20 Keynote Lectures by eminent scientists and innovative thinkers;

12 Thematic Days, covering soil, biodiversity, indigenous knowledge, materials, clean water;

4 Plenary Sessions;

100 Ministers will participate, covering science, health, environment, climate, industry and regulation;

At least 100,000 participants – in person and online.

Here’s a link to the Agenda for the 8th Science Summit and, should one or more sessions pique your interest, you can register for free here. Sessions are in person and/or via Zoom.

Special notes

Dr. Mona Nemer, Chief Science Advisor of Canada, is presenting at 4 pm EDT (1 pm PDT) today, Monday, September 19, 2022. Here’s more from the session page (keep scrolling down past the registration button),

(REF 19052 – Hybrid) Keynote Speech: Dr Mona Nemer, Chief Science Advisor of Canada (In-Person)

“Science knows no country, because knowledge belongs to humanity,” Pasteur famously said nearly 150 years ago. In the time since, the world has seen an enormous increase in the pace of scientific discovery and consequent need for collaboration, as our challenges become both more urgent and more complex. From climate change and food security to pandemic preparedness and building the societies of tomorrow, science has a major role to play in guiding us toward a peaceful, healthy and sustainable future, and getting there requires that we work together.

In this talk, Canada’s Chief Science Advisor, Dr. Mona Nemer, shares her insights on the importance of a global science culture that promotes openness, diversity and collaboration, and how growing our science advisory systems will help to both frame the emerging issues that the world faces and provide the evidence needed to solve them.

“Science knows no country …” Really?

One final bit: it’s regarding the second highlight (Science and ICT [Information and Communications Technology]/Digital ministers …). Canada did have a Minister of Digital Government and, at times, has had a Minister of Science. Currently, neither position exists. For the nitpicky, there is Innovation, Science and Economic Development Canada (ISED), which seems to be largely dedicated to monetizing science rather than the pursuit of science.

Age of AI and Big Data – Impact on Justice, Human Rights and Privacy Zoom event on September 28, 2022 at 12 – 1:30 pm EDT

In a September 15, 2022 announcement (received via email), the Canadian Science Policy Centre (CSPC) described an event (Age of AI and Big Data – Impact on Justice, Human Rights and Privacy) centered on some of the latest government doings on artificial intelligence and privacy (Bill C-27),

In an increasingly connected world, we share a large amount of our data in our daily lives without our knowledge while browsing online, traveling, shopping, etc. More and more companies are collecting our data and using it to create algorithms or AI. The use of our data against us is becoming more and more common. The algorithms used may often be discriminatory against racial minorities and marginalized people.

As technology moves at a high pace, we have started to incorporate many of these technologies into our daily lives without understanding their consequences. These technologies have enormous impacts on our very own identity and collectively on civil society and democracy. 

Recently, the Canadian Government introduced the Artificial Intelligence and Data Act (AIDA) and Bill C-27 [which includes three acts in total] in parliament, regulating the use of AI in our society. In this panel, we will discuss how AI and big data are affecting us, their impact on society, and how the new regulations affect us. 

Date: Sep 28 Time: 12:00 pm – 1:30 pm EDT Event Category: Virtual Session

Register Here

For some reason, there was no information about the moderator and panelists, other than their names, titles, and affiliations. Here’s a bit more:

Moderator: Yuan Stevens (from her eponymous website’s About page), Note: Links have been removed,

Yuan (“You-anne”) Stevens (she/they) is a legal and policy expert focused on sociotechnical security and human rights.

She works towards a world where powerful actors—and the systems they build—are held accountable to the public, especially when it comes to marginalized communities. 

She brings years of international experience to her role at the Leadership Lab at Toronto Metropolitan University [formerly Ryerson University], having examined the impacts of technology on vulnerable populations in Canada, the US and Germany. 

Committed to publicly accessible legal and technical knowledge, Yuan has written for popular media outlets such as the Toronto Star and Ottawa Citizen and has been quoted in news stories by the New York Times, the CBC and the Globe & Mail.

Yuan is a research fellow at the Centre for Law, Technology and Society at the University of Ottawa and a research affiliate at Data & Society Research Institute. She previously worked at Harvard University’s Berkman Klein Center for Internet & Society during her studies in law at McGill University.

She has been conducting research on artificial intelligence since 2017 and is currently exploring sociotechnical security as an LL.M candidate at University of Ottawa’s Faculty of Law working under Florian Martin-Bariteau.

Panelist: Brenda McPhail (from her Centre for International Governance Innovation profile page),

Brenda McPhail is the director of the Canadian Civil Liberties Association’s Privacy, Surveillance and Technology Project. Her recent work includes guiding the Canadian Civil Liberties Association’s interventions in key court cases that raise privacy issues, most recently at the Supreme Court of Canada in R v. Marakah and R v. Jones, which focused on privacy rights in sent text messages; research into surveillance of dissent, government information sharing, digital surveillance capabilities and privacy in relation to emergent technologies; and developing resources and presentations to drive public awareness about the importance of privacy as a social good.

Panelist: Nidhi Hegde (from her University of Alberta profile page),

My research has spanned many areas such as resource allocation in networking, smart grids, social information networks, machine learning. Broadly, my interest lies in gaining a fundamental understanding of a given system and the design of robust algorithms.

More recently my research focus has been in privacy in machine learning. I’m interested in understanding how robust machine learning methods are to perturbation, and privacy and fairness constraints, with the goal of designing practical algorithms that achieve privacy and fairness.

Bio

Before joining the University of Alberta, I spent many years in industry research labs. Most recently, I was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where my team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, I spent many years in research labs in Europe working on a variety of interesting and impactful problems. I was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where I led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. I also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, and privacy in recommendations.

Panelist: Benjamin Faveri (from his LinkedIn page),

About

Benjamin Faveri is a Research and Policy Analyst at the Responsible AI Institute (RAII) [headquartered in Austin, Texas]. Currently, he is developing their Responsible AI Certification Program and leading it through Canada’s national accreditation process. Over the last several years, he has worked on numerous certification program-related research projects such as fishery economics and certification programs, police body-worn camera policy certification, and emerging AI certifications and assurance systems. Before his work at RAII, Benjamin completed a Master of Public Policy and Administration at Carleton University, where he was a Canada Graduate Scholar, Ontario Graduate Scholar, Social Innovation Fellow, and Visiting Scholar at UC Davis School of Law. He holds undergraduate degrees in criminology and psychology, finishing both with first class standing. Outside of work, Benjamin reads about how and why certification and private governance have been applied across various industries.

Panelist: Ori Freiman (from his eponymous website’s About page)

I research at the forefront of technological innovation. This website documents some of my academic activities.

My formal background is in Analytic Philosophy, Library and Information Science, and Science & Technology Studies. Until September 22′ [September 2022], I was a Post-Doctoral Fellow at the Ethics of AI Lab, at the University of Toronto’s Centre for Ethics. Before joining the Centre, I submitted my dissertation, about trust in technology, to The Graduate Program in Science, Technology and Society at Bar-Ilan University.

I have also found a number of overviews and bits of commentary about the Canadian federal government’s proposed Bill C-27, which I think of as an omnibus bill as it includes three proposed Acts.

The lawyers are excited, but I’m starting with the Responsible AI Institute’s (RAII) response, as one of the panelists (Benjamin Faveri) works for them and it offers a view from a closely neighbouring country. From a June 22, 2022 RAII news release, Note: Links have been removed,

Business Implications of Canada’s Draft AI and Data Act

On June 16 [2022], the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), as part of the broader Digital Charter Implementation Act 2022 (Bill C-27). Shortly thereafter, it also launched the second phase of the Pan-Canadian Artificial Intelligence Strategy.

Both RAII’s Certification Program, which is currently under review by the Standards Council of Canada, and the proposed AIDA legislation adopt the same approach of gauging an AI system’s risk level in context; identifying, assessing, and mitigating risks both pre-deployment and on an ongoing basis; and pursuing objectives such as safety, fairness, consumer protection, and plain-language notification and explanation.

Businesses should monitor the progress of Bill C-27 and align their AI governance processes, policies, and controls to its requirements. Businesses participating in RAII’s Certification Program will already be aware of requirements, such as internal Algorithmic Impact Assessments to gauge risk level and Responsible AI Management Plans for each AI system, which include system documentation, mitigation measures, monitoring requirements, and internal approvals.

The AIDA draft is focused on the impact of any “high-impact system”. Companies would need to assess whether their AI systems are high-impact; identify, assess, and mitigate potential harms and biases flowing from high-impact systems; and “publish on a publicly available website a plain-language description of the system” if making a high-impact system available for use. The government elaborated in a press briefing that it will describe in future regulations the classes of AI systems that may have high impact.

The AIDA draft also outlines clear criminal penalties for entities which, in their AI efforts, possess or use unlawfully obtained personal information or knowingly make available for use an AI system that causes serious harm or defrauds the public and causes substantial economic loss to an individual.

If enacted, AIDA would establish the Office of the AI and Data Commissioner, to support Canada’s Minister of Innovation, Science and Economic Development, with powers to monitor company compliance with the AIDA, to order independent audits of companies’ AI activities, and to register compliance orders with courts. The Commissioner would also help the Minister ensure that standards for AI systems are aligned with international standards.

Apart from being aligned with the approach and requirements of Canada’s proposed AIDA legislation, RAII is also playing a key role in the Standards Council of Canada’s AI accreditation pilot. The second phase of the Pan-Canadian Artificial Intelligence Strategy includes funding for the Standards Council of Canada to “advance the development and adoption of standards and a conformity assessment program related to AI.”

The AIDA’s introduction shows that while Canada is serious about governing AI systems, its approach to AI governance is flexible and designed to evolve as the landscape changes.

Charles Mandel’s June 16, 2022 article for Betakit (Canadian Startup News and Tech Innovation) provides an overview of the government’s overall approach to data privacy, AI, and more,

The federal Liberal government has taken another crack at legislating privacy with the introduction of Bill C-27 in the House of Commons.

Among the bill’s highlights are new protections for minors as well as Canada’s first law regulating the development and deployment of high-impact AI systems.

“It [Bill C-27] will address broader concerns that have been expressed since the tabling of a previous proposal, which did not become law,” a government official told a media technical briefing on the proposed legislation.

François-Philippe Champagne, the Minister of Innovation, Science and Industry, together with David Lametti, the Minister of Justice and Attorney General of Canada, introduced the Digital Charter Implementation Act, 2022. The ministers said Bill C-27 will significantly strengthen Canada’s private sector privacy law, create new rules for the responsible development and use of artificial intelligence (AI), and continue to put in place Canada’s Digital Charter.

The Digital Charter Implementation Act includes three proposed acts: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA) – all of which have implications for Canadian businesses.

Bill C-27 follows an attempt by the Liberals to introduce Bill C-11 in 2020. The latter was the federal government’s attempt to reform privacy laws in Canada, but it failed to gain passage in Parliament after the then-federal privacy commissioner criticized the bill.

The proposed Artificial Intelligence and Data Act is meant to protect Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias.

For businesses developing or implementing AI this means that the act will outline criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

…

An AI and data commissioner will support the minister of innovation, science, and industry in ensuring companies comply with the act. The commissioner will be responsible for monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate.

The commissioner would also be expected to outline clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

Canada already collaborates on AI standards to some extent with a number of countries. Canada, France, and 13 other countries launched an international AI partnership to guide policy development and “responsible adoption” in 2020.

The federal government also has the Pan-Canadian Artificial Intelligence Strategy for which it committed an additional $443.8 million over 10 years in Budget 2021. Ahead of the 2022 budget, Trudeau [Canadian Prime Minister Justin Trudeau] had laid out an extensive list of priorities for the innovation sector, including tasking Champagne with launching or expanding national strategy on AI, among other things.

Within the AI community, companies and groups have been looking at AI ethics for some time. Scotiabank donated $750,000 in funding to the University of Ottawa in 2020 to launch a new initiative to identify solutions to issues related to ethical AI and technology development. And Richard Zemel, co-founder of the Vector Institute [formed as part of the Pan-Canadian Artificial Intelligence Strategy], joined Integrate.AI as an advisor in 2018 to help the startup explore privacy and fairness in AI.

When it comes to the Consumer Privacy Protection Act, the Liberals said the proposed act responds to feedback received on the proposed legislation, and is meant to ensure that the privacy of Canadians will be protected, and that businesses can benefit from clear rules as technology continues to evolve.

“A reformed privacy law will establish special status for the information of minors so that they receive heightened protection under the new law,” a federal government spokesperson told the technical briefing.

…

The act is meant to provide greater controls over Canadians’ personal information, including how it is handled by organizations as well as giving Canadians the freedom to move their information from one organization to another in a secure manner.

The act puts the onus on organizations to develop and maintain a privacy management program that includes the policies, practices and procedures put in place to fulfill obligations under the act. That includes the protection of personal information, how requests for information and complaints are received and dealt with, and the development of materials to explain an organization’s policies and procedures.

The bill also ensures that Canadians can request that their information be deleted from organizations.

The bill provides the privacy commissioner of Canada with broad powers, including the ability to order a company to stop collecting data or using personal information. The commissioner will be able to levy significant fines for non-compliant organizations—with fines of up to five percent of global revenue or $25 million, whichever is greater, for the most serious offences.

The proposed Personal Information and Data Protection Tribunal Act will create a new tribunal to enforce the Consumer Privacy Protection Act.

Although the Liberal government said it engaged with stakeholders for Bill C-27, the Council of Canadian Innovators (CCI) expressed reservations about the process. Nick Schiavo, CCI’s director of federal affairs, said it had concerns over the last version of privacy legislation, and had hoped to present those concerns when the bill was studied at committee, but the previous bill died before that could happen.

Now the lawyers. Simon Hodgett, Kuljit Bhogal, and Sam Ip have written a June 27, 2022 overview, which highlights the key features from the perspective of Osler, a leading business law firm practising internationally from offices across Canada and in New York.

Maya Medeiros and Jesse Beatson authored a June 23, 2022 article for Norton Rose Fulbright, a global law firm, which notes a few ‘weak’ spots in the proposed legislation,

… While the AIDA is directed to “high-impact” systems and prohibits “material harm,” these and other key terms are not yet defined. Further, the quantum of administrative penalties will be fixed only upon the issuance of regulations. 

Moreover, the AIDA sets out publication requirements but it is unclear if there will be a public register of high-impact AI systems and what level of technical detail about the AI systems will be available to the public. More clarity should come through Bill C-27’s second and third readings in the House of Commons, and subsequent regulations if the bill passes.

The AIDA may have extraterritorial application if components of global AI systems are used, developed, designed or managed in Canada. The European Union recently introduced its Artificial Intelligence Act, which also has some extraterritorial application. Other countries will likely follow. Multi-national companies should develop a coordinated global compliance program.

I have two podcasts from Michael Geist, a lawyer and Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa.

  • June 26, 2022: The Law Bytes Podcast, Episode 132: Ryan Black on the Government’s Latest Attempt at Privacy Law Reform “The privacy reform bill that is really three bills in one: a reform of PIPEDA, a bill to create a new privacy tribunal, and an artificial intelligence regulation bill. What’s in the bill from a privacy perspective and what’s changed? Is this bill any likelier to become law than an earlier bill that failed to even advance to committee hearings? To help sort through the privacy aspects of Bill C-27, Ryan Black, a Vancouver-based partner with the law firm DLA Piper (Canada) …” (about 45 mins.)
  • August 15, 2022: The Law Bytes Podcast, Episode 139: Florian Martin-Bariteau on the Artificial Intelligence and Data Act “Critics argue that regulations are long overdue, but have expressed concern about how much of the substance is left for regulations that are still to be developed. Florian Martin-Bariteau is a friend and colleague at the University of Ottawa, where he holds the University Research Chair in Technology and Society and serves as director of the Centre for Law, Technology and Society. He is currently a fellow at the Harvard’s Berkman Klein Center for Internet and Society …” (about 38 mins.)

One of the world’s most precise microchip sensors thanks to nanotechnology, machine learning, extended cognition, and spiderwebs

I love science stories about the inspirational qualities of spiderwebs. A November 26, 2021 news item on phys.org describes how spiderwebs have inspired advances in sensors and, potentially, quantum computing,

A team of researchers from TU Delft [Delft University of Technology; Netherlands] managed to design one of the world’s most precise microchip sensors. The device can function at room temperature—a ‘holy grail’ for quantum technologies and sensing. Combining nanotechnology and machine learning inspired by nature’s spiderwebs, they were able to make a nanomechanical sensor vibrate in extreme isolation from everyday noise. This breakthrough, published in the Advanced Materials Rising Stars Issue, has implications for the study of gravity and dark matter as well as the fields of quantum internet, navigation and sensing.

Inspired by nature’s spider webs and guided by machine learning, Richard Norte (left) and Miguel Bessa (right) demonstrate a new type of sensor in the lab. [Photography: Frank Auperlé]

A November 24, 2021 TU Delft press release (also on EurekAlert but published on November 23, 2021), which originated the news item, describes the research in more detail,

One of the biggest challenges for studying vibrating objects at the smallest scale, like those used in sensors or quantum hardware, is how to keep ambient thermal noise from interacting with their fragile states. Quantum hardware for example is usually kept at near absolute zero (−273.15°C) temperatures, with refrigerators costing half a million euros apiece. Researchers from TU Delft created a web-shaped microchip sensor which resonates extremely well in isolation from room temperature noise. Among other applications, their discovery will make building quantum devices much more affordable.

Hitchhiking on evolution
Richard Norte and Miguel Bessa, who led the research, were looking for new ways to combine nanotechnology and machine learning. How did they come up with the idea to use spiderwebs as a model? Richard Norte: “I’ve been doing this work already for a decade when during lockdown, I noticed a lot of spiderwebs on my terrace. I realised spiderwebs are really good vibration detectors, in that they want to measure vibrations inside the web to find their prey, but not outside of it, like wind through a tree. So why not hitchhike on millions of years of evolution and use a spiderweb as an initial model for an ultra-sensitive device?” 

Since the team did not know anything about spiderwebs’ complexities, they let machine learning guide the discovery process. Miguel Bessa: “We knew that the experiments and simulations were costly and time-consuming, so with my group we decided to use an algorithm called Bayesian optimization, to find a good design using few attempts.” Dongil Shin, co-first author in this work, then implemented the computer model and applied the machine learning algorithm to find the new device design. 

Microchip sensor based on spiderwebs
To the researchers’ surprise, the algorithm proposed a relatively simple spiderweb out of 150 different spiderweb designs, which consists of only six strings put together in a deceivingly simple way. Bessa: “Dongil’s computer simulations showed that this device could work at room temperature, in which atoms vibrate a lot, but still have an incredibly low amount of energy leaking in from the environment – a higher Quality factor in other words. With machine learning and optimization we managed to adapt Richard’s spider web concept towards this much better quality factor.”
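
The press release doesn’t include the team’s code or physical models, but the general recipe it describes (treat each candidate web as a point in a design-parameter space, run an expensive vibration simulation for it, and let a surrogate-model optimizer suggest the next design to try) can be sketched in a few lines of Python. The snippet below is a toy illustration only: the parameter names, their ranges and the ‘quality factor’ function are all invented for the example, and scikit-optimize’s gp_minimize simply stands in for whatever tooling the researchers actually used.

# Toy Bayesian-optimization loop in the spirit of the TU Delft workflow.
# Everything here (parameters, ranges, objective) is hypothetical.
import numpy as np
from skopt import gp_minimize          # pip install scikit-optimize
from skopt.space import Real, Integer

def simulated_quality_factor(params):
    """Stand-in for an expensive vibration simulation.
    params = [string_thickness_nm, web_radius_um, n_strings]
    Returns a fake Q factor with one broad optimum."""
    thickness, radius, n_strings = params
    return 1e6 * np.exp(-((thickness - 90) / 40) ** 2
                        - ((radius - 150) / 80) ** 2
                        - ((n_strings - 6) / 3) ** 2)

search_space = [
    Real(20, 200, name="string_thickness_nm"),
    Real(50, 500, name="web_radius_um"),
    Integer(3, 12, name="n_strings"),
]

# gp_minimize minimizes, so negate the objective to maximize Q;
# n_calls is kept small because each "simulation" is assumed to be costly.
result = gp_minimize(lambda p: -simulated_quality_factor(p),
                     search_space, n_calls=30, random_state=0)

print("best design:", result.x, "estimated Q:", -result.fun)

The point of the sketch is the shape of the loop rather than the numbers: a handful of expensive evaluations, a probabilistic model of the design space, and a proposed ‘next design’ at each step.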

Based on this new design, co-first author Andrea Cupertino built a microchip sensor with an ultra-thin, nanometre-thick film of ceramic material called Silicon Nitride. They tested the model by forcefully vibrating the microchip ‘web’ and measuring the time it takes for the vibrations to stop. The result was spectacular: a record-breaking isolated vibration at room temperature. Norte: “We found almost no energy loss outside of our microchip web: the vibrations move in a circle on the inside and don’t touch the outside. This is somewhat like giving someone a single push on a swing, and having them swing on for nearly a century without stopping.”
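
The ‘push on a swing’ image refers to what is usually called a ringdown measurement: drive the resonator, stop driving, and watch how slowly the oscillation amplitude decays. For an amplitude envelope that falls off as exp(-t/τ) at resonance frequency f0, the mechanical quality factor is roughly Q ≈ π·f0·τ. The short sketch below, with made-up numbers rather than the paper’s data, shows that calculation on a synthetic decay trace.

# Toy ringdown analysis: estimate a mechanical quality factor from a decaying
# amplitude envelope. The frequency and decay time are invented for illustration.
import numpy as np

f0 = 150e3                 # hypothetical resonance frequency, Hz
tau_true = 2000.0          # hypothetical amplitude decay time constant, s

# Synthetic ringdown envelope A(t) = A0 * exp(-t / tau)
t = np.linspace(0.0, 5 * tau_true, 1000)
amplitude = np.exp(-t / tau_true)

# Recover tau by a straight-line fit to log(amplitude), as one would with data.
slope, _ = np.polyfit(t, np.log(amplitude), 1)
tau_fit = -1.0 / slope

# For an amplitude decay exp(-t/tau), Q = pi * f0 * tau.
Q = np.pi * f0 * tau_fit
print(f"fitted tau = {tau_fit:.0f} s, Q = {Q:.2e}")

The longer the fitted decay time at a given frequency, the higher the quality factor, which is what ‘almost no energy loss’ means in practice.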

Implications for fundamental and applied sciences
With their spiderweb-based sensor, the researchers show how this interdisciplinary strategy opens a path to new breakthroughs in science, by combining bio-inspired designs, machine learning and nanotechnology. This novel paradigm has interesting implications for quantum internet, sensing, microchip technologies and fundamental physics: exploring ultra-small forces, for example, like gravity or dark matter, which are notoriously difficult to measure. According to the researchers, the discovery would not have been possible without the university’s Cohesion grant, which led to this collaboration between nanotechnology and machine learning.

Here’s a link to and a citation for the paper,

Spiderweb Nanomechanical Resonators via Bayesian Optimization: Inspired by Nature and Guided by Machine Learning by Dongil Shin, Andrea Cupertino, Matthijs H. J. de Jong, Peter G. Steeneken, Miguel A. Bessa, Richard A. Norte. Advanced Materials, Volume 34, Issue 3, January 20, 2022, 2106248. DOI: https://doi.org/10.1002/adma.202106248 First published (online): 25 October 2021

This paper is open access.

If spiderwebs can be sensors, can they also think?

It’s called ‘extended cognition’ or the ‘extended mind thesis’ (Wikipedia entry), and the theory holds that the mind is not solely in the brain or even in the body. Predictably, the theory has both its supporters and critics, as noted in Joshua Sokol’s article “The Thoughts of a Spiderweb,” originally published on May 22, 2017 in Quanta Magazine (Note: Links have been removed),

Millions of years ago, a few spiders abandoned the kind of round webs that the word “spiderweb” calls to mind and started to focus on a new strategy. Before, they would wait for prey to become ensnared in their webs and then walk out to retrieve it. Then they began building horizontal nets to use as a fishing platform. Now their modern descendants, the cobweb spiders, dangle sticky threads below, wait until insects walk by and get snagged, and reel their unlucky victims in.

In 2008, the researcher Hilton Japyassú prompted 12 species of orb spiders collected from all over Brazil to go through this transition again. He waited until the spiders wove an ordinary web. Then he snipped its threads so that the silk drooped to where crickets wandered below. When a cricket got hooked, not all the orb spiders could fully pull it up, as a cobweb spider does. But some could, and all at least began to reel it in with their two front legs.

Their ability to recapitulate the ancient spiders’ innovation got Japyassú, a biologist at the Federal University of Bahia in Brazil, thinking. When the spider was confronted with a problem to solve that it might not have seen before, how did it figure out what to do? “Where is this information?” he said. “Where is it? Is it in her head, or does this information emerge during the interaction with the altered web?”

In February [2017], Japyassú and Kevin Laland, an evolutionary biologist at the University of Saint Andrews, proposed a bold answer to the question. They argued in a review paper, published in the journal Animal Cognition, that a spider’s web is at least an adjustable part of its sensory apparatus, and at most an extension of the spider’s cognitive system.

This would make the web a model example of extended cognition, an idea first proposed by the philosophers Andy Clark and David Chalmers in 1998 to apply to human thought. In accounts of extended cognition, processes like checking a grocery list or rearranging Scrabble tiles in a tray are close enough to memory-retrieval or problem-solving tasks that happen entirely inside the brain that proponents argue they are actually part of a single, larger, “extended” mind.

Among philosophers of mind, that idea has racked up citations, including supporters and critics. And by its very design, Japyassú’s paper, which aims to export extended cognition as a testable idea to the field of animal behavior, is already stirring up antibodies among scientists. …

It seems there is no definitive answer to the question of whether there is an ‘extended mind’ but it’s an intriguing question made (in my opinion) even more so with the spiderweb-inspired sensors from TU Delft.