
Bionanomotors for bio-inspired robots on the battlefield

An October 9, 2019 news item on ScienceDaily provides some insight into the latest US Army research into robots,

In an effort to make robots more effective and versatile teammates for Soldiers in combat, Army researchers are on a mission to understand the value of the molecular living functionality of muscle, and the fundamental mechanics that would need to be replicated in order to artificially achieve the capabilities arising from the proteins responsible for muscle contraction.

Caption: Army researchers are on a mission to understand the value of the molecular ‘living’ functionality of muscle, and the fundamental mechanics that would need to be replicated in order to artificially achieve the capabilities arising from the proteins responsible for muscle contraction. Credit: US Army-Shutterstock

An October 8, 2019 US Army Research Laboratory news release (also on EurekAlert but published on October 9, 2019), which originated the news item, delves further into the research,

Bionanomotors, like myosins that move along actin networks, are responsible for most methods of motion in all life forms. Thus, the development of artificial nanomotors could be game-changing in the field of robotics research.

Researchers from the U.S. Army Combat Capabilities Development Command’s [CCDC] Army Research Laboratory [ARL] have been looking to identify a design that would allow the artificial nanomotor to take advantage of Brownian motion, the property of particles to agitatedly move simply because they are warm.

The CCDC ARL researchers believe understanding and developing these fundamental mechanics are a necessary foundational step toward making informed decisions on the viability of new directions in robotics involving the blending of synthetic biology, robotics, and dynamics and controls engineering.

“By controlling the stiffness of different geometrical features of a simple lever-arm design, we found that we could use Brownian motion to make the nanomotor more capable of reaching desirable positions for creating linear motion,” said Dean Culver, a researcher in CCDC ARL’s Vehicle Technology Directorate. “This nano-scale feature translates to more energetically efficient actuation at a macro scale, meaning robots that can do more for the warfighter over a longer amount of time.”

According to Culver, the descriptions of protein interactions in muscle contraction are typically fairly high-level. More specifically, rather than describing the forces that act on an individual protein to seek its counterpart, prescribed or empirical rate functions that dictate the conditions under which a binding or a release event occurs have been used by the research community to replicate this biomechanical process.

“These widely accepted muscle contraction models are akin to a black-box understanding of a car engine,” Culver said. “More gas, more power. It weighs this much and takes up this much space. Combustion is involved. But, you can’t design a car engine with that kind of surface-level information. You need to understand how the pistons work, and how finely injection needs to be tuned. That’s a component-level understanding of the engine. We dive into the component-level mechanics of the built-up protein system and show the design and control value of living functionality as well as a clearer understanding of design parameters that would be key to synthetically reproducing such living functionality.”

Culver stated that the capacity for Brownian motion to kick a tethered particle from a disadvantageous elastic position to an advantageous one, in terms of energy production for a molecular motor, has been illustrated by ARL at a component level, a crucial step in the design of artificial nanomotors that offer the same performance capabilities as biological ones.
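
For readers who want a feel for what “escape via Brownian motion” means, here’s a toy simulation I put together. It is not the model from the paper; the double-well “elastic” potential, the parameters and the units are all invented for illustration. It simply shows how random thermal kicks can carry a tethered particle over a barrier from a less favourable position to a more favourable one.

```python
import numpy as np

# Toy overdamped Langevin (Brownian dynamics) simulation of a particle in an
# asymmetric double-well "elastic" potential. This is NOT the model from the
# paper -- the potential, parameters and units are invented -- it only shows
# how thermal kicks can carry the particle from the less favourable well
# (near x = -1) to the more favourable one (near x = +1).

rng = np.random.default_rng(0)

k_BT = 0.5          # thermal energy (arbitrary units)
gamma = 1.0         # drag coefficient
dt = 1e-3           # time step
n_steps = 200_000

def force(x):
    # U(x) = (x^2 - 1)^2 - 0.3*x, so the well near x = +1 sits lower in energy
    return -(4.0 * x * (x**2 - 1.0) - 0.3)

x = -1.0                                   # start in the less favourable well
kick = np.sqrt(2.0 * k_BT * dt / gamma)    # Brownian kick size per step

first_escape = None
for step in range(n_steps):
    x += force(x) / gamma * dt + kick * rng.standard_normal()
    if first_escape is None and x > 0.8:
        first_escape = step * dt

print(f"final position: {x:+.2f}, first reached favourable well at t = {first_escape}")
```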

“This research adds a key piece of the puzzle for fast, versatile robots that can perform autonomous tactical maneuver and reconnaissance functions,” Culver said. “These models will be integral to the design of distributed actuators that are silent, low thermal signature and efficient – features that will make these robots more impactful in the field.”

Culver noted that they are silent because the muscles don’t make a lot of noise when they actuate, especially compared to motors or servos, cold because the amount of heat generation in a muscle is far less than a comparable motor, and efficient because of the advantages of the distributed chemical energy model and potential escape via Brownian motion.

According to Culver, the breadth of applications for actuators inspired by the biomolecular machines in animal muscles is still unknown, but many of the existing application spaces have clear Army applications such as bio-inspired robotics, nanomachines and energy harvesting.

“Fundamental and exploratory research in this area is therefore a wise investment for our future warfighter capabilities,” Culver said.

Moving forward, there are two primary extensions of this research.

“First, we need to better understand how molecules, like the tethered particle discussed in our paper, interact with each other in more complicated environments,” Culver said. “In the paper, we see how a tethered particle can usefully harness Brownian motion to benefit the contraction of the muscle overall, but the particle in this first model is in an idealized environment. In our bodies, it’s submerged in a fluid carrying many different ions and energy-bearing molecules in solution. That’s the last piece of the puzzle for the single-motor, nano-scale models of molecular motors.”

The second extension, stated Culver, is to repeat this study with a full 3-D model, paving the way to scaling up to practical designs.

Also notable is the fact that because this research is so young, ARL researchers used this project to establish relationships with other investigators in the academic community.

“Leaning on their expertise will be critical in the years to come, and we’ve done a great job of reaching out to faculty members and researchers from places like the University of Washington, Duke University and Carnegie Mellon University,” Culver said.

According to Culver, taking this research project into the next steps with help from collaborative partners will lead to tremendous capabilities for future Soldiers in combat, a critical requirement considering the nature of the ever-changing battlefield.

Here’s a link to and a citation for the paper,

A Dynamic Escape Problem of Molecular Motors by Dean Culver, Bryan Glaz, Samuel Stanton. J Biomech Eng. Paper No: BIO-18-1527 https://doi.org/10.1115/1.4044580 Published Online: August 1, 2019

This paper is behind a paywall.

Connecting biological and artificial neurons (in UK, Switzerland, & Italy) over the web

Caption: The virtual lab connecting Southampton, Zurich and Padova. Credit: University of Southampton

A February 26, 2020 University of Southampton press release (also on EurekAlert) describes this work,

Research on novel nanoelectronics devices led by the University of Southampton enabled brain neurons and artificial neurons to communicate with each other. This study has for the first time shown how three key emerging technologies can work together: brain-computer interfaces, artificial neural networks and advanced memory technologies (also known as memristors). The discovery opens the door to further significant developments in neural and artificial intelligence research.

Brain functions are made possible by circuits of spiking neurons, connected together by microscopic, but highly complex links called ‘synapses’. In this new study, published in the scientific journal Nature Scientific Reports, the scientists created a hybrid neural network where biological and artificial neurons in different parts of the world were able to communicate with each other over the internet through a hub of artificial synapses made using cutting-edge nanotechnology. This is the first time the three components have come together in a unified network.

During the study, researchers based at the University of Padova in Italy cultivated rat neurons in their laboratory, whilst partners from the University of Zurich and ETH Zurich created artificial neurons on Silicon microchips. The virtual laboratory was brought together via an elaborate setup controlling nanoelectronic synapses developed at the University of Southampton. These synaptic devices are known as memristors.

The Southampton based researchers captured spiking events being sent over the internet from the biological neurons in Italy and then distributed them to the memristive synapses. Responses were then sent onward to the artificial neurons in Zurich also in the form of spiking activity. The process simultaneously works in reverse too; from Zurich to Padova. Thus, artificial and biological neurons were able to communicate bidirectionally and in real time.

Themis Prodromakis, Professor of Nanotechnology and Director of the Centre for Electronics Frontiers at the University of Southampton said “One of the biggest challenges in conducting research of this kind and at this level has been integrating such distinct cutting edge technologies and specialist expertise that are not typically found under one roof. By creating a virtual lab we have been able to achieve this.”

The researchers now anticipate that their approach will ignite interest from a range of scientific disciplines and accelerate the pace of innovation and scientific advancement in the field of neural interfaces research. In particular, the ability to seamlessly connect disparate technologies across the globe is a step towards the democratisation of these technologies, removing a significant barrier to collaboration.

Professor Prodromakis added “We are very excited with this new development. On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI [artificial intelligence] chips.”

I’m fascinated by this work and, after taking a look at the paper, I have to say it’s surprisingly accessible. In other words, I think I get the general picture. For example (from the Introduction to the paper; citation and link follow further down),

… To emulate plasticity, the memristor MR1 is operated as a two-terminal device through a control system that receives pre- and post-synaptic depolarisations from one silicon neuron (ANpre) and one biological neuron (BN), respectively. …

If I understand this properly, they’ve integrated a biological neuron and an artificial neuron in a single system across three countries.
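
To make that picture concrete for myself, here’s a cartoon of the relay loop in a few lines of code. Everything in it is my own invention for illustration (the class name, the thresholding rule, the toy plasticity update); the real experiment used physical memristors and lab hardware in Southampton, not anything resembling this sketch.

```python
import random

# Cartoon of the three-site loop: biological neurons (Padova) -> memristive
# synapse (Southampton) -> silicon neuron (Zurich). The gating rule and the
# Hebbian-style update below are illustrative assumptions, not the protocol
# or plasticity mechanism used in the actual experiment.

class MemristiveSynapse:
    """Stands in for one memristor: a conductance that weights spike events."""
    def __init__(self, conductance=0.5):
        self.conductance = conductance      # normalised to 0..1

    def relay(self, pre_spike):
        # The higher the conductance, the more likely a presynaptic spike
        # is passed on to the postsynaptic (artificial) neuron.
        return pre_spike and (random.random() < self.conductance)

    def update(self, pre_spike, post_spike):
        # Toy Hebbian-style rule: strengthen on coincident activity.
        if pre_spike and post_spike:
            self.conductance = min(1.0, self.conductance + 0.05)

synapse = MemristiveSynapse()
for t in range(20):
    bio_spike = random.random() < 0.3       # spike event arriving from Padova
    art_spike = synapse.relay(bio_spike)    # forwarded on to Zurich?
    synapse.update(bio_spike, art_spike)
    print(t, bio_spike, art_spike, round(synapse.conductance, 2))
```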

For those who care to venture forth, here’s a link and a citation for the paper,

Memristive synapses connect brain and silicon spiking neurons by Alexantrou Serb, Andrea Corna, Richard George, Ali Khiat, Federico Rocchi, Marco Reato, Marta Maschietto, Christian Mayr, Giacomo Indiveri, Stefano Vassanelli & Themistoklis Prodromakis. Scientific Reports volume 10, Article number: 2590 (2020) DOI: https://doi.org/10.1038/s41598-020-58831-9 Published 25 February 2020

The paper is open access.

Uncanny Valley: Being Human in the Age of AI (artificial intelligence) at the de Young museum (San Francisco, US) February 22 – October 25, 2020

So we’re still stuck in 20th century concepts about artificial intelligence (AI), eh? Sean Captain’s February 21, 2020 article (for Fast Company) about the new AI exhibit in San Francisco suggests that artists can help us revise our ideas (Note: Links have been removed),

Though we’re well into the age of machine learning, popular culture is stuck with a 20th century notion of artificial intelligence. While algorithms are shaping our lives in real ways—playing on our desires, insecurities, and suspicions in social media, for instance—Hollywood is still feeding us clichéd images of sexy, deadly robots in shows like Westworld and Star Trek Picard.

The old-school humanlike sentient robot “is an important trope that has defined the visual vocabulary around this human-machine relationship for a very long period of time,” says Claudia Schmuckli, curator of contemporary art and programming at the Fine Arts Museums of San Francisco. It’s also a naïve and outdated metaphor, one she is challenging with a new exhibition at San Francisco’s de Young Museum, called Uncanny Valley, that opens on February 22 [2020].

The show’s name [Uncanny Valley: Being Human in the Age of AI] is a kind of double entendre referencing both the dated and emerging conceptions of AI. Coined in the 1970s, the term “uncanny valley” describes the rise and then sudden drop off of empathy we feel toward a machine as its resemblance to a human increases. Putting a set of cartoony eyes on a robot may make it endearing. But fitting it with anatomically accurate eyes, lips, and facial gestures gets creepy. As the gap between the synthetic and organic narrows, the inability to completely close that gap becomes all the more unsettling.

But the artists in this exhibit are also looking to another valley—Silicon Valley, and the uncanny nature of the real AI the region is building. “One of the positions of this exhibition is that it may be time to rethink the coordinates of the Uncanny Valley and propose a different visual vocabulary,” says Schmuckli.

Artist Stephanie Dinkins faces off with robot Bina48, a bot on display at the de Young Museum’s Uncanny Valley show. [Photo: courtesy of the artist; courtesy of the Fine Arts Museums of San Francisco]

From Captain’s February 21, 2020 article,

… the resemblance to humans is only synthetic-skin deep. Bina48 can string together a long series of sentences in response to provocative questions from Dinkins, such as, “Do you know racism?” But the answers are sometimes barely intelligible, or at least lack the depth and nuance of a conversation with a real human. The robot’s jerky attempts at humanlike motion also stand in stark contrast to Dinkins’s calm bearing and fluid movement. Advanced as she is by today’s standards, Bina48 is tragically far from the sci-fi concept of artificial life. Her glaring shortcomings hammer home why the humanoid metaphor is not the right framework for understanding at least today’s level of artificial intelligence.

For anybody who has more curiosity about the ‘uncanny valley’, there’s this Wikipedia entry.

For more details about the ‘Uncanny Valley: Being Human in the Age of AI’ exhibition there’s this September 26, 2019 de Young museum news release,

What are the invisible mechanisms of current forms of artificial intelligence (AI)? How is AI impacting our personal lives and socioeconomic spheres? How do we define intelligence? How do we envision the future of humanity?

SAN FRANCISCO (September 26, 2019) — As technological innovation continues to shape our identities and societies, the question of what it means to be, or remain human has become the subject of fervent debate. Taking advantage of the de Young museum’s proximity to Silicon Valley, Uncanny Valley: Being Human in the Age of AI arrives as the first major exhibition in the US to explore the relationship between humans and intelligent machines through an artistic lens. Organized by the Fine Arts Museums of San Francisco, with San Francisco as its sole venue, Uncanny Valley: Being Human in the Age of AI will be on view from February 22 to October 25, 2020.

“Technology is changing our world, with artificial intelligence both a new frontier of possibility but also a development fraught with anxiety,” says Thomas P. Campbell, Director and CEO of the Fine Arts Museums of San Francisco. “Uncanny Valley: Being Human in the Age of AI brings artistic exploration of this tension to the ground zero of emerging technology, raising challenging questions about the future interface of human and machine.”

The exhibition, which extends through the first floor of the de Young and into the museum’s sculpture garden, explores the current juncture through philosophical, political, and poetic questions and problems raised by AI. New and recent works by an intergenerational, international group of artists and activist collectives—including Zach Blas, Ian Cheng, Simon Denny, Stephanie Dinkins, Forensic Architecture, Lynn Hershman Leeson, Pierre Huyghe, Christopher Kulendran Thomas in collaboration with Annika Kuhlmann, Agnieszka Kurant, Lawrence Lek, Trevor Paglen, Hito Steyerl, Martine Syms, and the Zairja Collective—will be presented.

The Uncanny Valley

In 1970 Japanese engineer Masahiro Mori introduced the concept of the “uncanny valley” as a terrain of existential uncertainty that humans experience when confronted with autonomous machines that mimic their physical and mental properties. An enduring metaphor for the uneasy relationship between human beings and lifelike robots or thinking machines, the uncanny valley and its edges have captured the popular imagination ever since. Over time, the rapid growth and affordability of computers, cloud infrastructure, online search engines, and data sets have fueled developments in machine learning that fundamentally alter our modes of existence, giving rise to a newly expanded uncanny valley.

“As our lives are increasingly organized and shaped by algorithms that track, collect, evaluate, and monetize our data, the uncanny valley has grown to encompass the invisible mechanisms of behavioral engineering and automation,” says Claudia Schmuckli, Curator in Charge of Contemporary Art and Programming at the Fine Arts Museums of San Francisco. “By paying close attention to the imminent and nuanced realities of AI’s possibilities and pitfalls, the artists in the exhibition seek to thicken the discourse around AI. Although fables like HBO’s sci-fi drama Westworld, or Spike Jonze’s feature film Her still populate the collective imagination with dystopian visions of a mechanized future, the artists in this exhibition treat such fictions as relics of a humanist tradition that has little relevance today.”

In Detail

Ian Cheng’s digitally simulated AI creature BOB (Bag of Beliefs) reflects on the interdependency of carbon and silicon forms of intelligence. An algorithmic Tamagotchi, it is capable of evolution, but its growth, behavior, and personality are molded by online interaction with visitors who assume collective responsibility for its wellbeing.

In A.A.I. (artificial artificial intelligence), an installation of multiple termite mounds of colored sand, gold, glitter and crystals, Agnieszka Kurant offers a vibrant critique of new AI economies, with their online crowdsourcing marketplace platforms employing invisible armies of human labor at sub-minimum wages.

Simon Denny’s Amazon worker cage patent drawing as virtual King Island Brown Thornbill cage (US 9,280,157 B2: “System and method for transporting personnel within an active workspace”, 2016) (2019) also examines the intersection of labor, resources, and automation. He presents 3-D prints and a cage-like sculpture based on an unrealized machine patent filed by Amazon to contain human workers. Inside the cage an augmented reality application triggers the appearance of a King Island Brown Thornbill — a bird on the verge of extinction; casting human labor as the proverbial canary in the mine. The humanitarian and ecological costs of today’s data economy also inform a group of works by the Zairja Collective that reflect on the extractive dynamics of algorithmic data mining.

Hito Steyerl addresses the political risks of introducing machine learning into the social sphere. Her installation The City of Broken Windows presents a collision between commercial applications of AI in urban planning and communal and artistic acts of resistance against neighborhood tipping: one of its short films depicts a group of technicians purposefully smashing windows to teach an algorithm how to recognize the sound of breaking glass, and another follows a group of activists through a Camden, NJ neighborhood as they work to keep decay at bay by replacing broken windows in abandoned homes with paintings.

Addressing the perpetuation of societal biases and discrimination within AI, Trevor Paglen’s They Took the Faces from the Accused and the Dead…(SD18) presents a large gridded installation of more than three thousand mugshots from the archives of the American National Standards Institute. The institute’s collections of such images were used to train early facial-recognition technologies — without the consent of those pictured. Lynn Hershman Leeson’s new installation Shadow Stalker critiques the problematic reliance on algorithmic systems, such as the military forecasting tool Predpol now widely used for policing, that categorize individuals into preexisting and often false “embodied metrics.”

Stephanie Dinkins extends the inquiry into how value systems are built into AI and the construction of identity in Conversations with Bina48, examining the social robot’s (and by extension our society’s) coding of technology, race, gender and social equity. In the same territory, Martine Syms posits AI as a “shamespace” for misrepresentation. For Mythiccbeing she has created an avatar of herself that viewers can interact with through text messaging. But unlike service agents such as Siri and Alexa, who readily respond to questions and demands, Syms’s Teeny is a contrarious interlocutor, turning each interaction into an opportunity to voice personal observations and frustrations about racial inequality and social injustice.

Countering the abusive potential of machine learning, Forensic Architecture pioneers an application to the pursuit of social justice. Their proposition of a Model Zoo marks the beginnings of a new research tool for civil society built of military vehicles, missile fragments, and bomb clouds—evidence of human-rights violations by states and militaries around the world. Christopher Kulendran Thomas’s video Being Human, created in collaboration with Annika Kuhlmann, poses the philosophical question of what it means to be human when machines are able to synthesize human understanding ever more convincingly. Set in Sri Lanka, it employs AI-generated characters of singer Taylor Swift and artist Oscar Murillo to reflect on issues of individual authenticity, collective sovereignty, and the future of human rights.

Lawrence Lek’s sci-fi-inflected film Aidol, which explores the relationship between algorithmic automation and human creativity, projects this question into the future. It transports the viewer into the computer-generated “sinofuturist” world of the 2065 eSports Olympics: when the popular singer Diva enlists the super-intelligent Geomancer to help her stage her artistic comeback during the game’s halftime show, she unleashes an existential and philosophical battle that explodes the divide between humans and machines.

The Doors, a newly commissioned installation by Zach Blas, by contrast shines the spotlight back onto the present and on the culture and ethos of Silicon Valley — the ground zero for the development of AI. Inspired by the ubiquity of enclosed gardens on tech campuses, he has created an artificial garden framed by a six-channel video projected on glass panes that convey a sense of algorithmic psychedelia aiming to open new “doors of perception.” While luring visitors into AI’s promises, it also asks what might become possible when such glass doors begin to crack. 

Unveiled in late spring, Pierre Huyghe’s Exomind (Deep Water), a sculpture of a crouched female nude with a live beehive as its head, will be nestled within the museum’s garden. With its buzzing colony pollinating the surrounding flora, it offers a poignant metaphor for the modeling of neural networks on the biological brain and an understanding of intelligence as grounded in natural forms and processes.

The Uncanny Valley: Being Human in the Age of AI event page features a link to something unexpected (scroll down about 40% of the way), a Statement on Eyal Weizman of Forensic Architecture,

On Thursday, February 13 [2020], Eyal Weizman of Forensic Architecture had his travel authorization to the United States revoked due to an “algorithm” that identified him as a security threat.

He was meant to be in the United States promoting multiple exhibitions including Uncanny Valley: Being Human in the Age of AI, opening on February 22 [2020] at the de Young museum in San Francisco.

Since 2018, Forensic Architecture has used machine learning / AI to aid in humanitarian work, using synthetic images—photorealistic digital renderings based around 3-D models—to train algorithmic classifiers to identify tear gas munitions and chemical bombs deployed against protesters worldwide, including in Hong Kong, Chile, the US, Venezuela, and Sudan.

Their project, Model Zoo, on view in Uncanny Valley represents a growing collection of munitions and weapons used in conflict today and the algorithmic models developed to identify them. It shows a collection of models being used to track and hold accountable human rights violators around the world. The piece joins work by 14 contemporary artists reflecting on the philosophical and political consequences of the application of AI into the social sphere.

We are deeply saddened that Weizman will not be allowed to travel to celebrate the opening of the exhibition. We stand with him and Forensic Architecture’s partner communities who continue to resist violent states and corporate practices, and who are increasingly exposed to the regime of “security algorithms.”

—Claudia Schmuckli, Curator-in-Charge, Contemporary Art & Programming, & Thomas P. Campbell, Director and CEO, Fine Arts Museums of San Francisco

There is a February 20, 2020 article (for Fast Company) by Eyal Weizman chronicling his experience of being denied entry by an algorithm. Do read it in its entirety (the Fast Company piece is itself an excerpt from Weizman’s essay) if you have the time; if not, here’s the description of how he tried to gain entry after being denied the first time,

The following day I went to the U.S. Embassy in London to apply for a visa. In my interview, the officer informed me that my authorization to travel had been revoked because the “algorithm” had identified a security threat. He said he did not know what had triggered the algorithm but suggested that it could be something I was involved in, people I am or was in contact with, places to which I had traveled (had I recently been in Syria, Iran, Iraq, Yemen, or Somalia or met their nationals?), hotels at which I stayed, or a certain pattern of relations among these things. I was asked to supply the Embassy with additional information, including 15 years of travel history, in particular where I had gone and who had paid for it. The officer said that Homeland Security’s investigators could assess my case more promptly if I supplied the names of anyone in my network whom I believed might have triggered the algorithm. I declined to provide this information.

I hope the exhibition is successful; it has certainly experienced a thought-provoking start.

Finally, I have often featured postings that discuss the ‘uncanny valley’. To find those postings, just use that phrase in the blog search engine. You might also want to search ‘Hiroshi Ishiguro’, a Japanese scientist and roboticist who specializes in humanoid robots.

So thin and soft you don’t notice it: new wearable tech

An August 2, 2019 news item on ScienceDaily features some new work on wearable technology that was a bit of a surprise to me,

Wearable human-machine interfaces — devices that can collect and store important health information about the wearer, among other uses — have benefited from advances in electronics, materials and mechanical designs. But current models still can be bulky and uncomfortable, and they can’t always handle multiple functions at one time.

Researchers reported Friday, Aug. 2 [2019], the discovery of a multifunctional ultra-thin wearable electronic device that is imperceptible to the wearer.

I expected this wearable technology to be a piece of clothing that somehow captured health data but it’s not,

While a health care application is mentioned early in the August 2, 2019 University of Houston news release (also on EurekAlert) by Jeannie Kever, the primary interest seems to be robots and robotic skin (Note: This news release originated the news item on ScienceDaily),

The device allows the wearer to move naturally and is less noticeable than wearing a Band-Aid, said Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering at the University of Houston and lead author for the paper, published as the cover story in Science Advances.

“Everything is very thin, just a few microns thick,” said Yu, who also is a principal investigator at the Texas Center for Superconductivity at UH. “You will not be able to feel it.”

It has the potential to work as a prosthetic skin for a robotic hand or other robotic devices, with a robust human-machine interface that allows it to automatically collect information and relay it back to the wearer.

That has applications for health care – “What if when you shook hands with a robotic hand, it was able to instantly deduce physical condition?” Yu asked – as well as for situations such as chemical spills, which are risky for humans but require human decision-making based on physical inspection.

While current devices are gaining in popularity, the researchers said they can be bulky to wear, offer slow response times and suffer a drop in performance over time. More flexible versions are unable to provide multiple functions at once – sensing, switching, stimulation and data storage, for example – and are generally expensive and complicated to manufacture.

The device described in the paper, a metal oxide semiconductor on a polymer base, offers manufacturing advantages and can be processed at temperatures lower than 300 °C.

“We report an ultrathin, mechanically imperceptible, and stretchable (human-machine interface) HMI device, which is worn on human skin to capture multiple physical data and also on a robot to offer intelligent feedback, forming a closed-loop HMI,” the researchers wrote. “The multifunctional soft stretchy HMI device is based on a one-step formed, sol-gel-on-polymer-processed indium zinc oxide semiconductor nanomembrane electronics.”

In addition to Yu, the paper’s co-authors include first author Kyoseung Sim, Zhoulyu Rao, Faheem Ershad, Jianming Lei, Anish Thukral and Jie Chen, all of UH; Zhanan Zou and Jianliang Xiao, both of the University of Colorado; and Qing-An Huang of Southeast University in Nanjing, China.

Here’s a link to and a citation for the paper,

Metal oxide semiconductor nanomembrane–based soft unnoticeable multifunctional electronics for wearable human-machine interfaces by Kyoseung Sim, Zhoulyu Rao, Zhanan Zou, Faheem Ershad, Jianming Lei, Anish Thukral, Jie Chen, Qing-An Huang, Jianliang Xiao and Cunjiang Yu. Science Advances 02 Aug 2019: Vol. 5, no. 8, eaav9653 DOI: 10.1126/sciadv.aav9653

This paper appears to be open access.

Memristor-based neural network and the biosimilar principle of learning

Once you get past the technical language (there’s a lot of it), you’ll find that they make the link between biomimicry and memristors explicit. Admittedly, I’m not an expert, but if I understand the research correctly, the scientists are suggesting that the algorithms used in machine learning today cannot allow memristors to be properly integrated for use in true neuromorphic computing, and that this work from Russia and Greece points to a new paradigm. If you understand it differently, please do let me know in the comments.

A July 12, 2019 news item on Nanowerk kicks things off (Note: A link has been removed),

Lobachevsky University scientists together with their colleagues from the National Research Center “Kurchatov Institute” (Moscow) and the National Research Center “Demokritos” (Athens) are working on the hardware implementation of a spiking neural network based on memristors.

The key elements of such a network, along with pulsed neurons, are artificial synaptic connections that can change the strength (weight) of connection between neurons during the learning (Microelectronic Engineering, “Yttria-stabilized zirconia cross-point memristive devices for neuromorphic applications”).

For this purpose, memristive devices based on metal-oxide-metal nanostructures developed at the UNN Physics and Technology Research Institute (PTRI) are suitable, but their use in specific spiking neural network architectures developed at the Kurchatov Institute requires demonstration of biologically plausible learning principles.

Caption: Cross-section image of the metal-oxide-metal memristive structure based on ZrO2(Y) polycrystalline film (a); corresponding schematic view of the cross-point memristive device (b); STDP dependencies of memristive device conductance changes for different delay values between pre- and postsynaptic neuron spikes (c); photographs of a microchip and an array of memristive devices in a standard cermet casing (d); the simplest spiking neural network architecture learning on the basis of local rules for changing memristive weights (e). Credit: Lobachevsky University

A July 12, 2019 (?) Lobachevsky University press release (also on EurekAlert), which originated the news item, delves further into the work,

The biological mechanism of learning of neural systems is described by Hebb’s rule, according to which learning occurs as a result of an increase in the strength of connection  (synaptic weight) between simultaneously active neurons, which indicates the presence of a causal relationship in their excitation. One of the clarifying forms of this fundamental rule is plasticity, which depends on the time of arrival of pulses (Spike-Timing Dependent Plasticity – STDP).

In accordance with STDP, synaptic weight increases if the postsynaptic neuron generates a pulse (spike) immediately after the presynaptic one, and vice versa, the synaptic weight decreases if the postsynaptic neuron generates a spike right before the presynaptic one. Moreover, the smaller the time difference Δt between the pre- and postsynaptic spikes, the more pronounced the weight change will be.
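
For anyone who wants to see that rule written out, here is a minimal sketch of an STDP weight update. The exponential window shape and the constants are standard textbook choices on my part, not values taken from this paper.

```python
import numpy as np

# Minimal spike-timing-dependent plasticity (STDP) rule as described above.
# A_PLUS, A_MINUS and TAU are generic textbook values, not fitted device data.

A_PLUS, A_MINUS = 0.010, 0.012   # maximum potentiation / depression per pair
TAU = 20.0                        # width of the STDP window, in ms

def stdp_dw(delta_t_ms):
    """Weight change for delta_t = t_post - t_pre (milliseconds)."""
    if delta_t_ms > 0:    # postsynaptic spike just after presynaptic -> strengthen
        return A_PLUS * np.exp(-delta_t_ms / TAU)
    if delta_t_ms < 0:    # postsynaptic spike just before presynaptic -> weaken
        return -A_MINUS * np.exp(delta_t_ms / TAU)
    return 0.0

for dt in (-40, -10, -2, 2, 10, 40):
    print(f"Δt = {dt:+3d} ms -> Δw = {stdp_dw(dt):+.4f}")
```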

According to one of the researchers, Head of the UNN PTRI laboratory Alexei Mikhailov, in order to demonstrate the STDP principle, memristive nanostructures based on yttria-stabilized zirconia (YSZ) thin films were used. YSZ is a well-known solid-state electrolyte with high oxygen ion mobility.

“Due to a specified concentration of oxygen vacancies, which is determined by the controlled concentration of yttrium impurities, and the heterogeneous structure of the films obtained by magnetron sputtering, such memristive structures demonstrate controlled bipolar switching between different resistive states in a wide resistance range. The switching is associated with the formation and destruction of conductive channels along grain boundaries in the polycrystalline ZrO2 (Y) film,” notes Alexei Mikhailov.

An array of memristive devices for research was implemented in the form of a microchip mounted in a standard cermet casing, which facilitates the integration of the array into a neural network’s analog circuit. The full technological cycle for creating memristive microchips is currently implemented at the UNN PTRI. In the future, it is possible to scale the devices down to the minimum size of about 50 nm, as was established by Greek partners.

“Our studies of the dynamic plasticity of the memristive devices,” continues Alexei Mikhailov, “have shown that the form of the conductance change depending on Δt is in good agreement with the STDP learning rules. It should be also noted that if the initial value of the memristor conductance is close to the maximum, it is easy to reduce the corresponding weight while it is difficult to enhance it, and in the case of a memristor with a minimum conductance in the initial state, it is difficult to reduce its weight, but it is easy to enhance it.”
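
Here is a small sketch of the “soft-bound” behaviour Mikhailov describes, where a device near its maximum conductance is easy to depress but hard to potentiate further, and the reverse near its minimum. The multiplicative form and the numbers are my own illustrative choices, not the dependence measured in the paper.

```python
# Conductance-dependent ("soft-bound") update: the closer the device sits to
# one of its limits, the smaller the change in that direction. Illustrative
# only; the constants and functional form are not taken from the paper.

G_MIN, G_MAX = 0.0, 1.0
DW_PAIR = 0.1   # magnitude of the raw change for one pre/post spike pair

def soft_bound_update(g, potentiate):
    if potentiate:
        g += DW_PAIR * (G_MAX - g)   # little headroom left near G_MAX
    else:
        g -= DW_PAIR * (g - G_MIN)   # little room to fall near G_MIN
    return g

for g0 in (0.1, 0.5, 0.9):
    print(f"g0 = {g0}: potentiated -> {soft_bound_update(g0, True):.3f}, "
          f"depressed -> {soft_bound_update(g0, False):.3f}")
```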

According to Vyacheslav Demin, director-coordinator in the area of nature-like technologies of the Kurchatov Institute, who is one of the ideologues of this work, the established pattern of change in the memristor conductance clearly demonstrates the possibility of hardware implementation of the so-called local learning rules. Such rules for changing the strength of synaptic connections depend only on the values ​​of variables that are present locally at each time point (neuron activities and current weights).

“This essentially distinguishes such principle from the traditional learning algorithm, which is based on global rules for changing weights, using information on the error values ​​at the current time point for each neuron of the output neural network layer (in a widely popular group of error back propagation methods). The traditional principle is not biosimilar, it requires “external” (expert) knowledge of the correct answers for each example presented to the network (that is, they do not have the property of self-learning). This principle is difficult to implement on the basis of memristors, since it requires controlled precise changes of memristor conductances, as opposed to local rules. Such precise control is not always possible due to the natural variability (a wide range of parameters) of memristors as analog elements,” says Vyacheslav Demin.

Local learning rules of the STDP type implemented in hardware on memristors provide the basis for autonomous (“unsupervised”) learning of a spiking neural network. In this case, the final state of the network does not depend on its initial state, but depends only on the learning conditions (a specific sequence of pulses). According to Vyacheslav Demin, this opens up prospects for the application of local learning rules based on memristors when solving artificial intelligence problems with the use of complex spiking neural network architectures.

Here’s a link to and a citation for the paper,

Yttria-stabilized zirconia cross-point memristive devices for neuromorphic applications by A. V. Emelyanov, K. E. Nikiruy, A. Demin, V. V. Rylkov, A. I. Belov, D. S. Korolev, E. G. Gryaznov, D. A. Pavlov, O. N. Gorshkov, A. N. Mikhaylov, P. Dimitrakis. Microelectronic Engineering Volume 215, 15 July 2019, 110988 First available online 16 May 2019

This paper is behind a paywall.

Touchy robots and prosthetics

I have briefly speculated about the importance of touch elsewhere (see my July 19, 2019 posting regarding BlocKit and blockchain; scroll down about 50% of the way) but the next two news bits put a different spin on it.

Exceptional sense of touch

Robots need a sense of touch to perform their tasks and a July 18, 2019 National University of Singapore press release (also on EurekAlert) announces work on an improved sense of touch,

Robots and prosthetic devices may soon have a sense of touch equivalent to, or better than, the human skin with the Asynchronous Coded Electronic Skin (ACES), an artificial nervous system developed by a team of researchers at the National University of Singapore (NUS).

The new electronic skin system achieved ultra-high responsiveness and robustness to damage, and can be paired with any kind of sensor skin layers to function effectively as an electronic skin.

The innovation, achieved by Assistant Professor Benjamin Tee and his team from the Department of Materials Science and Engineering at the NUS Faculty of Engineering, was first reported in prestigious scientific journal Science Robotics on 18 July 2019.

Faster than the human sensory nervous system

“Humans use our sense of touch to accomplish almost every daily task, such as picking up a cup of coffee or making a handshake. Without it, we will even lose our sense of balance when walking. Similarly, robots need to have a sense of touch in order to interact better with humans, but robots today still cannot feel objects very well,” explained Asst Prof Tee, who has been working on electronic skin technologies for over a decade in hope of giving robots and prosthetic devices a better sense of touch.

Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing a sensor system that could potentially perform better. While the ACES electronic nervous system detects signals like the human sensory nervous system, it is made up of a network of sensors connected via a single electrical conductor, unlike the nerve bundles in the human skin. It is also unlike existing electronic skins which have interlinked wiring systems that can make them sensitive to damage and difficult to scale up.

Elaborating on the inspiration, Asst Prof Tee, who also holds appointments in the NUS Department of Electrical and Computer Engineering, NUS Institute for Health Innovation & Technology (iHealthTech), N.1 Institute for Health and the Hybrid Integrated Flexible Electronic Systems (HiFES) programme, said, “The human sensory nervous system is extremely efficient, and it works all the time to the extent that we often take it for granted. It is also very robust to damage. Our sense of touch, for example, does not get affected when we suffer a cut. If we can mimic how our biological system works and make it even better, we can bring about tremendous advancements in the field of robotics where electronic skins are predominantly applied.”

ACES can detect touches more than 1,000 times faster than the human sensory nervous system. For example, it is capable of differentiating physical contacts between different sensors in less than 60 nanoseconds – the fastest ever achieved for an electronic skin technology – even with large numbers of sensors. ACES-enabled skin can also accurately identify the shape, texture and hardness of objects within 10 milliseconds, ten times faster than the blinking of an eye. This is enabled by the high fidelity and capture speed of the ACES system.

The ACES platform can also be designed to achieve high robustness to physical damage, an important property for electronic skins because they come into frequent physical contact with the environment. Unlike the current system used to interconnect sensors in existing electronic skins, all the sensors in ACES can be connected to a common electrical conductor with each sensor operating independently. This allows ACES-enabled electronic skins to continue functioning as long as there is one connection between the sensor and the conductor, making them less vulnerable to damage.
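
As I understand it, the key trick is that every sensor can speak on the same wire at once and still be told apart. Here is a toy version of that idea, in which each sensor injects its own pseudo-random pulse signature onto a shared line and a receiver works out which sensors fired by correlation. The signature lengths, noise level and decoding threshold are all invented; the actual ACES coding and decoding scheme is described in the Science Robotics paper.

```python
import numpy as np

# Toy single-conductor skin: each sensor has its own pseudo-random +/-1
# signature; touched sensors add their signatures onto one shared line, and
# the receiver recovers who fired by correlating against known signatures.
# This only captures the flavour of asynchronous coding on a common wire --
# the real ACES signatures, timing and decoder differ from this sketch.

rng = np.random.default_rng(1)
N_SENSORS, SIG_LEN = 16, 256

signatures = rng.choice([-1.0, 1.0], size=(N_SENSORS, SIG_LEN))

touched = {2, 7, 11}                          # sensors currently pressed
line = np.zeros(SIG_LEN)
for s in touched:                             # all pulses overlap on one wire
    line += signatures[s]
line += 0.5 * rng.standard_normal(SIG_LEN)    # measurement noise

scores = signatures @ line / SIG_LEN          # correlate with each signature
decoded = {i for i, score in enumerate(scores) if score > 0.5}
print("touched:", sorted(touched), "decoded:", sorted(decoded))
```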

Smart electronic skins for robots and prosthetics

ACES’ simple wiring system and remarkable responsiveness even with increasing numbers of sensors are key characteristics that will facilitate the scale-up of intelligent electronic skins for Artificial Intelligence (AI) applications in robots, prosthetic devices and other human machine interfaces.

“Scalability is a critical consideration as big pieces of high performing electronic skins are required to cover the relatively large surface areas of robots and prosthetic devices,” explained Asst Prof Tee. “ACES can be easily paired with any kind of sensor skin layers, for example, those designed to sense temperatures and humidity, to create high performance ACES-enabled electronic skin with an exceptional sense of touch that can be used for a wide range of purposes,” he added.

For instance, pairing ACES with the transparent, self-healing and water-resistant sensor skin layer also recently developed by Asst Prof Tee’s team, creates an electronic skin that can self-repair, like the human skin. This type of electronic skin can be used to develop more realistic prosthetic limbs that will help disabled individuals restore their sense of touch.

Other potential applications include developing more intelligent robots that can perform disaster recovery tasks or take over mundane operations such as packing of items in warehouses. The NUS team is therefore looking to further apply the ACES platform on advanced robots and prosthetic devices in the next phase of their research.

For those who like videos, the researchers have prepared this,

Here’s a link to and a citation for the paper,

A neuro-inspired artificial peripheral nervous system for scalable electronic skins by Wang Wei Lee, Yu Jun Tan, Haicheng Yao, Si Li, Hian Hian See, Matthew Hon, Kian Ann Ng, Betty Xiong, John S. Ho and Benjamin C. K. Tee. Science Robotics Vol 4, Issue 32 31 July 2019 eaax2198 DOI: 10.1126/scirobotics.aax2198 Published online first: 17 Jul 2019

This paper is behind a paywall.

Picking up a grape and holding his wife’s hand

This story comes from Canadian Broadcasting Corporation (CBC) Radio, which embedded a six-minute audio story in a July 25, 2019 CBC Radio ‘As It Happens’ article by Sheena Goodyear,

The West Valley City, Utah, real estate agent [Keven Walgamott] lost his left hand in an electrical accident 17 years ago. Since then, he’s tried out a few different prosthetic limbs, but always found them too clunky and uncomfortable.

Then he decided to work with the University of Utah in 2016 to test out new prosthetic technology that mimics the sensation of human touch, allowing Walgamott to perform delicate tasks with precision — including shaking his wife’s hand. 

“I extended my left hand, she came and extended hers, and we were able to feel each other with the left hand for the first time in 13 years, and it was just a marvellous and wonderful experience,” Walgamott told As It Happens guest host Megan Williams. 

Walgamott, one of seven participants in the University of Utah study, was able to use an advanced prosthetic hand called the LUKE Arm to pick up an egg without cracking it, pluck a single grape from a bunch, hammer a nail, take a ring on and off his finger, fit a pillowcase over a pillow and more. 

While performing the tasks, Walgamott was able to actually feel the items he was holding and correctly gauge the amount of pressure he needed to exert — mimicking a process the human brain does automatically.

“I was able to feel something in each of my fingers,” he said. “What I feel, I guess the easiest way to explain it, is little electrical shocks.”

Those shocks — which he describes as a kind of a tingling sensation — intensify as he tightens his grip.

“Different variations of the intensity of the electricity as I move my fingers around and as I touch things,” he said. 

To make that [sense of touch] happen, the researchers implanted electrodes into the nerves on Walgamott’s forearm, allowing his brain to communicate with his prosthetic through a computer outside his body. That means he can move the hand just by thinking about it.

But those signals also work in reverse.

The team attached sensors to the hand of a LUKE Arm. Those sensors detect touch and positioning, and send that information to the electrodes so it can be interpreted by the brain.

For Walgamott, performing a series of menial tasks as a team of scientists recorded his progress was “fun to do.”

“I’d forgotten how well two hands work,” he said. “That was pretty cool.”

But it was also a huge relief from the phantom limb pain he has experienced since the accident, which he describes as a “burning sensation” in the place where his hand used to be.

A July 24, 2019 University of Utah news release (also on EurekAlert) provides more detail about the research,

Keven Walgamott had a good “feeling” about picking up the egg without crushing it.

What seems simple for nearly everyone else can be more of a Herculean task for Walgamott, who lost his left hand and part of his arm in an electrical accident 17 years ago. But he was testing out the prototype of a high-tech prosthetic arm with fingers that not only can move, they can move with his thoughts. And thanks to a biomedical engineering team at the University of Utah, he “felt” the egg well enough so his brain could tell the prosthetic hand not to squeeze too hard.

That’s because the team, led by U biomedical engineering associate professor Gregory Clark, has developed a way for the “LUKE Arm” (so named after the robotic hand that Luke Skywalker got in “The Empire Strikes Back”) to mimic the way a human hand feels objects by sending the appropriate signals to the brain. Their findings were published in a new paper co-authored by U biomedical engineering doctoral student Jacob George, former doctoral student David Kluger, Clark and other colleagues in the latest edition of the journal Science Robotics. A copy of the paper may be obtained by emailing robopak@aaas.org.

“We changed the way we are sending that information to the brain so that it matches the human body. And by matching the human body, we were able to see improved benefits,” George says. “We’re making more biologically realistic signals.”

That means an amputee wearing the prosthetic arm can sense the touch of something soft or hard, understand better how to pick it up and perform delicate tasks that would otherwise be impossible with a standard prosthetic with metal hooks or claws for hands.

“It almost put me to tears,” Walgamott says about using the LUKE Arm for the first time during clinical tests in 2017. “It was really amazing. I never thought I would be able to feel in that hand again.”

Walgamott, a real estate agent from West Valley City, Utah, and one of seven test subjects at the U, was able to pluck grapes without crushing them, pick up an egg without cracking it and hold his wife’s hand with a sensation in the fingers similar to that of an able-bodied person.

“One of the first things he wanted to do was put on his wedding ring. That’s hard to do with one hand,” says Clark. “It was very moving.”

Those things are accomplished through a complex series of mathematical calculations and modeling.

The LUKE Arm

The LUKE Arm has been in development for some 15 years. The arm itself is made of mostly metal motors and parts with a clear silicon “skin” over the hand. It is powered by an external battery and wired to a computer. It was developed by DEKA Research & Development Corp., a New Hampshire-based company founded by Segway inventor Dean Kamen.

Meanwhile, the U’s team has been developing a system that allows the prosthetic arm to tap into the wearer’s nerves, which are like biological wires that send signals to the arm to move. It does that thanks to an invention by U biomedical engineering Emeritus Distinguished Professor Richard A. Normann called the Utah Slanted Electrode Array. The array is a bundle of 100 microelectrodes and wires that are implanted into the amputee’s nerves in the forearm and connected to a computer outside the body. The array interprets the signals from the still-remaining arm nerves, and the computer translates them to digital signals that tell the arm to move.

But it also works the other way. To perform tasks such as picking up objects requires more than just the brain telling the hand to move. The prosthetic hand must also learn how to “feel” the object in order to know how much pressure to exert because you can’t figure that out just by looking at it.

First, the prosthetic arm has sensors in its hand that send signals to the nerves via the array to mimic the feeling the hand gets upon grabbing something. But equally important is how those signals are sent. It involves understanding how your brain deals with transitions in information when it first touches something. Upon first contact of an object, a burst of impulses runs up the nerves to the brain and then tapers off. Recreating this was a big step.
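
Here is a rough sketch of what an “onset burst that tapers off” could look like as a stimulation-rate model: the rate jumps when contact begins and then adapts down toward a steady level set by the pressure. The functional form and every constant are my own illustrative choices, not the model the Utah team built from primate recordings.

```python
import numpy as np

# Toy onset-burst encoding: stimulation rate = steady component proportional
# to pressure + a transient that spikes at contact and decays away. All
# constants and the linear form are illustrative assumptions.

dt = 0.01                              # seconds per step
t = np.arange(0.0, 2.0, dt)
pressure = (t > 0.5).astype(float)     # object grasped at t = 0.5 s

TAU_ADAPT = 0.15                       # s, how quickly the onset burst decays
K_STEADY, K_ONSET = 40.0, 200.0        # spikes/s per unit pressure

adapt = 0.0
rate = np.zeros_like(t)
for i in range(1, len(t)):
    onset = max(pressure[i] - pressure[i - 1], 0.0) / dt   # new contact?
    adapt += dt * (-adapt / TAU_ADAPT) + onset * dt        # decaying transient
    rate[i] = K_STEADY * pressure[i] + K_ONSET * adapt

print(f"peak rate just after contact: {rate.max():.0f} spikes/s, "
      f"adapted rate at the end: {rate[-1]:.0f} spikes/s")
```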

“Just providing sensation is a big deal, but the way you send that information is also critically important, and if you make it more biologically realistic, the brain will understand it better and the performance of this sensation will also be better,” says Clark.

To achieve that, Clark’s team used mathematical calculations along with recorded impulses from a primate’s arm to create an approximate model of how humans receive these different signal patterns. That model was then implemented into the LUKE Arm system.

Future research

In addition to creating a prototype of the LUKE Arm with a sense of touch, the overall team is already developing a version that is completely portable and does not need to be wired to a computer outside the body. Instead, everything would be connected wirelessly, giving the wearer complete freedom.

Clark says the Utah Slanted Electrode Array is also capable of sending signals to the brain for more than just the sense of touch, such as pain and temperature, though the paper primarily addresses touch. And while their work currently has only involved amputees who lost their extremities below the elbow, where the muscles to move the hand are located, Clark says their research could also be applied to those who lost their arms above the elbow.

Clark hopes that in 2020 or 2021, three test subjects will be able to take the arm home to use, pending federal regulatory approval.

The research involves a number of institutions including the U’s Department of Neurosurgery, Department of Physical Medicine and Rehabilitation and Department of Orthopedics, the University of Chicago’s Department of Organismal Biology and Anatomy, the Cleveland Clinic’s Department of Biomedical Engineering and Utah neurotechnology companies Ripple Neuro LLC and Blackrock Microsystems. The project is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.

“This is an incredible interdisciplinary effort,” says Clark. “We could not have done this without the substantial efforts of everybody on that team.”

Here’s a link to and a citation for the paper,

Biomimetic sensory feedback through peripheral nerve stimulation improves dexterous use of a bionic hand by J. A. George, D. T. Kluger, T. S. Davis, S. M. Wendelken, E. V. Okorokova, Q. He, C. C. Duncan, D. T. Hutchinson, Z. C. Thumser, D. T. Beckler, P. D. Marasco, S. J. Bensmaia and G. A. Clark. Science Robotics Vol. 4, Issue 32, eaax2352 31 July 2019 DOI: 10.1126/scirobotics.aax2352 Published online first: 24 Jul 2019

This paper is definitely behind a paywall.

The University of Utah researchers have produced a video highlighting their work,

Using light to manipulate neurons

There are three (or more?) possible applications, including neuromorphic computing, for this new optoelectronic technology, which is based on black phosphorus. A July 16, 2019 news item on Nanowerk announces the research,

Researchers from RMIT University [Australia] drew inspiration from an emerging tool in biotechnology – optogenetics – to develop a device that replicates the way the brain stores and loses information.

Optogenetics allows scientists to delve into the body’s electrical system with incredible precision, using light to manipulate neurons so that they can be turned on or off.

The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light, enabling it to mimic the way that neurons work to store and delete information in the brain.

Caption: The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light. Credit: RMIT University

A July 17, 2019 RMIT University press release (also on EurekAlert but published on July 16, 2019), which originated the news item, expands on the theme,

Research team leader Dr Sumeet Walia said the technology moves us closer towards artificial intelligence (AI) that can harness the brain’s full sophisticated functionality.

“Our optogenetically-inspired chip imitates the fundamental biology of nature’s best computer – the human brain,” Walia said.

“Being able to store, delete and process information is critical for computing, and the brain does this extremely efficiently.

“We’re able to simulate the brain’s neural approach simply by shining different colours onto our chip.

“This technology takes us further on the path towards fast, efficient and secure light-based computing.

“It also brings us an important step closer to the realisation of a bionic brain – a brain-on-a-chip that can learn from its environment just like humans do.”

Dr Taimur Ahmed, lead author of the study published in Advanced Functional Materials, said being able to replicate neural behavior on an artificial chip offered exciting avenues for research across sectors.

“This technology creates tremendous opportunities for researchers to better understand the brain and how it’s affected by disorders that disrupt neural connections, like Alzheimer’s disease and dementia,” Ahmed said.

The researchers, from the Functional Materials and Microsystems Research Group at RMIT, have also demonstrated the chip can perform logic operations – information processing – ticking another box for brain-like functionality.

Developed at RMIT’s MicroNano Research Facility, the technology is compatible with existing electronics and has also been demonstrated on a flexible platform, for integration into wearable electronics.

How the chip works:

Neural connections happen in the brain through electrical impulses. When tiny energy spikes reach a certain threshold of voltage, the neurons bind together – and you’ve started creating a memory.

On the chip, light is used to generate a photocurrent. Switching between colors causes the current to reverse direction from positive to negative.

This direction switch, or polarity shift, is equivalent to the binding and breaking of neural connections, a mechanism that enables neurons to connect (and induce learning) or inhibit (and induce forgetting).

This is akin to optogenetics, where light-induced modification of neurons causes them to either turn on or off, enabling or inhibiting connections to the next neuron in the chain.
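
To make that mechanism a little more concrete, here is a minimal toy simulation in Python. It is not RMIT's device model: the wavelength cutoff, step sizes and threshold are made-up illustrative values. It simply treats short-wavelength (blue) light as a positive photocurrent that strengthens a stored weight and longer-wavelength (red) light as a negative photocurrent that weakens it, with a threshold standing in for the 'binding' of a neural connection:

# Minimal toy model (not RMIT's device physics): a synaptic weight is nudged
# up or down depending on the wavelength of incident light, mimicking the
# photocurrent polarity switch described above. The wavelength cutoff, step
# sizes and threshold are illustrative assumptions.

def photocurrent(wavelength_nm: float) -> float:
    """Signed, unitless photocurrent: positive for short (blue) wavelengths,
    negative for longer (red) wavelengths."""
    return 1.0 if wavelength_nm < 500 else -1.0

class OptoSynapse:
    def __init__(self, threshold: float = 3.0):
        self.weight = 0.0           # accumulated "memory" strength
        self.threshold = threshold  # level at which the connection "binds"

    def expose(self, wavelength_nm: float, duration: float = 1.0) -> None:
        # Pulses of one colour strengthen the weight (learning); pulses of the
        # other colour weaken it (forgetting). The weight never goes below zero.
        self.weight = max(self.weight + photocurrent(wavelength_nm) * duration, 0.0)

    @property
    def bound(self) -> bool:
        return self.weight >= self.threshold

synapse = OptoSynapse()
for _ in range(4):
    synapse.expose(wavelength_nm=450)              # four blue pulses: potentiate
print(synapse.bound)                               # True -> a "memory" has formed
synapse.expose(wavelength_nm=650, duration=4.0)    # long red pulse: depress
print(synapse.bound)                               # False -> the memory is erased

In the sketch, four blue pulses push the weight past the threshold (a 'memory' forms) and one longer red pulse erases it, mirroring the polarity-shift mechanism described above.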

To develop the technology, the researchers used a material called black phosphorus (BP) that can be inherently defective in nature.

This is usually a problem for optoelectronics, but with precision engineering the researchers were able to harness the defects to create new functionality.

“Defects are usually looked on as something to be avoided, but here we’re using them to create something novel and useful,” Ahmed said.

“It’s a creative approach to finding solutions for the technical challenges we face.”

Here’s a link to and a citation for the paper,

Multifunctional Optoelectronics via Harnessing Defects in Layered Black Phosphorus by Taimur Ahmed, Sruthi Kuriakose, Sherif Abbas, Michelle J. S. Spencer, Md. Ataur Rahman, Muhammad Tahir, Yuerui Lu, Prashant Sonar, Vipul Bansal, Madhu Bhaskaran, Sharath Sriram, Sumeet Walia. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201901991 First published (online): 17 July 2019

This paper is behind a paywall.

Large Interactive Virtual Environment Laboratory (LIVELab) located in McMaster University’s Institute for Music & the Mind (MIMM) and the Metacreation Lab at Simon Fraser University

Both of these bits have a music focus, but they represent two entirely different science-based approaches to that art form: one is solely about the music, while the other includes music as one of the art-making processes being investigated.

Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University

Laurel Trainor and Dan J. Bosnyak, both of McMaster University (Ontario, Canada), have written an October 27, 2019 essay about the LIVELab and their work for The Conversation website (Note: Links have been removed),

The Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University is a research concert hall. It functions as both a high-tech laboratory and theatre, opening up tremendous opportunities for research and investigation.

As the only facility of its kind in the world, the LIVELab is a 106-seat concert hall equipped with dozens of microphones, speakers and sensors to measure brain responses, physiological responses such as heart rate, breathing rates, perspiration and movements in multiple musicians and audience members at the same time.

Engineers, psychologists and clinician-researchers from many disciplines work alongside musicians, media artists and industry to study performance, perception, neural processing and human interaction.

In the LIVELab, acoustics are digitally controlled so the experience can change instantly from extremely silent with almost no reverberation to a noisy restaurant to a subway platform or to the acoustics of Carnegie Hall.

Real-time physiological data such as heart rate can be synchronized with data from other systems such as motion capture, and monitored and recorded from both performers and audience members. The result is that the reams of data that can now be collected in a few hours in the LIVELab used to take weeks or months to collect in a traditional lab. And having measurements of multiple people simultaneously is pushing forward our understanding of real-time human interactions.

Consider the implications of how music might help people with Parkinson’s disease to walk more smoothly or children with dyslexia to read better.

[…] area of ongoing research is the effectiveness of hearing aids. By the age of 60, nearly 49 per cent of people will suffer from some hearing loss. People who wear hearing aids are often frustrated when listening to music because the hearing aids distort the sound and cannot deal with the dynamic range of the music.

The LIVELab is working with the Hamilton Philharmonic Orchestra to solve this problem. During a recent concert, researchers evaluated new ways of delivering sound directly to participants’ hearing aids to enhance sounds.

Researchers hope new technologies can not only increase live musical enjoyment but alleviate the social isolation caused by hearing loss.

Imagine the possibilities for understanding music and sound: How it might help to improve cognitive decline, manage social performance anxiety, help children with developmental disorders, aid in treatment of depression or keep the mind focused. Every time we conceive and design a study, we think of new possibilities.

The essay also includes an embedded 12 min. video about LIVELab and details about studies conducted on musicians and live audiences. Apparently, audiences experience live performances differently than they do recorded ones, and musicians use body sway to create cohesive performances. You can find the McMaster Institute for Music & the Mind here and McMaster’s LIVELab here.
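
The essay's point about synchronizing physiological data with motion capture is, at its core, a timestamp-alignment problem. Here is a minimal sketch in Python using pandas that aligns a slow heart-rate stream with fast motion-capture frames by nearest earlier timestamp; the column names, sampling rates and the use of pandas are my own illustrative assumptions, not details of the LIVELab's actual pipeline:

# Illustrative only: aligning a slow heart-rate stream with fast motion-capture
# frames by timestamp. Column names, sampling rates and the use of pandas are
# assumptions for this sketch, not details of the LIVELab's actual pipeline.
import pandas as pd

heart = pd.DataFrame({
    "t": pd.to_timedelta([0.0, 1.0, 2.0, 3.0], unit="s"),            # ~1 Hz samples
    "bpm": [72, 74, 78, 77],
})
mocap = pd.DataFrame({
    "t": pd.to_timedelta([i / 120 for i in range(480)], unit="s"),   # 120 fps frames
    "sway_mm": [i * 0.05 for i in range(480)],
})

# Attach the most recent heart-rate reading to every motion-capture frame,
# giving one merged table that can be recorded or analysed together.
merged = pd.merge_asof(mocap, heart, on="t", direction="backward")
print(merged.head())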

Caption: Capturing the motions of a string quartet performance. Credit: Laurel Trainor, Author provided [McMaster University]

Metacreation Lab at Simon Fraser University (SFU)

I just recently discovered that there’s a Metacreation Lab at Simon Fraser University (Vancouver, Canada), whose homepage states: “Metacreation is the idea of endowing machines with creative behavior.” Here’s more from the homepage,

As the contemporary approach to generative art, Metacreation involves using tools and techniques from artificial intelligence, artificial life, and machine learning to develop software that partially or completely automates creative tasks. Through the collaboration between scientists, experts in artificial intelligence, cognitive sciences, designers and artists, the Metacreation Lab for Creative AI is at the forefront of the development of generative systems, be they embedded in interactive experiences or integrated into current creative software. Scientific research in the Metacreation Lab explores how various creative tasks can be automated and enriched. These tasks include music composition [emphasis mine], sound design, video editing, audio/visual effect generation, 3D animation, choreography, and video game design.

Besides scientific research, the team designs interactive and generative artworks that build upon the algorithms and research developed in the Lab. This work often challenges the social and cultural discourse on AI.

Much to my surprise, I received the Metacreation Lab’s inaugural email newsletter on Friday, November 15, 2019,

Greetings,

We decided to start a mailing list for disseminating news, updates, and announcements regarding generative art, creative AI and New Media. In this newsletter: 

  1. ISEA 2020: The International Symposium on Electronic Art. ISEA returns to Montreal; check the CFP below and contribute!
  2. ISEA 2015: A transcription of Sara Diamond’s keynote address “Action Agenda: Vancouver’s Prescient Media Arts” is now available for download. 
  3. Brain Art, the book: we are happy to announce the release of the first comprehensive volume on Brain Art. Edited by Anton Nijholt, and published by Springer.

Here are more details from the newsletter,

ISEA2020 – 26th International Symposium on Electronic Arts

Montreal, September 24, 2019
Montreal Digital Spring (Printemps numérique) is launching a call for participation as part of ISEA2020 / MTL connect to be held from May 19 to 24, 2020 in Montreal, Canada. Founded in 1990, ISEA is one of the world’s most prominent international arts and technology events, bringing together scholarly, artistic, and scientific domains in an interdisciplinary discussion and showcase of creative productions applying new technologies in art, interactivity, and electronic and digital media. For 2020, ISEA Montreal turns towards the theme of sentience.

ISEA2020 will be fully dedicated to examining the resurgence of sentience—feeling-sensing-making sense—in recent art and design, media studies, science and technology studies, philosophy, anthropology, history of science and the natural scientific realm—notably biology, neuroscience and computing. We ask: why sentience? Why and how does sentience matter? Why have artists and scholars become interested in sensing and feeling beyond, with and around our strictly human bodies and selves? Why has this notion been brought to the fore in an array of disciplines in the 21st century?
CALL FOR PARTICIPATION: WHY SENTIENCE?

ISEA2020 invites artists, designers, scholars, researchers, innovators and creators to participate in the various activities deployed from May 19 to 24, 2020. To complete an application, please fill in the forms and follow the instructions.

The final submissions deadline is NOVEMBER 25, 2019. Submit your application in any of the following categories: WORKSHOP and TUTORIAL, ARTISTIC WORK, FULL / SHORT PAPER, PANEL, POSTER, ARTIST TALK, or INSTITUTIONAL PRESENTATION.
You can apply for several categories. All profiles are welcome. Notifications of acceptance will be sent around January 13, 2020.

Important: please note that the Call for participation for MTL connect is not yet launched, but you can also apply to participate in the programming of the other Pavilions (4 other themes) when registrations are open (coming soon): mtlconnecte.ca/en

TICKETS

Registration is now open to attend ISEA2020 / MTL connect, from May 19 to 24, 2020. Book your Full Pass today and get the early-bird rate!

More from the newsletter,

ISEA 2015 was in Vancouver, Canada, and the proceedings and art catalog are still online. The news is that Sara Diamond released her 2015 keynote address as a paper: Action Agenda: Vancouver’s Prescient Media Arts. It is never too late so we thought we would let you know about this great read. See The 2015 Proceedings Here

The last item from the inaugural newsletter,

The first book that surveys how brain activity can be monitored and manipulated for artistic purposes, with contributions by interactive media artists, brain-computer interface researchers, and neuroscientists. View the Book Here

As per the Leonardo review from Cristina Albu:

“Another seminal contribution of the volume is the presentation of multiple taxonomies of “brain art,” which can help art critics develop better criteria for assessing this genre. Mirjana Prpa and Philippe Pasquier’s meticulous classification shows how diverse such works have become as artists consider a whole range of variables of neurofeedback.” Read the Review

For anyone not familiar with the ‘Leonardo’ cited in the above, it’s Leonardo, the International Society for the Arts, Sciences and Technology.

Should this kind of information excite and motivate you to start metacreating, you can get in touch with the lab,

Our mailing address is:
Metacreation Lab for Creative AI
School of Interactive Arts & Technology
Simon Fraser University
250-13450 102 Ave.
Surrey, BC V3T 0A3
Web: http://metacreation.net/
Email: metacreation_admin (at) sfu (dot) ca

Sonifying proteins to make music and brand new proteins

Markus Buehler at the Massachusetts Institute of Technology (MIT) has been working with music and science for a number of years. My December 9, 2011 posting, Music, math, and spiderwebs, was the first one here featuring his work. My November 28, 2012 posting, Producing stronger silk musically, was a follow-up to Buehler’s previous work.

A June 28, 2019 news item on Azonano provides a recent update,

Composers string notes of different pitch and duration together to create music. Similarly, cells join amino acids with different characteristics together to make proteins.

Now, researchers have bridged these two seemingly disparate processes by translating protein sequences into musical compositions and then using artificial intelligence to convert the sounds into brand-new proteins. …

Caption: Researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature. Credit: Zhao Qin and Francisco Martin-Martinez

A June 26, 2019 American Chemical Society (ACS) news release, which originated the news item, provides more detail and a video,

To make proteins, cellular structures called ribosomes add one of 20 different amino acids to a growing chain in combinations specified by the genetic blueprint. The properties of the amino acids and the complex shapes into which the resulting proteins fold determine how the molecule will work in the body. To better understand a protein’s architecture, and possibly design new ones with desired features, Markus Buehler and colleagues wanted to find a way to translate a protein’s amino acid sequence into music.

The researchers transposed the unique natural vibrational frequencies of each amino acid into sound frequencies that humans can hear. In this way, they generated a scale consisting of 20 unique tones. Unlike musical notes, however, each amino acid tone consisted of the overlay of many different frequencies –– similar to a chord. Buehler and colleagues then translated several proteins into audio compositions, with the duration of each tone specified by the different 3D structures that make up the molecule. Finally, the researchers used artificial intelligence to recognize specific musical patterns that corresponded to certain protein architectures. The computer then generated scores and translated them into new-to-nature proteins. In addition to being a tool for protein design and for investigating disease mutations, the method could be helpful for explaining protein structure to broad audiences, the researchers say. They even developed an Android app [Amino Acid Synthesizer] to allow people to create their own bio-based musical compositions.
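
As a rough illustration of that mapping, and emphatically not the published method, here is a short Python sketch. It assigns each of the 20 amino acids a placeholder base frequency (in the actual work the frequencies come from quantum-chemistry calculations of molecular vibrations), transposes each one down by octaves into the audible range, and renders each residue as a small 'chord' of related frequencies:

# Toy sonification sketch, not the published method: each amino acid gets a
# placeholder base frequency (the real values come from quantum-chemistry
# calculations of molecular vibrations), transposed down by octaves until it
# is audible, and rendered as a small "chord" of related frequencies.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues (one-letter codes)

# Hypothetical base frequencies in THz, spaced arbitrarily for illustration.
BASE_THZ = {aa: 10.0 + i * 0.7 for i, aa in enumerate(AMINO_ACIDS)}

def to_audible(freq_hz: float, hi: float = 880.0) -> float:
    """Transpose a frequency down by octaves (repeated halving) until audible."""
    while freq_hz > hi:
        freq_hz /= 2.0
    return freq_hz

def chord_for(aa: str) -> list[float]:
    """Return a small stack of frequencies (a 'chord') for one residue."""
    base = to_audible(BASE_THZ[aa] * 1e12)           # THz -> Hz, then transpose down
    return [round(base * r, 1) for r in (1.0, 5 / 4, 3 / 2)]

sequence = "MKTAYIAK"                                # an arbitrary example sequence
for aa in sequence:
    print(aa, chord_for(aa))                         # one chord per residue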

Here’s the ACS video,

A June 26, 2019 MIT news release (also on EurekAlert) provides some specifics and includes two embedded audio files,

Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.

In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.

Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.

The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.

While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”

Learning the language of proteins

The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”

By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.

The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.

Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”
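
To show the shape of that loop without pretending to reproduce the team's neural network, here is a deliberately simple Python sketch that substitutes a first-order Markov chain for the AI: it learns tone-to-tone statistics from 'melodies' derived from example sequences, samples a new melody, and maps it back to an amino acid string. The training sequences and the tone-to-residue mapping are arbitrary illustrations:

# A stand-in for the generative step (the paper uses a deep neural network; this
# sketch uses a first-order Markov chain purely to show the shape of the loop):
# learn tone-to-tone statistics from existing protein "melodies", sample a new
# melody, and map it back to an amino acid string. The training sequences and
# the 20-tone-to-residue mapping below are arbitrary illustrations.
import random

NOTE_TO_AA = dict(enumerate("ACDEFGHIKLMNPQRSTVWY"))   # tone index -> residue
AA_TO_NOTE = {aa: n for n, aa in NOTE_TO_AA.items()}   # residue -> tone index

def train(melodies):
    """Collect the observed successors of each tone."""
    table = {}
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table.setdefault(a, []).append(b)
    return table

def sample(table, start, length):
    """Random-walk a new melody through the learned transitions."""
    melody = [start]
    while len(melody) < length:
        successors = table.get(melody[-1])
        melody.append(random.choice(successors) if successors else random.randrange(20))
    return melody

# "Melodies" derived from two short, arbitrary amino acid sequences.
training = [[AA_TO_NOTE[aa] for aa in seq] for seq in ("MKTAYIAKQR", "GAVLIPFMW")]
table = train(training)
new_melody = sample(table, start=AA_TO_NOTE["M"], length=12)
print("".join(NOTE_TO_AA[n] for n in new_melody))   # a new sequence from this sketch,
                                                    # not a designed protein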

“Composing” new proteins

Using such a system, he says, training the AI with a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”

This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”

The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”

The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.

The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.

Here’s a link to and a citation for the paper,

A Self-Consistent Sonification Method to Translate Amino Acid Sequences into Musical Compositions and Application in Protein Design Using Artificial Intelligence by Chi-Hua Yu, Zhao Qin, Francisco J. Martin-Martinez, Markus J. Buehler. ACS Nano 2019. DOI: https://doi.org/10.1021/acsnano.9b02180 Publication Date: June 26, 2019 Copyright © 2019 American Chemical Society

This paper is behind a paywall.

ETA October 23, 2019 1000 hours: Ooops! I almost forgot the link to the Amino Acid Synthesizer.