Category Archives: robots

So thin and soft you don’t notice it: new wearable tech

An August 2, 2019 news item on ScienceDaily features some new work on wearable technology that was a bit of a surprise to me,

Wearable human-machine interfaces — devices that can collect and store important health information about the wearer, among other uses — have benefited from advances in electronics, materials and mechanical designs. But current models still can be bulky and uncomfortable, and they can’t always handle multiple functions at one time.

Researchers reported Friday, Aug. 2 [2019], the discovery of a multifunctional ultra-thin wearable electronic device that is imperceptible to the wearer.

I expected this wearable technology to be a piece of clothing that somehow captured health data but it’s not,

While a health care application is mentioned early in the August 2, 2019 University of Houston news release (also on EurekAlert) by Jeannie Kever, the primary interest seems to be robots and robotic skin (Note: This news release originated the news item on ScienceDaily),

The device allows the wearer to move naturally and is less noticeable than wearing a Band-Aid, said Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering at the University of Houston and lead author for the paper, published as the cover story in Science Advances.

“Everything is very thin, just a few microns thick,” said Yu, who also is a principal investigator at the Texas Center for Superconductivity at UH. “You will not be able to feel it.”
It has the potential to work as a prosthetic skin for a robotic hand or other robotic devices, with a robust human-machine interface that allows it to automatically collect information and relay it back to the wearer.

That has applications for health care – “What if when you shook hands with a robotic hand, it was able to instantly deduce physical condition?” Yu asked – as well as for situations such as chemical spills, which are risky for humans but require human decision-making based on physical inspection.

While current devices are gaining in popularity, the researchers said they can be bulky to wear, offer slow response times and suffer a drop in performance over time. More flexible versions are unable to provide multiple functions at once – sensing, switching, stimulation and data storage, for example – and are generally expensive and complicated to manufacture.

The device described in the paper, a metal oxide semiconductor on a polymer base, offers manufacturing advantages and can be processed at temperatures lower than 300 °C.

“We report an ultrathin, mechanically imperceptible, and stretchable human-machine interface (HMI) device, which is worn on human skin to capture multiple physical data and also on a robot to offer intelligent feedback, forming a closed-loop HMI,” the researchers wrote. “The multifunctional soft stretchy HMI device is based on a one-step formed, sol-gel-on-polymer-processed indium zinc oxide semiconductor nanomembrane electronics.”

In addition to Yu, the paper’s co-authors include first author Kyoseung Sim, Zhoulyu Rao, Faheem Ershad, Jianming Lei, Anish Thukral and Jie Chen, all of UH; Zhanan Zou and Jianliang Xiao, both of the University of Colorado; and Qing-An Huang of Southeast University in Nanjing, China.

Here’s a link to and a citation for the paper,

Metal oxide semiconductor nanomembrane–based soft unnoticeable multifunctional electronics for wearable human-machine interfaces by Kyoseung Sim, Zhoulyu Rao, Zhanan Zou, Faheem Ershad, Jianming Lei, Anish Thukral, Jie Chen, Qing-An Huang, Jianliang Xiao and Cunjiang Yu. Science Advances 02 Aug 2019: Vol. 5, no. 8, eaav9653 DOI: 10.1126/sciadv.aav9653

This paper appears to be open access.

Memristor-based neural network and the biosimilar principle of learning

Once you get past the technical language (there’s a lot of it), you’ll find that the researchers make the link between biomimicry and memristors explicit. Admittedly I’m not an expert, but if I understand the research correctly, the scientists are suggesting that the algorithms used in machine learning today do not allow memristors to be properly integrated for true neuromorphic computing, and this work from Russia and Greece points to a new paradigm. If you understand it differently, please do let me know in the comments.

A July 12, 2019 news item on Nanowerk kicks things off (Note: A link has been removed),

Lobachevsky University scientists together with their colleagues from the National Research Center “Kurchatov Institute” (Moscow) and the National Research Center “Demokritos” (Athens) are working on the hardware implementation of a spiking neural network based on memristors.

The key elements of such a network, along with pulsed neurons, are artificial synaptic connections that can change the strength (weight) of the connection between neurons during learning (Microelectronic Engineering, “Yttria-stabilized zirconia cross-point memristive devices for neuromorphic applications”).

For this purpose, memristive devices based on metal-oxide-metal nanostructures developed at the UNN Physics and Technology Research Institute (PTRI) are suitable, but their use in specific spiking neural network architectures developed at the Kurchatov Institute requires demonstration of biologically plausible learning principles.

Caption: Cross-section image of the metal-oxide-metal memristive structure based on ZrO2(Y) polycrystalline film (a); corresponding schematic view of the cross-point memristive device (b); STDP dependencies of memristive device conductance changes for different delay values between pre- and postsynaptic neuron spikes (c); photographs of a microchip and an array of memristive devices in a standard cermet casing (d); the simplest spiking neural network architecture learning on the basis of local rules for changing memristive weights (e). Credit: Lobachevsky University

A July 12, 2019 (?) Lobachevsky University press release (also on EurekAlert), which originated the news item, delves further into the work,

The biological mechanism of learning in neural systems is described by Hebb’s rule, according to which learning occurs as a result of an increase in the strength of the connection (synaptic weight) between simultaneously active neurons, which indicates a causal relationship in their excitation. One refinement of this fundamental rule is plasticity that depends on the timing of pulse arrival (spike-timing-dependent plasticity, or STDP).

In accordance with STDP, synaptic weight increases if the postsynaptic neuron generates a pulse (spike) immediately after the presynaptic one, and vice versa, the synaptic weight decreases if the postsynaptic neuron generates a spike right before the presynaptic one. Moreover, the smaller the time difference Δt between the pre- and postsynaptic spikes, the more pronounced the weight change will be.

According to one of the researchers, Head of the UNN PTRI laboratory Alexei Mikhailov, in order to demonstrate the STDP principle, memristive nanostructures based on yttria-stabilized zirconia (YSZ) thin films were used. YSZ is a well-known solid-state electrolyte with high oxygen ion mobility.

“Due to a specified concentration of oxygen vacancies, which is determined by the controlled concentration of yttrium impurities, and the heterogeneous structure of the films obtained by magnetron sputtering, such memristive structures demonstrate controlled bipolar switching between different resistive states in a wide resistance range. The switching is associated with the formation and destruction of conductive channels along grain boundaries in the polycrystalline ZrO2 (Y) film,” notes Alexei Mikhailov.

An array of memristive devices for research was implemented in the form of a microchip mounted in a standard cermet casing, which facilitates the integration of the array into a neural network’s analog circuit. The full technological cycle for creating memristive microchips is currently implemented at the UNN PTRI. In the future, it is possible to scale the devices down to the minimum size of about 50 nm, as was established by Greek partners.
“Our studies of the dynamic plasticity of the memristive devices,” continues Alexei Mikhailov, “have shown that the form of the conductance change as a function of Δt is in good agreement with the STDP learning rules. It should also be noted that if the initial value of the memristor conductance is close to the maximum, it is easy to reduce the corresponding weight but difficult to enhance it, while for a memristor with minimum conductance in the initial state, it is difficult to reduce its weight but easy to enhance it.”
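For readers who like to see the idea spelled out, here’s a rough Python sketch of an STDP-style update for a single memristive weight. Everything specific in it (the exponential timing window, the learning rates, the conductance bounds) is my own illustrative choice, not taken from the paper; it simply shows how the sign and size of the weight change follow the pre/post spike timing, and how the asymmetry near the conductance limits described above arises.

```python
import math

# Illustrative parameters (not from the paper): learning rates, STDP time constant,
# and conductance bounds for one memristive synapse.
A_PLUS, A_MINUS = 0.05, 0.05   # maximum fractional weight change
TAU = 20.0                     # ms, decay constant of the timing window
G_MIN, G_MAX = 0.1, 1.0        # conductance (weight) bounds

def stdp_update(g, dt):
    """Return the new conductance after one pre/post spike pair.

    dt = t_post - t_pre in milliseconds. Positive dt (post after pre) potentiates;
    negative dt depresses. The soft bounds reproduce the asymmetry described in
    the text: a device near G_MAX is easy to depress but hard to potentiate,
    and vice versa near G_MIN.
    """
    if dt > 0:   # post-synaptic spike follows the pre-synaptic spike
        dg = A_PLUS * math.exp(-dt / TAU) * (G_MAX - g)
    else:        # post-synaptic spike precedes the pre-synaptic spike
        dg = -A_MINUS * math.exp(dt / TAU) * (g - G_MIN)
    return min(max(g + dg, G_MIN), G_MAX)

# Example: a device at mid-range conductance, post spike 5 ms after (then before) pre.
print(stdp_update(0.5, dt=5.0))   # potentiation
print(stdp_update(0.5, dt=-5.0))  # depression
```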

According to Vyacheslav Demin, director-coordinator in the area of nature-like technologies at the Kurchatov Institute and one of the originators of this work, the established pattern of change in the memristor conductance clearly demonstrates the possibility of hardware implementation of so-called local learning rules. Such rules for changing the strength of synaptic connections depend only on the values of variables that are present locally at each time point (neuron activities and current weights).

“This essentially distinguishes such a principle from the traditional learning algorithm, which is based on global rules for changing weights, using information on the error values at the current time point for each neuron of the output neural network layer (as in the widely popular family of error back-propagation methods). The traditional principle is not biosimilar: it requires “external” (expert) knowledge of the correct answers for each example presented to the network (that is, it does not have the property of self-learning). This principle is difficult to implement on the basis of memristors, since it requires controlled precise changes of memristor conductances, as opposed to local rules. Such precise control is not always possible due to the natural variability (a wide range of parameters) of memristors as analog elements,” says Vyacheslav Demin.

Local learning rules of the STDP type implemented in hardware on memristors provide the basis for autonomous (“unsupervised”) learning of a spiking neural network. In this case, the final state of the network does not depend on its initial state, but depends only on the learning conditions (a specific sequence of pulses). According to Vyacheslav Demin, this opens up prospects for the application of local learning rules based on memristors when solving artificial intelligence problems with the use of complex spiking neural network architectures.

Here’s a link to and a citation for the paper,

Yttria-stabilized zirconia cross-point memristive devices for neuromorphic applications by A. V. Emelyanov, K. E. Nikiruy, A. Demin, V. V. Rylkov, A. I. Belov, D. S. Korolev, E. G. Gryaznov, D. A. Pavlov, O. N. Gorshkov, A. N. Mikhaylov, P. Dimitrakis. Microelectronic Engineering Volume 215, 15 July 2019, 110988 First available online 16 May 2019

This paper is behind a paywall.

Touchy robots and prosthetics

I have briefly speculated about the importance of touch elsewhere (see my July 19, 2019 posting regarding BlocKit and blockchain; scroll down about 50% of the way) but this upcoming news bit and the one following it put a different spin on the importance of touch.

Exceptional sense of touch

Robots need a sense of touch to perform their tasks and a July 18, 2019 National University of Singapore press release (also on EurekAlert) announces work on an improved sense of touch,

Robots and prosthetic devices may soon have a sense of touch equivalent to, or better than, the human skin with the Asynchronous Coded Electronic Skin (ACES), an artificial nervous system developed by a team of researchers at the National University of Singapore (NUS).

The new electronic skin system achieved ultra-high responsiveness and robustness to damage, and can be paired with any kind of sensor skin layers to function effectively as an electronic skin.

The innovation, achieved by Assistant Professor Benjamin Tee and his team from the Department of Materials Science and Engineering at the NUS Faculty of Engineering, was first reported in the prestigious scientific journal Science Robotics on 18 July 2019.

Faster than the human sensory nervous system

“Humans use our sense of touch to accomplish almost every daily task, such as picking up a cup of coffee or making a handshake. Without it, we will even lose our sense of balance when walking. Similarly, robots need to have a sense of touch in order to interact better with humans, but robots today still cannot feel objects very well,” explained Asst Prof Tee, who has been working on electronic skin technologies for over a decade in hope of giving robots and prosthetic devices a better sense of touch.

Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing a sensor system that could potentially perform better. While the ACES electronic nervous system detects signals like the human sensory nervous system, it is made up of a network of sensors connected via a single electrical conductor, unlike the nerve bundles in the human skin. It is also unlike existing electronic skins, which have interlinked wiring systems that can make them sensitive to damage and difficult to scale up.

Elaborating on the inspiration, Asst Prof Tee, who also holds appointments in the NUS Department of Electrical and Computer Engineering, NUS Institute for Health Innovation & Technology (iHealthTech), N.1 Institute for Health and the Hybrid Integrated Flexible Electronic Systems (HiFES) programme, said, “The human sensory nervous system is extremely efficient, and it works all the time to the extent that we often take it for granted. It is also very robust to damage. Our sense of touch, for example, does not get affected when we suffer a cut. If we can mimic how our biological system works and make it even better, we can bring about tremendous advancements in the field of robotics where electronic skins are predominantly applied.”

ACES can detect touches more than 1,000 times faster than the human sensory nervous system. For example, it is capable of differentiating physical contacts between different sensors in less than 60 nanoseconds – the fastest ever achieved for an electronic skin technology – even with large numbers of sensors. ACES-enabled skin can also accurately identify the shape, texture and hardness of objects within 10 milliseconds, ten times faster than the blinking of an eye. This is enabled by the high fidelity and capture speed of the ACES system.

The ACES platform can also be designed to achieve high robustness to physical damage, an important property for electronic skins because they come into frequent physical contact with the environment. Unlike the current system used to interconnect sensors in existing electronic skins, all the sensors in ACES can be connected to a common electrical conductor, with each sensor operating independently. This allows ACES-enabled electronic skins to continue functioning as long as there is one connection between the sensor and the conductor, making them less vulnerable to damage.
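To make the “one shared conductor, many independent sensors” idea a bit more concrete, here’s a toy sketch in Python. It is not the published ACES encoding scheme; it just illustrates how sensors that each transmit a distinctive signature onto a single wire can still be told apart at the receiving end by correlation, so losing one sensor doesn’t take down the rest.

```python
import numpy as np

# Toy illustration (not the published ACES protocol): each sensor is assigned a
# unique pseudo-random pulse signature. All sensors drive one shared conductor,
# and the receiver recovers which sensors fired by correlating the summed signal
# against the known signatures.
rng = np.random.default_rng(0)
N_SENSORS, SIG_LEN = 8, 256
signatures = rng.choice([-1.0, 1.0], size=(N_SENSORS, SIG_LEN))  # one row per sensor

def shared_line(active):
    """Sum the signatures of the sensors that fired onto the single conductor."""
    line = np.zeros(SIG_LEN)
    for i in active:
        line += signatures[i]
    return line

def decode(line, threshold=0.5):
    """Correlate against every signature; a strong correlation means that sensor fired."""
    scores = signatures @ line / SIG_LEN
    return [i for i, s in enumerate(scores) if s > threshold]

fired = [2, 5, 7]
print(decode(shared_line(fired)))  # should recover [2, 5, 7] even though they share one wire
```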

Smart electronic skins for robots and prosthetics

ACES’ simple wiring system and remarkable responsiveness even with increasing numbers of sensors are key characteristics that will facilitate the scale-up of intelligent electronic skins for Artificial Intelligence (AI) applications in robots, prosthetic devices and other human machine interfaces.

“Scalability is a critical consideration as big pieces of high performing electronic skins are required to cover the relatively large surface areas of robots and prosthetic devices,” explained Asst Prof Tee. “ACES can be easily paired with any kind of sensor skin layers, for example, those designed to sense temperatures and humidity, to create high performance ACES-enabled electronic skin with an exceptional sense of touch that can be used for a wide range of purposes,” he added.

For instance, pairing ACES with the transparent, self-healing and water-resistant sensor skin layer also recently developed by Asst Prof Tee’s team, creates an electronic skin that can self-repair, like the human skin. This type of electronic skin can be used to develop more realistic prosthetic limbs that will help disabled individuals restore their sense of touch.

Other potential applications include developing more intelligent robots that can perform disaster recovery tasks or take over mundane operations such as packing of items in warehouses. The NUS team is therefore looking to further apply the ACES platform on advanced robots and prosthetic devices in the next phase of their research.

For those who like videos, the researchers have prepared this,

Here’s a link to and a citation for the paper,

A neuro-inspired artificial peripheral nervous system for scalable electronic skins by Wang Wei Lee, Yu Jun Tan, Haicheng Yao, Si Li, Hian Hian See, Matthew Hon, Kian Ann Ng, Betty Xiong, John S. Ho and Benjamin C. K. Tee. Science Robotics Vol. 4, Issue 32, eaax2198, 31 July 2019 DOI: 10.1126/scirobotics.aax2198 Published online first: 17 Jul 2019

This paper is behind a paywall.

Picking up a grape and holding his wife’s hand

This story comes from Canadian Broadcasting Corporation (CBC) Radio, with a six-minute audio story embedded in the text of a July 25, 2019 CBC Radio ‘As It Happens’ article by Sheena Goodyear,

The West Valley City, Utah, real estate agent [Keven Walgamott] lost his left hand in an electrical accident 17 years ago. Since then, he’s tried out a few different prosthetic limbs, but always found them too clunky and uncomfortable.

Then he decided to work with the University of Utah in 2016 to test out new prosthetic technology that mimics the sensation of human touch, allowing Walgamott to perform delicate tasks with precision — including shaking his wife’s hand. 

“I extended my left hand, she came and extended hers, and we were able to feel each other with the left hand for the first time in 13 years, and it was just a marvellous and wonderful experience,” Walgamott told As It Happens guest host Megan Williams. 

Walgamott, one of seven participants in the University of Utah study, was able to use an advanced prosthetic hand called the LUKE Arm to pick up an egg without cracking it, pluck a single grape from a bunch, hammer a nail, take a ring on and off his finger, fit a pillowcase over a pillow and more. 

While performing the tasks, Walgamott was able to actually feel the items he was holding and correctly gauge the amount of pressure he needed to exert — mimicking a process the human brain does automatically.

“I was able to feel something in each of my fingers,” he said. “What I feel, I guess the easiest way to explain it, is little electrical shocks.”

Those shocks — which he describes as a kind of a tingling sensation — intensify as he tightens his grip.

“Different variations of the intensity of the electricity as I move my fingers around and as I touch things,” he said. 

To make that [sense of touch] happen, the researchers implanted electrodes into the nerves on Walgamott’s forearm, allowing his brain to communicate with his prosthetic through a computer outside his body. That means he can move the hand just by thinking about it.

But those signals also work in reverse.

The team attached sensors to the hand of a LUKE Arm. Those sensors detect touch and positioning, and send that information to the electrodes so it can be interpreted by the brain.

For Walgamott, performing a series of menial tasks as a team of scientists recorded his progress was “fun to do.”

“I’d forgotten how well two hands work,” he said. “That was pretty cool.”

But it was also a huge relief from the phantom limb pain he has experienced since the accident, which he describes as a “burning sensation” in the place where his hand used to be.

A July 24, 2019 University of Utah news release (also on EurekAlert) provides more detail about the research,

Keven Walgamott had a good “feeling” about picking up the egg without crushing it.

What seems simple for nearly everyone else can be more of a Herculean task for Walgamott, who lost his left hand and part of his arm in an electrical accident 17 years ago. But he was testing out the prototype of a high-tech prosthetic arm with fingers that not only can move, they can move with his thoughts. And thanks to a biomedical engineering team at the University of Utah, he “felt” the egg well enough so his brain could tell the prosthetic hand not to squeeze too hard.

That’s because the team, led by U biomedical engineering associate professor Gregory Clark, has developed a way for the “LUKE Arm” (so named after the robotic hand that Luke Skywalker got in “The Empire Strikes Back”) to mimic the way a human hand feels objects by sending the appropriate signals to the brain. Their findings were published in a new paper co-authored by U biomedical engineering doctoral student Jacob George, former doctoral student David Kluger, Clark and other colleagues in the latest edition of the journal Science Robotics. A copy of the paper may be obtained by emailing robopak@aaas.org.

“We changed the way we are sending that information to the brain so that it matches the human body. And by matching the human body, we were able to see improved benefits,” George says. “We’re making more biologically realistic signals.”

That means an amputee wearing the prosthetic arm can sense the touch of something soft or hard, understand better how to pick it up and perform delicate tasks that would otherwise be impossible with a standard prosthetic with metal hooks or claws for hands.

“It almost put me to tears,” Walgamott says about using the LUKE Arm for the first time during clinical tests in 2017. “It was really amazing. I never thought I would be able to feel in that hand again.”

Walgamott, a real estate agent from West Valley City, Utah, and one of seven test subjects at the U, was able to pluck grapes without crushing them, pick up an egg without cracking it and hold his wife’s hand with a sensation in the fingers similar to that of an able-bodied person.

“One of the first things he wanted to do was put on his wedding ring. That’s hard to do with one hand,” says Clark. “It was very moving.”

Those things are accomplished through a complex series of mathematical calculations and modeling.

The LUKE Arm

The LUKE Arm has been in development for some 15 years. The arm itself is made of mostly metal motors and parts with a clear silicon “skin” over the hand. It is powered by an external battery and wired to a computer. It was developed by DEKA Research & Development Corp., a New Hampshire-based company founded by Segway inventor Dean Kamen.

Meanwhile, the U’s team has been developing a system that allows the prosthetic arm to tap into the wearer’s nerves, which are like biological wires that send signals to the arm to move. It does that thanks to an invention by U biomedical engineering Emeritus Distinguished Professor Richard A. Normann called the Utah Slanted Electrode Array. The array is a bundle of 100 microelectrodes and wires that are implanted into the amputee’s nerves in the forearm and connected to a computer outside the body. The array interprets the signals from the still-remaining arm nerves, and the computer translates them to digital signals that tell the arm to move.

But it also works the other way. To perform tasks such as picking up objects requires more than just the brain telling the hand to move. The prosthetic hand must also learn how to “feel” the object in order to know how much pressure to exert because you can’t figure that out just by looking at it.

First, the prosthetic arm has sensors in its hand that send signals to the nerves via the array to mimic the feeling the hand gets upon grabbing something. But equally important is how those signals are sent. It involves understanding how your brain deals with transitions in information when it first touches something. Upon first contact of an object, a burst of impulses runs up the nerves to the brain and then tapers off. Recreating this was a big step.

“Just providing sensation is a big deal, but the way you send that information is also critically important, and if you make it more biologically realistic, the brain will understand it better and the performance of this sensation will also be better,” says Clark.

To achieve that, Clark’s team used mathematical calculations along with recorded impulses from a primate’s arm to create an approximate model of how humans receive these different signal patterns. That model was then implemented into the LUKE Arm system.
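As a rough illustration of the “burst at contact, then taper” signalling described above, here’s a minimal Python sketch. The constants and the specific formula are assumptions I’ve made for the example; the actual biomimetic model in the paper was fitted to recorded primate nerve impulses.

```python
import numpy as np

# Minimal sketch of the idea (illustrative only, not the published model): the
# stimulation rate combines a sustained component proportional to pressure with a
# transient component proportional to the rate of change of pressure, so contact
# onset produces a burst of impulses that tapers to a steady level.
def biomimetic_rate(pressure, dt=0.001, k_sustained=50.0, k_transient=400.0):
    """pressure: 1-D array sampled every dt seconds; returns a firing rate in spikes/s."""
    d_pressure = np.gradient(pressure, dt)
    rate = k_sustained * pressure + k_transient * np.clip(d_pressure, 0, None)
    return np.clip(rate, 0, 300)   # cap at a physiologically plausible rate

# Example: a grip that ramps on over 50 ms and then holds steady.
t = np.arange(0, 0.5, 0.001)
pressure = np.clip(t / 0.05, 0, 1.0)
rate = biomimetic_rate(pressure)
print(rate[:5], rate[-1])  # high rates during the contact ramp, lower sustained rate afterwards
```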

Future research

In addition to creating a prototype of the LUKE Arm with a sense of touch, the overall team is already developing a version that is completely portable and does not need to be wired to a computer outside the body. Instead, everything would be connected wirelessly, giving the wearer complete freedom.

Clark says the Utah Slanted Electrode Array is also capable of sending signals to the brain for more than just the sense of touch, such as pain and temperature, though the paper primarily addresses touch. And while their work currently has only involved amputees who lost their extremities below the elbow, where the muscles to move the hand are located, Clark says their research could also be applied to those who lost their arms above the elbow.

Clark hopes that in 2020 or 2021, three test subjects will be able to take the arm home to use, pending federal regulatory approval.

The research involves a number of institutions including the U’s Department of Neurosurgery, Department of Physical Medicine and Rehabilitation and Department of Orthopedics, the University of Chicago’s Department of Organismal Biology and Anatomy, the Cleveland Clinic’s Department of Biomedical Engineering and Utah neurotechnology companies Ripple Neuro LLC and Blackrock Microsystems. The project is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.

“This is an incredible interdisciplinary effort,” says Clark. “We could not have done this without the substantial efforts of everybody on that team.”

Here’s a link to and a citation for the paper,

Biomimetic sensory feedback through peripheral nerve stimulation improves dexterous use of a bionic hand by J. A. George, D. T. Kluger, T. S. Davis, S. M. Wendelken, E. V. Okorokova, Q. He, C. C. Duncan, D. T. Hutchinson, Z. C. Thumser, D. T. Beckler, P. D. Marasco, S. J. Bensmaia and G. A. Clark. Science Robotics Vol. 4, Issue 32, eaax2352 31 July 2019 DOI: 10.1126/scirobotics.aax2352 Published online first: 24 Jul 2019

This paper is definitely behind a paywall.

The University of Utah researchers have produced a video highlighting their work,

Using light to manipulate neurons

There are three (or more?) possible applications, including neuromorphic computing, for this new optoelectronic technology, which is based on black phosphorus. A July 16, 2019 news item on Nanowerk announces the research,

Researchers from RMIT University [Australia] drew inspiration from an emerging tool in biotechnology – optogenetics – to develop a device that replicates the way the brain stores and loses information.

Optogenetics allows scientists to delve into the body’s electrical system with incredible precision, using light to manipulate neurons so that they can be turned on or off.

The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light, enabling it to mimic the way that neurons work to store and delete information in the brain.

Caption: The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light. Credit: RMIT University

A July 17, 2019 RMIT University press release (also on EurekAlert but published on July 16, 2019), which originated the news item, expands on the theme,

Research team leader Dr Sumeet Walia said the technology moves us closer towards artificial intelligence (AI) that can harness the brain’s full sophisticated functionality.

“Our optogenetically-inspired chip imitates the fundamental biology of nature’s best computer – the human brain,” Walia said.

“Being able to store, delete and process information is critical for computing, and the brain does this extremely efficiently.

“We’re able to simulate the brain’s neural approach simply by shining different colours onto our chip.

“This technology takes us further on the path towards fast, efficient and secure light-based computing.

“It also brings us an important step closer to the realisation of a bionic brain – a brain-on-a-chip that can learn from its environment just like humans do.”

Dr Taimur Ahmed, lead author of the study published in Advanced Functional Materials, said being able to replicate neural behavior on an artificial chip offered exciting avenues for research across sectors.

“This technology creates tremendous opportunities for researchers to better understand the brain and how it’s affected by disorders that disrupt neural connections, like Alzheimer’s disease and dementia,” Ahmed said.

The researchers, from the Functional Materials and Microsystems Research Group at RMIT, have also demonstrated the chip can perform logic operations – information processing – ticking another box for brain-like functionality.

Developed at RMIT’s MicroNano Research Facility, the technology is compatible with existing electronics and has also been demonstrated on a flexible platform, for integration into wearable electronics.

How the chip works:

Neural connections happen in the brain through electrical impulses. When tiny energy spikes reach a certain threshold of voltage, the neurons bind together – and you’ve started creating a memory.

On the chip, light is used to generate a photocurrent. Switching between colors causes the current to reverse direction from positive to negative.

This direction switch, or polarity shift, is equivalent to the binding and breaking of neural connections, a mechanism that enables neurons to connect (and induce learning) or inhibit (and induce forgetting).

This is akin to optogenetics, where light-induced modification of neurons causes them to either turn on or off, enabling or inhibiting connections to the next neuron in the chain.
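Here’s a tiny Python sketch of that behaviour as I understand it. The wavelength labels and step sizes are invented for illustration only; the point is just that one colour nudges the artificial synaptic weight up (learning) while another nudges it down (forgetting), mirroring the photocurrent polarity reversal described above.

```python
# Toy model of the behaviour described above (illustrative; the wavelengths and step
# sizes are made up, not taken from the paper): one colour of light drives a positive
# photocurrent that strengthens the artificial synapse, another reverses the current
# and weakens it.
PHOTOCURRENT_SIGN = {"uv": +1, "red": -1}   # hypothetical wavelength labels

def expose(weight, colour, pulses, step=0.05):
    """Apply a number of light pulses of one colour and return the new weight."""
    sign = PHOTOCURRENT_SIGN[colour]
    weight += sign * step * pulses
    return min(max(weight, 0.0), 1.0)       # weight stays within [0, 1]

w = 0.5
w = expose(w, "uv", pulses=4)    # 'learning': weight rises to 0.7
w = expose(w, "red", pulses=6)   # 'forgetting': weight falls back to 0.4
print(w)
```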

To develop the technology, the researchers used a material called black phosphorus (BP) that can be inherently defective in nature.

This is usually a problem for optoelectronics, but with precision engineering the researchers were able to harness the defects to create new functionality.

“Defects are usually looked on as something to be avoided, but here we’re using them to create something novel and useful,” Ahmed said.

“It’s a creative approach to finding solutions for the technical challenges we face.”

Here’s a link and a citation for the paper,

Multifunctional Optoelectronics via Harnessing Defects in Layered Black Phosphorus by Taimur Ahmed, Sruthi Kuriakose, Sherif Abbas, Michelle J. S. Spencer, Md. Ataur Rahman, Muhammad Tahir, Yuerui Lu, Prashant Sonar, Vipul Bansal, Madhu Bhaskaran, Sharath Sriram, Sumeet Walia. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201901991 First published (online): 17 July 2019

This paper is behind a paywall.

Large Interactive Virtual Environment Laboratory (LIVELab) located in McMaster University’s Institute for Music & the Mind (MIMM) and the MetaCreation Lab at Simon Fraser University

Both of these bits have a music focus, but they represent two entirely different science-based approaches to that form of art: one is solely about the music, while the other includes music as one of the art-making processes being investigated.

Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University

Laurel Trainor and Dan J. Bosnyak, both of McMaster University (Ontario, Canada), have written an October 27, 2019 essay about the LIVELab and their work for The Conversation website (Note: Links have been removed),

The Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University is a research concert hall. It functions as both a high-tech laboratory and theatre, opening up tremendous opportunities for research and investigation.

As the only facility of its kind in the world, the LIVELab is a 106-seat concert hall equipped with dozens of microphones, speakers and sensors to measure brain responses, physiological responses such as heart rate, breathing rates, perspiration and movements in multiple musicians and audience members at the same time.

Engineers, psychologists and clinician-researchers from many disciplines work alongside musicians, media artists and industry to study performance, perception, neural processing and human interaction.

In the LIVELab, acoustics are digitally controlled so the experience can change instantly from extremely silent with almost no reverberation to a noisy restaurant to a subway platform or to the acoustics of Carnegie Hall.

Real-time physiological data such as heart rate can be synchronized with data from other systems such as motion capture, and monitored and recorded from both performers and audience members. The result is that the reams of data that can now be collected in a few hours in the LIVELab used to take weeks or months to collect in a traditional lab. And having measurements of multiple people simultaneously is pushing forward our understanding of real-time human interactions.

Consider the implications of how music might help people with Parkinson’s disease to walk more smoothly or children with dyslexia to read better.

[…] area of ongoing research is the effectiveness of hearing aids. By the age of 60, nearly 49 per cent of people will suffer from some hearing loss. People who wear hearing aids are often frustrated when listening to music because the hearing aids distort the sound and cannot deal with the dynamic range of the music.

The LIVELab is working with the Hamilton Philharmonic Orchestra to solve this problem. During a recent concert, researchers evaluated new ways of delivering sound directly to participants’ hearing aids to enhance sounds.

Researchers hope new technologies can not only increase live musical enjoyment but alleviate the social isolation caused by hearing loss.

Imagine the possibilities for understanding music and sound: How it might help to improve cognitive decline, manage social performance anxiety, help children with developmental disorders, aid in treatment of depression or keep the mind focused. Every time we conceive and design a study, we think of new possibilities.

The essay also includes an embedded 12 min. video about LIVELab and details about studies conducted on musicians and live audiences. Apparently, audiences experience live performance differently than recorded performances and musicians use body sway to create cohesive performances. You can find the McMaster Institute for Music & the Mind here and McMaster’s LIVELab here.

Capturing the motions of a string quartet performance. Laurel Trainor, Author provided [McMaster University]

Metacreation Lab at Simon Fraser University (SFU)

I just recently discovered that there’s a Metacreation Lab at Simon Fraser University (Vancouver, Canada), whose homepage declares: “Metacreation is the idea of endowing machines with creative behavior.” Here’s more from the homepage,

As the contemporary approach to generative art, Metacreation involves using tools and techniques from artificial intelligence, artificial life, and machine learning to develop software that partially or completely automates creative tasks. Through the collaboration between scientists, experts in artificial intelligence, cognitive sciences, designers and artists, the Metacreation Lab for Creative AI is at the forefront of the development of generative systems, be they embedded in interactive experiences or integrated into current creative software. Scientific research in the Metacreation Lab explores how various creative tasks can be automated and enriched. These tasks include music composition [emphasis mine], sound design, video editing, audio/visual effect generation, 3D animation, choreography, and video game design.

Besides scientific research, the team designs interactive and generative artworks that build upon the algorithms and research developed in the Lab. This work often challenges the social and cultural discourse on AI.

Much to my surprise, I received the Metacreation Lab’s inaugural email newsletter on Friday, November 15, 2019,

Greetings,

We decided to start a mailing list for disseminating news, updates, and announcements regarding generative art, creative AI and New Media. In this newsletter: 

  1. ISEA 2020: The International Symposium on Electronic Art. ISEA returns to Montreal; check the CFP below and contribute!
  2. ISEA 2015: A transcription of Sara Diamond’s keynote address “Action Agenda: Vancouver’s Prescient Media Arts” is now available for download. 
  3. Brain Art, the book: we are happy to announce the release of the first comprehensive volume on Brain Art. Edited by Anton Nijholt, and published by Springer.

Here are more details from the newsletter,

ISEA2020 – 26th International Symposium on Electronic Arts

Montreal, September 24, 2019
Montreal Digital Spring (Printemps numérique) is launching a call for participation as part of ISEA2020 / MTL connect to be held from May 19 to 24, 2020 in Montreal, Canada. Founded in 1990, ISEA is one of the world’s most prominent international arts and technology events, bringing together scholarly, artistic, and scientific domains in an interdisciplinary discussion and showcase of creative productions applying new technologies in art, interactivity, and electronic and digital media. For 2020, ISEA Montreal turns towards the theme of sentience.

ISEA2020 will be fully dedicated to examining the resurgence of sentience—feeling-sensing-making sense—in recent art and design, media studies, science and technology studies, philosophy, anthropology, history of science and the natural scientific realm—notably biology, neuroscience and computing. We ask: why sentience? Why and how does sentience matter? Why have artists and scholars become interested in sensing and feeling beyond, with and around our strictly human bodies and selves? Why has this notion been brought to the fore in an array of disciplines in the 21st century?
CALL FOR PARTICIPATION: WHY SENTIENCE?

ISEA2020 invites artists, designers, scholars, researchers, innovators and creators to participate in the various activities deployed from May 19 to 24, 2020. To complete an application, please fill in the forms and follow the instructions.

The final submissions deadline is NOVEMBER 25, 2019. Submission categories: workshop and tutorial, artistic work, full/short paper, panel, poster, artist talk, and institutional presentation.
You can apply for several categories. All profiles are welcome. Notifications of acceptance will be sent around January 13, 2020.

Important: please note that the Call for participation for MTL connect is not yet launched, but you can also apply to participate in the programming of the other Pavilions (4 other themes) when registrations are open (coming soon): mtlconnecte.ca/en

Registration for ISEA2020 / MTL connect, running from May 19 to 24, 2020, is now open. Book your Full Pass today and get the early-bird rate!

More from the newsletter,

ISEA 2015 was in Vancouver, Canada, and the proceedings and art catalog are still online. The news is that Sara Diamond released her 2015 keynote address as a paper: Action Agenda: Vancouver’s Prescient Media Arts. It is never too late so we thought we would let you know about this great read. See The 2015 Proceedings Here

The last item from the inaugural newsletter,

The first book that surveys how brain activity can be monitored and manipulated for artistic purposes, with contributions by interactive media artists, brain-computer interface researchers, and neuroscientists. View the Book Here

As per the Leonardo review from Cristina Albu:

“Another seminal contribution of the volume is the presentation of multiple taxonomies of “brain art,” which can help art critics develop better criteria for assessing this genre. Mirjana Prpa and Philippe Pasquier’s meticulous classification shows how diverse such works have become as artists consider a whole range of variables of neurofeedback.” Read the Review

For anyone not familiar with the ‘Leonardo’ cited in the above, it’s Leonardo, the International Society for the Arts, Sciences and Technology.

Should this kind of information excite and motivate you to start metacreating, you can get in touch with the lab,

Our mailing address is:
Metacreation Lab for Creative AI
School of Interactive Arts & Technology
Simon Fraser University
250-13450 102 Ave.
Surrey, BC V3T 0A3
Web: http://metacreation.net/
Email: metacreation_admin (at) sfu (dot) ca

Sonifying proteins to make music and brand new proteins

Markus Buehler at the Massachusetts Institute of Technology (MIT) has been working with music and science for a number of years. My December 9, 2011 posting, Music, math, and spiderwebs, was the first one here featuring his work. My November 28, 2012 posting, Producing stronger silk musically, was a followup to Buehler’s previous work.

A June 28, 2019 news item on Azonano provides a recent update,

Composers string notes of different pitch and duration together to create music. Similarly, cells join amino acids with different characteristics together to make proteins.

Now, researchers have bridged these two seemingly disparate processes by translating protein sequences into musical compositions and then using artificial intelligence to convert the sounds into brand-new proteins. …

Caption: Researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature. Credit: Zhao Qin and Francisco Martin-Martinez

A June 26, 2019 American Chemical Society (ACS) news release, which originated the news item, provides more detail and a video,

To make proteins, cellular structures called ribosomes add one of 20 different amino acids to a growing chain in combinations specified by the genetic blueprint. The properties of the amino acids and the complex shapes into which the resulting proteins fold determine how the molecule will work in the body. To better understand a protein’s architecture, and possibly design new ones with desired features, Markus Buehler and colleagues wanted to find a way to translate a protein’s amino acid sequence into music.

The researchers transposed the unique natural vibrational frequencies of each amino acid into sound frequencies that humans can hear. In this way, they generated a scale consisting of 20 unique tones. Unlike musical notes, however, each amino acid tone consisted of the overlay of many different frequencies –– similar to a chord. Buehler and colleagues then translated several proteins into audio compositions, with the duration of each tone specified by the different 3D structures that make up the molecule. Finally, the researchers used artificial intelligence to recognize specific musical patterns that corresponded to certain protein architectures. The computer then generated scores and translated them into new-to-nature proteins. In addition to being a tool for protein design and for investigating disease mutations, the method could be helpful for explaining protein structure to broad audiences, the researchers say. They even developed an Android app [Amino Acid Synthesizer] to allow people to create their own bio-based musical compositions.

Here’s the ACS video,

A June 26, 2019 MIT news release (also on EurekAlert) provides some specifics and includes two embedded audio files,

Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.

In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.

Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.

The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.

While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”
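For anyone curious what such a mapping might look like in code, here’s a minimal Python sketch. The base frequencies, the overtone stack, and the fixed note duration are all my own stand-ins; in the actual work each amino acid’s frequencies come from quantum-chemistry calculations and the durations come from the protein’s 3D structure.

```python
# Minimal sketch of the sonification idea (assumptions: the base frequencies,
# transposition, and overtone stack below are invented for illustration).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

# Assign each residue a base pitch on a 20-step scale spanning one octave above 220 Hz.
BASE_FREQ = {aa: 220.0 * 2 ** (i / 20) for i, aa in enumerate(AMINO_ACIDS)}

def residue_chord(aa, overtones=(1.0, 1.5, 2.0)):
    """Each residue sounds as an overlay of several frequencies, i.e. a chord."""
    f0 = BASE_FREQ[aa]
    return [round(f0 * m, 1) for m in overtones]

def sonify(sequence, duration=0.25):
    """Turn an amino-acid sequence into a list of (chord, duration) events."""
    return [(residue_chord(aa), duration) for aa in sequence if aa in BASE_FREQ]

# Example: the first few residues of a hypothetical silk-like motif.
for chord, dur in sonify("GAGAGS"):
    print(chord, dur)
```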

Learning the language of proteins

The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”

By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.

The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.

Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”

“Composing” new proteins

By using such a system, he says, training the AI system with a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”

This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”

The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”

The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.

The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.

Here’s a link to and a citation for the paper,

A Self-Consistent Sonification Method to Translate Amino Acid Sequences into Musical Compositions and Application in Protein Design Using Artificial Intelligence by Chi-Hua Yu, Zhao Qin, Francisco J. Martin-Martinez, Markus J. Buehler. ACS Nano, 2019. DOI: https://doi.org/10.1021/acsnano.9b02180 Publication Date: June 26, 2019

This paper is behind a paywall.

ETA October 23, 2019 1000 hours: Oops! I almost forgot the link to the Amino Acid Synthesizer.

October 2019 science and art/science events in Vancouver and other parts of Canada

This is a scattering of events, which I’m sure will be augmented as we properly start the month of October 2019.

October 2, 2019 in Waterloo, Canada (Perimeter Institute)

If you want to be close enough to press the sacred flesh (Sir Martin Rees), you’re out of luck. However, there are still options ranging from watching a live webcast from the comfort of your home to watching the lecture via closed circuit television with other devoted fans at a licensed bistro located on site at the Perimeter Institute (PI) to catching the lecture at a later date via YouTube.

That said, here’s why you might be interested, from a September 11, 2019 Perimeter Institute (PI) announcement received via email,

Surviving the Century
MOVING TOWARD A POST-HUMAN FUTURE
Martin Rees, UK Astronomer Royal
Wednesday, Oct. 2 at 7:00 PM ET

Advances in technology and space exploration could, if applied wisely, allow a bright future for the 10 billion people living on earth by the end of the century.

But there are dystopian risks we ignore at our peril: our collective “footprint” on our home planet, as well as the creation and use of technologies so powerful that even small groups could cause a global catastrophe.

Martin Rees, the UK Astronomer Royal, will explore this unprecedented moment in human history during his lecture on October 2, 2019. A former president of the Royal Society and master of Trinity College, Cambridge, Rees is a cosmologist whose work also explores the interfaces between science, ethics, and politics. Read More.

Mark your calendar! Tickets will be available on Monday, Sept. 16 at 9 AM ET

Didn’t get tickets for the lecture? We’ve got more ways to watch.
Join us at Perimeter on lecture night to watch live in the Black Hole Bistro.
Catch the live stream on Inside the Perimeter or watch it on YouTube the next day.
Become a member of our donor thank you program! Learn more.

It took me a while to locate an address for the PI venue since I expected that information to be part of the announcement. (insert cranky emoticon here) Here’s the address: Perimeter Institute, Mike Lazaridis Theatre of Ideas, 31 Caroline St. N., Waterloo, ON

Before moving onto the next event, I’m including a paragraph from the event description that was not included in the announcement (from the PI Outreach Surviving the Century webpage),

In his October 2 [2019] talk – which kicks off the 2019/20 season of the Perimeter Institute Public Lecture Series – Rees will discuss the outlook for humans (or their robotic envoys) venturing to other planets. Humans, Rees argues, will be ill-adapted to new habitats beyond Earth, and will use genetic and cyborg technology to transform into a “post-human” species.

I first covered Sir Martin Rees and his concerns about technology (robots and cyborgs run amok) in this November 26, 2012 posting about existential risk. He and his colleagues at Cambridge University, UK, proposed a Centre for the Study of Existential Risk, which opened in 2015.

Straddling Sept. and Oct. at the movies in Vancouver

The Vancouver International Film Festival (VIFF) opened today, September 26, 2019. During its run to October 11, 2019, there’ll be a number of documentaries that touch on science. Here are three documentaries that most closely adhere to the topics I’m most likely to address on this blog, plus a fourth that touches on ecology in a more hopeful fashion than the current trend.

Human Nature

From the VIFF 2019 film description and ticket page,

One of the most significant scientific breakthroughs in history, the discovery of CRISPR has made it possible to manipulate human DNA, paving the path to a future of great possibilities.

The implications of this could mean the eradication of disease or, more controversially, the possibility of genetically pre-programmed children.

Breaking away from scientific jargon, Human Nature pieces together a complex account of bio-research for the layperson as compelling as a work of science-fiction. But whether the gene-editing powers of CRISPR (described as “a word processor for DNA”) are used for good or evil, they’re reshaping the world as we know it. As we push past the boundaries of what it means to be human, Adam Bolt’s stunning work of science journalism reaches out to scientists, engineers, and people whose lives could benefit from CRISPR technology, and offers a wide-ranging look at the pros and cons of designing our futures.

Tickets
Friday, September 27, 2019 at 11:45 AM
Vancity Theatre

Saturday, September 28, 2019 at 11:15 AM
International Village 10

Thursday, October 10, 2019 at 6:45 PM
SFU Goldcorp

According to VIFF, the tickets for the Sept. 27, 2019 show are going fast.

Resistance Fighters

From the VIFF 2019 film description and ticket page,

Since mass-production in the 1940s, antibiotics have been nothing less than miraculous, saving countless lives and revolutionizing modern medicine. It’s virtually impossible to imagine hospitals or healthcare without them. But after years of abuse and mismanagement by the medical and agricultural communities, superbugs resistant to antibiotics are reaching apocalyptic proportions. The ongoing rise in multi-resistant bacteria – unvanquishable microbes, currently responsible for 700,000 deaths per year and projected to kill 10 million yearly by 2050 if nothing changes – and the people who fight them are the subjects of Michael Wech’s stunning “science-thriller.”

Peeling back the carefully constructed veneer of the medical corporate establishment’s greed and complacency to reveal the world on the cusp of a potential crisis, Resistance Fighters sounds a clarion call of urgency. It’s an all-out war, one which most of us never knew we were fighting, to avoid “Pharmageddon.” Doctors, researchers, patients, and diplomats testify about shortsighted medical and economic practices, while Wech offers refreshingly original perspectives on environment, ecology, and (animal) life in general. As alarming as it is informative, this is a wake-up call the world needs to hear.

Sunday, October 6, 2019 at 5:45 PM
International Village 8

Thursday, October 10, 2019 at 2:15 PM
SFU Goldcorp

According to VIFF, the tickets for the Oct. 6, 2019 show are going fast.

Trust Machine: The Story of Blockchain

Strictly speaking this is more of a technology story than a science story but I have written about blockchain and cryptocurrencies before so I’m including it. From the VIFF 2019 film description and ticket page,

For anyone who has questions about cryptocurrencies like Bitcoin (and who doesn’t?), Alex Winter’s thorough documentary is an excellent introduction to the blockchain phenomenon. Trust Machine offers a wide range of expert testimony and a variety of perspectives that explicate the promises and the risks inherent in this new manifestation of high-tech wizardry. And it’s not just money that blockchains threaten to disrupt: innovators as diverse as UNICEF and Imogen Heap make spirited arguments that the industries of energy, music, humanitarianism, and more are headed for revolutionary change.

A propulsive and subversive overview of this little-understood phenomenon, Trust Machine crafts a powerful and accessible case that a technologically decentralized economy is more than just a fad. As the aforementioned experts – tech wizards, underground activists, and even some establishment figures – argue persuasively for an embrace of the possibilities offered by blockchains, others criticize its bubble-like markets and inefficiencies. Either way, Winter’s film suggests a whole new epoch may be just around the corner, whether the powers that be like it or not.

Tuesday, October 1, 2019 at 11:00 AM
Vancity Theatre

Thursday, October 3, 2019 at 9:00 PM
Vancity Theatre

Monday, October 7, 2019 at 1:15 PM
International Village 8

According to VIFF, tickets for all three shows are going fast.

The Great Green Wall

For a little bit of hope, from the VIFF 2019 film description and ticket page,

“We must dare to invent the future.” In 2007, the African Union officially began a massively ambitious environmental project planned since the 1970s. Stretching through 11 countries and 8,000 km across the desertified Sahel region, on the southern edges of the Sahara, The Great Green Wall – once completed, a mosaic of restored, fertile land – would be the largest living structure on Earth.

Malian musician-activist Inna Modja embarks on an expedition through Senegal, Mali, Nigeria, Niger, and Ethiopia, gathering an ensemble of musicians and artists to celebrate the pan-African dream of realizing The Great Green Wall. Her journey is accompanied by a dazzling array of musical diversity, celebrating local cultures and traditions as they come together into a community to stand against the challenges of desertification, drought, migration, and violent conflict.

An unforgettable, beautiful exploration of a modern marvel of ecological restoration, and so much more than a passive source of information, The Great Green Wall is a powerful call to take action and help reshape the world.

Sunday, September 29, 2019 at 11:15 AM
International Village 10

Wednesday, October 2, 2019 at 6:00 PM
International Village 8
Standby – advance tickets are sold out but a limited number are likely to be released at the door

Wednesday, October 9, 2019 at 11:00 AM
International Village 9

As you can see, one show is already offering standby tickets only and the other two are selling quickly.

For venue locations, information about what ‘standby’ means, and much more, go here and click on the Festival tab. As for more information about the individual films, you’ll find links to trailers, running times, and more on the pages for which I’ve supplied links.

Brain Talks on October 16, 2019 in Vancouver

From time to time I get notices about a series titled Brain Talks from the Dept. of Psychiatry at the University of British Columbia. A September 11, 2019 announcement (received via email) focuses attention on the ‘guts of the matter’,

YOU ARE INVITED TO ATTEND:

BRAINTALKS: THE BRAIN AND THE GUT

WEDNESDAY, OCTOBER 16TH, 2019 FROM 6:00 PM – 8:00 PM

Join us on Wednesday October 16th [2019] for a series of talks exploring the
relationship between the brain, microbes, mental health, diet and the
gut. We are honored to host three phenomenal presenters for the evening:
Dr. Brett Finlay, Dr. Leslie Wicholas, and Thara Vayali, ND.

DR. BRETT FINLAY [2] is a Professor in the Michael Smith Laboratories at
the University of British Columbia. Dr. Finlay’s  research interests are
focused on host-microbe interactions at the molecular level,
specializing in Cellular Microbiology. He has published over 500 papers
and has been inducted into the Canadian  Medical Hall of Fame. He is the
co-author of the  books: Let Them Eat Dirt and The Whole Body
Microbiome.

DR. LESLIE WICHOLAS [3]  is a psychiatrist with an expertise in the
clinical understanding of the gut-brain axis. She has become
increasingly involved in the emerging field of Nutritional Psychiatry,
exploring connections between diet, nutrition, and mental health.
Currently, Dr. Wicholas is the director of the Food as Medicine program
at the Mood Disorder Association of BC.

THARA VAYALI, ND [4] holds a BSc in Nutritional Sciences and a MA in
Education and Communications. She has trained in naturopathic medicine
and advocates for awareness about women’s physiology and body literacy.
Ms. Vayali is a frequent speaker and columnist that prioritizes
engagement, understanding, and community as pivotal pillars for change.

Our event on Wednesday, October 16th [2019] will start with presentations from
each of the three speakers, and end with a panel discussion inspired by
audience questions. After the talks, at 7:30 pm, we host a social
gathering with a rich spread of catered healthy food and non-alcoholic
drinks. We look forward to seeing you there!

Paetzhold Theater

Vancouver General Hospital; Jim Pattison Pavilion, Vancouver, BC

Attend Event

That’s it for now.

Ghosts, mechanical turks, and pseudo-AI (artificial intelligence)—Is it all a con game?

There’s been more than one artificial intelligence (AI) story featured here on this blog, but the ones in this posting are the first I’ve stumbled across that suggest the hype is even more exaggerated than the most cynical among us might have thought. (BTW, the 2019 material comes later, as I have taken a chronological approach to this posting.)

It seems a lot of companies touting their AI algorithms and capabilities are relying on human beings to do the work, from a July 6, 2018 article by Olivia Solon for the Guardian (Note: A link has been removed),

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. …

The Turk

Fooling people with machines that seem intelligent is not new, according to a Sept. 10, 2018 article by Seth Stevenson for Slate.com (Note: Links have been removed),

It’s 1783, and Paris is gripped by the prospect of a chess match. One of the contestants is François-André Philidor, who is considered the greatest chess player in Paris, and possibly the world. Everyone is so excited because Philidor is about to go head-to-head with the other biggest sensation in the chess world at the time.

But his opponent isn’t a man. And it’s not a woman, either. It’s a machine.

This story may sound a lot like Garry Kasparov taking on Deep Blue, IBM’s chess-playing supercomputer. But that was only a couple of decades ago, and this chess match in Paris happened more than 200 years ago. It doesn’t seem like a robot that can play chess would even be possible in the 1780s. This machine playing against Philidor was making an incredible technological leap—playing chess, and not only that, but beating humans at chess.

In the end, it didn’t quite beat Philidor, but the chess master called it one of his toughest matches ever. It was so hard for Philidor to get a read on his opponent, which was a carved wooden figure—slightly larger than life—wearing elaborate garments and offering a cold, mean stare.

It seems like the minds of the era would have been completely blown by a robot that could nearly beat a human chess champion. Some people back then worried that it was black magic, but many folks took the development in stride. …

Debates about the hottest topic in technology today—artificial intelligence—didn’t start in the 1940s, with people like Alan Turing and the first computers. It turns out that the arguments about AI go back much further than you might imagine. The story of the 18th-century chess machine turns out to be one of those curious tales from history that can help us understand technology today, and where it might go tomorrow.

[In future episodes of our podcast, Secret History of the Future] we’re going to look at the first cyberattack, which happened in the 1830s, and find out how the Victorians invented virtual reality.

Philidor’s opponent was known as The Turk or Mechanical Turk and that ‘machine’ was in fact a masterful hoax as The Turk held a hidden compartment from which a human being directed his moves.

People pretending to be AI agents

It seems that today’s AI has something in common with the 18th century Mechanical Turk: there are often humans lurking in the background making things work. From a Sept. 4, 2018 article by Janelle Shane for Slate.com (Note: Links have been removed),

Every day, people are paid to pretend to be bots.

In a strange twist on “robots are coming for my job,” some tech companies that boast about their artificial intelligence have found that at small scales, humans are a cheaper, easier, and more competent alternative to building an A.I. that can do the task.

Sometimes there is no A.I. at all. The “A.I.” is a mockup powered entirely by humans, in a “fake it till you make it” approach used to gauge investor interest or customer behavior. Other times, a real A.I. is combined with human employees ready to step in if the bot shows signs of struggling. These approaches are called “pseudo-A.I.” or sometimes, more optimistically, “hybrid A.I.”

Although some companies see the use of humans for “A.I.” tasks as a temporary bridge, others are embracing pseudo-A.I. as a customer service strategy that combines A.I. scalability with human competence. They’re advertising these as “hybrid A.I.” chatbots, and if they work as planned, you will never know if you were talking to a computer or a human. Every remote interaction could turn into a form of the Turing test. So how can you tell if you’re dealing with a bot pretending to be a human or a human pretending to be a bot?

One of the ways you can’t tell anymore is by looking for human imperfections like grammar mistakes or hesitations. In the past, chatbots had prewritten bits of dialogue that they could mix and match according to built-in rules. Bot speech was synonymous with precise formality. In early Turing tests, spelling mistakes were often a giveaway that the hidden speaker was a human. Today, however, many chatbots are powered by machine learning. Instead of using a programmer’s rules, these algorithms learn by example. And many training data sets come from services like Amazon’s Mechanical Turk, which lets programmers hire humans from around the world to generate examples of tasks like asking and answering questions. These data sets are usually full of casual speech, regionalisms, or other irregularities, so that’s what the algorithms learn. It’s not uncommon these days to get algorithmically generated image captions that read like text messages. And sometimes programmers deliberately add these things in, since most people don’t expect imperfections of an algorithm. In May, Google’s A.I. assistant made headlines for its ability to convincingly imitate the “ums” and “uhs” of a human speaker.

Limited computing power is the main reason that bots are usually good at just one thing at a time. Whenever programmers try to train machine learning algorithms to handle additional tasks, they usually get algorithms that can do many tasks rather badly. In other words, today’s algorithms are artificial narrow intelligence, or A.N.I., rather than artificial general intelligence, or A.G.I. For now, and for many years in the future, any algorithm or chatbot that claims A.G.I-level performance—the ability to deal sensibly with a wide range of topics—is likely to have humans behind the curtain.

Another bot giveaway is a very poor memory. …

Bringing AI to life: ghosts

Sidney Fussell’s April 15, 2019 article for The Atlantic provides more detail about the human/AI interface as found in some Amazon products such as Alexa (a voice-control system),

… Alexa-enabled speakers can and do interpret speech, but Amazon relies on human guidance to make Alexa, well, more human—to help the software understand different accents, recognize celebrity names, and respond to more complex commands. This is true of many artificial intelligence–enabled products. They’re prototypes. They can only approximate their promised functions while humans help with what Harvard researchers have called “the paradox of automation’s last mile.” Advancements in AI, the researchers write, create temporary jobs such as tagging images or annotating clips, even as the technology is meant to supplant human labor. In the case of the Echo, gig workers are paid to improve its voice-recognition software—but then, when it’s advanced enough, it will be used to replace the hostess in a hotel lobby.

A 2016 paper by researchers at Stanford University used a computer vision system to infer, with 88 percent accuracy, the political affiliation of 22 million people based on what car they drive and where they live. Traditional polling would require a full staff, a hefty budget, and months of work. The system completed the task in two weeks. But first, it had to know what a car was. The researchers paid workers through Amazon’s Mechanical Turk [emphasis mine] platform to manually tag thousands of images of cars, so the system would learn to differentiate between shapes, styles, and colors.

It may be a rude awakening for Amazon Echo owners, but AI systems require enormous amounts of categorized data, before, during, and after product launch. …
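
The Stanford example is a reminder that crowdsourced labels have to be cleaned up before they can train anything; a common first step is to collect several workers’ tags per image and take a majority vote. Here’s a minimal Python sketch of that idea; the function, the data, and the voting scheme are my own invention for illustration and not the Stanford team’s actual pipeline.

```python
# Hypothetical sketch: aggregating crowdsourced image labels by majority vote
# before using them to train a classifier. Names and data are invented.
from collections import Counter

def aggregate_labels(worker_labels):
    """worker_labels maps image_id -> list of labels from different workers.
    Returns image_id -> (consensus label, agreement ratio)."""
    consensus = {}
    for image_id, labels in worker_labels.items():
        label, votes = Counter(labels).most_common(1)[0]
        consensus[image_id] = (label, votes / len(labels))
    return consensus

# Example: three workers tag each car image with a body style
raw = {
    "img_001": ["sedan", "sedan", "hatchback"],
    "img_002": ["pickup", "pickup", "pickup"],
}
for img, (label, agreement) in aggregate_labels(raw).items():
    print(img, label, f"{agreement:.0%} agreement")
```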

Isn’t it interesting that Amazon also has a crowdsourcing marketplace for its own products? Calling it ‘Mechanical Turk’ after a famous 18th century hoax would suggest a dark sense of humour somewhere in the corporation. (You can find out more about the Amazon Mechanical Turk on this Amazon website and in its Wikipedia entry.)

Anthropologist Mary L. Gray has coined the phrase ‘ghost work’ for the work that humans perform but for which AI gets the credit. Angela Chan’s May 13, 2019 article for The Verge features Gray as she promotes her latest book, co-written with Siddharth Suri, ‘Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass’ (Note: A link has been removed),

“Ghost work” is anthropologist Mary L. Gray’s term for the invisible labor that powers our technology platforms. When Gray, a senior researcher at Microsoft Research, first arrived at the company, she learned that building artificial intelligence requires people to manage and clean up data to feed to the training algorithms. “I basically started asking the engineers and computer scientists around me, ‘Who are the people you pay to do this task work of labeling images and classification tasks and cleaning up databases?’” says Gray. Some people said they didn’t know. Others said they didn’t want to know and were concerned that if they looked too closely they might find unsavory working conditions.

So Gray decided to find out for herself. Who are the people, often invisible, who pick up the tasks necessary for these platforms to run? Why do they do this work, and why do they leave? What are their working conditions?

The interview that follows is interesting, although it doesn’t seem to me that the question about working conditions is answered in any great detail. However, there is this rather intriguing policy suggestion,

If companies want to happily use contract work because they need to constantly churn through new ideas and new aptitudes, the only way to make that a good thing for both sides of that enterprise is for people to be able to jump into that pool. And people do that when they have health care and other provisions. This is the business case for universal health care, for universal education as a public good. It’s going to benefit all enterprise.

I want to get across to people that, in a lot of ways, we’re describing work conditions. We’re not describing a particular type of work. We’re describing today’s conditions for project-based task-driven work. This can happen to everybody’s jobs, and I hate that that might be the motivation because we should have cared all along, as this has been happening to plenty of people. For me, the message of this book is: let’s make this not just manageable, but sustainable and enjoyable. Stop making our lives wrap around work, and start making work serve our lives.

Puts a different spin on AI and work, doesn’t it?

AI (artificial intelligence) and a hummingbird robot

Every once in a while I stumble across a hummingbird robot story (my August 12, 2011 posting and my August 1, 2014 posting). Here’s what the hummingbird robot looks like now (hint: there’s a significant reduction in size),

Caption: Purdue University researchers are building robotic hummingbirds that learn from computer simulations how to fly like a real hummingbird does. The robot is encased in a decorative shell. Credit: Purdue University photo/Jared Pike

I think this is the first time I’ve seen one of these projects not being funded by the military, which explains why the researchers are more interested in using these hummingbird robots for observing wildlife and for rescue efforts in emergency situations. Still, they do acknowledge these robots could also be used in covert operations.

From a May 9, 2019 news item on ScienceDaily,

What can fly like a bird and hover like an insect?

Your friendly neighborhood hummingbirds. If drones had this combo, they would be able to maneuver better through collapsed buildings and other cluttered spaces to find trapped victims.

Purdue University researchers have engineered flying robots that behave like hummingbirds, trained by machine learning algorithms based on various techniques the bird uses naturally every day.

This means that after learning from a simulation, the robot “knows” how to move around on its own like a hummingbird would, such as discerning when to perform an escape maneuver.

Artificial intelligence, combined with flexible flapping wings, also allows the robot to teach itself new tricks. Even though the robot can’t see yet, for example, it senses by touching surfaces. Each touch alters an electrical current, which the researchers realized they could track.

“The robot can essentially create a map without seeing its surroundings. This could be helpful in a situation when the robot might be searching for victims in a dark place — and it means one less sensor to add when we do give the robot the ability to see,” said Xinyan Deng, an associate professor of mechanical engineering at Purdue.

The researchers even have a video,

A May 9, 2019 Purdue University news release (also on EurekAlert), which originated the news item, provides more detail,


The researchers [presented] their work on May 20 at the 2019 IEEE International Conference on Robotics and Automation in Montreal. A YouTube video is available at https://www.youtube.com/watch?v=hl892dHqfA&feature=youtu.be. [It’s the video I’ve embedded above.]

Drones can’t be made infinitely smaller, due to the way conventional aerodynamics work. They wouldn’t be able to generate enough lift to support their weight.

But hummingbirds don’t use conventional aerodynamics – and their wings are resilient. “The physics is simply different; the aerodynamics is inherently unsteady, with high angles of attack and high lift. This makes it possible for smaller, flying animals to exist, and also possible for us to scale down flapping wing robots,” Deng said.

Researchers have been trying for years to decode hummingbird flight so that robots can fly where larger aircraft can’t. In 2011, the company AeroVironment, commissioned by DARPA, an agency within the U.S. Department of Defense, built a robotic hummingbird that was heavier than a real one but not as fast, with helicopter-like flight controls and limited maneuverability. It required a human to be behind a remote control at all times.

Deng’s group and her collaborators studied hummingbirds themselves for multiple summers in Montana. They documented key hummingbird maneuvers, such as making a rapid 180-degree turn, and translated them to computer algorithms that the robot could learn from when hooked up to a simulation.

Further study on the physics of insects and hummingbirds allowed Purdue researchers to build robots smaller than hummingbirds – and even as small as insects – without compromising the way they fly. The smaller the size, the greater the wing flapping frequency, and the more efficiently they fly, Deng says.

The robots have 3D-printed bodies, wings made of carbon fiber and laser-cut membranes. The researchers have built one hummingbird robot weighing 12 grams – the weight of the average adult Magnificent Hummingbird – and another insect-sized robot weighing 1 gram. The hummingbird robot can lift more than its own weight, up to 27 grams.

Designing their robots with higher lift gives the researchers more wiggle room to eventually add a battery and sensing technology, such as a camera or GPS. Currently, the robot needs to be tethered to an energy source while it flies – but that won’t be for much longer, the researchers say.

The robots could fly silently just as a real hummingbird does, making them more ideal for covert operations. And they stay steady through turbulence, which the researchers demonstrated by testing the dynamically scaled wings in an oil tank.

The robot requires only two motors and can control each wing independently of the other, which is how flying animals perform highly agile maneuvers in nature.

“An actual hummingbird has multiple groups of muscles to do power and steering strokes, but a robot should be as light as possible, so that you have maximum performance on minimal weight,” Deng said.

Robotic hummingbirds wouldn’t only help with search-and-rescue missions, but also allow biologists to more reliably study hummingbirds in their natural environment through the senses of a realistic robot.

“We learned from biology to build the robot, and now biological discoveries can happen with extra help from robots,” Deng said.
Simulations of the technology are available open-source at https://github.com/purdue-biorobotics/flappy.

Early stages of the work, including the Montana hummingbird experiments in collaboration with Bret Tobalske’s group at the University of Montana, were financially supported by the National Science Foundation.

The researchers have three papers on arxiv.org, available open access,

Learning Extreme Hummingbird Maneuvers on Flapping Wing Robots
Fan Fei, Zhan Tu, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.0962

Biological studies show that hummingbirds can perform extreme aerobatic maneuvers during fast escape. Given a sudden looming visual stimulus at hover, a hummingbird initiates a fast backward translation coupled with a 180-degree yaw turn, which is followed by instant posture stabilization in just under 10 wingbeats. Considering the wingbeat frequency of 40 Hz, this aggressive maneuver is carried out in just 0.2 seconds. Inspired by the hummingbirds’ near-maximal performance during such extreme maneuvers, we developed a flight control strategy and experimentally demonstrated that such maneuverability can be achieved by an at-scale 12-gram hummingbird robot equipped with just two actuators. The proposed hybrid control policy combines model-based nonlinear control with model-free reinforcement learning. We use model-based nonlinear control for nominal flight control, as the dynamic model is relatively accurate for these conditions. However, during extreme maneuvers, the modeling error becomes unmanageable. A model-free reinforcement learning policy trained in simulation was optimized to ‘destabilize’ the system and maximize the performance during maneuvering. The hybrid policy manifests a maneuver that is close to that observed in hummingbirds. Direct simulation-to-real transfer is achieved, demonstrating the hummingbird-like fast evasive maneuvers on the at-scale hummingbird robot.
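
For readers curious about what a ‘hybrid control policy’ might look like in practice, here’s a toy Python sketch that blends a nominal model-based command with a learned correction as the maneuver becomes more aggressive. It only illustrates the general structure described in the abstract; the gains, the blending rule, and the stand-in ‘learned’ term are all invented and should not be read as the Purdue controller.

```python
# Toy sketch of a hybrid control policy: a nominal model-based controller plus
# a learned residual that takes over as the maneuver becomes more aggressive.
# Structure for illustration only; all numbers are invented.
import numpy as np

def nominal_controller(state, target):
    """Simple proportional-derivative law standing in for model-based control."""
    kp, kd = 4.0, 1.5
    return kp * (target - state["pos"]) - kd * state["vel"]

def learned_residual(state):
    """Placeholder for a policy trained in simulation (e.g. a small neural net)."""
    return 0.1 * np.tanh(state["vel"])  # invented stand-in output

def hybrid_policy(state, target, aggressiveness):
    """Blend the two commands; the learned term gets more weight
    (aggressiveness in [0, 1]) as flight leaves the nominal regime."""
    return nominal_controller(state, target) + aggressiveness * learned_residual(state)

state = {"pos": np.zeros(3), "vel": np.array([0.2, 0.0, -0.1])}
print(hybrid_policy(state, target=np.array([0.0, 0.0, 1.0]), aggressiveness=0.8))
```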

Acting is Seeing: Navigating Tight Space Using Flapping Wings
Zhan Tu, Fan Fei, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.0868

Wings of flying animals can not only generate lift and control torques but also can sense their surroundings. Such dual functions of sensing and actuation coupled in one element are particularly useful for small sized bio-inspired robotic flyers, whose weight, size, and power are under stringent constraint. In this work, we present the first flapping-wing robot using its flapping wings for environmental perception and navigation in tight space, without the need for any visual feedback. As the test platform, we introduce the Purdue Hummingbird, a flapping-wing robot with 17cm wingspan and 12 grams weight, with a pair of 30-40Hz flapping wings driven by only two actuators. By interpreting the wing loading feedback and its variations, the vehicle can detect the presence of environmental changes such as grounds, walls, stairs, obstacles and wind gust. The instantaneous wing loading can be obtained through the measurements and interpretation of the current feedback by the motors that actuate the wings. The effectiveness of the proposed approach is experimentally demonstrated on several challenging flight tasks without vision: terrain following, wall following and going through a narrow corridor. To ensure flight stability, a robust controller was designed for handling unforeseen disturbances during the flight. Sensing and navigating one’s environment through actuator loading is a promising method for mobile robots, and it can serve as an alternative or complementary method to visual perception.
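
The ‘acting is seeing’ idea boils down to watching the wing motors’ current draw for deviations from the normal flapping load. Here’s a toy Python sketch of that kind of detection; the moving-baseline threshold and all of the numbers are mine, invented to illustrate the principle rather than reproduce the Purdue implementation.

```python
# Hypothetical sketch: flagging environmental changes (wall, ground, gust) from
# the wing actuator's current draw. Thresholding scheme and data are invented.
import numpy as np

def detect_loading_events(current_samples, window=50, threshold=3.0):
    """Return sample indices where the current deviates from its recent
    baseline by more than `threshold` standard deviations."""
    current = np.asarray(current_samples, dtype=float)
    events = []
    for i in range(window, len(current)):
        baseline = current[i - window:i]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        if abs(current[i] - mu) / sigma > threshold:
            events.append(i)
    return events

# Simulated current trace: steady flapping load with a bump when a wingtip
# brushes a surface around sample 300.
rng = np.random.default_rng(0)
trace = 1.0 + 0.02 * rng.standard_normal(400)
trace[300:310] += 0.3
print(detect_loading_events(trace)[:5])
```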

Flappy Hummingbird: An Open Source Dynamic Simulation of Flapping Wing Robots and Animals
Fan Fei, Zhan Tu, Yilun Yang, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.0962

Insects and hummingbirds exhibit extraordinary flight capabilities and can simultaneously master seemingly conflicting goals: stable hovering and aggressive maneuvering, unmatched by small scale man-made vehicles. Flapping Wing Micro Air Vehicles (FWMAVs) hold great promise for closing this performance gap. However, design and control of such systems remain challenging due to various constraints. Here, we present an open source high fidelity dynamic simulation for FWMAVs to serve as a testbed for the design, optimization and flight control of FWMAVs. For simulation validation, we recreated the hummingbird-scale robot developed in our lab in the simulation. System identification was performed to obtain the model parameters. The force generation, open-loop and closed-loop dynamic response between simulated and experimental flights were compared and validated. The unsteady aerodynamics and the highly nonlinear flight dynamics present challenging control problems for conventional and learning control algorithms such as Reinforcement Learning. The interface of the simulation is fully compatible with OpenAI Gym environment. As a benchmark study, we present a linear controller for hovering stabilization and a Deep Reinforcement Learning control policy for goal-directed maneuvering. Finally, we demonstrate direct simulation-to-real transfer of both control policies onto the physical robot, further demonstrating the fidelity of the simulation.
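
Since the simulation is advertised as OpenAI Gym-compatible, driving it should look like any other Gym control loop. The sketch below shows the general shape of such a loop with a toy hover controller; the environment ID (‘FlappyHover-v0’), the observation and action layout, and the classic four-value step API are assumptions on my part, so check the purdue-biorobotics/flappy repository for the actual interface.

```python
# Minimal sketch of a Gym control loop for a flapping-wing simulation.
# "FlappyHover-v0" and the observation/action layout are assumptions; see the
# purdue-biorobotics/flappy repository for the real environment names and API.
import gym
import numpy as np

def hover_action(observation):
    """Toy proportional controller on altitude error (altitude assumed at obs[2])."""
    altitude_error = 1.0 - observation[2]   # hold a 1 m hover (assumed units)
    base_command = 0.5                      # invented nominal flapping command
    return np.clip([base_command + 0.2 * altitude_error], -1.0, 1.0)

env = gym.make("FlappyHover-v0")            # hypothetical environment ID
obs = env.reset()                           # classic (pre-0.26) Gym API assumed
for _ in range(200):
    obs, reward, done, info = env.step(hover_action(obs))
    if done:
        obs = env.reset()
env.close()
```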

Enjoy!

Electronics begone! Enter: the light-based brainlike computing chip

At this point, it’s possible I’m wrong but I think this is the first ‘memristor’-type device (also called a neuromorphic chip) based on light rather than electronics that I’ve featured on this blog. Strictly speaking, it’s not a memristor, but it has similar properties, which is why it’s called a neuromorphic chip.

Caption: The optical microchips that the researchers are working on developing are about the size of a one-cent piece. Credit: WWU Muenster – Peter Leßmann

A May 8, 2019 news item on Nanowerk announces this new approach to neuromorphic hardware (Note: A link has been removed),

Researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain.

The scientists produced a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses. The network is able to “learn” information and use this as a basis for computing and recognizing patterns. As the system functions solely with light and not with electrons, it can process data many times faster than traditional systems. …

A May 8, 2019 University of Münster press release (also on EurekAlert), which originated the news item, reveals the full story,

A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched – for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units – the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place – in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use this as a basis for computing and recognizing patterns – just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.” The study is published in the latest issue of the “Nature” journal.

The story in detail – background and method used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems – in which photons, i.e. light particles, are used – are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials – which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline – when their atoms arrange themselves in a regular fashion – or amorphous – when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at the Münster University.

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on the so-called wavelength division multiplex technology, which is a process in which light is transmitted on different channels within the optical nanocircuit.

In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system “learns” from examples and can, ultimately, generalise them. In the case of the two algorithms used – both in so-called supervised and in unsupervised learning – the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought – one of which was four consecutive letters.

“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.

A very specific example is that with the aid of such hardware cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project ‘Fun-COMP’ by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.
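
To make the operating principle a little more concrete, here’s a toy numerical cartoon in Python: each synapse is treated as a phase-change cell whose optical transmission acts as a weight, and a neuron ‘fires’ when the summed transmitted pulse power crosses a threshold. The numbers and update rule are invented; this is a sketch of the idea, not a model of the Münster/Oxford/Exeter device.

```python
# Toy cartoon of a photonic neurosynaptic network: phase-change synapses act as
# optical weights, one input channel per wavelength (as in wavelength division
# multiplexing). All values are invented for illustration.
import numpy as np

class PhotonicNeuron:
    def __init__(self, n_synapses, threshold=1.0, rng=None):
        rng = rng or np.random.default_rng(0)
        # Transmission of each synapse, between 0 (amorphous, opaque)
        # and 1 (crystalline, transparent).
        self.weights = rng.uniform(0.2, 0.8, n_synapses)
        self.threshold = threshold

    def forward(self, input_pulses):
        """input_pulses: optical pulse power arriving on each input channel."""
        integrated_power = float(np.dot(self.weights, input_pulses))
        return integrated_power >= self.threshold, integrated_power

    def update_synapse(self, index, delta):
        """Crude stand-in for a laser pulse partially switching a cell's phase."""
        self.weights[index] = np.clip(self.weights[index] + delta, 0.0, 1.0)

neuron = PhotonicNeuron(n_synapses=15)
pattern = np.zeros(15)
pattern[[2, 5, 9]] = 1.0   # a light-pulse pattern on three channels
print(neuron.forward(pattern))
```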

Here’s a link to and a citation for the paper,

All-optical spiking neurosynaptic networks with self-learning capabilities by J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran & W. H. P. Pernice. Nature volume 569, pages 208–214 (2019) DOI: https://doi.org/10.1038/s41586-019-1157-8 Issue Date: 09 May 2019

This paper is behind a paywall.

For the curious, I found a little more information about Fun-COMP (functionally-scaled computer technology). It’s a European Commission (EC) Horizon 2020 project coordinated through the University of Exeter. For details such as the total cost, the contribution from the EC, the list of partners, and more, there is the Fun-COMP webpage on fabiodisconzi.com.