Category Archives: robots

A new generation of xenobots made with frog cells

I meant to feature this work last year when it was first announced, so I’m delighted a second chance has come around so soon after. From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Here’s a video of the Xenobot 2.0. It’s amazing but, for anyone who has problems with animal experimentation, this may be disturbing,


The next version of Xenobots have been created – they’re faster, live longer, and can now record information. (Source: Doug Blackiston & Emma Lederer)

A March 31, 2021 Tufts University news release by Mike Silver (also on EurekAlert and adapted and published as Scientists Create the Next Generation of Living Robots on the University of Vermont website as a UVM Today story) provides more detail,

The same team has now created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory. The new generation Xenobots also move faster, navigate different environments, and have longer lifespans than the first edition, and they still have the ability to work together in groups and heal themselves if damaged. The results of the new research were published today [March 31, 2021] in Science Robotics.

Compared to Xenobots 1.0, in which the millimeter-sized automatons were constructed in a “top down” approach by manual placement of tissue and surgical shaping of frog skin and cardiac cells to produce motion, the next version of Xenobots takes a “bottom up” approach. The biologists at Tufts took stem cells from embryos of the African frog Xenopus laevis (hence the name “Xenobots”) and allowed them to self-assemble and grow into spheroids, where some of the cells after a few days differentiated to produce cilia – tiny hair-like projections that move back and forth or rotate in a specific way. Instead of using manually sculpted cardiac cells whose natural rhythmic contractions allowed the original Xenobots to scuttle around, cilia give the new spheroidal bots “legs” to move them rapidly across a surface. In a frog, or human for that matter, cilia would normally be found on mucous surfaces, like in the lungs, to help push out pathogens and other foreign material. On the Xenobots, they are repurposed to provide rapid locomotion. 

“We are witnessing the remarkable plasticity of cellular collectives, which build a rudimentary new ‘body’ that is quite distinct from their default – in this case, a frog – despite having a completely normal genome,” said Michael Levin, Distinguished Professor of Biology and director of the Allen Discovery Center at Tufts University, and corresponding author of the study. “In a frog embryo, cells cooperate to create a tadpole. Here, removed from that context, we see that cells can re-purpose their genetically encoded hardware, like cilia, for new functions such as locomotion. It is amazing that cells can spontaneously take on new roles and create new body plans and behaviors without long periods of evolutionary selection for those features.”

“In a way, the Xenobots are constructed much like a traditional robot.  Only we use cells and tissues rather than artificial components to build the shape and create predictable behavior.” said senior scientist Doug Blackiston, who co-first authored the study with research technician Emma Lederer. “On the biology end, this approach is helping us understand how cells communicate as they interact with one another during development, and how we might better control those interactions.”

While the Tufts scientists created the physical organisms, scientists at UVM were busy running computer simulations that modeled different shapes of the Xenobots to see if they might exhibit different behaviors, both individually and in groups. Using the Deep Green supercomputer cluster at UVM’s Vermont Advanced Computing Core, the team, led by computer scientists and robotics experts Josh Bongard and Sam Kriegman, simulated the Xenobots under hundreds of thousands of random environmental conditions using an evolutionary algorithm. These simulations were used to identify Xenobots most able to work together in swarms to gather large piles of debris in a field of particles.

“We know the task, but it’s not at all obvious — for people — what a successful design should look like. That’s where the supercomputer comes in and searches over the space of all possible Xenobot swarms to find the swarm that does the job best,” says Bongard. “We want Xenobots to do useful work. Right now we’re giving them simple tasks, but ultimately we’re aiming for a new kind of living tool that could, for example, clean up microplastics in the ocean or contaminants in soil.” 
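For readers who want a feel for how such a search works, here’s a deliberately minimal Python sketch of an evolutionary design loop of the kind described above. Everything in it is an illustrative assumption: the 3×3 body representation, the mutation rule, and the toy fitness function standing in for the physics-simulated debris-gathering score are mine, not the UVM team’s published pipeline.

```python
import random

def random_design():
    """A hypothetical Xenobot body plan: a 3x3 grid where each cell
    is either passive tissue (0) or ciliated tissue (1)."""
    return [random.randint(0, 1) for _ in range(9)]

def mutate(design):
    """Flip one randomly chosen cell between passive and ciliated."""
    child = design[:]
    i = random.randrange(len(child))
    child[i] = 1 - child[i]
    return child

def fitness(design):
    """Stand-in for the physics simulation: here we simply reward
    ciliated cells on the perimeter (everything but the center, index 4).
    The real score would come from simulated debris gathering."""
    return sum(v for i, v in enumerate(design) if i != 4)

def evolve(generations=100, population_size=20):
    """Keep the best half each generation; refill with mutated copies."""
    population = [random_design() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        children = [mutate(random.choice(survivors))
                    for _ in range(population_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())
```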

It turns out, the new Xenobots are much faster and better at tasks such as garbage collection than last year’s model, working together in a swarm to sweep through a petri dish and gather larger piles of iron oxide particles. They can also cover large flat surfaces, or travel through narrow capillary tubes.

These studies also suggest that the in silico [computer] simulations could in the future optimize additional features of biological bots for more complex behaviors. One important feature added in the Xenobot upgrade is the ability to record information.

Now with memory

A central feature of robotics is the ability to record memory and use that information to modify the robot’s actions and behavior. With that in mind, the Tufts scientists engineered the Xenobots with a read/write capability to record one bit of information, using a fluorescent reporter protein called EosFP, which normally glows green. However, when exposed to light at 390nm wavelength, the protein emits red light instead. 

The cells of the frog embryos were injected with messenger RNA coding for the EosFP protein before stem cells were excised to create the Xenobots. The mature Xenobots now have a built-in fluorescent switch which can record exposure to blue light around 390nm.

The researchers tested the memory function by allowing 10 Xenobots to swim around a surface on which one spot is illuminated with a beam of 390nm light. After two hours, they found that three bots emitted red light. The rest remained their original green, effectively recording the “travel experience” of the bots.
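To picture the one-bit read/write mechanism, here’s a toy Python model of the photoconvertible reporter. The class name, the exposure tolerance, and the 3-of-10 scenario are my own illustrative assumptions mirroring the experiment above; the real switch is a protein, not software.

```python
from dataclasses import dataclass

@dataclass
class XenobotReporter:
    """Toy model of the one-bit EosFP memory: the protein starts green
    and switches permanently to red after exposure near 390 nm."""
    color: str = "green"

    def expose(self, wavelength_nm: float) -> None:
        # Photoconversion is one-way: once red, always red.
        if abs(wavelength_nm - 390) < 10:  # tolerance is an assumption
            self.color = "red"

# Toy version of the swim test: 10 bots, 3 of which happen to pass
# through the illuminated spot.
bots = [XenobotReporter() for _ in range(10)]
for bot in bots[:3]:
    bot.expose(390)
print(sum(bot.color == "red" for bot in bots), "bots recorded the light")  # 3
```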

This proof of principle of molecular memory could be extended in the future to detect and record not only light but also the presence of radioactive contamination, chemical pollutants, drugs, or a disease condition. Further engineering of the memory function could enable the recording of multiple stimuli (more bits of information) or allow the bots to release compounds or change behavior upon sensation of stimuli. 

“When we bring in more capabilities to the bots, we can use the computer simulations to design them with more complex behaviors and the ability to carry out more elaborate tasks,” said Bongard. “We could potentially design them not only to report conditions in their environment but also to modify and repair conditions in their environment.”

Xenobot, heal thyself

“The biological materials we are using have many features we would like to someday implement in the bots – cells can act like sensors, motors for movement, communication and computation networks, and recording devices to store information,” said Levin. “One thing the Xenobots and future versions of biological bots can do that their metal and plastic counterparts have difficulty doing is constructing their own body plan as the cells grow and mature, and then repairing and restoring themselves if they become damaged. Healing is a natural feature of living organisms, and it is preserved in Xenobot biology.” 

The new Xenobots were remarkably adept at healing and would close the majority of a severe full-length laceration half their thickness within 5 minutes of the injury. All injured bots were able to ultimately heal the wound, restore their shape and continue their work as before. 

Another advantage of a biological robot, Levin adds, is metabolism. Unlike metal and plastic robots, the cells in a biological robot can absorb and break down chemicals and work like tiny factories synthesizing and excreting chemicals and proteins. The whole field of synthetic biology – which has largely focused on reprogramming single celled organisms to produce useful molecules – can now be exploited in these multicellular creatures.

Like the original Xenobots, the upgraded bots can survive up to ten days on their embryonic energy stores and run their tasks without additional energy sources, but they can also carry on at full speed for many months if kept in a “soup” of nutrients. 

What the scientists are really after

An engaging description of the biological bots and what we can learn from them is presented in a TED talk by Michael Levin. In his TED Talk, professor Levin describes not only the remarkable potential for tiny biological robots to carry out useful tasks in the environment or potentially in therapeutic applications, but he also points out what may be the most valuable benefit of this research – using the bots to understand how individual cells come together, communicate, and specialize to create a larger organism, as they do in nature to create a frog or human. It’s a new model system that can provide a foundation for regenerative medicine.

Xenobots and their successors may also provide insight into how multicellular organisms arose from ancient single celled organisms, and the origins of information processing, decision making and cognition in biological organisms. 

Recognizing the tremendous future for this technology, Tufts University and the University of Vermont have established the Institute for Computer Designed Organisms (ICDO), to be formally launched in the coming months, which will pull together resources from each university and outside sources to create living robots with increasingly sophisticated capabilities.

The ultimate goal for the Tufts and UVM researchers is not only to explore the full scope of biological robots they can make; it is also to understand the relationship between the ‘hardware’ of the genome and the ‘software’ of cellular communications that go into creating organized tissues, organs and limbs. Then we can gain greater control of that morphogenesis for regenerative medicine, and the treatment of cancer and diseases of aging.

Here’s a link to and a citation for the paper,

A cellular platform for the development of synthetic living machines by Douglas Blackiston, Emma Lederer, Sam Kriegman, Simon Garnier, Joshua Bongard, and Michael Levin. Science Robotics 31 Mar 2021: Vol. 6, Issue 52, eabf1571 DOI: 10.1126/scirobotics.abf1571

This paper is behind a paywall.

An electronics-free, soft robotic dragonfly

From the description on YouTube,

With the ability to sense changes in pH, temperature and oil, this completely soft, electronics-free robot dubbed “DraBot” could be the prototype for future environmental sentinels. …

Music: Joneve by Mello C from the Free Music Archive

A favourite motif in the Art Nouveau movement (more about that later in the post), dragonflies, or a facsimile thereof, feature in a March 25, 2021 Duke University news release (also on EurekAlert) by Ken Kingery,

Engineers at Duke University have developed an electronics-free, entirely soft robot shaped like a dragonfly that can skim across water and react to environmental conditions such as pH, temperature or the presence of oil. The proof-of-principle demonstration could be the precursor to more advanced, autonomous, long-range environmental sentinels for monitoring a wide range of potential telltale signs of problems.

The soft robot is described online March 25 [2021] in the journal Advanced Intelligent Systems.

Soft robots are a growing trend in the industry due to their versatility. Soft parts can handle delicate objects such as biological tissues that metal or ceramic components would damage. Soft bodies can help robots float or squeeze into tight spaces where rigid frames would get stuck.

The expanding field was on the mind of Shyni Varghese, professor of biomedical engineering, mechanical engineering and materials science, and orthopaedic surgery at Duke, when inspiration struck.

“I got an email from Shyni from the airport saying she had an idea for a soft robot that uses a self-healing hydrogel that her group has invented in the past to react and move autonomously,” said Vardhman Kumar, a PhD student in Varghese’s laboratory and first author of the paper. “But that was the extent of the email, and I didn’t hear from her again for days. So the idea sort of sat in limbo for a little while until I had enough free time to pursue it, and Shyni said to go for it.”

In 2012, Varghese and her laboratory created a self-healing hydrogel that reacts to changes in pH in a matter of seconds. Whether it be a crack in the hydrogel or two adjoining pieces “painted” with it, a change in acidity causes the hydrogel to form new bonds, which are completely reversible when the pH returns to its original levels.

Varghese’s hastily written idea was to find a way to use this hydrogel on a soft robot that could travel across water and indicate places where the pH changes. Along with a few other innovations to signal changes in its surroundings, she figured her lab could design such a robot as a sort of autonomous environmental sensor.

With the help of Ung Hyun Ko, a postdoctoral fellow also in Varghese’s laboratory, Kumar began designing a soft robot based on a fly. After several iterations, the pair settled on the shape of a dragonfly engineered with a network of interior microchannels that allow it to be controlled with air pressure.

They created the body–about 2.25 inches long with a 1.4-inch wingspan–by pouring silicone into an aluminum mold and baking it. The team used soft lithography to create interior channels and connected them with flexible silicone tubing.

DraBot was born.

“Getting DraBot to respond to air pressure controls over long distances using only self-actuators without any electronics was difficult,” said Ko. “That was definitely the most challenging part.”

DraBot works by controlling the air pressure coming into its wings. Microchannels carry the air into the front wings, where it escapes through a series of holes pointed directly into the back wings. If both back wings are down, the airflow is blocked, and DraBot goes nowhere. But if both wings are up, DraBot goes forward.

To add an element of control, the team also designed balloon actuators under each of the back wings close to DraBot’s body. When inflated, the balloons cause the wings to curl upward. By changing which wings are up or down, the researchers tell DraBot where to go.

“We were happy when we were able to control DraBot, but it’s based on living things,” said Kumar. “And living things don’t just move around on their own, they react to their environment.”

That’s where self-healing hydrogel comes in. By painting one set of wings with the hydrogel, the researchers were able to make DraBot responsive to changes in the surrounding water’s pH. If the water becomes acidic, one side’s front wing fuses with the back wing. Instead of traveling in a straight line as instructed, the imbalance causes the robot to spin in a circle. Once the pH returns to a normal level, the hydrogel “un-heals,” the fused wings separate, and DraBot once again becomes fully responsive to commands.
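The steering logic described over the last few paragraphs amounts to a small truth table, which can be sketched in Python. This is a toy model under my own assumptions: the pH threshold, the wing-to-turn-direction mapping, and the function name are illustrative, and the actual robot is driven by air pressure and hydrogel chemistry, not software.

```python
def drabot_motion(left_wing_up: bool, right_wing_up: bool,
                  water_ph: float) -> str:
    """Toy truth table for DraBot's behavior as described above."""
    if water_ph < 7.0:  # threshold is an assumption, not from the paper
        # Acidic water "heals" the hydrogel-painted wings together on
        # one side, overriding the air-pressure controls.
        return "spin in a circle (pH alert)"
    if left_wing_up and right_wing_up:
        return "forward"  # air escapes past both raised back wings
    if not left_wing_up and not right_wing_up:
        return "stop"     # both lowered back wings block the airflow
    # One balloon actuator inflated: asymmetric airflow turns the robot
    # (which wing maps to which direction is an assumption here).
    return "turn right" if left_wing_up else "turn left"

for state in [(True, True, 7.4), (True, False, 7.4),
              (False, False, 7.4), (True, True, 5.0)]:
    print(state, "->", drabot_motion(*state))
```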

To beef up its environmental awareness, the researchers also placed sponges under the wings and doped the wings with temperature-responsive materials. When DraBot skims over water with oil floating on the surface, the sponges will soak it up and change to a color corresponding to that of the oil. And when the water becomes overly warm, DraBot’s wings change from red to yellow.

The researchers believe these types of measurements could play an important part in an environmental robotic sensor in the future. Responsiveness to pH can detect freshwater acidification, which is a serious environmental problem affecting several geologically sensitive regions. The ability to soak up oils makes such long-distance skimming robots an ideal candidate for early detection of oil spills. Changing colors due to temperatures could help spot signs of red tide and the bleaching of coral reefs, which leads to a decline in aquatic life populations.

The team also sees many ways that they could improve on their proof-of-concept. Wireless cameras or solid-state sensors could enhance the capabilities of DraBot. And creating a form of onboard propellant would help similar bots break free of their tubing.

“Instead of using air pressure to control the wings, I could envision using some sort of synthetic biology that generates energy,” said Varghese. “That’s a totally different field than I work in, so we’ll have to have a conversation with some potential collaborators to see what’s possible. But that’s part of the fun of working on an interdisciplinary project like this.”

Here’s a link to and a citation for the paper,

Microengineered Materials with Self‐Healing Features for Soft Robotics by Vardhman Kumar, Ung Hyun Ko, Yilong Zhou, Jiaul Hoque, Gaurav Arya, Shyni Varghese. Advanced Intelligent Systems DOI: https://doi.org/10.1002/aisy.202100005 First published: 25 March 2021

This paper is open access.

The earlier reference to Art Nouveau gives me an excuse to introduce this March 7, 2020 (?) essay by Bex Simon (artist blacksmith) on her eponymous website.

Dragonflies, in particular, are a very popular subject matter in the Art Nouveau movement. Art Nouveau, with its wonderful flowing lines and hidden fantasies, is full of symbolism. The movement was a response to the profound social changes and industrialization of everyday life, and the style of the movement was, in part, inspired by Japanese art.

Simon features examples of Art Nouveau dragonfly art along with examples of her own take on the subject. She also has this,

[downloaded from https://www.bexsimon.com/dragonflies-and-butterflies-in-art-nouveau/]

This is a closeup of a real dragonfly as seen on Simon’s website. If you have an interest, reading her March 7, 2020 (?) essay and gazing at the images won’t take much time.

Art, sound, AI, & the Metacreation Lab’s Spring 2021 newsletter

The Metacreation Lab’s Spring 2021 newsletter (received via email) features a number of events either currently taking place or about to take place.

2021 AI Song Contest

2021 marks the 2nd year for this international event, an artificial intelligence/AI Song Contest 2021. The folks at Simon Fraser University’s (SFU) Metacreation Lab have an entry for the 2021 event, A song about the weekend (and you can do whatever you want). Should you click on the song entry, you will find an audio file, a survey/vote consisting of four questions and, if you keep scrolling down, more information about the creative team, the song, and more,

Driven by collaborations involving scientists, experts in artificial intelligence, cognitive sciences, designers, and artists, the Metacreation Lab for Creative AI is at the forefront of the development of generative systems, whether these are embedded in interactive experiences or automating workflows integrated into cutting-edge creative software.

Team:

Cale Plut (Composer and musician) is a PhD Student in the Metacreation lab, researching AI music applications in video games.

Philippe Pasquier (Producer and supervisor) is an Associate Professor, and leads the Metacreation Lab. 

Jeff Ens (AI programmer) is a PhD Candidate in the Metacreation lab, researching AI models for music generation.

Renaud Tchemeube (Producer and interaction designer) is a PhD Student in the Metacreation Lab, researching interaction software design for creativity.

Tara Jadidi (Research Assistant) is an undergraduate student at FUM, Iran, working with the Metacreation lab.

Dimiter Zlatkov (Research Assistant) is an undergraduate student at UBC, working with the Metacreation lab.

ABOUT THE SONG

A song about the weekend (and you can do whatever you want) explores the relationships between AI, humans, labour, and creation in a lighthearted and fun song. It is co-created with the Multi-track Music Machine (MMM).

Through the history of automation and industrialization, the relationship between the labour magnification power of automation and the recipients of the benefits of that magnification has been in contention. While increasing levels of automation are often accompanied by promises of future leisure increases, this rarely materializes for the workers whose labour is multiplied. By primarily using automated methods to create a “fun” song about leisure, we highlight both the promise of AI-human cooperation as well as the disparities in its real-world deployment.

As for the competition itself, here’s more from the FAQs (frequently asked questions),

What is the AI Song Contest?

AI Song Contest is an international creative AI contest. Teams from all over the world try to create a 4-minute pop song with the help of artificial intelligence.

When and where does it take place?

Between June 1, 2021 and July 1, 2021 voting is open for the international public. On July 6 there will be multiple online panel sessions, and the winner of the AI Song Contest 2021 will be announced in an online award ceremony. All sessions on July 6 are organised in collaboration with Wallifornia MusicTech.

How is the winner determined?

Each participating team will be awarded two sets of points: one from a public vote by the contest’s international audience, the other from the determination of an expert jury.

Anyone can evaluate as many songs as they like: from one, up to all thirty-eight. Every song can be evaluated only once. Even though it won’t count in the grand total, lyrics can be evaluated too; we do like to determine which team wrote the best lyrics according to the audience.

Can I vote multiple times for the same team?

No, votes are controlled by IP address. So only one of your votes will count.

Is this the first time the contest is organised?

This is the second time the AI Song Contest is organised. The contest was first initiated in 2020 by Dutch public broadcaster VPRO together with NPO Innovation and NPO 3FM. Teams from Europe and Australia tried to create a Eurovision kind of song with the help of AI. Team Uncanny Valley from Australia won the first edition with their song Beautiful the World. The 2021 edition is organised independently.

What is the definition of artificial intelligence in this contest?

Artificial intelligence is a very broad concept. For this contest it will mean that teams can use techniques such as -but not limited to- machine learning, such as deep learning, natural language processing, algorithmic composition or combining rule-based approaches with neural networks for the creation of their songs. Teams can create their own AI tools, or use existing models and algorithms.  

What are possible challenges?

Read here about the challenges teams from last year’s contest faced.

As an AI researcher, can I collaborate with musicians?

Yes – this is strongly encouraged!

For the 2020 edition, all songs had to be Eurovision-style. Is that also the intention for 2021 entries?

Last year, the first year the contest was organized, it was indeed all about Eurovision. For this year’s competition, we are trying to expand geographically, culturally, and musically. Teams from all over the world can compete, and songs in all genres can be submitted.

If you’re not familiar with Eurovision-style, you can find a compilation video with brief excerpts from the 26 finalists for Eurovision 2021 here (Bill Young’s May 23, 2021 posting on tellyspotting.kera.org; the video runs under 10 mins.). There’s also the “Eurovision Song Contest: The Story of Fire Saga” 2020 movie starring Rachel McAdams, Will Ferrell, and Dan Stevens. It’s intended as a gentle parody but the style is all there.
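Out of curiosity about the scoring mechanics described in the FAQ (two sets of points, one counted vote per IP address), here’s a toy Python sketch of how such a tally might work. The function names, the equal weighting of public and jury points, and the de-duplication rule are all my assumptions; the contest doesn’t publish its implementation.

```python
def accept_vote(seen_ips: set, ip: str) -> bool:
    """Enforce one counted vote per IP address."""
    if ip in seen_ips:
        return False
    seen_ips.add(ip)
    return True

def tally(public_points: dict, jury_points: dict) -> dict:
    """Combine the two point sets per team (equal weighting assumed)."""
    teams = set(public_points) | set(jury_points)
    return {t: public_points.get(t, 0) + jury_points.get(t, 0) for t in teams}

seen: set = set()
print(accept_vote(seen, "203.0.113.7"))  # True: first vote from this address
print(accept_vote(seen, "203.0.113.7"))  # False: duplicate is ignored
print(tally({"Team A": 42}, {"Team A": 55, "Team B": 60}))
```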

ART MACHINES 2: International Symposium on Machine Learning and Art 2021

The symposium, Art Machines 2, started yesterday (June 10, 2021) and runs to June 14, 2021 in Hong Kong, and SFU’s Metacreation Lab will be represented (from the Spring 2021 newsletter received via email),

On Sunday, June 13 [2021] at 21:45 Hong Kong Standard Time (UTC +8) as part of the Sound Art Paper Session chaired by Ryo Ikeshiro, the Metacreation Lab’s Mahsoo Salimi and Philippe Pasquier will present their paper, Exploiting Swarm Aesthetics in Sound Art. We’ve included a more detailed preview of the paper in this newsletter below.

Concurrent with ART MACHINES 2 is the launch of two exhibitions – Constructing Contexts and System Dreams. Constructing Contexts, curated by Tobias Klein and Rodrigo Guzman-Serrano, will bring together 27 works with unique approaches to the question of contexts as applied by generative adversarial networks. System Dreams highlights work from the latest MFA talent from the School of Creative Media. While the exhibitions take place in Hong Kong, the participating artists and artwork are well documented online.

Liminal Tones: Swarm Aesthetics in Sound Art

Applications of swarm aesthetics in music composition are not new and have already resulted in volumes of complex soundscapes and musical compositions. Using an experimental approach, Mahsoo Salimi and Philippe Pasquier create a series of sound textures known as Liminal Tones (B/ Rain Dream) based on swarming behaviours.

Findings of the Liminal Tones project will be presented in papers for Art Machines 2: International Symposium on Machine Learning and Art (June 10-14 [2021]) and the International Conference on Swarm Intelligence (July 17-21 [2021]).

Talk about Creative AI at the University of British Columbia

This is the last item I’m excerpting from the newsletter. (Should you be curious about what else is listed, you can go to the Metacreation Lab’s contact page and sign up for the newsletter there.) On June 22, 2021 at 2:00 PM PDT, there will be this event,

Creative AI: on the partial or complete automation of creative tasks @ CAIDA

Philippe Pasquier will be giving a talk on creative applications of AI at CAIDA: UBC ICICS Centre for Artificial Intelligence Decision-making and Action. Overviewing the state of the art of computer-assisted creativity and embedded systems and their various applications, the talk will survey the design, deployment, and evaluation of generative systems.

Free registration for the talk is available at the link below.

Register for Creative AI @ CAIDA

Remember, if you want to see the rest of the newsletter, you can sign up at the Metacreation Lab’s contact page.

US Army researchers’ vision for artificial intelligence and ethics

The US Army peeks into a near future where humans and some forms of artificial intelligence (AI) work together in battle and elsewhere. From a February 3, 2021 U.S. Army Research Laboratory news release (also on EurekAlert but published on February 16, 2021),

The Army of the future will involve humans and autonomous machines working together to accomplish the mission. According to Army researchers, this vision will only succeed if artificial intelligence is perceived to be ethical.

Researchers based at the U.S. Army Combat Capabilities Development Command (now known as DEVCOM) Army Research Laboratory, Northeastern University, and the University of Southern California expanded existing research to cover moral dilemmas and decision making that have not been pursued elsewhere.

This research, featured in Frontiers in Robotics and AI, tackles the fundamental challenge of developing ethical artificial intelligence, which, according to the researchers, is still mostly understudied.

“Autonomous machines, such as automated vehicles and robots, are poised to become pervasive in the Army,” said DEVCOM ARL researcher Dr. Celso de Melo, who is located at the laboratory’s ARL West regional site in Playa Vista, California. “These machines will inevitably face moral dilemmas where they must make decisions that could very well injure humans.”

For example, de Melo said, imagine that an automated vehicle is driving in a tunnel and suddenly five pedestrians cross the street; the vehicle must decide whether to continue moving forward, injuring the pedestrians, or swerve towards the wall, risking injury to the driver.

What should the automated vehicle do in this situation?

Prior work has framed these dilemmas in starkly simple terms, casting decisions as matters of life and death, de Melo said, neglecting the influence of the risk of injury to the involved parties on the outcome.

“By expanding the study of moral dilemmas to consider the risk profile of the situation, we significantly expanded the space of acceptable solutions for these dilemmas,” de Melo said. “In so doing, we contributed to the development of autonomous technology that abides by acceptable moral norms and thus is more likely to be adopted in practice and accepted by the general public.”

The researchers focused on this gap and presented experimental evidence that, in a moral dilemma with automated vehicles, the likelihood of making the utilitarian choice – which minimizes the overall injury risk to humans and, in this case, saves the pedestrians – was moderated by the perceived risk of injury to pedestrians and drivers.

In their study, participants were found to be more likely to make the utilitarian choice with decreasing risk to the driver and with increasing risk to the pedestrians. Interestingly, however, most were willing to risk the driver (i.e., self-sacrifice), even if the risk to the pedestrians was lower than the risk to the driver.

As a second contribution, the researchers also demonstrated that participants’ moral decisions were influenced by what other decision makers do – for instance, participants were less likely to make the utilitarian choice, if others often chose the non-utilitarian choice.
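One hedged way to picture the two moderating effects just described is a simple logistic choice model. The Python sketch below is mine, not the paper’s: the coefficients are invented for illustration, and the study reports experimental findings rather than this particular equation.

```python
import math

def p_utilitarian(driver_risk: float, pedestrian_risk: float,
                  peer_utilitarian_rate: float) -> float:
    """Toy logistic model of choosing the utilitarian option (swerving).
    Signs follow the reported findings: the probability falls as risk
    to the driver rises, and rises with risk to the pedestrians and
    with the share of other decision makers choosing the utilitarian
    option. Coefficients are illustrative, not fitted values."""
    z = (-0.5
         - 2.0 * driver_risk
         + 3.0 * pedestrian_risk
         + 1.5 * peer_utilitarian_rate)
    return 1.0 / (1.0 + math.exp(-z))

# All inputs are on a 0-1 scale.
print(p_utilitarian(driver_risk=0.2, pedestrian_risk=0.8, peer_utilitarian_rate=0.7))
print(p_utilitarian(driver_risk=0.8, pedestrian_risk=0.2, peer_utilitarian_rate=0.2))
```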

“This research advances the state-of-the-art in the study of moral dilemmas involving autonomous machines by shedding light on the role of risk on moral choices,” de Melo said. “Further, both of these mechanisms introduce opportunities to develop AI that will be perceived to make decisions that meet moral standards, as well as introduce an opportunity to use technology to shape human behavior and promote a more moral society.”

For the Army, this research is particularly relevant to Army modernization, de Melo said.

“As these vehicles become increasingly autonomous and operate in complex and dynamic environments, they are bound to face situations where injury to humans is unavoidable,” de Melo said. “This research informs how to navigate these moral dilemmas and make decisions that will be perceived as optimal given the circumstances; for example, minimizing overall risk to human life.”

Moving into the future, researchers will study this type of risk-benefit analysis in Army moral dilemmas and articulate the corresponding practical implications for the development of AI systems.

“When deployed at scale, the decisions made by AI systems can be very consequential, in particular for situations involving risk to human life,” de Melo said. “It is critical that AI is able to make decisions that reflect society’s ethical standards to facilitate adoption by the Army and acceptance by the general public. This research contributes to realizing this vision by clarifying some of the key factors shaping these standards. This research is personally important because AI is expected to have considerable impact to the Army of the future; however, what kind of impact will be defined by the values reflected in that AI.”

The last time I had an item on a similar topic from the US Army Research Laboratory (ARL) it was in a March 26, 2018 posting; scroll down to the subhead, US Army (about 50% of the way down),

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

This latest work also revolves around the issue of trust according to the last sentence in the 2021 study paper (link and citation to follow),

… Overall, these questions emphasize the importance of the kind of experimental work presented here, as it has the potential to shed light on people’s preferences about moral behavior in machines, inform the design of autonomous machines people are likely to trust and adopt, and, perhaps, even introduce an opportunity to promote a more moral society. [emphases mine]

From trust to adoption to a more moral society—that’s an interesting progression. For another, more optimistic view of how robots and AI can have positive impacts, there’s my March 29, 2021 posting, Little Lost Robot and humane visions of our technological future.

Here’s a link to and a citation for the paper,

Risk of Injury in Moral Dilemmas With Autonomous Vehicles by Celso M. de Melo, Stacy Marsella, and Jonathan Gratch. Front. Robot. AI [Frontiers in Robotics and AI], 20 January 2021 DOI: https://doi.org/10.3389/frobt.2020.572529

This paper is in an open access journal.

Artificial emotional intelligence detection

Sabotage was not my first thought on reading about artificial emotional intelligence, so this February 11, 2021 Incheon National University press release (also on EurekAlert) is educational in an unexpected way (Note: A link has been removed),

With the advent of 5G communication technology and its integration with AI, we are looking at the dawn of a new era in which people, machines, objects, and devices are connected like never before. This smart era will be characterized by smart facilities and services such as self-driving cars, smart UAVs [unmanned aerial vehicles], and intelligent healthcare. This will be the aftermath of a technological revolution.

But the flip side of such technological revolution is that AI [artificial intelligence] itself can be used to attack or threaten the security of 5G-enabled systems which, in turn, can greatly compromise their reliability. It is, therefore, imperative to investigate such potential security threats and explore countermeasures before a smart world is realized.

In a recent study published in IEEE Network, a team of researchers led by Prof. Hyunbum Kim from Incheon National University, Korea, address such issues in relation to an AI-based, 5G-integrated virtual emotion recognition system called 5G-I-VEmoSYS, which detects human emotions using wireless signals and body movement. “Emotions are a critical characteristic of human beings and separates humans from machines, defining daily human activity. However, some emotions can also disrupt the normal functioning of a society and put people’s lives in danger, such as those of an unstable driver. Emotion detection technology thus has great potential for recognizing any disruptive emotion and in tandem with 5G and beyond-5G communication, warning others of potential dangers,” explains Prof. Kim. “For instance, in the case of the unstable driver, the AI enabled driver system of the car can inform the nearest network towers, from where nearby pedestrians can be informed via their personal smart devices.”

The virtual emotion system developed by Prof. Kim’s team, 5G-I-VEmoSYS, can recognize at least five kinds of emotion (joy, pleasure, a neutral state, sadness, and anger) and is composed of three subsystems dealing with the detection, flow, and mapping of human emotions. The system concerned with detection is called Artificial Intelligence-Virtual Emotion Barrier, or AI-VEmoBAR, which relies on the reflection of wireless signals from a human subject to detect emotions. This emotion information is then handled by the system concerned with flow, called Artificial Intelligence-Virtual Emotion Flow, or AI-VEmoFLOW, which enables the flow of specific emotion information at a specific time to a specific area. Finally, the Artificial Intelligence-Virtual Emotion Map, or AI-VEmoMAP, utilizes a large amount of this virtual emotion data to create a virtual emotion map that can be utilized for threat detection and crime prevention.
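To make the three-subsystem architecture easier to hold in mind, here is a schematic Python sketch of how detection, flow, and mapping might hand data to one another. Every signature and function body below is an assumption of mine for illustration; the paper describes the architecture conceptually and publishes no code.

```python
from dataclasses import dataclass

EMOTIONS = ["joy", "pleasure", "neutral", "sadness", "anger"]

@dataclass
class EmotionReading:
    location: tuple   # (x, y) cell in a city grid
    emotion: str      # one of EMOTIONS

def ai_vemobar(wireless_signal: list) -> str:
    """Detection (AI-VEmoBAR): classify an emotion from a reflected
    wireless signal. A real system would use a trained model; this
    stub just thresholds a made-up signal feature."""
    energy = sum(wireless_signal) / len(wireless_signal)
    return "anger" if energy > 0.8 else "neutral"

def ai_vemoflow(reading: EmotionReading, subscribers: list) -> None:
    """Flow (AI-VEmoFLOW): push a specific reading to nearby devices."""
    for notify in subscribers:
        notify(reading)

def ai_vemomap(readings: list) -> dict:
    """Mapping (AI-VEmoMAP): aggregate readings into a location ->
    dominant-emotion map for threat detection and crime prevention."""
    by_location: dict = {}
    for r in readings:
        by_location.setdefault(r.location, []).append(r.emotion)
    return {loc: max(set(es), key=es.count) for loc, es in by_location.items()}

readings = [EmotionReading((0, 0), ai_vemobar([0.9, 0.95])),
            EmotionReading((0, 0), "anger"),
            EmotionReading((1, 2), "joy")]
print(ai_vemomap(readings))  # {(0, 0): 'anger', (1, 2): 'joy'}
```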

A notable advantage of 5G-I-VEmoSYS is that it allows emotion detection without revealing the face or other private parts of the subjects, thereby protecting the privacy of citizens in public areas. Moreover, in private areas, it gives the user the choice to remain anonymous while providing information to the system. Furthermore, when a serious emotion, such as anger or fear, is detected in a public area, the information is rapidly conveyed to the nearest police department or relevant entities who can then take steps to prevent any potential crime or terrorism threats.

However, the system suffers from serious security issues such as the possibility of illegal signal tampering, abuse of anonymity, and hacking-related cyber-security threats. Further, the danger of sending false alarms to authorities remains.

While these concerns do put the system’s reliability at stake, Prof. Kim’s team are confident that they can be countered with further research. “This is only an initial study. In the future, we need to achieve rigorous information integrity and accordingly devise robust AI-based algorithms that can detect compromised or malfunctioning devices and offer protection against potential system hacks,” explains Prof. Kim, “Only then will it enable people to have safer and more convenient lives in the advanced smart cities of the future.”

Intriguing, yes? The researchers have used this image to illustrate their work,

Caption: With 5G communication technology and new AI-based systems such as emotion recognition systems, smart cities are all set to become a reality; but these systems need to be honed and security issues need to be ironed out before the smart reality can be realized. Credit: macrovector on Freepik

Before getting to the link and citation for the paper, I have a March 8, 2019 article by Meredith Somers for MIT (Massachusetts Institute of Technology) Sloan School of Management’s Ideas Made to Matter publication (Note: Links have been removed),

What did you think of the last commercial you watched? Was it funny? Confusing? Would you buy the product? You might not remember or know for certain how you felt, but increasingly, machines do. New artificial intelligence technologies are learning and recognizing human emotions, and using that knowledge to improve everything from marketing campaigns to health care.

These technologies are referred to as “emotion AI.” Emotion AI is a subset of artificial intelligence (the broad term for machines replicating the way humans think) that measures, understands, simulates, and reacts to human emotions. It’s also known as affective computing, or artificial emotional intelligence. The field dates back to at least 1995, when MIT Media lab professor Rosalind Picard published “Affective Computing.”

Javier Hernandez, a research scientist with the Affective Computing Group at the MIT Media Lab, explains emotion AI as a tool that allows for a much more natural interaction between humans and machines. “Think of the way you interact with other human beings; you look at their faces, you look at their body, and you change your interaction accordingly,” Hernandez said. “How can [a machine] effectively communicate information if it doesn’t know your emotional state, if it doesn’t know how you’re feeling, it doesn’t know how you’re going to respond to specific content?”

While humans might currently have the upper hand on reading emotions, machines are gaining ground using their own strengths. Machines are very good at analyzing large amounts of data, explained MIT Sloan professor Erik Brynjolfsson. They can listen to voice inflections and start to recognize when those inflections correlate with stress or anger. Machines can analyze images and pick up subtleties in micro-expressions on humans’ faces that might happen even too fast for a person to recognize.

“We have a lot of neurons in our brain for social interactions. We’re born with some of those skills, and then we learn more. It makes sense to use technology to connect to our social brains, not just our analytical brains,” Brynjolfsson said. “Just like we can understand speech and machines can communicate in speech, we also understand and communicate with humor and other kinds of emotions. And machines that can speak that language — the language of emotions — are going to have better, more effective interactions with us. It’s great that we’ve made some progress; it’s just something that wasn’t an option 20 or 30 years ago, and now it’s on the table.”

Somers describes current uses of emotion AI (I’ve selected two from her list; Note: A link has been removed),

Call centers —Technology from Cogito, a company co-founded in 2007 by MIT Sloan alumni, helps call center agents identify the moods of customers on the phone and adjust how they handle the conversation in real time. Cogito’s voice-analytics software is based on years of human behavior research to identify voice patterns.

Mental health —  In December 2018 Cogito launched a spinoff called CompanionMx, and an accompanying mental health monitoring app. The Companion app listens to someone speaking into their phone, and analyzes the speaker’s voice and phone use for signs of anxiety and mood changes.

The app improves users’ self-awareness, and can increase coping skills including steps for stress reduction. The company has worked with the Department of Veterans Affairs, the Massachusetts General Hospital, and Brigham & Women’s Hospital in Boston.

Somers’ March 8, 2019 article was an eye-opener.

Getting back to the Korean research, here’s a link to and a citation for the paper,

Research Challenges and Security Threats to AI-Driven 5G Virtual Emotion Applications Using Autonomous Vehicles, Drones, and Smart Devices by Hyunbum Kim; Jalel Ben-Othman; Lynda Mokdad; Junggab Son; Chunguo Li. IEEE Network Volume: 34 Issue: 6 November/December 2020 Page(s): 288 – 294 DOI: 10.1109/MNET.011.2000245 Date of Publication (online): 12 October 2020

This paper is behind a paywall.

Getting to be more literate than humans

Lucinda McKnight, lecturer at Deakin University, Australia, has a February 9, 2021 essay about literacy in the coming age of artificial intelligence (AI) for The Conversation (Note 1: You can also find this essay as a February 10, 2021 news item on phys.org; Note 2: Links have been removed),

Students across Australia have started the new school year using pencils, pens and keyboards to learn to write.

In workplaces, machines are also learning to write, so effectively that within a few years they may write better than humans.

Sometimes they already do, as apps like Grammarly demonstrate. Certainly, much everyday writing humans now do may soon be done by machines with artificial intelligence (AI).

The predictive text commonly used by phone and email software is a form of AI writing that countless humans use every day.
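Predictive text of the kind mentioned above can be surprisingly simple at its core. Here is a minimal bigram sketch in Python; real phone keyboards use far more sophisticated neural models, so treat this as a toy illustration only.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which: the core of naive predictive text."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict(model: dict, word: str) -> str:
    """Suggest the continuation seen most often in training."""
    options = model.get(word.lower())
    return options.most_common(1)[0][0] if options else ""

model = train_bigrams("students learn to write and machines learn to write too")
print(predict(model, "learn"))  # -> "to"
```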

According to industry research organisation Gartner, AI and related technology will automate production of 30% of all content found on the internet by 2022.

Some prose, poetry, reports, newsletters, opinion articles, reviews, slogans and scripts are already being written by artificial intelligence.

Literacy increasingly means and includes interacting with and critically evaluating AI.

This means our children should no longer be taught just formulaic writing. [emphasis mine] Instead, writing education should encompass skills that go beyond the capacities of artificial intelligence.

McKnight’s focus is on how Australian education should approach the coming AI writer ‘supremacy’, from her February 9, 2021 essay (Note: Links have been removed),

In 2019, the New Yorker magazine did an experiment to see if IT company OpenAI’s natural language generator GPT-2 could write an entire article in the magazine’s distinctive style. This attempt had limited success, with the generator making many errors.

But by 2020, GPT-3, the new version of the machine, trained on even more data, wrote an article for The Guardian newspaper with the headline “A robot wrote this entire article. Are you scared yet, human?”

This latest much improved generator has implications for the future of journalism, as the Elon Musk-funded OpenAI invests ever more in research and development.
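While GPT-3 sat behind OpenAI’s API at the time, GPT-2 was openly released, and you can sample from it in a few lines. This sketch assumes the Hugging Face transformers package; it is not the tooling the New Yorker or The Guardian used, which isn’t documented in McKnight’s essay.

```python
# pip install transformers torch
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sample reproducible
result = generator("A robot wrote this entire article.",
                   max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```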

AI writing is said to have voice but no soul. Human writers, as the New Yorker’s John Seabrook says, give “color, personality and emotion to writing by bending the rules”. Students, therefore, need to learn the rules and be encouraged to break them.

Creativity and co-creativity (with machines) should be fostered. Machines are trained on a finite amount of data, to predict and replicate, not to innovate in meaningful and deliberate ways.

AI cannot yet plan and does not have a purpose. Students need to hone skills in purposeful writing that achieves their communication goals.

AI is not yet as complex as the human brain. Humans detect humor and satire. They know words can have multiple and subtle meanings. Humans are capable of perception and insight; they can make advanced evaluative judgements about good and bad writing.

There are calls for humans to become expert in sophisticated forms of writing and in editing writing created by robots as vital future skills.

… OpenAI’s managers originally refused to release GPT-3, ostensibly because they were concerned about the generator being used to create fake material, such as reviews of products or election-related commentary.

AI writing bots have no conscience and may need to be eliminated by humans, as with Microsoft’s racist Twitter prototype, Tay.

Critical, compassionate and nuanced assessment of what AI produces, management and monitoring of content, and decision-making and empathy with readers are all part of the “writing” roles of a democratic future.

It’s an interesting line of thought and McKnight’s ideas about writing education could be applicable beyond Australia, assuming you accept her basic premise.

I have a few other postings here about AI and writing:

Writing and AI or is a robot writing this blog? a July 16, 2014 posting

AI (artificial intelligence) text generator, too dangerous to release? a February 18, 2019 posting

Automated science writing? a September 16, 2019 posting

It seems I have a lot of questions about the automation of any kind of writing.

Council of Canadian Academies and its expert panel for the AI for Science and Engineering project

There seems to be an explosion (metaphorically and only by Canadian standards) of interest in public perceptions/engagement/awareness of artificial intelligence (see my March 29, 2021 posting, “Canada launches its AI dialogues,” whose dialogues run until April 30, 2021, and my April 6, 2021 posting, “UNESCO’s Call for Proposals to highlight blind spots in AI Development open ’til May 2, 2021,” a call launched in cooperation with Mila-Québec Artificial Intelligence Institute).

Now there’s this: in a March 31, 2020 Council of Canadian Academies (CCA) news release, four new projects were announced. (Admittedly these are not ‘public engagement’ exercises as such but the reports are publicly available and utilized by policymakers.) These are the two projects of most interest to me,

Public Safety in the Digital Age

Information and communications technologies have profoundly changed almost every aspect of life and business in the last two decades. While the digital revolution has brought about many positive changes, it has also created opportunities for criminal organizations and malicious actors to target individuals, businesses, and systems.

This assessment will examine promising practices that could help to address threats to public safety related to the use of digital technologies while respecting human rights and privacy.

Sponsor: Public Safety Canada

AI for Science and Engineering

The use of artificial intelligence (AI) and machine learning in science and engineering has the potential to radically transform the nature of scientific inquiry and discovery and produce a wide range of social and economic benefits for Canadians. But, the adoption of these technologies also presents a number of potential challenges and risks.

This assessment will examine the legal/regulatory, ethical, policy and social challenges related to the use of AI technologies in scientific research and discovery.

Sponsor: National Research Council Canada [NRC] (co-sponsors: CIFAR [Canadian Institute for Advanced Research], CIHR [Canadian Institutes of Health Research], NSERC [Natural Sciences and Engineering Research Council], and SSHRC [Social Sciences and Humanities Research Council])

For today’s posting the focus will be on the AI project, specifically, the April 19, 2021 CCA news release announcing the project’s expert panel,

The Council of Canadian Academies (CCA) has formed an Expert Panel to examine a broad range of factors related to the use of artificial intelligence (AI) technologies in scientific research and discovery in Canada. Teresa Scassa, SJD, Canada Research Chair in Information Law and Policy at the University of Ottawa, will serve as Chair of the Panel.  

“AI and machine learning may drastically change the fields of science and engineering by accelerating research and discovery,” said Dr. Scassa. “But these technologies also present challenges and risks. A better understanding of the implications of the use of AI in scientific research will help to inform decision-making in this area and I look forward to undertaking this assessment with my colleagues.”

As Chair, Dr. Scassa will lead a multidisciplinary group with extensive expertise in law, policy, ethics, philosophy, sociology, and AI technology. The Panel will answer the following question:

What are the legal/regulatory, ethical, policy and social challenges associated with deploying AI technologies to enable scientific/engineering research design and discovery in Canada?

“We’re delighted that Dr. Scassa, with her extensive experience in AI, the law and data governance, has taken on the role of Chair,” said Eric M. Meslin, PhD, FRSC, FCAHS, President and CEO of the CCA. “I anticipate the work of this outstanding panel will inform policy decisions about the development, regulation and adoption of AI technologies in scientific research, to the benefit of Canada.”

The CCA was asked by the National Research Council of Canada (NRC), along with co-sponsors CIFAR, CIHR, NSERC, and SSHRC, to address the question. More information can be found here.

The Expert Panel on AI for Science and Engineering:

Teresa Scassa (Chair), SJD, Canada Research Chair in Information Law and Policy, University of Ottawa, Faculty of Law (Ottawa, ON)

Julien Billot, CEO, Scale AI (Montreal, QC)

Wendy Hui Kyong Chun, Canada 150 Research Chair in New Media and Professor of Communication, Simon Fraser University (Burnaby, BC)

Marc Antoine Dilhac, Professor (Philosophy), University of Montreal; Director of Ethics and Politics, Centre for Ethics (Montréal, QC)

B. Courtney Doagoo, AI and Society Fellow, Centre for Law, Technology and Society, University of Ottawa; Senior Manager, Risk Consulting Practice, KPMG Canada (Ottawa, ON)

Abhishek Gupta, Founder and Principal Researcher, Montreal AI Ethics Institute (Montréal, QC)

Richard Isnor, Associate Vice President, Research and Graduate Studies, St. Francis Xavier University (Antigonish, NS)

Ross D. King, Professor, Chalmers University of Technology (Göteborg, Sweden)

Sabina Leonelli, Professor of Philosophy and History of Science, University of Exeter (Exeter, United Kingdom)

Raymond J. Spiteri, Professor, Department of Computer Science, University of Saskatchewan (Saskatoon, SK)

Who is the expert panel?

Putting together a Canadian panel is an interesting problem especially so when you’re trying to find people of expertise who can also represent various viewpoints both professionally and regionally. Then, there are gender, racial, linguistic, urban/rural, and ethnic considerations.

Statistics

Eight of the panelists could be said to be representing various regions of Canada. Five of those eight panelists are based in central Canada, specifically, Ontario (Ottawa) or Québec (Montréal). The sixth panelist is based in Atlantic Canada (Nova Scotia), the seventh panelist is based in the Prairies (Saskatchewan), and the eighth panelist is based in western Canada, (Vancouver, British Columbia).

The two panelists bringing an international perspective to this project are both based in Europe, specifically, Sweden and the UK.

(sigh) It would be good to have representation from another part of the world. Asia springs to mind as researchers in that region are very advanced in their AI research and applications meaning that their experts and ethicists are likely to have valuable insights.

Four of the ten panelists are women, which is closer to equal representation than some of the other CCA panels I’ve looked at.

As for Indigenous and BIPOC representation, unless one or more of the panelists chooses to self-identify in that fashion, I cannot make any comments. It should be noted that more than one expert panelist focuses on social justice and/or bias in algorithms.

Network of relationships

As you can see, the CCA descriptions for the individual members of the expert panel are a little brief. So, I did a little digging and, in my searches, noticed what seems to be a pattern of relationships among some of these experts. In particular, take note of the Canadian Institute for Advanced Research (CIFAR) and the AI Advisory Council of the Government of Canada.

Individual panelists

Teresa Scassa (Ontario), whose SJD designation signifies a research doctorate in law, chairs this panel. Offhand, I can recall only one or two other panels being chaired by women of the 10 or so I’ve reviewed. In addition to her profile page at the University of Ottawa, she hosts her own blog featuring posts such as “How Might Bill C-11 Affect the Outcome of a Clearview AI-type Complaint?” She writes clearly (I didn’t see any jargon) for an audience that is somewhat informed on the topic.

Along with Dilhac, Teresa Scassa is a member of the AI Advisory Council of the Government of Canada. More about that group when you read Dilhac’s description.

Julien Billot (Québec) has provided a profile on LinkedIn and you can augment your view of M. Billot with this profile from the Creative Destruction Lab (CDL),

Mr. Billot is a member of the faculty at HEC Montréal [graduate business school of the Université de Montréal] as an adjunct professor of management and the lead for the Creative Destruction Lab (CDL) and NextAI program in Montreal.

Julien Billot has been President and Chief Executive Officer of Yellow Pages Group Corporation (Y.TO) in Montreal, Quebec. Previously, he was Executive Vice President, Head of Media and Member of the Executive Committee of Solocal Group (formerly PagesJaunes Groupe), the publicly traded and incumbent local search business in France. Earlier experience includes serving as CEO of the digital and new business group of Lagardère Active, a multimedia branch of Lagardère Group and 13 years in senior management positions at France Telecom, notably as Chief Marketing Officer for Orange, the company’s mobile subsidiary.

Mr. Billot is a graduate of École Polytechnique (Paris) and from Telecom Paris Tech. He holds a postgraduate diploma (DEA) in Industrial Economics from the University of Paris-Dauphine.

Wendy Hui Kyong Chun (British Columbia) has a profile on the Simon Fraser University (SFU) website, which provided one of the more interesting (to me personally) biographies,

Wendy Hui Kyong Chun is the Canada 150 Research Chair in New Media at Simon Fraser University, and leads the Digital Democracies Institute which was launched in 2019. The Institute aims to integrate research in the humanities and data sciences to address questions of equality and social justice in order to combat the proliferation of online “echo chambers,” abusive language, discriminatory algorithms and mis/disinformation by fostering critical and creative user practices and alternative paradigms for connection. It has four distinct research streams all led by Dr. Chun: Beyond Verification which looks at authenticity and the spread of disinformation; From Hate to Agonism, focusing on fostering democratic exchange online; Desegregating Network Neighbourhoods, combatting homophily across platforms; and Discriminating Data: Neighbourhoods, Individuals and Proxies, investigating the centrality of race, gender, class and sexuality [emphasis mine] to big data and network analytics.

I’m glad to see someone who has focused on ” … the centrality of race, gender, class and sexuality to big data and network analytics.” Even more interesting to me was this from her CV (curriculum vitae),

Professor, Department of Modern Culture and Media, Brown University, July 2010-June 2018

• Affiliated Faculty, Multimedia & Electronic Music Experiments (MEME), Department of Music, 2017

• Affiliated Faculty, History of Art and Architecture, March 2012-

• Graduate Field Faculty, Theatre Arts and Performance Studies, Sept 2008-

…

[all emphases mine]

And these are some of her credentials,

Ph.D., English, Princeton University, 1999.

• Certificate, School of Criticism and Theory, Dartmouth College, Summer 1995.

M.A., English, Princeton University, 1994.

B.A.Sc., Systems Design Engineering and English, University of Waterloo, Canada, 1992.

• first class honours and a Senate Commendation for Excellence for being the first student to graduate from the School of Engineering with a double major

It’s about time the CCA started integrating some kind of arts perspective into their projects. (Although, I can’t help wondering if this happened by accident rather than by design.)

Marc-Antoine Dilhac, an associate professor at l’Université de Montréal, like Billot, graduated from a French university, in his case, the Sorbonne. Here’s more from Dilhac’s profile on the Mila website,

Marc-Antoine Dilhac (Ph.D., Paris 1 Panthéon-Sorbonne) is a professor of ethics and political philosophy at the Université de Montréal and an associate member of Mila – Quebec Artificial Intelligence Institute. He currently holds a CIFAR [Canadian Institute for Advanced Research] Chair in AI ethics (2019-2024), and was previously Canada Research Chair in Public Ethics and Political Theory 2014-2019. He specialized in theories of democracy and social justice, as well as in questions of applied ethics. He published two books on the politics of toleration and inclusion (2013, 2014). His current research focuses on the ethical and social impacts of AI and issues of governance and institutional design, with a particular emphasis on how new technologies are changing public relations and political structures.

In 2017, he instigated the project of the Montreal Declaration for a Responsible Development of AI and chaired its scientific committee. In 2020, as director of Algora Lab, he led an international deliberation process as part of UNESCO’s consultation on its recommendation on the ethics of AI.

In 2019, he founded Algora Lab, an interdisciplinary laboratory advancing research on the ethics of AI and developing a deliberative approach to the governance of AI and digital technologies. He is co-director of Deliberation at the Observatory on the social impacts of AI and digital technologies (OBVIA), and contributes to the OECD Policy Observatory (OECD.AI) as a member of its expert network ONE.AI.

He sits on the AI Advisory Council of the Government of Canada and co-chairs its Working Group on Public Awareness.

Formerly known simply as Mila, the Mila – Quebec Artificial Intelligence Institute is a beneficiary of the Pan-Canadian Artificial Intelligence Strategy, established in the 2017 Canadian federal budget, which named CIFAR as the hub that would distribute funds for artificial intelligence research to (mainly) three agencies: Mila in Montréal, the Vector Institute in Toronto, and the Alberta Machine Intelligence Institute (AMII) in Edmonton.

Consequently, Dilhac’s involvement with CIFAR is not unexpected, but when you add his presence on the AI Advisory Council of the Government of Canada and his role as co-chair of its Working Group on Public Awareness, one of the co-sponsors of this future CCA report, you get a sense of just how small the Canadian AI ethics and public awareness community is.

Add in CIFAR’s Open Dialogue: AI in Canada series (ongoing until April 30, 2021), which is being held in partnership with the AI Advisory Council of the Government of Canada amongst other familiar parties (see my March 29, 2021 posting for more details about the dialogues), and you see a web of relations so tightly interwoven that, if you could produce masks from it, you’d have better COVID-19 protection than an N95 mask offers.

These kinds of connections are understandable and I have more to say about them in my final comments.

B. Courtney Doagoo has a profile page at the University of Ottawa, which fills in a few information gaps,

As a Fellow, Dr. Doagoo develops her research on the social, economic and cultural implications of AI with a particular focus on the role of laws, norms and policies [emphasis mine]. She also notably advises Dr. Florian Martin-Bariteau, CLTS Director, in the development of a new research initiative on those topical issues, and Dr. Jason Millar in the development of the Canadian Robotics and Artificial Intelligence Ethical Design Lab (CRAiEDL).

Dr. Doagoo completed her Ph.D. in Law at the University of Ottawa in 2017. In her interdisciplinary research, she used empirical methods to learn about and describe the use of intellectual property law and norms in creative communities. Following her doctoral research, she joined the World Intellectual Property Organization’s Coordination Office in New York as a legal intern and contributed to developing the joint initiative on gender and innovation in collaboration with UNESCO and UN Women. She later joined the International Law Research Program at the Centre for International Governance Innovation as a Post-Doctoral Fellow, where she conducted research in technology and law focusing on intellectual property law, artificial intelligence and data governance.

Dr. Doagoo completed her LL.L. at the University of Ottawa, and LL.M. in Intellectual Property Law at the Benjamin N. Cardozo School of Law [a law school at Yeshiva University in New York City].  In between her academic pursuits, Dr. Doagoo has been involved with different technology start-ups, including the one she is currently leading aimed at facilitating access to legal services. She’s also an avid lover of the arts and designed a course on Arts and Cultural Heritage Law taught during her doctoral studies at the University of Ottawa, Faculty of Law.

It’s probably because I don’t know enough, but “the role of laws, norms and policies” seems bland to the point of being meaningless. The rest is more informative and, with its mention of the arts, circles back to Wendy Hui Kyong Chun at SFU.

Doagoo’s LinkedIn profile offers an unexpected link to this expert panel’s chairperson, Teresa Scassa (beyond both being lawyers in related specialties, one on faculty and the other a fellow at the University of Ottawa),

Soft-funded Research Bursary

Dr. Teresa Scassa

2014

I’m not suggesting any conspiracies; it’s simply that this is a very small community, much of it located in central and eastern Canada, with possible links into the US. For example, Wendy Hui Kyong Chun, prior to her SFU appointment in December 2018, worked and studied in the eastern US for over 25 years after starting her academic career at the University of Waterloo (Ontario).

Abhishek Gupta provided me with a challenging search. His LinkedIn profile yielded some details (I’m not convinced the man sleeps). Note: I have made some formatting changes and removed the location ‘Montréal area’ from some descriptions.

Experience

Software Engineer II – Machine Learning
Microsoft

Jul 2018 – Present – 2 years 10 months

Machine Learning – Commercial Software Engineering team

Serves on the CSE Responsible AI Board

Founder and Principal Researcher
Montreal AI Ethics Institute

May 2018 – Present – 3 years

Institute creating tangible and practical research in the ethical, safe and inclusive development of AI. For more information, please visit https://montrealethics.ai

Visiting AI Ethics Researcher, Future of Work, International Visitor Leadership Program
U.S. Department of State

Aug 2019 – Present – 1 year 9 months

Selected to represent Canada on the future of work

Responsible AI Lead, Data Advisory Council
Northwest Commission on Colleges and Universities

Jun 2020 – Present – 11 months

Faculty Associate, Frankfurt Big Data Lab
Goethe University

Mar 2020 – Present – 1 year 2 months

Advisor for the Z-inspection project

Associate Member
LF AI Foundation

May 2020 – Present – 1 year

Author
MIT Technology Review

Sep 2020 – Present – 8 months

Founding Editorial Board Member, AI and Ethics Journal
Springer Nature

Jul 2020 – Present – 10 months

Education

McGill University, Bachelor of Science (BSc), Computer Science

2012 – 2015

Exhausting, eh? He also has an eponymous website, and the Montreal AI Ethics Institute can be found here, where Gupta and his colleagues are “Democratizing AI ethics literacy.” My hat’s off to Gupta; getting onto a CCA expert panel is quite an achievement for someone without the usual academic and/or industry trappings.

Richard Isnor, based in Nova Scotia and associate vice president of research & graduate studies at St. Francis Xavier University (StFX), seems to have some connection to northern Canada (see the reference to Nunavut Research Institute below); he’s certainly well connected to various federal government agencies according to his profile page,

Prior to joining StFX, he was Manager of the Atlantic Regional Office for the Natural Sciences and Engineering Research Council of Canada (NSERC), based in Moncton, NB.  Previously, he was Director of Innovation Policy and Science at the International Development Research Centre in Ottawa and also worked for three years with the National Research Council of Canada [NRC] managing Biotechnology Research Initiatives and the NRC Genomics and Health Initiative.

Richard holds a D. Phil. in Science and Technology Policy Studies from the University of Sussex, UK; a Master’s in Environmental Studies from Dalhousie University [Nova Scotia]; and a B. Sc. (Hons) in Biochemistry from Mount Allison University [New Brunswick].  His primary interest is in science policy and the public administration of research; he has worked in science and technology policy or research administrative positions for Environment Canada, Natural Resources Canada, the Privy Council Office, as well as the Nunavut Research Institute. [emphasis mine]

I don’t know what Dr. Isnor’s work is like, but I’m hopeful he (along with Spiteri) will be able to bring a less ‘big city’ perspective to the proceedings.

(For those unfamiliar with Canadian cities: Montreal [three expert panelists] is the second largest city in the country; Ottawa [two expert panelists], as the capital, has an outsize view of itself; and Vancouver [one expert panelist] is the third or fourth largest city in the country, for a total of six big-city representatives out of eight Canadian expert panelists.)

Ross D. King, professor of machine intelligence at Sweden’s Chalmers University of Technology, might be best known for Adam, also known as the Robot Scientist. Here’s more about King, from his Wikipedia entry (Note: Links have been removed),

King completed a Bachelor of Science degree in Microbiology at the University of Aberdeen in 1983 and went on to study for a Master of Science degree in Computer Science at the University of Newcastle in 1985. Following this, he completed a PhD at The Turing Institute [emphasis mine] at the University of Strathclyde in 1989[3] for work on developing machine learning methods for protein structure prediction.[7]

King’s research interests are in the automation of science, drug design, AI, machine learning and synthetic biology.[8][9] He is probably best known for the Robot Scientist[4][10][11][12][13][14][15][16][17] project which has created a robot that can:

hypothesize to explain observations

devise experiments to test these hypotheses

physically run the experiments using laboratory robotics

interpret the results from the experiments

repeat the cycle as required
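
Out of curiosity, I sketched what that cycle might look like as a toy program. To be clear, this is a purely illustrative Python sketch; the hidden ‘essential gene’ setup and every function and variable name in it are my own inventions for demonstration purposes, not anything from the actual Robot Scientist software,

```python
import random

# Toy stand-in for the laboratory: a hidden ground truth the 'robot
# scientist' must discover. (Entirely hypothetical.)
TRUE_ESSENTIAL_GENE = "gene_C"

def run_on_lab_robot(knockout_gene):
    """Simulated experiment: knock out one gene, observe whether the
    culture still grows."""
    return {"growth": knockout_gene != TRUE_ESSENTIAL_GENE}

def discovery_loop(candidate_genes):
    # 1. Hypothesize: each candidate gene might be the essential one.
    hypotheses = set(candidate_genes)
    while len(hypotheses) > 1:
        # 2. Devise an experiment that discriminates between hypotheses.
        knockout = random.choice(sorted(hypotheses))
        # 3. 'Physically' run the experiment (here, just a simulation).
        observation = run_on_lab_robot(knockout)
        # 4. Interpret the results: growth despite the knockout refutes
        #    the hypothesis that this gene was the essential one.
        if observation["growth"]:
            hypotheses.discard(knockout)
        else:
            hypotheses = {knockout}
        # 5. Repeat the cycle as required.
    return hypotheses.pop()

print(discovery_loop(["gene_A", "gene_B", "gene_C", "gene_D"]))
# -> gene_C
```

The real Adam, of course, reasoned over models of yeast genomics and drove actual laboratory robotics; the point here is only the shape of the loop.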

The Robot Scientist Wikipedia entry has this to add,

… a laboratory robot created and developed by a group of scientists including Ross King, Kenneth Whelan, Ffion Jones, Philip Reiser, Christopher Bryant, Stephen Muggleton, Douglas Kell and Steve Oliver.[2][6][7][8][9][10]

… Adam became the first machine in history to have discovered new scientific knowledge independently of its human creators.[5][17][18]

Sabina Leonelli, professor of philosophy and history of science at the University of Exeter, is the only person for whom I found a Twitter feed (@SabinaLeonelli). Here’s a bit more from her Wikipedia entry (Note: Links have been removed),

Originally from Italy, Leonelli moved to the UK for a BSc degree in History, Philosophy and Social Studies of Science at University College London and a MSc degree in History and Philosophy of Science at the London School of Economics. Her doctoral research was carried out in the Netherlands at the Vrije Universiteit Amsterdam with Henk W. de Regt and Hans Radder. Before joining the Exeter faculty, she was a research officer under Mary S. Morgan at the Department of Economic History of the London School of Economics.

Leonelli is the Co-Director of the Exeter Centre for the Study of the Life Sciences (Egenis)[3] and a Turing Fellow at the Alan Turing Institute [emphases mine] in London.[4] She is also Editor-in-Chief of the international journal History and Philosophy of the Life Sciences[5] and Associate Editor for the Harvard Data Science Review.[6] She serves as External Faculty for the Konrad Lorenz Institute for Evolution and Cognition Research.[7]

Notice that Ross King and Sabina Leonelli both have links to Turing institutes (“We believe data science and artificial intelligence will change the world,” says the Alan Turing Institute), although The Turing Institute where King did his PhD work was an earlier, separate organization in Glasgow associated with the University of Strathclyde (Scotland), so his link to today’s Alan Turing Institute seems a bit tenuous.

Do check out Leonelli’s profile at the University of Exeter as it’s comprehensive.

Raymond J. Spiteri, professor and director of the Centre for High Performance Computing, Department of Computer Science at the University of Saskatchewan, has a profile page at the university the likes of which I haven’t seen in several years, perhaps due to its 2013 origins. His other university profile page can best be described as minimalist.

His Canadian Applied and Industrial Mathematics Society (CAIMS) biography page could be described as less charming (to me) than the 2013 profile, but it is easier to read,

Raymond Spiteri is a Professor in the Department of Computer Science at the University of Saskatchewan. He performed his graduate work as a member of the Institute for Applied Mathematics at the University of British Columbia. He was a post-doctoral fellow at McGill University and held faculty positions at Acadia University and Dalhousie University before joining USask in 2004. He serves on the Executive Committee of the WestGrid High-Performance Computing Consortium with Compute/Calcul Canada. He was a MITACS Project Leader from 2004-2012 and served in the role of Mitacs Regional Scientific Director for the Prairie Provinces between 2008 and 2011.

Spiteri’s areas of research are numerical analysis, scientific computing, and high-performance computing. His area of specialization is the analysis and implementation of efficient time-stepping methods for differential equations. He actively collaborates with scientists, engineers, and medical experts of all flavours. He also has a long record of industry collaboration with companies such as IBM and Boeing.

Spiteri has been a lifetime member of CAIMS/SCMAI since 2000. He helped co-organize the 2004 Annual Meeting at Dalhousie and served on the Cecil Graham Doctoral Dissertation Award Committee from 2005 to 2009, acting as chair from 2007. He has been an active participant in CAIMS, serving several times on the Scientific Committee for the Annual Meeting, as well as frequently attending and organizing mini-symposia. Spiteri believes it is important for applied mathematics to play a major role in the efforts to meet Canada’s most pressing societal challenges, including the sustainability of our healthcare system, our natural resources, and the environment.

A last look at Spiteri’s 2013 profile gave me this (Note: Links have been removed),

Another biographical note: I obtained my B.Sc. degree in Applied Mathematics from the University of Western Ontario [also known as, Western University] in 1990. My advisor was Dr. M.A.H. (Paddy) Nerenberg, after whom the Nerenberg Lecture Series is named. Here is an excerpt from the description, put here in his honour, as a model for the rest of us:

The Nerenberg Lecture Series is first and foremost about people and ideas. Knowledge is the true treasure of humanity, accrued and passed down through the generations. Some of it, particularly science and its language, mathematics, is closed in practice to many because of technical barriers that can only be overcome at a high price. These technical barriers form part of the remarkable fractures that have formed in our legacy of knowledge. We are so used to those fractures that they have become almost invisible to us, but they are a source of profound confusion about what is known.

The Nerenberg Lecture is named after the late Morton (Paddy) Nerenberg, a much-loved professor and researcher born on 17 March – hence his nickname. He was a Professor at Western for more than a quarter century, and a founding member of the Department of Applied Mathematics there. A successful researcher and accomplished teacher, he believed in the unity of knowledge, that scientific and mathematical ideas belong to everyone, and that they are of human importance. He regretted that they had become inaccessible to so many, and anticipated serious consequences from it. [emphases mine] The series honors his appreciation for the democracy of ideas. He died in 1993 at the age of 57.

So, we have the expert panel.

Thoughts about the panel and the report

As I’ve noted previously here and elsewhere, assembling any panel, whether it’s for a single event or for a longer term project such as producing a report, is no easy task. Looking at the panel, there’s some arts representation, smaller urban centres are also represented, and some of the members have experience in more than one region of Canada. I was also much encouraged by Spiteri’s acknowledgement of his advisor Morton (Paddy) Nerenberg’s passionate commitment to the idea that “scientific and mathematical ideas belong to everyone.”

Kudos to the Council of Canadian Academies (CCA) organizers.

That said, this looks like an exceptionally Eurocentric panel. Unusually, there’s no representation from the US, unless you count Chun, who has spent the majority of her career in the US, with only a little over two years at Simon Fraser University on Canada’s West Coast.

There’s a weakness to a strategy (none of the ten or so CCA reports I’ve reviewed here deviates from this pattern) that seems to favour international participants from Europe and/or the US (also, sometimes, Australia and/or New Zealand). This leaves out giant chunks of the international community and brings us dangerously close to an echo chamber.

The same problem exists regionally and with various Canadian communities, which are acknowledged more in spirit than in actuality, e.g., the North, rural, Indigenous, arts, etc.

Getting back to the ‘big city’ emphasis noted earlier: with two people from Ottawa and three from Montreal, half of the expert panel lives within a two-hour train ride of each other. (For those who don’t know, that’s close by Canadian standards. For comparison, a train ride from Vancouver to Seattle [US] is about four hours, a short trip when compared to a 24-hour train trip to the closest large Canadian cities.)

I appreciate that it’s not a simple problem, but my concern is that it’s never acknowledged by the CCA. Perhaps they could include a section in the report acknowledging the issues and how the expert panel attempted to address them; in other words, transparency. Coincidentally, transparency and the related issue of trust have both been identified as big issues with artificial intelligence.

As for solutions, these reports get sent to external reviewers and, before the report is finalized, outside experts are sometimes brought in as the panel readies itself. Those would be two opportunities, afforded by the CCA’s current processes, to address these issues.

Anyway, good luck with the report and I look forward to seeing it.

AI (Audeo) uses visual cues to play the right music

A February 4, 2021 news item on ScienceDaily highlights research from the University of Washington (state) about artificial intelligence, piano playing, and Audeo,

Anyone who’s been to a concert knows that something magical happens between the performers and their instruments. It transforms music from being just “notes on a page” to a satisfying experience.

A University of Washington team wondered if artificial intelligence could recreate that delight using only visual cues — a silent, top-down video of someone playing the piano. The researchers used machine learning to create a system, called Audeo, that creates audio from silent piano performances. When the group tested the music Audeo created with music-recognition apps, such as SoundHound, the apps correctly identified the piece Audeo played about 86% of the time. For comparison, these apps identified the piece in the audio tracks from the source videos 93% of the time.

The researchers presented Audeo Dec. 8 [2020] at the NeurIPS 2020 conference.

A February 4, 2021 University of Washington news release (also on EurekAlert), which originated the news item, offers more detail,

“To create music that sounds like it could be played in a musical performance was previously believed to be impossible,” said senior author Eli Shlizerman, an assistant professor in both the applied mathematics and the electrical and computer engineering departments. “An algorithm needs to figure out the cues, or ‘features,’ in the video frames that are related to generating music, and it needs to ‘imagine’ the sound that’s happening in between the video frames. It requires a system that is both precise and imaginative. The fact that we achieved music that sounded pretty good was a surprise.”

Audeo uses a series of steps to decode what’s happening in the video and then translate it into music. First, it has to detect which keys are pressed in each video frame to create a diagram over time. Then it needs to translate that diagram into something that a music synthesizer would actually recognize as a sound a piano would make. This second step cleans up the data and adds in more information, such as how strongly each key is pressed and for how long.

“If we attempt to synthesize music from the first step alone, we would find the quality of the music to be unsatisfactory,” Shlizerman said. “The second step is like how a teacher goes over a student composer’s music and helps enhance it.”
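
To make that hand-off a little more concrete, here’s a minimal, hypothetical Python sketch of the bookkeeping involved in turning per-frame key detections into timed note events that the second step could then enrich and pass to a synthesizer. The frame rate, function names, and placeholder velocity are all my assumptions, not details from the paper,

```python
# Hypothetical sketch of the hand-off between Audeo's two steps:
# per-frame key detections in, timed note events out. The real system
# uses learned models; this only illustrates the data transformation.

FPS = 25  # assumed video frame rate (not from the paper)

def frames_to_notes(key_states):
    """key_states: one set of pressed piano-key numbers per video frame."""
    notes, active = [], {}  # active maps key -> frame where it went down
    for frame, pressed in enumerate(key_states):
        for key in pressed - active.keys():      # key just pressed
            active[key] = frame
        for key in active.keys() - pressed:      # key just released
            onset = active.pop(key)
            notes.append({
                "key": key,
                "onset_sec": onset / FPS,
                "duration_sec": (frame - onset) / FPS,
                "velocity": 64,  # placeholder; step two would estimate
                                 # how strongly the key was pressed
            })
    for key, onset in active.items():            # close any held notes
        notes.append({"key": key, "onset_sec": onset / FPS,
                      "duration_sec": (len(key_states) - onset) / FPS,
                      "velocity": 64})
    return notes

# e.g., middle C (MIDI note 60) held three frames, then E (64) for two
print(frames_to_notes([{60}, {60}, {60}, {64}, {64}, set()]))
```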

The researchers trained and tested the system using YouTube videos of the pianist Paul Barton. The training consisted of about 172,000 video frames of Barton playing music from well-known classical composers, such as Bach and Mozart. Then they tested Audeo with almost 19,000 frames of Barton playing different music from these composers and others, such as Scott Joplin.

Once Audeo has generated a transcript of the music, it’s time to give it to a synthesizer that can translate it into sound. Every synthesizer will make the music sound a little different — this is similar to changing the “instrument” setting on an electric keyboard. For this study, the researchers used two different synthesizers.

“Fluidsynth makes synthesizer piano sounds that we are familiar with. These are somewhat mechanical-sounding but pretty accurate,” Shlizerman said. “We also used PerfNet, a new AI synthesizer that generates richer and more expressive music. But it also generates more noise.”
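
For readers curious about that last hand-off in general terms, here’s how one might render a MIDI-style transcript to audio with the stock FluidSynth command-line tool; the file names are placeholders, and I don’t know what the researchers’ actual tooling looked like,

```python
import subprocess

# Render a MIDI file to a WAV file with the standard FluidSynth CLI.
# 'piano.sf2' and 'transcript.mid' are placeholder file names.
subprocess.run([
    "fluidsynth",
    "-ni",                 # non-interactive: no shell, no MIDI input
    "piano.sf2",           # a piano SoundFont supplying the timbre
    "transcript.mid",      # the note events produced upstream
    "-F", "rendered.wav",  # fast-render to a file instead of the sound card
    "-r", "44100",         # sample rate in Hz
], check=True)
```

Swapping in a different SoundFont changes the character of the sound without touching the transcript, which is the same role the researchers’ choice between Fluidsynth and PerfNet plays.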

Audeo was trained and tested only on Paul Barton’s piano videos. Future research is needed to see how well it could transcribe music for any musician or piano, Shlizerman said.

“The goal of this study was to see if artificial intelligence could generate music that was played by a pianist in a video recording — though we were not aiming to replicate Paul Barton because he is such a virtuoso,” Shlizerman said. “We hope that our study enables novel ways to interact with music. For example, one future application is that Audeo can be extended to a virtual piano with a camera recording just a person’s hands. Also, by placing a camera on top of a real piano, Audeo could potentially assist in new ways of teaching students how to play.”

The researchers have created videos featuring the live pianist and the AI pianist, which you will find embedded in the February 4, 2021 University of Washington news release.

Here’s a link to and a citation for the researchers’ paper,

Audeo: Generating music just from a video of pianist movements by Kun Su, Xiulong Liu, and E. Shlizerman. http://faculty.washington.edu/shlizee/audeo/ (I had some difficulty creating a link; this appears to be an open access (?) version hosted on the senior author’s faculty page.)

The paper also appears in the proceedings of Advances in Neural Information Processing Systems 33 (NeurIPS 2020), edited by H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin. I had to scroll through many papers, and all I found for ‘Audeo’ was an abstract.