Nanosensors use AI to explore the biomolecular world

EPFL scientists have developed AI-powered nanosensors that let researchers track various kinds of biological molecules without disturbing them. Courtesy: École polytechnique fédérale de Lausanne (EPFL)

If you look at the big orange dot (representing the nanosensors?), you’ll see those purplish/fuchsia objects resemble musical notes (biological molecules?). I think that brainlike object to the left, in light blue, is the artificial intelligence (AI) component. (If anyone wants to correct my guesses or identify the bits I can’t, please feel free to add to the Comments for this blog.)

Getting back to my topic, keep the ‘musical notes’ in mind as you read about some of the latest research from l’École polytechnique fédérale de Lausanne (EPFL) in an April 7, 2021 news item on Nanowerk,

The tiny world of biomolecules is rich in fascinating interactions between a plethora of different agents such as intricate nanomachines (proteins), shape-shifting vessels (lipid complexes), chains of vital information (DNA) and energy fuel (carbohydrates). Yet the ways in which biomolecules meet and interact to define the symphony of life is exceedingly complex.

Scientists at the Bionanophotonic Systems Laboratory in EPFL’s School of Engineering have now developed a new biosensor that can be used to observe all major biomolecule classes of the nanoworld without disturbing them. Their innovative technique uses nanotechnology, metasurfaces, infrared light and artificial intelligence.

To each molecule its own melody

In this nano-sized symphony, perfect orchestration makes physiological wonders such as vision and taste possible, while slight dissonances can amplify into horrendous cacophonies leading to pathologies such as cancer and neurodegeneration.

An April 7, 2021 EPFL press release, which originated the news item, provides more detail,

“Tuning into this tiny world and being able to differentiate between proteins, lipids, nucleic acids and carbohydrates without disturbing their interactions is of fundamental importance for understanding life processes and disease mechanisms,” says Hatice Altug, the head of the Bionanophotonic Systems Laboratory. 

Light, and more specifically infrared light, is at the core of the biosensor developed by Altug’s team. Humans cannot see infrared light, which is beyond the visible light spectrum that ranges from blue to red. However, we can feel it in the form of heat in our bodies, as our molecules vibrate under the infrared light excitation.

Molecules consist of atoms bonded to each other and – depending on the mass of the atoms and the arrangement and stiffness of their bonds – vibrate at specific frequencies. This is similar to the strings on a musical instrument that vibrate at specific frequencies depending on their length. These resonant frequencies are molecule-specific, and they mostly occur in the infrared frequency range of the electromagnetic spectrum. 
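(A quick aside from me, not the press release: the ‘strings on a musical instrument’ picture can be made concrete with the simplest physics model, a bond treated as a harmonic oscillator. The numbers below are rough, order-of-magnitude values only.)

```latex
% Illustrative harmonic-oscillator model of a vibrating bond:
% two atoms of masses m_1 and m_2 joined by a bond of stiffness k.
\[
\nu = \frac{1}{2\pi}\sqrt{\frac{k}{\mu}},
\qquad
\mu = \frac{m_1 m_2}{m_1 + m_2}
\]
% With k on the order of 10^3 N/m and reduced masses of roughly 10^-26 kg,
% \nu comes out at a few times 10^13 Hz -- squarely in the infrared.
```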

“If you imagine audio frequencies instead of infrared frequencies, it’s as if each molecule has its own characteristic melody,” says Aurélian John-Herpin, a doctoral assistant at Altug’s lab and the first author of the publication. “However, tuning into these melodies is very challenging because without amplification, they are mere whispers in a sea of sounds. To make matters worse, their melodies can present very similar motifs making it hard to tell them apart.” 

Metasurfaces and artificial intelligence

The scientists solved these two issues using metasurfaces and AI. Metasurfaces are man-made materials with outstanding light-manipulation capabilities at the nanoscale, thereby enabling functions beyond what is otherwise seen in nature. Here, their precisely engineered meta-atoms made out of gold nanorods act like amplifiers of light-matter interactions by tapping into the plasmonic excitations resulting from the collective oscillations of free electrons in metals. “In our analogy, these enhanced interactions make the whispered molecule melodies more audible,” says John-Herpin.
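(Another aside, mine rather than EPFL’s: the press release doesn’t give the design rule, but a common first-order picture in plasmonics treats each rod as a half-wave optical antenna, which suggests why micron-scale gold rods would resonate in the infrared.)

```latex
% First-order antenna picture (an approximation, not the paper's design rule):
% a rod of length L resonates when it spans half an effective wavelength.
\[
\lambda_{\mathrm{res}} \approx 2\, n_{\mathrm{eff}}\, L
\]
% n_eff is an effective refractive index set by the rod and its surroundings.
% With n_eff ~ 1.5 and L ~ 2 micrometres, lambda_res ~ 6 micrometres:
% mid-infrared, near the protein amide bands.
```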

AI is a powerful tool that can be fed with more data than humans can handle in the same amount of time and that can quickly develop the ability to recognize complex patterns from the data. John-Herpin explains, “AI can be imagined as a complete beginner musician who listens to the different amplified melodies and develops a perfect ear after just a few minutes and can tell the melodies apart, even when they are played together – like in an orchestra featuring many instruments simultaneously.” 
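(To make the ‘perfect ear’ analogy a little more concrete, here’s a toy sketch of the idea, mine and not the paper’s: a small neural network, built with scikit-learn, learns to tell apart synthetic ‘spectra’ assembled from a few characteristic absorption peaks. The peak positions are typical infrared band assignments for these molecule classes; the network, data, and numbers are invented for illustration.)

```python
# Illustrative sketch only -- NOT the authors' model. It mimics the idea of
# learning to separate overlapping infrared "melodies": each class of molecule
# is a noisy spectrum with characteristic absorption peaks.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wavenumbers = np.linspace(1000, 3000, 200)          # cm^-1, hypothetical axis

# Typical band positions for four biomolecule classes (rounded literature values)
PEAKS = {
    "protein":      [1550, 1650],   # amide II / amide I bands
    "lipid":        [1740, 2850],   # ester C=O, CH2 stretch
    "nucleic_acid": [1090, 1240],   # phosphate bands
    "carbohydrate": [1030, 1150],   # C-O stretches
}

def spectrum(peaks):
    """Sum of Gaussian absorption bands plus noise."""
    s = sum(np.exp(-0.5 * ((wavenumbers - p) / 25.0) ** 2) for p in peaks)
    return s + rng.normal(0, 0.05, wavenumbers.size)

labels = list(PEAKS)
X = np.array([spectrum(PEAKS[l]) for l in labels for _ in range(200)])
y = np.array([i for i, _ in enumerate(labels) for _ in range(200)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```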

The first biosensor of its kind

When the scientists’ infrared metasurfaces are augmented with AI, the new sensor can be used to analyze biological assays featuring multiple analytes simultaneously from the major biomolecule classes and to resolve their dynamic interactions. 

“We looked in particular at lipid vesicle-based nanoparticles and monitored their breakage through the insertion of a toxin peptide and the subsequent release of vesicle cargos of nucleotides and carbohydrates, as well as the formation of supported lipid bilayer patches on the metasurface,” says Altug.

This pioneering AI-powered, metasurface-based biosensor will open up exciting perspectives for studying and unraveling inherently complex biological processes, such as intercellular communication via exosomes and the interaction of nucleic acids and carbohydrates with proteins in gene regulation and neurodegeneration. 

“We imagine that our technology will have applications in the fields of biology, bioanalytics and pharmacology – from fundamental research and disease diagnostics to drug development,” says Altug. 

Here’s a link to and a citation for the paper,

Infrared Metasurface Augmented by Deep Learning for Monitoring Dynamics between All Major Classes of Biomolecules by Aurelian John‐Herpin, Deepthy Kavungal, Lea von Mücke, and Hatice Altug. Advanced Materials, Volume 33, Issue 14, April 8, 2021, 2006054. DOI: https://doi.org/10.1002/adma.202006054 First published: 22 February 2021

This paper is open access.

Art, sound, AI, & the Metacreation Lab’s Spring 2021 newsletter

The Metacreation Lab’s Spring 2021 newsletter (received via email) features a number of events either currently taking place or about to take place.

2021 AI Song Contest

2021 marks the second year for this international event, the AI Song Contest. The folks at Simon Fraser University’s (SFU) Metacreation Lab have an entry for the 2021 event, A song about the weekend (and you can do whatever you want). Should you click on the song entry, you will find an audio file, a survey/vote consisting of four questions and, if you keep scrolling down, more information about the creative team, the song and more,

Driven by collaborations involving scientists, experts in artificial intelligence, cognitive sciences, designers, and artists, the Metacreation Lab for Creative AI is at the forefront of the development of generative systems, whether these are embedded in interactive experiences or automating workflows integrated into cutting-edge creative software.

Team:

Cale Plut (Composer and musician) is a PhD Student in the Metacreation lab, researching AI music applications in video games.

Philippe Pasquier (Producer and supervisor) is an Associate Professor, and leads the Metacreation Lab. 

Jeff Ens (AI programmer) is a PhD Candidate in the Metacreation lab, researching AI models for music generation.

Renaud Tchemeube (Producer and interaction designer) is a PhD Student in the Metacreation Lab, researching interaction software design for creativity.

Tara Jadidi (Research Assistant) is an undergraduate student at FUM, Iran, working with the Metacreation lab.

Dimiter Zlatkov (Research Assistant) is an undergraduate student at UBC, working with the Metacreation lab.

ABOUT THE SONG

A song about the weekend (and you can do whatever you want) explores the relationships between AI, humans, labour, and creation in a lighthearted and fun song. It is co-created with the Multi-track Music Machine (MMM).

Throughout the history of automation and industrialization, the relationship between the labour-magnifying power of automation and the recipients of the benefits of that magnification has been in contention. While increasing levels of automation are often accompanied by promises of future leisure increases, this rarely materializes for the workers whose labour is multiplied. By primarily using automated methods to create a “fun” song about leisure, we highlight both the promise of AI-human cooperation and the disparities in its real-world deployment. 

As for the competition itself, here’s more from the FAQs (frequently asked questions),

What is the AI Song Contest?

AI Song Contest is an international creative AI contest. Teams from all over the world try to create a 4-minute pop song with the help of artificial intelligence.

When and where does it take place?

Between June 1, 2021 and July 1, 2021 voting is open for the international public. On July 6 there will be multiple online panel sessions, and the winner of the AI Song Contest 2021 will be announced in an online award ceremony. All sessions on July 6 are organised in collaboration with Wallifornia MusicTech.

How is the winner determined?

Each participating team will be awarded two sets of points: one from a public vote by the contest’s international audience, the other from the determination of an expert jury.

Anyone can evaluate as many songs as they like: from one, up to all thirty-eight. Every song can be evaluated only once. Even though it won’t count in the grand total, lyrics can be evaluated too; we do like to determine which team wrote the best lyrics according to the audience.

Can I vote multiple times for the same team?

No, votes are controlled by IP address. So only one of your votes will count.

Is this the first time the contest is organised?

This is the second time the AI Song Contest is organised. The contest was first initiated in 2020 by Dutch public broadcaster VPRO together with NPO Innovation and NPO 3FM. Teams from Europe and Australia tried to create a Eurovision kind of song with the help of AI. Team Uncanny Valley from Australia won the first edition with their song Beautiful the World. The 2021 edition is organised independently.

What is the definition of artificial intelligence in this contest?

Artificial intelligence is a very broad concept. For this contest it will mean that teams can use techniques such as (but not limited to) machine learning, including deep learning, natural language processing, algorithmic composition, or combining rule-based approaches with neural networks for the creation of their songs. Teams can create their own AI tools, or use existing models and algorithms.

What are possible challenges?

Read here about the challenges teams from last year’s contest faced.

As an AI researcher, can I collaborate with musicians?

Yes – this is strongly encouraged!

For the 2020 edition, all songs had to be Eurovision-style. Is that also the intention for 2021 entries?

Last year, the first year the contest was organized, it was indeed all about Eurovision. For this year’s competition, we are trying to expand geographically, culturally, and musically. Teams from all over the world can compete, and songs in all genres can be submitted.

If you’re not familiar with Eurovision-style, you can find a compilation video with brief excerpts from the 26 finalists for Eurovision 2021 here (Bill Young’s May 23, 2021 posting on tellyspotting.kera.org; the video runs under 10 mins.). There’s also the “Eurovision Song Contest: The Story of Fire Saga” 2020 movie starring Rachel McAdams, Will Ferrell, and Dan Stevens. It’s intended as a gentle parody but the style is all there.

ART MACHINES 2: International Symposium on Machine Learning and Art 2021

The symposium, Art Machines 2, started yesterday (June 10, 2021 and runs to June 14, 2021) in Hong Kong and SFU’s Metacreation Lab will be represented (from the Spring 2021 newsletter received via email),

On Sunday, June 13 [2021] at 21:45 Hong Kong Standard Time (UTC +8) as part of the Sound Art Paper Session chaired by Ryo Ikeshiro, the Metacreation Lab’s Mahsoo Salimi and Philippe Pasquier will present their paper, Exploiting Swarm Aesthetics in Sound Art. We’ve included a more detailed preview of the paper in this newsletter below.

Concurrent with ART MACHINES 2 is the launch of two exhibitions – Constructing Contexts and System Dreams. Constructing Contexts, curated by Tobias Klein and Rodrigo Guzman-Serrano, will bring together 27 works with unique approaches to the question of contexts as applied by generative adversarial networks. System Dreams highlights work from the latest MFA talent from the School of Creative Media. While the exhibitions take place in Hong Kong, the participating artists and artwork are well documented online.

Liminal Tones: Swarm Aesthetics in Sound Art

Applications of swarm aesthetics in music composition are not new and have already resulted in volumes of complex soundscapes and musical compositions. Using an experimental approach, Mahsoo Salimi and Philippe Pasquier create a series of sound textures known as Liminal Tones (B/ Rain Dream) based on swarming behaviours.

Findings of the Liminal Tones project will be presented in papers for Art Machines 2: International Symposium on Machine Learning and Art (June 10-14 [2021]) and the International Conference on Swarm Intelligence (July 17-21 [2021]).

Talk about Creative AI at the University of British Columbia

This is the last item I’m excerpting from the newsletter. (Should you be curious about what else is listed, you can go to the Metacreation Lab’s contact page and sign up for the newsletter there.) On June 22, 2021 at 2:00 PM PDT, there will be this event,

Creative AI: on the partial or complete automation of creative tasks @ CAIDA

Philippe Pasquier will be giving a talk on creative applications of AI at CAIDA: UBC ICICS Centre for Artificial Intelligence Decision-making and Action. Overviewing the state of the art of computer-assisted creativity and embedded systems and their various applications, the talk will survey the design, deployment, and evaluation of generative systems.

Free registration for the talk is available at the link below.

Register for Creative AI @ CAIDA

Remember, if you want to see the rest of the newsletter, you can sign up at the Metacreation Lab’s contact page.

US Army researchers’ vision for artificial intelligence and ethics

The US Army peeks into a near future where humans and some forms of artificial intelligence (AI) work together in battle and elsewhere. From a February 3, 2021 U.S. Army Research Laboratory news release (also on EurekAlert but published on February 16, 2021),

The Army of the future will involve humans and autonomous machines working together to accomplish the mission. According to Army researchers, this vision will only succeed if artificial intelligence is perceived to be ethical.

Researchers based at the U.S. Army Combat Capabilities Development Command (now known as DEVCOM) Army Research Laboratory, Northeastern University, and the University of Southern California expanded existing research to cover moral dilemmas and decision making that have not been pursued elsewhere.

This research, featured in Frontiers in Robotics and AI, tackles the fundamental challenge of developing ethical artificial intelligence, which, according to the researchers, remains largely understudied.

“Autonomous machines, such as automated vehicles and robots, are poised to become pervasive in the Army,” said DEVCOM ARL researcher Dr. Celso de Melo, who is located at the laboratory’s ARL West regional site in Playa Vista, California. “These machines will inevitably face moral dilemmas where they must make decisions that could very well injure humans.”

For example, de Melo said, imagine that an automated vehicle is driving in a tunnel and suddenly five pedestrians cross the street; the vehicle must decide whether to continue moving forward, injuring the pedestrians, or swerve toward the wall, risking the driver.

What should the automated vehicle do in this situation?

Prior work has framed these dilemmas in starkly simple terms, casting decisions as life and death, de Melo said, neglecting the influence of the risk of injury to the involved parties on the outcome.

“By expanding the study of moral dilemmas to consider the risk profile of the situation, we significantly expanded the space of acceptable solutions for these dilemmas,” de Melo said. “In so doing, we contributed to the development of autonomous technology that abides by acceptable moral norms and thus is more likely to be adopted in practice and accepted by the general public.”

The researchers focused on this gap and presented experimental evidence that, in a moral dilemma with automated vehicles, the likelihood of making the utilitarian choice – which minimizes the overall injury risk to humans and, in this case, saves the pedestrians – was moderated by the perceived risk of injury to pedestrians and drivers.

In their study, participants were found to be more likely to make the utilitarian choice with decreasing risk to the driver and with increasing risk to the pedestrians. Interestingly, however, most were willing to risk the driver (i.e., self-sacrifice), even if the risk to the pedestrians was lower than the risk to the driver.
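(Here’s a toy rendering, my construction rather than the paper’s model, of what a ‘risk-moderated utilitarian choice’ could mean in code: pick the action with the lowest total expected injuries, given hypothetical injury probabilities for driver and pedestrians.)

```python
# Toy illustration of the risk-weighted dilemma (my construction, not the
# paper's model): the "utilitarian" action minimizes total expected injuries.
def expected_injuries(p_driver: float, p_pedestrian: float,
                      n_pedestrians: int = 5) -> dict:
    """Expected injuries for each action, given injury probabilities."""
    return {
        "swerve":   p_driver * 1,                  # risk the one driver
        "continue": p_pedestrian * n_pedestrians,  # risk the pedestrians
    }

def utilitarian_choice(p_driver, p_pedestrian, n_pedestrians=5):
    costs = expected_injuries(p_driver, p_pedestrian, n_pedestrians)
    return min(costs, key=costs.get)

# Lowering the driver's risk, or raising the pedestrians', tips the
# choice toward swerving -- the moderation effect the study reports.
print(utilitarian_choice(p_driver=0.2, p_pedestrian=0.8))  # swerve
print(utilitarian_choice(p_driver=0.9, p_pedestrian=0.1))  # continue
```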

As a second contribution, the researchers also demonstrated that participants’ moral decisions were influenced by what other decision makers do – for instance, participants were less likely to make the utilitarian choice, if others often chose the non-utilitarian choice.

“This research advances the state-of-the-art in the study of moral dilemmas involving autonomous machines by shedding light on the role of risk on moral choices,” de Melo said. “Further, both of these mechanisms introduce opportunities to develop AI that will be perceived to make decisions that meet moral standards, as well as introduce an opportunity to use technology to shape human behavior and promote a more moral society.”

For the Army, this research is particularly relevant to Army modernization, de Melo said.

“As these vehicles become increasingly autonomous and operate in complex and dynamic environments, they are bound to face situations where injury to humans is unavoidable,” de Melo said. “This research informs how to navigate these moral dilemmas and make decisions that will be perceived as optimal given the circumstances; for example, minimizing overall risk to human life.”

Moving into the future, researchers will study this type of risk-benefit analysis in Army moral dilemmas and articulate the corresponding practical implications for the development of AI systems.

“When deployed at scale, the decisions made by AI systems can be very consequential, in particular for situations involving risk to human life,” de Melo said. “It is critical that AI is able to make decisions that reflect society’s ethical standards to facilitate adoption by the Army and acceptance by the general public. This research contributes to realizing this vision by clarifying some of the key factors shaping these standards. This research is personally important because AI is expected to have considerable impact to the Army of the future; however, what kind of impact will be defined by the values reflected in that AI.”

The last time I had an item on a similar topic from the US Army Research Laboratory (ARL) it was in a March 26, 2018 posting; scroll down to the subhead, US Army (about 50% of the way down),

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

This latest work also revolves around the issue of trust according to the last sentence in the 2021 study paper (link and citation to follow),

… Overall, these questions emphasize the importance of the kind of experimental work presented here, as it has the potential to shed light on people’s preferences about moral behavior in machines, inform the design of autonomous machines people are likely to trust and adopt, and, perhaps, even introduce an opportunity to promote a more moral society. [emphases mine]

From trust to adoption to a more moral society—that’s an interesting progression. For another, more optimistic view of how robots and AI can have positive impacts, there’s my March 29, 2021 posting, Little Lost Robot and humane visions of our technological future.

Here’s a link to and a citation for the paper,

Risk of Injury in Moral Dilemmas With Autonomous Vehicles by Celso M. de Melo, Stacy Marsella, and Jonathan Gratch. Front. Robot. AI [Frontiers in Robotics and AI], 20 January 2021. DOI: https://doi.org/10.3389/frobt.2020.572529

This paper is in an open access journal.

Artificial emotional intelligence detection

Sabotage was not my first thought on reading about artificial emotional intelligence, so this February 11, 2021 Incheon National University press release (also on EurekAlert) is educational in an unexpected way (Note: A link has been removed),

With the advent of 5G communication technology and its integration with AI, we are looking at the dawn of a new era in which people, machines, objects, and devices are connected like never before. This smart era will be characterized by smart facilities and services such as self-driving cars, smart UAVs [unmanned aerial vehicles], and intelligent healthcare. This will be the aftermath of a technological revolution.

But the flip side of such technological revolution is that AI [artificial intelligence] itself can be used to attack or threaten the security of 5G-enabled systems which, in turn, can greatly compromise their reliability. It is, therefore, imperative to investigate such potential security threats and explore countermeasures before a smart world is realized.

In a recent study published in IEEE Network, a team of researchers led by Prof. Hyunbum Kim from Incheon National University, Korea, address such issues in relation to an AI-based, 5G-integrated virtual emotion recognition system called 5G-I-VEmoSYS, which detects human emotions using wireless signals and body movement. “Emotions are a critical characteristic of human beings and separate humans from machines, defining daily human activity. However, some emotions can also disrupt the normal functioning of a society and put people’s lives in danger, such as those of an unstable driver. Emotion detection technology thus has great potential for recognizing any disruptive emotion and in tandem with 5G and beyond-5G communication, warning others of potential dangers,” explains Prof. Kim. “For instance, in the case of the unstable driver, the AI-enabled driver system of the car can inform the nearest network towers, from where nearby pedestrians can be informed via their personal smart devices.”

The virtual emotion system developed by Prof. Kim’s team, 5G-I-VEmoSYS, can recognize at least five kinds of emotion (joy, pleasure, a neutral state, sadness, and anger) and is composed of three subsystems dealing with the detection, flow, and mapping of human emotions. The system concerned with detection is called Artificial Intelligence-Virtual Emotion Barrier, or AI-VEmoBAR, which relies on the reflection of wireless signals from a human subject to detect emotions. This emotion information is then handled by the system concerned with flow, called Artificial Intelligence-Virtual Emotion Flow, or AI-VEmoFLOW, which enables the flow of specific emotion information at a specific time to a specific area. Finally, the Artificial Intelligence-Virtual Emotion Map, or AI-VEmoMAP, utilizes a large amount of this virtual emotion data to create a virtual emotion map that can be utilized for threat detection and crime prevention.
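(For readers who like to see architecture as code, here’s a hypothetical sketch of the three-stage pipeline the press release describes. The subsystem names come from the release; the interfaces, data types, and stand-in logic are my assumptions, not the authors’ implementation.)

```python
# Hypothetical sketch of the three-stage 5G-I-VEmoSYS pipeline described in
# the press release. Interfaces and data types are my assumptions.
from dataclasses import dataclass

EMOTIONS = ("joy", "pleasure", "neutral", "sadness", "anger")

@dataclass
class EmotionEvent:
    emotion: str
    location: str
    timestamp: float

def ai_vemobar(wireless_signal: list[float]) -> str:
    """AI-VEmoBAR: detect an emotion from reflected wireless signals.
    Stand-in logic -- a real detector would be a trained model."""
    return "anger" if sum(wireless_signal) > 10 else "neutral"

def ai_vemoflow(emotion: str, location: str, timestamp: float) -> EmotionEvent:
    """AI-VEmoFLOW: route specific emotion information for a time and area."""
    return EmotionEvent(emotion, location, timestamp)

def ai_vemomap(events: list[EmotionEvent]) -> dict[str, list[str]]:
    """AI-VEmoMAP: aggregate events into a per-area virtual emotion map."""
    vmap: dict[str, list[str]] = {}
    for e in events:
        vmap.setdefault(e.location, []).append(e.emotion)
    return vmap

event = ai_vemoflow(ai_vemobar([4.0, 7.5]), location="crosswalk-12",
                    timestamp=1613000000.0)
print(ai_vemomap([event]))  # {'crosswalk-12': ['anger']}
```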

A notable advantage of 5G-I-VEmoSYS is that it allows emotion detection without revealing the face or other private parts of the subjects, thereby protecting the privacy of citizens in public areas. Moreover, in private areas, it gives the user the choice to remain anonymous while providing information to the system. Furthermore, when a serious emotion, such as anger or fear, is detected in a public area, the information is rapidly conveyed to the nearest police department or relevant entities who can then take steps to prevent any potential crime or terrorism threats.

However, the system suffers from serious security issues such as the possibility of illegal signal tampering, abuse of anonymity, and hacking-related cyber-security threats. Further, the danger of sending false alarms to authorities remains.

While these concerns do put the system’s reliability at stake, Prof. Kim’s team are confident that they can be countered with further research. “This is only an initial study. In the future, we need to achieve rigorous information integrity and accordingly devise robust AI-based algorithms that can detect compromised or malfunctioning devices and offer protection against potential system hacks,” explains Prof. Kim. “Only then will it enable people to have safer and more convenient lives in the advanced smart cities of the future.”

Intriguing, yes? The researchers have used this image to illustrate their work,

Caption: With 5G communication technology and new AI-based systems such as emotion recognition systems, smart cities are all set to become a reality; but these systems need to be honed and security issues need to be ironed out before the smart reality can be realized. Credit: macrovector on Freepik

Before getting to the link and citation for the paper, I have a March 8, 2019 article by Meredith Somers for MIT (Massachusetts Institute of Technology) Sloan School of Management’s Ideas Made to Matter publication (Note: Links have been removed),

What did you think of the last commercial you watched? Was it funny? Confusing? Would you buy the product? You might not remember or know for certain how you felt, but increasingly, machines do. New artificial intelligence technologies are learning and recognizing human emotions, and using that knowledge to improve everything from marketing campaigns to health care.

These technologies are referred to as “emotion AI.” Emotion AI is a subset of artificial intelligence (the broad term for machines replicating the way humans think) that measures, understands, simulates, and reacts to human emotions. It’s also known as affective computing, or artificial emotional intelligence. The field dates back to at least 1995, when MIT Media Lab professor Rosalind Picard published “Affective Computing.”

Javier Hernandez, a research scientist with the Affective Computing Group at the MIT Media Lab, explains emotion AI as a tool that allows for a much more natural interaction between humans and machines. “Think of the way you interact with other human beings; you look at their faces, you look at their body, and you change your interaction accordingly,” Hernandez said. “How can [a machine] effectively communicate information if it doesn’t know your emotional state, if it doesn’t know how you’re feeling, it doesn’t know how you’re going to respond to specific content?”

While humans might currently have the upper hand on reading emotions, machines are gaining ground using their own strengths. Machines are very good at analyzing large amounts of data, explained MIT Sloan professor Erik Brynjolfsson. They can listen to voice inflections and start to recognize when those inflections correlate with stress or anger. Machines can analyze images and pick up subtleties in micro-expressions on humans’ faces that might happen even too fast for a person to recognize.

“We have a lot of neurons in our brain for social interactions. We’re born with some of those skills, and then we learn more. It makes sense to use technology to connect to our social brains, not just our analytical brains,” Brynjolfsson said. “Just like we can understand speech and machines can communicate in speech, we also understand and communicate with humor and other kinds of emotions. And machines that can speak that language — the language of emotions — are going to have better, more effective interactions with us. It’s great that we’ve made some progress; it’s just something that wasn’t an option 20 or 30 years ago, and now it’s on the table.”

Somers describes current uses of emotion AI (I’ve selected two from her list; Note: A link has been removed),

Call centers — Technology from Cogito, a company co-founded in 2007 by MIT Sloan alumni, helps call center agents identify the moods of customers on the phone and adjust how they handle the conversation in real time. Cogito’s voice-analytics software is based on years of human behavior research to identify voice patterns.

Mental health — In December 2018 Cogito launched a spinoff called CompanionMx, and an accompanying mental health monitoring app. The Companion app listens to someone speaking into their phone, and analyzes the speaker’s voice and phone use for signs of anxiety and mood changes.

The app improves users’ self-awareness, and can increase coping skills including steps for stress reduction. The company has worked with the Department of Veterans Affairs, the Massachusetts General Hospital, and Brigham & Women’s Hospital in Boston.

Somers’ March 8, 2019 article was an eye-opener.

Getting back to the Korean research, here’s a link to and a citation for the paper,

Research Challenges and Security Threats to AI-Driven 5G Virtual Emotion Applications Using Autonomous Vehicles, Drones, and Smart Devices by Hyunbum Kim, Jalel Ben-Othman, Lynda Mokdad, Junggab Son, and Chunguo Li. IEEE Network, Volume 34, Issue 6, November/December 2020, pp. 288–294. DOI: 10.1109/MNET.011.2000245 Date of online publication: 12 October 2020

This paper is behind a paywall.

Getting to be more literate than humans

Lucinda McKnight, lecturer at Deakin University, Australia, has a February 9, 2021 essay about literacy in the coming age of artificial intelligence (AI) for The Conversation (Note 1: You can also find this essay as a February 10, 2021 news item on phys.org; Note 2: Links have been removed),

Students across Australia have started the new school year using pencils, pens and keyboards to learn to write.

In workplaces, machines are also learning to write, so effectively that within a few years they may write better than humans.

Sometimes they already do, as apps like Grammarly demonstrate. Certainly, much everyday writing humans now do may soon be done by machines with artificial intelligence (AI).

The predictive text commonly used by phone and email software is a form of AI writing that countless humans use every day.
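(Predictive text is a nice, concrete example of machine ‘writing’. Here’s a minimal bigram predictor, far simpler than a real phone keyboard model and entirely my own sketch, but it shows the principle: suggest the next word from counts of what followed it in past text.)

```python
# Minimal bigram "predictive text" sketch -- far simpler than a phone
# keyboard's model, but it shows the basic idea: predict the next word
# from counts of what followed it in past text.
from collections import Counter, defaultdict

corpus = ("see you at the meeting . see you at the party . "
          "thanks for the update . thanks for the meeting notes .").split()

following: dict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def suggest(word: str, k: int = 3) -> list[str]:
    """Return the k most frequent words seen after `word`."""
    return [w for w, _ in following[word].most_common(k)]

print(suggest("the"))  # e.g. ['meeting', 'party', 'update']
```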

According to industry research organisation Gartner, AI and related technology will automate production of 30% of all content found on the internet by 2022.

Some prose, poetry, reports, newsletters, opinion articles, reviews, slogans and scripts are already being written by artificial intelligence.

Literacy increasingly means and includes interacting with and critically evaluating AI.

This means our children should no longer be taught just formulaic writing. [emphasis mine] Instead, writing education should encompass skills that go beyond the capacities of artificial intelligence.

McKnight’s focus is on how Australian education should approach the coming AI writer ‘supremacy’, from her February 9, 2021 essay (Note: Links have been removed),

In 2019, the New Yorker magazine did an experiment to see if IT company OpenAI’s natural language generator GPT-2 could write an entire article in the magazine’s distinctive style. This attempt had limited success, with the generator making many errors.

But by 2020, GPT-3, the new version of the machine, trained on even more data, wrote an article for The Guardian newspaper with the headline “A robot wrote this entire article. Are you scared yet, human?”

This latest much improved generator has implications for the future of journalism, as the Elon Musk-funded OpenAI invests ever more in research and development.

AI writing is said to have voice but no soul. Human writers, as the New Yorker’s John Seabrook says, give “color, personality and emotion to writing by bending the rules”. Students, therefore, need to learn the rules and be encouraged to break them.

Creativity and co-creativity (with machines) should be fostered. Machines are trained on a finite amount of data, to predict and replicate, not to innovate in meaningful and deliberate ways.

AI cannot yet plan and does not have a purpose. Students need to hone skills in purposeful writing that achieves their communication goals.

AI is not yet as complex as the human brain. Humans detect humor and satire. They know words can have multiple and subtle meanings. Humans are capable of perception and insight; they can make advanced evaluative judgements about good and bad writing.

There are calls for humans to become experts in sophisticated forms of writing and in editing writing created by robots, as vital future skills.

… OpenAI’s managers originally refused to release GPT-3, ostensibly because they were concerned about the generator being used to create fake material, such as reviews of products or election-related commentary.

AI writing bots have no conscience and may need to be eliminated by humans, as with Microsoft’s racist Twitter prototype, Tay.

Critical, compassionate and nuanced assessment of what AI produces, management and monitoring of content, and decision-making and empathy with readers are all part of the “writing” roles of a democratic future.

It’s an interesting line of thought, and McKnight’s ideas about writing education could be applicable beyond Australia, assuming you accept her basic premise.

I have a few other postings here about AI and writing:

Writing and AI or is a robot writing this blog? a July 16, 2014 posting

AI (artificial intelligence) text generator, too dangerous to release? a February 18, 2019 posting

Automated science writing? a September 16, 2019 posting

It seems I have a lot of questions about the automation of any kind of writing.

Council of Canadian Academies and its expert panel for the AI for Science and Engineering project

There seems to be an explosion (metaphorically and only by Canadian standards) of interest in public perceptions/engagement/awareness of artificial intelligence (see my March 29, 2021 posting, “Canada launches its AI dialogues,” whose dialogues run until April 30, 2021, plus my April 6, 2021 posting, “UNESCO’s Call for Proposals to highlight blind spots in AI Development open ’til May 2, 2021,” which was launched in cooperation with Mila-Québec Artificial Intelligence Institute).

Now there’s this: in a March 31, 2020 Council of Canadian Academies (CCA) news release, four new projects were announced. (Admittedly, these are not ‘public engagement’ exercises as such, but the reports are publicly available and utilized by policymakers.) These are the two projects of most interest to me,

Public Safety in the Digital Age

Information and communications technologies have profoundly changed almost every aspect of life and business in the last two decades. While the digital revolution has brought about many positive changes, it has also created opportunities for criminal organizations and malicious actors to target individuals, businesses, and systems.

This assessment will examine promising practices that could help to address threats to public safety related to the use of digital technologies while respecting human rights and privacy.

Sponsor: Public Safety Canada

AI for Science and Engineering

The use of artificial intelligence (AI) and machine learning in science and engineering has the potential to radically transform the nature of scientific inquiry and discovery and produce a wide range of social and economic benefits for Canadians. But the adoption of these technologies also presents a number of potential challenges and risks.

This assessment will examine the legal/regulatory, ethical, policy and social challenges related to the use of AI technologies in scientific research and discovery.

Sponsor: National Research Council Canada [NRC] (co-sponsors: CIFAR [Canadian Institute for Advanced Research], CIHR [Canadian Institutes of Health Research], NSERC [Natural Sciences and Engineering Research Council], and SSHRC [Social Sciences and Humanities Research Council])

For today’s posting the focus will be on the AI project, specifically, the April 19, 2021 CCA news release announcing the project’s expert panel,

The Council of Canadian Academies (CCA) has formed an Expert Panel to examine a broad range of factors related to the use of artificial intelligence (AI) technologies in scientific research and discovery in Canada. Teresa Scassa, SJD, Canada Research Chair in Information Law and Policy at the University of Ottawa, will serve as Chair of the Panel.  

“AI and machine learning may drastically change the fields of science and engineering by accelerating research and discovery,” said Dr. Scassa. “But these technologies also present challenges and risks. A better understanding of the implications of the use of AI in scientific research will help to inform decision-making in this area and I look forward to undertaking this assessment with my colleagues.”

As Chair, Dr. Scassa will lead a multidisciplinary group with extensive expertise in law, policy, ethics, philosophy, sociology, and AI technology. The Panel will answer the following question:

What are the legal/regulatory, ethical, policy and social challenges associated with deploying AI technologies to enable scientific/engineering research design and discovery in Canada?

“We’re delighted that Dr. Scassa, with her extensive experience in AI, the law and data governance, has taken on the role of Chair,” said Eric M. Meslin, PhD, FRSC, FCAHS, President and CEO of the CCA. “I anticipate the work of this outstanding panel will inform policy decisions about the development, regulation and adoption of AI technologies in scientific research, to the benefit of Canada.”

The CCA was asked by the National Research Council of Canada (NRC), along with co-sponsors CIFAR, CIHR, NSERC, and SSHRC, to address the question. More information can be found here.

The Expert Panel on AI for Science and Engineering:

Teresa Scassa (Chair), SJD, Canada Research Chair in Information Law and Policy, University of Ottawa, Faculty of Law (Ottawa, ON)

Julien Billot, CEO, Scale AI (Montreal, QC)

Wendy Hui Kyong Chun, Canada 150 Research Chair in New Media and Professor of Communication, Simon Fraser University (Burnaby, BC)

Marc Antoine Dilhac, Professor (Philosophy), University of Montreal; Director of Ethics and Politics, Centre for Ethics (Montréal, QC)

B. Courtney Doagoo, AI and Society Fellow, Centre for Law, Technology and Society, University of Ottawa; Senior Manager, Risk Consulting Practice, KPMG Canada (Ottawa, ON)

Abhishek Gupta, Founder and Principal Researcher, Montreal AI Ethics Institute (Montréal, QC)

Richard Isnor, Associate Vice President, Research and Graduate Studies, St. Francis Xavier University (Antigonish, NS)

Ross D. King, Professor, Chalmers University of Technology (Göteborg, Sweden)

Sabina Leonelli, Professor of Philosophy and History of Science, University of Exeter (Exeter, United Kingdom)

Raymond J. Spiteri, Professor, Department of Computer Science, University of Saskatchewan (Saskatoon, SK)

Who is the expert panel?

Putting together a Canadian panel is an interesting problem, especially when you’re trying to find people with expertise who can also represent various viewpoints, both professionally and regionally. Then, there are gender, racial, linguistic, urban/rural, and ethnic considerations.

Statistics

Eight of the panelists could be said to be representing various regions of Canada. Five of those eight panelists are based in central Canada, specifically, Ontario (Ottawa) or Québec (Montréal). The sixth panelist is based in Atlantic Canada (Nova Scotia), the seventh panelist is based in the Prairies (Saskatchewan), and the eighth panelist is based in western Canada, (Vancouver, British Columbia).

The two panelists bringing an international perspective to this project are both based in Europe, specifically, Sweden and the UK.

(sigh) It would be good to have representation from another part of the world. Asia springs to mind as researchers in that region are very advanced in their AI research and applications meaning that their experts and ethicists are likely to have valuable insights.

Four of the ten panelists are women, which is closer to equal representation than some of the other CCA panels I’ve looked at.

As for Indigenous and BIPOC representation, unless one or more of the panelists chooses to self-identify in that fashion, I cannot make any comments. It should be noted that more than one expert panelist focuses on social justice and/or bias in algorithms.

Network of relationships

As you can see, the CCA descriptions for the individual members of the expert panel are a little brief. So, I did a little digging and, in my searches, noticed what seems to be a pattern of relationships among some of these experts. In particular, take note of the Canadian Institute for Advanced Research (CIFAR) and the AI Advisory Council of the Government of Canada.

Individual panelists

Teresa Scassa (Ontario), whose SJD designation signifies a research doctorate in law, chairs this panel. Offhand, of the 10 or so panels I’ve reviewed, I can recall only one or two others being chaired by women. In addition to her profile page at the University of Ottawa, she hosts her own blog featuring posts such as “How Might Bill C-11 Affect the Outcome of a Clearview AI-type Complaint?” She writes clearly (I didn’t see any jargon) for an audience that is somewhat informed on the topic.

Along with Dilhac, Teresa Scassa is a member of the AI Advisory Council of the Government of Canada. More about that group when you read Dilhac’s description.

Julien Billot (Québec) has provided a profile on LinkedIn and you can augment your view of M. Billot with this profile from the CreativeDestructionLab (CDL),

Mr. Billot is a member of the faculty at HEC Montréal [graduate business school of the Université de Montréal] as an adjunct professor of management and the lead for the CreativeDestructionLab (CDL) and NextAi program in Montreal.

Julien Billot has been President and Chief Executive Officer of Yellow Pages Group Corporation (Y.TO) in Montreal, Quebec. Previously, he was Executive Vice President, Head of Media and Member of the Executive Committee of Solocal Group (formerly PagesJaunes Groupe), the publicly traded and incumbent local search business in France. Earlier experience includes serving as CEO of the digital and new business group of Lagardère Active, a multimedia branch of Lagardère Group and 13 years in senior management positions at France Telecom, notably as Chief Marketing Officer for Orange, the company’s mobile subsidiary.

Mr. Billot is a graduate of École Polytechnique (Paris) and from Telecom Paris Tech. He holds a postgraduate diploma (DEA) in Industrial Economics from the University of Paris-Dauphine.

Wendy Hui Kyong Chun (British Columbia) has a profile on the Simon Fraser University (SFU) website, which provided one of the more interesting (to me personally) biographies,

Wendy Hui Kyong Chun is the Canada 150 Research Chair in New Media at Simon Fraser University, and leads the Digital Democracies Institute which was launched in 2019. The Institute aims to integrate research in the humanities and data sciences to address questions of equality and social justice in order to combat the proliferation of online “echo chambers,” abusive language, discriminatory algorithms and mis/disinformation by fostering critical and creative user practices and alternative paradigms for connection. It has four distinct research streams all led by Dr. Chun: Beyond Verification which looks at authenticity and the spread of disinformation; From Hate to Agonism, focusing on fostering democratic exchange online; Desegregating Network Neighbourhoods, combatting homophily across platforms; and Discriminating Data: Neighbourhoods, Individuals and Proxies, investigating the centrality of race, gender, class and sexuality [emphasis mine] to big data and network analytics.

I’m glad to see someone who has focused on ” … the centrality of race, gender, class and sexuality to big data and network analytics.” Even more interesting to me was this from her CV (curriculum vitae),

Professor, Department of Modern Culture and Media, Brown University, July 2010–June 2018

• Affiliated Faculty, Multimedia & Electronic Music Experiments (MEME), Department of Music, 2017

• Affiliated Faculty, History of Art and Architecture, March 2012–

• Graduate Field Faculty, Theatre Arts and Performance Studies, Sept 2008–

…

[all emphases mine]

And these are some of her credentials,

Ph.D., English, Princeton University, 1999.

• Certificate, School of Criticism and Theory, Dartmouth College, Summer 1995.

M.A., English, Princeton University, 1994.

B.A.Sc., Systems Design Engineering and English, University of Waterloo, Canada, 1992.

• First-class honours and a Senate Commendation for Excellence for being the first student to graduate from the School of Engineering with a double major

It’s about time the CCA started integrating some kind of arts perspective into their projects. (Although, I can’t help wondering if this was by accident rather than by design.)

Marc Antoine Dilhac, an associate professor at l’Université de Montréal, like Billot, graduated from a French university, in his case, the Sorbonne. Here’s more from Dilhac’s profile on the Mila website,

Marc-Antoine Dilhac (Ph.D., Paris 1 Panthéon-Sorbonne) is a professor of ethics and political philosophy at the Université de Montréal and an associate member of Mila – Quebec Artificial Intelligence Institute. He currently holds a CIFAR [Canadian Institute for Advanced Research] Chair in AI ethics (2019-2024), and was previously Canada Research Chair in Public Ethics and Political Theory 2014-2019. He specialized in theories of democracy and social justice, as well as in questions of applied ethics. He published two books on the politics of toleration and inclusion (2013, 2014). His current research focuses on the ethical and social impacts of AI and issues of governance and institutional design, with a particular emphasis on how new technologies are changing public relations and political structures.

In 2017, he instigated the project of the Montreal Declaration for a Responsible Development of AI and chaired its scientific committee. In 2020, as director of Algora Lab, he led an international deliberation process as part of UNESCO’s consultation on its recommendation on the ethics of AI.

In 2019, he founded Algora Lab, an interdisciplinary laboratory advancing research on the ethics of AI and developing a deliberative approach to the governance of AI and digital technologies. He is co-director of Deliberation at the Observatory on the social impacts of AI and digital technologies (OBVIA), and contributes to the OECD Policy Observatory (OECD.AI) as a member of its expert network ONE.AI.

He sits on the AI Advisory Council of the Government of Canada and co-chairs its Working Group on Public Awareness.

Formerly known simply as Mila, the Mila – Quebec Artificial Intelligence Institute is a beneficiary of the Pan-Canadian Artificial Intelligence Strategy, introduced in the 2017 Canadian federal budget, which named CIFAR as the hub agency that would distribute funds for artificial intelligence research to (mainly) three agencies: Mila in Montréal, the Vector Institute in Toronto, and the Alberta Machine Intelligence Institute (AMII) in Edmonton.

Consequently, Dilhac’s involvement with CIFAR, one of the co-sponsors of this future CCA report, is not unexpected, but when you add his presence on the AI Advisory Council of the Government of Canada and his role as co-chair of its Working Group on Public Awareness, you get a sense of just how small the Canadian AI ethics and public awareness community is.

Add in CIFAR’s Open Dialogue: AI in Canada series (ongoing until April 30, 2021) which is being held in partnership with the AI Advisory Council of the Government of Canada (see my March 29, 2021 posting for more details about the dialogues) amongst other familiar parties and you see a web of relations so tightly interwoven that if you could produce masks from it you’d have superior COVID-19 protection to N95 masks.

These kinds of connections are understandable and I have more to say about them in my final comments.

B. Courtney Doagoo has a profile page at the University of Ottawa, which fills in a few information gaps,

As a Fellow, Dr. Doagoo develops her research on the social, economic and cultural implications of AI with a particular focus on the role of laws, norms and policies [emphasis mine]. She also notably advises Dr. Florian Martin-Bariteau, CLTS Director, in the development of a new research initiative on those topical issues, and Dr. Jason Millar in the development of the Canadian Robotics and Artificial Intelligence Ethical Design Lab (CRAiEDL).

Dr. Doagoo completed her Ph.D. in Law at the University of Ottawa in 2017. In her interdisciplinary research, she used empirical methods to learn about and describe the use of intellectual property law and norms in creative communities. Following her doctoral research, she joined the World Intellectual Property Organization’s Coordination Office in New York as a legal intern and contributed to developing the joint initiative on gender and innovation in collaboration with UNESCO and UN Women. She later joined the International Law Research Program at the Centre for International Governance Innovation as a Post-Doctoral Fellow, where she conducted research in technology and law focusing on intellectual property law, artificial intelligence and data governance.

Dr. Doagoo completed her LL.L. at the University of Ottawa, and LL.M. in Intellectual Property Law at the Benjamin N. Cardozo School of Law [a law school at Yeshiva University in New York City].  In between her academic pursuits, Dr. Doagoo has been involved with different technology start-ups, including the one she is currently leading aimed at facilitating access to legal services. She’s also an avid lover of the arts and designed a course on Arts and Cultural Heritage Law taught during her doctoral studies at the University of Ottawa, Faculty of Law.

It’s probably because I don’t know enough, but “the role of laws, norms and policies” seems bland to the point of being meaningless. The rest is more informative and brings it back to the arts with Wendy Hui Kyong Chun at SFU.

Doagoo’s LinkedIn profile offers an unexpected link to this expert panel’s chairperson, Teresa Scassa (beyond both being lawyers with related specialties and appointments at the University of Ottawa),

Soft-funded Research Bursary

Dr. Teresa Scassa

2014

I’m not suggesting any conspiracies; it’s simply that this is a very small community with much of it located in central and eastern Canada and possible links into the US. For example, Wendy Hui Kyong Chun, prior to her SFU appointment in December 2018, worked and studied in the eastern US for over 25 years after starting her academic career at the University of Waterloo (Ontario).

Abhishek Gupta provided me with a challenging search. His LinkedIn profile yielded some details (I’m not convinced the man sleeps). Note: I have made some formatting changes and removed the location (‘Montréal area’) from some descriptions.

Experience

Software Engineer II – Machine Learning
Microsoft

Jul 2018 – Present – 2 years 10 months

Machine Learning – Commercial Software Engineering team

Serves on the CSE Responsible AI Board

Founder and Principal Researcher
Montreal AI Ethics Institute

May 2018 – Present – 3 years

Institute creating tangible and practical research in the ethical, safe and inclusive development of AI. For more information, please visit https://montrealethics.ai

Visiting AI Ethics Researcher, Future of Work, International Visitor Leadership Program
U.S. Department of State

Aug 2019 – Present – 1 year 9 months

Selected to represent Canada on the future of work

Responsible AI Lead, Data Advisory Council
Northwest Commission on Colleges and Universities

Jun 2020 – Present – 11 months

Faculty Associate, Frankfurt Big Data Lab
Goethe University

Mar 2020 – Present – 1 year 2 months

Advisor for the Z-inspection project

Associate Member
LF AI Foundation

May 2020 – Present – 1 year

Author
MIT Technology Review

Sep 2020 – Present – 8 months

Founding Editorial Board Member, AI and Ethics Journal
Springer Nature

Jul 2020 – Present – 10 months

Education

McGill University, Bachelor of Science (BS), Computer Science

2012 – 2015

Exhausting, eh? He also has an eponymous website, and the Montreal AI Ethics Institute can be found here, where Gupta and his colleagues are “Democratizing AI ethics literacy.” My hat’s off to Gupta; getting onto a CCA expert panel is quite an achievement for someone without the usual academic and/or industry trappings.

Richard Isnor, based in Nova Scotia and associate vice president of research & graduate studies at St. Francis Xavier University (StFX), seems to have some connection to northern Canada (see the reference to Nunavut Research Institute below); he’s certainly well connected to various federal government agencies according to his profile page,

Prior to joining StFX, he was Manager of the Atlantic Regional Office for the Natural Sciences and Engineering Research Council of Canada (NSERC), based in Moncton, NB.  Previously, he was Director of Innovation Policy and Science at the International Development Research Centre in Ottawa and also worked for three years with the National Research Council of Canada [NRC] managing Biotechnology Research Initiatives and the NRC Genomics and Health Initiative.

Richard holds a D. Phil. in Science and Technology Policy Studies from the University of Sussex, UK; a Master’s in Environmental Studies from Dalhousie University [Nova Scotia]; and a B. Sc. (Hons) in Biochemistry from Mount Allison University [New Brunswick].  His primary interest is in science policy and the public administration of research; he has worked in science and technology policy or research administrative positions for Environment Canada, Natural Resources Canada, the Privy Council Office, as well as the Nunavut Research Institute. [emphasis mine]

I don’t know what Dr. Isnor’s work is like but I’m hopeful he (along with Spiteri) will be able to provide a less ‘big city’ perspective to the proceedings.

(For those unfamiliar with Canadian cities: Montreal [three expert panelists] is the second largest city in the country; Ottawa [two expert panelists], as the capital, has an outsize view of itself; and Vancouver [one expert panelist] is the third or fourth largest city in the country, for a total of six big-city representatives out of eight Canadian expert panelists.)

Ross D. King, professor of machine intelligence at Sweden’s Chalmers University of Technology, might be best known for Adam, also known as, Robot Scientist. Here’s more about King, from his Wikipedia entry (Note: Links have been removed),

King completed a Bachelor of Science degree in Microbiology at the University of Aberdeen in 1983 and went on to study for a Master of Science degree in Computer Science at the University of Newcastle in 1985. Following this, he completed a PhD at The Turing Institute [emphasis mine] at the University of Strathclyde in 1989[3] for work on developing machine learning methods for protein structure prediction.[7]

King’s research interests are in the automation of science, drug design, AI, machine learning and synthetic biology.[8][9] He is probably best known for the Robot Scientist[4][10][11][12][13][14][15][16][17] project which has created a robot that can:

hypothesize to explain observations

devise experiments to test these hypotheses

physically run the experiments using laboratory robotics

interpret the results from the experiments

repeat the cycle as required
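That cycle reads like pseudocode already. Here's a toy sketch of the hypothesize-experiment-interpret loop in Python (entirely my own illustration, not the Robot Scientist's actual software; in this toy the 'science' is guessing a hidden number, whereas the real Adam reasoned about yeast genetics with laboratory robotics):

```python
# Toy illustration of the Robot Scientist's closed loop. The hidden
# number stands in for the unknown truth; the real Adam probed yeast
# genetics with actual laboratory robotics.

SECRET = 42

def run_experiment(threshold):
    """Stand-in for the lab robotics: answers 'is the truth >= threshold?'"""
    return SECRET >= threshold

def robot_scientist_loop():
    hypotheses = list(range(100))                     # all candidate explanations
    while len(hypotheses) > 1:                        # repeat the cycle as required
        threshold = hypotheses[len(hypotheses) // 2]  # devise a discriminating experiment
        outcome = run_experiment(threshold)           # physically run the experiment
        # interpret the results: keep only hypotheses consistent with the outcome
        hypotheses = [h for h in hypotheses if (h >= threshold) == outcome]
    return hypotheses[0]

print(robot_scientist_loop())  # prints 42
```

The point of the sketch is the structure: each pass through the loop designs the experiment that best discriminates among the surviving hypotheses, which is what makes the process autonomous.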

The Robot Scientist Wikipedia entry has this to add,

… a laboratory robot created and developed by a group of scientists including Ross King, Kenneth Whelan, Ffion Jones, Philip Reiser, Christopher Bryant, Stephen Muggleton, Douglas Kell and Steve Oliver.[2][6][7][8][9][10]

… Adam became the first machine in history to have discovered new scientific knowledge independently of its human creators.[5][17][18]

Sabina Leonelli, professor of philosophy and history of science at the University of Exeter, is the only person for whom I found a Twitter feed (@SabinaLeonelli). Here's a bit more from her Wikipedia entry (Note: Links have been removed),

Originally from Italy, Leonelli moved to the UK for a BSc degree in History, Philosophy and Social Studies of Science at University College London and a MSc degree in History and Philosophy of Science at the London School of Economics. Her doctoral research was carried out in the Netherlands at the Vrije Universiteit Amsterdam with Henk W. de Regt and Hans Radder. Before joining the Exeter faculty, she was a research officer under Mary S. Morgan at the Department of Economic History of the London School of Economics.

Leonelli is the Co-Director of the Exeter Centre for the Study of the Life Sciences (Egenis)[3] and a Turing Fellow at the Alan Turing Institute [emphases mine] in London.[4] She is also Editor-in-Chief of the international journal History and Philosophy of the Life Sciences[5] and Associate Editor for the Harvard Data Science Review.[6] She serves as External Faculty for the Konrad Lorenz Institute for Evolution and Cognition Research.[7]

Notice that Ross King and Sabina Leonelli both have Turing connections, although not to the same institution: Leonelli is a Fellow of The Alan Turing Institute ("We believe data science and artificial intelligence will change the world") in London, while King earned his PhD at The Turing Institute, a distinct and now-defunct artificial intelligence laboratory in Glasgow associated with the University of Strathclyde (Scotland).

Do check out Leonelli’s profile at the University of Exeter as it’s comprehensive.

Raymond J. Spiteri, professor and director of the Centre for High Performance Computing, Department of Computer Science at the University of Saskatchewan, has a profile page at the university the likes of which I haven't seen in several years, perhaps due to its 2013 origins. His other university profile page can best be described as minimalist.

His Canadian Applied and Industrial Mathematics Society (CAIMS) biography page could be described as less charming (to me) than the 2013 profile, but it is easier to read,

Raymond Spiteri is a Professor in the Department of Computer Science at the University of Saskatchewan. He performed his graduate work as a member of the Institute for Applied Mathematics at the University of British Columbia. He was a post-doctoral fellow at McGill University and held faculty positions at Acadia University and Dalhousie University before joining USask in 2004. He serves on the Executive Committee of the WestGrid High-Performance Computing Consortium with Compute/Calcul Canada. He was a MITACS Project Leader from 2004-2012 and served in the role of Mitacs Regional Scientific Director for the Prairie Provinces between 2008 and 2011.

Spiteri’s areas of research are numerical analysis, scientific computing, and high-performance computing. His area of specialization is the analysis and implementation of efficient time-stepping methods for differential equations. He actively collaborates with scientists, engineers, and medical experts of all flavours. He also has a long record of industry collaboration with companies such as IBM and Boeing.

Spiteri has been a lifetime member of CAIMS/SCMAI since 2000. He helped co-organize the 2004 Annual Meeting at Dalhousie and served on the Cecil Graham Doctoral Dissertation Award Committee from 2005 to 2009, acting as chair from 2007. He has been an active participant in CAIMS, serving several times on the Scientific Committee for the Annual Meeting, as well as frequently attending and organizing mini-symposia. Spiteri believes it is important for applied mathematics to play a major role in the efforts to meet Canada's most pressing societal challenges, including the sustainability of our healthcare system, our natural resources, and the environment.
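For readers (like me) wondering what a 'time-stepping method' is: it advances the solution of a differential equation one small increment of time at a time. Here's a minimal sketch of the simplest such method, forward Euler, applied to exponential decay (purely illustrative; it bears no relation to Spiteri's actual research code):

```python
import math

# Minimal sketch of forward Euler, the simplest time-stepping method,
# solving dy/dt = -y with y(0) = 1. The exact solution is y(t) = exp(-t).
# Research codes use far more sophisticated, adaptive methods.

def forward_euler(f, y0, t_end, n_steps):
    dt = t_end / n_steps      # fixed step size
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += dt * f(t, y)     # advance the solution by one step
        t += dt
    return y

approx = forward_euler(lambda t, y: -y, y0=1.0, t_end=1.0, n_steps=100)
print(approx, math.exp(-1.0))  # ~0.3660 versus the exact 0.3679
```

The 'efficient' in Spiteri's specialty is about doing much better than this: taking the largest steps you can get away with while keeping the error under control.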

A last look at Spiteri’s 2013 profile gave me this (Note: Links have been removed),

Another biographical note: I obtained my B.Sc. degree in Applied Mathematics from the University of Western Ontario [also known as Western University] in 1990. My advisor was Dr. M.A.H. (Paddy) Nerenberg, after whom the Nerenberg Lecture Series is named. Here is an excerpt from the description, put here in his honour, as a model for the rest of us:

The Nerenberg Lecture Series is first and foremost about people and ideas. Knowledge is the true treasure of humanity, accrued and passed down through the generations. Some of it, particularly science and its language, mathematics, is closed in practice to many because of technical barriers that can only be overcome at a high price. These technical barriers form part of the remarkable fractures that have formed in our legacy of knowledge. We are so used to those fractures that they have become almost invisible to us, but they are a source of profound confusion about what is known.

The Nerenberg Lecture is named after the late Morton (Paddy) Nerenberg, a much-loved professor and researcher born on 17 March – hence his nickname. He was a Professor at Western for more than a quarter century, and a founding member of the Department of Applied Mathematics there. A successful researcher and accomplished teacher, he believed in the unity of knowledge, that scientific and mathematical ideas belong to everyone, and that they are of human importance. He regretted that they had become inaccessible to so many, and anticipated serious consequences from it. [emphases mine] The series honors his appreciation for the democracy of ideas. He died in 1993 at the age of 57.

So, we have the expert panel.

Thoughts about the panel and the report

As I've noted previously here and elsewhere, assembling any panel, whether it's for a single event or for a longer-term project such as producing a report, is no easy task. Looking at the panel, there's some arts representation, smaller urban centres are also represented, and some of the members have experience in more than one region of Canada. I was also much encouraged by Spiteri's acknowledgement of his advisor Morton (Paddy) Nerenberg's passionate commitment to the idea that "scientific and mathematical ideas belong to everyone."

Kudos to the Council of Canadian Academies (CCA) organizers.

That said, this looks like an exceptionally Eurocentric panel. Unusually, there's no representation from the US, unless you count Chun, who has spent the majority of her career in the US with only a little over two years at Simon Fraser University on Canada's West Coast.

There's a weakness to a strategy (none of the ten or so CCA reports I've reviewed here deviates from this pattern) that favours international participants from Europe and/or the US (also, sometimes, Australia/New Zealand). This leaves out giant chunks of the international community and brings us dangerously close to an echo chamber.

The same problem exists regionally and with various Canadian communities, which are acknowledged more in spirit than in actuality, e.g., the North, rural, indigenous, arts, etc.

Getting back to the 'big city' emphasis noted earlier: with two people from Ottawa and three from Montreal, half of the expert panel lives within a two-hour train ride of each other. (For those who don't know, that's close by Canadian standards. For comparison, a train ride from Vancouver to Seattle [US] is about four hours, a short trip when compared to a 24-hour train trip to the closest large Canadian cities.)

I appreciate that it's not a simple problem but my concern is that it's never acknowledged by the CCA. Perhaps they could include a section in the report acknowledging the issues and how the expert panel attempted to address them; in other words, transparency. Coincidentally, transparency and the closely related issue of trust have both been identified as big issues with artificial intelligence.

As for solutions, these reports get sent to external reviewers and, before a report is finalized, outside experts are sometimes brought in as the panel readies itself. Those are two opportunities afforded by the CCA's current processes.

Anyway, good luck with the report and I look forward to seeing it.

DEBBY FRIDAY’s LINK SICK, an audio play+, opens March 29, 2021 (online)

[downloaded from https://debbyfriday.com/link-sick]

This is an artistic work, part of the DEBBY FRIDAY enterprise, and an MFA (Master of Fine Arts) project. Here’s the description from the Simon Fraser University (SFU) Link Sick event page,

LINK SICK

DEBBY FRIDAY’S MFA Project
Launching Monday, March 29, 2021 | debbyfriday.com/link-sick

Set against the backdrop of an ambiguous dystopia and eternal rave, LINK SICK is a tale about the threads that bind us together.  

LINK SICK is DEBBY FRIDAY'S graduate thesis project – an audio-play written, directed and scored by the artist herself. The project is a science-fiction exploration of the connective tissue of human experience as well as an experiment in sound art; blurring the lines between theatre, radio, music, fiction, essay, and internet art. Over 42 minutes, listeners are invited to gather round, close their eyes, and open their ears; submerging straight into a strange future peppered with blink-streams, automated protests, disembodied DJs, dancefloor orgies, and only the trendiest S/S 221 G-E two-piece club skins.

Starring 

DEBBY FRIDAY as Izzi/Narrator
Chino Amobi as Philo
Sam Rolfes as Dj GODLESS
Hanna Sam as ABC Inc. Announcer
Storm Greenwood as Diana Deviance
Alex Zhang Hungtai as Weaver
Allie Stephen as Numee
Soukayna as Katz
AI Voice Generated Protesters via Replica Studios

Presented in partial fulfillment of the requirements of the Degree of Master of Fine Arts in the School for the Contemporary Arts at Simon Fraser University.

No time is listed but I'm assuming FRIDAY is operating on PDT, so you might want to take that into account when checking.

FRIDAY seems to favour full caps for her name, here and everywhere on her eponymous website (from her ABOUT page),

DEBBY FRIDAY is an experimentalist.

Born in Nigeria, raised in Montreal, and now based in Vancouver, DEBBY FRIDAY’s work spans the spectrum of the audio-visual, resisting categorizations of genre and artistic discipline. She is at once sound theorist and musician, performer and poet, filmmaker and PUNK GOD. …

Should you wish to support the artist financially, she offers merchandise.

Getting back to the play, I look forward to the auditory experience. Given how much we are expected to watch and the dominance of images, creating a piece that requires listening is an interesting choice.

Girl Trouble—UNESCO’s and the World Economic Forum’s Breaking Through Bias in AI panel on International Women’s Day March 8, 2021

What a Monday morning! United Nations Educational, Scientific and Cultural Organization (UNESCO; French: Organisation des Nations unies pour l'éducation, la science et la culture) and the World Economic Forum (WEF) hosted a live webcast (which started at 6 am PST or 1500 CET [3 pm in Paris, France]). The session is available online for viewing both here on UNESCO's Girl Trouble webpage and here on YouTube. It's about 2.5 hours long with two separate discussions and a question period after each discussion. You will have a two-minute wait before seeing any speakers or panelists.

Here’s why you might want to check this out (from the Girl Trouble: Breaking Through The Bias in AI page on the UNESCO website),

UNESCO and the World Economic Forum present Girl Trouble: Breaking Through The Bias in AI on International Women's Day, 8th March, 3:00 pm – 5:30 pm (CET). This timely round-table brings together a range of leading female voices in tech to confront the deep-rooted gender imbalances skewing the development of artificial intelligence. Today critics charge that AI feeds on biased data-sets, amplifying the existing anti-female biases of our societies, and that AI is perpetuating harmful stereotypes of women as submissive and subservient. Is it any wonder when only 22% of AI professionals globally are women?

Our panelists are female change-makers in AI. From C-suite professionals taking decisions which affect us all, to women innovating new AI tools and policies to help vulnerable groups, to those courageously exposing injustice and algorithmic biases, we welcome:

Gabriela Ramos, Assistant Director-General of Social and Human Sciences, UNESCO, leading the development of UNESCO’s Recommendation on the Ethics of AI, the first global standard-setting instrument in the field.
Kay Firth-Butterfield, Keynote speaker. Kay was the world’s first chief AI Ethics Officer. As Head of AI & Machine Learning, and a Member of the Executive Committee of the World Economic Forum, Kay develops new alliances to promote awareness of gender bias in AI;
Ashwini Asokan, CEO of Chennai-based AI company, Mad Street Den. She explores how Artificial Intelligence can be applied meaningfully and made accessible to billions across the globe;
Adriana Bora, a researcher using machine learning to boost compliance with the UK and Australian Modern Slavery Acts, and to combat modern slavery, including the trafficking of women;
Anne Bioulac, a member of the Women in Africa Initiative, developing AI-enabled online learning to empower African women to use AI in digital entrepreneurship;
Meredith Broussard, a software developer and associate professor of data journalism at New York University, whose research focuses on AI in investigative reporting, with a particular interest in using data analysis for social good;
Latifa Mohammed Al-AbdulKarim, named by Forbes magazine as one of 100 Brilliant Women in AI Ethics, and as one of the women defining AI in the 21st century;
Wanda Munoz, of the Latin American Human Security Network. One of the Nobel Women's Initiative's 2020 peacebuilders, she raises awareness around gender-based violence and autonomous weapons;
Nanjira Sambuli, a Member of the UN Secretary General’s High-Level Panel for Digital Cooperation and Advisor for the A+ Alliance for Inclusive Algorithms;
Jutta Williams, Product Manager at Twitter, analyzing how Twitter can improve its models to reduce bias.

There’s an urgent need for more women to participate in and lead the design, development, and deployment of AI systems. Evidence shows that by 2022, 85% of AI projects will deliver erroneous outcomes due to bias.

Recruiters searching for female AI specialists online just cannot find them. Companies hiring experts for AI and data science jobs estimate fewer than 1 per cent of the applications they receive come from women. Women and girls are 4 times less likely to know how to programme computers, and 13 times less likely to file for a technology patent. They are also less likely to occupy leadership positions in tech companies.

Building on UNESCO’s cutting edge research in this field, and flagship 2019 publication “I’d Blush if I Could”, and policy guidance on gender equality in the 2020 UNESCO Draft Recommendation on the Ethics of Artificial Intelligence, the panel will look at:

1. The 4th industrial revolution is on our doorstep, and gender equality risks being set back decades. What more can we do to attract more women to design jobs in AI, and to support them to take their seats on the boards of tech companies?

2. How can AI help us advance women and girls’ rights in society? And how can we solve the problem of algorithmic gender bias in AI systems?

Women's leadership in the AI sector at all levels, from big tech to the start-up AI economy in developing countries, will be placed under the microscope.

Confession: I set the timer correctly but then forgot to set the alarm, so I only watched the last 1.5 hours (I plan to go back and catch the first hour later). Here's a little of what transpired.

Moderator

Kudos to the moderator, Natashya Gutierrez, for her excellent performance; it can’t have been easy to keep track of the panelists and questions for a period of 2.5 hours,

Natashya Gutierrez, Editor-in-Chief APAC, VICE World News

Natashya is an award-winning multimedia journalist and current Editor in Chief of VICE World News in APAC [Asia-Pacific Countries]. She oversees editorial teams across Australia, Indonesia, India, Hong Kong, Thailand, the Philippines, Singapore, Japan and Korea. Natashya’s reporting specialises on women’s rights. At VICE, she hosts Unequal, a series focused on gender inequality in Asia. She is the recipient of several journalism awards including the Society of Publishers in Asia for reporting on women’s issues, and the Asia Journalism Fellowship. Before VICE, she was part of the founding team of Rappler, an online news network based in the Philippines. She has been selected as one of Asia’s emerging young leaders and named a Development Fellow by the Asia Foundation. Natashya is a graduate of Yale University.

First panel discussion

For anyone who's going to watch the session, don't forget it takes about two minutes before there's sound. The first panel focused on 'the female training and recruitment crisis in AI.'

  • The right people

I have a suspicion that Ashwini Asokan's comment about getting the 'right people' to create the algorithms and make decisions about AI was not meant the way it might sound. I will have to listen again but, at a guess, I think she was suggesting that a bunch of 25- to 35-year-old developers (mostly male and working in monoculture environments) is not going to be cognizant of how their mathematical decisions will impact real-world lives.

So, getting the ‘right people’ means more inclusive hiring.

  • Is AI always the best solution?

In all the talk about AI, it's assumed that this technology is the best solution to all problems. One of the panelists (Nanjira Sambuli) suggested an analogue solution (e.g., a book) might sometimes be the better choice.

There are some things that people are better at than AI (can’t remember which panelist said this). That comment hints at something which seems heretical. It challenges the notion that technology is always better than a person.

I once had someone at a bank explain to me that computers were very smart (by implication, smarter than me); that was 30 years ago and the teller was talking about a database.

Adriana Bora (I think) suggested that lived experience should be considered when putting together consultative groups and developer groups.

This theme of AI not being the best solution for all problems came up again in the second panel discussion.

Second panel discussion

The second panel was focused on “innovative AI-based solutions to address bias against women.”

  • AI is math and it’s hard

It’s surprisingly easy to forget that AI is math. Meredith Broussard pointed out that most of us (around the world) have a very Hollywood idea about what AI is.

Broussard noted that AI has its limits and there are times when it’s not the right choice.

She made an interesting point in her comment about AI being hard. I don't think she meant to echo the old cliché 'math is hard, so it's not for girls'. The comment seemed to speak to the breadth and depth of the AI sector: alongside the challenging mathematics, we need to take into account so much more than was imagined during the Industrial Revolution, when ecological consequences were unimagined and inequities were often taken as god-given.

  • Inequities and language

Natashya Gutierrez, the moderator, noted that AI doesn’t create bias, it magnifies it.

One of the panelists, Jutta Williams (Twitter), noted later that algorithms are designed to favour certain types of language, e.g., information presented as factual rather than emotive. That's how you get more attention on social media platforms. In essence, the bias in the algorithms was not towards males but towards the way they tend to communicate.
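As a toy illustration of how that kind of favouritism can arise (my own invention; I have no knowledge of Twitter's actual ranking code): if 'assertive' language features happen to correlate with engagement in the training data, a model learns to boost them.

```python
# Hypothetical, hand-weighted stand-in for a learned engagement ranker.
# If assertive features correlate with clicks in the training data,
# the learned weights end up favouring that register of language.

FEATURE_WEIGHTS = {
    "has_statistic": 2.0,    # "90% of..." reads as factual
    "assertive_verb": 1.5,   # "proves", "shows"
    "hedging": -1.0,         # "I feel", "perhaps"
}

def engagement_score(features):
    return sum(FEATURE_WEIGHTS.get(f, 0.0) for f in features)

print(engagement_score({"has_statistic", "assertive_verb"}))  # 3.5
print(engagement_score({"hedging"}))                          # -1.0
```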

  • Laziness

Describing engineers as ‘lazy’, Meredith Broussard added this about the mindset, ‘write once, run anywhere’.

A colleague, some years ago, drew my attention to the problem. She was unsuccessfully trying to get the developers to fix a problem in the code. They simply couldn’t be bothered. It wasn’t an interesting problem and there was no reward for fixing it.

I’m having a problem now where I suspect engineers/developers don’t want to tweak or correct code in WordPress. It’s the software I use to create my blog postings and I use tags to make those postings easier to find.

Sometime in December 2018 I updated my blog software to the latest version. Many problems ensued but there is one which persists to this day: I can't tag any new words with apostrophes in them (very common in French). The system refuses to save them.

Previous versions of WordPress were quite capable of saving words with apostrophes. Those words are still in my ‘tag database’.
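I can only speculate about the cause, but here's a sketch of the kind of over-strict input validation that produces exactly this behaviour (entirely hypothetical; I have no idea what WordPress's actual tag-handling code does):

```python
import re

# Hypothetical illustration of an over-strict tag validator. This is
# NOT WordPress code; it only shows the general failure mode: the
# allowed-character pattern omits the apostrophe, so French tags like
# "l'intelligence" are silently rejected.

ALLOWED = re.compile(r"^[A-Za-z0-9 \-]+$")

def save_tag(tag):
    if not ALLOWED.match(tag):
        return None                   # tag silently dropped
    return tag

print(save_tag("nanotechnology"))     # saved
print(save_tag("l'intelligence"))     # None: the apostrophe fails the pattern
```

If the cause is anything like this, adding the apostrophe to the allowed set would be a one-character fix, which is rather the point of the 'laziness' complaint above.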

  • Older generation has less tech savvy

Adriana Bora suggested that the older generation should also be considered in discussions about AI and inclusivity. I'm glad she mentioned it.

Unfortunately, she seemed to be under the impression that seniors don’t know much about technology.

Yes and no. Who do you think built and developed the technologies you are currently using? Probably your parents and grandparents. Networks were first developed in the early to mid-1960s. The Internet is approximately 40 years old. (You can get the details in the History of the Internet entry on Wikipedia.)

Yes, I’ve made that mistake about seniors/elders too.

It's possible that a person over ... what age is that? Over 55? Over 60? Over 65? Over 75? And so on ... Anyway, that person may not have had much experience with the digital world, or their experience may be dated, but the assumption is problematic.

As an antidote, here’s one of my favourite blogs, Grandma Got STEM. It’s mostly written by people reminiscing about their STEM mothers and grandmothers.

  • Bits and bobs

There seemed to be general agreement that there needs to be more transparency about the development of AI and what happens in the ‘AI black box’.

Gabriela Ramos, keynote speaker, commented that transparency needs to be paired up with choice otherwise it won’t do much good.

After recounting a distressing story about how activists have had their personal information revealed on various networks, Wanda Munoz noted that AI can also be used for good.

The concerns are not theoretical and my final comments

Munoz, of course, brought a real-life example of bad things happening but I'd like to reinforce it with one more example. The British Broadcasting Corporation (BBC), in a January 13, 2021 news article by Leo Kelion, broke the news that Huawei, a Chinese technology company, had technology that could identify ethnic groups (Note: Links have been removed),

A Huawei patent has been brought to light for a system that identifies people who appear to be of Uighur origin among images of pedestrians.

The filing is one of several of its kind involving leading Chinese technology companies, discovered by a US research company and shared with BBC News.

Huawei had previously said none of its technologies was designed to identify ethnic groups.

It now plans to alter the patent.

The company indicated this would involve asking the China National Intellectual Property Administration (CNIPA) – the country’s patent authority – for permission to delete the reference to Uighurs in the Chinese-language document.

Uighur people belong to a mostly Muslim ethnic group that lives mainly in Xinjiang province, in north-western China.

Government authorities are accused of using high-tech surveillance against them and detaining many in forced-labour camps, where children are sometimes separated from their parents.

Beijing says the camps offer voluntary education and training.

Huawei's patent was originally filed in July 2018, in conjunction with the Chinese Academy of Sciences.

It describes ways to use deep-learning artificial-intelligence techniques to identify various features of pedestrians photographed or filmed in the street.

But the document also lists attributes by which a person might be targeted, which it says can include “race (Han [China’s biggest ethnic group], Uighur)”.

More than one company has been caught out; do read the January 13, 2021 news article in its entirety.

I did not do justice to the depth and breadth of the discussion. (I noticed I missed a few panelists and it’s entirely my fault; I should have woken up sooner. I apologize for the omissions.)

If you have the time and the inclination, do go to the Girl Trouble: Breaking Through The Bias in AI page on the UNESCO website where, in addition to the panel video, you can find a number of related reports.

Happy International Women’s Day 2021.