
Artificial emotional intelligence detection

Sabotage was not my first thought on reading about artificial emotional intelligence, so this February 11, 2021 Incheon National University press release (also on EurekAlert) is educational in an unexpected way (Note: A link has been removed),

With the advent of 5G communication technology and its integration with AI, we are looking at the dawn of a new era in which people, machines, objects, and devices are connected like never before. This smart era will be characterized by smart facilities and services such as self-driving cars, smart UAVs [unmanned aerial vehicles], and intelligent healthcare. This will be the outcome of a technological revolution.

But the flip side of such a technological revolution is that AI [artificial intelligence] itself can be used to attack or threaten the security of 5G-enabled systems which, in turn, can greatly compromise their reliability. It is, therefore, imperative to investigate such potential security threats and explore countermeasures before a smart world is realized.

In a recent study published in IEEE Network, a team of researchers led by Prof. Hyunbum Kim from Incheon National University, Korea, addresses such issues in relation to an AI-based, 5G-integrated virtual emotion recognition system called 5G-I-VEmoSYS, which detects human emotions using wireless signals and body movement. “Emotions are a critical characteristic of human beings that separates humans from machines and defines daily human activity. However, some emotions can also disrupt the normal functioning of a society and put people’s lives in danger, such as those of an unstable driver. Emotion detection technology thus has great potential for recognizing any disruptive emotion and, in tandem with 5G and beyond-5G communication, warning others of potential dangers,” explains Prof. Kim. “For instance, in the case of the unstable driver, the AI-enabled driver system of the car can inform the nearest network towers, from where nearby pedestrians can be informed via their personal smart devices.”

The virtual emotion system developed by Prof. Kim’s team, 5G-I-VEmoSYS, can recognize at least five kinds of emotion (joy, pleasure, a neutral state, sadness, and anger) and is composed of three subsystems dealing with the detection, flow, and mapping of human emotions. The subsystem concerned with detection is called the Artificial Intelligence-Virtual Emotion Barrier, or AI-VEmoBAR, which relies on the reflection of wireless signals from a human subject to detect emotions. This emotion information is then handled by the subsystem concerned with flow, called the Artificial Intelligence-Virtual Emotion Flow, or AI-VEmoFLOW, which enables the flow of specific emotion information at a specific time to a specific area. Finally, the Artificial Intelligence-Virtual Emotion Map, or AI-VEmoMAP, utilizes a large amount of this virtual emotion data to create a virtual emotion map that can be used for threat detection and crime prevention.

A notable advantage of 5G-I-VEmoSYS is that it allows emotion detection without revealing the subjects’ faces or other private information, thereby protecting the privacy of citizens in public areas. Moreover, in private areas, it gives users the choice to remain anonymous while providing information to the system. Furthermore, when a serious emotion, such as anger or fear, is detected in a public area, the information is rapidly conveyed to the nearest police department or other relevant entities, which can then take steps to prevent any potential crime or terrorism threats.

However, the system suffers from serious security issues such as the possibility of illegal signal tampering, abuse of anonymity, and hacking-related cyber-security threats. Further, the danger of sending false alarms to authorities remains.

While these concerns do put the system’s reliability at stake, Prof. Kim’s team is confident that they can be countered with further research. “This is only an initial study. In the future, we need to achieve rigorous information integrity and accordingly devise robust AI-based algorithms that can detect compromised or malfunctioning devices and offer protection against potential system hacks,” explains Prof. Kim. “Only then will it enable people to have safer and more convenient lives in the advanced smart cities of the future.”
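The press release describes the three subsystems (AI-VEmoBAR, AI-VEmoFLOW, AI-VEmoMAP) only at an architectural level, and the paper publishes no code. Purely as an illustration of how a detection, flow, and map pipeline of this kind might be organized, here is a minimal Python sketch; all of the names, data fields, and stub logic are my own assumptions, not the researchers’ implementation.

from dataclasses import dataclass
from enum import Enum
from typing import Dict, List, Tuple

class Emotion(Enum):
    # The five emotion categories named in the press release
    JOY = "joy"
    PLEASURE = "pleasure"
    NEUTRAL = "neutral"
    SADNESS = "sadness"
    ANGER = "anger"

@dataclass
class EmotionReading:
    # One detection event produced by a virtual emotion barrier
    emotion: Emotion
    location: Tuple[float, float]  # (latitude, longitude) of the sensing point
    timestamp: float               # Unix time of the detection

def vemobar_detect(signal_features: List[float],
                   location: Tuple[float, float],
                   timestamp: float) -> EmotionReading:
    # Stand-in for AI-VEmoBAR: classify features extracted from reflected
    # wireless signals into one of the five emotions. A real system would use
    # a trained model; this stub always reports a neutral state.
    return EmotionReading(Emotion.NEUTRAL, location, timestamp)

def vemoflow_route(reading: EmotionReading, target_area: str) -> Dict[str, object]:
    # Stand-in for AI-VEmoFLOW: package a reading for delivery to a specific
    # area at a specific time (e.g., the cell serving nearby pedestrians).
    return {"target_area": target_area, "reading": reading}

def vemomap_aggregate(readings: List[EmotionReading]) -> Dict[Emotion, int]:
    # Stand-in for AI-VEmoMAP: summarize many readings into per-emotion counts,
    # the sort of aggregate a virtual emotion map could scan for threats.
    counts = {emotion: 0 for emotion in Emotion}
    for reading in readings:
        counts[reading.emotion] += 1
    return counts

A threshold on aggregates like these is roughly where the alerting the release describes, notifying nearby smart devices or the nearest police department, would presumably hook in.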

Intriguing, yes? The researchers have used this image to illustrate their work,

Caption: With 5G communication technology and new AI-based systems such as emotion recognition systems, smart cities are all set to become a reality; but these systems need to be honed and security issues need to be ironed out before the smart reality can be realized. Credit: macrovector on Freepik

Before getting to the link and citation for the paper, I have a March 8, 2019 article by Meredith Somers for MIT (Massachusetts Institute of Technology) Sloan School of Management’s Ideas Made to Matter publication (Note: Links have been removed),

What did you think of the last commercial you watched? Was it funny? Confusing? Would you buy the product? You might not remember or know for certain how you felt, but increasingly, machines do. New artificial intelligence technologies are learning and recognizing human emotions, and using that knowledge to improve everything from marketing campaigns to health care.

These technologies are referred to as “emotion AI.” Emotion AI is a subset of artificial intelligence (the broad term for machines replicating the way humans think) that measures, understands, simulates, and reacts to human emotions. It’s also known as affective computing, or artificial emotional intelligence. The field dates back to at least 1995, when MIT Media Lab professor Rosalind Picard published “Affective Computing.”

Javier Hernandez, a research scientist with the Affective Computing Group at the MIT Media Lab, explains emotion AI as a tool that allows for a much more natural interaction between humans and machines. “Think of the way you interact with other human beings; you look at their faces, you look at their body, and you change your interaction accordingly,” Hernandez said. “How can [a machine] effectively communicate information if it doesn’t know your emotional state, if it doesn’t know how you’re feeling, it doesn’t know how you’re going to respond to specific content?”

While humans might currently have the upper hand on reading emotions, machines are gaining ground using their own strengths. Machines are very good at analyzing large amounts of data, explained MIT Sloan professor Erik Brynjolfsson. They can listen to voice inflections and start to recognize when those inflections correlate with stress or anger. Machines can analyze images and pick up subtleties in micro-expressions on humans’ faces that might happen too fast for a person to recognize.

“We have a lot of neurons in our brain for social interactions. We’re born with some of those skills, and then we learn more. It makes sense to use technology to connect to our social brains, not just our analytical brains,” Brynjolfsson said. “Just like we can understand speech and machines can communicate in speech, we also understand and communicate with humor and other kinds of emotions. And machines that can speak that language — the language of emotions — are going to have better, more effective interactions with us. It’s great that we’ve made some progress; it’s just something that wasn’t an option 20 or 30 years ago, and now it’s on the table.”
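Brynjolfsson’s point about voice inflections can be made a little more concrete. The short sketch below is my own toy illustration, not Cogito’s method or anything from the article: it extracts two crude prosodic cues (loudness and its variability, plus a rough pitch proxy) from a mono audio signal with NumPy and applies an arbitrary threshold. Real emotion-AI systems train classifiers on large, labelled speech datasets rather than using hand-picked rules like this.

import numpy as np

def frame_signal(audio: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    # Split a mono signal into overlapping frames (25 ms frames, 10 ms hop at 16 kHz).
    n_frames = 1 + max(0, (len(audio) - frame_len) // hop)
    return np.stack([audio[i * hop:i * hop + frame_len] for i in range(n_frames)])

def prosodic_features(audio: np.ndarray) -> dict:
    # Crude stand-ins for "voice inflection": per-frame loudness and zero-crossing rate.
    frames = frame_signal(audio)
    rms = np.sqrt((frames ** 2).mean(axis=1))                    # loudness per frame
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)   # rough pitch proxy
    return {"rms_mean": float(rms.mean()),
            "rms_var": float(rms.var()),
            "zcr_var": float(zcr.var())}

def flag_possible_stress(features: dict, rms_var_threshold: float = 0.01) -> bool:
    # Toy rule: large swings in loudness get flagged for a human to review.
    # The threshold is arbitrary; real systems learn such decisions from data.
    return features["rms_var"] > rms_var_threshold

The point of the sketch is only that the raw signal features machines listen for are cheap to compute at scale, which is exactly the strength Brynjolfsson describes.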

Somers describes current uses of emotion AI (I’ve selected two from her list; Note: A link has been removed),

Call centers — Technology from Cogito, a company co-founded in 2007 by MIT Sloan alumni, helps call center agents identify the moods of customers on the phone and adjust how they handle the conversation in real time. Cogito’s voice-analytics software draws on years of human behavior research to identify voice patterns.

Mental health — In December 2018, Cogito launched a spinoff called CompanionMx, and an accompanying mental health monitoring app. The Companion app listens to someone speaking into their phone, and analyzes the speaker’s voice and phone use for signs of anxiety and mood changes.

The app improves users’ self-awareness and can increase coping skills, including steps for stress reduction. The company has worked with the Department of Veterans Affairs, Massachusetts General Hospital, and Brigham & Women’s Hospital in Boston.

Somers’ March 8, 2019 article was an eye-opener.

Getting back to the Korean research, here’s a link to and a citation for the paper,

Research Challenges and Security Threats to AI-Driven 5G Virtual Emotion Applications Using Autonomous Vehicles, Drones, and Smart Devices by Hyunbum Kim, Jalel Ben-Othman, Lynda Mokdad, Junggab Son, and Chunguo Li. IEEE Network, Volume 34, Issue 6 (November/December 2020), pp. 288–294. DOI: 10.1109/MNET.011.2000245 First published online: 12 October 2020

This paper is behind a paywall.