
Artificial emotional intelligence detection

Sabotage was not my first thought on reading about artificial emotional intelligence, so this February 11, 2021 Incheon National University press release (also on EurekAlert) proved educational in an unexpected way (Note: A link has been removed),

With the advent of 5G communication technology and its integration with AI, we are looking at the dawn of a new era in which people, machines, objects, and devices are connected like never before. This smart era will be characterized by smart facilities and services such as self-driving cars, smart UAVs [unmanned aerial vehicles], and intelligent healthcare. This will be the aftermath of a technological revolution.

But the flip side of such a technological revolution is that AI [artificial intelligence] itself can be used to attack or threaten the security of 5G-enabled systems which, in turn, can greatly compromise their reliability. It is, therefore, imperative to investigate such potential security threats and explore countermeasures before a smart world is realized.

In a recent study published in IEEE Network, a team of researchers led by Prof. Hyunbum Kim from Incheon National University, Korea, addresses such issues in relation to an AI-based, 5G-integrated virtual emotion recognition system called 5G-I-VEmoSYS, which detects human emotions using wireless signals and body movement. “Emotions are a critical characteristic of human beings and separate humans from machines, defining daily human activity. However, some emotions can also disrupt the normal functioning of a society and put people’s lives in danger, such as those of an unstable driver. Emotion detection technology thus has great potential for recognizing any disruptive emotion and, in tandem with 5G and beyond-5G communication, warning others of potential dangers,” explains Prof. Kim. “For instance, in the case of the unstable driver, the AI-enabled driver system of the car can inform the nearest network towers, from where nearby pedestrians can be informed via their personal smart devices.”

The virtual emotion system developed by Prof. Kim’s team, 5G-I-VEmoSYS, can recognize at least five kinds of emotion (joy, pleasure, a neutral state, sadness, and anger) and is composed of three subsystems dealing with the detection, flow, and mapping of human emotions. The system concerned with detection is called Artificial Intelligence-Virtual Emotion Barrier, or AI-VEmoBAR, which relies on the reflection of wireless signals from a human subject to detect emotions. This emotion information is then handled by the system concerned with flow, called Artificial Intelligence-Virtual Emotion Flow, or AI-VEmoFLOW, which enables the flow of specific emotion information at a specific time to a specific area. Finally, the Artificial Intelligence-Virtual Emotion Map, or AI-VEmoMAP, utilizes a large amount of this virtual emotion data to create a virtual emotion map that can be utilized for threat detection and crime prevention.
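In case code makes the data flow easier to follow, here is a minimal sketch of how the three subsystems might hand information to one another. The class and function names below are my own illustrative inventions based on the description above; the paper does not publish an implementation.

```python
# Hypothetical sketch of the 5G-I-VEmoSYS data flow described above.
# Names, types, and stub logic are illustrative only, not the authors' code.
from dataclasses import dataclass
from enum import Enum


class Emotion(Enum):
    JOY = "joy"
    PLEASURE = "pleasure"
    NEUTRAL = "neutral"
    SADNESS = "sadness"
    ANGER = "anger"


@dataclass
class VirtualEmotion:
    emotion: Emotion
    location: tuple[float, float]  # (latitude, longitude) of the detection
    timestamp: float


def ai_vemobar(wireless_reflection: list[float]) -> Emotion:
    """Detection (AI-VEmoBAR): infer an emotion from reflected wireless signals.
    A real system would run a trained classifier here; this stub returns NEUTRAL."""
    return Emotion.NEUTRAL


def ai_vemoflow(reading: VirtualEmotion, target_area: str) -> None:
    """Flow (AI-VEmoFLOW): forward a specific emotion reading to a specific area."""
    print(f"Routing '{reading.emotion.value}' detected at {reading.location} to {target_area}")


def ai_vemomap(readings: list[VirtualEmotion]) -> dict[Emotion, int]:
    """Mapping (AI-VEmoMAP): aggregate many readings into a simple emotion map."""
    counts: dict[Emotion, int] = {}
    for r in readings:
        counts[r.emotion] = counts.get(r.emotion, 0) + 1
    return counts


# Toy end-to-end run: detect, route, and map a single (hypothetical) reading.
reading = VirtualEmotion(ai_vemobar([0.12, 0.08, 0.15]), (37.45, 126.65), timestamp=0.0)
ai_vemoflow(reading, "nearest relevant authority")
print(ai_vemomap([reading]))
```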

A notable advantage of 5G-I-VEmoSYS is that it allows emotion detection without revealing the face or other private parts of the subjects, thereby protecting the privacy of citizens in public areas. Moreover, in private areas, it gives the user the choice to remain anonymous while providing information to the system. Furthermore, when a serious emotion, such as anger or fear, is detected in a public area, the information is rapidly conveyed to the nearest police department or relevant entities who can then take steps to prevent any potential crime or terrorism threats.

However, the system suffers from serious security issues such as the possibility of illegal signal tampering, abuse of anonymity, and hacking-related cyber-security threats. Further, the danger of sending false alarms to authorities remains.

While these concerns do put the system’s reliability at stake, Prof. Kim’s team are confident that they can be countered with further research. “This is only an initial study. In the future, we need to achieve rigorous information integrity and accordingly devise robust AI-based algorithms that can detect compromised or malfunctioning devices and offer protection against potential system hacks,” explains Prof. Kim, “Only then will it enable people to have safer and more convenient lives in the advanced smart cities of the future.”

Intriguing, yes? The researchers have used this image to illustrate their work,

Caption: With 5G communication technology and new AI-based systems such as emotion recognition systems, smart cities are all set to become a reality; but these systems need to be honed and security issues need to be ironed out before the smart reality can be realized. Credit: macrovector on Freepik

Before getting to the link and citation for the paper, I have a March 8, 2019 article by Meredith Somers for MIT (Massachusetts Institute of Technology) Sloan School of Management’s Ideas Made to Matter publication (Note: Links have been removed),

What did you think of the last commercial you watched? Was it funny? Confusing? Would you buy the product? You might not remember or know for certain how you felt, but increasingly, machines do. New artificial intelligence technologies are learning and recognizing human emotions, and using that knowledge to improve everything from marketing campaigns to health care.

These technologies are referred to as “emotion AI.” Emotion AI is a subset of artificial intelligence (the broad term for machines replicating the way humans think) that measures, understands, simulates, and reacts to human emotions. It’s also known as affective computing, or artificial emotional intelligence. The field dates back to at least 1995, when MIT Media Lab professor Rosalind Picard published “Affective Computing.”

Javier Hernandez, a research scientist with the Affective Computing Group at the MIT Media Lab, explains emotion AI as a tool that allows for a much more natural interaction between humans and machines. “Think of the way you interact with other human beings; you look at their faces, you look at their body, and you change your interaction accordingly,” Hernandez said. “How can [a machine] effectively communicate information if it doesn’t know your emotional state, if it doesn’t know how you’re feeling, it doesn’t know how you’re going to respond to specific content?”

While humans might currently have the upper hand in reading emotions, machines are gaining ground using their own strengths. Machines are very good at analyzing large amounts of data, explained MIT Sloan professor Erik Brynjolfsson. They can listen to voice inflections and start to recognize when those inflections correlate with stress or anger. Machines can analyze images and pick up subtleties in micro-expressions on humans’ faces that might happen too fast for a person to recognize.

“We have a lot of neurons in our brain for social interactions. We’re born with some of those skills, and then we learn more. It makes sense to use technology to connect to our social brains, not just our analytical brains,” Brynjolfsson said. “Just like we can understand speech and machines can communicate in speech, we also understand and communicate with humor and other kinds of emotions. And machines that can speak that language — the language of emotions — are going to have better, more effective interactions with us. It’s great that we’ve made some progress; it’s just something that wasn’t an option 20 or 30 years ago, and now it’s on the table.”

Somers describes current uses of emotion AI (I’ve selected two from her list; Note: A link has been removed),

Call centers — Technology from Cogito, a company co-founded in 2007 by MIT Sloan alumni, helps call center agents identify the moods of customers on the phone and adjust how they handle the conversation in real time. Cogito’s voice-analytics software is based on years of human behavior research to identify voice patterns.

Mental health — In December 2018 Cogito launched a spinoff called CompanionMx, and an accompanying mental health monitoring app. The Companion app listens to someone speaking into their phone, and analyzes the speaker’s voice and phone use for signs of anxiety and mood changes.

The app improves users’ self-awareness, and can increase coping skills including steps for stress reduction. The company has worked with the Department of Veterans Affairs, the Massachusetts General Hospital, and Brigham & Women’s Hospital in Boston.

Somers’ March 8, 2019 article was an eye-opener.

Getting back to the Korean research, here’s a link to and a citation for the paper,

Research Challenges and Security Threats to AI-Driven 5G Virtual Emotion Applications Using Autonomous Vehicles, Drones, and Smart Devices by Hyunbum Kim, Jalel Ben-Othman, Lynda Mokdad, Junggab Son, and Chunguo Li. IEEE Network, Volume 34, Issue 6 (November/December 2020), pp. 288–294. DOI: 10.1109/MNET.011.2000245 First published online: 12 October 2020

This paper is behind a paywall.

Communicating science effectively—a December 2016 book from the US National Academy of Sciences

I stumbled across this Dec. 13, 2016 essay/book announcement by Dr. Andrew Maynard and Dr. Dietram A. Scheufele on The Conversation,

Many scientists and science communicators have grappled with disregard for, or inappropriate use of, scientific evidence for years – especially around contentious issues like the causes of global warming, or the benefits of vaccinating children. A long debunked study on links between vaccinations and autism, for instance, cost the researcher his medical license but continues to keep vaccination rates lower than they should be.

Only recently, however, have people begun to think systematically about what actually works to promote better public discourse and decision-making around what is sometimes controversial science. Of course scientists would like to rely on evidence, generated by research, to gain insights into how to most effectively convey to others what they know and do.

As it turns out, the science on how to best communicate science across different issues, social settings and audiences has not led to easy-to-follow, concrete recommendations.

About a year ago, the National Academies of Sciences, Engineering and Medicine brought together a diverse group of experts and practitioners to address this gap between research and practice. The goal was to apply scientific thinking to the process of how we go about communicating science effectively. Both of us were a part of this group (with Dietram as the vice chair).

The public draft of the group’s findings – “Communicating Science Effectively: A Research Agenda” – has just been published. In it, we take a hard look at what effective science communication means and why it’s important; what makes it so challenging – especially where the science is uncertain or contested; and how researchers and science communicators can increase our knowledge of what works, and under what conditions.

At some level, all science communication has embedded values. Information always comes wrapped in a complex skein of purpose and intent – even when presented as impartial scientific facts. Despite, or maybe because of, this complexity, there remains a need to develop a stronger empirical foundation for effective communication of and about science.

Addressing this, the National Academies draft report makes an extensive number of recommendations. A few in particular stand out:

  • Use a systems approach to guide science communication. In other words, recognize that science communication is part of a larger network of information and influences that affect what people and organizations think and do.
  • Assess the effectiveness of science communication. Yes, researchers try, but often we still engage in communication first and evaluate later. Better to design the best approach to communication based on empirical insights about both audiences and contexts. Very often, the technical risks that scientists think must be communicated have nothing to do with the hopes or concerns public audiences have.
  • Get better at meaningful engagement between scientists and others to enable that “honest, bidirectional dialogue” about the promises and pitfalls of science that our committee chair Alan Leshner and others have called for.
  • Consider social media’s impact – positive and negative.
  • Work toward better understanding when and how to communicate science around issues that are contentious, or potentially so.

The paper version of the book has a cost, but you can get a free online version. Unfortunately, I cannot copy and paste the book’s table of contents here, and I was not able to find a book index, although there is a handy list of reference texts.

I have taken a very quick look at the book. If you’re in the field, it’s definitely worth a look. It is, however, written for and by academics. If you look at the list of writers and reviewers, you will find over 90% are professors at one university or another. That said, I was happy to see Dan Kahan’s work at the Yale Law School’s Cultural Cognition Project cited. As it happens, they weren’t able to cite his latest work [***see my xxx, 2017 curiosity post***], released about a month after “Communicating Science Effectively: A Research Agenda.”

I was unable to find any reference to science communication via popular culture. I’m a little dismayed, as I feel this is a source of information seriously ignored by science communication specialists and academicians but not by the folks at MIT (Massachusetts Institute of Technology), who announced a wireless app the same week it was featured in an episode of the US television comedy, The Big Bang Theory. Here’s more about MIT’s emotion-detection wireless app from a Feb. 1, 2017 news release (also on EurekAlert),

It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a related paper with PhD candidate Mohammad Ghassemi that they will present at next week’s Association for the Advancement of Artificial Intelligence (AAAI) conference in San Francisco. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.

“As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real-time.”

The researchers say that the system’s performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analyzed by their algorithms. The team is keen to point out that they developed the system with privacy strongly in mind: The algorithm runs locally on a user’s device as a way of protecting personal information. (Alhanai says that a consumer version would obviously need clear protocols for getting consent from the people involved in the conversations.)

How it works

Many emotion-detection studies show participants “happy” and “sad” videos, or ask them to artificially act out specific emotive states. But in an effort to elicit more organic emotions, the team instead asked subjects to tell a happy or sad story of their own choosing.

Subjects wore a Samsung Simband, a research device that captures high-resolution physiological waveforms to measure features such as movement, heart rate, blood pressure, blood flow, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.
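For readers who like to see this concretely, here is a minimal sketch of what assembling a multimodal feature vector from those signals over a five-second window might look like. The signal names, sampling rates, and summary statistics are my own assumptions for illustration, not MIT’s actual code.

```python
# Hypothetical sketch: combine physiological and audio features into one
# feature vector per five-second window; field names are illustrative only.
import numpy as np


def window_features(heart_rate: np.ndarray, skin_temp: np.ndarray,
                    movement: np.ndarray, pitch: np.ndarray,
                    energy: np.ndarray) -> np.ndarray:
    """Summarize each signal over a five-second window with simple statistics."""
    signals = [heart_rate, skin_temp, movement, pitch, energy]
    stats = []
    for s in signals:
        stats.extend([s.mean(), s.std(), s.max() - s.min()])
    return np.array(stats)


# Example: five seconds of fake signals sampled at different (assumed) rates.
rng = np.random.default_rng(0)
features = window_features(
    heart_rate=rng.normal(70, 2, 5),     # ~1 Hz heart-rate samples
    skin_temp=rng.normal(33.0, 0.1, 5),  # ~1 Hz skin temperature
    movement=rng.normal(0, 1, 250),      # ~50 Hz accelerometer magnitude
    pitch=rng.normal(180, 20, 100),      # ~20 Hz pitch track from audio
    energy=rng.normal(0.5, 0.1, 100),    # ~20 Hz audio energy
)
print(features.shape)  # (15,) — one vector per five-second window
```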

“The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, who was not involved in the research. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

After capturing 31 different conversations of several minutes each, the team trained two algorithms on the data: One classified the overall nature of a conversation as either happy or sad, while the second classified each five-second block of every conversation as positive, negative, or neutral.

Alhanai notes that, in traditional neural networks, all features about the data are provided to the algorithm at the base of the network. In contrast, her team found that they could improve performance by organizing different features at the various layers of the network.

“The system picks up on how, for example, the sentiment in the text transcription was more abstract than the raw accelerometer data,” says Alhanai. “It’s quite remarkable that a machine could approximate how we humans perceive these interactions, without significant input from us as researchers.”
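Here is a minimal sketch of what “organizing different features at the various layers of the network” could look like in practice: raw physiological features enter at the bottom and the more abstract text-sentiment features are injected at a later layer, with two output heads matching the two classification tasks described above. The layer sizes, feature dimensions, and framework choice (PyTorch) are my assumptions, not the researchers’ model.

```python
# Hypothetical sketch of layered feature organization with two output heads.
# Architecture details are illustrative only, not the MIT system.
import torch
import torch.nn as nn


class LayeredEmotionNet(nn.Module):
    def __init__(self, n_physio: int = 15, n_text: int = 8):
        super().__init__()
        # Raw physiological features are encoded first, at the base of the network.
        self.physio_encoder = nn.Sequential(
            nn.Linear(n_physio, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
        )
        # More abstract text-sentiment features join only at a later layer.
        self.fusion = nn.Sequential(nn.Linear(16 + n_text, 16), nn.ReLU())
        self.story_head = nn.Linear(16, 2)    # overall story: happy vs. sad
        self.segment_head = nn.Linear(16, 3)  # 5-second block: pos/neg/neutral

    def forward(self, physio: torch.Tensor, text: torch.Tensor):
        h = self.physio_encoder(physio)
        h = self.fusion(torch.cat([h, text], dim=-1))
        return self.story_head(h), self.segment_head(h)


# Toy forward pass: a batch of four five-second windows with matching text features.
model = LayeredEmotionNet()
story_logits, segment_logits = model(torch.randn(4, 15), torch.randn(4, 8))
print(story_logits.shape, segment_logits.shape)  # torch.Size([4, 2]) torch.Size([4, 3])
```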

Results

Indeed, the algorithm’s findings align well with what we humans might expect to observe. For instance, long pauses and monotonous vocal tones were associated with sadder stories, while more energetic, varied speech patterns were associated with happier ones. In terms of body language, sadder stories were also strongly associated with increased fidgeting and cardiovascular activity, as well as certain postures like putting one’s hands on one’s face.

On average, the model could classify the mood of each five-second interval with an accuracy that was approximately 18 percent above chance, and a full 7.5 percent better than existing approaches.

The algorithm is not yet reliable enough to be deployed for social coaching, but Alhanai says that they are actively working toward that goal. For future work the team plans to collect data on a much larger scale, potentially using commercial devices such as the Apple Watch that would allow them to more easily implement the system out in the world.

“Our next step is to improve the algorithm’s emotional granularity so that it is more accurate at calling out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative,’” says Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”

This research was made possible in part by the Samsung Strategy and Innovation Center.

Episode 14 of season 10 of The Big Bang Theory was titled “The Emotion Detection Automation” (full episode can be found on this webpage) and was broadcast on Feb. 2, 2017. There’s also a Feb. 2, 2017 recap (recapitulation) by Lincee Ray for EW.com (it seems Ray is unaware that there really is such a machine),

Who knew we would see the day when Sheldon and Raj figured out solutions for their social ineptitudes? Only The Big Bang Theory writers would think to tackle our favorite physicists’ lack of social skills with an emotion detector and an ex-girlfriend focus group. It’s been a while since I enjoyed both storylines as much as I did in this episode. That’s no bazinga.

When Raj tells the guys that he is back on the market, he wonders out loud what is wrong with his game. Why do women reject him? Sheldon receives the information like a scientist and runs through many possible answers. Raj shuts him down with a simple, “I’m fine.”

Sheldon is irritated when he learns that this obligatory remark is a mask for what Raj is really feeling. It turns out, Raj is not fine. Sheldon whines, wondering why no one just says exactly what’s on their mind. It’s quite annoying for those who struggle with recognizing emotional cues.

Lo and behold, Bernadette recently read about a gizmo that was created for people who have this exact same anxiety. MIT has a prototype, and because Howard is an alum, he can probably submit Sheldon’s name as a beta tester.

Of course this is a real thing. If anyone can build an emotion detector, it’s a bunch of awkward scientists with zero social skills.

This is the first time I’ve noticed an academic institution’s news release appearing almost simultaneously with a mention of its research in a popular culture television program, which suggests things have come a long way since I featured news about a webinar by the National Academies’ Science and Entertainment Exchange for film and television productions collaborating with scientists in an Aug. 28, 2012 post.

One last science/popular culture moment: Hidden Figures, a movie about African American women who were human computers supporting NASA (US National Aeronautics and Space Administration) efforts during the 1960s space race to get a man on the moon, was (shockingly) no. 1 at the US box office for a few weeks (there’s more about the movie here in my Sept. 2, 2016 post covering then upcoming movies featuring science). After the movie was released, Mary Elizabeth Williams wrote up a Jan. 23, 2017 interview with the ‘Hidden Figures’ scriptwriter for Salon.com,

I [Allison Schroeder] got on the phone with her [co-producer Renee Witt] and Donna [co-producer Donna Gigliotti] and I said, “You have to hire me for this; I was born to write this.” Donna sort of rolled her eyes and was like, “God, these Hollywood types would say anything.” I said, “No, no, I grew up at Cape Canaveral. My grandmother was a computer programmer at NASA, my grandfather worked on the Mercury prototype, and I interned there all through high school and then the summer after my freshman year at Stanford I interned. I worked at a missile launch company.”

She was like, “OK that’s impressive.” And I said, “No, I literally grew up climbing on the Mercury capsule — hitting all the buttons, trying to launch myself into space.”

She said, “Well do you think you can handle the math?” I said that I had to study a certain amount of math at Stanford for [my] economics degree. She said, “Oh, all right, that sounds pretty good.”

I pitched her a few scenes. I pitched her the end of the movie that you saw with Katherine running the numbers as John Glenn is trying to get up in space. I pitched her the idea of one of the women as a mechanic and to see her legs underneath the engine. You’re used to seeing a guy like that, but what would it be like to see heels and pantyhose and a skirt and she’s a mechanic and fixing something? Those are some of the scenes that I pitched them, and I got the job.

I love that the film begins with setting up their mechanical aptitude. You set up these are women; you set up these women of color. You set up exactly what that means in this moment in history. It’s like you just go from there.

I was on a really tight timeline because this started as an indie film. It was just Donna Gigliotti, Renee Witt, me and the author Margot Lee Shetterly for about a year working on it. I was only given four weeks for research and 12 weeks for writing the first draft. I’m not sure if I hadn’t known NASA and known the culture and just knew what the machines would look like, knew what the prototypes looked like, if I could have done it that quickly. I turned in that draft and Donna was like, “OK you’ve got the math and the science; it’s all here. Now go have fun.” Then I did a few more drafts and that was really enjoyable because I could let go of the fact I did it and make sure that the characters and the drive of the story and everything just fit what needed to happen.

For anyone interested in the science/popular culture connection, David Bruggeman of the Pasco Phronesis blog does a better job than I do of keeping up with the latest doings.

Getting back to ‘Communicating Science Effectively: A Research Agenda’: even without a mention of popular culture, it is a thoughtful book on the topic.