Tag Archives: emotions

An emotional android child

Caption: The six emotional expressions assessed in the second experiment. See if you can identify them! Note: this is a video made by filming Nikola’s head as it sat on a desk – it is not a computer animated graphic. Credit: RIKEN

This work comes from Japan according to a February 16, 2022 news item on ScienceDaily,

Researchers from the RIKEN Guardian Robot Project in Japan have made an android child named Nikola that successfully conveys six basic emotions. The new study, published in Frontiers in Psychology, tested how well people could identify six facial expressions — happiness, sadness, fear, anger, surprise, and disgust — which were generated by moving “muscles” in Nikola’s face. This is the first time that the quality of android-expressed emotion has been tested and verified for these six emotions.

A February 11, 2022 RIKEN press release (also on EurekAlert but published February 15, 2022), which originated the news item, provides more detail about the work,

Rosie the robot maid was considered science fiction when she debuted on the Jetsons cartoon over 50 years ago. Although the reality of the helpful robot is currently more science and less fiction, there are still many challenges that need to be met, including being able to detect and express emotions. The recent study led by Wataru Sato from the RIKEN Guardian Robot Project focused on building a humanoid robot, or android, that can use its face to express a variety of emotions. The result is Nikola, an android head that looks like a hairless boy.

Inside Nikola’s face are 29 pneumatic actuators that control the movements of artificial muscles. Another 6 actuators control head and eyeball movements. Pneumatic actuators are controlled by air pressure, which makes the movements silent and smooth. The team placed the actuators based on the Facial Action Coding System (FACS), which has been used extensively to study facial expressions. Past research has identified numerous facial action units—such as ‘cheek raiser’ and ‘lip pucker’—that comprise typical emotions such as happiness or disgust, and the researchers incorporated these action units in Nikola’s design.

Typically, studies of emotions, particularly how people react to emotions, have a problem. It is difficult to do a properly controlled experiment with live people interacting, but at the same time, looking at photos or videos of people is less natural, and reactions aren’t the same. “The hope is that with androids like Nikola, we can have our cake and eat it too,” says Sato. “We can control every aspect of Nikola’s behavior, and at the same time study live interactions.” The first step was to see if Nikola’s facial expressions were understandable.

A person certified in FACS [Facial Action Coding System] scoring was able to identify each facial action unit, indicating that Nikola’s facial movements accurately resemble those of a real human. A second test showed that everyday people could recognize the six prototypical emotions—happiness, sadness, fear, anger, surprise, and disgust—in Nikola’s face, albeit with varying accuracy. This is because Nikola’s silicone skin is less elastic than real human skin and cannot form wrinkles very well. Thus, emotions like disgust were harder to identify because the action unit for nose wrinkling could not be included.

“In the short term, androids like Nikola can be important research tools for social psychology or even social neuroscience,” says Sato. “Compared with human confederates, androids are good at controlling behaviors and can facilitate rigorous empirical investigation of human social interactions.” As an example, the researchers asked people to rate the naturalness of Nikola’s emotions as the speed of his facial movements was systematically controlled. The researchers found that the most natural speed was slower for some emotions like sadness than it was for others like surprise.

While Nikola still lacks a body, the ultimate goal of the Guardian Robot Project is to build an android that can assist people, particularly those with physical needs who live alone. “Androids that can emotionally communicate with us will be useful in a wide range of real-life situations, such as caring for older people, and can promote human wellbeing,” says Sato.
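Purely as an illustration of the FACS idea described above (and not RIKEN’s actual control software, which the press release doesn’t detail), here is a minimal Python sketch of how prototypical emotions might be composed from facial action units and then turned into pneumatic actuator commands. The happiness combination (AU6 ‘cheek raiser’ plus AU12 ‘lip corner puller’) and the nose-wrinkling AU9 that Nikola’s skin struggles with are standard FACS assignments; the intensities, channel names, and everything else are invented for the example.

```python
# Illustrative only: a toy mapping from prototypical emotions to FACS action
# units, and from action units to hypothetical pneumatic actuator channels.
# The AU combinations are common textbook ones; Nikola's actual actuator
# layout and the paper's exact choices are not reproduced here.

from typing import Dict, List

# Emotion -> {action unit number: intensity from 0.0 to 1.0}
EMOTION_TO_AUS: Dict[str, Dict[int, float]] = {
    "happiness": {6: 0.8, 12: 1.0},          # cheek raiser + lip corner puller
    "sadness":   {1: 0.7, 4: 0.6, 15: 0.8},  # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1: 0.9, 2: 0.9, 5: 0.8, 26: 0.7},
    "disgust":   {9: 1.0, 15: 0.5},          # AU9 (nose wrinkler) is the unit Nikola's skin can't render well
}

# Action unit -> hypothetical actuator channel names
AU_TO_ACTUATORS: Dict[int, List[str]] = {
    1: ["brow_inner_L", "brow_inner_R"],
    2: ["brow_outer_L", "brow_outer_R"],
    4: ["brow_lower_L", "brow_lower_R"],
    5: ["lid_upper_L", "lid_upper_R"],
    6: ["cheek_L", "cheek_R"],
    9: ["nose_wrinkle"],
    12: ["lip_corner_L", "lip_corner_R"],
    15: ["lip_depress_L", "lip_depress_R"],
    26: ["jaw"],
}

def actuator_pressures(emotion: str) -> Dict[str, float]:
    """Convert an emotion label into per-channel pressure setpoints (0-1)."""
    pressures: Dict[str, float] = {}
    for au, intensity in EMOTION_TO_AUS[emotion].items():
        for channel in AU_TO_ACTUATORS.get(au, []):
            pressures[channel] = max(pressures.get(channel, 0.0), intensity)
    return pressures

print(actuator_pressures("happiness"))
# {'cheek_L': 0.8, 'cheek_R': 0.8, 'lip_corner_L': 1.0, 'lip_corner_R': 1.0}
```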

Here’s a link to and a citation for the paper,

An Android for Emotional Interaction: Spatiotemporal Validation of Its Facial Expressions by Wataru Sato, Shushi Namba, Dongsheng Yang, Shin’ya Nishida, Carlos Ishi, and Takashi Minato. Front. Psychol., 04 February 2022 DOI: https://doi.org/10.3389/fpsyg.2021.800657

This paper is open access.

For anyone who’d like to investigate the worlds of robots, artificial intelligence, and emotions, I have my December 3, 2021 posting “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)” and there’s also Hiroshi Ishiguro’s work, which I’ve mentioned a number of times here, most recently in a March 27, 2017 posting “Ishiguro’s robots and Swiss scientist question artificial intelligence at SXSW (South by Southwest) 2017.”

Emotional robots

This is some very intriguing work,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

A July 16, 2018 Cornell University news release on EurekAlert offers more insight into the work,

Cornell University researchers have developed a prototype of a robot that can express “emotions” through changes in its outer surface. The robot’s skin covers a grid of texture units whose shapes change based on the robot’s feelings.

Assistant professor of mechanical and aerospace engineering Guy Hoffman, who has given a TEDx talk on “Robots with ‘soul’,” said the inspiration for designing a robot that gives off nonverbal cues through its outer skin comes from the animal world, based on the idea that robots shouldn’t be thought of in human terms.

“I’ve always felt that robots shouldn’t just be modeled after humans or be copies of humans,” he said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

Their work is detailed in a paper, “Soft Skin Texture Modulation for Social Robots,” presented at the International Conference on Soft Robotics in Livorno, Italy. Doctoral student Yuhan Hu was lead author; the paper was featured in IEEE Spectrum, a publication of the Institute of Electrical and Electronics Engineers.

Hoffman and Hu’s design features an array of two shapes, goosebumps and spikes, which map to different emotional states. The actuation units for both shapes are integrated into texture modules, with fluidic chambers connecting bumps of the same kind.

The team tried two different actuation control systems, with minimizing size and noise level a driving factor in both designs. “One of the challenges,” Hoffman said, “is that a lot of shape-changing technologies are quite loud, due to the pumps involved, and these make them also quite bulky.”

Hoffman does not have a specific application for his robot with texture-changing skin mapped to its emotional state. At this point, just proving that this can be done is a sizable first step. “It’s really just giving us another way to think about how robots could be designed,” he said.

Future challenges include scaling the technology to fit into a self-contained robot – whatever shape that robot takes – and making the technology more responsive to the robot’s immediate emotional changes.

“At the moment, most social robots express [their] internal state only by using facial expressions and gestures,” the paper concludes. “We believe that the integration of a texture-changing skin, combining both haptic [feel] and visual modalities, can thus significantly enhance the expressive spectrum of robots for social interaction.”
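To make that mapping idea a little more concrete, here is a rough Python sketch of how an internal emotional state might be translated into texture commands for a skin with two shape types, goosebumps and spikes, as described above. The arousal/valence framing, thresholds, and command fields are my own illustration, not Hoffman and Hu’s control code.

```python
# Illustrative sketch only: translating a robot's internal state into
# texture-module commands for a skin with two shape types (goosebumps
# and spikes), as in the Cornell prototype. The thresholds and field
# names here are invented for the example.

from dataclasses import dataclass

@dataclass
class EmotionalState:
    arousal: float   # 0.0 (calm) to 1.0 (highly aroused)
    valence: float   # -1.0 (negative) to 1.0 (positive)

def texture_command(state: EmotionalState) -> dict:
    """Pick a texture shape and an inflation level from the internal state."""
    if state.arousal < 0.2:
        shape = "flat"            # skin stays smooth when the robot is calm
    elif state.valence >= 0.0:
        shape = "goosebumps"      # rounded bumps for excited / positive states
    else:
        shape = "spikes"          # spikes for agitated / negative states
    return {
        "shape": shape,
        "inflation": round(state.arousal, 2),   # pump pressure scales with arousal
        "pulse_hz": 0.5 + 2.0 * state.arousal,  # faster texture pulsing when aroused
    }

print(texture_command(EmotionalState(arousal=0.9, valence=-0.6)))
# {'shape': 'spikes', 'inflation': 0.9, 'pulse_hz': 2.3}
```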

A video helps to explain the work,

I don’t consider ‘sleepy’ to be an emotional state but, as noted earlier, this is intriguing. You can find out more in a July 9, 2018 article by Tom Fleischman for the Cornell Chronicle (Note: the news release was fashioned from this article, so you will find some redundancy should you read it in its entirety),

In 1872, Charles Darwin published his third major work on evolutionary theory, “The Expression of the Emotions in Man and Animals,” which explores the biological aspects of emotional life.

In it, Darwin writes: “Hardly any expressive movement is so general as the involuntary erection of the hairs, feathers and other dermal appendages … it is common throughout three of the great vertebrate classes.” Nearly 150 years later, the field of robotics is starting to draw inspiration from those words.

“The aspect of touch has not been explored much in human-robot interaction, but I often thought that people and animals do have this change in their skin that expresses their internal state,” said Guy Hoffman, assistant professor and Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering (MAE).

Inspired by this idea, Hoffman and students in his Human-Robot Collaboration and Companionship Lab have developed a prototype of a robot that can express “emotions” through changes in its outer surface. …

Part of our relationship with other species is our understanding of the nonverbal cues animals give off – like the raising of fur on a dog’s back or a cat’s neck, or the ruffling of a bird’s feathers. Those are unmistakable signals that the animal is somehow aroused or angered; the fact that they can be both seen and felt strengthens the message.

“Yuhan put it very nicely: She said that humans are part of the family of species, they are not disconnected,” Hoffman said. “Animals communicate this way, and we do have a sensitivity to this kind of behavior.”

You can find the paper presented at the International Conference on Soft Robotics in Livorno, Italy, ‘Soft Skin Texture Modulation for Social Robotics’ by Yuhan Hu, Zhengnan Zhao, Abheek Vimal, and Guy Hoffman, here.

The song is you: a McGill University, University of Cambridge, and Stanford University research collaboration

These days I’m thinking about sound, music, spoken word, and more as I prepare for a new art/science piece. It’s very early stages, so I don’t have much more to say about it, but along those lines of thought, there’s a recent piece of research on music and personality that caught my eye. From a May 11, 2016 news item on phys.org,

A team of scientists from McGill University, the University of Cambridge, and Stanford Graduate School of Business developed a new method of coding and categorizing music. They found that people’s preference for these musical categories is driven by personality. The researchers say the findings have important implications for industry and health professionals.

A May 10, 2016 McGill University news release, which originated the news item, provides some fascinating suggestions for new categories for music,

There are a multitude of adjectives that people use to describe music, but in a recent study to be published this week in the journal Social Psychological and Personality Science, researchers show that musical attributes can be grouped into three categories. Rather than relying on the genre or style of a song, the team of scientists led by music psychologist David Greenberg with the help of Daniel J. Levitin from McGill University mapped the musical attributes of song excerpts from 26 different genres and subgenres, and then applied a statistical procedure to group them into clusters. The study revealed three clusters, which they labeled Arousal, Valence, and Depth. Arousal describes intensity and energy in music; Valence describes the spectrum of emotions in music (from sad to happy); and Depth describes intellect and sophistication in music. They also found that characteristics describing music from a single genre (both rock and jazz separately) could be grouped in these same three categories.

The findings suggest that this may be a useful alternative to grouping music into genres, which is often based on social connotations rather than the attributes of the actual music. It also suggests that those in academia and industry (e.g. Spotify and Pandora) that are already coding music on a multitude of attributes might save time and money by coding music around these three composite categories instead.

The researchers also conducted a second study of nearly 10,000 Facebook users who indicated their preferences for 50 musical excerpts from different genres. The researchers were then able to map preferences for these three attribute categories onto five personality traits and 30 detailed personality facets. For example, they found people who scored high on Openness to Experience preferred Depth in music, while Extraverted excitement-seekers preferred high Arousal in music. And those who scored high on Neuroticism preferred negative emotions in music, while those who were self-assured preferred positive emotions in music. As the title from the old Kern and Hammerstein song suggests, “The Song is You”. That is, the musical attributes that you like most reflect your personality. It also provides scientific support for what Joni Mitchell said in a 2013 interview with the CBC: “The trick is if you listen to that music and you see me, you’re not getting anything out of it. If you listen to that music and you see yourself, it will probably make you cry and you’ll learn something about yourself and now you’re getting something out of it.”

The researchers hope that this information will be helpful not only to music therapists but also to health care professionals and even hospitals. For example, recent evidence has shown that music listening can increase recovery after surgery. The researchers argue that information about music preferences and personality could inform a music listening protocol after surgery to boost recovery rates.

The article is another in a series of studies that Greenberg and his team have published on music and personality. This past July [2015], they published an article in PLOS ONE showing that people’s musical preferences are linked to thinking styles. And in October [2015], they published an article in the Journal of Research in Personality, identifying the personality trait Openness to Experience as a key predictor of musical ability, even in non-musicians. This series of studies tells us that there are close links between our personality and musical behavior that may be beyond our control and awareness.

Readers can find out how they score on the music and personality quizzes at www.musicaluniverse.org.

David M. Greenberg, lead author from Cambridge University and City University of New York, said: “Genre labels are informative but we’re trying to transcend them and move in a direction that points to the detailed characteristics in music that are driving people’s preferences and emotional reactions.”

Greenberg added: “As a musician, I see how vast the powers of music really are, and unfortunately, many of us do not use music to its full potential. Our ultimate goal is to create science that will help enhance the experience of listening to music. We want to use this information about personality and preferences to increase the day-to-day enjoyment and peak experiences people have with music.”
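For readers who like to see the mechanics, here is a minimal Python sketch of the general workflow the study describes: reduce many per-song attribute ratings to a few composite dimensions, then correlate listeners’ preference scores on those dimensions with personality scores. It runs generic scikit-learn and SciPy calls on made-up data; the authors’ actual attribute set and statistical procedure are not reproduced here.

```python
# Illustrative only: the general "attributes -> a few dimensions -> correlate
# with personality" workflow, on synthetic data. Not the published analysis.

import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Pretend 200 song excerpts were each rated on 20 musical attributes
# (e.g. "intense", "mellow", "sophisticated", ...).
song_attributes = rng.normal(size=(200, 20))

# Reduce the 20 attributes to 3 composite dimensions, loosely analogous
# to the Arousal / Valence / Depth clusters reported in the study.
pca = PCA(n_components=3)
song_dims = pca.fit_transform(song_attributes)   # shape (200, 3)

# Pretend 500 listeners rated their liking of every excerpt (1-9 scale)...
liking = rng.integers(1, 10, size=(500, 200)).astype(float)

# ...so, as a simple proxy, each listener's preference for a dimension is
# their liking scores projected onto that dimension.
preference_scores = liking @ song_dims / len(song_dims)   # shape (500, 3)

# ...and also filled in a Big Five questionnaire (here: random scores).
openness = rng.normal(size=500)

# Does Openness track preference for the "Depth"-like third dimension?
r, p = pearsonr(openness, preference_scores[:, 2])
print(f"r = {r:.2f}, p = {p:.3f}")   # ~0 here, since the data are random
```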

William Hoffman in a May 11, 2016 article for Inverse describes the work in connection with recently released new music from Radiohead and an upcoming release from Chance the Rapper (along with a brief mention of Drake), Note: Links have been removed,

Music critics regularly scour Thesaurus.com for the best adjectives to throw into their perfectly descriptive melodious disquisitions on the latest works from Drake, Radiohead, or whomever. And listeners of all walks have, since the beginning of music itself, been guilty of lazily pigeonholing artists into numerous socially constructed genres. But all of that can be (and should be) thrown out the window now, because new research suggests that, to perfectly match music to a listener’s personality, all you need are these three scientific measurables [arousal, valence, depth].

This suggests that a slow, introspective gospel song from Chance The Rapper’s upcoming album could have the same depth as a track from Radiohead’s A Moon Shaped Pool. So a system of categorization based on Greenberg’s research would, surprisingly but rightfully, place the rap and rock works in the same bin.

Here’s a link to and a citation for the latest paper,

The Song Is You: Preferences for Musical Attribute Dimensions Reflect Personality by David M. Greenberg, Michal Kosinski, David J. Stillwell, Brian L. Monteiro, Daniel J. Levitin, and Peter J. Rentfrow. Social Psychological and Personality Science, 1948550616641473, first published on May 9, 2016

This paper is behind a paywall.

Here’s a link to and a citation for the October 2015 paper,

Personality predicts musical sophistication by David M. Greenberg, Daniel Müllensiefen, Michael E. Lamb, and Peter J. Rentfrow. Journal of Research in Personality, Volume 58, October 2015, Pages 154–158 doi:10.1016/j.jrp.2015.06.002 Note: A Feb. 2016 erratum is also listed.

The paper is behind a paywall and it looks as if you will have to pay for it and for the erratum separately.

Here’s a link to and a citation for the July 2015 paper,

Musical Preferences are Linked to Cognitive Styles by David M. Greenberg, Simon Baron-Cohen, David J. Stillwell, Michal Kosinski, and Peter J. Rentfrow. PLOS ONE [Public Library of Science ONE] http://dx.doi.org/10.1371/journal.pone.0131151 Published: July 22, 2015

This paper is open access.

I tried out the research project’s website, The Musical Universe, by filling out the Musical Taste questionnaire. Unfortunately, I did not receive my results. Since the team’s latest research has just been reported, I imagine there are many people trying to do the same thing. It might be worth your while to wait a bit if you want to try this out, or you can fill out one of their other questionnaires. Oh, and you might want to allot at least 20 minutes.

A wearable book (The Girl Who Was Plugged In) makes you feel the protagonist’s pain

A team of students taking an MIT (Massachusetts Institute of Technology) course called ‘Science Fiction to Science Fabrication’ has created a new kind of category for books: sensory fiction. John Brownlee in his Feb. 10, 2014 article for Fast Company describes it this way,

Have you ever felt your pulse quicken when you read a book, or your skin go clammy during a horror story? A new student project out of MIT wants to deepen those sensations. They have created a wearable book that uses inexpensive technology and neuroscientific hacking to create a sort of cyberpunk Neverending Story that blurs the line between the bodies of a reader and protagonist.

Called Sensory Fiction, the project was created by a team of four MIT students–Felix Heibeck, Alexis Hope, Julie Legault, and Sophia Brueckner …

Here’s the MIT video demonstrating the book in use (from the course’s sensory fiction page),

Here’s how the students have described their sensory book, from the project page,

Sensory fiction is about new ways of experiencing and creating stories.

Traditionally, fiction creates and induces emotions and empathy through words and images.  By using a combination of networked sensors and actuators, the Sensory Fiction author is provided with new means of conveying plot, mood, and emotion while still allowing space for the reader’s imagination. These tools can be wielded to create an immersive storytelling experience tailored to the reader.

To explore this idea, we created a connected book and wearable. The ‘augmented’ book portrays the scenery and sets the mood, and the wearable allows the reader to experience the protagonist’s physiological emotions.

The book cover animates to reflect the book’s changing atmosphere, while certain passages trigger vibration patterns.

Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable, whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localized temperature fluctuations.

Our prototype story, ‘The Girl Who Was Plugged In’ by James Tiptree Jr., showcases an incredible range of settings and emotions. The main protagonist experiences both deep love and ultimate despair, the freedom of Barcelona sunshine and the captivity of a dark damp cellar.

The book and wearable support the following outputs:

  • Light (the book cover has 150 programmable LEDs to create ambient light based on changing setting and mood)
  • Sound
  • Personal heating device to change skin temperature (through a Peltier junction secured at the collarbone)
  • Vibration to influence heart rate
  • Compression system (to convey tightness or loosening through pressurized airbags)
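As a thought experiment on how such a system might be scripted (the MIT team’s actual firmware isn’t included in the materials quoted here), here is a small Python sketch that maps page numbers to cues for the kinds of outputs listed above. The page numbers, cue values, and the send_cue() stub are all invented for illustration.

```python
# Invented example: a page-triggered cue sheet for a "sensory fiction"
# wearable with lighting, heating, vibration, and compression outputs.
# Page numbers, cue values, and send_cue() are hypothetical placeholders.

CUE_SHEET = {
    # page number: cues applied when the reader reaches that page
    12: {"led_rgb": (255, 180, 60), "heat_delta_c": 1.5,  "vibration_bpm": 70,  "compression": 0.0},  # sunshine in Barcelona
    47: {"led_rgb": (40, 40, 90),   "heat_delta_c": -2.0, "vibration_bpm": 95,  "compression": 0.4},  # dread building
    83: {"led_rgb": (10, 10, 20),   "heat_delta_c": -3.0, "vibration_bpm": 120, "compression": 0.8},  # the dark, damp cellar
}

def send_cue(device: str, value) -> None:
    """Stand-in for whatever would drive the LEDs, Peltier element,
    vibration motor, and air bags; here it just prints the command."""
    print(f"{device}: {value}")

def on_page_turn(page: int) -> None:
    """Called by the book's page sensor; pushes any cues set for that page."""
    for device, value in CUE_SHEET.get(page, {}).items():
        send_cue(device, value)

# Example: reaching page 47 dims the cover lights, cools the collarbone
# heating element, raises the simulated heart rate, and tightens the air bags.
on_page_turn(47)
```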

One of the earliest stories about this project was a Jan. 28, 2014 piece written by Alison Flood for the Guardian, where she explains how vibration and other outputs are used to convey and stimulate the reader’s sensations and emotions,

MIT scientists have created a ‘wearable’ book using temperature and lighting to mimic the experiences of a book’s protagonist

The book, explain the researchers, senses the page a reader is on, and changes ambient lighting and vibrations to “match the mood”. A series of straps form a vest which contains a “heartbeat and shiver simulator”, a body compression system, temperature controls and sound.

“Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable [vest], whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localised temperature fluctuations,” say the academics.

Flood goes on to illuminate how science fiction has explored the notion of ‘sensory books’ (Note: Links have been removed) and how at least one science fiction novelist is responding to this new type of book,

The Arthur C Clarke award-winning science fiction novelist Chris Beckett wrote about a similar invention in his novel Marcher, although his “sensory” experience comes in the form of a video game.

Adam Roberts, another prize-winning science fiction writer, found the idea of “sensory” fiction “amazing”, but also “infantilising, like reverting to those sorts of books we buy for toddlers that have buttons in them to generate relevant sound-effects”.

Elise Hu in her Feb. 6, 2014 posting on the US National Public Radio (NPR) blog, All Tech Considered, takes a different approach to the topic,

The prototype does work, but it won’t be manufactured anytime soon. The creation was only “meant to provoke discussion,” Hope says. It was put together as part of a class in which designers read science fiction and make functional prototypes to explore the ideas in the books.

If it ever does become more widely available, sensory fiction could have an unintended consequence. When I shared this idea with NPR editor Ellen McDonnell, she quipped, “If these device things are helping ‘put you there,’ it just means the writing won’t have to be as good.”

I hope the students are successful at provoking discussion; so far, they seem to have primarily provoked interest.

As for my two cents, I think that in a world where making personal connections seems increasingly difficult (i.e., people becoming more isolated), sensory fiction that stimulates people into feeling something as they read a book is a logical progression. It’s also interesting to me that all of the focus is on the reader, with no mention of what writers might produce (other than McDonnell’s cheeky comment) if they knew their books were going to be given the ‘sensory treatment’. One more musing: I wonder if there might be a difference in how males and females, writers and readers, respond to sensory fiction.

Now for a bit of wordplay. Feeling can be emotional but, in English, it can also refer to touch, and researchers at MIT have also been investigating new touch-oriented media. You can read more about that project in my Nov. 13, 2013 posting “Reaching beyond the screen with the Tangible Media Group at the Massachusetts Institute of Technology (MIT).” One final thought: I am intrigued by how interested scientists at MIT seem to be in feelings of all kinds.

Emotions and robots

Two new robots (the type that can show their emotions, more or less) have recently been introduced, according to an article by Kit Eaton titled “Kid and Baby Robots Get Creepy Emotional Faces” on Fast Company. From the article,

The two bots were revealed today by creators the JST Erato Asada Project–a research team dedicated to investigating how humans and robots can better relate to each other in the future and so that robots can learn better (though given the early stages of current artificial intelligence science, it’s almost a case of working out how humans can feel better about interacting with robots).

…

The first is M3-Kindy, a 27-kilo machine with 42 motors and over a hundred touch-sensors. He’s about the size of a 5-year-old child, and can do speech recognition and machine vision with his stereoscopic camera eyes. Kindy’s also designed to be led around by humans holding its hand, and can be taught to manipulate objects.

But it’s Kindy’s face that’s the freakiest bit. It’s been carefully designed so that it can portray emotions. That’ll undoubtedly be useful in the future, when, for instance, having more friendly, emotionally attractive robot carers look after elderly people and patients in hospitals is going to be important.

… Noby will have you running out of the room. It’s a similar human-machine interaction research droid, but is meant to model a 9-month-old baby, right down to the mass and density of its limbs and soft skin.

Do visit the article to see the images of the two robots and read more.