Tag Archives: Takashi Minato

An emotional android child

Caption: The six emotional expressions assessed in the second experiment. See if you can identify them! Note: this is a video made by filming Nikola’s head as it sat on a desk – it is not a computer-animated graphic. Credit: RIKEN

This work comes from Japan according to a February 16, 2022 news item on ScienceDaily,

Researchers from the RIKEN Guardian Robot Project in Japan have made an android child named Nikola that successfully conveys six basic emotions. The new study, published in Frontiers in Psychology, tested how well people could identify six facial expressions — happiness, sadness, fear, anger, surprise, and disgust — which were generated by moving “muscles” in Nikola’s face. This is the first time that the quality of android-expressed emotion has been tested and verified for these six emotions.

A February 11, 2022 RIKEN press release (also on EurekAlert but published February 15, 2022), which originated the news item, provides more detail about the work,

Rosie the robot maid was considered science fiction when she debuted on The Jetsons cartoon over 50 years ago. Although the reality of the helpful robot is currently more science and less fiction, there are still many challenges that need to be met, including being able to detect and express emotions. The recent study led by Wataru Sato from the RIKEN Guardian Robot Project focused on building a humanoid robot, or android, that can use its face to express a variety of emotions. The result is Nikola, an android head that looks like a hairless boy.

Inside Nikola’s face are 29 pneumatic actuators that control the movements of artificial muscles. Another 6 actuators control head and eyeball movements. Pneumatic actuators are controlled by air pressure, which makes the movements silent and smooth. The team placed the actuators based on the Facial Action Coding System (FACS), which has been used extensively to study facial expressions. Past research has identified numerous facial action units—such as ‘cheek raiser’ and ‘lip pucker’—that comprise typical emotions such as happiness or disgust, and the researchers incorporated these action units in Nikola’s design.

Typically, studies of emotions, particularly how people react to emotions, have a problem. It is difficult to do a properly controlled experiment with live people interacting, but at the same time, looking at photos or videos of people is less natural, and reactions aren’t the same. “The hope is that with androids like Nikola, we can have our cake and eat it too,” says Sato. “We can control every aspect of Nikola’s behavior, and at the same time study live interactions.” The first step was to see if Nikola’s facial expressions were understandable.

A person certified in FACS [Facial Action Coding System] scoring was able to identify each facial action unit, indicating that Nikola’s facial movements accurately resemble those of a real human. A second test showed that everyday people could recognize the six prototypical emotions—happiness, sadness, fear, anger, surprise, and disgust—in Nikola’s face, albeit with varying accuracy. This is because Nikola’s silicone skin is less elastic than real human skin and cannot form wrinkles very well. Thus, emotions like disgust were harder to identify because the action unit for nose wrinkling could not be included.

“In the short term, androids like Nikola can be important research tools for social psychology or even social neuroscience,” says Sato. “Compared with human confederates, androids are good at controlling behaviors and can facilitate rigorous empirical investigation of human social interactions.” As an example, the researchers asked people to rate the naturalness of Nikola’s emotions as the speed of his facial movements was systematically controlled. The researchers found that the most natural speed was slower for some emotions like sadness than it was for others like surprise.

While Nikola still lacks a body, the ultimate goal of the Guardian Robot Project is to build an android that can assist people, particularly those with physical needs who live alone. “Androids that can emotionally communicate with us will be useful in a wide range of real-life situations, such as caring for older people, and can promote human wellbeing,” says Sato.
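The press release doesn’t include Nikola’s actual control code, but for the curious, here’s a rough Python sketch of how a FACS-based table mapping emotions to action units might be organized. The AU numbers follow standard FACS conventions (AU6 is the ‘cheek raiser’, AU12 the ‘lip corner puller’); the intensities and everything else are my own illustrative guesses, not anything from the RIKEN team,

```python
# Illustrative sketch of a FACS-style expression table. The AU numbers follow
# standard FACS conventions; the intensities and the lookup function are
# invented for illustration and are not taken from the RIKEN study.

# Prototypical emotions expressed as FACS action units with intensities (0.0-1.0).
EMOTION_TO_AUS = {
    "happiness": {6: 0.8, 12: 1.0},                   # cheek raiser, lip corner puller
    "sadness":   {1: 0.7, 4: 0.6, 15: 0.8},           # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1: 1.0, 2: 1.0, 5: 0.9, 26: 0.8},   # brow raisers, upper lid raiser, jaw drop
    "anger":     {4: 1.0, 5: 0.7, 7: 0.8, 23: 0.6},   # brow lowerer, lid and lip tighteners
    "fear":      {1: 0.8, 2: 0.6, 4: 0.5, 5: 1.0, 20: 0.7},  # plus lip stretcher
    "disgust":   {9: 1.0, 15: 0.5, 16: 0.4},          # nose wrinkler (the one Nikola's skin struggles with)
}

def expression_command(emotion: str) -> dict:
    """Return the action-unit intensities for a prototypical emotion."""
    try:
        return EMOTION_TO_AUS[emotion]
    except KeyError:
        raise ValueError(f"unknown emotion: {emotion!r}") from None

if __name__ == "__main__":
    for name in EMOTION_TO_AUS:
        print(name, expression_command(name))
```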

Here’s a link to and a citation for the paper,

An Android for Emotional Interaction: Spatiotemporal Validation of Its Facial Expressions by Wataru Sato, Shushi Namba, Dongsheng Yang, Shin’ya Nishida, Carlos Ishi, and Takashi Minato. Frontiers in Psychology, 4 February 2022. DOI: https://doi.org/10.3389/fpsyg.2021.800657

This paper is open access.

For anyone who’d like to investigate the worlds of robots, artificial intelligence, and emotions, I have my December 3, 2021 posting “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)” and there’s also Hiroshi Ishiguro’s work, which I’ve mentioned a number of times here, most recently in a March 27, 2017 posting “Ishiguro’s robots and Swiss scientist question artificial intelligence at SXSW (South by Southwest) 2017.”

What about the heart? and the quest to make androids lifelike

Japanese scientist Hiroshi Ishiguro has been mentioned here several times in the context of ‘lifelike’ robots. Accordingly, it’s no surprise to see Ishiguro’s name in a June 24, 2014 news item about uncannily lifelike robotic tour guides in a Tokyo museum (CBC (Canadian Broadcasting Corporation) News online),

The new robot guides at a Tokyo museum look so eerily human and speak so smoothly they almost outdo people — almost.

Japanese robotics expert Hiroshi Ishiguro, an Osaka University professor, says they will be useful for research on how people interact with robots and on what differentiates the person from the machine.

“Making androids is about exploring what it means to be human,” he told reporters Tuesday [June 24, 2014], “examining the question of what is emotion, what is awareness, what is thinking.”

In a demonstration, the remote-controlled machines moved their pink lips in time to a voice-over, twitched their eyebrows, blinked and swayed their heads from side to side. They stay seated but can move their hands.

Ishiguro and his robots were also mentioned in a May 29, 2014 article by Carey Dunne for Fast Company. The article concerned a photographic project of Luisa Whitton’s.

In her series “What About the Heart?,” British photographer Luisa Whitton documents one of the creepiest niches of the Japanese robotics industry–androids. Here, an eerily lifelike face made for a robot. [downloaded from http://www.fastcodesign.com/3031125/exposure/japans-uncanny-quest-to-humanize-robots?partner=rss]

From Dunne’s May 29, 2014 article (Note: Links have been removed),

We’re one step closer to a robot takeover. At least, that’s one interpretation of “What About the Heart?” a new series by British photographer Luisa Whitton. In 17 photos, Whitton documents one of the creepiest niches of the Japanese robotics industry–androids. These are the result of a growing group of scientists trying to make robots look like living, breathing people. Their efforts pose a question that’s becoming more relevant as Siri and her robot friends evolve: what does it mean to be human as technology progresses?

Whitton spent several months in Japan working with Hiroshi Ishiguro, a scientist who has constructed a robotic copy of himself. Ishiguro’s research focused on whether his robotic double could somehow possess his “Sonzai-Kan,” a Japanese term that translates to the “presence” or “spirit” of a person. It’s work that blurs the line between technology, philosophy, psychology, and art, using real-world studies to examine existential issues once reserved for speculation by the likes of Philip K. Dick or Sigmund Freud. And if this sounds like a sequel to Blade Runner, it gets weirder: after Ishiguro aged, he had plastic surgery so that his face still matched that of his younger, mechanical doppelganger.

I profiled Ishiguro’s robots (then called Geminoids) in a March 10, 2011 posting which featured a Danish philosopher, Henrik Scharfe, who’d commissioned a Geminoid identical to himself for research purposes. He doesn’t seem to have published any papers about his experience but there is this interview of Scharfe and his Geminoid twin by Aldith Hunkar (she’s very good) at a 2011 TEDxAmsterdam,

Mary King’s 2007 research project, Robots and AI in Japan and The West, notes a contrast between the two cultures and provides an excellent primer (Note: A link has been removed),

The Japanese scientific approach and expectations of robots and AI are far more down to earth than those of their Western counterparts. Certainly, future predictions made by Japanese scientists are far less confrontational or sci-fi-like. In an interview via email, Canadian technology journalist Tim N. Hornyak described the Japanese attitude towards robots as being “that of the craftsman, not the philosopher” and cited this as the reason for “so many rosy imaginings of a future Japan in which robots are a part of people’s everyday lives.”

Hornyak, who is author of “Loving the Machine: The Art and Science of Japanese Robots,” acknowledges that apocalyptic visions do appear in manga and anime, but emphasizes that such forecasts do not exist in government circles or within Japanese companies. Hornyak also added that while AI has for many years taken a back seat to robot development in Japan, this situation is now changing. Honda, for example, is working on giving better brains to Asimo, which is already the world’s most advanced humanoid robot. Japan is also already legislating early versions of Asimov’s laws by introducing design requirements for next-generation mobile robots.

It does seem there might be more interest in the philosophical issues in Japan these days, or possibly it’s a reflection of Ishiguro’s own current concerns (from Dunne’s May 29, 2014 article),

The project’s title derives from a discussion with Ishiguro about what it means to be human. “The definition of human will be more complicated,” Ishiguro said.

Dunne reproduces a portion of Whitton’s statement describing her purpose for these photographs,

Through Ishiguro, Whitton got in touch with a number of other scientists working on androids. “In the photographs, I am trying to subvert the traditional formula of portraiture and allure the audience into a debate on the boundaries that determine the dichotomy of the human/not human,” she writes in her artist statement. “The photographs become documents of objects that sit between scientific tool and horrid simulacrum.”

I’m not sure what she means by “horrid simulacrum” but she seems to be touching on the concept of the ‘uncanny valley’. Here’s a description I provided in a May 31, 2013 posting about animator Chris Landreth and his explorations of that valley within the context of his animated film, Subconscious Password,

Landreth also discusses the ‘uncanny valley’ and how he deliberately cast his film into that valley. For anyone who’s unfamiliar with the ‘uncanny valley’ I wrote about it in a Mar. 10, 2011 posting concerning Geminoid robots,

It seems that researchers believe that the ‘uncanny valley’ doesn’t necessarily have to exist forever and at some point, people will accept humanoid robots without hesitation. In the meantime, here’s a diagram of the ‘uncanny valley’,

From the article on Android Science by Masahiro Mori (translated by Karl F. MacDorman and Takashi Minato)

Here’s what Mori (the person who coined the term) had to say about the ‘uncanny valley’ (from Android Science),

Recently there are many industrial robots, and as we know the robots do not have a face or legs, and just rotate or extend or contract their arms, and they bear no resemblance to human beings. Certainly the policy for designing these kinds of robots is based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but their appearance is not evaluated. If we plot these industrial robots on a graph of familiarity versus appearance, they lie near the origin (see Figure 1 [above]). So they bear little resemblance to a human being, and in general people do not find them to be familiar. But if the designer of a toy robot puts importance on a robot’s appearance rather than its function, the robot will have a somewhat humanlike appearance with a face, two arms, two legs, and a torso. This design lets children enjoy a sense of familiarity with the humanoid toy. So the toy robot is approaching the top of the first peak.

Of course, human beings themselves lie at the final goal of robotics, which is why we make an effort to build humanlike robots. For example, a robot’s arms may be composed of a metal cylinder with many bolts, but to achieve a more humanlike appearance, we paint over the metal in skin tones. These cosmetic efforts cause a resultant increase in our sense of the robot’s familiarity. Some readers may have felt sympathy for handicapped people they have seen who attach a prosthetic arm or leg to replace a missing limb. But recently prosthetic hands have improved greatly, and we cannot distinguish them from real hands at a glance. Some prosthetic hands attempt to simulate veins, muscles, tendons, finger nails, and finger prints, and their color resembles human pigmentation. So maybe the prosthetic arm has achieved a degree of human verisimilitude on par with false teeth. But this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature. In this case, there is no longer a sense of familiarity. It is uncanny. In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite human like, but the familiarity is negative. This is the uncanny valley.

It seems that Mori is suggesting that as the differences between the original and the simulacrum become fewer and fewer, the ‘uncanny valley’ will disappear. It’s possible but I suspect before that day occurs those of us who were brought up in a world without synthetic humans (androids) may experience an intensification of the feelings aroused by an encounter with the uncanny valley even as it disappears. For those who’d like a preview, check out Luisa Whitton’s What About The Heart? project.
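Mori never gave an equation for his diagram; the curve was drawn by intuition. Still, the qualitative shape is easy to caricature in a few lines of Python. The function below is purely illustrative (the dip’s position and depth are invented), but it reproduces the story the quote tells: familiarity climbing with human likeness, collapsing into the valley just short of full likeness, then recovering,

```python
# An illustrative (not canonical) uncanny-valley curve: Mori never gave an
# equation, so this shape is invented to match his qualitative diagram.
import numpy as np

def familiarity(likeness: np.ndarray) -> np.ndarray:
    """Stylized familiarity as a function of human likeness in [0, 1]."""
    rise = likeness  # general upward trend toward full human likeness
    valley = -1.5 * np.exp(-((likeness - 0.85) ** 2) / 0.003)  # sharp dip near ~85% likeness
    return rise + valley

if __name__ == "__main__":
    xs = np.linspace(0.0, 1.0, 21)
    for x, y in zip(xs, familiarity(xs)):
        bar = "#" * max(0, int((y + 1.5) * 10))  # crude text plot of the curve
        print(f"likeness {x:4.2f}  familiarity {y:+5.2f}  {bar}")
```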

They are becoming more like us: Geminoid robots and robots with more humanlike movement

We will be proceeding deep into the ‘uncanny valley’, that place where robots look so like humans, they make us uncomfortable. I have made a reference to the ‘uncanny valley’ in a previous posting that featured some Japanese dancing robots (October 18, 2010 posting [scroll down]). This is an order of magnitude more uncanny. See the video for yourself,

First test of the Geminoid DK. The nearly completed geminoid (twin robot) is operated by a human for the first time. Movements of the operator are reproduced in the robot. (from the description on YouTube)

Here’s a little more from a March 7, 2011 article by Katie Gatto on physorg.com,

The latest robot in the family of ultra-realistic androids, called the Geminoid series, is so realistic that it can actually be mistaken for the person it was designed to look like. The new bot, dubbed the Geminoid DK, was created by robotics firm Kokoro in Tokyo and is now being housed at Japan’s Advanced Telecommunications Research Institute International in Nara. The robot was designed to look like Associate Professor Henrik Scharfe of Aalborg University in Denmark.

As for why anyone would want a robot that so closely resembled themselves, I can think of a few reasons but Scharfe has used this as an opportunity to embark on a study (from the March 7, 2011 article by Kit Eaton on Fast Company),

Scharfe is an associate professor at Aalborg University in Denmark and is director of the center for Computer-Mediated Epistemology, which pretty much explains what all this robotics tech is all about–Epistemology is the philosophical study of knowledge, centering on the question of what’s “true” knowledge versus “false” or “inadequate” knowledge. Scharfe intends to use the robot to probe “emotional affordances” between robots and humans, as well as “blended presence” (a partly digital, partly realistic way for people to telepresence themselves, demonstrated by weird prototypes like the Elfoid robot-phone we covered the other day). The device will also be used to look at cultural differences in how people interact with robots–for example in the U.S. robots may be perceived as threatening, or mere simple tools, but in Japan they’re increasingly accepted as a part of society.

Here’s a picture of the ‘real’ Scharfe with the ‘Geminoid’ Scharfe,

Image from Geminoid Facebook page

You can click through to the Geminoid Facebook page from here. Here’s more about Geminoid research (from the Geminoid DK website),

Introduction to Geminoid research

The first geminoid, HI-1, was created in 2005 by Prof. Hiroshi Ishiguro of ATR and the Tokyo-based firm, Kokoro. A geminoid is an android, designed to look exactly like its master, and is controlled through a computer system that replicates the facial movements of the operator in the robot.

In the spring of 2010, a new geminoid was created. The new robot, Geminoid-F was a simpler version of the original HI-1, and it was also more affordable, making it reasonable to acquire one for humanistic research in Human Robot Interaction.

Geminoid|DK will be the first of its kind outside of Japan, and is intended to advance android science and philosophy, in seeking answers to fundamental questions, many of which have also occupied the Japanese researchers. The most important questions are:

– What is a human?
– What is presence?
– What is a relation?
– What is identity?
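The Geminoid DK website’s description above, a computer system replicating the operator’s facial movements in the robot, is essentially a per-frame retargeting loop. Here’s a minimal Python sketch of the idea; the tracked features, gains, and actuator names are all hypothetical, invented for illustration rather than taken from the actual Geminoid software,

```python
# Hypothetical sketch of a geminoid-style teleoperation loop: facial features
# tracked on the operator are mapped each frame onto the robot's actuators.
# The feature names, gains, and actuator channels are invented for illustration.

def fake_tracker_frame() -> dict:
    """Stand-in for a real face tracker; returns one frame of measurements (0.0-1.0)."""
    return {"mouth_open": 0.3, "left_brow_raise": 0.1, "head_yaw": 0.5}

# Map tracked features onto actuator channels with per-channel gains.
FEATURE_TO_ACTUATOR = {
    "mouth_open":      ("jaw_servo", 0.9),
    "left_brow_raise": ("brow_servo_l", 1.0),
    "head_yaw":        ("neck_servo", 0.7),
}

def retarget(frame: dict) -> dict:
    """Convert one tracker frame into actuator commands."""
    commands = {}
    for feature, value in frame.items():
        actuator, gain = FEATURE_TO_ACTUATOR[feature]
        commands[actuator] = max(0.0, min(1.0, gain * value))  # clamp to valid range
    return commands

if __name__ == "__main__":
    print(retarget(fake_tracker_frame()))
```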

If that isn’t enough, there’s research at Georgia Tech (US) being done on how to make robots move in a more humanlike fashion (from the March 8, 2011 article by Kit Eaton on Fast Company),

Which is where research from Georgia Tech comes in. Based on their research droid Simon who looks distinctly robotic with a comedic head and glowing “ears,” a team working in the Socially Intelligent Machines Lab has been trying to teach Simon to move like humans do–forcing less machine-like gestures from his solid limbs. The trick was to record real human subjects performing a series of moves in a motion-capture studio, then taking the data and using it to program Simon, being careful (via a clever algorithm) to replicate the fluid multiple-joint rotations a human body does when swinging a limb between one position and the next, and which robot movements tend to avoid.

Then the team got volunteers to observe Simon in action, and asked them to identify the kinds of movements he was making. When a more smooth, fluid robot movement was made, the volunteers were better at identifying the gesture compared to a more “robotic” movement. To double-check the algorithm’s effectiveness the researchers then asked the human volunteers to mimic the gestures they thought the robot was making, tapping into the unconscious part of their minds that recognize human tics: And again, the volunteers were better at correctly mimicking the gesture when the human-like algorithm was applied to Simon’s moves.

Why’s this research important? Because as robots become increasingly a part of everyday human life, we need to trust them and interact with them normally. Just as other research tries to teach robots to move in ways that can’t hurt us, this work will create robots that move in subtle ways to communicate physically with nearby people, aiding their incorporation into society. In medical professional roles, which are some of the first places humanoid robots may find work, this sort of acceptance could be absolutely crucial.
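Eaton’s article doesn’t name the ‘clever algorithm’ the Georgia Tech team used, so the sketch below shouldn’t be read as their method. It shows one standard way to get fluid multiple-joint rotations from motion-capture keyframes: store each joint’s rotation as a unit quaternion and interpolate with slerp (spherical linear interpolation) rather than stepping each angle linearly. The keyframes here are made up for illustration,

```python
# Hypothetical sketch of smoothing motion-capture keyframes for a robot limb.
# Joint rotations are stored as unit quaternions and blended with slerp, which
# gives the fluid multi-joint motion the article describes, instead of the
# "robotic" look of stepping each joint angle linearly and independently.
import numpy as np

def slerp(q0: np.ndarray, q1: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = np.dot(q0, q1)
    if dot < 0.0:        # take the short way around
        q1, dot = -q1, -dot
    if dot > 0.9995:     # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Two hypothetical mocap keyframes (w, x, y, z) for a shoulder joint.
key_a = np.array([1.0, 0.0, 0.0, 0.0])                              # rest pose
key_b = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8), 0.0, 0.0])  # 45-degree roll

if __name__ == "__main__":
    for step in range(6):
        t = step / 5
        print(f"t={t:.1f}  q={slerp(key_a, key_b, t).round(3)}")
```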

It seems that researchers believe that the ‘uncanny valley’ doesn’t necessarily have to exist forever and at some point, people will accept humanoid robots without hesitation. In the meantime, here’s a diagram of the ‘uncanny valley’,

From the article on Android Science by Masahiro Mori (translated by Karl F. MacDorman and Takashi Minato)

Here’s what Mori (the person who coined the term) had to say about the ‘uncanny valley’ (from Android Science),

Recently there are many industrial robots, and as we know the robots do not have a face or legs, and just rotate or extend or contract their arms, and they bear no resemblance to human beings. Certainly the policy for designing these kinds of robots is based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but their appearance is not evaluated. If we plot these industrial robots on a graph of familiarity versus appearance, they lie near the origin (see Figure 1 [above]). So they bear little resemblance to a human being, and in general people do not find them to be familiar. But if the designer of a toy robot puts importance on a robot’s appearance rather than its function, the robot will have a somewhat humanlike appearance with a face, two arms, two legs, and a torso. This design lets children enjoy a sense of familiarity with the humanoid toy. So the toy robot is approaching the top of the first peak.

Of course, human beings themselves lie at the final goal of robotics, which is why we make an effort to build humanlike robots. For example, a robot’s arms may be composed of a metal cylinder with many bolts, but to achieve a more humanlike appearance, we paint over the metal in skin tones. These cosmetic efforts cause a resultant increase in our sense of the robot’s familiarity. Some readers may have felt sympathy for handicapped people they have seen who attach a prosthetic arm or leg to replace a missing limb. But recently prosthetic hands have improved greatly, and we cannot distinguish them from real hands at a glance. Some prosthetic hands attempt to simulate veins, muscles, tendons, finger nails, and finger prints, and their color resembles human pigmentation. So maybe the prosthetic arm has achieved a degree of human verisimilitude on par with false teeth. But this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature. In this case, there is no longer a sense of familiarity. It is uncanny. In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite human like, but the familiarity is negative. This is the uncanny valley.

It’s a very interesting interpretation of the diagram. The article is definitely worth reading, although you won’t find a reference to the zombies that represent the bottom of the ‘uncanny valley’. Perhaps there’s something about them in the original article printed in Energy (1970) 7(4), pp. 33-35?

ETA April 12, 2011: Someone sent me a link to this March 8, 2011 posting by Reid of the Analytic Design Group. It offers another perspective, this one being mildly cautionary.