Tag Archives: political philosophy

They is becoming more like us: Geminoid robots and robots with more humanlike movement

We will be proceeding deep into the ‘uncanny valley’, that place where robots look so much like humans that they make us uncomfortable. I made a reference to the ‘uncanny valley’ in a previous posting that featured some Japanese dancing robots (October 18, 2010 posting [scroll down]). This is an order of magnitude more uncanny. See the video for yourself,

First test of the Geminoid DK. The nearly completed geminoid (twin robot) is operated by a human for the first time. Movements of the operator is reproduced in the robot. (from the description on Youtube)

Here’s a little more from a March 7, 2011 article by Katie Gatto on physorg.com,

The latest robot in the family of ultra-realistic androids, called the Geminoid series, is so realistic that it can actually be mistaken for the person it was designed to look like. The new bot, dubbed the Geminoid DK, was created by robotics firm Kokoro in Tokyo and is now being housed at Japan’s Advanced Telecommunications Research Institute International in Nara. The robot was designed to look like Associate Professor Henrik Scharfe of Aalborg University in Denmark.

As for why anyone would want a robot that so closely resembles them, I can think of a few reasons, but Scharfe has used this as an opportunity to embark on a study (from the March 7, 2011 article by Kit Eaton on Fast Company),

Scharfe is an associate professor at Aalborg University in Denmark and is director of the center for Computer-Mediated Epistemology, which pretty much explains what all this robotics tech is all about–Epistemology is the philosophical study of knowledge, centering on the question of what’s “true” knowledge versus “false” or “inadequate” knowledge. Scharfe intends to use the robot to probe “emotional affordances” between robots and humans, as well as “blended presence” (a partly digital, partly realistic way for people to telepresence themselves, demonstrated by weird prototypes like the Elfoid robot-phone we covered the other day). The device will also be used to look at cultural differences in how people interact with robots–for example in the U.S. robots may be perceived as threatening, or mere simple tools, but in Japan they’re increasingly accepted as a part of society.

Here’s a picture of the ‘real’ Scharfe with the ‘Geminoid’ Scharfe,

Image from Geminoid Facebook page

You can click through to the Geminoid Facebook page from here. Here’s more about Geminoid research (from the Geminoid DK website),

Introduction to Geminoid research

The first geminoid, HI-1, was created in 2005 by Prof. Hiroshi Ishiguro of ATR and the Tokyo-based firm, Kokoro. A geminoid is an android, designed to look exactly as its master, and is controlled through a computer system that replicates the facial movements of the operator in the robot.

In the spring of 2010, a new geminoid was created. The new robot, Geminoid-F was a simpler version of the original HI-1, and it was also more affordable, making it reasonable to acquire one for humanistic research in Human Robot Interaction.

Geminoid|DK will be the first of its kind outside of Japan, and is intended to advance android science and philosophy, in seeking answers to fundamental questions, many of which have also occupied the Japanese researchers. The most important questions are:

– What is a human?
– What is presence?
– What is a relation?
– What is identity?

If that isn’t enough, there’s research at Georgia Tech (US) being done on how to make robots move in a more humanlike fashion (from the March 8, 2011 article by Kit Eaton on Fast Company),

Which is where research from Georgia Tech comes in. Based on their research droid Simon who looks distinctly robotic with a comedic head and glowing “ears,” a team working in the Socially Intelligent Machines Lab has been trying to teach Simon to move like humans do–forcing less machine-like gestures from his solid limbs. The trick was to record real human subjects performing a series of moves in a motion-capture studio, then taking the data and using it to program Simon, being careful (via a clever algorithm) to replicate the fluid multiple-joint rotations a human body does when swinging a limb between one position and the next, and which robot movements tend to avoid.
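To get a rough feel for the difference the article describes, here is a toy Python sketch contrasting coordinated, smoothly accelerating multi-joint motion with a stiff constant-velocity move. The joint names, angles, and the cosine easing profile are all invented for illustration; this is not the Georgia Tech lab’s actual algorithm.

```python
import math

def linear(t):
    """Constant-velocity timing profile: abrupt starts and stops read as 'robotic'."""
    return t

def ease_in_out(t):
    """Smooth acceleration and deceleration (cosine easing), closer to how
    human limbs actually speed up and slow down mid-gesture."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

def interpolate_pose(start, end, t, profile):
    """Blend ALL joint angles simultaneously with the given timing profile,
    rather than rotating one joint after another."""
    s = profile(t)
    return [a + s * (b - a) for a, b in zip(start, end)]

# Example: shoulder, elbow, wrist angles (degrees) for a wave-like gesture.
start_pose = [0.0, 10.0, 0.0]
end_pose = [90.0, 45.0, 20.0]

# Midway through the smooth gesture, every joint has moved partway at once.
midpoint = interpolate_pose(start_pose, end_pose, 0.5, ease_in_out)
```

The point of the sketch is simply that a human-feeling gesture rotates several joints together with soft velocity ramps, where stereotypically robotic motion moves joints at constant speed (or one at a time).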

Then the team got volunteers to observe Simon in action, and asked them to identify the kinds of movements he was making. When a more smooth, fluid robot movement was made, the volunteers were better at identifying the gesture compared to a more “robotic” movement. To double-check the algorithm’s effectiveness the researchers then asked the human volunteers to mimic the gestures they thought the robot was making, tapping into the unconscious part of their minds that recognize human tics: And again, the volunteers were better at correctly mimicking the gesture when the human-like algorithm was applied to Simon’s moves.

Why’s this research important? Because as robots become increasingly a part of every day human life, we need to trust them and interact with them normally. Just as other research tries to teach robots to move in ways that can’t hurt us, this work will create robots that move in subtle ways to communicate physically with nearby people, aiding their incorporation into society. In medical professional roles, which are some of the first places humanoid robots may find work, this sort of acceptance could be absolutely crucial.

It seems that researchers believe that the ‘uncanny valley’ doesn’t necessarily have to exist forever and at some point, people will accept humanoid robots without hesitation. In the meantime, here’s a diagram of the ‘uncanny valley’,

From the article on Android Science by Masahiro Mori (translated by Karl F. MacDorman and Takashi Minato)

Here’s what Mori (the person who coined the term) had to say about the ‘uncanny valley’ (from Android Science),

Recently there are many industrial robots, and as we know the robots do not have a face or legs, and just rotate or extend or contract their arms, and they bear no resemblance to human beings. Certainly the policy for designing these kinds of robots is based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but their appearance is not evaluated. If we plot these industrial robots on a graph of familiarity versus appearance, they lie near the origin (see Figure 1 [above]). So they bear little resemblance to a human being, and in general people do not find them to be familiar. But if the designer of a toy robot puts importance on a robot’s appearance rather than its function, the robot will have a somewhat humanlike appearance with a face, two arms, two legs, and a torso. This design lets children enjoy a sense of familiarity with the humanoid toy. So the toy robot is approaching the top of the first peak.

Of course, human beings themselves lie at the final goal of robotics, which is why we make an effort to build humanlike robots. For example, a robot’s arms may be composed of a metal cylinder with many bolts, but to achieve a more humanlike appearance, we paint over the metal in skin tones. These cosmetic efforts cause a resultant increase in our sense of the robot’s familiarity. Some readers may have felt sympathy for handicapped people they have seen who attach a prosthetic arm or leg to replace a missing limb. But recently prosthetic hands have improved greatly, and we cannot distinguish them from real hands at a glance. Some prosthetic hands attempt to simulate veins, muscles, tendons, finger nails, and finger prints, and their color resembles human pigmentation. So maybe the prosthetic arm has achieved a degree of human verisimilitude on par with false teeth. But this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature. In this case, there is no longer a sense of familiarity. It is uncanny. In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite human like, but the familiarity is negative. This is the uncanny valley.
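Mori’s figure can be caricatured with a toy function: familiarity rises with human likeness, then a sharp dip (the valley) opens up near, but not at, full likeness. Every number below is invented purely for illustration; Mori’s original graph is qualitative, not a fitted curve.

```python
import math

def familiarity(likeness):
    """Toy uncanny-valley curve. likeness runs from 0 (industrial robot)
    to 1 (healthy human). The Gaussian dip centred at 85% likeness stands
    in for the valley; its position and depth are purely illustrative."""
    rising = likeness  # general trend: more humanlike means more familiar
    valley = 1.5 * math.exp(-((likeness - 0.85) ** 2) / 0.003)
    return rising - valley

# A toy robot (~60% likeness) sits on the first peak with positive familiarity;
# a too-real prosthetic hand (~85%) falls into negative familiarity, Mori's
# "strangeness"; a healthy human (100%) climbs back out of the valley.
```

In Mori’s terms, `familiarity(0.85)` coming out negative is exactly the “strangeness represented by negative familiarity” at the bottom of the valley.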

It’s a very interesting interpretation of the diagram. The article is definitely worth reading, although you won’t find a reference to the zombies that represent the bottom of the ‘uncanny valley’. Perhaps there’s something about them in the original article printed in Energy, (1970) 7(4), pp. 33-35?

ETA April 12, 2011: Someone sent me a link to this March 8, 2011 posting by Reid of the Analytic Design Group. It offers another perspective, this one being mildly cautionary.

Intersection of philosophy, science policy, and nanotechnology regulation

After coming across a mention of John Rawls and his notions about how people and groups with diverse interests can come to agreements on social norms in a July 11, 2010 posting by Richard Jones (Soft Machines blog), I wondered why I hadn’t heard of Rawls before and how his thinking might apply to nanotechnology regulatory frameworks.

Assuming I might not be alone in my ignorance of Rawls’ work, here’s a brief description from a Wikipedia essay,

John Bordley Rawls (February 21, 1921 – November 24, 2002) was an American philosopher and a leading figure in moral and political philosophy. … His magnum opus, A Theory of Justice (1971), is now regarded as “one of the primary texts in political philosophy.”[1] His work in political philosophy, dubbed Rawlsianism,[2] takes as its starting point the argument that “most reasonable principles of justice are those everyone would accept and agree to from a fair position.”[1]

(The footnote details can be found by following the essay link.) I think the idea of people being able to come to agreements when they operate from a fair position is interesting, and it seems to be borne out by a study in the US that Steffen Foss Hansen recently published in the Journal of Nanoparticle Research. Michael Berger at Nanowerk has written an in-depth article about the study and multicriteria mapping, the technique used to measure and evaluate interviewees’ positions on nanotechnology regulatory frameworks. From the Berger article,

Multicriteria Mapping [MCM] is a computer-based decision analysis technique that provides a way of appraising a series of different potential ways forward on a complex and controversial policy problem. Like other multicriteria approaches, it involves developing a set of criteria, evaluating the performance of each option under each criterion, and weighting each criterion according to its relative importance.

Hansen interviewed 26 stakeholders, including academics, public civil servants, corporate lawyers, [public interest groups,] and representatives from worker unions, industrial companies, and trade associations.

One aspect of this research that I thought particularly useful is that the interviews are structured dynamically. From the study,

Once the criteria had been defined, the interviewee was asked to evaluate the relative performance of the different policy options on a numerical scale (0–100) under each of the criteria one-by-one. Zero representing the worst relative performance and a 100 the best. In order to allow for uncertainty in the estimation MCM allows the interviewee to give a range (e.g., 20–30) and to make worst- and best-case assumptions. The lowest values assigned to an option would then reflect the option considered under worst case assumptions whereas the highest would reflect the same option considered under best-case assumptions. Throughout this scoring process the interviewee was asked to explain the value or range assigned to options and assumptions made. One interview had to be terminated at this stage of the interview as the participant realized that he/she had yet to develop a formalized opinion on the most preferred options. Others expressed some dislike with having to put a numerical estimate on something which they normally only discuss in qualitative terms. Others again found it challenging to have to look at all the options through all their criteria scoring and explaining the scoring of up to 72 combinations of policy options and criteria. Normally they would not have to explain their position in such depth.  …  MCM is an iterative process, so interviewees were free to return to review earlier steps of the process at any stage of the interview. (Journal of Nanoparticle Research, vol. 12, p. 1963)
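The arithmetic behind that scoring step is straightforward to sketch: each option gets a (worst-case, best-case) score per criterion, and the criteria are weighted by relative importance to produce a weighted range per option. The option names, criteria, weights, and scores below are all invented for illustration; the real MCM software is considerably more elaborate than this.

```python
# Hypothetical MCM-style scoring: weights and (low, high) scores are invented.
criteria_weights = {"health protection": 0.5, "feasibility": 0.3, "cost": 0.2}

# scores[option][criterion] = (worst-case, best-case) on the 0-100 scale
scores = {
    "incremental approach": {"health protection": (60, 80),
                             "feasibility": (70, 90),
                             "cost": (50, 70)},
    "complete ban":         {"health protection": (80, 95),
                             "feasibility": (10, 30),
                             "cost": (5, 20)},
}

def weighted_range(option):
    """Weighted worst-case and best-case totals for one option."""
    lo = sum(w * scores[option][c][0] for c, w in criteria_weights.items())
    hi = sum(w * scores[option][c][1] for c, w in criteria_weights.items())
    return lo, hi

for option in scores:
    lo, hi = weighted_range(option)
    print(f"{option}: {lo:.1f}-{hi:.1f}")
```

Note how the interviewee’s uncertainty survives into the result: an option is characterized by a range, not a single number, so two options can overlap rather than being forced into a strict ranking.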

Bravo to the interviewees for going through a demanding process and putting their opinions to the test. Also, I understood from reading the study that MCM captures both quantitative (as the preceding excerpt shows) and qualitative data, an approach I’ve always favoured.

Berger’s article goes on to discuss the results from the study,

“Adopting an incremental approach and implementing a new regulatory framework have been evaluated as the best options whereas a complete ban and no additional regulation of nanotechnology were the least favorable” Hansen explains the key findings to Nanowerk.

Participants described their idea of an ‘incremental approach’ as “…launching an incremental process using existing legislative structures—e.g., dangerous substances legislation, classification and labeling, cosmetic legislation, etc.—to the maximum, revisiting them, and, when appropriate only, amending them…” and a ‘new regulatory framework’ as “…launching a comprehensive, in-depth regulatory process specific to nanotechnologies that aims at developing an entirely new legislative framework that tries to take all the widely different nanomaterials and applications into consideration.”

Hansen notes that comparing the ranking of the various options by the stakeholder groups reveals that an incremental approach was ranked highest by a majority of the various stakeholder groups, e.g., civil servants, public interest groups, industrial company representatives, and corporate lawyers.

Who would have thought that the most extreme ends of opinion, as represented by public interest groups that usually favour the precautionary principle and industrial company representatives who argue in favour of little or voluntary regulation, could agree on an incremental approach? I suppose it gets back to Rawls and his notion of coming to an agreement from “a fair position.”

More work needs to be done: it’s a single study, only 26 interviews took place, the MCM is a snapshot of a moment in time and may no longer reflect the interviewees’ personal opinions, and the regulatory situation in the US has changed since these interviews took place. Still, with all these caveats, and I’m sure there are others, the study offers encouraging news about diverse groups being able to come to an agreement on the subject of nanotechnology regulatory frameworks.