Tag Archives: Masahiro Mori

Robots with living human skin tissue?

So far, it looks like they’ve managed a single robotic finger. I expect it will take a great deal more work before an entire robotic hand is covered in living skin. BTW, I have a few comments at the end of this post.

Caption: Illustration showing the cutting and healing process of the robotic finger (A), its anchoring structure (B) and fabrication process (C). Credit: ©2022 Takeuchi et al.

I have two news releases highlighting the work. This is a June 9, 2022 Cell Press news release,

From action heroes to villainous assassins, biohybrid robots made of both living and artificial materials have been at the center of many sci-fi fantasies, inspiring today’s robotic innovations. It’s still a long way until human-like robots walk among us in our daily lives, but scientists from Japan are bringing us one step closer by crafting living human skin on robots. The method developed, presented June 9 in the journal Matter, not only gave a robotic finger skin-like texture, but also water-repellent and self-healing functions.

“The finger looks slightly ‘sweaty’ straight out of the culture medium,” says first author Shoji Takeuchi, a professor at the University of Tokyo, Japan. “Since the finger is driven by an electric motor, it is also interesting to hear the clicking sounds of the motor in harmony with a finger that looks just like a real one.”

Looking “real” like a human is one of the top priorities for humanoid robots that are often tasked to interact with humans in healthcare and service industries. A human-like appearance can improve communication efficiency and evoke likability. While current silicone skin made for robots can mimic human appearance, it falls short when it comes to delicate textures like wrinkles and lacks skin-specific functions. Attempts at fabricating living skin sheets to cover robots have also had limited success, since it’s challenging to conform them to dynamic objects with uneven surfaces.

“With that method, you have to have the hands of a skilled artisan who can cut and tailor the skin sheets,” says Takeuchi. “To efficiently cover surfaces with skin cells, we established a tissue molding method to directly mold skin tissue around the robot, which resulted in a seamless skin coverage on a robotic finger.”

To craft the skin, the team first submerged the robotic finger in a cylinder filled with a solution of collagen and human dermal fibroblasts, the two main components that make up the skin’s connective tissues. Takeuchi says the study’s success lies within the natural shrinking tendency of this collagen and fibroblast mixture, which shrank and tightly conformed to the finger. Like paint primers, this layer provided a uniform foundation for the next coat of cells—human epidermal keratinocytes—to stick to. These cells make up 90% of the outermost layer of skin, giving the robot a skin-like texture and moisture-retaining barrier properties.

The crafted skin had enough strength and elasticity to bear the dynamic movements as the robotic finger curled and stretched. The outermost layer was thick enough to be lifted with tweezers and repelled water, which provides various advantages in performing specific tasks like handling electrostatically charged tiny polystyrene foam, a material often used in packaging. When wounded, the crafted skin could even self-heal like humans’ with the help of a collagen bandage, which gradually morphed into the skin and withstood repeated joint movements.

“We are surprised by how well the skin tissue conforms to the robot’s surface,” says Takeuchi. “But this work is just the first step toward creating robots covered with living skin.” The developed skin is much weaker than natural skin and can’t survive long without constant nutrient supply and waste removal. Next, Takeuchi and his team plan to address those issues and incorporate more sophisticated functional structures within the skin, such as sensory neurons, hair follicles, nails, and sweat glands.

“I think living skin is the ultimate solution to give robots the look and touch of living creatures since it is exactly the same material that covers animal bodies,” says Takeuchi.

A June 10, 2022 University of Tokyo news release (also on EurekAlert but published June 9, 2022) covers some of the same ground while providing more technical details,

Researchers from the University of Tokyo pool knowledge of robotics and tissue culturing to create a controllable robotic finger covered with living skin tissue. The robotic digit had living cells and supporting organic material grown on top of it for ideal shaping and strength. As the skin is soft and can even heal itself, it could be useful in applications that require a gentle touch but also robustness. The team aims to add other kinds of cells into future iterations, giving devices the ability to sense as we do.

Professor Shoji Takeuchi is a pioneer in the field of biohybrid robots, the intersection of robotics and bioengineering. Together with researchers from around the University of Tokyo, he explores things such as artificial muscles, synthetic odor receptors, lab-grown meat, and more. His most recent creation is both inspired by and aims to aid medical research on skin damage such as deep wounds and burns, as well as help advance manufacturing.

“We have created a working robotic finger that articulates just as ours does, and is covered by a kind of artificial skin that can heal itself,” said Takeuchi. “Our skin model is a complex three-dimensional matrix that is grown in situ on the finger itself. It is not grown separately then cut to size and adhered to the device; our method provides a more complete covering and is more strongly anchored too.”

Three-dimensional skin models have been used for some time for cosmetic and drug research and testing, but this is the first time such materials have been used on a working robot. In this case, the synthetic skin is made from a lightweight collagen matrix known as a hydrogel, within which several kinds of living skin cells called fibroblasts and keratinocytes are grown. The skin is grown directly on the robotic component which proved to be one of the more challenging aspects of this research, requiring specially engineered structures that can anchor the collagen matrix to them, but it was worth it for the aforementioned benefits.

“Our creation is not only soft like real skin but can repair itself if cut or damaged in some way. So we imagine it could be useful in industries where in situ repairability is important as are humanlike qualities, such as dexterity and a light touch,” said Takeuchi. “In the future, we will develop more advanced versions by reproducing some of the organs found in skin, such as sensory cells, hair follicles and sweat glands. Also, we would like to try to coat larger structures.”

The main long-term aim for this research is to open up new possibilities in advanced manufacturing industries. Having humanlike manipulators could allow for the automation of things currently only achievable by highly skilled professionals. Other areas such as cosmetics, pharmaceuticals and regenerative medicine could also benefit. This could potentially reduce cost, time and complexity of research in these areas and could even reduce the need for animal testing.

Here’s a link to and a citation for the paper,

Living skin on a robot by Michio Kawai, Minghao Nie, Haruka Oda, Yuya Morimoto, Shoji Takeuchi. Matter DOI: https://doi.org/10.1016/j.matt.2022.05.019 Published: June 9, 2022

This paper appears to be open access.

There are more images, and there’s at least one video, all of which can be found by clicking on the links to one or both of the news releases and to the paper. Personally, I found the images fascinating and …

Frankenstein, cyborgs, and more

The word is creepy. I find the robot finger images fascinating and creepy. The work brings to mind Frankenstein (by Mary Shelley) and The Island of Dr. Moreau (by H. G. Wells), both of which feature cautionary tales. Dr. Frankenstein tries to bring to life a dead ‘person’ assembled from parts of various corpses, and Dr. Moreau attempts to create hybrids composed of humans and animals. It’s fascinating how 19th century nightmares prefigure some of the research being performed now.

The work also brings to mind the ‘uncanny valley’, a term coined by Masahiro Mori, where people experience discomfort when something that’s not human seems too human. (I have an excerpt from an essay that Mori wrote about the uncanny valley in my March 10, 2011 posting; scroll down about 50% of the way.) The diagram which accompanies it illustrates the gap between the least uncanny or the familiar (a healthy person, a puppet, etc.) and the most uncanny or the unfamiliar (a corpse, a zombie, a prosthetic hand).

Mori notes that the uncanny valley is not immovable; things change and the unfamiliar becomes familiar. Presumably, one day, I will no longer find robots with living skin to be creepy.

All of this changes the meaning (for me) of a term I coined for this site, ‘machine/flesh’. At the time, I was thinking of prosthetics and implants and how deeply they are being integrated into the body. But this research reverses the process. Now, the body (skin in this case) is being added to the machine (robot).

Uncanny Valley: Being Human in the Age of AI (artificial intelligence) at the de Young museum (San Francisco, US) February 22 – October 25, 2020

So we’re still stuck in 20th century concepts about artificial intelligence (AI), eh? Sean Captain’s February 21, 2020 article (for Fast Company) about the new AI exhibit in San Francisco suggests that artists can help us revise our ideas (Note: Links have been removed),

Though we’re well into the age of machine learning, popular culture is stuck with a 20th century notion of artificial intelligence. While algorithms are shaping our lives in real ways—playing on our desires, insecurities, and suspicions in social media, for instance—Hollywood is still feeding us clichéd images of sexy, deadly robots in shows like Westworld and Star Trek Picard.

The old-school humanlike sentient robot “is an important trope that has defined the visual vocabulary around this human-machine relationship for a very long period of time,” says Claudia Schmuckli, curator of contemporary art and programming at the Fine Arts Museums of San Francisco. It’s also a naïve and outdated metaphor, one she is challenging with a new exhibition at San Francisco’s de Young Museum, called Uncanny Valley, that opens on February 22 [2020].

The show’s name [Uncanny Valley: Being Human in the Age of AI] is a kind of double entendre referencing both the dated and emerging conceptions of AI. Coined in the 1970s, the term “uncanny valley” describes the rise and then sudden drop off of empathy we feel toward a machine as its resemblance to a human increases. Putting a set of cartoony eyes on a robot may make it endearing. But fitting it with anatomically accurate eyes, lips, and facial gestures gets creepy. As the gap between the synthetic and organic narrows, the inability to completely close that gap becomes all the more unsettling.

But the artists in this exhibit are also looking to another valley—Silicon Valley, and the uncanny nature of the real AI the region is building. “One of the positions of this exhibition is that it may be time to rethink the coordinates of the Uncanny Valley and propose a different visual vocabulary,” says Schmuckli.

Artist Stephanie Dinkins faces off with robot Bina48, a bot on display at the de Young Museum’s Uncanny Valley show. [Photo: courtesy of the artist; courtesy of the Fine Arts Museums of San Francisco]

From Captain’s February 21, 2020 article,

… the resemblance to humans is only synthetic-skin deep. Bina48 can string together a long series of sentences in response to provocative questions from Dinkins, such as, “Do you know racism?” But the answers are sometimes barely intelligible, or at least lack the depth and nuance of a conversation with a real human. The robot’s jerky attempts at humanlike motion also stand in stark contrast to Dinkins’s calm bearing and fluid movement. Advanced as she is by today’s standards, Bina48 is tragically far from the sci-fi concept of artificial life. Her glaring shortcomings hammer home why the humanoid metaphor is not the right framework for understanding at least today’s level of artificial intelligence.

For anybody who has more curiosity about the ‘uncanny valley’, there’s this Wikipedia entry.

For more details about the ‘Uncanny Valley: Being Human in the Age of AI’ exhibition, there’s this September 26, 2019 de Young museum news release,

What are the invisible mechanisms of current forms of artificial intelligence (AI)? How is AI impacting our personal lives and socioeconomic spheres? How do we define intelligence? How do we envision the future of humanity?

SAN FRANCISCO (September 26, 2019) — As technological innovation continues to shape our identities and societies, the question of what it means to be, or remain human has become the subject of fervent debate. Taking advantage of the de Young museum’s proximity to Silicon Valley, Uncanny Valley: Being Human in the Age of AI arrives as the first major exhibition in the US to explore the relationship between humans and intelligent machines through an artistic lens. Organized by the Fine Arts Museums of San Francisco, with San Francisco as its sole venue, Uncanny Valley: Being Human in the Age of AI will be on view from February 22 to October 25, 2020.

“Technology is changing our world, with artificial intelligence both a new frontier of possibility but also a development fraught with anxiety,” says Thomas P. Campbell, Director and CEO of the Fine Arts Museums of San Francisco. “Uncanny Valley: Being Human in the Age of AI brings artistic exploration of this tension to the ground zero of emerging technology, raising challenging questions about the future interface of human and machine.”

The exhibition, which extends through the first floor of the de Young and into the museum’s sculpture garden, explores the current juncture through philosophical, political, and poetic questions and problems raised by AI. New and recent works by an intergenerational, international group of artists and activist collectives—including Zach Blas, Ian Cheng, Simon Denny, Stephanie Dinkins, Forensic Architecture, Lynn Hershman Leeson, Pierre Huyghe, Christopher Kulendran Thomas in collaboration with Annika Kuhlmann, Agnieszka Kurant, Lawrence Lek, Trevor Paglen, Hito Steyerl, Martine Syms, and the Zairja Collective—will be presented.

The Uncanny Valley

In 1970 Japanese engineer Masahiro Mori introduced the concept of the “uncanny valley” as a terrain of existential uncertainty that humans experience when confronted with autonomous machines that mimic their physical and mental properties. An enduring metaphor for the uneasy relationship between human beings and lifelike robots or thinking machines, the uncanny valley and its edges have captured the popular imagination ever since. Over time, the rapid growth and affordability of computers, cloud infrastructure, online search engines, and data sets have fueled developments in machine learning that fundamentally alter our modes of existence, giving rise to a newly expanded uncanny valley.

“As our lives are increasingly organized and shaped by algorithms that track, collect, evaluate, and monetize our data, the uncanny valley has grown to encompass the invisible mechanisms of behavioral engineering and automation,” says Claudia Schmuckli, Curator in Charge of Contemporary Art and Programming at the Fine Arts Museums of San Francisco. “By paying close attention to the imminent and nuanced realities of AI’s possibilities and pitfalls, the artists in the exhibition seek to thicken the discourse around AI. Although fables like HBO’s sci-fi drama Westworld, or Spike Jonze’s feature film Her still populate the collective imagination with dystopian visions of a mechanized future, the artists in this exhibition treat such fictions as relics of a humanist tradition that has little relevance today.”

In Detail

Ian Cheng’s digitally simulated AI creature BOB (Bag of Beliefs) reflects on the interdependency of carbon and silicon forms of intelligence. An algorithmic Tamagotchi, it is capable of evolution, but its growth, behavior, and personality are molded by online interaction with visitors who assume collective responsibility for its wellbeing.

In A.A.I. (artificial artificial intelligence), an installation of multiple termite mounds of colored sand, gold, glitter and crystals, Agnieszka Kurant offers a vibrant critique of new AI economies, with their online crowdsourcing marketplace platforms employing invisible armies of human labor at sub-minimum wages.

Simon Denny’s Amazon worker cage patent drawing as virtual King Island Brown Thornbill cage (US 9,280,157 B2: “System and method for transporting personnel within an active workspace”, 2016) (2019) also examines the intersection of labor, resources, and automation. He presents 3-D prints and a cage-like sculpture based on an unrealized machine patent filed by Amazon to contain human workers. Inside the cage, an augmented reality application triggers the appearance of a King Island Brown Thornbill, a bird on the verge of extinction, casting human labor as the proverbial canary in the mine. The humanitarian and ecological costs of today’s data economy also inform a group of works by the Zairja Collective that reflect on the extractive dynamics of algorithmic data mining.

Hito Steyerl addresses the political risks of introducing machine learning into the social sphere. Her installation The City of Broken Windows presents a collision between commercial applications of AI in urban planning and communal and artistic acts of resistance against neighborhood tipping: one of its short films depicts a group of technicians purposefully smashing windows to teach an algorithm how to recognize the sound of breaking glass, and another follows a group of activists through a Camden, NJ, neighborhood as they work to keep decay at bay by replacing broken windows in abandoned homes with paintings.

Addressing the perpetuation of societal biases and discrimination within AI, Trevor Paglen’s They Took the Faces from the Accused and the Dead…(SD18) presents a large gridded installation of more than three thousand mugshots from the archives of the American National Standards Institute. The institute’s collections of such images were used to train early facial-recognition technologies — without the consent of those pictured. Lynn Hershman Leeson’s new installation Shadow Stalker critiques the problematic reliance on algorithmic systems, such as the military forecasting tool PredPol, now widely used for policing, that categorize individuals into preexisting and often false “embodied metrics.”

Stephanie Dinkins extends the inquiry into how value systems are built into AI and the construction of identity in Conversations with Bina48, examining the social robot’s (and by extension our society’s) coding of technology, race, gender and social equity. In the same territory, Martine Syms posits AI as a “shamespace” for misrepresentation. For Mythiccbeing she has created an avatar of herself that viewers can interact with through text messaging. But unlike service agents such as Siri and Alexa, who readily respond to questions and demands, Syms’s Teeny is a contrarious interlocutor, turning each interaction into an opportunity to voice personal observations and frustrations about racial inequality and social injustice.

Countering the abusive potential of machine learning, Forensic Architecture pioneers an application to the pursuit of social justice. Their proposition of a Model Zoo marks the beginnings of a new research tool for civil society built of military vehicles, missile fragments, and bomb clouds—evidence of human-rights violations by states and militaries around the world. Christopher Kulendran Thomas’s video Being Human, created in collaboration with Annika Kuhlmann, poses the philosophical question of what it means to be human when machines are able to synthesize human understanding ever more convincingly. Set in Sri Lanka, it employs AI-generated characters of singer Taylor Swift and artist Oscar Murillo to reflect on issues of individual authenticity, collective sovereignty, and the future of human rights.

Lawrence Lek’s sci-fi-inflected film Aidol, which explores the relationship between algorithmic automation and human creativity, projects this question into the future. It transports the viewer into the computer-generated “sinofuturist” world of the 2065 eSports Olympics: when the popular singer Diva enlists the super-intelligent Geomancer to help her stage her artistic comeback during the game’s halftime show, she unleashes an existential and philosophical battle that explodes the divide between humans and machines.

The Doors, a newly commissioned installation by Zach Blas, by contrast shines the spotlight back onto the present and on the culture and ethos of Silicon Valley — the ground zero for the development of AI. Inspired by the ubiquity of enclosed gardens on tech campuses, he has created an artificial garden framed by a six-channel video projected on glass panes that convey a sense of algorithmic psychedelia aiming to open new “doors of perception.” While luring visitors into AI’s promises, it also asks what might become possible when such glass doors begin to crack. 

Unveiled in late spring, Pierre Huyghe‘s Exomind (Deep Water), a sculpture of a crouched female nude with a live beehive as its head, will be nestled within the museum’s garden. With its buzzing colony pollinating the surrounding flora, it offers a poignant metaphor for the modeling of neural networks on the biological brain and an understanding of intelligence as grounded in natural forms and processes.

The Uncanny Valley: Being Human in the Age of AI event page features a link to something unexpected (scroll down about 40% of the way), a Statement on Eyal Weizman of Forensic Architecture,

On Thursday, February 13 [2020], Eyal Weizman of Forensic Architecture had his travel authorization to the United States revoked due to an “algorithm” that identified him as a security threat.

He was meant to be in the United States promoting multiple exhibitions including Uncanny Valley: Being Human in the Age of AI, opening on February 22 [2020] at the de Young museum in San Francisco.

Since 2018, Forensic Architecture has used machine learning / AI to aid in humanitarian work, using synthetic images—photorealistic digital renderings based around 3-D models—to train algorithmic classifiers to identify tear gas munitions and chemical bombs deployed against protesters worldwide, including in Hong Kong, Chile, the US, Venezuela, and Sudan.

Their project, Model Zoo, on view in Uncanny Valley represents a growing collection of munitions and weapons used in conflict today and the algorithmic models developed to identify them. It shows a collection of models being used to track and hold accountable human rights violators around the world. The piece joins work by 14 contemporary artists reflecting on the philosophical and political consequences of the application of AI into the social sphere.

We are deeply saddened that Weizman will not be allowed to travel to celebrate the opening of the exhibition. We stand with him and Forensic Architecture’s partner communities who continue to resist violent states and corporate practices, and who are increasingly exposed to the regime of “security algorithms.”

—Claudia Schmuckli, Curator-in-Charge, Contemporary Art & Programming, & Thomas P. Campbell, Director and CEO, Fine Arts Museums of San Francisco
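As an aside for the technically curious: training a classifier on photorealistic synthetic renders, as Forensic Architecture does when real photographs of a munition are scarce, is a standard machine learning pattern. Here’s a minimal scikit-learn sketch of that pattern using stand-in feature vectors in place of images; it is a generic illustration, not Forensic Architecture’s actual pipeline,

# Generic illustration of the synthetic-data pattern: train a classifier on
# computer-generated examples, then test on (here, simulated) "real" ones.
# Stand-in feature vectors replace renders and photographs; this is not
# Forensic Architecture's pipeline, just the general technique.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_set(n, shift, noise):
    # Two classes ("munition" vs. "background") as Gaussian feature clusters.
    munition = rng.normal(1.0 + shift, noise, size=(n, 8))
    background = rng.normal(0.0 + shift, noise, size=(n, 8))
    X = np.vstack([munition, background])
    y = np.array([1] * n + [0] * n)
    return X, y

# Plentiful synthetic training data; a small "real" test set with a slight
# domain shift, since renders never match photographs exactly.
X_train, y_train = make_set(2000, shift=0.0, noise=0.5)
X_test, y_test = make_set(100, shift=0.1, noise=0.6)

classifier = LogisticRegression().fit(X_train, y_train)
print(f"accuracy on 'real' data: {classifier.score(X_test, y_test):.2f}")

The hard engineering in the real version is closing the gap between renders and photographs; the sketch only gestures at that with a small ‘domain shift’ in the test set.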

There is a February 20, 2020 article (for Fast Company) by Eyal Weizman chronicling his experience of being denied entry by an algorithm. Do read it in its entirety (the Fast Company piece is itself an excerpt from Weizman’s essay) if you have the time; if not, here’s the description of how he tried to gain entry after being denied the first time,

The following day I went to the U.S. Embassy in London to apply for a visa. In my interview, the officer informed me that my authorization to travel had been revoked because the “algorithm” had identified a security threat. He said he did not know what had triggered the algorithm but suggested that it could be something I was involved in, people I am or was in contact with, places to which I had traveled (had I recently been in Syria, Iran, Iraq, Yemen, or Somalia or met their nationals?), hotels at which I stayed, or a certain pattern of relations among these things. I was asked to supply the Embassy with additional information, including 15 years of travel history, in particular where I had gone and who had paid for it. The officer said that Homeland Security’s investigators could assess my case more promptly if I supplied the names of anyone in my network whom I believed might have triggered the algorithm. I declined to provide this information.

I hope the exhibition is successful; it has certainly experienced a thought-provoking start.

Finally, I have often featured postings that discuss the ‘uncanny valley’. To find those postings, just use that phrase in the blog search engine. You might also want to search ‘Hiroshi Ishiguro’, a Japanese scientist and roboticist who specializes in humanoid robots.

What about the heart? and the quest to make androids lifelike

Japanese scientist Hiroshi Ishiguro has been mentioned here several times in the context of ‘lifelike’ robots. Accordingly, it’s no surprise to see Ishiguro’s name in a June 24, 2014 news item about uncannily lifelike robotic tour guides in a Tokyo museum (CBC (Canadian Broadcasting Corporation) News online),

The new robot guides at a Tokyo museum look so eerily human and speak so smoothly they almost outdo people — almost.

Japanese robotics expert Hiroshi Ishiguro, an Osaka University professor, says they will be useful for research on how people interact with robots and on what differentiates the person from the machine.

“Making androids is about exploring what it means to be human,” he told reporters Tuesday [June 24, 2014], “examining the question of what is emotion, what is awareness, what is thinking.”

In a demonstration, the remote-controlled machines moved their pink lips in time to a voice-over, twitched their eyebrows, blinked and swayed their heads from side to side. They stay seated but can move their hands.

Ishiguro and his robots were also mentioned in a May 29, 2014 article by Carey Dunne for Fast Company. The article concerned a photographic project of Luisa Whitton’s.

In her series “What About the Heart?,” British photographer Luisa Whitton documents one of the creepiest niches of the Japanese robotics industry: androids. Here, an eerily lifelike face made for a robot. [downloaded from http://www.fastcodesign.com/3031125/exposure/japans-uncanny-quest-to-humanize-robots?partner=rss]

From Dunne’s May 29, 2014 article (Note: Links have been removed),

We’re one step closer to a robot takeover. At least, that’s one interpretation of “What About the Heart?” a new series by British photographer Luisa Whitton. In 17 photos, Whitton documents one of the creepiest niches of the Japanese robotics industry–androids. These are the result of a growing group of scientists trying to make robots look like living, breathing people. Their efforts pose a question that’s becoming more relevant as Siri and her robot friends evolve: what does it mean to be human as technology progresses?

Whitton spent several months in Japan working with Hiroshi Ishiguro, a scientist who has constructed a robotic copy of himself. Ishiguro’s research focused on whether his robotic double could somehow possess his “Sonzai-Kan,” a Japanese term that translates to the “presence” or “spirit” of a person. It’s work that blurs the line between technology, philosophy, psychology, and art, using real-world studies to examine existential issues once reserved for speculation by the likes of Philip K. Dick or Sigmund Freud. And if this sounds like a sequel to Blade Runner, it gets weirder: after Ishiguro aged, he had plastic surgery so that his face still matched that of his younger, mechanical doppelganger.

I profiled Ishiguro’s robots (then called Geminoids) in a March 10, 2011 posting, which featured a Danish philosopher, Henrik Scharfe, who’d commissioned a Geminoid identical to himself for research purposes. He doesn’t seem to have published any papers about his experience, but there is this interview of Scharfe and his Geminoid twin by Aldith Hunkar (she’s very good) at the 2011 TEDxAmsterdam event,

Mary King’s 2007 research project, Robots and AI in Japan and The West, notes a contrast and provides an excellent primer (Note: A link has been removed),

The Japanese scientific approach and expectations of robots and AI are far more down to earth than those of their Western counterparts. Certainly, future predictions made by Japanese scientists are far less confrontational or sci-fi-like. In an interview via email, Canadian technology journalist Tim N. Hornyak described the Japanese attitude towards robots as being “that of the craftsman, not the philosopher” and cited this as the reason for “so many rosy imaginings of a future Japan in which robots are a part of people’s everyday lives.”

Hornyak, who is author of “Loving the Machine: The Art and Science of Japanese Robots,” acknowledges that apocalyptic visions do appear in manga and anime, but emphasizes that such forecasts do not exist in government circles or within Japanese companies. Hornyak also added that while AI has for many years taken a back seat to robot development in Japan, this situation is now changing. Honda, for example, is working on giving better brains to Asimo, which is already the world’s most advanced humanoid robot. Japan is also already legislating early versions of Asimov’s laws by introducing design requirements for next-generation mobile robots.

It does seem there might be more interest in the philosophical issues in Japan these days or possibly it’s a reflection of Ishiguro’s own current concerns (from Dunne’s May 29, 2014 article),

The project’s title derives from a discussion with Ishiguro about what it means to be human. “The definition of human will be more complicated,” Ishiguro said.

Dunne reproduces a portion of Whitton’s statement describing her purpose for these photographs,

Through Ishiguro, Whitton got in touch with a number of other scientists working on androids. “In the photographs, I am trying to subvert the traditional formula of portraiture and allure the audience into a debate on the boundaries that determine the dichotomy of the human/not human,” she writes in her artist statement. “The photographs become documents of objects that sit between scientific tool and horrid simulacrum.”

I’m not sure what she means by “horrid simulacrum” but she seems to be touching on the concept of the ‘uncanny valley’. Here’s a description I provided in a May 31, 2013 posting about animator Chris Landreth and his explorations of that valley within the context of his animated film, Subconscious Password,

Landreth also discusses the ‘uncanny valley’ and how he deliberately cast his film into that valley. For anyone who’s unfamiliar with the ‘uncanny valley’ I wrote about it in a Mar. 10, 2011 posting concerning Geminoid robots,

It seems that researchers believe that the ‘uncanny valley’ doesn’t necessarily have to exist forever and at some point, people will accept humanoid robots without hesitation. In the meantime, here’s a diagram of the ‘uncanny valley’,

From the article on Android Science by Masahiro Mori (translated by Karl F. MacDorman and Takashi Minato)

Here’s what Mori (the person who coined the term) had to say about the ‘uncanny valley’ (from Android Science),

Recently there are many industrial robots, and as we know the robots do not have a face or legs, and just rotate or extend or contract their arms, and they bear no resemblance to human beings. Certainly the policy for designing these kinds of robots is based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but their appearance is not evaluated. If we plot these industrial robots on a graph of familiarity versus appearance, they lie near the origin (see Figure 1 [above]). So they bear little resemblance to a human being, and in general people do not find them to be familiar. But if the designer of a toy robot puts importance on a robot’s appearance rather than its function, the robot will have a somewhat humanlike appearance with a face, two arms, two legs, and a torso. This design lets children enjoy a sense of familiarity with the humanoid toy. So the toy robot is approaching the top of the first peak.

Of course, human beings themselves lie at the final goal of robotics, which is why we make an effort to build humanlike robots. For example, a robot’s arms may be composed of a metal cylinder with many bolts, but to achieve a more humanlike appearance, we paint over the metal in skin tones. These cosmetic efforts cause a resultant increase in our sense of the robot’s familiarity. Some readers may have felt sympathy for handicapped people they have seen who attach a prosthetic arm or leg to replace a missing limb. But recently prosthetic hands have improved greatly, and we cannot distinguish them from real hands at a glance. Some prosthetic hands attempt to simulate veins, muscles, tendons, finger nails, and finger prints, and their color resembles human pigmentation. So maybe the prosthetic arm has achieved a degree of human verisimilitude on par with false teeth. But this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature. In this case, there is no longer a sense of familiarity. It is uncanny. In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite human like, but the familiarity is negative. This is the uncanny valley.
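Mori’s graph is easy to sketch for yourself. Here’s a minimal Python script that reproduces the qualitative shape (the specific function is my own invention for illustration, not Mori’s data): familiarity rises with human likeness, plunges into the valley near ‘almost human’, and recovers for a healthy person,

# A schematic "uncanny valley" curve: familiarity versus human likeness.
# The functional form is invented for illustration; Mori drew his curve
# by intuition, not from measured data.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)  # 0 = industrial robot, 1 = healthy person
trend = likeness ** 2                  # familiarity grows with human likeness...
valley = 1.4 * np.exp(-((likeness - 0.85) ** 2) / 0.003)  # ...until the "almost human" dip
familiarity = trend - valley

plt.plot(likeness, familiarity)
plt.axhline(0.0, color="gray", linewidth=0.5)  # negative familiarity = uncanny
plt.annotate("uncanny valley", xy=(0.85, familiarity.min()),
             xytext=(0.35, -0.45), arrowprops=dict(arrowstyle="->"))
plt.xlabel("human likeness")
plt.ylabel("familiarity")
plt.title("Schematic uncanny valley (illustrative only)")
plt.show()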


It seems that Mori is suggesting that as the differences between the original and the simulacrum become fewer and fewer, the ‘uncanny valley’ will disappear. It’s possible but I suspect before that day occurs those of us who were brought up in a world without synthetic humans (androids) may experience an intensification of the feelings aroused by an encounter with the uncanny valley even as it disappears. For those who’d like a preview, check out Luisa Whitton’s What About The Heart? project.

Canadian filmmaker Chris Landreth’s Subconscious Password explores the uncanny valley

I gather Chris Landreth’s short animation, Subconscious Password, hasn’t been officially released yet by the National Film Board (NFB) of Canada but there are clips and trailers which hint at some of the filmmaker’s themes. Landreth in a May 23, 2013 guest post for the NFB.ca blog spells out one of them,

Subconscious Password, my latest short film, travels to the inner mind of a fellow named Charles Langford, as he struggles to remember the name of his friend at a party. In his subconscious, he encounters a game show, populated with special guest stars:  archetypes, icons, distant memories, who try to help him find the connection he needs: His friend’s name.

The film is a psychological romp into a person’s inner mind where (I hope) you will see something of your own mind working, thinking, feeling. Even during a mundane act like remembering the name of an acquaintance at a party, someone you only vaguely remember. To me, mundane accomplishments like these are miracles we all experience many times each day.

Landreth also discusses the ‘uncanny valley’ and how he deliberately cast his film into that valley. For anyone who’s unfamiliar with the ‘uncanny valley’ I wrote about it in a Mar. 10, 2011 posting concerning Geminoid robots,

It seems that researchers believe that the ‘uncanny valley’ doesn’t necessarily have to exist forever and at some point, people will accept humanoid robots without hesitation. In the meantime, here’s a diagram of the ‘uncanny valley’,

From the article on Android Science by Masahiro Mori (translated by Karl F. MacDorman and Takashi Minato)

Here’s what Mori (the person who coined the term) had to say about the ‘uncanny valley’ (from Android Science),

Recently there are many industrial robots, and as we know the robots do not have a face or legs, and just rotate or extend or contract their arms, and they bear no resemblance to human beings. Certainly the policy for designing these kinds of robots is based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but their appearance is not evaluated. If we plot these industrial robots on a graph of familiarity versus appearance, they lie near the origin (see Figure 1 [above]). So they bear little resemblance to a human being, and in general people do not find them to be familiar. But if the designer of a toy robot puts importance on a robot’s appearance rather than its function, the robot will have a somewhat humanlike appearance with a face, two arms, two legs, and a torso. This design lets children enjoy a sense of familiarity with the humanoid toy. So the toy robot is approaching the top of the first peak.

Of course, human beings themselves lie at the final goal of robotics, which is why we make an effort to build humanlike robots. For example, a robot’s arms may be composed of a metal cylinder with many bolts, but to achieve a more humanlike appearance, we paint over the metal in skin tones. These cosmetic efforts cause a resultant increase in our sense of the robot’s familiarity. Some readers may have felt sympathy for handicapped people they have seen who attach a prosthetic arm or leg to replace a missing limb. But recently prosthetic hands have improved greatly, and we cannot distinguish them from real hands at a glance. Some prosthetic hands attempt to simulate veins, muscles, tendons, finger nails, and finger prints, and their color resembles human pigmentation. So maybe the prosthetic arm has achieved a degree of human verisimilitude on par with false teeth. But this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature. In this case, there is no longer a sense of familiarity. It is uncanny. In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite human like, but the familiarity is negative. This is the uncanny valley.

Landreth discusses the ‘uncanny valley’ in relation to animated characters,

Many of you know what this is. The Uncanny Valley describes a common problem that audiences have with CG-animated characters. Here’s a graph that shows this:

Follow the curvy line from the lower left. If a character is simple (like a stick figure) we have little or no empathy with it. A more complex character, like Snow White or Pixar’s Mr. Incredible, gives us more human-like mannerisms for us to identify with.

But then the Uncanny Valley kicks in. That curvy line changes direction, plunging downwards. This is the pit into which many characters from The Polar Express, Final Fantasy and Mars Needs Moms fall. We stop empathizing with these characters. They are unintentionally disturbing, like moving corpses. This is a big problem with realistic CGI characters: that unshakable perception that they are animated zombies. [zombie emphasis mine]

You’ll notice that the diagram from my posting features a zombie at the very bottom of the curve.

Landreth goes on to compare the ‘land’ in the uncanny valley to real estate,

… The value of land in the Uncanny Valley has plunged to zero. There are no buyers.

Well, except perhaps me.

Some of you know that my films have a certain obsession with visual realism with their human characters. I like doing this. I find value in this realism that goes beyond simply copying what humans look and act like. If used intelligently and with imagination, realism can capture something deeper, something weird and emotional and psychological about our collective experience on this planet. But it has to be honest. That’s hard.

He also explains what he’s hoping to accomplish by inhabiting the uncanny valley,

When making this film, we knew we were going into the Uncanny Valley. We did it because your subconscious processes, and mine, are like this valley. We project our waking world into our subconscious minds. The ‘characters’ in this inner world are realistic approximations of actual people, without actually being real. This is the miracle of how we get by. My protagonist, Charles, has a mixture of both realistic approximations and crazy warped versions of the people and icons in his life. He is indeed a bit off-kilter. But he gets by, like most of us do. As you probably have guessed, both Charles and the Host are self-portraits. I want to be honest in showing you this world. My own Uncanny Valley. You have one too. It’s something to celebrate.

On that note, here’s a clip from Subconscious Password,

Subconscious Password (Clip) by Chris Landreth, National Film Board of Canada

I last wrote about Landreth and his work in an April 14, 2010 posting (scroll down about 1/4 of the way) regarding mathematics and the arts. That post features excerpts from an interview with University of Toronto (Ontario, Canada) mathematician Karan Singh, who worked with Landreth on their award-winning film, Ryan.

Feeling artificial skin

In reading about some of the latest work on artificial skin and feeling, I was reminded of a passage from a description of the ‘uncanny valley’ by Masahiro Mori (excerpted from my March 10, 2011 posting about robots [geminoid robots, in particular])

… this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature.

According to a March 29, 2012 news item on Nanowerk, this state of affairs is about to change,

Sooner than later, robots may have the ability to “feel.” In a paper published online March 26 in Advanced Functional Materials (“Mechanical Resuscitation of Chemical Oscillations in Belousov–Zhabotinsky Gels”), a team of researchers from the University of Pittsburgh [Pitt] and the Massachusetts Institute of Technology (MIT) demonstrated that a nonoscillating gel can be resuscitated in a fashion similar to a medical cardiopulmonary resuscitation. These findings pave the way for the development of a wide range of new applications that sense mechanical stimuli and respond chemically—a natural phenomenon few materials have been able to mimic.

“Think of it like human skin, which can provide signals to the brain that something on the body is deformed or hurt,” says Balazs [Anna Balazs, Distinguished Professor of Chemical and Petroleum Engineering in Pitt’s Swanson School of Engineering]. “This gel has numerous far-reaching applications, such as artificial skin that could be sensory—a holy grail in robotics.”

The Pitt March 29, 2012 news release reveals some of the personal motivation behind the research,

“My mother would often tease me when I was young, saying I was like a mimosa plant— shy and bashful,” says Balazs. “As a result, I became fascinated with the plant and its unique hide-and-seek qualities—the plant leaves fold inward and droop when touched or shaken, reopening just minutes later. I knew there had to be a scientific application regarding touch, which led me to studies like this in mechanical and chemical energy.”

Here’s a more technical description of the joint Pitt/MIT research team’s work (from the Pitt news release),

A team of researchers at Pitt made predictions regarding the behavior of Belousov-Zhabotinsky (BZ) gel, a material that was first fabricated in the late 1990s and shown to pulsate in the absence of any external stimuli. In fact, under certain conditions, the gel sitting in a petri dish resembles a beating heart.

Along with her colleagues, [Balazs] predicted that BZ gel not previously oscillating could be re-excited by mechanical pressure. The prediction was actualized by MIT researchers, who proved that chemical oscillations can be triggered by mechanically compressing the BZ gel beyond a critical stress.
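The paper models the gel’s chemistry and mechanics in detail, but the basic idea (a system parked just below an oscillation threshold until a sustained push carries it across) can be played with in a few lines. The toy script below uses a FitzHugh-Nagumo oscillator as a generic stand-in, not the BZ gel model from the paper,

# Toy model of "resuscitated" oscillations: a FitzHugh-Nagumo oscillator
# (a generic excitable system, not the BZ gel model from the paper) sits
# quiescent until a sustained stimulus, standing in for compression beyond
# the critical stress, carries it across its oscillation threshold.
import numpy as np
import matplotlib.pyplot as plt

a, b, eps = 0.7, 0.8, 0.08               # classic FitzHugh-Nagumo parameters
dt, t_end, t_press = 0.05, 400.0, 200.0  # "compression" applied at t_press

def stimulus(t):
    # Below the oscillation threshold before t_press, above it afterward.
    return 0.2 if t < t_press else 0.6

t_vals = np.arange(0.0, t_end, dt)
v, w = -1.2, -0.6                        # start near the resting state
trace = []
for t in t_vals:
    dv = v - v ** 3 / 3 - w + stimulus(t)
    dw = eps * (v + a - b * w)
    v += dt * dv                          # forward-Euler integration (fine for a sketch)
    w += dt * dw
    trace.append(v)

plt.plot(t_vals, trace)
plt.axvline(t_press, color="gray", linestyle="--", label="'compression' applied")
plt.xlabel("time (arbitrary units)")
plt.ylabel("activator v")
plt.legend()
plt.show()

Run it and the trace sits flat until the ‘compression’ arrives at t = 200, after which the oscillations persist, which is the qualitative behavior the researchers describe.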

I’m always fascinated by what motivates people and so Balazs’s story about the mimosa strikes me as both charming and instructive as to the sources for creative inspiration in any field.

If I read the news release rightly, we’ve still got a long way to go before ‘seeing’ robots with skin that can ‘feel’.

They are becoming more like us: Geminoid robots and robots with more humanlike movement

We will be proceeding deep into the ‘uncanny valley’, that place where robots look so much like humans that they make us uncomfortable. I have made a reference to the ‘uncanny valley’ in a previous posting that featured some Japanese dancing robots (October 18, 2010 posting [scroll down]). This is an order of magnitude more uncanny. See the video for yourself,

First test of the Geminoid DK. The nearly completed geminoid (twin robot) is operated by a human for the first time. Movements of the operator is reproduced in the robot. (from the description on Youtube)

Here’s a little more from a March 7, 2011 article by Katie Gatto on physorg.com,

The latest robot in the family of ultra-realistic androids, called the Geminoid series, is so realistic that it can actually be mistaken for the person it was designed to look like. The new bot, dubbed the Geminoid DK, was created by robotics firm Kokoro in Tokyo and is now being housed at Japan’s Advanced Telecommunications Research Institute International in Nara. The robot was designed to look like Associate Professor Henrik Scharfe of Aalborg University in Denmark.

As for why anyone would want a robot that so closely resembles themselves, I can think of a few reasons, but Scharfe has used this as an opportunity to embark on a study (from the March 7, 2011 article by Kit Eaton on Fast Company),

Scharfe is an associate professor at Aalborg University in Denmark and is director of the center for Computer-Mediated Epistemology, which pretty much explains what all this robotics tech is all about–Epistemology is the philosophical study of knowledge, centering on the question of what’s “true” knowledge versus “false” or “inadequate” knowledge. Scharfe intends to use the robot to probe “emotional affordances” between robots and humans, as well as “blended presence” (a partly digital, partly realistic way for people to telepresence themselves, demonstrated by weird prototypes like the Elfoid robot-phone we covered the other day). The device will also be used to look at cultural differences in how people interact with robots–for example in the U.S. robots may be perceived as threatening, or mere simple tools, but in Japan they’re increasingly accepted as a part of society.

Here’s a picture of the ‘real’ Scharfe with the ‘Geminoid’ Scharfe,

Image from Geminoid Facebook page

You can click through to the Geminoid Facebook page from here. Here’s more about Geminoid research (from the Geminoid DK website),

Introduction to Geminoid research

The first geminoid, HI-1, was created in 2005 by Prof. Hiroshi Ishiguro of ATR and the Tokyo-based firm, Kokoro. A geminoid is an android, designed to look exactly as its master, and is controlled through a computer system that replicates the facial movements of the operator in the robot.

In the spring of 2010, a new geminoid was created. The new robot, Geminoid-F was a simpler version of the original HI-1, and it was also more affordable, making it reasonable to acquire one for humanistic research in Human Robot Interaction.

Geminoid|DK will be the first of its kind outside of Japan, and is intended to advance android science and philosophy, in seeking answers to fundamental questions, many of which have also occupied the Japanese researchers. The most important questions are:

– What is a human?
– What is presence?
– What is a relation?
– What is identity?
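For the technically inclined, that ‘computer system that replicates the facial movements of the operator’ can be caricatured in a few lines of Python. Everything below (feature names, servo ranges, gains, the smoothing constant) is invented for illustration; a real geminoid uses a far more elaborate capture and actuation pipeline,

# A cartoon of geminoid-style teleoperation: tracked operator features are
# mapped to actuator targets, clamped to safe ranges, and smoothed so the
# robot's face moves without jitter. All names, gains, and ranges here are
# hypothetical.
from dataclasses import dataclass

@dataclass
class ServoChannel:
    lo: float           # minimum safe actuator position
    hi: float           # maximum safe actuator position
    value: float = 0.0  # current actuator position

    def drive(self, target, alpha=0.2):
        # Clamp to the safe range, then exponentially smooth toward it.
        target = max(self.lo, min(self.hi, target))
        self.value += alpha * (target - self.value)
        return self.value

# Hypothetical mapping from tracked facial features (normalized 0..1)
# to face/head servos (arbitrary units).
servos = {"jaw": ServoChannel(0.0, 30.0),
          "brow": ServoChannel(-10.0, 10.0),
          "head_yaw": ServoChannel(-45.0, 45.0)}
gains = {"jaw": 30.0, "brow": 20.0, "head_yaw": 90.0}
offsets = {"jaw": 0.0, "brow": -10.0, "head_yaw": -45.0}

def update(features):
    """One control tick: operator features in, smoothed servo commands out."""
    return {name: servo.drive(gains[name] * features[name] + offsets[name])
            for name, servo in servos.items()}

# Example tick: mouth half open, brows slightly raised, head turned right.
print(update({"jaw": 0.5, "brow": 0.6, "head_yaw": 0.7}))

Running it prints one set of smoothed servo commands; in a real system this loop would run at the capture frame rate.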

If that isn’t enough, there’s research at Georgia Tech (US) being done on how to make robots move in a more humanlike fashion (from the March 8, 2011 article by Kit Eaton on Fast Company),

Which is where research from Georgia Tech comes in. Based on their research droid Simon who looks distinctly robotic with a comedic head and glowing “ears,” a team working in the Socially Intelligent Machines Lab has been trying to teach Simon to move like humans do–forcing less machine-like gestures from his solid limbs. The trick was to record real human subjects performing a series of moves in a motion-capture studio, then taking the data and using it to program Simon, being careful (via a clever algorithm) to replicate the fluid multiple-joint rotations a human body does when swinging a limb between one position and the next, and which robot movements tend to avoid.

Then the team got volunteers to observe Simon in action, and asked them to identify the kinds of movements he was making. When a more smooth, fluid robot movement was made, the volunteers were better at identifying the gesture compared to a more “robotic” movement. To double-check the algorithm’s effectiveness the researchers then asked the human volunteers to mimic the gestures they thought the robot was making, tapping into the unconscious part of their minds that recognize human tics: And again, the volunteers were better at correctly mimicking the gesture when the human-like algorithm was applied to Simon’s moves.

Why’s this research important? Because as robots become increasingly a part of every day human life, we need to trust them and interact with them normally. Just as other research tries to teach robots to move in ways that can’t hurt us, this work will create robots that move in subtle ways to communicate physically with nearby people, aiding their incorporation into society. In medical professional roles, which are some of the first places humanoid robots may find work, this sort of acceptance could be absolutely crucial.
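Eaton’s description is light on detail, so the following sketch is only a guess at the flavor of such an algorithm. It contrasts a constant-velocity ‘robotic’ ramp with a minimum-jerk profile (a classic model of smooth, humanlike reaching), applied to two joints moving in concert; it is not the Georgia Tech team’s actual method,

# Contrast a constant-velocity ramp ("robotic") with a minimum-jerk profile
# (Flash & Hogan, 1985), a classic model of smooth humanlike reaching.
# This is a generic sketch, not the Georgia Tech team's actual algorithm.
import numpy as np
import matplotlib.pyplot as plt

def linear_ramp(q0, q1, tau):
    # Constant velocity with abrupt starts and stops: reads as "robotic."
    return q0 + (q1 - q0) * tau

def minimum_jerk(q0, q1, tau):
    # Smooth ease-in/ease-out; velocity and acceleration vanish at both ends.
    s = 10 * tau ** 3 - 15 * tau ** 4 + 6 * tau ** 5
    return q0 + (q1 - q0) * s

tau = np.linspace(0.0, 1.0, 200)  # normalized time, 0 to 1
joints = {"shoulder": (0.0, 90.0), "elbow": (10.0, 45.0)}  # start/end angles (degrees)

for name, (q0, q1) in joints.items():
    plt.plot(tau, linear_ramp(q0, q1, tau), "--", label=f"{name} (linear)")
    plt.plot(tau, minimum_jerk(q0, q1, tau), label=f"{name} (minimum jerk)")

plt.xlabel("normalized time")
plt.ylabel("joint angle (degrees)")
plt.legend()
plt.show()

Because both joints share the same normalized clock, they ease in and out together, which is part of what reads as coordinated, humanlike motion.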

It seems that researchers believe that the ‘uncanny valley’ doesn’t necessarily have to exist forever and at some point, people will accept humanoid robots without hesitation. In the meantime, here’s a diagram of the ‘uncanny valley’,

From the article on Android Science by Masahiro Mori (translated by Karl F. MacDorman and Takashi Minato)

Here’s what Mori (the person who coined the term) had to say about the ‘uncanny valley’ (from Android Science),

Recently there are many industrial robots, and as we know the robots do not have a face or legs, and just rotate or extend or contract their arms, and they bear no resemblance to human beings. Certainly the policy for designing these kinds of robots is based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but their appearance is not evaluated. If we plot these industrial robots on a graph of familiarity versus appearance, they lie near the origin (see Figure 1 [above]). So they bear little resemblance to a human being, and in general people do not find them to be familiar. But if the designer of a toy robot puts importance on a robot’s appearance rather than its function, the robot will have a somewhat humanlike appearance with a face, two arms, two legs, and a torso. This design lets children enjoy a sense of familiarity with the humanoid toy. So the toy robot is approaching the top of the first peak.

Of course, human beings themselves lie at the final goal of robotics, which is why we make an effort to build humanlike robots. For example, a robot’s arms may be composed of a metal cylinder with many bolts, but to achieve a more humanlike appearance, we paint over the metal in skin tones. These cosmetic efforts cause a resultant increase in our sense of the robot’s familiarity. Some readers may have felt sympathy for handicapped people they have seen who attach a prosthetic arm or leg to replace a missing limb. But recently prosthetic hands have improved greatly, and we cannot distinguish them from real hands at a glance. Some prosthetic hands attempt to simulate veins, muscles, tendons, finger nails, and finger prints, and their color resembles human pigmentation. So maybe the prosthetic arm has achieved a degree of human verisimilitude on par with false teeth. But this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature. In this case, there is no longer a sense of familiarity. It is uncanny. In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite human like, but the familiarity is negative. This is the uncanny valley.

It’s a very interesting interpretation of the diagram. The article is definitely worth reading although you won’t find a reference to the zombies which represent the bottom of the ‘uncanny valley’. Perhaps there’s something about them in the original article printed in Energy (1970) 7(4), pp. 33-35?

ETA April 12, 2011: Someone sent me a link to this March 8, 2011 posting by Reid of the Analytic Design Group. It offers another perspective, this one being mildly cautionary.