Tag Archives: Pepper

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic, and creative artificial intelligence (AI) in a telecast on Friday, November 19, 2021, titled The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44-minute programme. As you’d expect, a significant chunk of time was devoted to research being done in the US, but Poland and Japan also featured, and the Canadian content was substantive. A number of tricky topics were covered, and the transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.

Programme host David Suzuki’s script was well written, and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts, who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the CBC webpage, ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram),

In The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko to be quite moving when he described his relationship, which is not unique. It seems some 4,000 men have ‘wed’ their digital companions; you can read about that and more on the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed. For example, one woman who has an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head’: in one chat, her ‘friend’ told her that all of a woman’s worth is based on her body. She pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted that Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored: these relationships could be said to resemble slavery. After all, you pay for these friends, over whom you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?”, we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post, ‘Finishing Beethoven’s unfinished 10th Symphony’, for more information on the technical perspective of Ahmed Elgammal, Director of the Art & AI Lab at Rutgers University.

Briefly, Beethoven died before completing his 10th symphony, and a number of computer scientists, musicologists, AI specialists, and musicians collaborated to finish it.)

The one listener (Felix Mayer, music professor at the Technical University of Munich) in the hall during a performance doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ’10th’ is at least partly mathematical guesswork: an algorithm chooses which note comes next based on a set of probabilities.
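To make that concrete, here’s a minimal sketch of ‘choose the next note based on probability’. It is not the actual Beethoven X system (which used far more sophisticated machine learning); the notes and probabilities below are invented purely for illustration:

```python
import random

# Toy illustration only: pick each next note from a probability
# distribution conditioned on the previous note.
transition_probs = {
    "C4": {"D4": 0.5, "E4": 0.3, "G4": 0.2},  # invented probabilities
    "D4": {"E4": 0.6, "C4": 0.4},
    "E4": {"G4": 0.7, "D4": 0.3},
    "G4": {"C4": 1.0},
}

def continue_melody(seed_note, length=8):
    """Extend a melody by repeatedly sampling the next note."""
    melody = [seed_note]
    for _ in range(length):
        choices = transition_probs[melody[-1]]
        notes, weights = zip(*choices.items())
        melody.append(random.choices(notes, weights=weights, k=1)[0])
    return melody

print(continue_melody("C4"))  # e.g. ['C4', 'E4', 'G4', 'C4', ...]
```

The real project conditioned on far more context (Beethoven’s sketches, style, and musical form), but the core move, sampling what comes next from a learned distribution, is the same.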

There was another artist also represented in the programme. Puzzlingly, it was the still-living Douglas Coupland. In my opinion, he’s better known as a visual artist than as a writer (his Wikipedia entry lists him as a novelist first), but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s written work and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 slogans for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
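For readers who want a feel for the workflow described above (train on the author’s corpus, add short-form social media text, generate candidates, then let the human curate), here is a toy sketch. The bigram model below is a deliberately crude stand-in for Google’s language model, and the sample texts and names are my own placeholders:

```python
import random
from collections import defaultdict

def train_bigrams(texts):
    """Build next-word frequency tables from a list of text snippets."""
    model = defaultdict(list)
    for text in texts:
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
    return model

def complete(model, seed, max_words=10):
    """Complete a seed phrase by repeatedly sampling a likely next word."""
    words = seed.split()
    while len(words) < max_words and words[-1] in model:
        words.append(random.choice(model[words[-1]]))
    return " ".join(words)

# Invented stand-ins for the two training corpora described in the post.
author_corpus = ["the future is a verb not a noun",
                 "we are the first generation raised alongside machines"]
social_posts = ["the future is already here", "slogans are a verb"]

model = train_bigrams(author_corpus + social_posts)
candidates = [complete(model, "the future") for _ in range(5)]
# The human step: the writer sifts the outputs and keeps only the 'gems'.
chosen = [c for c in candidates if len(c.split()) > 3]
print(chosen)
```

The crucial point, in both the toy and the real project, is that final human step: the machine proposes, the writer curates.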

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”

First, John von Neumann (1902 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
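For comparison’s sake, the mechanical core of the cut-up is simple enough to sketch in a few lines of Python. The function below is my own toy, not anything Burroughs or Coupland used; it slices passages into fragments and pastes them back together at random:

```python
import random

def cut_up(*passages, fragment_len=4):
    """Burroughs-style cut-up: slice passages into short word fragments,
    shuffle them, and paste them back together at random."""
    fragments = []
    for text in passages:
        words = text.split()
        fragments += [" ".join(words[i:i + fragment_len])
                      for i in range(0, len(words), fragment_len)]
    random.shuffle(fragments)
    return " / ".join(fragments)  # the slashes just mark the seams

page_one = "All writing is in fact cut-ups a collage of words read heard overheard"
page_two = "Cutting and rearranging a page of written words introduces a new dimension into writing"
print(cut_up(page_one, page_two))
```

Seen this way, the difference between scissors and an algorithm is largely one of scale.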

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by SoftBank Robotics, part of SoftBank, a multinational Japanese conglomerate [see a June 28, 2021 article by Ian Carlos Campbell for The Verge], whose entire management team is male, according to its About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means of pushing back against bias other than training AI with more black faces to help the systems learn. Perhaps more representative management and coding teams in technology companies would also help?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation; it concerned whether or not science had any morality. (I said no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much, if any, thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values. For example, if your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google, but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) and IVADO (Institut de valorisation des données). (Note: IVADO is not particularly relevant to what’s being discussed in this post.)

As for Mila, the Canada Google blog in a November 21, 2016 posting notes a $4.5M grant to the institution,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3.95 million funding grant until 2022.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, an AI language model. He seems to be acting as an advocate for AI, although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think: xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence, and this work introduces the notion of ‘living’ robots, which leads to questioning what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development, but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

What the story about the xenobots doesn’t say is that we could also take the evolution of another species into our hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that, as an environmentalist, he’d point out that the huge amount of computing power needed for artificial intelligence, as mentioned in the programme, constitutes an environmental issue. I also would have expected that a geneticist like Suzuki might have some concerns with regard to xenobots, but perhaps that’s being saved for the next episode (The New Human) of The Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.
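The computer-science sense of the term is easy to picture: a program that could answer correctly is deliberately made to err some of the time. A toy sketch (my own, purely for illustration):

```python
import random

def dumbed_down_answer(question, error_rate=0.3):
    """Answer simple addition questions, but deliberately get some wrong,
    a toy version of the 'dumbing down' technique described above."""
    a, b = (int(n) for n in question.split("+"))
    answer = a + b
    if random.random() < error_rate:
        answer += random.choice([-2, -1, 1, 2])  # introduce a plausible error
    return answer

print(dumbed_down_answer("7+5"))  # usually 12, occasionally off by a little
```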

Finally

The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well-researched piece of infotainment.

To be blunt, I like and have no problems with infotainment, but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, in which the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite the insistence otherwise of Joseph Weizenbaum, the programme’s creator.*
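For the curious, DOCTOR worked from keyword rules and simple pronoun ‘reflection’ rather than any understanding. Here is a toy sketch in that spirit; it is my own simplification, not Weizenbaum’s code:

```python
import re

# Keyword rules plus pronoun reflection, DOCTOR-style (toy version).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    for pattern, template in RULES:
        match = re.match(pattern, statement.lower())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel my work is ignored"))  # "Why do you feel your work is ignored?"
```

Even a toy like this can produce replies that feel attentive, which goes some way to explaining why early users credited ELIZA with understanding.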

Elder care robot being tested by Washington State University team

I imagine that at some point Washington State University’s (WSU) ‘elder care’ robot will be tested by senior citizens as opposed to the students described in a January 14, 2019 WSU news release (also on EurekAlert) by Will Ferguson,

A robot created by Washington State University scientists could help elderly people with dementia and other limitations live independently in their own homes.

The Robot Activity Support System, or RAS, uses sensors embedded in a WSU smart home to determine where its residents are, what they are doing and when they need assistance with daily activities.

It navigates through rooms and around obstacles to find people on its own, provides video instructions on how to do simple tasks and can even lead its owner to objects like their medication or a snack in the kitchen.

“RAS combines the convenience of a mobile robot with the activity detection technology of a WSU smart home to provide assistance in the moment, as the need for help is detected,” said Bryan Minor, a postdoctoral researcher in the WSU School of Electrical Engineering and Computer Science.

Minor works in the lab of Diane Cook, professor of electrical engineering and computer science and director of the WSU Center for Advanced Studies in Adaptive Systems.

For the last decade, Cook and Maureen Schmitter-Edgecombe, a WSU professor of psychology, have led CASAS researchers in the development of smart home technologies that could enable elderly adults with memory problems and other impairments to live independently.

Currently, an estimated 50 percent of adults over the age of 85 need assistance with everyday activities such as preparing meals and taking medication, and the annual cost for this assistance in the US is nearly $2 trillion.

With the number of adults over 85 expected to triple by 2050, Cook and Schmitter-Edgecombe hope that technologies like RAS and the WSU smart home will alleviate some of the financial strain on the healthcare system by making it easier for older adults to live alone.

“Upwards of 90 percent of older adults prefer to age in place as opposed to moving into a nursing home,” Cook said. “We want to make it so that instead of bringing in a caregiver or sending these people to a nursing home, we can use technology to help them live independently on their own.”

RAS is the first robot CASAS researchers have tried to incorporate into their smart home environment. They recently published a study in the journal Cognitive Systems Research that demonstrates how RAS could make life easier for older adults struggling to live independently.

In the study CASAS researchers recruited 26 undergraduate and graduate students [emphasis mine] to complete three activities in a smart home with RAS as an assistant.

The activities were getting ready to walk the dog, taking medication with food and water and watering household plants.

When the smart home sensors detected a human failed to initiate or was struggling with one of the tasks, RAS received a message to help.

The robot then used its mapping and navigation camera, sensors and software to find the person and offer assistance.

The person could then indicate through a tablet interface that they wanted to see a video of the next step in the activity they were performing, a video of the entire activity or they could ask the robot to lead them to objects needed to complete the activity like the dog’s leash or a granola bar from the kitchen.

Afterwards the study participants were asked to rate the robot’s performance. Most of the participants rated RAS’ performance favorably and found the robot’s tablet interface to be easy to use. They also reported the next step video as being the most useful of the prompts.

“While we are still in an early stage of development, our initial results with RAS have been promising,” Minor said. “The next step in the research will be to test RAS’ performance with a group of older adults to get a better idea of what prompts, video reminders and other preferences they have regarding the robot.”

Here’s a link to and a citation for the paper,

Robot-enabled support of daily activities in smart home environment by Garrett Wilson, Christopher Pereyda, Nisha Raghunath, Gabriel de la Cruz, Shivam Goel, Sepehr Nesaei, Bryan Minor, Maureen Schmitter-Edgecombe, Matthew E. Taylor, Diane J. Cook. Cognitive Systems Research, Volume 54, May 2019, Pages 258-272. DOI: https://doi.org/10.1016/j.cogsys.2018.10.032

This paper is behind a paywall.
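As I read the news release, the assistance loop runs roughly as follows. The sketch below is my own simplification; none of the names in it come from the CASAS code or the paper:

```python
# Simplified sketch of the RAS assistance loop described in the news release.
# All identifiers are invented for illustration.

def choose_prompt(activity_status, user_choice):
    """Decide what the robot should offer once the smart home flags a problem."""
    if activity_status not in ("not_started", "struggling"):
        return "no_action"                      # the sensors saw nothing wrong
    if user_choice == "next_step_video":
        return "play_video_of_next_step"
    if user_choice == "full_activity_video":
        return "play_video_of_whole_activity"
    return "lead_person_to_needed_object"       # e.g. the dog's leash or a snack

# Example: the sensors report the resident is struggling with 'take medication'
# and the resident taps 'next step video' on the robot's tablet.
print(choose_prompt("struggling", "next_step_video"))
```

The interesting engineering is, of course, in the parts this sketch skips: the smart home deciding a task has stalled, and the robot navigating to the person on its own.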

Other ‘caring’ robots

Dutch filmmaker Sander Burger directed a documentary about ‘caredroids’ for seniors titled ‘Alice Cares’ (‘Ik ben Alice’ in Dutch). It premiered at the 2015 Vancouver (Canada) International Film Festival and was featured in a January 22, 2015 article by Neil Young for the Hollywood Reporter,


The benign side of artificial intelligence enjoys a rare cinematic showcase in Sander Burger‘s Alice Cares (Ik ben Alice), a small-scale Dutch documentary that reinvents no wheels but proves as unassumingly delightful as its eponymous, diminutive “care-robot.” Touching lightly on social and technological themes that are increasingly relevant to nearly all industrialized societies, this quiet charmer bowed at Rotterdam ahead of its local release and deserves wider exposure via festivals and small-screen outlets.

… Developed by the US firm Hanson Robotics, “Alice” (which has the stature and face of a girl of eight, but an adult female’s voice) is primarily intended to provide company for lonely seniors.

Burger shows Alice “visiting” the apartments of three octogenarian Dutch ladies, the contraption overcoming their hosts’ initial wariness and quickly forming chatty bonds. This prototype “care-droid” represents the technology at a relatively early stage, with Alice unable to move anything apart from her head, eyes (which incorporate tiny cameras) and mouth. Her body is made much more obviously robotic in appearance than the face, to minimize the chances of her interlocutors mistaking her for an actual human. Such design-touches are discussed by Alice’s programmer in meetings with social-workers, which Burger and his editor Manuel Rombley intersperses between the domestic exchanges that provide the bulk of the running-time.

‘Alice’ was also featured in a July 18, 2015 article by Natalie Harrison for the Lancet (a general medical journal),

“I’m going to ask you some questions about your life. Do you live independently? Are you lonely?” If you close your eyes and start listening to the film Alice Cares, you would think you were overhearing a routine conversation between an older woman and a health-care worker. It’s only when the woman, Martha Remkes, ends the conversation with “I don’t feel like having a robot in my home, I prefer a human being” that you realise something is amiss. In the Dutch documentary Alice Cares, Alice Robokind, a prototype caredroid developed in a laboratory in Amsterdam, is sent to live with three women who require care and company, with rather surprising results

Although the idea of health robots has been around for a couple of decades, research into the use of robots with older adults is a fairly new area. Alex Mihailidis, from the Intelligent Assistive Technology and Systems Lab [University of Toronto] in Toronto, ON, Canada, explains: “For carers, robots have been used as tools that can help to alleviate burden typically associated with providing continuous care”. He adds that “as robots become more viable and are able to perform common physical tasks, they can be very valuable in helping caregivers complete common tasks such as moving a person in and out of bed”. Although Japan and Korea are regarded as the world leaders in this research, the European Union and the USA are also making progress. At the Edinburgh Centre for Robotics, for example, researchers are working to develop more complex sensor and navigation technology for robots that work alongside people and on assisted living prosthetics technologies. This research is part of a collaboration between the University of Edinburgh and Heriot-Watt University that was awarded £6 million in funding as part of a wider £85 million investment into industrial technology in the UK Government’s Eight Great Technologies initiative. Robotics research is clearly flourishing and the global market for service and industrial robots is estimated to reach almost US$60 billion by 2020.

The idea for Alice Cares came to director Sander Burger after he read about a group of scientists at the VU University of Amsterdam in the Netherlands who were about to test a health-care robot on older people. “The first thing I felt was some resentment against the idea—I was curious why I was so offended by the whole idea and just called the scientists to see if I could come by to see what they were doing. …

… With software to generate and regulate Alice’s emotions, an artificial moral reasoner, a computational model of creativity, and full access to the internet, the investigators hoped to create a robotic care provider that was intelligent, sensitive, creative, and entertaining. “The robot was specially developed for social skills, in short, she was programmed to make the elderly women feel less lonely”, explains Burger.

Copyright © 2015 Alice Cares KeyDocs

Both the Young and Harrison articles are well worth the time, should you have enough to read them. Also, there’s an Ik ben Alice website (it’s in Dutch only).

Meanwhile, Canadians can look at Humber River Hospital (HHR; Toronto, Ontario) for a glimpse at another humanoid ‘carebot’, from a July 25, 2018 HHR Foundation blog entry,

Earlier this year, a special new caregiver joined the Child Life team at the Humber River Hospital. Pepper, the humanoid robot, helps our Child Life Specialists decrease patient anxiety, increase their comfort and educate young patients and their families. Pepper embodies perfectly the intersection of compassion and advanced technology for which Humber River is renowned.


Humber River Hospital is committed to making the hospital experience a better one for our patients and their families from the moment they arrive and Pepper the robot helps us do that! Pepper is child-sized with large, expressive eyes and a sweet voice. It greets visitors, provides directions, plays games, does yoga and even dances. Using facial recognition to detect human emotions, it adapts its behaviour according to the mood of the person with whom it’s interacting. Pepper makes the Hospital an even more welcoming place for everyone it encounters.

Humber currently has two Peppers on staff: one is used exclusively by the Child Life Program to help young patients feel at ease and a second to greet patients and their families in the Hospital’s main entrance.

While Pepper robots are used around the world in such industries as retail and hospitality, Humber River is the first hospital in Canada to use Pepper in a healthcare setting. Using dedicated applications built specifically for the Hospital, Pepper’s interactive touch-screen display helps visitors find specific departments, washrooms, exits and more. In addition to answering questions and sharing information, Pepper entertains, plays games and is always available for a selfie.
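The claim that Pepper ‘adapts its behaviour according to the mood of the person’ can be pictured as a simple mapping from a detected emotion label to a behaviour script. The sketch below is purely illustrative; the labels and behaviours are my own stand-ins, not SoftBank’s software:

```python
# Purely illustrative mapping from a detected emotion to a behaviour script.
BEHAVIOURS = {
    "anxious": "speak softly and offer a calming game",
    "sad": "tell a joke and suggest a dance",
    "happy": "offer a selfie and share directions cheerfully",
}

def adapt_behaviour(detected_emotion):
    """Pick a behaviour script based on the emotion read from a visitor's face."""
    return BEHAVIOURS.get(detected_emotion, "greet politely and offer directions")

print(adapt_behaviour("anxious"))
```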

I’m guessing that they had a ‘soft’ launch for Pepper because there’s an Oct. 25, 2018 HHR news release announcing Pepper’s deployment,

Pepper® can greet visitors, provide directions, play games, do yoga and even dance

Humber River Hospital has joined forces with SoftBank Robotics America (SBRA) to launch a new pilot program with Pepper the humanoid robot.  Beginning this week, Pepper will greet, help guide, engage and entertain patients and visitors who enter the hospital’s main entrance hall.

“While the healthcare sector has talked about this technology for some time now, we are ambitious and confident at Humber River Hospital to make the move and become the first hospital in Canada to pilot this technology,” states Barbara Collins, President and CEO, Humber River Hospital. 


Pepper by the numbers:
Stands 1.2 m (4ft) tall and weighs 29 kg (62lb)
Features three cameras – two HD cameras and one 3D depth sensor – to “see” and interact with people
20 engines in Pepper’s head, arms and back control its precise movements
A 10-inch chest-mounted touchscreen tablet that Pepper uses to convey information and encourage input

Finally, there’s a 2012 movie, Robot & Frank (mentioned here before in this Oct. 13, 2017 posting; scroll down to the Robots and pop culture subsection), which provides an intriguing example of how ‘carebots’ might present unexpected ethical challenges. Hint: Frank is a senior citizen and former jewel thief who decides to pass on some skills.

Final thoughts

It’s fascinating to me that every time I’ve looked at articles about robots being used for tasks usually performed by humans, some expert or other sweetly notes that robots will be used to help humans with tasks that are ‘boring’ or ‘physical’, with the implication that humans will focus on more rewarding work. From Harrison’s Lancet article (in a previous excerpt),

… Alex Mihailidis, from the Intelligent Assistive Technology and Systems Lab in Toronto, ON, Canada, explains: “For carers, robots have been used as tools that can help to alleviate burden typically associated with providing continuous care”. He adds that “as robots become more viable and are able to perform common physical tasks, they can be very valuable in helping caregivers …

For all the emphasis on robots taking over burdensome physical tasks, Burger’s documentary makes it clear that these early versions are being used primarily to provide companionship. Yes, HHR’s Pepper® is taking over some repetitive tasks, such as giving directions, but it’s also playing and providing companionship.

As for what it will mean ultimately, that’s something we, as a society, need to consider.

Adopting robots into a health care system, Finnish style

The Finns have been studying the implementation of a logistics robotic system in a hospital setting, according to an August 30, 2017 news item on phys.org,

VTT Technical Research Centre of Finland studied the implementation of a logistics robot system at the Seinäjoki Central Hospital in South Ostrobothnia. The aim is to reduce transportation costs, improve the availability of supplies and alleviate congestion on hospital hallways by running deliveries around the clock on every day of the week. Joint planning and dialogue between the various occupational groups and stakeholders involved was necessary for a successful change process.

This study is part of a larger project as the August 30, 2017 VTT press release (also on EurekAlert), which originated the news item, makes clear,

As the population ages, the need for robotic services is on the increase. Adopting new technology to support care and nursing work is not straightforward, however. Autonomous service robots and robot systems raise questions about safety as well as about their impact on care quality and jobs, among others.

VTT has studied the implementation of a next-generation logistics robot system at the Seinäjoki Central Hospital. First steps are being taken in Finland to introduce automated delivery systems in hospitals, with Seinäjoki Central Hospital acting as one of the pioneers. The Seinäjoki hospital’s robot system will include a total of 5–8 automated delivery robots, two of which were deployed during the study.

With deliveries running 24/7, the system will help to improve the availability of supplies and alleviate congestion on hallways. Experiences gained during the first six months show that transport personnel expenses and the physical strain of transport work have been reduced. The personnel’s views on the delivery robots have developed favourably and other hospitals have shown plenty of interest in the Seinäjoki hospital’s experiences.

From the perspective of various occupational groups, adoption of the system has had a varied effect on their perceived level of sense of control and appreciation of their work, as well as competence requirements. This study by VTT, employing work research approaches and a systems-oriented view, highlights the importance of taking into account in the change process the interdependencies between various players, along with their roles in the hospital’s core task.

Careful planning, piloting and implementation are required to ensure that the adoption of new robots runs smoothly as a whole. “As the system is expanded with new robots and types of deliveries, even more guidance, communication and dialogue is needed. Joint planning that brings various players to the same table ensures that the system’s implementation goes as smoothly as possible, making it easier to achieve the desired overall benefits”, says Senior Scientist Inka Lappalainen of the ROSE project.

VTT’s study is part of the Robots and the Future of Welfare Services project (ROSE), running from 2015 to 2020. The project investigates Finland’s opportunities for adopting assisting robotics to support the ageing population’s independent living, wellbeing and care. There is also a blog post on the topic: http://roseproject.aalto.fi/fi/blog/32-blog8.

Roadmap

Intermediate results of the project are presented in the publication Robotics in Care Services: A Finnish Roadmap, providing recommendations for both policy making and research. The roadmap is available on the ROSE project website, at http://roseproject.aalto.fi/ or http://roseproject.aalto.fi/fi/blog/29-roadmap-blog-fi.

The roadmap has been compiled by the project consortium comprising Aalto University, the project’s coordinator, and research organisations Laurea University of Applied Sciences, Lappeenranta University of Technology, Tampere University of Technology, University of Tampere and VTT.

 Photo: a logistics robot at the Seinäjoki Central Hospital (photo Marketta Niemelä, VTT)

To make it easier for those without Finnish language reading skills, I have a link to the English language version of the ROSE website. In looking at the ROSE website’s video page, I found this amongst others,

This reminded me of an initiative in Canada introducing a robot designed for use in clinical settings. In a July 4, 2017 posting, I posed this question,

A Canadian project to introduce robots like Pepper into clinical settings (aside: can seniors’ facilities be far behind?) is the subject of a June 23, 2017 news item on phys.org, …

There’s also been some work on robots and seniors in Holland (the Netherlands) and Japan, although I don’t have any details.

Meet Pepper, a robot for health care clinical settings

A Canadian project to introduce robots like Pepper into clinical settings (aside: can seniors’ facilities be far behind?) is the subject of a June 23, 2017 news item on phys.org,

McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care.

A June 22, 2017 McMaster University news release, which originated the news item, provides more detail,

With the help of Softbank’s humanoid robot Pepper and IBM Bluemix Watson Cognitive Services, the researchers will study health information exchange through a state-of-the-art human-robot interaction system. The project is a collaboration between David Harris Smith, professor in the Department of Communication Studies and Multimedia at McMaster University, Frauke Zeller, professor in the School of Professional Communication at Ryerson University and Hermenio Lima, a dermatologist and professor of medicine at McMaster’s Michael G. DeGroote School of Medicine. His main research interests are in the area of immunodermatology and technology applied to human health.

The research project involves the development and analysis of physical and virtual human-robot interactions, and has the capability to improve healthcare outcomes by helping healthcare professionals better understand patients’ behaviour.

Zeller and Harris Smith have previously worked together on hitchBOT, the friendly hitchhiking robot that travelled across Canada and has since found its new home in the [Canada] Science and Technology Museum in Ottawa.

“Pepper will help us highlight some very important aspects and motives of human behaviour and communication,” said Zeller.

Designed to be used in professional environments, Pepper is a humanoid robot that can interact with people, ‘read’ emotions, learn, move and adapt to its environment, and even recharge on its own. Pepper is able to perform facial recognition and develop individualized relationships when it interacts with people.

Lima, the clinic director, said: “We are excited to have the opportunity to potentially transform patient engagement in a clinical setting, and ultimately improve healthcare outcomes by adapting to clients’ communications needs.”

At Ryerson, Pepper was funded by the Co-lab in the Faculty of Communication and Design. FCAD’s Co-lab provides strategic leadership, technological support and acquisitions of technologies that are shaping the future of communications.

“This partnership is a testament to the collaborative nature of innovation,” said dean of FCAD, Charles Falzon. “I’m thrilled to support this multidisciplinary project that pushes the boundaries of research, and allows our faculty and students to find uses for emerging tech inside and outside the classroom.”

“This project exemplifies the value that research in the Humanities can bring to the wider world, in this case building understanding and enhancing communications in critical settings such as health care,” says McMaster’s Dean of Humanities, Ken Cruikshank.

The integration of IBM Watson cognitive computing services with the state-of-the-art social robot Pepper, offers a rich source of research potential for the projects at Ryerson and McMaster. This integration is also supported by IBM Canada and [Southern Ontario Smart Computing Innovation Platform] SOSCIP by providing the project access to high performance research computing resources and staff in Ontario.

“We see this as the initiation of an ongoing collaborative university and industry research program to develop and test applications of embodied AI, a research program that is well-positioned to integrate and apply emerging improvements in machine learning and social robotics innovations,” said Harris Smith.

I just went to a presentation at the facility where my mother lives and it was all about delivering more individualized and better care for residents. Given that most seniors in British Columbia care facilities do not receive the number of service hours per resident recommended by the province due to funding issues, it seemed a well-meaning initiative offered in the face of daunting odds against success. Now with this news, I wonder what impact ‘Pepper’ might ultimately have on seniors and on the people who currently deliver service. Of course, this assumes that researchers will be able to tackle problems with understanding various accents and communication strategies, which are strongly influenced by culture and, over time, the aging process.

After writing that last paragraph, I stumbled onto this June 27, 2017 Sage Publications press release on EurekAlert about a related matter,

Existing digital technologies must be exploited to enable a paradigm shift in current healthcare delivery which focuses on tests, treatments and targets rather than the therapeutic benefits of empathy. Writing in the Journal of the Royal Society of Medicine, Dr Jeremy Howick and Dr Sian Rees of the Oxford Empathy Programme, say a new paradigm of empathy-based medicine is needed to improve patient outcomes, reduce practitioner burnout and save money.

Empathy-based medicine, they write, re-establishes relationship as the heart of healthcare. “Time pressure, conflicting priorities and bureaucracy can make practitioners less likely to express empathy. By re-establishing the clinical encounter as the heart of healthcare, and exploiting available technologies, this can change”, said Dr Howick, a Senior Researcher in Oxford University’s Nuffield Department of Primary Care Health Sciences.

Technology is already available that could reduce the burden of practitioner paperwork by gathering basic information prior to consultation, for example via email or a mobile device in the waiting room.

During the consultation, the computer screen could be placed so that both patient and clinician can see it, a help to both if needed, for example, to show infographics on risks and treatment options to aid decision-making and the joint development of a treatment plan.

Dr Howick said: “The spread of alternatives to face-to-face consultations is still in its infancy, as is our understanding of when a machine will do and when a person-to-person relationship is needed.” However, he warned, technology can also get in the way. A computer screen can become a barrier to communication rather than an aid to decision-making. “Patients and carers need to be involved in determining the need for, and designing, new technologies”, he said.

I sincerely hope that the Canadian project has taken into account some of the issues described in the ’empathy’ press release and in the article, which can be found here,

Overthrowing barriers to empathy in healthcare: empathy in the age of the Internet
by J Howick and S Rees. Journal of the Royal Society of Medicine. Article first published online: June 27, 2017. DOI: https://doi.org/10.1177/0141076817714443

This article is open access.