Tag Archives: sex robot

Sexbots, sexbot ethics, families, and marriage

Setting the stage

Can we? Should we? Is this really a good idea? I believe those ships have sailed where sexbots are concerned, since the issue is no longer whether we can or should but rather what to do now that we have them. My Oct. 17, 2017 posting, ‘Robots in Vancouver and in Canada (one of two)’, features Harmony, the first (I believe) commercial AI (artificial intelligence)-enhanced sex robot in the US. The company was getting ready to start shipping the bot either for Christmas 2017 or in early 2018.

Ethical quandaries?

Things have moved a little more quickly than I would have expected had I thought ahead. An April 5, 2018 essay (h/t phys.org) by Victoria Brooks, lecturer in law at the University of Westminster (UK), for The Conversation lays out some of the ethical issues (Note: Links have been removed),

Late in 2017 at a tech fair in Austria, a sex robot was reportedly “molested” repeatedly and left in a “filthy” state. The robot, named Samantha, received a barrage of male attention, which resulted in her sustaining two broken fingers. This incident confirms worries that the possibility of fully functioning sex robots raises both tantalising possibilities for human desire (by mirroring human/sex-worker relationships), as well as serious ethical questions.

So what should be done? The campaign to “ban” sex robots, as the computer scientist Kate Devlin has argued, is only likely to lead to a lack of discussion. Instead, she hypothesises that many ways of sexual and social inclusivity could be explored as a result of human-robot relationships.

To be sure, there are certain elements of relationships between humans and sex workers that we may not wish to repeat. But to me, it is the ethical aspects of the way we think about human-robot desire that are particularly key.

Why? Because we do not even agree yet on what sex is. Sex can mean lots of different things for different bodies – and the types of joys and sufferings associated with it are radically different for each individual body. We are only just beginning to understand and know these stories. But with Europe’s first sex robot brothel open in Barcelona and the building of “Harmony”, a talking sex robot in California, it is clear that humans are already contemplating imposing our barely understood sexual ethic upon machines.

I think that most of us will experience some discomfort on hearing Samantha’s story. And it’s important that, just because she’s a machine, we do not let ourselves “off the hook” by making her yet another victim and heroine who survived an encounter, only for it to be repeated. Yes, she is a machine, but does this mean it is justifiable to act destructively towards her? Surely the fact that she is in a human form makes her a surface on which human sexuality is projected, and symbolic of a futuristic human sexuality. If this is the case, then Samatha’s [sic] case is especially sad.

It is Devlin who has asked the crucial question: whether sex robots will have rights. “Should we build in the idea of consent,” she asks? In legal terms, this would mean having to recognise the robot as human – such is the limitation of a law made by and for humans.

Suffering is a way of knowing that you, as a body, have come out on the “wrong” side of an ethical dilemma. [emphasis mine] This idea of an “embodied” ethic understood through suffering has been developed on the basis of the work of the famous philosopher Spinoza and is of particular use for legal thinkers. It is useful as it allows us to judge rightness by virtue of the real and personal experience of the body itself, rather than judging by virtue of what we “think” is right in connection with what we assume to be true about their identity.

This helps us with Samantha’s case, since it tells us that in accordance with human desire, it is clear she would not have wanted what she got. The contact Samantha received was distinctly human in the sense that this case mirrors some of the most violent sexual offences cases. While human concepts such as “law” and “ethics” are flawed, we know we don’t want to make others suffer. We are making these robot lovers in our image and we ought not pick and choose whether to be kind to our sexual partners, even when we choose to have relationships outside of the “norm”, or with beings that have a supposedly limited consciousness, or even no (humanly detectable) consciousness.

Brooks makes many interesting points, not all of them in the excerpts seen here, but one question not raised in the essay is whether or not the bot itself suffered. It’s a point that I imagine proponents of ‘treating your sex bot however you like’ are certain to raise. It’s also a question Canadians may need to answer sooner rather than later now that a ‘sex doll brothel’ is about to open in Toronto. However, before getting to that news bit, there’s an interview with a man, his sexbot, and his wife.

The sexbot at home

In fact, I have two interviews; the first, included here, was with CBC (Canadian Broadcasting Corporation) radio and originally aired October 29, 2017. Here’s part of the transcript (Note: A link has been removed),

“She’s [Samantha] quite an elegant kind of girl,” says Arran Lee Squire, who is sales director for the company that makes her and also owns one himself.

And unlike other dolls like her, she’ll resist sex if she isn’t in the mood.

“If you touch her, say, on her sensitive spots on the breasts, for example, straight away, and you don’t touch her hands or kiss her, she might say, ‘Oh, I’m not ready for that,'” Arran says.

He says she’ll even synchronize her orgasm to the user’s.

But Arran emphasized that her functions go beyond the bedroom.

Samantha has a “family mode,” in which she can talk about science, animals and philosophy. She’ll give you motivational quotes if you’re feeling down.

At Arran’s house, Samantha interacts with his two kids. And when they’ve gone to bed, she’ll have sex with him, but only with his wife involved.

There’s also this Sept. 12, 2017 ITV This Morning with Phillip & Holly broadcast interview (running time: 6 mins. 19 secs.),

I can imagine that if I were a child in that household I’d be tempted to put the sexbot into ‘sexy mode’, preferably unsupervised by my parents. Also, will the parents be using it, at some point, for sex education?

Canadian perspective 1: Sure, it could be good for your marriage

Prior to the potential sex doll brothel in Toronto (more about that coming up), there was a flurry of interest in Marina Adshade’s contribution to the book, Robot Sex: Social and Ethical Implications, from an April 18, 2018 news item on The Tyee,

Sex robots may soon be a reality. However, little research has been done on the social, philosophical, moral and legal implications of robots specifically designed for sexual gratification.

In a chapter written for the book Robot Sex: Social and Ethical Implications, Marina Adshade, professor in the Vancouver School of Economics at the University of British Columbia, argues that sex robots could improve marriage by making it less about sex and more about love.

In this Q&A, Adshade discusses her predictions.

Could sex robots really be a viable replacement for marriage with a human? Can you love a robot?

I don’t see sex robots as substitutes for human companionship but rather as complements to human companionship. Just because we might enjoy the company of robots doesn’t mean that we cannot also enjoy the company of humans, or that having robots won’t enhance our relationships with humans. I see them as very different things — just as one woman (or one man) is not a perfect substitute for another woman (or man).

Is there a need for modern marriage to improve?

We have become increasingly demanding in what we want from the people that we marry. There was a time when women were happy to have a husband that supported the family and men were happy to have a caring mother to his children. Today we still want those things, but we also want so much more — we want lasting sexual compatibility, intense romance, and someone who is an amazing co-parent. That is a lot to ask of one person. …

Adshade adapted part of her text, “Sexbot-Induced Social Change: An Economic Perspective,” in Robot Sex: Social and Ethical Implications, edited by John Danaher and Neil McArthur, for an August 14, 2018 essay on Slate.com,

Technological change invariably brings social change. We know this to be true, but rarely can we make accurate predictions about how social behavior will evolve when new technologies are introduced. …we should expect that the proliferation of robots designed specifically for human sexual gratification means that sexbot-induced social change is on the horizon.

Some elements of that social change might be easier to anticipate than others. For example, the share of the young adult population that chooses to remain single (with their sexual needs met by robots) is very likely to increase. Because social change is organic, however, adaptations in other social norms and behaviors are much more difficult to predict. But this is not virgin territory [I suspect this was an unintended pun]. New technologies completely transformed sexual behavior and marital norms over the second half of the 20th century. Although getting any of these predictions right will surely involve some luck, we have decades of technology-induced social change to guide our predictions about the future of a world confronted with wholesale access to sexbots.

The reality is that marriage has always evolved alongside changes in technology. Between the mid-1700s and the early 2000s, the role of marriage between a man and a woman was predominately to encourage the efficient production of market goods and services (by men) and household goods and services (by women), since the social capacity to earn a wage was almost always higher for husbands than it was for wives. But starting as early as the end of the 19th century, marriage began to evolve as electrification in the home made women’s work less time-consuming, and new technologies in the workplace started to decrease the gender wage gap. Between 1890 and 1940, the share of married women working in the labor force tripled, and over the course of the century, that share continued to grow as new technologies arrived that replaced the labor of women in the home. By the early 1970s, the arrival of microwave ovens and frozen foods meant that a family could easily be fed at the end of a long workday, even when the mother worked outside of the home.

There are those who argue that men only “assume the burden” of marriage because marriage allows men easy sexual access, and that if men can find sex elsewhere they won’t marry. We hear this prediction now being made in reference to sexbots, but the same argument was given a century ago when the invention of the latex condom (1912) and the intrauterine device (1909) significantly increased people’s freedom to have sex without risking pregnancy and (importantly, in an era in which syphilis was rampant) sexually transmitted disease. Cosmopolitan magazine ran a piece at the time by John B. Watson that asked the blunt question, will men marry 50 years from now? Watson’s answer was a resounding no, writing that “we don’t want helpmates anymore, we want playmates.” Social commentators warned that birth control technologies would destroy marriage by removing the incentives women had to remain chaste and encourage them to flood the market with nonmarital sex. Men would have no incentive to marry, and women, whose only asset is sexual access, would be left destitute.

Fascinating, non? Should you be interested, “Sexbot-Induced Social Change: An Economic Perspective” by Marina Adshade can be found in Robot Sex: Social and Ethical Implications (link to Amazon), edited by John Danaher and Neil McArthur. © 2017 by the Massachusetts Institute of Technology, reprinted courtesy of the MIT Press.

Canadian perspective 2: What is a sex doll brothel doing in Toronto?

Sometimes known as Toronto the Good (although not recently; find out more about Toronto and its nicknames here) and once a byword for stodginess, the city is about to welcome a sex doll brothel according to an August 28, 2018 CBC Radio news item by Katie Geleff and John McGill,

On their website, Aura Dolls claims to be, “North America’s first known brothel that offers sexual services with the world’s most beautiful silicone ladies.”

Nestled between a massage parlour, nail salon and dry cleaner, Aura Dolls is slated to open on Sept. 8 [2018] in an otherwise nondescript plaza in Toronto’s north end.

The company plans to operate 24 hours a day, seven days a week, and will offer customers six different silicone dolls. The website describes the life-like dolls as, “classy, sophisticated, and adventurous ladies.” …

They add that, “the dolls are thoroughly sanitized to meet your expectations.” But that condoms are still “highly recommended.”

Toronto city councillor John Filion says people in his community are concerned about the proposed business.

Filion spoke to As It Happens guest host Helen Mann. Here is part of their conversation.

Councillor Filion, Aura Dolls is urging people to have “an open mind” about their business plan. Would you say that you have one?

Well, I have an open mind about what sort of behaviours people want to do, as long as they don’t harm anybody else. It’s a totally different matter once you bring that out to the public. So I think I have a fairly closed mind about where people should be having sex with [silicone] dolls.

So, what’s wrong with a sex doll brothel?

It’s where it is located, for one thing. Where it’s being proposed happens to be near an intersection where about 25,000 people live, all kinds of families, four elementary schools are very near by. And you know, people shouldn’t really need to be out on a walk with their families and try to explain to their kids why someone is having sex with a [silicone] doll.

But Aura Dolls says that they are going to be doing this very discreetly, that they won’t have explicit signage, and that they therefore won’t be bothering anyone.

They’ve hardly been discreet. They were putting illegal posters all over the neighbourhood. They’ve probably had a couple of hundred of thousands of dollars of free publicity already. I don’t think there’s anything at all discreet about what they are doing. They’re trying to be indiscreet to drum up business.

Can you be sure that there aren’t constituents in your area that think this is a great idea?

I can’t be sure that there aren’t some people who might think, “Oh great, it’s just down the street from me. Let me go there.” I would say that might be a fraction of one per cent of my constituents. Most people are appalled by this.

And it’s not a narrow-minded neighbourhood. Whatever somebody does in their home, I don’t think we’re going to pass moral judgment on it, again, as long as it’s not harming anyone else. But this is just kind of scuzzy. ..

….

Aura Dolls says that it’s doing nothing illegal. They say that they are being very clear that the dolls they are using represent adult women and that they are actually providing a service. Do you agree that they are doing this legally?

No, they’re not at all legal. It’s an illegal use. And if there’s any confusion about that, they will be getting a letter from the city very soon. It is clearly not a legal use. It’s not permitted under the zoning bylaw and it fits the definition of adult entertainment parlour, for which you require a license — and they certainly would not get one. They would not get a license in this neighbourhood because it’s not a permitted use.

The audio portion runs for 5 mins. 31 secs.

I believe these dolls are in fact sexbots, likely enhanced with AI. An August 29, 2018 article by Karlton Jahmal for hotnewhiphop.com describes the dolls as ‘fembots’ and provides more detail (Note: Links have been removed),

Toronto has seen the future, and apparently, it has to do with sex dolls. The Six [another Toronto nickname] is about to get blessed with the first legal sex doll brothel, and the fembots look too good to be true. If you head over to Aura Dolls website, detailed biographies for the six available sex dolls are on full display. You can check out the doll’s height, physical dimensions, heritage and more.

Aura plans to introduce more dolls in the future, according to a statement in the Toronto Star by Claire Lee, a representative for the company. At the moment, the ethnicities of the sex dolls feature Japanese, Caucasian American, French Canadian, Irish Canadian, Colombian, and Korean girls. Male dolls will be added in the near future. The sex dolls look remarkably realistic. Aura’s website writes, “Our dolls are made from the highest quality of TPE silicone which mimics the feeling of natural human skin, pores, texture and movement giving the user a virtually identical experience as being with a real partner.”

There are a few more details about the proposed brothel and more comments from Toronto city councillor John Filion in an August 28, 2018 article by Claire Floody and Jenna Moon with Alexandra Jones and Melanie Green for thestar.com,

Toronto will soon be home to North America’s [this should include Canada, US, and Mexico] first known sex doll brothel, offering sexual services with six silicone-made dolls.

According to the website for Aura Dolls, the company behind the brothel, the vision is to bring a new way to achieve sexual needs “without the many restrictions and limitations that a real partner may come with.”

The brothel is expected to open in a shopping plaza on Yonge St., south of Sheppard Ave., on Sept. 8 [2018]. The company doesn’t give the exact location on its website, stating it’s announced upon booking.

Spending half an hour with one doll costs $80, with two dolls running $160. For an hour, the cost is $120 with one doll. The maximum listed time is four hours for $480 per doll.

Doors at the new brothel for separate entry and exit will be used to ensure “maximum privacy for customers.” While the business does plan on having staff on-site, they “should not have any interaction,” Lee said.

“The reason why we do that is to make sure that everyone feels comfortable coming in and exiting,” she said, noting that people may feel shy or awkward about visiting the site.

… Lee said that the business is operating within the law. “The only law stating with anything to do with the dolls is that it has to meet a height requirement. It can’t resemble a child,” she said. …

Councillor John Filion, Ward 23 Willowdale, said his staff will be “throwing the book at (Aura Dolls) for everything they can.”

“I’ve still got people studying to see what’s legal and what isn’t,” Filion said. He noted that a bylaw introduced in North York in the ’90s prevents retail sex shops operating outside of industrial areas. Filion said his office is still confirming that the bylaw is active following harmonization, which condensed the six boroughs’ bylaws after amalgamation in 1998.

“If the bylaw that I brought in 20 years ago still exists, it would prohibit this,” Filion said.

“There’s legal issues,” he said, suggesting that people interested in using the sex dolls might consider doing so at home, rather than at a brothel.

The councillor said he’s received complaints from constituents about the business. “The phone’s ringing off the hook today,” Filion said.

It should be an interesting first week at school for everyone involved. I wonder what Ontario Premier Doug Ford, who recently rolled back the province’s sex education curriculum by 20 years, will make of these developments.

As for sexbots/fembots/sex dolls or whatever you want to call them, they are here and it’s about time Canadians had a frank discussion on the matter. Also, I’ve been waiting for quite some time for any mention of male sexbots (malebots?). Personally, I don’t think we’ll be seeing male sexbots appear in either brothels or homes anytime soon.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference.  The first of two days coincides with IROS 2017 – one of the premiere robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem.” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query posed in the film (was the robot Frank’s friend or just a machine?) more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving air planes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here) (Note: Links have been removed),

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?
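To make the teleoperation process a little more concrete, here is a minimal sketch of the kind of learning-from-demonstration loop Gildert describes (sometimes called behaviour cloning): when the robot isn’t confident, a human pilot takes over, and the pilot’s choices become training data. Everything in it, the class names, the “pilot” interface, the warehouse scenario, is my own invention for illustration; it is not Kindred’s code or any real robotics API.

```python
# Toy behaviour-cloning loop: when the robot is unsure, a human pilot takes
# over, and the (observation, action) pairs the pilot generates become
# training data. Purely illustrative; all names are hypothetical.
import random


class PilotInterface:
    """Stand-in for the human teleoperator described in the article."""

    def choose_action(self, observation):
        # A real pilot would see, feel, and hear what the robot does and then
        # act; here we just pick the fullest bin as a placeholder.
        return max(observation, key=observation.get)


class WarehouseRobot:
    """Toy robot that memorizes demonstrated (observation, action) pairs."""

    def __init__(self):
        self.policy = {}          # observation signature -> action
        self.demonstrations = []  # (observation, action) pairs from the pilot

    def confident(self, observation):
        return self._key(observation) in self.policy

    def act(self, observation):
        return self.policy[self._key(observation)]

    def record_demonstration(self, observation, action):
        self.demonstrations.append((observation, action))

    def retrain(self):
        # "Training" here is just memorization; a real system would fit a
        # model that generalizes beyond the exact demonstrations.
        for obs, action in self.demonstrations:
            self.policy[self._key(obs)] = action

    @staticmethod
    def _key(observation):
        return tuple(sorted(observation.items()))


def run_shift(robot, pilot, scenarios):
    for obs in scenarios:
        if robot.confident(obs):
            robot.act(obs)                      # robot handles it alone
        else:
            action = pilot.choose_action(obs)   # human pilot takes control
            robot.record_demonstration(obs, action)
    robot.retrain()                             # learn from the pilot's choices


if __name__ == "__main__":
    robot, pilot = WarehouseRobot(), PilotInterface()
    scenarios = [{"bin_a": random.random(), "bin_b": random.random()}
                 for _ in range(5)]
    run_shift(robot, pilot, scenarios)  # first shift: the pilot does the work
    run_shift(robot, pilot, scenarios)  # second shift: the robot repeats it
    print(f"Learned {len(robot.policy)} behaviours from the pilot")
```

The ethical point in Gildert’s quote falls straight out of the sketch: whatever the pilot does, mannerisms, habits and biases included, is exactly what ends up in the learned policy.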

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning, AI, and …). Note: If you’re in Vancouver on Oct. 26, 2017 and interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release (h/t ScienceDaily March 28, 2017 news item),

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’), or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Robots in Vancouver and in Canada (one of two)

This piece just started growing. It started with robot ethics, moved on to sexbots and news of an upcoming Canadian robotics roadmap. Then, it became a two-part posting, with the robotics strategy (roadmap) moving to part two along with robots and popular culture and a further exploration of robot and AI ethics issues.

What is a robot?

There are lots of robots; some are macroscale and others are at the micro and nanoscales (see my Sept. 22, 2017 posting for the latest nanobot). Here’s a definition from the Robot Wikipedia entry that covers all the scales. (Note: Links have been removed),

A robot is a machine—especially one programmable by a computer— capable of carrying out a complex series of actions automatically.[2] Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.

Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. [emphasis mine] By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.

We may think we’ve invented robots but the idea has been around for a very long time (from the Robot Wikipedia entry; Note: Links have been removed),

Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus[18] (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the Cretan island of Europa from pirates.

In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) “applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures.”[19][20] In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called “The Pigeon”. Hero of Alexandria (10–70 AD), a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.[21]

The 11th century Lokapannatti tells of how the Buddha’s relics were protected by mechanical robots (bhuta vahana yanta), from the kingdom of Roma visaya (Rome); until they were disarmed by King Ashoka. [22] [23]

In ancient China, the 3rd century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an ‘artificer’. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical ‘handiwork’ made of leather, wood, and artificial organs.[14] There are also accounts of flying automata in the Han Fei Zi and other texts, which attributes the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds (ma yuan) that could successfully fly.[17] In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours.

The beginning of automata is associated with the invention of Su Song’s early astronomical clock tower, which featured mechanical figurines that chimed the hours.[24][25][26] His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.[26]

In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci’s notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo’s robot, able to sit up, wave its arms and move its head and jaw.[28] The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.

In Japan, complex animal and human automata were built between the 17th to 19th centuries, with many described in the 18th century Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet.[29] Different variations of the karakuri existed: the Butai karakuri, which were used in theatre, the Zashiki karakuri, which were small and used in homes, and the Dashi karakuri which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.

The term robot was coined by a Czech writer (from the Robot Wikipedia entry; Note: Links have been removed)

‘Robot’ was first applied as a term for artificial automata in a 1920 play R.U.R. by the Czech writer, Karel Čapek. However, Josef Čapek was named by his brother Karel as the true inventor of the term robot.[6][7] The word ‘robot’ itself was not new, having been in Slavic language as robota (forced laborer), a term which classified those peasants obligated to compulsory service under the feudal system widespread in 19th century Europe (see: Robot Patent).[37][38] Čapek’s fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal robota class eloquently fit the imagination of a new class of manufactured, artificial workers.

I’m particularly fascinated by how long humans have been imagining and creating robots.

Robot ethics in Vancouver

The Westender has run what I believe is the first article by a local (Vancouver, Canada) mainstream media outlet on the topic of robots and ethics. Tessa Vikander’s Sept. 14, 2017 article highlights two local researchers, Ajung Moon and Mark Schmidt, and a local social media company’s (Hootsuite) analytics director, Nik Pai. Vikander opens her piece with an ethical dilemma (Note: Links have been removed),

Emma is 68, in poor health and an alcoholic who has been told by her doctor to stop drinking. She lives with a care robot, which helps her with household tasks.

Unable to fix herself a drink, she asks the robot to do it for her. What should the robot do? Would the answer be different if Emma owns the robot, or if she’s borrowing it from the hospital?

This is the type of hypothetical, ethical question that Ajung Moon, director of the Open Roboethics Initiative [ORI], is trying to answer.

According to an ORI study, half of respondents said ownership should make a difference, and half said it shouldn’t. With society so torn on the question, Moon is trying to figure out how engineers should be programming this type of robot.

A Vancouver resident, Moon is dedicating her life to helping those in the decision-chair make the right choice. The question of the care robot is but one ethical dilemma in the quickly advancing world of artificial intelligence.

At the most sensationalist end of the scale, one form of AI that’s recently made headlines is the sex robot, which has a human-like appearance. A report from the Foundation for Responsible Robotics says that intimacy with sex robots could lead to greater social isolation [emphasis mine] because they desensitize people to the empathy learned through human interaction and mutually consenting relationships.

I’ll get back to the impact that robots might have on us in part two but first,

Sexbots, could they kill?

For more about sexbots in general, Alessandra Maldonado wrote an Aug. 10, 2017 article for salon.com about them (Note: A link has been removed),

Artificial intelligence has given people the ability to have conversations with machines like never before, such as speaking to Amazon’s personal assistant Alexa or asking Siri for directions on your iPhone. But now, one company has widened the scope of what it means to connect with a technological device and created a whole new breed of A.I. — specifically for sex-bots.

Abyss Creations has been in the business of making hyperrealistic dolls for 20 years, and by the end of 2017, they’ll unveil their newest product, an anatomically correct robotic sex toy. Matt McMullen, the company’s founder and CEO, explains the goal of sex robots is companionship, not only a physical partnership. “Imagine if you were completely lonely and you just wanted someone to talk to, and yes, someone to be intimate with,” he said in a video depicting the sculpting process of the dolls. “What is so wrong with that? It doesn’t hurt anybody.”

Maldonado also embedded this video into her piece,

A friend of mine described it as creepy. Specifically, we were discussing why someone would want to programme ‘insecurity’ as a desirable trait in a sexbot.

Marc Beaulieu’s concept of a desirable trait in a sexbot is one that won’t kill him, according to his Sept. 25, 2017 article on Canadian Broadcasting Corporation (CBC) News online (Note: Links have been removed),

Harmony has a charming Scottish lilt, albeit a bit staccato and canny. Her eyes dart around the room, her chin dips as her eyebrows raise in coquettish fashion. Her face manages expressions that are impressively lifelike. That face comes in 31 different shapes and 5 skin tones, with or without freckles and it sticks to her cyber-skull with magnets. Just peel it off and switch it out at will. In fact, you can choose Harmony’s eye colour, body shape (in great detail) and change her hair too. Harmony, of course, is a sex bot. A very advanced one. How advanced is she? Well, if you have $12,332 CAD to put towards a talkative new home appliance, REALBOTIX says you could be having a “conversation” and relations with her come January. Happy New Year.

Caveat emptor though: one novel bonus feature you might also get with Harmony is her ability to eventually murder you in your sleep. And not because she wants to.

Dr Nick Patterson, faculty of Science Engineering and Built Technology at Deakin University in Australia is lending his voice to a slew of others warning us to slow down and be cautious as we steadily approach Westworldian levels of human verisimilitude with AI tech. Surprisingly, Patterson didn’t regurgitate the narrative we recognize from the popular sci-fi (increasingly non-fi actually) trope of a dystopian society’s futile resistance to a robocalypse. He doesn’t think Harmony will want to kill you. He thinks she’ll be hacked by a code savvy ne’er-do-well who’ll want to snuff you out instead. …

Embedded in Beaulieu’s article is another video of the same sexbot profiled earlier. Her programmer seems to have learned a thing or two (he no longer inputs any traits as you’re watching),

I guess you could get one for Christmas this year if you’re willing to wait for an early 2018 delivery and aren’t worried about hackers turning your sexbot into a killer. While the killer aspect might seem farfetched, it turns out it’s not the only sexbot/hacker issue.

Sexbots as spies

This Oct. 5, 2017 story by Karl Bode for Techdirt points out that sex toys that are ‘smart’ can easily be hacked for any reason including some mischief (Note: Links have been removed),

One “smart dildo” manufacturer was recently forced to shell out $3.75 million after it was caught collecting, err, “usage habits” of the company’s customers. According to the lawsuit, Standard Innovation’s We-Vibe vibrator collected sensitive data about customer usage, including “selected vibration settings,” the device’s battery life, and even the vibrator’s “temperature.” At no point did the company apparently think it was a good idea to clearly inform users of this data collection.

But security is also lacking elsewhere in the world of internet-connected sex toys. Alex Lomas of Pentest Partners recently took a look at the security in many internet-connected sex toys, and walked away arguably unimpressed. Using a Bluetooth “dongle” and antenna, Lomas drove around Berlin looking for openly accessible sex toys (he calls it “screwdriving,” in a riff off of wardriving). He subsequently found it’s relatively trivial to discover and hijack everything from vibrators to smart butt plugs — thanks to the way Bluetooth Low Energy (BLE) connectivity works:

“The only protection you have is that BLE devices will generally only pair with one device at a time, but range is limited and if the user walks out of range of their smartphone or the phone battery dies, the adult toy will become available for others to connect to without any authentication. I should say at this point that this is purely passive reconnaissance based on the BLE advertisements the device sends out – attempting to connect to the device and actually control it without consent is not something I or you should do. But now one could drive the Hush’s motor to full speed, and as long as the attacker remains connected over BLE and not the victim, there is no way they can stop the vibrations.”

Does that make you think twice about a sexbot?
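If you’re curious what Lomas’s passive reconnaissance actually involves, it amounts to little more than listening for the Bluetooth Low Energy advertisements he mentions. Here is a minimal sketch using Python and the bleak library; the name keywords below are just the two devices named in the excerpt, and, as Lomas himself says, connecting to or controlling someone else’s device is not something you should do.

```python
# Passive scan for Bluetooth Low Energy advertisements, which is all that
# "screwdriving" relies on. Scanning is passive; connecting to or controlling
# a device you do not own is another matter entirely.
import asyncio

from bleak import BleakScanner

# Name fragments to flag, based on the devices named in the excerpt above.
KEYWORDS = ("we-vibe", "hush")


async def scan(seconds: float = 10.0) -> None:
    devices = await BleakScanner.discover(timeout=seconds)
    for device in devices:
        name = (device.name or "").lower()
        flag = "  <- matches a toy named above" if any(k in name for k in KEYWORDS) else ""
        print(f"{device.address}  {device.name!r}{flag}")


if __name__ == "__main__":
    asyncio.run(scan())
```

That an unauthenticated stranger’s laptop can see these devices advertising themselves at all is the whole point of Lomas’s warning.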

Robots and artificial intelligence

Getting back to the Vikander article (Sept. 14, 2017), Moon or Vikander or both seem to have conflated artificial intelligence with robots in this section of the article,

As for the building blocks that have thrust these questions [care robot quandary mentioned earlier] into the spotlight, Moon explains that AI in its basic form is when a machine uses data sets or an algorithm to make a decision.

“It’s essentially a piece of output that either affects your decision, or replaces a particular decision, or supports you in making a decision.” With AI, we are delegating decision-making skills or thinking to a machine, she says.

Although we’re not currently surrounded by walking, talking, independently thinking robots, the use of AI [emphasis mine] in our daily lives has become widespread.

For Vikander, the conflation may have been due to concerns about maintaining her word count; for Moon, it may have been one of convenience or a consequence of how the jargon is evolving, with ‘robot’ meaning a machine specifically or, sometimes, a machine with AI, or AI only.
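Whatever the reason for the conflation, Moon’s ‘basic form’ definition is easy to make concrete. Here is a deliberately tiny, hypothetical sketch (toy numbers, not any real system) of the difference she is pointing at: a hard-coded rule versus a decision whose threshold comes from a data set, where the output can either replace your decision outright or merely support it.

```python
# A hard-coded rule versus a decision derived from a data set. Toy numbers.

APPROVAL_LIMIT = 5000  # fixed rule: no data involved at all


def rule_based_decision(amount: float) -> str:
    return "approve" if amount <= APPROVAL_LIMIT else "refer to a human"


# "AI in its basic form": derive the threshold from a data set instead.
past_amounts_that_went_fine = [1200, 800, 4300, 2500, 3900, 6100]


def data_driven_threshold(history: list) -> float:
    # Crude "learning": one-and-a-half times the historical average.
    return 1.5 * sum(history) / len(history)


def ai_decision(amount: float, history: list, mode: str = "replace") -> str:
    threshold = data_driven_threshold(history)
    verdict = "approve" if amount <= threshold else "refer to a human"
    if mode == "support":
        # Decision support: the output informs a person rather than deciding.
        return f"suggestion: {verdict} (data-derived threshold {threshold:.0f})"
    return verdict  # decision replacement: the machine decides outright


if __name__ == "__main__":
    print(rule_based_decision(4800))                                    # approve
    print(ai_decision(4800, past_amounts_that_went_fine))               # refer to a human
    print(ai_decision(4800, past_amounts_that_went_fine, mode="support"))
```

The point isn’t the arithmetic; it’s that the decision now depends on a data set, which is exactly where the questions about delegation and bias come from.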

To be precise, not all robots have AI and not all AI is found in robots. It’s a distinction that may be more important for people developing robots and/or AI but it also seems to make a difference where funding is concerned. In a March 24, 2017 posting about the 2017 Canadian federal budget I noticed this,

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

This brings me to a recent set of meetings held in Vancouver to devise a Canadian robotics roadmap, which suggests the robotics folks feel they need specific representation and funding.

See part two for the rest.