
I found it at the movies: a commentary on/review of “Films from the Future”

Kudos to anyone who recognized the reference to Pauline Kael (she changed film criticism forever) and her book “I Lost it at the Movies.” Of course, her book title was a bit of sexual innuendo, quite risqué for an important film critic in 1965 but appropriate for a period (the 1960s) associated with a sexual revolution. (There’s more about the 1960s sexual revolution in the US, along with mention of a prior sexual revolution in the 1920s, in this Wikipedia entry.)

The title for this commentary is based on an anecdote from Dr. Andrew Maynard’s (director of the Arizona State University [ASU] Risk Innovation Lab) popular science and technology book, “Films from the Future: The Technology and Morality of Sci-Fi Movies.”

The ‘title-inspiring’ anecdote concerns Maynard’s first viewing of ‘2001: A Space Odyssey’, when as a rather “bratty” 16-year-old who preferred to read science fiction, he discovered new ways of seeing and imagining the world. Maynard isn’t explicit about when he became a ‘techno nerd’ or how movies gave him an experience books couldn’t, but presumably at 16 he was already gearing up for a career in the sciences. That ‘movie’ revelation, received in front of a black and white television on January 1, 1982, eventually led him to write “Films from the Future.” (He has a PhD in physics which he is now applying to the field of risk innovation. For a more detailed description of Dr. Maynard and his work, there’s his ASU profile webpage and, of course, the introduction to his book.)

The book is quite timely. I don’t know how many people have noticed, but science and scientific innovation are being covered more frequently in the media than they have been in many years. Science fairs and festivals are being founded on what seems to be a daily basis, and you can now find science in art galleries. (Not to mention the movies and television, where science topics are covered in comic book adaptations, in comedy, and in standard science fiction style.) Much of this activity is centered on what are called ’emerging technologies’. These technologies are why people argue for what’s known as ‘blue sky’ or ‘basic’ or ‘fundamental’ science, for without that science there would be no emerging technology.

Films from the Future

Isn’t reading the Table of Contents (ToC) the best way to approach a book? (From Films from the Future; Note: The formatting has been altered),

Table of Contents
Chapter One
In the Beginning 14
Beginnings 14
Welcome to the Future 16
The Power of Convergence 18
Socially Responsible Innovation 21
A Common Point of Focus 25
Spoiler Alert 26
Chapter Two
Jurassic Park: The Rise of Resurrection Biology 27
When Dinosaurs Ruled the World 27
De-Extinction 31
Could We, Should We? 36
The Butterfly Effect 39
Visions of Power 43
Chapter Three
Never Let Me Go: A Cautionary Tale of Human Cloning 46
Sins of Futures Past 46
Cloning 51
Genuinely Human? 56
Too Valuable to Fail? 62
Chapter Four
Minority Report: Predicting Criminal Intent 64
Criminal Intent 64
The “Science” of Predicting Bad Behavior 69
Criminal Brain Scans 74
Machine Learning-Based Precognition 77
Big Brother, Meet Big Data 79
Chapter Five
Limitless: Pharmaceutically-enhanced Intelligence 86
A Pill for Everything 86
The Seduction of Self-Enhancement 89
Nootropics 91
If You Could, Would You? 97
Privileged Technology 101
Our Obsession with Intelligence 105
Chapter Six
Elysium: Social Inequity in an Age of Technological Extremes 110
The Poor Shall Inherit the Earth 110
Bioprinting Our Future Bodies 115
The Disposable Workforce 119
Living in an Automated Future 124
Chapter Seven
Ghost in the Shell: Being Human in an Augmented Future 129
Through a Glass Darkly 129
Body Hacking 135
More than “Human”? 137
Plugged In, Hacked Out 142
Your Corporate Body 147
Chapter Eight
Ex Machina: AI and the Art of Manipulation 154
Plato’s Cave 154
The Lure of Permissionless Innovation 160
Technologies of Hubris 164
Superintelligence 169
Defining Artificial Intelligence 172
Artificial Manipulation 175
Chapter Nine
Transcendence: Welcome to the Singularity 180
Visions of the Future 180
Technological Convergence 184
Enter the Neo-Luddites 190
Techno-Terrorism 194
Exponential Extrapolation 200
Make-Believe in the Age of the Singularity 203
Chapter Ten
The Man in the White Suit: Living in a Material World 208
There’s Plenty of Room at the Bottom 208
Mastering the Material World 213
Myopically Benevolent Science 220
Never Underestimate the Status Quo 224
It’s Good to Talk 227
Chapter Eleven
Inferno: Immoral Logic in an Age of Genetic Manipulation 231
Decoding Make-Believe 231
Weaponizing the Genome 234
Immoral Logic? 238
The Honest Broker 242
Dictating the Future 248
Chapter Twelve
The Day After Tomorrow: Riding the Wave of Climate Change 251
Our Changing Climate 251
Fragile States 255
A Planetary “Microbiome” 258
The Rise of the Anthropocene 260
Building Resiliency 262
Geoengineering the Future 266
Chapter Thirteen
Contact: Living by More than Science Alone 272
An Awful Waste of Space 272
More than Science Alone 277
Occam’s Razor 280
What If We’re Not Alone? 283
Chapter Fourteen
Looking to the Future 288
Acknowledgments 293

The ToC gives the reader a pretty good clue as to where the author is going with his book, and Maynard explains how he chose his movies in his introductory chapter (from Films from the Future),

“There are some quite wonderful science fiction movies that didn’t make the cut because they didn’t fit the overarching narrative (Blade Runner and its sequel Blade Runner 2049, for instance, and the first of the Matrix trilogy). There are also movies that bombed with the critics, but were included because they ably fill a gap in the bigger story around emerging and converging technologies. Ultimately, the movies that made the cut were chosen because, together, they create an overarching narrative around emerging trends in biotechnologies, cybertechnologies, and materials-based technologies, and they illuminate a broader landscape around our evolving relationship with science and technology. And, to be honest, they are all movies that I get a kick out of watching.” (p. 17)

Jurassic Park (Chapter Two)

Dinosaurs do not interest me—they never have. Despite my profound indifference, I did see the movie, Jurassic Park, when it was first released (someone talked me into going). And I am still profoundly indifferent. Thankfully, Dr. Maynard finds meaning in the film,

Jurassic Park is unabashedly a movie about dinosaurs. But it’s also a movie about greed, ambition, genetic engineering, and human folly—all rich pickings for thinking about the future, and what could possibly go wrong. (p. 28)

What really stands out with Jurassic Park, over twenty-five years later, is how it reveals a very human side of science and technology. This comes out in questions around when we should tinker with technology and when we should leave well enough alone. But there is also a narrative here that appears time and time again with the movies in this book, and that is how we get our heads around the sometimes oversized roles mega-entrepreneurs play in dictating how new tech is used, and possibly abused. These are all issues that are just as relevant now as they were in 1993, and are front and center of ensuring that the technology-enabled future we’re building is one where we want to live, and not one where we’re constantly fighting for our lives. (pp. 30-1)

He also describes a connection to current trends in biotechnology,


In a far corner of Siberia, two Russians—Sergey Zimov and his son Nikita—are attempting to recreate the Ice Age. More precisely, their vision is to reconstruct the landscape and ecosystem of northern Siberia in the Pleistocene, a period in Earth’s history that stretches from around two and a half million years ago to eleven thousand years ago. This was a time when the environment was much colder than now, with huge glaciers and ice sheets flowing over much of the Earth’s northern hemisphere. It was also a time when humans coexisted with animals that are long extinct, including saber-tooth cats, giant ground sloths, and woolly mammoths.

The Zimovs’ ambitions are an extreme example of “Pleistocene rewilding,” a movement to reintroduce relatively recently extinct large animals, or their close modern-day equivalents, to regions where they were once common. In the case of the Zimovs, the father-and-son team believe that, by reconstructing the Pleistocene ecosystem in the Siberian steppes and elsewhere, they can slow down the impacts of climate change on these regions. These areas are dominated by permafrost, ground that never thaws through the year. Permafrost ecosystems have developed and survived over millennia, but a warming global climate (a theme we’ll come back to in chapter twelve and the movie The Day After Tomorrow) threatens to catastrophically disrupt them, and as this happens, the impacts on biodiversity could be devastating. But what gets climate scientists even more worried is potentially massive releases of trapped methane as the permafrost disappears.

Methane is a powerful greenhouse gas—some eighty times more effective at exacerbating global warming than carbon dioxide—and large-scale releases from warming permafrost could trigger catastrophic changes in climate. As a result, finding ways to keep it in the ground is important. And here the Zimovs came up with a rather unusual idea: maintaining the stability of the environment by reintroducing long-extinct species that could help prevent its destruction, even in a warmer world. It’s a wild idea, but one that has some merit. As a proof of concept, though, the Zimovs needed somewhere to start. And so they set out to create a park for de-extinct Siberian animals: Pleistocene Park.

Pleistocene Park is by no stretch of the imagination a modern-day Jurassic Park. The dinosaurs in Hammond’s park date back to the Mesozoic period, from around 250 million years ago to sixty-five million years ago. By comparison, the Pleistocene is relatively modern history, ending a mere eleven and a half thousand years ago. And the vision behind Pleistocene Park is not thrills, spills, and profit, but the serious use of science and technology to stabilize an increasingly unstable environment. Yet there is one thread that ties them together, and that’s using genetic engineering to reintroduce extinct species. In this case, the species in question is warm-blooded and furry: the woolly mammoth.

The idea of de-extinction, or bringing back species from extinction (it’s even called “resurrection biology” in some circles), has been around for a while. It’s a controversial idea, and it raises a lot of tough ethical questions. But proponents of de-extinction argue that we’re losing species and ecosystems at such a rate that we can’t afford not to explore technological interventions to help stem the flow.

Early approaches to bringing species back from the dead have involved selective breeding. The idea was simple—if you have modern ancestors of a recently extinct species, selectively breeding specimens that have a higher genetic similarity to their forebears can potentially help reconstruct their genome in living animals. This approach is being used in attempts to bring back the aurochs, an ancestor of modern cattle. But it’s slow, and it depends on the fragmented genome of the extinct species still surviving in its modern-day equivalents.

An alternative to selective breeding is cloning. This involves finding a viable cell, or cell nucleus, in an extinct but well-preserved animal and growing a new living clone from it. It’s definitely a more appealing route for impatient resurrection biologists, but it does mean getting your hands on intact cells from long-dead animals and devising ways to “resurrect” these, which is no mean feat. Cloning has potential when it comes to recently extinct species whose cells have been well preserved—for instance, where the whole animal has become frozen in ice. But it’s still a slow and extremely limited option.

Which is where advances in genetic engineering come in.

The technological premise of Jurassic Park is that scientists can reconstruct the genome of long-dead animals from preserved DNA fragments. It’s a compelling idea, if you think of DNA as a massively long and complex instruction set that tells a group of biological molecules how to build an animal. In principle, if we could reconstruct the genome of an extinct species, we would have the basic instruction set—the biological software—to reconstruct individual members of it.

The bad news is that DNA-reconstruction-based de-extinction is far more complex than this. First you need intact fragments of DNA, which is not easy, as DNA degrades easily (and is pretty much impossible to obtain, as far as we know, for dinosaurs). Then you need to be able to stitch all of your fragments together, which is akin to completing a billion-piece jigsaw puzzle without knowing what the final picture looks like. This is a Herculean task, although with breakthroughs in data manipulation and machine learning, scientists are getting better at it. But even when you have your reconstructed genome, you need the biological “wetware”—all the stuff that’s needed to create, incubate, and nurture a new living thing, like eggs, nutrients, a safe space to grow and mature, and so on. Within all this complexity, it turns out that getting your DNA sequence right is just the beginning of translating that genetic code into a living, breathing entity. But in some cases, it might be possible.

In 2013, Sergey Zimov was introduced to the geneticist George Church at a conference on de-extinction. Church is an accomplished scientist in the field of DNA analysis and reconstruction, and a thought leader in the field of synthetic biology (which we’ll come back to in chapter nine). It was a match made in resurrection biology heaven. Zimov wanted to populate his Pleistocene Park with mammoths, and Church thought he could see a way of achieving this.

What resulted was an ambitious project to de-extinct the woolly mammoth. Church and others who are working on this have faced plenty of hurdles. But the technology has been advancing so fast that, as of 2017, scientists were predicting they would be able to reproduce the woolly mammoth within the next two years.

One of those hurdles was the lack of solid DNA sequences to work from. Frustratingly, although there are many instances of well preserved woolly mammoths, their DNA rarely survives being frozen for tens of thousands of years. To overcome this, Church and others have taken a different tack: Take a modern, living relative of the mammoth, and engineer into it traits that would allow it to live on the Siberian tundra, just like its woolly ancestors.

Church’s team’s starting point has been the Asian elephant. This is their source of base DNA for their “woolly mammoth 2.0”—their starting source code, if you like. So far, they’ve identified fifty plus gene sequences they think they can play with to give their modern-day woolly mammoth the traits it would need to thrive in Pleistocene Park, including a coat of hair, smaller ears, and a constitution adapted to cold.

The next hurdle they face is how to translate the code embedded in their new woolly mammoth genome into a living, breathing animal. The most obvious route would be to impregnate a female Asian elephant with a fertilized egg containing the new code. But Asian elephants are endangered, and no one’s likely to allow such cutting edge experimentation on the precious few that are still around, so scientists are working on an artificial womb for their reinvented woolly mammoth. They’re making progress with mice and hope to crack the motherless mammoth challenge relatively soon.

It’s perhaps a stretch to call this creative approach to recreating a species (or “reanimation” as Church refers to it) “de-extinction,” as what is being formed is a new species. … (pp. 31-4)

This selection illustrates what Maynard does so very well throughout the book: he uses each film as a launching pad for a clear, readable description of the relevant science, so you understand why the premise is likely, unlikely, or pure fantasy, while linking it to contemporary practices, efforts, and issues. In the context of Jurassic Park, Maynard goes on to raise some fascinating questions, such as: Should we revive animals rendered extinct (due to obsolescence or inability to adapt to new conditions) when we could develop new animals?

General thoughts

‘Films from the Future’ offers readable (to non-scientific types) science, lively writing, and the occasional memoirish anecdote. As well, Dr. Maynard raises the curtain on aspects of the scientific enterprise that most of us do not get to see. For example, the meeting between Sergey Zimov and George Church and how it led to new ‘de-extinction’ work. He also describes the problems that the scientists encountered and are encountering. This is in direct contrast to how scientific work is usually presented in the news media: as one glorious breakthrough after the next.

Maynard does discuss issues of social inequality, power, and ownership. For example, who owns your transplant or your data? Puzzlingly, he doesn’t touch on the current environment in which scientists in the US and elsewhere are encouraged/pressured to start up companies commercializing their work.

Nor is there any mention of how universities are participating in this grand business experiment often called ‘innovation’. (My March 15, 2017 posting describes an outcome for the CRISPR [gene editing system] patent fight taking place between Harvard University’s & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley, and my Sept. 11, 2018 posting about an art/science exhibit in Vancouver [Canada] provides an update for round 2 of the Broad Institute vs. UC Berkeley patent fight [scroll down about 65% of the way].) *To read about how my ‘cultural blindness’ shows up here, scroll down to the single asterisk at the end.*

There’s a foray through machine learning and big data as applied to predictive policing in Maynard’s ‘Minority Report’ chapter (my November 23, 2017 posting describes Vancouver’s predictive policing initiative [no psychics involved], the first such in Canada). There’s no mention of surveillance technology, which, if I recall properly, was part of the movie’s future environment, deployed both by the state and by corporations. (Mia Armstrong’s November 15, 2018 article for Slate on Chinese surveillance being exported to Venezuela provides interesting insight.)

The gaps are interesting and various. This of course points to a problem all science writers have when attempting an overview of science. (Carl Zimmer’s latest, ‘She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity’, a doorstopping 574 pages, also has some gaps despite his focus on heredity.)

Maynard has worked hard to give a comprehensive overview in a remarkably compact 279 pages while developing his theme about science and the human element. In other words, science is not monolithic; it’s created by human beings and subject to all the flaws and benefits to which humanity’s efforts are always subject: scientists are people too.

The readership for ‘Films from the Future’ spans from the mildly interested science reader to someone like me who’s been writing/blogging about these topics (more or less) for about 10 years. I learned a lot reading this book.

Next time (I’m hopeful there’ll be a next time), Maynard might want to describe the parameters he’s set for his book in more detail than is possible in his chapter headings. He could have mentioned that he’s not a cinéaste, so his descriptions of the movies are very much focused on the story as conveyed through words. He doesn’t mention colour palettes, camera angles, or even cultural lenses.

Take, for example, his chapter on ‘Ghost in the Shell’. Focused on the Japanese animation film and not the live-action Hollywood version, he talks about human enhancement and cyborgs. The Japanese have a different take on robots, inanimate objects, and, I assume, cyborgs than is found in Canada or the US or Great Britain, for that matter (according to a colleague of mine, an Englishwoman who lived in Japan for ten or more years). There’s also the chapter on the Ealing comedy, The Man in The White Suit, an English film from the 1950s. That too has a cultural (as well as historical) flavour, but since Maynard is from England, he may take that cultural flavour for granted. ‘Never Let Me Go’ in Chapter Three was also a UK production, albeit far more recent than the Ealing comedy, and it’s interesting to consider how a UK production about cloning might differ from a US or Chinese or … production on the topic. I am hearkening back to Maynard’s anecdote about movies giving him new ways of seeing and imagining the world.

There’s a simple corrective: a couple of sentences in Maynard’s introductory chapter cautioning that an in-depth exploration of ‘cultural lenses’ was not possible without expanding the book to an unreadable size, followed by a sentence in each of those two chapters noting that there are cultural differences.

One area where I had a significant problem was with regard to being “programmed” and having “instinctual” behaviour,

As a species, we are embarrassingly programmed to see “different” as “threatening,” and to take instinctive action against it. It’s a trait that’s exploited in many science fiction novels and movies, including those in this book. If we want to see the rise of increasingly augmented individuals, we need to be prepared for some social strife. (p. 136)

These concepts are much debated in the social sciences, and there are arguments for and against ‘instincts regarding strangers and their possible differences’. I gather Dr. Maynard hews to the ‘instinct to defend/attack’ school of thought.

One final quandary: there was no sex, and I was expecting it in the Ex Machina chapter, especially now that sexbots are about to take over the world (I exaggerate). Certainly, if you’re talking about “social strife,” then sexbots would seem to be a fruitful line of inquiry, especially when there’s talk of how they could benefit families (my August 29, 2018 posting). Again, there could have been a sentence explaining why Maynard focused almost exclusively in this chapter on the discussions about artificial intelligence and superintelligence.

Taken in the context of the book, these are trifling issues and shouldn’t stop you from reading Films from the Future. What Maynard has accomplished here is impressive and I hope it’s just the beginning.

Final note

Bravo Andrew! (Note: We’ve been ‘internet acquaintances/friends’ since the first year I started blogging. When I’m referring to him in his professional capacity, he’s Dr. Maynard, and when it’s not strictly in his professional capacity, it’s Andrew. For this commentary/review I wanted to emphasize his professional status.)

If you need to see a few more samples of Andrew’s writing, there’s a Nov. 15, 2018 essay on The Conversation, “Sci-fi movies are the secret weapon that could help Silicon Valley grow up,” and a Nov. 21, 2018 article on slate.com, “The True Cost of Stain-Resistant Pants: The 1951 British comedy The Man in the White Suit anticipated our fears about nanotechnology.” Enjoy.

****Added at 1700 hours on Nov. 22, 2018: You can purchase Films from the Future here.

*Nov. 23, 2018: I should have been more specific and said ‘academic scientists’. In Canada, the great majority of scientists are academics, to the point where the OECD (Organization for Economic Cooperation and Development) has noted that, amongst industrialized countries, Canada has very few industrial scientists in comparison to the others.

Sexbots, sexbot ethics, families, and marriage

Setting the stage

Can we? Should we? Is this really a good idea? I believe those ships have sailed where sexbots are concerned, since the issue is no longer whether we can or should but rather what to do now that we have them. My Oct. 17, 2017 posting, ‘Robots in Vancouver and in Canada (one of two)’, features Harmony, the first (I believe) commercial AI (artificial intelligence)-enhanced sex robot in the US. The company was getting ready to start shipping the bot either for Christmas 2017 or in early 2018.

Ethical quandaries?

Things have moved a little more quickly than I would have expected had I thought ahead. An April 5, 2018 essay (h/t phys.org) for The Conversation by Victoria Brooks, lecturer in law at the University of Westminster (UK), lays out some of the ethical issues (Note: Links have been removed),

Late in 2017 at a tech fair in Austria, a sex robot was reportedly “molested” repeatedly and left in a “filthy” state. The robot, named Samantha, received a barrage of male attention, which resulted in her sustaining two broken fingers. This incident confirms worries that the possibility of fully functioning sex robots raises both tantalising possibilities for human desire (by mirroring human/sex-worker relationships), as well as serious ethical questions.

So what should be done? The campaign to “ban” sex robots, as the computer scientist Kate Devlin has argued, is only likely to lead to a lack of discussion. Instead, she hypothesises that many ways of sexual and social inclusivity could be explored as a result of human-robot relationships.

To be sure, there are certain elements of relationships between humans and sex workers that we may not wish to repeat. But to me, it is the ethical aspects of the way we think about human-robot desire that are particularly key.

Why? Because we do not even agree yet on what sex is. Sex can mean lots of different things for different bodies – and the types of joys and sufferings associated with it are radically different for each individual body. We are only just beginning to understand and know these stories. But with Europe’s first sex robot brothel open in Barcelona and the building of “Harmony”, a talking sex robot in California, it is clear that humans are already contemplating imposing our barely understood sexual ethic upon machines.

I think that most of us will experience some discomfort on hearing Samantha’s story. And it’s important that, just because she’s a machine, we do not let ourselves “off the hook” by making her yet another victim and heroine who survived an encounter, only for it to be repeated. Yes, she is a machine, but does this mean it is justifiable to act destructively towards her? Surely the fact that she is in a human form makes her a surface on which human sexuality is projected, and symbolic of a futuristic human sexuality. If this is the case, then Samatha’s [sic] case is especially sad.

It is Devlin who has asked the crucial question: whether sex robots will have rights. “Should we build in the idea of consent,” she asks? In legal terms, this would mean having to recognise the robot as human – such is the limitation of a law made by and for humans.

Suffering is a way of knowing that you, as a body, have come out on the “wrong” side of an ethical dilemma. [emphasis mine] This idea of an “embodied” ethic understood through suffering has been developed on the basis of the work of the famous philosopher Spinoza and is of particular use for legal thinkers. It is useful as it allows us to judge rightness by virtue of the real and personal experience of the body itself, rather than judging by virtue of what we “think” is right in connection with what we assume to be true about their identity.

This helps us with Samantha’s case, since it tells us that in accordance with human desire, it is clear she would not have wanted what she got. The contact Samantha received was distinctly human in the sense that this case mirrors some of the most violent sexual offences cases. While human concepts such as “law” and “ethics” are flawed, we know we don’t want to make others suffer. We are making these robot lovers in our image and we ought not pick and choose whether to be kind to our sexual partners, even when we choose to have relationships outside of the “norm”, or with beings that have a supposedly limited consciousness, or even no (humanly detectable) consciousness.

Brooks makes many interesting points, not all of them in the excerpts seen here, but one question not raised in the essay is whether or not the bot itself suffered. It’s a point that I imagine proponents of ‘treating your sex bot however you like’ are certain to raise. It’s also a question Canadians may need to answer sooner rather than later, now that a ‘sex doll brothel’ is about to open in Toronto. However, before getting to that news bit, there’s an interview with a man, his sexbot, and his wife.

The sexbot at home

In fact, I have two interviews; the first, included here, was with CBC (Canadian Broadcasting Corporation) radio and originally aired October 29, 2017. Here’s part of the transcript (Note: A link has been removed),

“She’s [Samantha] quite an elegant kind of girl,” says Arran Lee Squire, who is sales director for the company that makes her and also owns one himself.

And unlike other dolls like her, she’ll resist sex if she isn’t in the mood.

“If you touch her, say, on her sensitive spots on the breasts, for example, straight away, and you don’t touch her hands or kiss her, she might say, ‘Oh, I’m not ready for that,'” Arran says.

He says she’ll even synchronize her orgasm to the user’s.

But Arran emphasized that her functions go beyond the bedroom.

Samantha has a “family mode,” in which she can talk about science, animals and philosophy. She’ll give you motivational quotes if you’re feeling down.

At Arran’s house, Samantha interacts with his two kids. And when they’ve gone to bed, she’ll have sex with him, but only with his wife involved.

There’s also this Sept. 12, 2017 ITV This Morning with Phillip & Holly broadcast interview (running time: 6 mins. 19 secs.),

I can imagine that if I were a child in that household I’d be tempted to put the sexbot into ‘sexy mode’, preferably unsupervised by my parents. Also, will the parents be using it, at some point, for sex education?

Canadian perspective 1: Sure, it could be good for your marriage

Prior to the potential sex doll brothel in Toronto (more about that coming up), there was a flurry of interest in Marina Adshade’s contribution to the book, Robot Sex: Social and Ethical Implications, from an April 18, 2018 news item on The Tyee,

Sex robots may soon be a reality. However, little research has been done on the social, philosophical, moral and legal implications of robots specifically designed for sexual gratification.

In a chapter written for the book Robot Sex: Social and Ethical Implications, Marina Adshade, professor in the Vancouver School of Economics at the University of British Columbia, argues that sex robots could improve marriage by making it less about sex and more about love.

In this Q&A, Adshade discusses her predictions.

Could sex robots really be a viable replacement for marriage with a human? Can you love a robot?

I don’t see sex robots as substitutes for human companionship but rather as complements to human companionship. Just because we might enjoy the company of robots doesn’t mean that we cannot also enjoy the company of humans, or that having robots won’t enhance our relationships with humans. I see them as very different things — just as one woman (or one man) is not a perfect substitute for another woman (or man).

Is there a need for modern marriage to improve?

We have become increasingly demanding in what we want from the people that we marry. There was a time when women were happy to have a husband that supported the family and men were happy to have a caring mother to his children. Today we still want those things, but we also want so much more — we want lasting sexual compatibility, intense romance, and someone who is an amazing co-parent. That is a lot to ask of one person. …

Adshade adapted part of her text, “Sexbot-Induced Social Change: An Economic Perspective,” from Robot Sex: Social and Ethical Implications, edited by John Danaher and Neil McArthur, for an August 14, 2018 essay on Slate.com,

Technological change invariably brings social change. We know this to be true, but rarely can we make accurate predictions about how social behavior will evolve when new technologies are introduced. …we should expect that the proliferation of robots designed specifically for human sexual gratification means that sexbot-induced social change is on the horizon.

Some elements of that social change might be easier to anticipate than others. For example, the share of the young adult population that chooses to remain single (with their sexual needs met by robots) is very likely to increase. Because social change is organic, however, adaptations in other social norms and behaviors are much more difficult to predict. But this is not virgin territory [I suspect this was an unintended pun]. New technologies completely transformed sexual behavior and marital norms over the second half of the 20th century. Although getting any of these predictions right will surely involve some luck, we have decades of technology-induced social change to guide our predictions about the future of a world confronted with wholesale access to sexbots.

The reality is that marriage has always evolved alongside changes in technology. Between the mid-1700s and the early 2000s, the role of marriage between a man and a woman was predominately to encourage the efficient production of market goods and services (by men) and household goods and services (by women), since the social capacity to earn a wage was almost always higher for husbands than it was for wives. But starting as early as the end of the 19th century, marriage began to evolve as electrification in the home made women’s work less time-consuming, and new technologies in the workplace started to decrease the gender wage gap. Between 1890 and 1940, the share of married women working in the labor force tripled, and over the course of the century, that share continued to grow as new technologies arrived that replaced the labor of women in the home. By the early 1970s, the arrival of microwave ovens and frozen foods meant that a family could easily be fed at the end of a long workday, even when the mother worked outside of the home.

There are those who argue that men only “assume the burden” of marriage because marriage allows men easy sexual access, and that if men can find sex elsewhere they won’t marry. We hear this prediction now being made in reference to sexbots, but the same argument was given a century ago when the invention of the latex condom (1912) and the intrauterine device (1909) significantly increased people’s freedom to have sex without risking pregnancy and (importantly, in an era in which syphilis was rampant) sexually transmitted disease. Cosmopolitan magazine ran a piece at the time by John B. Watson that asked the blunt question, will men marry 50 years from now? Watson’s answer was a resounding no, writing that “we don’t want helpmates anymore, we want playmates.” Social commentators warned that birth control technologies would destroy marriage by removing the incentives women had to remain chaste and encourage them to flood the market with nonmarital sex. Men would have no incentive to marry, and women, whose only asset is sexual access, would be left destitute.

Fascinating, non? Should you be interested, “Sexbot-Induced Social Change: An Economic Perspective” by Marina Adshade can be found in Robot Sex: Social and Ethical Implications (link to Amazon), edited by John Danaher and Neil McArthur. © 2017 by the Massachusetts Institute of Technology, reprinted courtesy of the MIT Press.

Canadian perspective 2: What is a sex doll brothel doing in Toronto?

Sometimes known as Toronto the Good (although not recently; find out more about Toronto and its nicknames here) and once a byword for stodginess, the city is about to welcome a sex doll brothel according to an August 28, 2018 CBC Radio news item by Katie Geleff and John McGill,

On their website, Aura Dolls claims to be, “North America’s first known brothel that offers sexual services with the world’s most beautiful silicone ladies.”

Nestled between a massage parlour, nail salon and dry cleaner, Aura Dolls is slated to open on Sept. 8 [2018] in an otherwise nondescript plaza in Toronto’s north end.

The company plans to operate 24 hours a day, seven days a week, and will offer customers six different silicone dolls. The website describes the life-like dolls as, “classy, sophisticated, and adventurous ladies.” …

They add that “the dolls are thoroughly sanitized to meet your expectations” but that condoms are still “highly recommended.”

Toronto city councillor John Filion says people in his community are concerned about the proposed business.

Filion spoke to As It Happens guest host Helen Mann. Here is part of their conversation.

Councillor Filion, Aura Dolls is urging people to have “an open mind” about their business plan. Would you say that you have one?

Well, I have an open mind about what sort of behaviours people want to do, as long as they don’t harm anybody else. It’s a totally different matter once you bring that out to the public. So I think I have a fairly closed mind about where people should be having sex with [silicone] dolls.

So, what’s wrong with a sex doll brothel?

It’s where it is located, for one thing. Where it’s being proposed happens to be near an intersection where about 25,000 people live, all kinds of families, four elementary schools are very near by. And you know, people shouldn’t really need to be out on a walk with their families and try to explain to their kids why someone is having sex with a [silicone] doll.

But Aura Dolls says that they are going to be doing this very discreetly, that they won’t have explicit signage, and that they therefore won’t be bothering anyone.

They’ve hardly been discreet. They were putting illegal posters all over the neighbourhood. They’ve probably had a couple of hundred of thousands of dollars of free publicity already. I don’t think there’s anything at all discreet about what they are doing. They’re trying to be indiscreet to drum up business.

Can you be sure that there aren’t constituents in your area that think this is a great idea?

I can’t be sure that there aren’t some people who might think, “Oh great, it’s just down the street from me. Let me go there.” I would say that might be a fraction of one per cent of my constituents. Most people are appalled by this.

And it’s not a narrow-minded neighbourhood. Whatever somebody does in their home, I don’t think we’re going to pass moral judgment on it, again, as long as it’s not harming anyone else. But this is just kind of scuzzy. …

Aura Dolls says that it’s doing nothing illegal. They say that they are being very clear that the dolls they are using represent adult women and that they are actually providing a service. Do you agree that they are doing this legally?

No, they’re not at all legal. It’s an illegal use. And if there’s any confusion about that, they will be getting a letter from the city very soon. It is clearly not a legal use. It’s not permitted under the zoning bylaw and it fits the definition of adult entertainment parlour, for which you require a license — and they certainly would not get one. They would not get a license in this neighbourhood because it’s not a permitted use.

The audio portion runs for 5 mins. 31 secs.

I believe these dolls are in fact sexbots, likely enhanced with AI. An August 29, 2018 article by Karlton Jahmal for hotnewhiphop.com describes the dolls as ‘fembots’ and provides more detail (Note: Links have been removed),

Toronto has seen the future, and apparently, it has to do with sex dolls. The Six [another Toronto nickname] is about to get blessed with the first legal sex doll brothel, and the fembots look too good to be true. If you head over to Aura Dolls website, detailed biographies for the six available sex dolls are on full display. You can check out the doll’s height, physical dimensions, heritage and more.

Aura plans to introduce more dolls in the future, according to a statement in the Toronto Star by Claire Lee, a representative for the company. At the moment, the ethnicities of the sex dolls feature Japanese, Caucasian American, French Canadian, Irish Canadian, Colombian, and Korean girls. Male dolls will be added in the near future. The sex dolls look remarkably realistic. Aura’s website writes, “Our dolls are made from the highest quality of TPE silicone which mimics the feeling of natural human skin, pores, texture and movement giving the user a virtually identical experience as being with a real partner.”

There are a few more details about the proposed brothel and more comments from Toronto city councillor John Filion in an August 28, 2018 article by Claire Floody and Jenna Moon with Alexandra Jones and Melanie Green for thestar.com,

Toronto will soon be home to North America’s [this should include Canada, US, and Mexico] first known sex doll brothel, offering sexual services with six silicone-made dolls.

According to the website for Aura Dolls, the company behind the brothel, the vision is to bring a new way to achieve sexual needs “without the many restrictions and limitations that a real partner may come with.”

The brothel is expected to open in a shopping plaza on Yonge St., south of Sheppard Ave., on Sept. 8 [2018]. The company doesn’t give the exact location on its website, stating it’s announced upon booking.

Spending half an hour with one doll costs $80, with two dolls running $160. For an hour, the cost is $120 with one doll. The maximum listed time is four hours for $480 per doll.

Doors at the new brothel for separate entry and exit will be used to ensure “maximum privacy for customers.” While the business does plan on having staff on-site, they “should not have any interaction,” Lee said.

“The reason why we do that is to make sure that everyone feels comfortable coming in and exiting,” she said, noting that people may feel shy or awkward about visiting the site.

… Lee said that the business is operating within the law. “The only law stating with anything to do with the dolls is that it has to meet a height requirement. It can’t resemble a child,” she said. …

Councillor John Filion, Ward 23 Willowdale, said his staff will be “throwing the book at (Aura Dolls) for everything they can.”

“I’ve still got people studying to see what’s legal and what isn’t,” Filion said. He noted that a bylaw introduced in North York in the ’90s prevents retail sex shops operating outside of industrial areas. Filion said his office is still confirming that the bylaw is active following harmonization, which condensed the six boroughs’ bylaws after amalgamation in 1998.

“If the bylaw that I brought in 20 years ago still exists, it would prohibit this,” Filion said.

“There’s legal issues,” he said, suggesting that people interested in using the sex dolls might consider doing so at home, rather than at a brothel.

The councillor said he’s received complaints from constituents about the business. “The phone’s ringing off the hook today,” Filion said.

It should be an interesting first week at school for everyone involved. I wonder what Ontario Premier Doug Ford, who recently rolled back the sex education curriculum for the province by 20 years, will make of these developments.

As for sexbots/fembots/sex dolls or whatever you want to call them, they are here and it’s about time Canadians had a frank discussion on the matter. Also, I’ve been waiting for quite some time for any mention of male sexbots (malebots?). Personally, I don’t think we’ll be seeing male sexbots appear in either brothels or homes anytime soon.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Vancouver, BC, Canada

Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.


The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference.  The first of two days coincides with IROS 2017 – one of the premiere robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem.” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires that he live more healthily. No, the problem arises afterward, when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he will successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here), Note: Links have been removed,

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and are interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release, (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’) or, as Jack Stilgoe and Toby Walsh note, technological determinism will hold sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Robots in Vancouver and in Canada (one of two)

This piece just started growing. It started with robot ethics, moved on to sexbots and news of an upcoming Canadian robotics roadmap. Then, it became a two-part posting with the robotics strategy (roadmap) moving to part two along with robots and popular culture and a further exploration of robot and AI ethics issues.

What is a robot?

There are lots of robots, some are macroscale and others are at the micro and nanoscales (see my Sept. 22, 2017 posting for the latest nanobot). Here’s a definition from the Robot Wikipedia entry that covers all the scales. (Note: Links have been removed),

A robot is a machine—especially one programmable by a computer— capable of carrying out a complex series of actions automatically.[2] Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.

Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. [emphasis mine] By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.

We may think we’ve invented robots but the idea has been around for a very long time (from the Robot Wikipedia entry; Note: Links have been removed),

Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus[18] (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the Cretan island of Europa from pirates.

In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) “applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures.”[19][20] In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called “The Pigeon”. Hero of Alexandria (10–70 AD), a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.[21]

The 11th century Lokapannatti tells of how the Buddha’s relics were protected by mechanical robots (bhuta vahana yanta), from the kingdom of Roma visaya (Rome); until they were disarmed by King Ashoka. [22] [23]

In ancient China, the 3rd century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an ‘artificer’. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical ‘handiwork’ made of leather, wood, and artificial organs.[14] There are also accounts of flying automata in the Han Fei Zi and other texts, which attributes the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds (ma yuan) that could successfully fly.[17] In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours.

Su Song’s astronomical clock tower is often cited as a beginning for automata, its mechanical figurines chiming the hours.[24][25][26] His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.[26]

In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci’s notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo’s robot, able to sit up, wave its arms and move its head and jaw.[28] The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.

In Japan, complex animal and human automata were built between the 17th to 19th centuries, with many described in the 18th century Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet.[29] Different variations of the karakuri existed: the Butai karakuri, which were used in theatre, the Zashiki karakuri, which were small and used in homes, and the Dashi karakuri which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.

The term robot was coined by a Czech writer (from the Robot Wikipedia entry; Note: Links have been removed),

‘Robot’ was first applied as a term for artificial automata in a 1920 play R.U.R. by the Czech writer, Karel Čapek. However, Josef Čapek was named by his brother Karel as the true inventor of the term robot.[6][7] The word ‘robot’ itself was not new, having been in Slavic language as robota (forced laborer), a term which classified those peasants obligated to compulsory service under the feudal system widespread in 19th century Europe (see: Robot Patent).[37][38] Čapek’s fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal robota class eloquently fit the imagination of a new class of manufactured, artificial workers.

I’m particularly fascinated by how long humans have been imagining and creating robots.

Robot ethics in Vancouver

The Westender has run what I believe is the first article by a local (Vancouver, Canada) mainstream media outlet on the topic of robots and ethics. Tessa Vikander’s Sept. 14, 2017 article highlights two local researchers, Ajung Moon and Mark Schmidt, and Nik Pai, analytics director at a local social media company (Hootsuite). Vikander opens her piece with an ethical dilemma (Note: Links have been removed),

Emma is 68, in poor health and an alcoholic who has been told by her doctor to stop drinking. She lives with a care robot, which helps her with household tasks.

Unable to fix herself a drink, she asks the robot to do it for her. What should the robot do? Would the answer be different if Emma owns the robot, or if she’s borrowing it from the hospital?

This is the type of hypothetical, ethical question that Ajung Moon, director of the Open Roboethics Initiative [ORI], is trying to answer.

According to an ORI study, half of respondents said ownership should make a difference, and half said it shouldn’t. With society so torn on the question, Moon is trying to figure out how engineers should be programming this type of robot.

A Vancouver resident, Moon is dedicating her life to helping those in the decision-chair make the right choice. The question of the care robot is but one ethical dilemma in the quickly advancing world of artificial intelligence.

At the most sensationalist end of the scale, one form of AI that’s recently made headlines is the sex robot, which has a human-like appearance. A report from the Foundation for Responsible Robotics says that intimacy with sex robots could lead to greater social isolation [emphasis mine] because they desensitize people to the empathy learned through human interaction and mutually consenting relationships.

I’ll get back to the impact that robots might have on us in part two but first,

Sexbots, could they kill?

For more about sexbots in general, Alessandra Maldonado wrote an Aug. 10, 2017 article for salon.com about them (Note: A link has been removed),

Artificial intelligence has given people the ability to have conversations with machines like never before, such as speaking to Amazon’s personal assistant Alexa or asking Siri for directions on your iPhone. But now, one company has widened the scope of what it means to connect with a technological device and created a whole new breed of A.I. — specifically for sex-bots.

Abyss Creations has been in the business of making hyperrealistic dolls for 20 years, and by the end of 2017, they’ll unveil their newest product, an anatomically correct robotic sex toy. Matt McMullen, the company’s founder and CEO, explains the goal of sex robots is companionship, not only a physical partnership. “Imagine if you were completely lonely and you just wanted someone to talk to, and yes, someone to be intimate with,” he said in a video depicting the sculpting process of the dolls. “What is so wrong with that? It doesn’t hurt anybody.”

Maldonado also embedded this video into her piece,

A friend of mine described it as creepy. Specifically, we were discussing why someone would want to programme ‘insecurity’ as a desirable trait in a sexbot.

Marc Beaulieu’s concept of a desirable trait in a sexbot is one that won’t kill him, according to his Sept. 25, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online (Note: Links have been removed),

Harmony has a charming Scottish lilt, albeit a bit staccato and canny. Her eyes dart around the room, her chin dips as her eyebrows raise in coquettish fashion. Her face manages expressions that are impressively lifelike. That face comes in 31 different shapes and 5 skin tones, with or without freckles and it sticks to her cyber-skull with magnets. Just peel it off and switch it out at will. In fact, you can choose Harmony’s eye colour, body shape (in great detail) and change her hair too. Harmony, of course, is a sex bot. A very advanced one. How advanced is she? Well, if you have $12,332 CAD to put towards a talkative new home appliance, REALBOTIX says you could be having a “conversation” and relations with her come January. Happy New Year.

Caveat emptor though: one novel bonus feature you might also get with Harmony is her ability to eventually murder you in your sleep. And not because she wants to.

Dr Nick Patterson, faculty of Science Engineering and Built Technology at Deakin University in Australia is lending his voice to a slew of others warning us to slow down and be cautious as we steadily approach Westworldian levels of human verisimilitude with AI tech. Surprisingly, Patterson didn’t regurgitate the narrative we recognize from the popular sci-fi (increasingly non-fi actually) trope of a dystopian society’s futile resistance to a robocalypse. He doesn’t think Harmony will want to kill you. He thinks she’ll be hacked by a code savvy ne’er-do-well who’ll want to snuff you out instead. …

Embedded in Beaulieu’s article is another video of the same sexbot profiled earlier. Her programmer seems to have learned a thing or two (he no longer inputs any traits as you’re watching),

I guess you could get one for Christmas this year if you’re willing to wait for an early 2018 delivery and aren’t worried about hackers turning your sexbot into a killer. While the killer aspect might seem farfetched, it turns out it’s not the only sexbot/hacker issue.

Sexbots as spies

This Oct. 5, 2017 story by Karl Bode for Techdirt points out that sex toys that are ‘smart’ can easily be hacked for any reason including some mischief (Note: Links have been removed),

One “smart dildo” manufacturer was recently forced to shell out $3.75 million after it was caught collecting, err, “usage habits” of the company’s customers. According to the lawsuit, Standard Innovation’s We-Vibe vibrator collected sensitive data about customer usage, including “selected vibration settings,” the device’s battery life, and even the vibrator’s “temperature.” At no point did the company apparently think it was a good idea to clearly inform users of this data collection.

But security is also lacking elsewhere in the world of internet-connected sex toys. Alex Lomas of Pentest Partners recently took a look at the security in many internet-connected sex toys, and walked away arguably unimpressed. Using a Bluetooth “dongle” and antenna, Lomas drove around Berlin looking for openly accessible sex toys (he calls it “screwdriving,” in a riff off of wardriving). He subsequently found it’s relatively trivial to discover and hijack everything from vibrators to smart butt plugs — thanks to the way Bluetooth Low Energy (BLE) connectivity works:

“The only protection you have is that BLE devices will generally only pair with one device at a time, but range is limited and if the user walks out of range of their smartphone or the phone battery dies, the adult toy will become available for others to connect to without any authentication. I should say at this point that this is purely passive reconnaissance based on the BLE advertisements the device sends out – attempting to connect to the device and actually control it without consent is not something I or you should do. But now one could drive the Hush’s motor to full speed, and as long as the attacker remains connected over BLE and not the victim, there is no way they can stop the vibrations.”
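The weakness Lomas describes can be sketched as a toy model: a peripheral whose only “protection” is that it accepts one connection at a time, with no authentication at all. This is purely illustrative Python (the class and method names are mine, not real Bluetooth code), but it captures the logic of why a dead phone battery hands the device to whoever connects next.

```python
# Toy model of the BLE weakness described above: a peripheral that
# pairs with whichever central connects first, with no authentication.
# Illustrative only -- not real Bluetooth code.

class BleToy:
    """A 'smart' toy that allows one unauthenticated connection at a time."""

    def __init__(self):
        self.connected_to = None   # the only protection: one central at a time
        self.motor_speed = 0

    def advertise(self):
        # BLE advertisements are broadcast in the clear; anyone in range sees them.
        return "TOY_DEVICE advertising" if self.connected_to is None else None

    def connect(self, who):
        # No pairing PIN, no bonding check -- first come, first served.
        if self.connected_to is None:
            self.connected_to = who
            return True
        return False               # already claimed by someone else

    def set_motor(self, who, speed):
        # Only the currently connected central can issue commands.
        if self.connected_to == who:
            self.motor_speed = speed
            return True
        return False

    def disconnect(self, who):
        # Owner walks out of range or their phone dies: claimable again.
        if self.connected_to == who:
            self.connected_to = None


toy = BleToy()
toy.connect("owner_phone")
assert not toy.connect("attacker")     # blocked while the owner is connected
toy.disconnect("owner_phone")          # owner's phone battery dies...
assert toy.connect("attacker")         # ...and the attacker claims the device
toy.set_motor("attacker", 100)         # full speed, no consent required
```

Nothing in the model checks *who* is connecting, which is exactly Lomas’s point: range and single-pairing are the only barriers.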

Does that make you think twice about a sexbot?

Robots and artificial intelligence

Getting back to the Vikander article (Sept. 14, 2017), Moon or Vikander or both seem to have conflated artificial intelligence with robots in this section of the article,

As for the building blocks that have thrust these questions [care robot quandary mentioned earlier] into the spotlight, Moon explains that AI in its basic form is when a machine uses data sets or an algorithm to make a decision.

“It’s essentially a piece of output that either affects your decision, or replaces a particular decision, or supports you in making a decision.” With AI, we are delegating decision-making skills or thinking to a machine, she says.

Although we’re not currently surrounded by walking, talking, independently thinking robots, the use of AI [emphasis mine] in our daily lives has become widespread.
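Moon’s “basic form” of AI, an algorithm turning inputs into a decision, can be as humble as a few if-statements. Here is a hypothetical rule-based policy for the care-robot dilemma from earlier in the article; the rules and names are my invention, not anything from the ORI study, which found respondents evenly split.

```python
# Hypothetical care-robot policy for the 'Emma' dilemma. The rules are
# illustrative only; the ORI study found people split on whether
# ownership should matter, and this sketch arbitrarily picks one side.

def serve_drink(doctor_forbids_alcohol: bool, robot_owned_by_user: bool) -> str:
    """Decide what the care robot should do when asked for a drink."""
    if not doctor_forbids_alcohol:
        return "serve"
    if robot_owned_by_user:
        # Treat an owned robot as an extension of the user's autonomy.
        return "serve with warning"
    # A hospital-loaned robot defers to the care provider.
    return "refuse and notify caregiver"
```

The point of the sketch is Moon’s: even this trivial function is “a piece of output that either affects your decision, or replaces a particular decision,” and someone had to decide which ethical rule to encode.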

For Vikander, the conflation may have been due to concerns about maintaining her word count; for Moon, it may have been one of convenience or a consequence of how the jargon is evolving, with ‘robot’ sometimes meaning a machine specifically, sometimes a machine with AI, and sometimes AI alone.

To be precise, not all robots have AI and not all AI is found in robots. It’s a distinction that may be more important for people developing robots and/or AI but it also seems to make a difference where funding is concerned. In a March 24, 2017 posting about the 2017 Canadian federal budget I noticed this,

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

This brings me to a recent set of meetings held in Vancouver to devise a Canadian robotics roadmap, which suggests the robotics folks feel they need specific representation and funding.

See: part two for the rest.