Tag Archives: Robot & Frank

Putting science back into pop culture and selling books

Clifford V. Johnson is very good at promoting books. I tip my hat to him; that’s an excellent talent to have, especially when you’ve written a book. In his case, it’s a graphic novel titled ‘The Dialogues: Conversations about the Nature of the Universe’.

I first stumbled across Johnson, a physicist and professor at the University of Southern California, and his work in this January 18, 2018 news item on phys.org,

How often do you, outside the requirements of an assignment, ponder things like the workings of a distant star, the innards of your phone camera, or the number and layout of petals on a flower? Maybe a little bit, maybe never. Too often, people regard science as sitting outside the general culture: A specialized, difficult topic carried out by somewhat strange people with arcane talents. It’s somehow not for them.

But really science is part of the wonderful tapestry of human culture, intertwined with things like art, music, theater, film and even religion. These elements of our culture help us understand and celebrate our place in the universe, navigate it and be in dialogue with it and each other. Everyone should be able to engage freely in whichever parts of the general culture they choose, from going to a show or humming a tune to talking about a new movie over dinner.

Science, though, gets portrayed as opposite to art, intuition and mystery, as though knowing in detail how that flower works somehow undermines its beauty. As a practicing physicist, I disagree. Science can enhance our appreciation of the world around us. It should be part of our general culture, accessible to all. Those “special talents” required in order to engage with and even contribute to science are present in all of us.

Here’s more from his January 18, 2018 essay on The Conversation (which was the origin for the news item), Note: Links have been removed,

… in addition to being a professor, I work as a science advisor for various forms of entertainment, from blockbuster movies like the recent “Thor: Ragnarok,” or last spring’s 10-hour TV dramatization of the life and work of Albert Einstein (“Genius,” on National Geographic), to the bestselling novel “Dark Matter,” by Blake Crouch. People spend a lot of time consuming entertainment simply because they love stories like these, so it makes sense to put some science in there.

Science can actually help make storytelling more entertaining, engaging and fun – as I explain to entertainment professionals every chance I get. From their perspective, they get potentially bigger audiences. But good stories, enhanced by science, also spark valuable conversations about the subject that continue beyond the movie theater.
Science can be one of the topics woven into the entertainment we consume – via stories, settings and characters. ABC Television

Nonprofit organizations have been working hard on this mission. The Alfred P. Sloan Foundation helps fund and develop films with science content – “The Man Who Knew Infinity” (2015) and “Robot & Frank” (2012) are two examples. (The Sloan Foundation is also a funding partner of The Conversation US.)

The National Academy of Sciences set up the Science & Entertainment Exchange to help connect people from the entertainment industry to scientists. The idea is that such experts can provide Hollywood with engaging details and help with more accurate portrayals of scientists that can enhance the narratives they tell. Many of the popular Marvel movies – including “Thor” (2011), “Ant-Man” (2015) and the upcoming “Avengers: Infinity War” – have had their content strengthened in this way.

Encouragingly, a recent Pew Research Center survey in the U.S. showed that entertainment with science or related content is watched by people across “all demographic, educational and political groups,” and that overall they report positive impressions of the science ideas and scenarios contained in them.

Many years ago I realized it is hard to find books on the nonfiction science shelf that let readers see themselves as part of the conversation about science. So I envisioned an entire book of conversations about science taking place between ordinary people. While “eavesdropping” on those conversations, readers learn some science ideas, and are implicitly invited to have conversations of their own. It’s a resurrection of the dialogue form, known to the ancient Greeks, and to Galileo, as a device for exchanging ideas, but with contemporary settings: cafes, restaurants, trains and so on.

Clifford Johnson at his drafting table. Clifford V. Johnson, CC BY-ND

So over six years I taught myself the requisite artistic and other production techniques, and studied the language and craft of graphic narratives. I wrote and drew “The Dialogues: Conversations About the Nature of the Universe” as proof of concept: A new kind of nonfiction science book that can inspire more people to engage in their own conversations about science, and celebrate a spirit of plurality in everyday science participation.

I so enjoyed Johnson’s writing and appreciated how he introduced his book into the piece that I searched for more and found a three-part interview conducted by Henry Jenkins on his Confessions of an Aca-Fan (Academic-Fan) blog. Before moving on to the interview, here’s some information about the interviewer, Henry Jenkins (Note: Links have been removed),

Henry Jenkins is the Provost Professor of Communication, Journalism, Cinematic Arts and Education at the University of Southern California. He arrived at USC in Fall 2009 after spending more than a decade as the Director of the MIT Comparative Media Studies Program and the Peter de Florez Professor of Humanities. He is the author and/or editor of seventeen books on various aspects of media and popular culture, including Textual Poachers: Television Fans and Participatory Culture, Hop on Pop: The Politics and Pleasures of Popular Culture,  From Barbie to Mortal Kombat: Gender and Computer Games, Convergence Culture: Where Old and New Media Collide, Spreadable Media: Creating Meaning and Value in a Networked Culture, and By Any Media Necessary: The New Youth Activism. He is currently editing a handbook on the civic imagination and writing a book on “comics and stuff”. He has written for Technology Review, Computer Games, Salon, and The Huffington Post.

Jenkins is the principal investigator for The Civic Imagination Project, funded by the MacArthur Foundation, to explore ways to inspire creative collaborations within communities as they work together to identify shared values and visions for the future. This project grew out of the Media, Activism, and Participatory Politics research group, also funded by MacArthur, which did case studies of innovative organizations that have been effective at getting young people involved in the political process. He is also the Chief Advisor to the Annenberg Innovation Lab. Jenkins also serves on the jury that selects the Peabody Awards, which recognizes “stories that matter” from radio, television, and the web.

He has previously worked as the principal investigator for  Project New Media Literacies (NML), a group which originated as part of the MacArthur Digital Media and Learning Initiative. Jenkins wrote a white paper on learning in a participatory culture that has become the springboard for the group’s efforts to develop and test educational materials focused on preparing students for engagement with the new media landscape. He also was the founder for the Convergence Culture Consortium, a faculty network which seeks to build bridges between academic researchers and the media industry in order to help inform the rethinking of consumer relations in an age of participatory culture.  The Consortium lives on today via the Transforming Hollywood conference, run jointly between USC and UCLA, which recently hosted its 8th event.  

While at MIT, he was one of the principal investigators for The Education Arcade, a consortium of educators and business leaders working to promote the educational use of computer and video games. Jenkins also plays a significant role as a public advocate for fans, gamers and bloggers: testifying before the U.S. Senate Commerce Committee investigation into “Marketing Violence to Youth” following the Columbine shootings; advocating for media literacy education before the Federal Communications Commission; calling for a more consumer-oriented approach to intellectual property at a closed door meeting of the governing body of the World Economic Forum; signing amicus briefs in opposition to games censorship;  regularly speaking to the press and other media about aspects of media change and popular culture; and most recently, serving as an expert witness in the legal struggle over the fan-made film, Prelude to Axanar.  He also has served as a consultant on the Amazon children’s series Lost in Oz, where he provided insights on world-building and transmedia strategies as well as new media literacy issues.

Jenkins has a B.A. in Political Science and Journalism from Georgia State University, a M.A. in Communication Studies from the University of Iowa and a PhD in Communication Arts from the University of Wisconsin-Madison.

Well, that didn’t seem so simple after all. For a somewhat more personal account of who I am, read on.

About Me

The first thing you are going to discover about me, oh reader of this blog, is that I am prolific as hell. The second is that I am also long-winded as all get out. As someone famous once said, “I would have written it shorter, but I didn’t have enough time.”

My earliest work centered on television fans – particularly science fiction fans. Part of what drew me into graduate school in media studies was a fascination with popular culture. I grew up reading Mad magazine and Famous Monsters of Filmland – and, much as my parents feared, it warped me for life. Early on, I discovered the joys of comic books and science fiction, spent time playing around with monster makeup, started writing scripts for my own Super 8 movies (The big problem was that I didn’t have access to a camera until much later), and collecting television-themed toys. By the time I went to college, I was regularly attending science fiction conventions. Through the woman who would become my wife, I discovered fan fiction. And we spent a great deal of time debating our very different ways of reading our favorite television series.

When I got to graduate school, I was struck by how impoverished the academic framework for thinking about media spectatorship was – basically, though everyone framed it differently, consumers were assumed to be passive, brainless, inarticulate, and brainwashed. None of this jelled well with my own robust experience of being a fan of popular culture. I was lucky enough to get to study under John Fiske, first at Iowa and then at the University of Wisconsin-Madison, who introduced me to the cultural studies perspective. Fiske was a key advocate of ethnographic audience research, arguing that media consumers had more tricks up their sleeves than most academic theory acknowledged.

Out of this tension between academic theory and fan experience emerged first an essay, “Star Trek Reread, Rerun, Rewritten” and then a book, Textual Poachers: Television Fans and Participatory Culture. Textual Poachers emerged at a moment when fans were still largely marginal to the way mass media was produced and consumed, and still hidden from the view of most “average consumers.” As such, the book represented a radically different way of thinking about how one might live in relation to media texts. In the book, I describe fans as “rogue readers.” What most people took from that book was my concept of “poaching,” the idea that fans construct their own culture – fan fiction, artwork, costumes, music and videos – from content appropriated from mass media, reshaping it to serve their own needs and interests. There are two other key concepts in this early work which takes on greater significance in my work today – the idea of participatory culture (which runs throughout Convergence Culture) and the idea of a moral economy (that is, the presumed ethical norms which govern the relations between media producers and consumers).

As for the interview, here’s Jenkins’ introduction to the series and a portion of part one (from Comics and Popular Science: An Interview with Clifford V. Johnson (Part One) posted on November 15, 2017),


Clifford V. Johnson is the first theoretical physicist who I have ever interviewed for my blog. Given the sharp divide that our society constructs between the sciences and the humanities, he may well be the last, but he would be the first to see this gap as tragic, a consequence of the current configuration of disciplines. Johnson, as I have discovered, is deeply committed to helping us recognize the role that science plays in everyday life, a project he pursues actively through his involvement as one of the leaders of the Los Angeles Institute for the Humanities (of which I am also a member), as a consultant on various film and television projects, and now, as the author of a graphic novel, The Dialogues, which is being released this week. We were both on a panel about contemporary graphic storytelling Tara McPherson organized for the USC Sydney Harmon Institute for Polymathic Study and we’ve continued to bat around ideas about the pedagogical potential of comics ever since.

Here’s what I wrote when I was asked to provide a blurb for his new book:

“Two superheroes walk into a natural history museum — what happens after that will have you thinking and talking for a long time to come. Clifford V. Johnson’s The Dialogues joins a select few examples of recent texts, such as Scott McCloud’s Understanding Comics, Larry Gonick’s Cartoon History of the Universe, Nick Sousanis’s Unflattening, Bryan Talbot’s Alice in Sunderland, or Joe Sacco’s Palestine, which use the affordances of graphic storytelling as pedagogical tools for changing the ways we think about the world around us. Johnson displays a solid grasp of the craft of comics, demonstrating how this medium can be used to represent different understandings of the relationship between time and space, questions central to his native field of physics. He takes advantage of the observational qualities of contemporary graphic novels to explore the place of scientific thinking in our everyday lives.”

To my many readers who care about sequential art, this is a book which should be added to your collection — Johnson makes good comics, smart comics, beautiful comics, and comics which are doing important work, all at the same time. What more do you want!

In the interviews that follow, we explore more fully what motivated this particular comic and how approaching comics as a theoretical physicist has helped him to discover some interesting formal aspects of this medium.

What do you want your readers to learn about science over the course of these exchanges? I am struck by the ways you seek to demystify aspects of the scientific process, including the role of theory, equations, and experimentation.


That participatory aspect is core, for sure. Conversations about science by random people out there in the world really do happen – I hear them a lot on the subway, or in cafes, and so I wanted to highlight those and celebrate them. So the book becomes a bit of an invitation to everyone to join in. But then I can show so many other things that typically just get left out of books about science: The ordinariness of the settings in which such conversations can take place, the variety of types of people involved, and indeed the main tools, like equations and technical diagrams, that editors usually tell you to leave out for fear of scaring away the audience. …

I looked for book reviews and found two. The first one is from Starburst Magazine, which, strangely, does not list a date or author (from the review),

The Dialogues is a series of nine conversations about science told in graphic novel format; the conversationalists are men, women, children, and amateur science buffs who all have something to say about the nature of the universe. Their discussions range from multiverse and string theory to immortality, black holes, and how it’s possible to put just a cup of rice in the pan but end up with a ton more after Mom cooks it. Johnson (who also illustrated the book) believes the graphic form is especially suited for physics because “one drawing can show what it would take many words to explain” and it’s hard to argue with his noble intentions, but despite some undoubtedly thoughtful content The Dialogues doesn’t really work. Why not? Because, even with its plethora of brightly-coloured pictures, it’s still 200+ pages of talking heads. The individual conversations might give us plenty to think about, but the absence of any genuine action (or even a sense of humour) still makes The Dialogues read like very pretty homework.

Adhemar Bultheel’s December 8, 2017 review for the European Mathematical Society acknowledges issues with the book while noting its strong points,

So what is the point of producing such a graphic novel if the reader is not properly instructed about anything? In my opinion, the true message can be found in the one or two pages of notes that follow each of the eleven conversations. If you are not into the subject that you were eavesdropping, you probably have heard words, concepts, theories, etc. that you did not understand, or you might just be curious about what exactly the two were discussing. Then you should look that up on the web, or if you want to do it properly, you should consult some literature. This is what these notes are providing: they are pointing to the proper books to consult. …

This is a most unusual book for this subject and the way this is approached is most surprising. Not only the contents is heavy stuff, it is also physically heavy to read. Some 250 pages on thick glossy paper makes it a quite heavy book to hold. You probably do not want to read this in bed or take it on a train, unless you have a table in front of you to put it on. Many subjects are mentioned, but not all are explained in detail. The reader should definitely be prepared to do some extra reading to understand things better. Since most references concern other popularising books on the subject, it may require quite a lot of extra reading. But all this hard science is happening in conversations by young enthusiastic people in casual locations and it is all wrapped up in beautiful graphics showing marvellous realistic decors.

I am fascinated by this book, which I have yet to read, but I did find a trailer for it (from thedialoguesbook.com),

Enjoy!

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference.  The first of two days coincides with IROS 2017 – one of the premiere robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem.” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here). Note: Links have been removed,

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: If you’re in Vancouver on Oct. 26, 2017 and are interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.
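For readers who like to see the mechanics of that teleoperation loop spelled out, here’s a minimal sketch in Python. It is purely illustrative and assumes nothing about Kindred’s actual software: the scenarios, the confidence threshold, and the human_pilot stand-in are all my own inventions. It just captures the pattern Braga describes: the robot attempts a task, hands control to a human when it can’t cope, and stores the demonstration so the pilot’s choices become part of what the robot ‘knows’.

```python
# Hypothetical sketch of a teleoperation hand-off with learning from demonstration.
# Not Kindred's system; scenarios, thresholds, and names are invented for illustration.

class ToyPolicy:
    def __init__(self):
        self.demonstrations = {}  # scenario -> action learned from a human pilot

    def act(self, scenario):
        """Return (action, confidence); unknown scenarios get low confidence."""
        if scenario in self.demonstrations:
            return self.demonstrations[scenario], 0.9
        return "default-grasp", 0.2

    def learn(self, scenario, demonstrated_action):
        # The pilot's choice becomes training data for future attempts.
        self.demonstrations[scenario] = demonstrated_action


def human_pilot(scenario):
    # Stand-in for the teleoperating human, who in reality sees, feels and
    # hears what the robot senses before choosing an action.
    return f"careful-handling-of-{scenario}"


policy = ToyPolicy()
for scenario in ["box", "fragile-jar", "box", "fragile-jar"]:
    action, confidence = policy.act(scenario)
    if confidence < 0.5:                # robot can't handle it: hand off
        action = human_pilot(scenario)  # human takes control
        policy.learn(scenario, action)  # robot learns from the demonstration
    print(scenario, "->", action)
```

Gildert’s point follows directly from that last step: whatever the pilot does, good habits or bad, ends up encoded in the stored demonstrations.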

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release (h/t ScienceDaily March 28, 2017 news item),

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’), or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?