Tag Archives: Siri

Protecting your data from Apple is very hard

There has been a lot of talk about Tim Cook (chief executive officer of Apple Inc.), his data privacy policies at Apple, and his push for better consumer data privacy. For example, there’s this from a June 10, 2022 article by Kif Leswing for CNBC,

Key Points

  • Apple CEO Tim Cook said in a letter to Congress that lawmakers should advance privacy legislation that’s currently being debated “as soon as possible.”
  • The bill would give consumers protections and rights dealing with how their data is used online, and would require that companies minimize the amount of data they collect on their users.
  • Apple has long positioned itself as the most privacy-focused company among its tech peers.

Apple has long positioned itself as the most privacy-focused company among its tech peers, and Cook regularly addresses the issue in speeches and meetings. Apple says that its commitment to privacy is a deeply held value by its employees, and often invokes the phrase “privacy is a fundamental human right.”

It’s also strategic for Apple’s hardware business. Legislation that regulates how much data companies collect or how it’s processed plays into Apple’s current privacy features, and could even give Apple a head start against competitors that would need to rebuild their systems to comply with the law.

More recently, with rising concerns about artificial intelligence (AI), Apple has rushed to assure customers that their data is still private. From a June 10, 2024 article by Kyle Orland for Ars Technica, Note: Links have been removed,

Apple’s AI promise: “Your data is never stored or made accessible to Apple”

And publicly reviewable server code means experts can “verify this privacy promise.”

With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC [Apple’s World Wide Developers Conference] keynote today, Apple stressed that the new “Apple Intelligence” system it’s integrating into its products will use a new “Private Cloud Compute” to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.

“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior VP of Software Engineering Craig Federighi said.

Part of what Apple calls “a brand new standard for privacy and AI” is achieved through on-device processing. Federighi said “many” of Apple’s generative AI models can run entirely on a device powered by an A17+ or M-series chip, eliminating the risk of sending your personal data to a remote server.

When a bigger, cloud-based model is needed to fulfill a generative AI request, though, Federighi stressed that it will “run on servers we’ve created especially using Apple silicon,” which allows for the use of security tools built into the Swift programming language. The Apple Intelligence system “sends only the data that’s relevant to completing your task” to those servers, Federighi said, rather than giving blanket access to the entirety of the contextual information the device has access to.

But you don’t just have to trust Apple on this score, Federighi claimed. That’s because the server code used by Private Cloud Compute will be publicly accessible, meaning that “independent experts can inspect the code that runs on these servers to verify this privacy promise.” The entire system has been set up cryptographically so that Apple devices “will refuse to talk to a server unless its software has been publicly logged for inspection.”

While the keynote speech was light on details [emphasis mine] for the moment, the focus on privacy during the presentation shows that Apple is at least prioritizing security concerns in its messaging [emphasis mine] as it wades into the generative AI space for the first time. We’ll see what security experts have to say [emphasis mine] when these servers and their code are made publicly available in the near future.
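The cryptographic promise in that last quoted paragraph, devices that “will refuse to talk to a server unless its software has been publicly logged for inspection,” is, at its core, a transparency-log check. Here’s a deliberately simplified, hypothetical sketch of that kind of check in Python. The function names and the hashing scheme are my own illustration; Apple had not published the actual Private Cloud Compute protocol details at the time of the keynote.

```python
# Hypothetical illustration of a transparency-log check: the client refuses to
# send data unless the server's attested software measurement appears in a
# publicly logged set of reviewed builds. The function names and hashing scheme
# are invented for this sketch; this is not Apple's actual Private Cloud
# Compute protocol.
import hashlib

def measurement(software_image: bytes) -> str:
    """Stand-in for a cryptographic measurement (hash) of a server software build."""
    return hashlib.sha256(software_image).hexdigest()

def willing_to_send(attested_measurement: str, public_log: set) -> bool:
    """The device only talks to servers whose software has been publicly logged."""
    return attested_measurement in public_log

# Toy usage: only the publicly logged build is accepted.
public_log = {measurement(b"reviewed-server-build-1.0")}
print(willing_to_send(measurement(b"reviewed-server-build-1.0"), public_log))  # True
print(willing_to_send(measurement(b"unreviewed-build"), public_log))           # False
```

The point of the real system, as described, is that the log is public, so independent experts can audit what the client software is willing to trust.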

Orland’s caution/suspicion would seem warranted in light of some recent research from scientists in Finland. From an April 3, 2024 Aalto University press release (also on EurekAlert), Note: A link has been removed,

‘Privacy. That’s Apple,’ the slogan proclaims. New research from Aalto University begs to differ.

Study after study has shown how voluntary third-party apps erode people’s privacy. Now, for the first time, researchers at Aalto University have investigated the privacy settings of Apple’s default apps; the ones that are pretty much unavoidable on a new device, be it a computer, tablet or mobile phone. The researchers will present their findings in mid-May at the prestigious CHI conference [ACM CHI Conference on Human Factors in Computing Systems, May 11, 2024 – May 16, 2024 in Honolulu, Hawaii], and the peer-reviewed research paper is already available online.

‘We focused on apps that are an integral part of the platform and ecosystem. These apps are glued to the platform, and getting rid of them is virtually impossible,’ says Associate Professor Janne Lindqvist, head of the computer science department at Aalto.

The researchers studied eight apps: Safari, Siri, Family Sharing, iMessage, FaceTime, Location Services, Find My and Touch ID. They collected all publicly available privacy-related information on these apps, from technical documentation to privacy policies and user manuals.

The fragility of the privacy protections surprised even the researchers. [emphasis mine]

‘Due to the way the user interface is designed, users don’t know what is going on. For example, the user is given the option to enable or not enable Siri, Apple’s virtual assistant. But enabling only refers to whether you use Siri’s voice control. Siri collects data in the background from other apps you use, regardless of your choice, unless you understand how to go into the settings and specifically change that,’ says Lindqvist.

Participants weren’t able to stop data sharing in any of the apps

In practice, protecting privacy on an Apple device requires persistent and expert clicking on each app individually. Apple’s help falls short.

‘The online instructions for restricting data access are very complex and confusing, and the steps required are scattered in different places. There’s no clear direction on whether to go to the app settings, the central settings – or even both,’ says Amel Bourdoucen, a doctoral researcher at Aalto.

In addition, the instructions didn’t list all the necessary steps or explain how collected data is processed.

The researchers also demonstrated these problems experimentally. They interviewed users and asked them to try changing the settings.

‘It turned out that the participants weren’t able to prevent any of the apps from sharing their data with other applications or the service provider,’ Bourdoucen says.

Finding and adjusting privacy settings also took a lot of time. ‘When making adjustments, users don’t get feedback on whether they’ve succeeded. They then get lost along the way, go backwards in the process and scroll randomly, not knowing if they’ve done enough,’ Bourdoucen says.

In the end, Bourdoucen explains, the participants were able to take one or two steps in the right direction, but none succeeded in following the whole procedure to protect their privacy.

Running out of options

If preventing data sharing is difficult, what does Apple do with all that data? [emphasis mine]

It’s not possible to be sure based on public documents, but Lindqvist says it’s possible to conclude that the data will be used to train the artificial intelligence system behind Siri and to provide personalised user experiences, among other things. [emphasis mine]

Many users are used to seamless multi-device interaction, which makes it difficult to move back to a time of more limited data sharing. However, Apple could inform users much more clearly than it does today, says Lindqvist. The study lists a number of detailed suggestions to clarify privacy settings and improve guidelines.

For individual apps, Lindqvist says that the problem can be solved to some extent by opting for a third-party service. For example, some participants in the study had switched from Safari to Firefox.

Lindqvist can’t comment directly on how Google’s Android works in similar respects [emphasis mine], as no one has yet done a similar mapping of its apps. But past research on third-party apps does not suggest that Google is any more privacy-conscious than Apple [emphasis mine].

So what can be learned from all this – are users ultimately facing an almost impossible task?

‘Unfortunately, that’s one lesson,’ says Lindqvist.

I have found two copies of the researchers’ paper. There’s a PDF version on Aalto University’s website that bears this caution,

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail.

Here’s a link to and a citation for the official version of the paper,

Privacy of Default Apps in Apple’s Mobile Ecosystem by Amel Bourdoucen and Janne Lindqvist. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, May 2024, Article No. 786, pp. 1–32. DOI: https://doi.org/10.1145/3613904.3642831 Published: 11 May 2024

This paper is open access.

A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that will register your every preference and make life easier, according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.
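The ‘Activities’ item above is, in effect, describing tag-based (or ‘semantic’) preference matching. As a rough illustration only, here’s a toy Python sketch of that idea; the tags, activities, and scoring are invented for this example and are not Carnival’s actual system.

```python
# A toy, invented illustration of tag-based ("semantic") preference matching,
# loosely in the spirit of the recommendation system described above.
# The tags, activities, and scoring are not Carnival's actual data or code.
from collections import Counter

guest_profile = Counter({"fine-wine": 3, "classical-music": 2, "casino": 1})

activities = {
    "violin concerto": {"classical-music", "fine-wine"},
    "limbo competition": {"party", "dance"},
    "wine tasting": {"fine-wine"},
}

def score(activity_tags, profile):
    """Sum the guest's affinity for each tag the activity carries."""
    return sum(profile.get(tag, 0) for tag in activity_tags)

ranked = sorted(activities, key=lambda a: score(activities[a], guest_profile), reverse=True)
print(ranked)  # ['violin concerto', 'wine tasting', 'limbo competition']
```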

In Kuang’s Oct. 19, 2017 article he notes that the cruise ship line is putting a lot of effort into retraining its staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in the future be incorporated into this technological marvel.

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a users’ newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search, and while I realize the company has been gathering information about me via my searches, supposedly in service of giving me better searches, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g. Facebook bots) which have usurped my role as the arbiter of what interests me, in short, my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, data firms have married electoral data with consumer data, as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …

Bambury’s piece offers a lot more, including embedded videos, than I’ve included in that excerpt, but I also wanted to include some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly, but if you’re only being exposed to certain kinds of messages, then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, along with the data gathering done online and, increasingly, through wearable and smart technology, means that another layer of largely invisible control has been added to your life. After all, the students in Elia Powers’ study didn’t realize their news feeds were being pre-curated.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference. The first of two days coincides with IROS 2017 – one of the premier robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem.” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data have made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here), Note: Links have been removed,

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and the issues with bias which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, I find it easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release, (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’), or, as Jack Stilgoe and Toby Walsh note, technological determinism will hold sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Robots in Vancouver and in Canada (one of two)

This piece just started growing. It started with robot ethics, moved on to sexbots and news of an upcoming Canadian robotics roadmap. Then, it became a two-part posting with the robotics strategy (roadmap) moving to part two along with robots and popular culture and a further exploration of robot and AI ethics issues.

What is a robot?

There are lots of robots, some are macroscale and others are at the micro and nanoscales (see my Sept. 22, 2017 posting for the latest nanobot). Here’s a definition from the Robot Wikipedia entry that covers all the scales. (Note: Links have been removed),

A robot is a machine—especially one programmable by a computer— capable of carrying out a complex series of actions automatically.[2] Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.

Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. [emphasis mine] By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.

We may think we’ve invented robots but the idea has been around for a very long time (from the Robot Wikipedia entry; Note: Links have been removed),

Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus[18] (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the Cretan island of Europa from pirates.

In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) “applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures.”[19][20] In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called “The Pigeon”. Hero of Alexandria (10–70 AD), a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.[21]

The 11th century Lokapannatti tells of how the Buddha’s relics were protected by mechanical robots (bhuta vahana yanta), from the kingdom of Roma visaya (Rome); until they were disarmed by King Ashoka. [22] [23]

In ancient China, the 3rd century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an ‘artificer’. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical ‘handiwork’ made of leather, wood, and artificial organs.[14] There are also accounts of flying automata in the Han Fei Zi and other texts, which attributes the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds (ma yuan) that could successfully fly.[17] In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours.

The beginning of automata is associated with Su Song’s astronomical clock tower, which featured mechanical figurines that chimed the hours.[24][25][26] His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.[26]

In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci’s notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo’s robot, able to sit up, wave its arms and move its head and jaw.[28] The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.

In Japan, complex animal and human automata were built between the 17th to 19th centuries, with many described in the 18th century Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet.[29] Different variations of the karakuri existed: the Butai karakuri, which were used in theatre, the Zashiki karakuri, which were small and used in homes, and the Dashi karakuri which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.

The term robot was coined by a Czech writer (from the Robot Wikipedia entry; Note: Links have been removed),

‘Robot’ was first applied as a term for artificial automata in a 1920 play R.U.R. by the Czech writer, Karel Čapek. However, Josef Čapek was named by his brother Karel as the true inventor of the term robot.[6][7] The word ‘robot’ itself was not new, having been in Slavic language as robota (forced laborer), a term which classified those peasants obligated to compulsory service under the feudal system widespread in 19th century Europe (see: Robot Patent).[37][38] Čapek’s fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal robota class eloquently fit the imagination of a new class of manufactured, artificial workers.

I’m particularly fascinated by how long humans have been imagining and creating robots.

Robot ethics in Vancouver

The Westender has run what I believe is the first article by a local (Vancouver, Canada) mainstream media outlet on the topic of robots and ethics. Tessa Vikander’s Sept. 14, 2017 article highlights two local researchers, Ajung Moon and Mark Schmidt, and a local social media company’s (Hootsuite) analytics director, Nik Pai. Vikander opens her piece with an ethical dilemma (Note: Links have been removed),

Emma is 68, in poor health and an alcoholic who has been told by her doctor to stop drinking. She lives with a care robot, which helps her with household tasks.

Unable to fix herself a drink, she asks the robot to do it for her. What should the robot do? Would the answer be different if Emma owns the robot, or if she’s borrowing it from the hospital?

This is the type of hypothetical, ethical question that Ajung Moon, director of the Open Roboethics Initiative [ORI], is trying to answer.

According to an ORI study, half of respondents said ownership should make a difference, and half said it shouldn’t. With society so torn on the question, Moon is trying to figure out how engineers should be programming this type of robot.

A Vancouver resident, Moon is dedicating her life to helping those in the decision-chair make the right choice. The question of the care robot is but one ethical dilemma in the quickly advancing world of artificial intelligence.

At the most sensationalist end of the scale, one form of AI that’s recently made headlines is the sex robot, which has a human-like appearance. A report from the Foundation for Responsible Robotics says that intimacy with sex robots could lead to greater social isolation [emphasis mine] because they desensitize people to the empathy learned through human interaction and mutually consenting relationships.

I’ll get back to the impact that robots might have on us in part two but first,

Sexbots, could they kill?

For more about sexbots in general, Alessandra Maldonado wrote an Aug. 10, 2017 article for salon.com about them (Note: A link has been removed),

Artificial intelligence has given people the ability to have conversations with machines like never before, such as speaking to Amazon’s personal assistant Alexa or asking Siri for directions on your iPhone. But now, one company has widened the scope of what it means to connect with a technological device and created a whole new breed of A.I. — specifically for sex-bots.

Abyss Creations has been in the business of making hyperrealistic dolls for 20 years, and by the end of 2017, they’ll unveil their newest product, an anatomically correct robotic sex toy. Matt McMullen, the company’s founder and CEO, explains the goal of sex robots is companionship, not only a physical partnership. “Imagine if you were completely lonely and you just wanted someone to talk to, and yes, someone to be intimate with,” he said in a video depicting the sculpting process of the dolls. “What is so wrong with that? It doesn’t hurt anybody.”

Maldonado also embedded this video into her piece,

A friend of mine described it as creepy. Specifically, we were discussing why someone would want to programme ‘insecurity’ as a desirable trait in a sexbot.

Marc Beaulieu’s concept of a desirable trait in a sexbot is one that won’t kill him, according to his Sept. 25, 2017 article for Canadian Broadcasting Corporation (CBC) News online (Note: Links have been removed),

Harmony has a charming Scottish lilt, albeit a bit staccato and canny. Her eyes dart around the room, her chin dips as her eyebrows raise in coquettish fashion. Her face manages expressions that are impressively lifelike. That face comes in 31 different shapes and 5 skin tones, with or without freckles and it sticks to her cyber-skull with magnets. Just peel it off and switch it out at will. In fact, you can choose Harmony’s eye colour, body shape (in great detail) and change her hair too. Harmony, of course, is a sex bot. A very advanced one. How advanced is she? Well, if you have $12,332 CAD to put towards a talkative new home appliance, REALBOTIX says you could be having a “conversation” and relations with her come January. Happy New Year.

Caveat emptor though: one novel bonus feature you might also get with Harmony is her ability to eventually murder you in your sleep. And not because she wants to.

Dr Nick Patterson, faculty of Science Engineering and Built Technology at Deakin University in Australia is lending his voice to a slew of others warning us to slow down and be cautious as we steadily approach Westworldian levels of human verisimilitude with AI tech. Surprisingly, Patterson didn’t regurgitate the narrative we recognize from the popular sci-fi (increasingly non-fi actually) trope of a dystopian society’s futile resistance to a robocalypse. He doesn’t think Harmony will want to kill you. He thinks she’ll be hacked by a code savvy ne’er-do-well who’ll want to snuff you out instead. …

Embedded in Beaulieu’s article is another video of the same sexbot profiled earlier. Her programmer seems to have learned a thing or two (he no longer inputs any traits as you’re watching),

I guess you could get one for Christmas this year if you’re willing to wait for an early 2018 delivery and aren’t worried about hackers turning your sexbot into a killer. While the killer aspect might seem farfetched, it turns out it’s not the only sexbot/hacker issue.

Sexbots as spies

This Oct. 5, 2017 story by Karl Bode for Techdirt points out that ‘smart’ sex toys can easily be hacked for any number of reasons, including simple mischief (Note: Links have been removed),

One “smart dildo” manufacturer was recently forced to shell out $3.75 million after it was caught collecting, err, “usage habits” of the company’s customers. According to the lawsuit, Standard Innovation’s We-Vibe vibrator collected sensitive data about customer usage, including “selected vibration settings,” the device’s battery life, and even the vibrator’s “temperature.” At no point did the company apparently think it was a good idea to clearly inform users of this data collection.

But security is also lacking elsewhere in the world of internet-connected sex toys. Alex Lomas of Pentest Partners recently took a look at the security in many internet-connected sex toys, and walked away arguably unimpressed. Using a Bluetooth “dongle” and antenna, Lomas drove around Berlin looking for openly accessible sex toys (he calls it “screwdriving,” in a riff off of wardriving). He subsequently found it’s relatively trivial to discover and hijack everything from vibrators to smart butt plugs — thanks to the way Bluetooth Low Energy (BLE) connectivity works:

“The only protection you have is that BLE devices will generally only pair with one device at a time, but range is limited and if the user walks out of range of their smartphone or the phone battery dies, the adult toy will become available for others to connect to without any authentication. I should say at this point that this is purely passive reconnaissance based on the BLE advertisements the device sends out – attempting to connect to the device and actually control it without consent is not something I or you should do. But now one could drive the Hush’s motor to full speed, and as long as the attacker remains connected over BLE and not the victim, there is no way they can stop the vibrations.”
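
What Lomas calls “purely passive reconnaissance” amounts to listening for the advertisement packets every BLE device broadcasts before any pairing takes place. Here is a minimal sketch of that idea in Python, assuming the third-party bleak library; it only lists what nearby devices already announce publicly and never attempts to connect,

# Minimal sketch of passive BLE reconnaissance (assumes the 'bleak' library,
# installable with 'pip install bleak'). It listens for the advertisements that
# BLE devices broadcast and prints what it hears; it never connects to anything.
import asyncio
from bleak import BleakScanner

async def main():
    # A ten-second scan for nearby BLE advertisements.
    devices = await BleakScanner.discover(timeout=10.0)
    for device in devices:
        # Only the publicly advertised name and hardware address are shown.
        print(f"seen: name={device.name!r} address={device.address}")

asyncio.run(main())

The point is simply that anything advertising over BLE is visible to any listener in range; whatever protection pairing offers only comes into play afterwards, which is exactly the gap Lomas describes.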

Does that make you think twice about a sexbot?

Robots and artificial intelligence

Getting back to the Vikander article (Sept. 14, 2017), Moon, Vikander, or both seem to have conflated artificial intelligence with robots in this section of the article,

As for the building blocks that have thrust these questions [care robot quandary mentioned earlier] into the spotlight, Moon explains that AI in its basic form is when a machine uses data sets or an algorithm to make a decision.

“It’s essentially a piece of output that either affects your decision, or replaces a particular decision, or supports you in making a decision.” With AI, we are delegating decision-making skills or thinking to a machine, she says.

Although we’re not currently surrounded by walking, talking, independently thinking robots, the use of AI [emphasis mine] in our daily lives has become widespread.
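
Moon’s bare-bones definition, a machine using a data set or an algorithm to produce an output that supports or replaces a decision, maps onto even the simplest machine learning code. Here is a minimal sketch in Python with scikit-learn; the ‘care robot’ data set is entirely invented for illustration and has nothing to do with any real system,

# Minimal sketch: a machine 'decides' whether to serve a drink, based on a tiny,
# invented data set. Features: [doctor_forbids (0 or 1), patient_owns_robot (0 or 1)].
# Purely illustrative of "data set + algorithm -> decision", not a real care-robot system.
from sklearn.tree import DecisionTreeClassifier

# Invented training examples and labels (1 = serve the drink, 0 = refuse).
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [1, 1, 0, 0]  # in this toy data, the doctor's orders settle the outcome

model = DecisionTreeClassifier().fit(X, y)

# The 'decision' Moon describes: an output that supports or replaces a human choice.
print(model.predict([[1, 1]]))  # doctor forbids, patient owns the robot -> [0]

The ethical weight, of course, sits in how those training labels were chosen in the first place, which is exactly the kind of question Moon and the ORI are asking.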

For Vikander, the conflation may have been a matter of keeping to her word count; for Moon, it may have been one of convenience, or a consequence of how the jargon is evolving, with ‘robot’ sometimes meaning a machine specifically, sometimes a machine with AI, and sometimes AI alone.

To be precise, not all robots have AI and not all AI is found in robots. It’s a distinction that may be more important for people developing robots and/or AI, but it also seems to make a difference where funding is concerned. In a March 24, 2017 posting about the 2017 Canadian federal budget I noticed this,

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

This brings me to a recent set of meetings held in Vancouver to devise a Canadian robotics roadmap, which suggests the robotics folks feel they need specific representation and funding.

See part two for the rest.

Artificial intelligence and metaphors

This is a different approach to artificial intelligence. From a June 27, 2017 news item on ScienceDaily,

Ask Siri to find a math tutor to help you “grasp” calculus and she’s likely to respond that your request is beyond her abilities. That’s because metaphors like “grasp” are difficult for Apple’s voice-controlled personal assistant to, well, grasp.

But new UC Berkeley research suggests that Siri and other digital helpers could someday learn the algorithms that humans have used for centuries to create and understand metaphorical language.

Mapping 1,100 years of metaphoric English language, researchers at UC Berkeley and Lehigh University in Pennsylvania have detected patterns in how English speakers have added figurative word meanings to their vocabulary.

The results, published in the journal Cognitive Psychology, demonstrate how throughout history humans have used language that originally described palpable experiences such as “grasping an object” to describe more intangible concepts such as “grasping an idea.”

Unfortunately, this image is not the best quality,

Scientists have created historical maps showing the evolution of metaphoric language. (Image courtesy of Mahesh Srinivasan)

A June 27, 2017 University of California at Berkeley (UC Berkeley) news release by Yasmin Anwar, which originated the news item, provides more detail,

“The use of concrete language to talk about abstract ideas may unlock mysteries about how we are able to communicate and conceptualize things we can never see or touch,” said study senior author Mahesh Srinivasan, an assistant professor of psychology at UC Berkeley. “Our results may also pave the way for future advances in artificial intelligence.”

The findings provide the first large-scale evidence that the creation of new metaphorical word meanings is systematic, researchers said. They can also inform efforts to design natural language processing systems like Siri to help them understand creativity in human language.

“Although such systems are capable of understanding many words, they are often tripped up by creative uses of words that go beyond their existing, pre-programmed vocabularies,” said study lead author Yang Xu, a postdoctoral researcher in linguistics and cognitive science at UC Berkeley.

“This work brings opportunities toward modeling metaphorical words at a broad scale, ultimately allowing the construction of artificial intelligence systems that are capable of creating and comprehending metaphorical language,” he added.

Srinivasan and Xu conducted the study with Lehigh University psychology professor Barbara Malt.

Using the Metaphor Map of English database, researchers examined more than 5,000 examples from the past millennium in which word meanings from one semantic domain, such as “water,” were extended to another semantic domain, such as “mind.”

Researchers called the original semantic domain the “source domain” and the domain that the metaphorical meaning was extended to, the “target domain.”

More than 1,400 online participants were recruited to rate semantic domains such as “water” or “mind” according to the degree to which they were related to the external world (light, plants), animate things (humans, animals), or intense emotions (excitement, fear).

These ratings were fed into computational models that the researchers had developed to predict which semantic domains had been the sources or targets of metaphorical extension.

In comparing their computational predictions against the actual historical record provided by the Metaphor Map of English, researchers found that their models correctly forecast about 75 percent of recorded metaphorical language mappings over the past millennium.

Furthermore, they found that the degree to which a domain is tied to experience in the external world, such as “grasping a rope,” was the primary predictor of how a word would take on a new metaphorical meaning such as “grasping an idea.”

For example, time and again, researchers found that words associated with textiles, digestive organs, wetness, solidity and plants were more likely to provide sources for metaphorical extension, while mental and emotional states, such as excitement, pride and fear were more likely to be the targets of metaphorical extension.
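
To get a feel for the kind of computational model the release describes, here is a rough sketch in Python with scikit-learn. The ratings and labels below are invented stand-ins, and this is not the authors’ actual model; it only illustrates the idea of predicting whether a semantic domain acts as a metaphorical source or target from how strongly it is tied to experience of the external world,

# Rough sketch of the modelling idea: domains rated as more tied to concrete,
# external experience tend to be metaphorical sources; abstract, emotional domains
# tend to be targets. All ratings and labels below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Invented ratings of "relatedness to the external world" (0 to 1) and whether
# the domain historically served as a source (1) or a target (0) of metaphor.
# Domains: water, plants, textiles, solidity, mind, fear, pride, excitement.
externality = [[0.9], [0.85], [0.8], [0.75], [0.2], [0.15], [0.1], [0.1]]
is_source = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(externality, is_source)

# Predict the direction for a new, invented rating.
print(model.predict([[0.7]]))        # rated fairly concrete, so predicted to be a source -> [1]
print(model.predict_proba([[0.3]]))  # class probabilities, ordered [target, source]

The real study used richer ratings and more than 5,000 historical mappings, and its models correctly forecast about 75 percent of them; the sketch above only shows the shape of the problem.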

Here’s a link to and a citation for the paper,

Evolution of word meanings through metaphorical mapping: Systematicity over the past millennium by Yang Xu, Barbara C. Malt, and Mahesh Srinivasan. Cognitive Psychology, Volume 96, August 2017, Pages 41–53. DOI: https://doi.org/10.1016/j.cogpsych.2017.05.005

The early web version of this paper is behind a paywall.

For anyone interested in the ‘Metaphor Map of English’ database mentioned in the news release, you can find it here on the University of Glasgow website. By the way, it also seems to be known as ‘Mapping Metaphor with the Historical Thesaurus’.