Category Archives: robots

Electrode-filled elastic fiber for wearable electronics and robots

This work comes out of Switzerland. A May 25, 2018 École Polytechnique Fédérale de Lausanne (EPFL) press release (also on EurekAlert) announces their fibers,

EPFL scientists have found a fast and simple way to make super-elastic, multi-material, high-performance fibers. Their fibers have already been used as sensors on robotic fingers and in clothing. This breakthrough method opens the door to new kinds of smart textiles and medical implants.

It’s a whole new way of thinking about sensors. The tiny fibers developed at EPFL are made of elastomer and can incorporate materials like electrodes and nanocomposite polymers. The fibers can detect even the slightest pressure and strain and can withstand deformation of close to 500% before recovering their initial shape. All that makes them perfect for applications in smart clothing and prostheses, and for creating artificial nerves for robots.

The fibers were developed at EPFL’s Laboratory of Photonic Materials and Fiber Devices (FIMAP), headed by Fabien Sorin at the School of Engineering. The scientists came up with a fast and easy method for embedding different kinds of microstructures in super-elastic fibers. For instance, by adding electrodes at strategic locations, they turned the fibers into ultra-sensitive sensors. What’s more, their method can be used to produce hundreds of meters of fiber in a short amount of time. Their research has just been published in Advanced Materials.

Heat, then stretch
To make their fibers, the scientists used a thermal drawing process, which is the standard process for optical-fiber manufacturing. They started by creating a macroscopic preform with the various fiber components arranged in a carefully designed 3D pattern. They then heated the preform and stretched it out, like melted plastic, to make fibers of a few hundred microns in diameter. And while this process stretched out the pattern of components lengthwise, it also contracted it crosswise, meaning the components’ relative positions stayed the same. The end result was a set of fibers with an extremely complicated microarchitecture and advanced properties.
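
For anyone who’d like a feel for the geometry, here’s a quick back-of-the-envelope sketch in Python. The preform diameter and draw ratio below are invented for illustration (they’re not figures from the paper); the only physics assumed is that the preform’s volume is conserved as it stretches,

```python
# Back-of-the-envelope draw-down calculation for thermal fiber drawing.
# Assumption: volume is conserved, so a cylindrical preform stretched
# lengthwise by a draw ratio R shrinks in diameter by a factor of sqrt(R),
# while the relative positions of embedded components stay the same.

import math

def drawn_diameter(preform_diameter_mm: float, draw_ratio: float) -> float:
    """Diameter (mm) of the drawn fiber for a given lengthwise draw ratio."""
    return preform_diameter_mm / math.sqrt(draw_ratio)

# Illustrative numbers only: a 25 mm preform drawn 10,000x in length
# comes out at 25 / sqrt(10000) = 0.25 mm, i.e., 250 microns --
# consistent with the "few hundred microns" mentioned above.
print(drawn_diameter(25.0, 10_000))  # 0.25 (mm)
```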

Until now, thermal drawing could be used to make only rigid fibers. But Sorin and his team used it to make elastic fibers. With the help of a new criterion for selecting materials, they were able to identify some thermoplastic elastomers that have a high viscosity when heated. After the fibers are drawn, they can be stretched and deformed but they always return to their original shape.

Rigid materials like nanocomposite polymers, metals and thermoplastics can be introduced into the fibers, as well as liquid metals that can be easily deformed. “For instance, we can add three strings of electrodes at the top of the fibers and one at the bottom. Different electrodes will come into contact depending on how the pressure is applied to the fibers. This will cause the electrodes to transmit a signal, which can then be read to determine exactly what type of stress the fiber is exposed to – such as compression or shear stress, for example,” says Sorin.
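
To make Sorin’s example a little more concrete, here’s a toy decoder in Python. The electrode names and the contact-to-stress mapping are mine, invented purely for illustration; the actual readout and calibration are in the paper,

```python
# Toy decoder for the arrangement Sorin describes: three electrode strings
# at the top of the fiber and one at the bottom. Which electrodes touch
# under deformation hints at the type of stress. This mapping is invented
# for illustration -- it is not the calibration from the paper.

def classify_stress(contacts: set) -> str:
    """Guess the deformation mode from the set of touching electrodes."""
    if {"top_left", "top_center", "top_right", "bottom"} <= contacts:
        return "compression"        # fiber squashed flat: everything meets
    if {"top_center", "bottom"} <= contacts:
        return "normal pressure"    # straight push from above
    if {"top_left", "bottom"} <= contacts or {"top_right", "bottom"} <= contacts:
        return "shear"              # sideways deformation skews the contact
    return "no significant stress"

print(classify_stress({"top_center", "bottom"}))  # normal pressure
```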

Artificial nerves for robots

Working in association with Professor Dr. Oliver Brock (Robotics and Biology Laboratory, Technical University of Berlin), the scientists integrated their fibers into robotic fingers as artificial nerves. Whenever the fingers touch something, electrodes in the fibers transmit information about the robot’s tactile interaction with its environment. The research team also tested adding their fibers to large-mesh clothing to detect compression and stretching. “Our technology could be used to develop a touch keyboard that’s integrated directly into clothing, for instance,” says Sorin.

The researchers see many other potential applications, especially since the thermal drawing process can be easily tweaked for large-scale production, a real plus for the manufacturing sector. The textile sector has already expressed interest in the new technology, and patents have been filed.

There’s a video of the lead researcher discussing the work as he offers some visual aids,

Here’s a link to and a citation for the paper,

Superelastic Multimaterial Electronic and Photonic Fibers and Devices via Thermal Drawing by Yunpeng Qu, Tung Nguyen‐Dang, Alexis Gérald Page, Wei Yan, Tapajyoti Das Gupta, Gelu Marius Rotaru, René M. Rossi, Valentine Dominique Favrod, Nicola Bartolomei, Fabien Sorin. Advanced Materials. First published: 25 May 2018. DOI: https://doi.org/10.1002/adma.201707251

This paper is behind a paywall.

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on from my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), here’s a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots): the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and does not mimic a human or other biological organism such that you might, under some circumstances, mistake the robot for a biological organism.

As for what precipitated this feature (in part), it seems there’s been a United Nations meeting in Geneva, Switzerland, held from August 27 – 31, 2018, about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety. For anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country with its makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robotics, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software-only story.

AI fashion designer better than Balenciaga?

Despite the title of Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga, but from the pictures I’ve seen the designs are as good, and the AI does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barrat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barrat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barrat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barrat writes on Twitter.

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barrat

In contrast to the previous two stories, this one is all about algorithms; no machinery with independent movement (robot hardware) is needed.

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have featured here many, many times before, most recently in a March 27, 2017 posting about his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
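
To make Kawahara’s three axes (timing, lexical form, prosody) concrete, here’s a crude, hand-written caricature of a backchannel generator. The thresholds and phrases are invented; ERICA’s actual responses, as noted above, are learned from a counseling dialogue corpus rather than hand-coded,

```python
# A rule-based caricature of backchanneling along Kawahara's three axes:
# timing (when a response happens), lexical form (what is said), and
# prosody (how it is said). All numbers and phrases are illustrative.

import random

def backchannel(pause_ms: int, last_word: str, speaker_pitch_hz: float):
    """Return (delay_ms, utterance, pitch_hz), or None to stay quiet."""
    if pause_ms < 200:                 # timing: don't interrupt mid-speech
        return None
    if random.random() < 0.3:          # lexical form: partial repeat...
        utterance = last_word + "?"
    else:                              # ...or a generic acknowledgement
        utterance = random.choice(["uh-huh", "mm", "I see"])
    pitch_hz = speaker_pitch_hz * 0.9  # prosody: mirror the speaker, lower
    return (min(pause_ms, 400), utterance, pitch_hz)

print(backchannel(350, "family", 220.0))
```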

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as technological one.

Erica is interviewed about her hope and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like that of the humble hairdresser are safer from automation than those in, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it, let alone attempt to get a feeling for where all this might be headed. When you add the fact that the terms robots/artificial intelligence are often used interchangeably, and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.
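
Wong’s definitions get easier to grasp with a toy example. Here’s ‘training’ in miniature: a one-parameter model that keeps modifying itself based on the data provided, which is exactly the loop she describes. The data and learning rate are made up for illustration,

```python
# A minimal illustration of "training": fit y = w * x to toy data by
# repeatedly nudging w to reduce the error. The data below are invented
# and roughly follow y = 2x.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

w = 0.0                               # initial guess for the one parameter
for _ in range(200):                  # the training loop
    # average gradient of the squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad                  # nudge w against the error gradient

print(round(w, 2))  # ~2.04 -- the model has "learned" the pattern
```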

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) the Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy: How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know if you have any insights on the matter in the comments section of this blog.

Sexbots, sexbot ethics, families, and marriage

Setting the stage

Can we? Should we? Is this really a good idea? I believe those ships have sailed where sexbots are concerned since the issue is no longer whether we can or should but rather what to do now that we have them. My Oct. 17, 2017 posting, ‘Robots in Vancouver and in Canada (one of two)’, features Harmony, the first (I believe) commercial AI (artificial intelligence)-enhanced sex robot in the US. Its maker was getting ready to start shipping the bot either for Christmas 2017 or in early 2018.

Ethical quandaries?

Things have moved a little more quickly than I would have expected had I thought ahead. An April 5, 2018 essay (h/t phys.org) by Victoria Brooks, lecturer in law at the University of Westminster (UK), for The Conversation lays out some of the ethical issues (Note: Links have been removed),

Late in 2017 at a tech fair in Austria, a sex robot was reportedly “molested” repeatedly and left in a “filthy” state. The robot, named Samantha, received a barrage of male attention, which resulted in her sustaining two broken fingers. This incident confirms worries that the possibility of fully functioning sex robots raises both tantalising possibilities for human desire (by mirroring human/sex-worker relationships), as well as serious ethical questions.

So what should be done? The campaign to “ban” sex robots, as the computer scientist Kate Devlin has argued, is only likely to lead to a lack of discussion. Instead, she hypothesises that many ways of sexual and social inclusivity could be explored as a result of human-robot relationships.

To be sure, there are certain elements of relationships between humans and sex workers that we may not wish to repeat. But to me, it is the ethical aspects of the way we think about human-robot desire that are particularly key.

Why? Because we do not even agree yet on what sex is. Sex can mean lots of different things for different bodies – and the types of joys and sufferings associated with it are radically different for each individual body. We are only just beginning to understand and know these stories. But with Europe’s first sex robot brothel open in Barcelona and the building of “Harmony”, a talking sex robot in California, it is clear that humans are already contemplating imposing our barely understood sexual ethic upon machines.

I think that most of us will experience some discomfort on hearing Samantha’s story. And it’s important that, just because she’s a machine, we do not let ourselves “off the hook” by making her yet another victim and heroine who survived an encounter, only for it to be repeated. Yes, she is a machine, but does this mean it is justifiable to act destructively towards her? Surely the fact that she is in a human form makes her a surface on which human sexuality is projected, and symbolic of a futuristic human sexuality. If this is the case, then Samatha’s [sic] case is especially sad.

It is Devlin who has asked the crucial question: whether sex robots will have rights. “Should we build in the idea of consent,” she asks? In legal terms, this would mean having to recognise the robot as human – such is the limitation of a law made by and for humans.

Suffering is a way of knowing that you, as a body, have come out on the “wrong” side of an ethical dilemma. [emphasis mine] This idea of an “embodied” ethic understood through suffering has been developed on the basis of the work of the famous philosopher Spinoza and is of particular use for legal thinkers. It is useful as it allows us to judge rightness by virtue of the real and personal experience of the body itself, rather than judging by virtue of what we “think” is right in connection with what we assume to be true about their identity.

This helps us with Samantha’s case, since it tells us that in accordance with human desire, it is clear she would not have wanted what she got. The contact Samantha received was distinctly human in the sense that this case mirrors some of the most violent sexual offences cases. While human concepts such as “law” and “ethics” are flawed, we know we don’t want to make others suffer. We are making these robot lovers in our image and we ought not pick and choose whether to be kind to our sexual partners, even when we choose to have relationships outside of the “norm”, or with beings that have a supposedly limited consciousness, or even no (humanly detectable) consciousness.

Brooks makes many interesting points, not all of them in the excerpts seen here, but one question not raised in the essay is whether or not the bot itself suffered. It’s a point that I imagine proponents of ‘treating your sex bot however you like’ are certain to raise. It’s also a question Canadians may need to answer sooner rather than later now that a ‘sex doll brothel’ is about to open in Toronto. However, before getting to that news bit, there’s an interview with a man, his sexbot, and his wife.

The sexbot at home

In fact, I have two interviews; the first, included here, was with CBC (Canadian Broadcasting Corporation) Radio and originally aired October 29, 2017. Here’s part of the transcript (Note: A link has been removed),

“She’s [Samantha] quite an elegant kind of girl,” says Arran Lee Squire, who is sales director for the company that makes her and also owns one himself.

And unlike other dolls like her, she’ll resist sex if she isn’t in the mood.

“If you touch her, say, on her sensitive spots on the breasts, for example, straight away, and you don’t touch her hands or kiss her, she might say, ‘Oh, I’m not ready for that,'” Arran says.

He says she’ll even synchronize her orgasm to the user’s.

But Arran emphasized that her functions go beyond the bedroom.

Samantha has a “family mode,” in which she can talk about science, animals and philosophy. She’ll give you motivational quotes if you’re feeling down.

At Arran’s house, Samantha interacts with his two kids. And when they’ve gone to bed, she’ll have sex with him, but only with his wife involved.

There’s also this Sept. 12, 2017 ITV This Morning with Phillip & Holly broadcast interview (running time: 6 mins. 19 secs.),

I can imagine that if I were a child in that household I’d be tempted to put the sexbot into ‘sexy mode’, preferably unsupervised by my parents. Also, will the parents be using it, at some point, for sex education?

Canadian perspective 1: Sure, it could be good for your marriage

Prior to the potential sex doll brothel in Toronto (more about that coming up), there was a flurry of interest in Marina Adshade’s contribution to the book, Robot Sex: Social and Ethical Implications, from an April 18, 2018 news item on The Tyee,

Sex robots may soon be a reality. However, little research has been done on the social, philosophical, moral and legal implications of robots specifically designed for sexual gratification.

In a chapter written for the book Robot Sex: Social and Ethical Implications, Marina Adshade, professor in the Vancouver School of Economics at the University of British Columbia, argues that sex robots could improve marriage by making it less about sex and more about love.

In this Q&A, Adshade discusses her predictions.

Could sex robots really be a viable replacement for marriage with a human? Can you love a robot?

I don’t see sex robots as substitutes for human companionship but rather as complements to human companionship. Just because we might enjoy the company of robots doesn’t mean that we cannot also enjoy the company of humans, or that having robots won’t enhance our relationships with humans. I see them as very different things — just as one woman (or one man) is not a perfect substitute for another woman (or man).

Is there a need for modern marriage to improve?

We have become increasingly demanding in what we want from the people that we marry. There was a time when women were happy to have a husband that supported the family and men were happy to have a caring mother to his children. Today we still want those things, but we also want so much more — we want lasting sexual compatibility, intense romance, and someone who is an amazing co-parent. That is a lot to ask of one person. …

Adshade adapted part of her text “Sexbot-Induced Social Change: An Economic Perspective” in Robot Sex: Social and Ethical Implications edited by John Danaher and Neil McArthur for an August 14, 2018 essay on Slate.com,

Technological change invariably brings social change. We know this to be true, but rarely can we make accurate predictions about how social behavior will evolve when new technologies are introduced. …we should expect that the proliferation of robots designed specifically for human sexual gratification means that sexbot-induced social change is on the horizon.

Some elements of that social change might be easier to anticipate than others. For example, the share of the young adult population that chooses to remain single (with their sexual needs met by robots) is very likely to increase. Because social change is organic, however, adaptations in other social norms and behaviors are much more difficult to predict. But this is not virgin territory [I suspect this was an unintended pun]. New technologies completely transformed sexual behavior and marital norms over the second half of the 20th century. Although getting any of these predictions right will surely involve some luck, we have decades of technology-induced social change to guide our predictions about the future of a world confronted with wholesale access to sexbots.

The reality is that marriage has always evolved alongside changes in technology. Between the mid-1700s and the early 2000s, the role of marriage between a man and a woman was predominately to encourage the efficient production of market goods and services (by men) and household goods and services (by women), since the social capacity to earn a wage was almost always higher for husbands than it was for wives. But starting as early as the end of the 19th century, marriage began to evolve as electrification in the home made women’s work less time-consuming, and new technologies in the workplace started to decrease the gender wage gap. Between 1890 and 1940, the share of married women working in the labor force tripled, and over the course of the century, that share continued to grow as new technologies arrived that replaced the labor of women in the home. By the early 1970s, the arrival of microwave ovens and frozen foods meant that a family could easily be fed at the end of a long workday, even when the mother worked outside of the home.

There are those who argue that men only “assume the burden” of marriage because marriage allows men easy sexual access, and that if men can find sex elsewhere they won’t marry. We hear this prediction now being made in reference to sexbots, but the same argument was given a century ago when the invention of the latex condom (1912) and the intrauterine device (1909) significantly increased people’s freedom to have sex without risking pregnancy and (importantly, in an era in which syphilis was rampant) sexually transmitted disease. Cosmopolitan magazine ran a piece at the time by John B. Watson that asked the blunt question, will men marry 50 years from now? Watson’s answer was a resounding no, writing that “we don’t want helpmates anymore, we want playmates.” Social commentators warned that birth control technologies would destroy marriage by removing the incentives women had to remain chaste and encourage them to flood the market with nonmarital sex. Men would have no incentive to marry, and women, whose only asset is sexual access, would be left destitute.

Fascinating, non? Should you be interested, “Sexbot-Induced Social Change: An Economic Perspective” by Marina Adshade can be found in Robot Sex: Social and Ethical Implications (link to Amazon) edited by John Danaher and Neil McArthur. © 2017 by the Massachusetts Institute of Technology, reprinted courtesy of the MIT Press.

Canadian perspective 2: What is a sex doll brothel doing in Toronto?

Sometimes known as Toronto the Good (although not recently; find out more about Toronto and its nicknames here) and once a byword for stodginess, the city is about to welcome a sex doll brothel according to an August 28, 2018 CBC Radio news item by Katie Geleff and John McGill,

On their website, Aura Dolls claims to be, “North America’s first known brothel that offers sexual services with the world’s most beautiful silicone ladies.”

Nestled between a massage parlour, nail salon and dry cleaner, Aura Dolls is slated to open on Sept. 8 [2018] in an otherwise nondescript plaza in Toronto’s north end.

The company plans to operate 24 hours a day, seven days a week, and will offer customers six different silicone dolls. The website describes the life-like dolls as, “classy, sophisticated, and adventurous ladies.” …

They add that “the dolls are thoroughly sanitized to meet your expectations,” but that condoms are still “highly recommended.”

Toronto city councillor John Filion says people in his community are concerned about the proposed business.

Filion spoke to As It Happens guest host Helen Mann. Here is part of their conversation.

Councillor Filion, Aura Dolls is urging people to have “an open mind” about their business plan. Would you say that you have one?

Well, I have an open mind about what sort of behaviours people want to do, as long as they don’t harm anybody else. It’s a totally different matter once you bring that out to the public. So I think I have a fairly closed mind about where people should be having sex with [silicone] dolls.

So, what’s wrong with a sex doll brothel?

It’s where it is located, for one thing. Where it’s being proposed happens to be near an intersection where about 25,000 people live, all kinds of families, four elementary schools are very near by. And you know, people shouldn’t really need to be out on a walk with their families and try to explain to their kids why someone is having sex with a [silicone] doll.

But Aura Dolls says that they are going to be doing this very discreetly, that they won’t have explicit signage, and that they therefore won’t be bothering anyone.

They’ve hardly been discreet. They were putting illegal posters all over the neighbourhood. They’ve probably had a couple of hundred of thousands of dollars of free publicity already. I don’t think there’s anything at all discreet about what they are doing. They’re trying to be indiscreet to drum up business.

Can you be sure that there aren’t constituents in your area that think this is a great idea?

I can’t be sure that there aren’t some people who might think, “Oh great, it’s just down the street from me. Let me go there.” I would say that might be a fraction of one per cent of my constituents. Most people are appalled by this.

And it’s not a narrow-minded neighbourhood. Whatever somebody does in their home, I don’t think we’re going to pass moral judgment on it, again, as long as it’s not harming anyone else. But this is just kind of scuzzy. …

….

Aura Dolls says that it’s doing nothing illegal. They say that they are being very clear that the dolls they are using represent adult women and that they are actually providing a service. Do you agree that they are doing this legally?

No, they’re not at all legal. It’s an illegal use. And if there’s any confusion about that, they will be getting a letter from the city very soon. It is clearly not a legal use. It’s not permitted under the zoning bylaw and it fits the definition of adult entertainment parlour, for which you require a license — and they certainly would not get one. They would not get a license in this neighbourhood because it’s not a permitted use.

The audio portion runs for 5 mins. 31 secs.

I believe these dolls are in fact sexbots, likely enhanced with AI. An August 29, 2018 article by Karlton Jahmal for hotnewhiphop.com describes the dolls as ‘fembots’ and provides more detail (Note: Links have been removed),

Toronto has seen the future, and apparently, it has to do with sex dolls. The Six [another Toronto nickname] is about to get blessed with the first legal sex doll brothel, and the fembots look too good to be true. If you head over to Aura Dolls website, detailed biographies for the six available sex dolls are on full display. You can check out the doll’s height, physical dimensions, heritage and more.

Aura plans to introduce more dolls in the future, according to a statement in the Toronto Star by Claire Lee, a representative for the company. At the moment, the ethnicities of the sex dolls feature Japanese, Caucasian American, French Canadian, Irish Canadian, Colombian, and Korean girls. Male dolls will be added in the near future. The sex dolls look remarkably realistic. Aura’s website writes, “Our dolls are made from the highest quality of TPE silicone which mimics the feeling of natural human skin, pores, texture and movement giving the user a virtually identical experience as being with a real partner.”

There are a few more details about the proposed brothel and more comments from Toronto city councillor John Filion in an August 28, 2018 article by Claire Floody and Jenna Moon with Alexandra Jones and Melanie Green for thestar.com,

Toronto will soon be home to North America’s [this should include Canada, US, and Mexico] first known sex doll brothel, offering sexual services with six silicone-made dolls.

According to the website for Aura Dolls, the company behind the brothel, the vision is to bring a new way to achieve sexual needs “without the many restrictions and limitations that a real partner may come with.”

The brothel is expected to open in a shopping plaza on Yonge St., south of Sheppard Ave., on Sept. 8 [2018]. The company doesn’t give the exact location on its website, stating it’s announced upon booking.

Spending half an hour with one doll costs $80, with two dolls running $160. For an hour, the cost is $120 with one doll. The maximum listed time is four hours for $480 per doll.

Doors at the new brothel for separate entry and exit will be used to ensure “maximum privacy for customers.” While the business does plan on having staff on-site, they “should not have any interaction,” Lee said.

“The reason why we do that is to make sure that everyone feels comfortable coming in and exiting,” she said, noting that people may feel shy or awkward about visiting the site.

… Lee said that the business is operating within the law. “The only law stating with anything to do with the dolls is that it has to meet a height requirement. It can’t resemble a child,” she said. …

Councillor John Filion, Ward 23 Willowdale, said his staff will be “throwing the book at (Aura Dolls) for everything they can.”

“I’ve still got people studying to see what’s legal and what isn’t,” Filion said. He noted that a bylaw introduced in North York in the ’90s prevents retail sex shops operating outside of industrial areas. Filion said his office is still confirming that the bylaw is active following harmonization, which condensed the six boroughs’ bylaws after amalgamation in 1998.

“If the bylaw that I brought in 20 years ago still exists, it would prohibit this,” Filion said.

“There’s legal issues,” he said, suggesting that people interested in using the sex dolls might consider doing so at home, rather than at a brothel.

The councillor said he’s received complaints from constituents about the business. “The phone’s ringing off the hook today,” Filion said.

It should be an interesting first week at school for everyone involved. I wonder what Ontario Premier, Doug Ford who recently rolled back the sex education curriculum for the province by 20 years will make of these developments.

As for sexbots/fembots/sex dolls or whatever you want to call them, they are here and it’s about time Canadians had a frank discussion on the matter. Also, I’ve been waiting for quite some time for any mention of male sexbots (malebots?). Personally, I don’t think we’ll be seeing male sexbots appear in either brothels or homes anytime soon.

Being smart about using artificial intelligence in the field of medicine

Since my August 20, 2018 post featured an opinion piece about the possibly imminent replacement of radiologists with artificial intelligence systems and the latest research about employing them for diagnosing eye diseases, it seems like a good time to examine some of the mythology embedded in the discussion about AI and medicine.

Imperfections in medical AI systems

An August 15, 2018 article for Slate.com by W. Nicholson Price II (who teaches at the University of Michigan School of Law; in addition to his law degree he has a PhD in Biological Sciences from Columbia University) begins with the peppy, optimistic view before veering into more critical territory (Note: Links have been removed),

For millions of people suffering from diabetes, new technology enabled by artificial intelligence promises to make management much easier. Medtronic’s Guardian Connect system promises to alert users 10 to 60 minutes before they hit high or low blood sugar level thresholds, thanks to IBM Watson, “the same supercomputer technology that can predict global weather patterns.” Startup Beta Bionics goes even further: In May, it received Food and Drug Administration approval to start clinical trials on what it calls a “bionic pancreas system” powered by artificial intelligence, capable of “automatically and autonomously managing blood sugar levels 24/7.”

An artificial pancreas powered by artificial intelligence represents a huge step forward for the treatment of diabetes—but getting it right will be hard. Artificial intelligence (also known in various iterations as deep learning and machine learning) promises to automatically learn from patterns in medical data to help us do everything from managing diabetes to finding tumors in an MRI to predicting how long patients will live. But the artificial intelligence techniques involved are typically opaque. We often don’t know how the algorithm makes the eventual decision. And they may change and learn from new data—indeed, that’s a big part of the promise. But when the technology is complicated, opaque, changing, and absolutely vital to the health of a patient, how do we make sure it works as promised?

Price describes how a ‘closed loop’ artificial pancreas with AI would automate insulin levels for diabetic patients, flaws in the automated system, and how companies like to maintain a competitive advantage (Note: Links have been removed),

[…] a “closed loop” artificial pancreas, where software handles the whole issue, receiving and interpreting signals from the monitor, deciding when and how much insulin is needed, and directing the insulin pump to provide the right amount. The first closed-loop system was approved in late 2016. The system should take as much of the issue off the mind of the patient as possible (though, of course, that has limits). Running a closed-loop artificial pancreas is challenging. The way people respond to changing levels of carbohydrates is complicated, as is their response to insulin; it’s hard to model accurately. Making it even more complicated, each individual’s body reacts a little differently.

Here’s where artificial intelligence comes into play. Rather than trying explicitly to figure out the exact model for how bodies react to insulin and to carbohydrates, machine learning methods, given a lot of data, can find patterns and make predictions. And existing continuous glucose monitors (and insulin pumps) are excellent at generating a lot of data. The idea is to train artificial intelligence algorithms on vast amounts of data from diabetic patients, and to use the resulting trained algorithms to run a closed-loop artificial pancreas. Even more exciting, because the system will keep measuring blood glucose, it can learn from the new data and each patient’s artificial pancreas can customize itself over time as it acquires new data from that patient’s particular reactions.
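
Here’s a skeleton, in Python, of what such a closed loop looks like. To be clear about what’s invented: the target, the gain, and the read_sensor/drive_pump functions are placeholders of my own for illustration; a real system would put a learned, patient-adapted model where the toy decision rule sits, and nothing here is medical logic,

```python
# Skeleton of the "closed loop" Price describes: read the glucose monitor,
# decide a dose, drive the pump. The decision rule below is a toy stand-in
# for the learned, patient-specific model a real system would use.

import time

TARGET_MG_DL = 110.0                   # illustrative target glucose level

def decide_dose(glucose_mg_dl: float) -> float:
    """Toy proportional controller: dose rises with excess glucose."""
    error = glucose_mg_dl - TARGET_MG_DL
    return max(0.0, 0.01 * error)      # units of insulin; invented gain

def control_loop(read_sensor, drive_pump, interval_s: int = 300):
    """Poll the monitor every interval_s seconds and act on each reading."""
    while True:
        glucose = read_sensor()        # continuous glucose monitor reading
        dose = decide_dose(glucose)
        if dose > 0:
            drive_pump(dose)           # command the insulin pump
        time.sleep(interval_s)
```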

Here’s the tough question: How will we know how well the system works? Diabetes software doesn’t exactly have the best track record when it comes to accuracy. A 2015 study found that among smartphone apps for calculating insulin doses, two-thirds of the apps risked giving incorrect results, often substantially so. … And companies like to keep their algorithms proprietary for a competitive advantage, which makes it hard to know how they work and what flaws might have gone unnoticed in the development process.

There’s more,

These issues aren’t unique to diabetes care—other A.I. algorithms will also be complicated, opaque, and maybe kept secret by their developers. The potential for problems multiplies when an algorithm is learning from data from an entire hospital, or hospital system, or the collected data from an entire state or nation, not just a single patient. …

The [US Food and Drug Administration] FDA is working on this problem. The head of the agency has expressed his enthusiasm for bringing A.I. safely into medical practice, and the agency has a new Digital Health Innovation Action Plan to try to tackle some of these issues. But they’re not easy, and one thing making it harder is a general desire to keep the algorithmic sauce secret. The example of IBM Watson for Oncology has given the field a bit of a recent black eye—it turns out that the company knew the algorithm gave poor recommendations for cancer treatment but kept that secret for more than a year. …

While Price focuses on problems with algorithms and with developers and their business interests, he also hints at some of the body’s complexities.

Can AI systems be like people?

Susan Baxter, a medical writer with over 20 years experience, a PhD in health economics, and author of countless magazine articles and several books, offers a more person-centered approach to the discussion in her July 6, 2018 posting on susanbaxter.com,

The fascination with AI continues to irk, given that every second thing I read seems to be extolling the magic of AI and medicine and how It Will Change Everything. Which it will not, trust me. The essential issue of illness remains perennial and revolves around an individual for whom no amount of technology will solve anything without human contact. …

But in this world, or so we are told by AI proponents, radiologists will soon be obsolete. [my August 20, 2018 post] The adaptational learning capacities of AI mean that reading a scan or x-ray will soon be more ably done by machines than humans. The presupposition here is that we, the original programmers of this artificial intelligence, understand the vagaries of real life (and real disease) so wonderfully that we can deconstruct these much as we do the game of chess (where, let’s face it, Big Blue ate our lunch) and that analyzing a two-dimensional image of a three-dimensional body, already problematic, can be reduced to a series of algorithms.

Attempting to extrapolate what some “shadow” on a scan might mean in a flesh and blood human isn’t really quite the same as bishop to knight seven. Never mind the false positive/negatives that are considered an acceptable risk or the very real human misery they create.

Moravec called it

It’s called Moravec’s paradox, the inability of humans to realize just how complex basic physical tasks are – and the corresponding inability of AI to mimic it. As you walk across the room, carrying a glass of water, talking to your spouse/friend/cat/child; place the glass on the counter and open the dishwasher door with your foot as you open a jar of pickles at the same time, take a moment to consider just how many concurrent tasks you are doing and just how enormous the computational power these ostensibly simple moves would require.

Researchers in Singapore taught industrial robots to assemble an Ikea chair. Essentially, screw in the legs. A person could probably do this in a minute. Maybe two. The preprogrammed robots took nearly half an hour. And I suspect programming those robots took considerably longer than that.

Ironically, even Elon Musk, who has had major production problems with the Tesla cars rolling out of his high tech factory, has conceded (in a tweet) that “Humans are underrated.”

I wouldn’t necessarily go that far given the political shenanigans of Trump & Co. but in the grand scheme of things I tend to agree. …

Is AI going the way of gene therapy?

Susan draws a parallel between the AI and medicine discussion with the discussion about genetics and medicine (Note: Links have been removed),

On a somewhat similar note – given the extent to which genetics discourse has that same linear, mechanistic tone [as AI and medicine] – it turns out all this fine talk of using genetics to determine health risk and whatnot is based on nothing more than clever marketing, since a lot of companies are making a lot of money off our belief in DNA. Truth is half the time we don’t even know what a gene is never mind what it actually does; geneticists still can’t agree on how many genes there are in a human genome, as this article in Nature points out.

Along the same lines, I was most amused to read about something called the Super Seniors Study, research following a group of individuals in their 80’s, 90’s and 100’s who seem to be doing really well. Launched in 2002 and headed by Angela Brooks-Wilson, a geneticist at the BC [British Columbia] Cancer Agency and SFU [Simon Fraser University] Chair of biomedical physiology and kinesiology, this longitudinal work is examining possible factors involved in healthy ageing.

Turns out genes had nothing to do with it, the title of the Globe and Mail article notwithstanding. (“Could the DNA of these super seniors hold the secret to healthy aging?” The answer, a resounding “no”, well hidden at the very [end], the part most people wouldn’t even get to.) All of these individuals who were racing about exercising and working part time and living the kind of life that makes one tired just reading about it all had the same “multiple (genetic) factors linked to a high probability of disease”. You know, the gene markers they tell us are “linked” to cancer, heart disease, etc., etc. But these super seniors had all those markers but none of the diseases, demonstrating (pretty strongly) that the so-called genetic links to disease are a load of bunkum. Which (she said modestly) I have been saying for more years than I care to remember. You’re welcome.

The fundamental error in this type of linear thinking is in allowing our metaphors (genes are the “blueprint” of life) and propensity towards social ideas of determinism to overtake common sense. Biological and physiological systems are not static; they respond to and change to life in its entirety, whether it’s [from] diet and nutrition to toxic or traumatic insults. Immunity alters, endocrinology changes – even how we think and feel affects the efficiency and effectiveness of physiology. Which explains why as we age we become increasingly dissimilar.

If you have the time, I encourage you to read Susan’s comments in their entirety.

Scientific certainties

Following on with genetics, gene therapy dreams, and the complexity of biology, the June 19, 2018 Nature article by Cassandra Willyard (mentioned in Susan’s posting) highlights an aspect of scientific research not often mentioned in public,

One of the earliest attempts to estimate the number of genes in the human genome involved tipsy geneticists, a bar in Cold Spring Harbor, New York, and pure guesswork.

That was in 2000, when a draft human genome sequence was still in the works; geneticists were running a sweepstake on how many genes humans have, and wagers ranged from tens of thousands to hundreds of thousands. Almost two decades later, scientists armed with real data still can’t agree on the number — a knowledge gap that they say hampers efforts to spot disease-related mutations.

In 2000, with the genomics community abuzz over the question of how many human genes would be found, Ewan Birney launched the GeneSweep contest. Birney, now co-director of the European Bioinformatics Institute (EBI) in Hinxton, UK, took the first bets at a bar during an annual genetics meeting, and the contest eventually attracted more than 1,000 entries and a US$3,000 jackpot. Bets on the number of genes ranged from more than 312,000 to just under 26,000, with an average of around 40,000. These days, the span of estimates has shrunk — with most now between 19,000 and 22,000 — but there is still disagreement (See ‘Gene Tally’).

… the inconsistencies in the number of genes from database to database are problematic for researchers, Pruitt says. “People want one answer,” she [Kim Pruitt, a genome researcher at the US National Center for Biotechnology Information (NCBI) in Bethesda, Maryland] adds, “but biology is complex.”

I wanted to note that scientists do make guesses and not just with genetics. For example, Gina Mallet’s 2005 book ‘Last Chance to Eat: The Fate of Taste in a Fast Food World’ recounts the story of how good and bad levels of cholesterol were established—the experts made some guesses based on their experience. That said, Willyard’s article details the continuing effort to nail down the number of genes almost 20 years after the human genome project was completed and delves into the problems the scientists have uncovered.

Final comments

In addition to opaque processes with developers/entrepreneurs wanting to maintain their secrets for competitive advantage and in addition to our own poor understanding of the human body (how many genes are there anyway?), there are some major gaps (reflected in AI) in our understanding of various diseases. Angela Lashbrook’s August 16, 2018 article for The Atlantic highlights some issues with skin cancer and the shade of your skin (Note: Links have been removed),

… While fair-skinned people are at the highest risk for contracting skin cancer, the mortality rate for African Americans is considerably higher: Their five-year survival rate is 73 percent, compared with 90 percent for white Americans, according to the American Academy of Dermatology.

As the rates of melanoma for all Americans continue a 30-year climb, dermatologists have begun exploring new technologies to try to reverse this deadly trend—including artificial intelligence. There’s been a growing hope in the field that using machine-learning algorithms to diagnose skin cancers and other skin issues could make for more efficient doctor visits and increased, reliable diagnoses. The earliest results are promising—but also potentially dangerous for darker-skinned patients.

… Avery Smith, … a software engineer in Baltimore, Maryland, co-authored a paper in JAMA [Journal of the American Medical Association] Dermatology that warns of the potential racial disparities that could come from relying on machine learning for skin-cancer screenings. Smith’s co-author, Adewole Adamson of the University of Texas at Austin, has conducted multiple studies on demographic imbalances in dermatology. “African Americans have the highest mortality rate [for skin cancer], and doctors aren’t trained on that particular skin type,” Smith told me over the phone. “When I came across the machine-learning software, one of the first things I thought was how it will perform on black people.”

Recently, a study that tested machine-learning software in dermatology, conducted by a group of researchers primarily out of Germany, found that “deep-learning convolutional neural networks,” or CNN, detected potentially cancerous skin lesions better than the 58 dermatologists included in the study group. The data used for the study come from the International Skin Imaging Collaboration, or ISIC, an open-source repository of skin images to be used by machine-learning algorithms. Given the rise in melanoma cases in the United States, a machine-learning algorithm that assists dermatologists in diagnosing skin cancer earlier could conceivably save thousands of lives each year.

… Chief among the prohibitive issues, according to Smith and Adamson, is that the data the CNN relies on come from primarily fair-skinned populations in the United States, Australia, and Europe. If the algorithm is basing most of its knowledge on how skin lesions appear on fair skin, then theoretically, lesions on patients of color are less likely to be diagnosed. “If you don’t teach the algorithm with a diverse set of images, then that algorithm won’t work out in the public that is diverse,” says Adamson. “So there’s risk, then, for people with skin of color to fall through the cracks.”

As Adamson and Smith’s paper points out, racial disparities in artificial intelligence and machine learning are not a new issue. Algorithms have mistaken images of black people for gorillas, misunderstood Asians to be blinking when they weren’t, and “judged” only white people to be attractive. An even more dangerous issue, according to the paper, is that decades of clinical research have focused primarily on people with light skin, leaving out marginalized communities whose symptoms may present differently.

The reasons for this exclusion are complex. According to Andrew Alexis, a dermatologist at Mount Sinai, in New York City, and the director of the Skin of Color Center, compounding factors include a lack of medical professionals from marginalized communities, inadequate information about those communities, and socioeconomic barriers to participating in research. “In the absence of a diverse study population that reflects that of the U.S. population, potential safety or efficacy considerations could be missed,” he says.

Adamson agrees, elaborating that with inadequate data, machine learning could misdiagnose people of color with nonexistent skin cancers—or miss them entirely. But he understands why the field of dermatology would surge ahead without demographically complete data. “Part of the problem is that people are in such a rush. This happens with any new tech, whether it’s a new drug or test. Folks see how it can be useful and they go full steam ahead without thinking of potential clinical consequences. …

Improving machine-learning algorithms is far from the only method to ensure that people with darker skin tones are protected against the sun and receive diagnoses earlier, when many cancers are more survivable. According to the Skin Cancer Foundation, 63 percent of African Americans don’t wear sunscreen; both they and many dermatologists are more likely to delay diagnosis and treatment because of the belief that dark skin is adequate protection from the sun’s harmful rays. And due to racial disparities in access to health care in America, African Americans are less likely to get treatment in time.
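
Adamson’s point about diverse training data can be made concrete with a simple audit. The following is a minimal sketch (mine, with invented numbers, not data from any study mentioned above) showing how a classifier can post a healthy overall sensitivity while missing half the melanomas in an under-represented group, which is exactly why per-group metrics matter,

# Toy audit with made-up numbers: records are (skin_tone, truth, prediction),
# where 1 marks a melanoma.
records = (
    [("light", 1, 1)] * 90 + [("light", 1, 0)] * 10 +   # light skin: 90% found
    [("light", 0, 0)] * 880 + [("light", 0, 1)] * 20 +
    [("dark", 1, 1)] * 5 + [("dark", 1, 0)] * 5 +       # dark skin: 50% found
    [("dark", 0, 0)] * 95
)

def sensitivity(rows):
    # fraction of true melanomas the model actually flags
    positives = [r for r in rows if r[1] == 1]
    return sum(1 for r in positives if r[2] == 1) / len(positives)

print(f"overall sensitivity: {sensitivity(records):.2f}")  # ~0.86, looks fine
for group in ("light", "dark"):
    subset = [r for r in records if r[0] == group]
    print(f"{group}-skin sensitivity: {sensitivity(subset):.2f}")  # dark: 0.50

A headline accuracy figure averages the failure away; only the per-group breakdown reveals who falls through the cracks.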

Happy endings

I’ll add one thing to Price’s article, Susan’s posting, and Lashbrook’s article about the issues with AI, certainty, gene therapy, and medicine—the desire for a happy ending prefaced with an easy solution. If the easy solution isn’t possible, accommodations will be made but that happy ending is a must. All disease will disappear and there will be peace on earth. (Nod to Susan Baxter and her many discussions with me about disease processes and happy endings.)

The solutions, for the most part, are seen as technological despite the mountain of evidence suggesting that technology reflects our own imperfect understanding of health and disease and therefore provides, at best, an imperfect solution.

Also, we tend to underestimate just how complex humans are not only in terms of disease and health but also with regard to our skills, understanding, and, perhaps not often enough, our ability to respond appropriately in the moment.

There is much to celebrate in what has been accomplished: no more black death, no more smallpox, hip replacements, pacemakers, organ transplants, and much more. Yes, we should try to improve our medicine. But, maybe alongside the celebration we can welcome AI and other technologies with a lot less hype and a lot more skepticism.

Robot radiologists (artificially intelligent doctors)

Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eye-opening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),

Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Andrew Ng, founder of online learning platform Coursera and former CTO of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.

Musa has offered a compelling argument with lots of links to supporting evidence.

[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]
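
Musa’s first example, CheXNet, is publicly described as a DenseNet-121 convolutional network with one output per thoracic pathology. Here is a minimal sketch of that general setup (my PyTorch illustration, not the Stanford group’s code; the hyperparameters are assumptions and a recent torchvision is assumed),

import torch
import torch.nn as nn
from torchvision import models

NUM_PATHOLOGIES = 14  # the ChestX-ray14 label set; pneumonia is one of them

# ImageNet-pretrained backbone with its 1000-way classifier swapped out
# for a 14-way multi-label head.
model = models.densenet121(weights="DEFAULT")
model.classifier = nn.Linear(model.classifier.in_features, NUM_PATHOLOGIES)

# Each pathology is an independent yes/no question, so the loss is
# binary cross-entropy per output rather than a single softmax.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    # images: (N, 3, 224, 224) chest X-rays; labels: (N, 14) of 0s and 1s
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# At reading time, a sigmoid turns each logit into a per-pathology probability.
with torch.no_grad():
    probs = torch.sigmoid(model(torch.randn(1, 3, 224, 224)))

The point of the design is breadth: one pass over the image scores all fourteen findings at once, which is what makes label-by-label comparisons against radiologists possible.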

And evidence keeps mounting; I just stumbled across this June 30, 2018 news item on Xinhuanet.com,

An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.

The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.

The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.

The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.

To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.

All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.

Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.

“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.

Dr. Lin Yi who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]

AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.

Bian Xiuwu, an academician with the Chinese Academy of Science and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]

Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]

Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.

Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.

China has introduced a series of plans in developing AI applications in recent years.

In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”

The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.

I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

Radiology isn’t the only area where experts might find themselves displaced.

Eye experts

It seems inroads have been made by artificial intelligence (AI) systems into the diagnosis of eye diseases. It got the ‘Fast Company’ treatment (exciting new tech, learn all about it) as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),

An artificial intelligence (AI) system, which can recommend the correct referral decision for more than 50 eye diseases as accurately as experts, has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].

The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,

More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.

Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”

“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.

Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.

To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.

The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.

Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.

The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.

If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.

The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.

Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.

Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”

Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research that can be carried out in the UK combining world leading industry and NIHR/NHS hospital/university partnerships.”

Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”
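
The press release describes two networks: one converts a raw OCT scan into an intermediate map of disease features, and a second reads only that map to produce a referral recommendation with a confidence percentage. Here is my own toy PyTorch illustration of that split (not DeepMind’s code; the layer sizes are placeholders and the referral categories are assumptions),

import torch
import torch.nn as nn

REFERRALS = ["urgent", "semi-urgent", "routine", "observation only"]

class FeatureNet(nn.Module):
    # Stage one: map device-specific raw scans to a device-independent
    # map of disease features.
    def __init__(self, n_features=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_features, 3, padding=1),
        )
    def forward(self, scan):            # scan: (N, 1, depth, height, width)
        return self.conv(scan)          # per-voxel feature scores

class ReferralNet(nn.Module):
    # Stage two: read the feature map, never the raw scan, and pick a pathway.
    def __init__(self, n_features=10, n_classes=len(REFERRALS)):
        super().__init__()
        self.head = nn.Linear(n_features, n_classes)
    def forward(self, feature_map):
        pooled = feature_map.mean(dim=(2, 3, 4))   # (N, n_features)
        return self.head(pooled)

feature_net, referral_net = FeatureNet(), ReferralNet()
scan = torch.randn(1, 1, 16, 64, 64)               # dummy OCT volume
probs = torch.softmax(referral_net(feature_net(scan)), dim=1)
print(f"{REFERRALS[probs.argmax().item()]} ({probs.max().item():.0%} confidence)")

The practical payoff of the split is the device-independence the press release mentions: supporting a new model of OCT scanner means retraining only the first network, while the referral network, and everything it has learned about disease, stays untouched.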

Here’s a link to and a citation for the study,

Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018

This paper is behind a paywall.

And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human.

The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!

I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.

A gripping problem: tree frogs lead the way

Courtesy: University of Glasgow

At least once a year, there must be a frog posting here (ETA: July 31, 2018 at 1640 hours: unusually, this is my second ‘frog’ posting in one week; my July 26, 2018 posting concerns a very desperate frog, Romeo). Prior to Romeo, this March 15, 2018 news item on phys.org tickled my fancy,

Scientists researching how tree frogs climb have discovered that a unique combination of adhesion and grip gives them perfect technique.

The new research, led by the University of Glasgow and published today [March 15, 2018] in the Journal of Experimental Biology, could have implications for areas of science such as robotics, as well as the production of climbing equipment and even tyre manufacture.

A March 15, 2018 University of Glasgow press release, which originated the news item, provides a little more detail,

Researchers found that, using their fluid-filled adhesive toe pads, tree frogs are able to grip to surfaces to climb. When surfaces aren’t smooth enough to allow adhesion, researchers found that the frogs relied on their long limbs to grip around objects.

University of Glasgow scientists Iain Hill and Jon Barnes gave the tree frogs a series of narrow and wide cylinders to climb. The research team found that on the narrow cylinders the frogs used their grip and adhesion pads, allowing them to climb the obstacle at speed. Wider cylinders were too large for the frogs to grip, so they could only climb more slowly using their suction adhesive pads.

When the cylinders were coated in sandpaper, preventing adhesion, the frogs could only climb the narrow ones slowly, using their grip. They were not able to climb the wider cylinders covered in sandpaper as they couldn’t use their grip or adhesion.

Dr Barnes said: “I have worked on tree frog research for many years and I find them fascinating. Work on tree frogs has been of interest to industry and other areas of science in the past, since their climbing abilities can offer us insights into the most efficient way to climb and stick to surfaces.

“Climbing robots, for instance, need ways to stick, they could be based either on gecko climbing or tree frog climbing. This research demonstrates how a good climbing robot would need to combine gripping and adhesion to climb more efficiently.”

The study, “The biomechanics of tree frogs climbing curved surfaces: a gripping problem” is published in the Journal of Experimental Biology. The work was funded by the Royal Society, London and by grants from the National Natural Science Foundation of China and the Natural Science Foundation of Jiangsu Province.

Here’s a link to and a citation for the paper (I love the pun in the title),

The biomechanics of tree frogs climbing curved surfaces: a gripping problem by Iain D. C. Hill, Benzheng Dong, W. Jon. P. Barnes, Aihong Ji, Thomas Endlein. Journal of Experimental Biology 2018 : jeb.168179 doi: 10.1242/jeb.168179 Published 19 January 2018

This paper is behind a paywall.

AI x 2: the Amnesty International and Artificial Intelligence story

Amnesty International and artificial intelligence seem like an unexpected combination but it all makes sense when you read a June 13, 2018 article by Steven Melendez for Fast Company (Note: Links have been removed),

If companies working on artificial intelligence don’t take steps to safeguard human rights, “nightmare scenarios” could unfold, warns Rasha Abdul Rahim, an arms control and artificial intelligence researcher at Amnesty International in a blog post. Those scenarios could involve armed, autonomous systems choosing military targets with little human oversight, or discrimination caused by biased algorithms, she warns.

Rahim pointed at recent reports of Google’s involvement in the Pentagon’s Project Maven, which involves harnessing AI image recognition technology to rapidly process photos taken by drones. Google recently unveiled new AI ethics policies and has said it won’t continue with the project once its current contract expires next year after high-profile employee dissent over the project. …

“Compliance with the laws of war requires human judgement [sic] –the ability to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of an attack,” Rahim writes. “Machines and algorithms cannot recreate these human skills, and nor can they negotiate, produce empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment, and use of fully autonomous weapon systems.”

Rasha Abdul Rahim’s June 14, 2018 posting (I’m putting the discrepancy in publication dates down to timezone differences) on the Amnesty International website (Note: Links have been removed),

Last week [June 7, 2018] Google released a set of principles to govern its development of AI technologies. They include a broad commitment not to design or deploy AI in weaponry, and come in the wake of the company’s announcement that it will not renew its existing contract for Project Maven, the US Department of Defense’s AI initiative, when it expires in 2019.

The fact that Google maintains its existing Project Maven contract for now raises an important question. Does Google consider that continuing to provide AI technology to the US government’s drone programme is in line with its new principles? Project Maven is a litmus test that allows us to see what Google’s new principles mean in practice.

As details of the US drone programme are shrouded in secrecy, it is unclear precisely what role Google plays in Project Maven. What we do know is that the US drone programme, under successive administrations, has been beset by credible allegations of unlawful killings and civilian casualties. The cooperation of Google, in any capacity, is extremely troubling and could potentially implicate it in unlawful strikes.

As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent. Machine learning, and AI more broadly, impact a range of human rights including privacy, freedom of expression and the right to life. It is partly in the hands of companies like Google to safeguard these rights in relation to their operations – for us and for future generations. If they don’t, some nightmare scenarios could unfold.

Warfare has already changed dramatically in recent years – a couple of decades ago the idea of remote controlled bomber planes would have seemed like science fiction. While the drones currently in use are still controlled by humans, China, France, Israel, Russia, South Korea, the UK and the US are all known to be developing military robots which are getting smaller and more autonomous.

For example, the UK is developing a number of autonomous systems, including the BAE [Systems] Taranis, an unmanned combat aircraft system which can fly in autonomous mode and automatically identify a target within a programmed area. Kalashnikov, the Russian arms manufacturer, is developing a fully automated, high-calibre gun that uses artificial neural networks to choose targets. The US Army Research Laboratory in Maryland, in collaboration with BAE Systems and several academic institutions, has been developing micro drones which weigh less than 30 grams, as well as pocket-sized robots that can hop or crawl.

Of course, it’s not just in conflict zones that AI is threatening human rights. Machine learning is already being used by governments in a wide range of contexts that directly impact people’s lives, including policing [emphasis mine], welfare systems, criminal justice and healthcare. Some US courts use algorithms to predict future behaviour of defendants and determine their sentence lengths accordingly. The potential for this approach to reinforce power structures, discrimination or inequalities is huge.

In July 2017, the Vancouver Police Department announced its use of predictive policing software, the first department in Canada to make use of the technology. My Nov. 23, 2017 posting featured the announcement.

The almost too aptly named Campaign to Stop Killer Robots can be found here. Their About Us page provides a brief history,

Formed by the following non-governmental organizations (NGOs) at a meeting in New York on 19 October 2012 and launched in London in April 2013, the Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons. See the Chronology charting our major actions and achievements to date.

Steering Committee

The Steering Committee is the campaign’s principal leadership and decision-making body. It is comprised of five international NGOs, a regional NGO network, and four national NGOs that work internationally:

Human Rights Watch
Article 36
Association for Aid and Relief Japan
International Committee for Robot Arms Control
Mines Action Canada
Nobel Women’s Initiative
PAX (formerly known as IKV Pax Christi)
Pugwash Conferences on Science & World Affairs
Seguridad Humana en América Latina y el Caribe (SEHLAC)
Women’s International League for Peace and Freedom

For more information, see this Overview. A Terms of Reference is also available on request, detailing the committee’s selection process, mandate, decision-making, meetings and communication, and expected commitments.

For anyone who may be interested in joining Amnesty International, go here.

AI (artificial intelligence) for Good Global Summit from May 15 – 17, 2018 in Geneva, Switzerland: details and an interview with Frederic Werner

With all the talk about artificial intelligence (AI), a lot more attention seems to be paid to apocalyptic scenarios: loss of jobs, financial hardship, loss of personal agency and privacy, and more with all of these impacts being described as global. Still, there are some folks who are considering and working on ‘AI for good’.

If you’d asked me, the International Telecommunication Union (ITU) would not have been my first guess (my choice would have been the United Nations Educational, Scientific and Cultural Organization [UNESCO]) as an agency likely to host the 2018 AI for Good Global Summit. But, it turns out the ITU is a UN (United Nations) agency and, according to its Wikipedia entry, it’s an intergovernmental public-private partnership, which may explain the nature of the participants in the upcoming summit.

The news

First, there’s a May 4, 2018 ITU media advisory (received via email or you can find the full media advisory here) about the upcoming summit,

Artificial Intelligence (AI) is now widely identified as being able to address the greatest challenges facing humanity – supporting innovation in fields ranging from crisis management and healthcare to smart cities and communications networking.

The second annual ‘AI for Good Global Summit’ will take place 15-17 May [2018] in Geneva, and seeks to leverage AI to accelerate progress towards the United Nations’ Sustainable Development Goals and ultimately benefit humanity.

WHAT: Global event to advance ‘AI for Good’ with the participation of internationally recognized AI experts. The programme will include interactive high-level panels, while ‘AI Breakthrough Teams’ will propose AI strategies able to create impact in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society – through interactive sessions. The summit will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

A special demo & exhibit track will feature innovative applications of AI designed to: protect women from sexual violence, avoid infant crib deaths, end child abuse, predict oral cancer, and improve mental health treatments for depression – as well as interactive robots including: Alice, a Dutch invention designed to support the aged; iCub, an open-source robot; and Sophia, the humanoid AI robot.

WHEN: 15-17 May 2018, beginning daily at 9 AM

WHERE: ITU Headquarters, 2 Rue de Varembé, Geneva, Switzerland (Please note: entrance to ITU is now limited for all visitors to the Montbrillant building entrance only on rue Varembé).

WHO: Confirmed participants to date include expert representatives from: Association for Computing Machinery, Bill and Melinda Gates Foundation, Cambridge University, Carnegie Mellon, Chan Zuckerberg Initiative, Consumer Trade Association, Facebook, Fraunhofer, Google, Harvard University, IBM Watson, IEEE, Intellectual Ventures, ITU, Microsoft, Massachusetts Institute of Technology (MIT), Partnership on AI, Planet Labs, Shenzhen Open Innovation Lab, University of California at Berkeley, University of Tokyo, XPRIZE Foundation, Yale University – and the participation of “Sophia” the humanoid robot and “iCub” the EU open source robotcub.

The interview

Frederic Werner, Senior Communications Officer at the International Telecommunication Union and** one of the organizers of the AI for Good Global Summit 2018 kindly took the time to speak to me and provide a few more details about the upcoming event.

Werner noted that the 2018 event grew out of a much smaller 2017 ‘workshop’, the first of its kind, about beneficial AI; this year the summit has ballooned in size to 91 countries (about 15 participants are expected from Canada), 32 UN agencies, and substantive representation from the private sector. Dr. Yoshua Bengio of the University of Montreal (Université de Montréal) was a featured speaker at the 2017 event.

“This year, we’re focused on action-oriented projects that will help us reach our Sustainable Development Goals (SDGs) by 2030. We’re looking at near-term practical AI applications,” says Werner. “We’re matchmaking problem-owners and solution-owners.”

Academics, industry professionals, government officials, and representatives from UN agencies are gathering to work on four tracks/themes (the ‘breakthrough team’ themes described in the summit webpage excerpt further down in this posting).

In advance of this meeting, the group launched an AI repository (an action item from the 2017 meeting) on April 25, 2018, inviting people to list their AI projects (from the ITU’s April 25, 2018 AI repository news announcement),

ITU has just launched an AI Repository where anyone working in the field of artificial intelligence (AI) can contribute key information about how to leverage AI to help solve humanity’s greatest challenges.

This is the only global repository that identifies AI-related projects, research initiatives, think-tanks and organizations that aim to accelerate progress on the 17 United Nations’ Sustainable Development Goals (SDGs).

To submit a project, just press ‘Submit’ on the AI Repository site and fill in the online questionnaire, providing all relevant details of your project. You will also be asked to map your project to the relevant World Summit on the Information Society (WSIS) action lines and the SDGs. Approved projects will be officially registered in the repository database.

Benefits of participation on the AI Repository include:

WSIS Prizes recognize individuals, governments, civil society, local, regional and international agencies, research institutions and private-sector companies for outstanding success in implementing development oriented strategies that leverage the power of AI and ICTs.

Creating the AI Repository was one of the action items of last year’s AI for Good Global Summit.

We are looking forward to your submissions.

If you have any questions, please send an email to: ai@itu.int

“Your project won’t be visible immediately as we have to vet the submissions to weed out spam-type material and projects that are not in line with our goals,” says Werner. That said, there are already 29 projects in the repository. As you might expect, the UK, China, and US are in the repository but also represented are Egypt, Uganda, Belarus, Serbia, Peru, Italy, and other countries not commonly cited when discussing AI research.

Werner also pointed out in response to my surprise over the ITU’s role with regard to this AI initiative that the ITU is the only UN agency which has 192* member states (countries), 150 universities, and over 700 industry members as well as other member entities, which gives them tremendous breadth of reach. As well, the organization, founded originally in 1865 as the International Telegraph Union, has extensive experience with global standardization in the information technology and telecommunications industries. (See more in their Wikipedia entry.)

Finally

There is a bit more about the summit on the ITU’s AI for Good Global Summit 2018 webpage,

The 2nd edition of the AI for Good Global Summit will be organized by ITU in Geneva on 15-17 May 2018, in partnership with XPRIZE Foundation, the global leader in incentivized prize competitions, the Association for Computing Machinery (ACM) and sister United Nations agencies including UNESCO, UNICEF, UNCTAD, UNIDO, Global Pulse, UNICRI, UNODA, UNIDIR, UNODC, WFP, IFAD, UNAIDS, WIPO, ILO, UNITAR, UNOPS, OHCHR, UN University, WHO, UNEP, ICAO, UNDP, The World Bank, UN DESA, CTBTO, UNISDR, UNOG, UNOOSA, UNFPA, UNECE, UNDPA, and UNHCR.

The AI for Good series is the leading United Nations platform for dialogue on AI. The action​​-oriented 2018 summit will identify practical applications of AI and supporting strategies to improve the quality and sustainability of life on our planet. The summit will continue to formulate strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

While the 2017 summit sparked the first ever inclusive global dialogue on beneficial AI, the action-oriented 2018 summit will focus on impactful AI solutions able to yield long-term benefits and help achieve the Sustainable Development Goals. ‘Breakthrough teams’ will demonstrate the potential of AI to map poverty and aid with natural disasters using satellite imagery, how AI could assist the delivery of citizen-centric services in smart cities, and new opportunities for AI to help achieve Universal Health Coverage, and finally to help achieve transparency and explainability in AI algorithms.

Teams will propose impactful AI strategies able to be enacted in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society. Strategies will be evaluated by the mentors according to their feasibility and scalability, potential to address truly global challenges, degree of supporting advocacy, and applicability to market failures beyond the scope of government and industry. The exercise will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

“As the UN specialized agency for information and communication technologies, ITU is well placed to guide AI innovation towards the achievement of the UN Sustainable Development Goals. We are providing a neutral platform for international dialogue aimed at building a common understanding of the capabilities of emerging AI technologies.” Houlin Zhao, Secretary-General of ITU

Should you be close to Geneva, it seems that registration is still open. Just go to the ITU’s AI for Good Global Summit 2018 webpage, scroll the page down to ‘Documentation’ and you will find a link to the invitation and a link to online registration. Participation is free but I expect that you are responsible for your travel and accommodation costs.

For anyone unable to attend in person, the summit will be livestreamed (webcast in real time) and you can watch the sessions by following the link below,

https://www.itu.int/en/ITU-T/AI/2018/Pages/webcast.aspx

For those of us on the West Coast of Canada and other parts distant from Geneva, you will want to take the nine-hour difference between Geneva (Switzerland) and here into account when viewing the proceedings. If you can’t manage the time difference, the sessions are being recorded and will be posted at a later date.

*’132 member states’ corrected to ‘192 member states’ on May 11, 2018 at 1500 hours PDT.

*Redundant ‘and’ removed on July 19, 2018.

Socially responsible AI—it’s time says University of Manchester (UK) researchers

A May 10, 2018 news item on ScienceDaily describes a report on the ‘fourth industrial revolution’ being released by the University of Manchester,

The development of new Artificial Intelligence (AI) technology is often subject to bias, and the resulting systems can be discriminatory, meaning more should be done by policymakers to ensure its development is democratic and socially responsible.

This is according to Dr Barbara Ribeiro of Manchester Institute of Innovation Research at The University of Manchester, in On AI and Robotics: Developing policy for the Fourth Industrial Revolution, a new policy report on the role of AI and Robotics in society, being published today [May 10, 2018].

Interestingly, the US White House is hosting a summit on AI today, May 10, 2018, according to a May 8, 2018 article by Danny Crichton for TechCrunch (Note: Links have been removed),

Now, it appears the White House itself is getting involved in bringing together key American stakeholders to discuss AI and those opportunities and challenges. …

Among the confirmed guests are Facebook’s Jerome Pesenti, Amazon’s Rohit Prasad, and Intel’s CEO Brian Krzanich. While the event has many tech companies present, a total of 38 companies are expected to be in attendance including United Airlines and Ford.

AI policy has been top-of-mind for many policymakers around the world. French President Emmanuel Macron has announced a comprehensive national AI strategy, as has Canada, which has put together a research fund and a set of programs to attempt to build on the success of notable local AI researchers such as University of Toronto professor George Hinton [sic; Geoffrey Hinton], who is a major figure in deep learning.

But it is China that has increasingly drawn the attention and concern of U.S. policymakers. The country and its venture capitalists are outlaying billions of dollars to invest in the AI industry, and it has made leading in artificial intelligence one of the nation’s top priorities through its Made in China 2025 program and other reports. …

In comparison, the United States has been remarkably uncoordinated when it comes to AI. …

That lack of engagement from policymakers has been fine — after all, the United States is the world leader in AI research. But with other nations pouring resources and talent into the space, DC policymakers are worried that the U.S. could suddenly find itself behind the frontier of research in the space, with particular repercussions for the defense industry.

Interesting contrast: do we take time to consider the implications or do we engage in a race?

While it’s becoming fashionable to dismiss dichotomous questions of this nature, the two approaches (competition and reflection) are not that compatible and it does seem to be an either/or proposition.

A May 10, 2018 University of Manchester press release (also on EurekAlert), which originated the news item, expands on the theme of responsibility and AI,

Dr Ribeiro adds that because investment in AI will essentially be paid for by taxpayers in the long term, policymakers need to make sure that the benefits of such technologies are fairly distributed throughout society.

She says: “Ensuring social justice in AI development is essential. AI technologies rely on big data and the use of algorithms, which influence decision-making in public life and on matters such as social welfare, public safety and urban planning.”

“In these ‘data-driven’ decision-making processes some social groups may be excluded, either because they lack access to devices necessary to participate or because the selected datasets do not consider the needs, preferences and interests of marginalised and disadvantaged people.”

On AI and Robotics: Developing policy for the Fourth Industrial Revolution is a comprehensive report written, developed and published by Policy@Manchester with leading experts and academics from across the University.

The publication is designed to help employers, regulators and policymakers understand the potential effects of AI in areas such as industry, healthcare, research and international policy.

However, the report doesn’t just focus on AI. It also looks at robotics, explaining the differences and similarities between the two separate areas of research and development (R&D) and the challenges policymakers face with each.

Professor Anna Scaife, Co-Director of the University’s Policy@Manchester team, explains: “Although the challenges that companies and policymakers are facing with respect to AI and robotic systems are similar in many ways, these are two entirely separate technologies – something which is often misunderstood, not just by the general public, but policymakers and employers too. This is something that has to be addressed.”

One particular area the report highlights where robotics can have a positive impact is in the world of hazardous working environments, such as nuclear decommissioning and clean-up.

Professor Barry Lennox, Professor of Applied Control and Head of the UOM Robotics Group, adds: “The transfer of robotics technology into industry, and in particular the nuclear industry, requires cultural and societal changes as well as technological advances.

“It is really important that regulators are aware of what robotic technology is and is not capable of doing today, as well as understanding what the technology might be capable of doing over the next 5 years.”

The report also highlights the importance of big data and AI in healthcare, for example in the fight against antimicrobial resistance (AMR).

Lord Jim O’Neill, Honorary Professor of Economics at The University of Manchester and Chair of the Review on Antimicrobial Resistance explains: “An important example of this is the international effort to limit the spread of antimicrobial resistance (AMR). The AMR Review gave 27 specific recommendations covering 10 broad areas, which became known as the ‘10 Commandments’.

“All 10 are necessary, and none are sufficient on their own, but if there is one that I find myself increasingly believing is a permanent game-changer, it is state of the art diagnostics. We need a ‘Google for doctors’ to reduce the rate of over prescription.”

The versatile nature of AI and robotics is leading many experts to predict that the technologies will have a significant impact on a wide variety of fields in the coming years. Policy@Manchester hopes that the On AI and Robotics report will contribute to helping policymakers, industry stakeholders and regulators better understand the range of issues they will face as the technologies play ever greater roles in our everyday lives.

As far as I can tell, the report has been designed for online viewing only. There are none of the markers (imprint date, publisher, etc.) that I expect to see on a print document. There is no bibliography or list of references but there are links to outside sources throughout the document.

It’s an interesting approach to publishing a report that calls for social justice, especially since the issue of ‘trust’ is increasingly being emphasized where all AI is concerned. With regard to this report, I’m not sure I can trust it. With a print document or a PDF I have markers. I can examine the index, the bibliography, etc. and determine if this material has covered the subject area with reference to well known authorities. It’s much harder to do that with this report. As well, this ‘souped up’ document also looks like it might be easy to change something without my knowledge. With a print or PDF version, I can compare the documents but not with this one.

The Royal Bank of Canada reports ‘Humans wanted’ and some thoughts on the future of work, robots, and artificial intelligence

It seems the Royal Bank of Canada (RBC or Royal Bank) wants to weigh in on, and influence, what new technologies will bring us and how they will affect our working lives. (I will be offering my critiques of the whole thing.)

Launch yourself into the future (if you’re a youth)

“I’m not planning on being replaced by a robot.” That’s the first line of text you’ll see if you go to the Royal Bank of Canada’s new Future Launch web space and latest marketing campaign and investment.

This whole endeavour is aimed at ‘youth’ and represents a $500M investment. Of course, that money will be invested over a 10-year period, which works out to $50M per year and doesn’t seem quite so munificent given how much money Canadian banks make (from a March 1, 2017 article by Don Pittis for the Canadian Broadcasting Corporation [CBC] news website),

Yesterday [February 28, 2017] the Bank of Montreal [BMO] said it had made about $1.5 billion in three months.

That may be hard to put in context until you hear that it is an increase in profit of nearly 40 per cent from the same period last year and dramatically higher than stock watchers had been expecting.

Not all the banks have done as well as BMO this time. The Royal Bank’s profits were up 24 per cent at $3 billion. [emphasis mine] CIBC [Canadian Imperial Bank of Commerce] profits were up 13 per cent. TD [Toronto Dominion] releases its numbers tomorrow.

Those numbers would put the RBC on track for a profit of roughly $12B in 2017. This means the $500M represents approximately 4.2% of a single year’s profits and, disbursed over a 10-year period, works out to roughly 0.42% per year, less than half of one percent. Paradoxically, it’s a lot of money and it’s not that much money.
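
For anyone who wants to check that back-of-the-envelope math, here is a quick sanity check in Python (my own hypothetical snippet, not anything RBC published; the ~$12B annual figure simply extrapolates the quoted $3B quarterly profit across four quarters):

```python
# Back-of-the-envelope check of the Future Launch figures discussed above.
# Assumption: the $3B quarterly profit reported in early 2017 holds for
# all four quarters of the year.
quarterly_profit = 3.0e9
annual_profit = quarterly_profit * 4      # ~$12B per year
pledge_total = 500e6                      # $500M pledged over 10 years
pledge_per_year = pledge_total / 10       # $50M per year

print(f"Pledge vs. one year's profit:    {pledge_total / annual_profit:.1%}")    # ~4.2%
print(f"Yearly outlay vs. yearly profit: {pledge_per_year / annual_profit:.2%}") # ~0.42%
```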

Advertising awareness

First, there was some advertising (in Vancouver at least),

[downloaded from http://flinflononline.com/local-news/356505]

You’ll notice she has what could be described as a ‘halo’. Is she an angel or, perhaps, she’s an RBC angel? After all, yellow and gold are closely associated as colours and RBC sports a partially yellow logo. As well, the model is wearing a blue denim jacket, RBC’s other logo colour.

Her ‘halo’ is intact but those bands of colour bend a bit and could be described as ‘rainbow-like’ bringing to mind ‘pots of gold’ at the end of the rainbow.  Free association is great fun and allows people to ascribe multiple and/or overlapping ideas and stories to the advertising. For example, people who might not approve of imagery that hearkens to religious art might have an easier time with rainbows and pots of gold. At any rate, none of the elements in images/ads are likely to be happy accidents or coincidence. They are intended to evoke certain associations, e.g., anyone associated with RBC will be blessed with riches.

The timing is deliberate, too, just before Easter 2018 (April 1), suggesting to some of us that even when the robots arrive destroying the past, youth will rise up (resurrection) for a new future. Or, if you prefer, Passover and its attendant themes of being spared and moving to the Promised Land.

Enough with the semiotic analysis and onto campaign details.

Humans Wanted: an RBC report

It seems the precursor to the Future Launch campaign is an RBC report, ‘Humans Wanted’, which is itself the outcome of still earlier work such as this Brookfield Institute for Innovation + Entrepreneurship (BII+E) report, Future-proof: Preparing young Canadians for the future of work, March 2017 (authors: Creig Lamb and Sarah Doyle), which features a quote from RBC’s President and CEO (Chief Executive Officer) David McKay,

“Canada’s future prosperity and success will rely on us harnessing the innovation of our entire talent pool. A huge part of our success will depend on how well we integrate this next generation of Canadians into the workforce. Their confidence, optimism and inspiration could be the key to helping us reimagine traditional business models, products and ways of working.”  David McKay, President and CEO, RBC

There are a number of major trends that have the potential to shape the future of work, from climate change and resource scarcity to demographic shifts resulting from an aging population and immigration. This report focuses on the need to prepare Canada’s youth for a future where a great number of jobs will be rapidly created, altered or made obsolete by technology.

Successive waves of technological advancements have rocked global economies for centuries, reconfiguring the labour force and giving rise to new economic opportunities with each wave. Modern advances, including artificial intelligence and robotics, once again have the potential to transform the economy, perhaps more rapidly and more dramatically than ever before. As past pillars of Canada’s economic growth become less reliable, harnessing technology and innovation will become increasingly important in driving productivity and growth. 1, 2, 3

… (p. 2 print; p. 4 PDF)

The Brookfield Institute (at Ryerson University in Toronto, Ontario, Canada) report is worth reading if for no other reason than its Endnotes. Unlike the RBC materials, you can find the source for the information in the Brookfield report.

After Brookfield, there was the RBC Future Launch Youth Forums 2017: What We Learned document (dated October 13, 2017 according to ‘View Page Info’),

In this rapidly changing world, there’s a new reality when it comes to work. A degree or diploma no longer guarantees a job, and some of the positions, skills and trades of today won’t exist – or be relevant – in the future.

Through an unprecedented 10-year, $500 million commitment, RBC Future Launch™ is focused on driving real change and preparing today’s young people for the future world of work, helping them access the skills, job experience and networks that will enable their success.

At the beginning of this 10-year journey RBC® wanted to go beyond research and expert reports to better understand the regional issues facing youth across Canada and to hear directly from young people and organizations that work with them. From November 2016 to May 2017, the RBC Future Launch team held 15 youth forums across the country, bringing together over 430 partners, including young people, to uncover ideas and talk through solutions to address the workforce gaps Canada’s youth face today.

Finally, a March 26, 2018 RBC news release announces the RBC report: ‘Humans Wanted – How Canadian youth can thrive in the age of disruption’,

Automation to impact at least 50% of Canadian jobs in the next decade: RBC research

Human intelligence and intuition critical for young people and jobs of the future

  • Being ‘human’ will ensure resiliency in an era of disruption and artificial intelligence
  • Skills mobility – the ability to move from one job to another – will become a new competitive advantage

TORONTO, March 26, 2018 – A new RBC research paper, Humans Wanted – How Canadian youth can thrive in the age of disruption, has revealed that 50% of Canadian jobs will be disrupted by automation in the next 10 years.

As a result of this disruption, Canada’s Gen Mobile – young people who are currently transitioning from education to employment – are unprepared for the rapidly changing workplace. With 4 million Canadian youth entering the workforce over the next decade, and the shift from a jobs economy to a skills economy, the research indicates young people will need a portfolio of “human skills” to remain competitive and resilient in the labour market.

“Canada is at a historic cross-roads – we have the largest generation of young people coming into the workforce at the very same time technology is starting to impact most jobs in the country,” said Dave McKay, President and CEO, RBC. “Canada is on the brink of a skills revolution and we have a responsibility to prepare young people for the opportunities and ambiguities of the future.”

“There is a changing demand for skills,” said John Stackhouse, Senior Vice-President, RBC. “According to our findings, if employers and the next generation of employees focus on foundational ‘human skills’, they’ll be better able to navigate a new age of career mobility as technology continues to reshape every aspect of the world around us.”

Key Findings:

  • Canada’s economy is on target to add 2.4 million jobs over the next four years, virtually all of which will require a different mix of skills.
  • A growing demand for “human skills” will grow across all job sectors and include: critical thinking, co-ordination, social perceptiveness, active listening and complex problem solving.
  • Rather than a nation of coders, digital literacy – the ability to understand digital items, digital technologies or the Internet fluently – will be necessary for all new jobs.
  • Canada’s education system, training programs and labour market initiatives are inadequately designed to help Canadian youth navigate the new skills economy, resulting in roughly half a million 15-29 year olds who are unemployed and another quarter of a million who are working part-time involuntarily.
  • Canadian employers are generally not prepared, through hiring, training or retraining, to recruit and develop the skills needed to ensure their organizations remain competitive in the digital economy.

“As digital and machine technology advances, the next generation of Canadians will need to be more adaptive, creative and collaborative, adding and refining skills to keep pace with a world of work undergoing profound change,” said McKay. “Canada’s future prosperity depends on getting a few big things right and that’s why we’ve introduced RBC Future Launch.”

RBC Future Launch is a decade-long commitment to help Canadian youth prepare for the jobs of tomorrow. RBC is committed to acting as a catalyst for change, bringing government, educators, the public sector and not-for-profits together to co-create solutions to help young people better prepare for the future of work through “human skills” development, networking and work experience.

Top recommendations from the report include:

  • A national review of post-secondary education programs to assess their focus on “human skills” including global competencies
  • A national target of 100% work-integrated learning, to ensure every undergraduate student has the opportunity for an apprenticeship, internship, co-op placement or other meaningful experiential placement
  • Standardization of labour market information across all provinces and regions, and a partnership with the private sector to move skills and jobs information to real-time, interactive platforms
  • The introduction of a national initiative to help employers measure foundational skills and incorporate them in recruiting, hiring and training practices

Join the conversation with Dave McKay and John Stackhouse on Wednesday, March 28 [2018] at 9:00 a.m. to 10:00 a.m. EDT at RBC Disruptors on Facebook Live.

Click here to read: Humans Wanted – How Canadian youth can thrive in the age of disruption.

About the Report
RBC Economics amassed a database of 300 occupations and drilled into the skills required to perform them now and projected into the future. The study groups the Canadian economy into six major clusters based on skillsets as opposed to traditional classifications and sectors. This cluster model is designed to illustrate the ease of transition between dissimilar jobs as well as the relevance of current skills to jobs of the future.

Six Clusters
Doers: Emphasis on basic skills
Transition: Greenhouse worker to crane operator
High Probability of Disruption

Crafters: Medium technical skills; low in management skills
Transition: Farmer to plumber
Very High Probability of Disruption

Technicians: High in technical skills
Transition: Car mechanic to electrician
Moderate Probability of Disruption

Facilitators: Emphasis on emotional intelligence
Transition: Dental assistant to graphic designer
Moderate Probability of Disruption

Providers: High in Analytical Skills
Transition: Real estate agent to police officer
Low Probability of Disruption

Solvers: Emphasis on management skills and critical thinking
Transition: Mathematician to software engineer
Minimal Probability of Disruption

About RBC
Royal Bank of Canada is a global financial institution with a purpose-driven, principles-led approach to delivering leading performance. Our success comes from the 81,000+ employees who bring our vision, values and strategy to life so we can help our clients thrive and communities prosper. As Canada’s biggest bank, and one of the largest in the world based on market capitalization, we have a diversified business model with a focus on innovation and providing exceptional experiences to our 16 million clients in Canada, the U.S. and 34 other countries. Learn more at rbc.com.‎

We are proud to support a broad range of community initiatives through donations, community investments and employee volunteer activities. See how at http://www.rbc.com/community-sustainability/.

– 30 – 

The report features a lot of bulleted points, airy text (large fonts and lots of space between the lines), inoffensive graphics, and human interest stories illustrating the points made elsewhere in the text.

There is no bibliography or any form of note telling you where to find the sources for the information in the report. The 2.4M jobs mentioned in the news release are also mentioned in the report on p. 16 (PDF), where they are credited in the main body of the text to the EDSC. I’m not up-to-date on my abbreviations but I’m pretty sure it does not stand for East Doncaster Secondary College or East Duplin Soccer Club. I’m betting it stands for Employment and Social Development Canada. All that led to visiting the EDSC website and trying (unsuccessfully) to find the report or data sheet that supplied the figures RBC quoted in its report and news release.

Also, I’m not sure who came up with the ‘crafters’, ‘doers’, ‘technicians’, etc. categories, or how they were developed.

Here’s more from p. 2 of their report,

CANADA, WE HAVE A PROBLEM. [emphasis mine] We’re hurtling towards the 2020s with perfect hindsight, not seeing what’s clearly before us. The next generation is entering the workforce at a time of profound economic, social and technological change. We know it. [emphasis mine] Canada’s youth know it. And we’re not doing enough about it.

RBC wants to change the conversation, [emphasis mine] to help Canadian youth own the 2020s — and beyond. RBC Future Launch is our 10-year commitment to that cause, to help young people prepare for and navigate a new world of work that, we believe, will fundamentally reshape Canada. For the better. If we get a few big things right.

This report, based on a year-long research project, is designed to help that conversation. Our team conducted one of the biggest labour force data projects [emphasis mine] in Canada, and crisscrossed the country to speak with students and workers in their early careers, with educators and policymakers, and with employers in every sector.

We discovered a quiet crisis — of recent graduates who are overqualified for the jobs they’re in, of unemployed youth who weren’t trained for the jobs that are out there, and young Canadians everywhere who feel they aren’t ready for the future of work.

Sarcasm ahead

There’s nothing like starting your remarks with a paraphrased quote from a US movie about the Apollo 13 spacecraft crisis as in, “Houston, we have a problem.” I’ve always preferred Trudeau (senior) and his comment about ‘keeping our noses out of the nation’s bedrooms’. It’s not applicable but it’s more amusing and a Canadian quote to boot.

So, we have a crisis we already know about, but RBC wants to tell us about it anyway (?), and RBC wants to ‘change the conversation’. OK. How does presenting RBC Future Launch change the conversation, especially in light of the fact that the conversation has already been held: “a year-long research project … Our team conducted one of the biggest labour force data projects [emphasis mine] in Canada, and crisscrossed the country to speak with students and workers in their early careers, with educators and policymakers, and with employers in every sector.” Is the proposed change something along the lines of ‘Don’t worry, be happy; RBC has six categories (Doers, Crafters, Technicians, Facilitators, Providers, Solvers) for you’? (Yes, for those who recognized it, I’m referencing Bobby McFerrin’s hit song, Don’t Worry, Be Happy.)

Also, what data did RBC collect and how did they collect it? Could Facebook and other forms of social media have been involved? (My March 29, 2018 posting mentions the latest Facebook data scandal; scroll down about 80% of the way.)

These are the people leading the way and ‘changing the conversation’, as it were, and they can’t present logical, coherent points. What kind of conversation could they possibly have with youth (or anyone else, for that matter)?

And, if part of the problem is that employers are not planning for the future, how does Future Launch ‘change that part of the conversation’?

RBC Future Launch

The Future Launch announcement itself actually predates the report by a year; here it is, in an RBC March 28, 2017 news release (the Canada 150 reference confirms the year),

TORONTO, March 28, 2017 – In an era of unprecedented economic and technological change, RBC is today unveiling its largest-ever commitment to Canada’s future. RBC Future Launch is a 10-year, $500-million initiative to help young people gain access and opportunity to the skills, job experience and career networks needed for the future world of work.

“Tomorrow’s prosperity will depend on today’s young people and their ability to take on a future that’s equally inspiring and unnerving,” said Dave McKay, RBC president and CEO. “We’re sitting at an intersection of history, as a massive generational shift and unprecedented technological revolution come together. And we need to ensure young Canadians are prepared to help take us forward.”

Future Launch is a core part of RBC’s celebration of Canada 150, and is the result of two years of conversations with young Canadians from coast to coast to coast.

“Young people – Canada’s future – have the confidence, optimism and inspiration to reimagine the way our country works,” McKay said. “They just need access to the capabilities and connections to make the 21st century, and their place in it, all it should be.”

Working together with young people, RBC will bring community leaders, industry experts, governments, educators and employers to help design solutions and harness resources for young Canadians to chart a more prosperous and inclusive future.

Over 10 years, RBC Future Launch will invest in areas that help young people learn skills, experience jobs, share knowledge and build resilience. The initiative will address the following critical gaps:

  • A lack of relevant experience. Too many young Canadians miss critical early opportunities because they’re stuck in a cycle of “no experience, no job.” According to the consulting firm McKinsey & Co., 83 per cent of educators believe youth are prepared for the workforce, but only 34 per cent of employers and 44 per cent of young people agree. RBC will continue to help educators and employers develop quality work-integrated learning programs to build a more dynamic bridge between school and work.
  • A lack of relevant skills. Increasingly, young people entering the workforce require a complex set of technical, entrepreneurial and social skills that cannot be attained solely through a formal education. A 2016 report from the World Economic Forum states that by 2020, more than a third of the desired core skill-sets of most occupations will be different from today — if that job still exists. RBC will help ensure young Canadians gain the skills, from critical thinking to coding to creative design, that will help them integrate into the workplace of today, and be more competitive for the jobs of tomorrow.
  • A lack of knowledge networks. Young people are at a disadvantage in the job market if they don’t have an opportunity to learn from others and discover the realities of jobs they’re considering. Many have told RBC that there isn’t enough information on the spectrum of jobs that are available. From social networks to mentoring programs, RBC will harness the vast knowledge and goodwill of Canadians in guiding young people to the opportunities that exist and will exist, across Canada.
  • A lack of future readiness. Many young Canadians know their future will be defined by disruption. A new report, Future-proof: Preparing young Canadians for the future of work, by the Brookfield Institute for Innovation + Entrepreneurship, found that 42 per cent of the Canadian labour force is at a high risk of being affected by automation in the next 10 to 20 years. Young Canadians are okay with that: they want to be the disruptors and make the future workforce more creative and productive. RBC will help to create opportunities, through our education system, workplaces and communities at large to help young Canadians retool, rethink and rebuild as the age of disruption takes hold.

By helping young people unlock their potential and launch their careers, RBC can assist them with building a stronger future for themselves, and a more prosperous Canada for all. RBC created The Launching Careers Playbook, an interactive, digital resource focused on enabling young people to reach their full potential through three distinct modules: I am starting my career; I manage interns and I create internship programs. The Playbook shares the design principles, practices, and learnings captured from the RBC Career Launch Program over three years, as well as the research and feedback RBC has received from young people and their managers.

More information on RBC Future Launch can be found at www.rbc.com/futurelaunch.

Weirdly, this news release is the only document that gives you sources for some of RBC’s information. Should you be so inclined, you can check the original reports cited in the news release and decide whether you agree with the conclusions the RBC people drew from them.

Cynicism ahead

They are planning to change the conversation, are they? I can’t help wondering what return RBC is expecting to make on its investment ($500M over 10 years). The RBC logo is prominently displayed not only on the launch page but in several of the subtopics listed on the page.

There appears to be some very good and helpful information, although much of it leads you back to the bank for one reason or another. For example, if you’re planning to become an entrepreneur (and there is serious pressure from the government of Canada on this generation to become precisely that), it’s very handy that you have easy access to RBC from any of the Future Launch pages. As well, you can easily apply for a job at RBC, or get a loan from it, after you’ve done some of the exercises on the website and possibly given RBC a lot of data about yourself.

For anyone who believes I’m being harsh on the bank, you might want to check out a March 15, 2017 article by Erica Johnson for the Canadian Broadcasting Corporation’s Go Public website. It highlights just how ruthless Canadian banks can be,

Employees from all five of Canada’s big banks have flooded Go Public with stories of how they feel pressured to upsell, trick and even lie to customers to meet unrealistic sales targets and keep their jobs.

The deluge is fuelling multiple calls for a parliamentary inquiry, even as the banks claim they’re acting in customers’ best interests.

In nearly 1,000 emails, employees from RBC, BMO, CIBC, TD and Scotiabank locations across Canada describe the pressures to hit targets that are monitored weekly, daily and in some cases hourly.

“Management is down your throat all the time,” said a Scotiabank financial adviser. “They want you to hit your numbers and it doesn’t matter how.”

CBC has agreed to protect their identities because the workers are concerned about current and future employment.

An RBC teller from Thunder Bay, Ont., said even when customers don’t need or want anything, “we need to upgrade their Visa card, increase their Visa limits or get them to open up a credit line.”

“It’s not what’s important to our clients anymore,” she said. “The bank wants more and more money. And it’s leading everyone into debt.”

A CIBC teller said, “I am expected to aggressively sell products, especially Visa. Hit those targets, who cares if it’s hurting customers.”

….

Many bank employees described pressure tactics used by managers to try to increase sales.

An RBC certified financial planner in Guelph, Ont., said she’s been threatened with pay cuts and losing her job if she doesn’t upsell enough customers.

“Managers belittle you,” she said. “We get weekly emails that highlight in red the people who are not hitting those sales targets. It’s bullying.”

Some TD Bank employees told CBC’s Go Public they felt they had to break the law to keep their jobs.

Employees at several RBC branches in Calgary said there are white boards posted in the staff room that list which financial advisers are meeting their sales targets and which advisers are coming up short.

A CIBC small business associate who quit in January after nine years on the job said her district branch manager wasn’t pleased with her sales results when she was pregnant.

While working in Waterloo, Ont., she says her manager also instructed staff to tell all new international students looking to open a chequing account that they had to open a “student package,” which also included a savings account, credit card and overdraft.

“That is unfair and not the law, but we were told to do it for all of them.”

Go Public requested interviews with the CEOs of the five big banks — BMO, CIBC, RBC, Scotiabank and TD — but all declined.

If you have the time, it’s worth reading Johnson’s article in its entirety as it provides some fascinating insight into Canadian banking practices.

Final comments and an actual ‘conversation’ about the future of work

I’m torn. It’s good to see an attempt to grapple with the extraordinary changes we are likely to see in the not-so-distant future. At the same time, it’s hard to believe that this Future Launch initiative is anything other than a self-interested means of profiting from fears about the future and a massive public relations campaign designed to engender goodwill, doubly so given the very bad publicity the banks, including RBC, garnered in 2017, as described in Johnson’s article.

Also, RBC and who knows how many other vested interests appear to have gathered data and information, which they’ve used to draw any number of conclusions. First, I can’t find any information about what data RBC is gathering, who else might have access to it, and what plans, if any, they have for using it. Second, RBC seems to have predetermined how this ‘future of work’ conversation needs to be changed.

I suggest treading as lightly as possible and keeping in mind that other ‘conversations’ are possible. For example, Mike Masnick at Techdirt has an April 3, 2018 posting about a new ‘future of work’ initiative,

For the past few years, there have been plenty of discussions about “the future of work,” but they tend to fall into one of two camps. You have the pessimists, who insist that the coming changes wrought by automation and artificial intelligence will lead to fewer and fewer jobs, as all of the jobs of today are automated out of existence. Then, there are the optimists who point to basically every single past similar prediction of doom and gloom due to innovation, which have always turned out to be incorrect. People in this camp point out that technology is more likely to augment than replace human-based work, and vaguely insist that “the jobs will come.” Whether you fall into one of those two camps — or somewhere in between or somewhere else entirely — one thing I’d hope most people can agree on is that the future of work will be… different.

Separately, we’re also living in an age where it is increasingly clear that those in and around the technology industry must take more responsibility in thinking through the possible consequences of the innovations they’re bringing to life, and exploring ways to minimize the harmful results (and hopefully maximizing the beneficial ones).

That brings us to the project we’re announcing today, Working Futures, which is an attempt to explore what the future of work might really look like in the next ten to fifteen years. We’re doing this project in partnership with two organizations that we’ve worked with multiple times in the past: Scout.ai and R Street.

….

The key point of this project: rather than just worry about the bad stuff or hand-wave around the idea of good stuff magically appearing, we want to really dig in — figure out what new jobs may actually appear, look into what benefits may accrue as well as what harms may be dished out — and see if there are ways to minimize the negative consequences, while pushing the world towards the beneficial consequences.

To do that, we’re kicking off a variation on the classic concept of scenario planning, bringing together a wide variety of individuals with different backgrounds, perspectives and ideas to run through a fun and creative exercise to imagine the future, while staying based in reality. We’re adding in some fun game-like mechanisms to push people to think about where the future might head. We’re also updating the output side of traditional scenario planning by involving science fiction authors, who obviously have a long history of thinking up the future, and who will participate in this process and help to craft short stories out of the scenarios we build, making them entertaining, readable and perhaps a little less “wonky” than the output of more traditional scenario plans.

There you have it; the Royal Bank is changing the conversation and Techdirt is inviting you to join in scenario planning and more.