Category Archives: ethics

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference.  The first of two days coincides with IROS 2017 – one of the premiere robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert, a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here (Note: Links have been removed),

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?
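As an aside for readers who want a concrete picture of “manually showing the robot examples of what its trainers want it to do,” here is a toy sketch of learning from demonstration. It has nothing to do with Kindred’s actual system; the sensor readings and action labels are invented, and it simply fits an off-the-shelf scikit-learn classifier to (observation, action) pairs as a stand-in for a pilot’s recorded demonstrations.

```python
# Toy sketch of learning from demonstration (behavioural cloning), not Kindred's system.
# Invented data: each observation is a 4-number sensor reading; each action is the
# label a human pilot chose while teleoperating the robot in that situation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
observations = rng.normal(size=(300, 4))
actions = np.where(observations[:, 0] > 0.5, "pick",
                   np.where(observations[:, 1] < -0.5, "ask_for_help", "place"))

# Fit a simple classifier to the pilot's demonstrations; this becomes the robot's "policy".
policy = KNeighborsClassifier(n_neighbors=5).fit(observations, actions)

# Later, faced with a new situation, the robot imitates what the pilot would likely have done.
new_situation = rng.normal(size=(1, 4))
print("suggested action:", policy.predict(new_situation)[0])
```

In Gildert’s terms, whatever shows up in those demonstrations—the pilot’s habits, values, and biases—becomes part of the fitted policy.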

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and are interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release (h/t ScienceDaily March 28, 2017 news item),

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’) or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Robots in Vancouver and in Canada (one of two)

This piece just started growing. It started with robot ethics, moved on to sexbots and news of an upcoming Canadian robotics roadmap. Then, it became a two-part posting with the robotics strategy (roadmap) moving to part two, along with robots and popular culture and a further exploration of robot and AI ethics issues.

What is a robot?

There are lots of robots, some are macroscale and others are at the micro and nanoscales (see my Sept. 22, 2017 posting for the latest nanobot). Here’s a definition from the Robot Wikipedia entry that covers all the scales. (Note: Links have been removed),

A robot is a machine—especially one programmable by a computer— capable of carrying out a complex series of actions automatically.[2] Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.

Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. [emphasis mine] By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.

We may think we’ve invented robots but the idea has been around for a very long time (from the Robot Wikipedia entry; Note: Links have been removed),

Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus[18] (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the Cretan island of Europa from pirates.

In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) “applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures.”[19][20] In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called “The Pigeon”. Hero of Alexandria (10–70 AD), a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.[21]

The 11th century Lokapannatti tells of how the Buddha’s relics were protected by mechanical robots (bhuta vahana yanta), from the kingdom of Roma visaya (Rome); until they were disarmed by King Ashoka. [22] [23]

In ancient China, the 3rd century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an ‘artificer’. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical ‘handiwork’ made of leather, wood, and artificial organs.[14] There are also accounts of flying automata in the Han Fei Zi and other texts, which attributes the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds (ma yuan) that could successfully fly.[17] In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours.

The beginning of automata is associated with Su Song’s astronomical clock tower, which featured mechanical figurines that chimed the hours.[24][25][26] His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.[26]

In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci’s notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo’s robot, able to sit up, wave its arms and move its head and jaw.[28] The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.

In Japan, complex animal and human automata were built between the 17th to 19th centuries, with many described in the 18th century Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet.[29] Different variations of the karakuri existed: the Butai karakuri, which were used in theatre, the Zashiki karakuri, which were small and used in homes, and the Dashi karakuri which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.

The term robot was coined by a Czech writer (from the Robot Wikipedia entry; Note: Links have been removed)

‘Robot’ was first applied as a term for artificial automata in a 1920 play R.U.R. by the Czech writer, Karel Čapek. However, Josef Čapek was named by his brother Karel as the true inventor of the term robot.[6][7] The word ‘robot’ itself was not new, having been in Slavic language as robota (forced laborer), a term which classified those peasants obligated to compulsory service under the feudal system widespread in 19th century Europe (see: Robot Patent).[37][38] Čapek’s fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal robota class eloquently fit the imagination of a new class of manufactured, artificial workers.

I’m particularly fascinated by how long humans have been imagining and creating robots.

Robot ethics in Vancouver

The Westender has run what I believe is the first article by a local (Vancouver, Canada) mainstream media outlet on the topic of robots and ethics. Tessa Vikander’s Sept. 14, 2017 article highlights two local researchers, Ajung Moon and Mark Schmidt, and a local social media company’s (Hootsuite) analytics director, Nik Pai. Vikander opens her piece with an ethical dilemma (Note: Links have been removed),

Emma is 68, in poor health and an alcoholic who has been told by her doctor to stop drinking. She lives with a care robot, which helps her with household tasks.

Unable to fix herself a drink, she asks the robot to do it for her. What should the robot do? Would the answer be different if Emma owns the robot, or if she’s borrowing it from the hospital?

This is the type of hypothetical, ethical question that Ajung Moon, director of the Open Roboethics Initiative [ORI], is trying to answer.

According to an ORI study, half of respondents said ownership should make a difference, and half said it shouldn’t. With society so torn on the question, Moon is trying to figure out how engineers should be programming this type of robot.

A Vancouver resident, Moon is dedicating her life to helping those in the decision-chair make the right choice. The question of the care robot is but one ethical dilemma in the quickly advancing world of artificial intelligence.

At the most sensationalist end of the scale, one form of AI that’s recently made headlines is the sex robot, which has a human-like appearance. A report from the Foundation for Responsible Robotics says that intimacy with sex robots could lead to greater social isolation [emphasis mine] because they desensitize people to the empathy learned through human interaction and mutually consenting relationships.

I’ll get back to the impact that robots might have on us in part two but first,

Sexbots, could they kill?

For more about sexbots in general, Alessandra Maldonado wrote an Aug. 10, 2017 article for salon.com about them (Note: A link has been removed),

Artificial intelligence has given people the ability to have conversations with machines like never before, such as speaking to Amazon’s personal assistant Alexa or asking Siri for directions on your iPhone. But now, one company has widened the scope of what it means to connect with a technological device and created a whole new breed of A.I. — specifically for sex-bots.

Abyss Creations has been in the business of making hyperrealistic dolls for 20 years, and by the end of 2017, they’ll unveil their newest product, an anatomically correct robotic sex toy. Matt McMullen, the company’s founder and CEO, explains the goal of sex robots is companionship, not only a physical partnership. “Imagine if you were completely lonely and you just wanted someone to talk to, and yes, someone to be intimate with,” he said in a video depicting the sculpting process of the dolls. “What is so wrong with that? It doesn’t hurt anybody.”

Maldonado also embedded this video into her piece,

A friend of mine described it as creepy. Specifically, we were discussing why someone would want to programme ‘insecurity’ as a desirable trait in a sexbot.

Marc Beaulieu’s concept of a desirable trait in a sexbot is one that won’t kill him according to his Sept. 25, 2017 article on Canadian Broadcasting News (CBC) online (Note: Links have been removed),

Harmony has a charming Scottish lilt, albeit a bit staccato and canny. Her eyes dart around the room, her chin dips as her eyebrows raise in coquettish fashion. Her face manages expressions that are impressively lifelike. That face comes in 31 different shapes and 5 skin tones, with or without freckles and it sticks to her cyber-skull with magnets. Just peel it off and switch it out at will. In fact, you can choose Harmony’s eye colour, body shape (in great detail) and change her hair too. Harmony, of course, is a sex bot. A very advanced one. How advanced is she? Well, if you have $12,332 CAD to put towards a talkative new home appliance, REALBOTIX says you could be having a “conversation” and relations with her come January. Happy New Year.

Caveat emptor though: one novel bonus feature you might also get with Harmony is her ability to eventually murder you in your sleep. And not because she wants to.

Dr Nick Patterson, faculty of Science Engineering and Built Technology at Deakin University in Australia is lending his voice to a slew of others warning us to slow down and be cautious as we steadily approach Westworldian levels of human verisimilitude with AI tech. Surprisingly, Patterson didn’t regurgitate the narrative we recognize from the popular sci-fi (increasingly non-fi actually) trope of a dystopian society’s futile resistance to a robocalypse. He doesn’t think Harmony will want to kill you. He thinks she’ll be hacked by a code savvy ne’er-do-well who’ll want to snuff you out instead. …

Embedded in Beaulieu’s article is another video of the same sexbot profiled earlier. Her programmer seems to have learned a thing or two (he no longer inputs any traits as you’re watching),

I guess you could get one for Christmas this year if you’re willing to wait for an early 2018 delivery and aren’t worried about hackers turning your sexbot into a killer. While the killer aspect might seem farfetched, it turns out it’s not the only sexbot/hacker issue.

Sexbots as spies

This Oct. 5, 2017 story by Karl Bode for Techdirt points out that sex toys that are ‘smart’ can easily be hacked for any reason including some mischief (Note: Links have been removed),

One “smart dildo” manufacturer was recently forced to shell out $3.75 million after it was caught collecting, err, “usage habits” of the company’s customers. According to the lawsuit, Standard Innovation’s We-Vibe vibrator collected sensitive data about customer usage, including “selected vibration settings,” the device’s battery life, and even the vibrator’s “temperature.” At no point did the company apparently think it was a good idea to clearly inform users of this data collection.

But security is also lacking elsewhere in the world of internet-connected sex toys. Alex Lomas of Pentest Partners recently took a look at the security in many internet-connected sex toys, and walked away arguably unimpressed. Using a Bluetooth “dongle” and antenna, Lomas drove around Berlin looking for openly accessible sex toys (he calls it “screwdriving,” in a riff off of wardriving). He subsequently found it’s relatively trivial to discover and hijack everything from vibrators to smart butt plugs — thanks to the way Bluetooth Low Energy (BLE) connectivity works:

“The only protection you have is that BLE devices will generally only pair with one device at a time, but range is limited and if the user walks out of range of their smartphone or the phone battery dies, the adult toy will become available for others to connect to without any authentication. I should say at this point that this is purely passive reconnaissance based on the BLE advertisements the device sends out – attempting to connect to the device and actually control it without consent is not something I or you should do. But now one could drive the Hush’s motor to full speed, and as long as the attacker remains connected over BLE and not the victim, there is no way they can stop the vibrations.”

Does that make you think twice about a sexbot?
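For the technically curious, the “purely passive reconnaissance” Lomas describes amounts to listening for the advertisements that any Bluetooth Low Energy device broadcasts. A minimal sketch of such a scan, assuming the cross-platform bleak Python library and a machine with a Bluetooth adapter, might look like the following; it only lists nearby advertising devices and, echoing Lomas’s own caveat, does not connect to or control anything.

```python
# A minimal sketch of a passive Bluetooth Low Energy (BLE) advertisement scan,
# assuming the 'bleak' library is installed and the machine has a BLE adapter.
# It only lists devices that are advertising nearby; it does not pair with,
# connect to, or control anything.
import asyncio
from bleak import BleakScanner

async def scan(seconds: float = 10.0):
    devices = await BleakScanner.discover(timeout=seconds)
    for d in devices:
        print(f"{d.address}  {d.name or '<no name>'}")

if __name__ == "__main__":
    asyncio.run(scan())
```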

Robots and artificial intelligence

Getting back to the Vikander article (Sept. 14, 2017), Moon or Vikander or both seem to have conflated artificial intelligence with robots in this section of the article,

As for the building blocks that have thrust these questions [care robot quandary mentioned earlier] into the spotlight, Moon explains that AI in its basic form is when a machine uses data sets or an algorithm to make a decision.

“It’s essentially a piece of output that either affects your decision, or replaces a particular decision, or supports you in making a decision.” With AI, we are delegating decision-making skills or thinking to a machine, she says.

Although we’re not currently surrounded by walking, talking, independently thinking robots, the use of AI [emphasis mine] in our daily lives has become widespread.

For Vikander, the conflation may have been due to concerns about maintaining her word count; for Moon, it may have been a matter of convenience or a consequence of how the jargon is evolving, with ‘robot’ meaning a machine specifically or, sometimes, a machine with AI, or AI only.

To be precise, not all robots have AI and not all AI is found in robots. It’s a distinction that may be more important for people developing robots and/or AI but it also seems to make a difference where funding is concerned. In a March 24, 2017 posting about the 2017 Canadian federal budget I noticed this,

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

This brings me to a recent set of meetings held in Vancouver to devise a Canadian robotics roadmap, which suggests the robotics folks feel they need specific representation and funding.

See: part two for the rest.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment with a program where it essentially functioned like a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than those words that seldom do.

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example: when foreign languages are naively processed by machine learning programs, leading to gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

This paper appears to be open access.
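If you’re curious what “the statistical number of associations between words” looks like in practice, here is a minimal sketch using pre-trained GloVe vectors available through the gensim library. It is not the researchers’ code or their full statistic; it simply compares average cosine similarities between the kinds of target and attribute word sets mentioned above.

```python
# A minimal sketch (not the authors' code) of the kind of word-association
# measure described above, using pre-trained GloVe vectors via gensim.
# Assumes the 'gensim' package is installed; the model name is one of
# gensim's standard downloadable GloVe sets.
import gensim.downloader as api
import numpy as np

model = api.load("glove-wiki-gigaword-50")  # downloads the vectors on first use

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_association(targets, attributes):
    # Average cosine similarity between every target word and every attribute word.
    return np.mean([cosine(model[t], model[a]) for t in targets for a in attributes])

career_words = ["programmer", "engineer", "scientist"]
care_words = ["nurse", "teacher", "librarian"]
male_words = ["man", "male"]
female_words = ["woman", "female"]

# Positive values mean the target set sits closer to the male attribute words
# than to the female ones in the embedding space.
for label, targets in [("career-type words", career_words), ("care-type words", care_words)]:
    bias = mean_association(targets, male_words) - mean_association(targets, female_words)
    print(f"{label}: male-vs-female association difference = {bias:+.3f}")
```

A positive difference for the career-type words and a negative one for the care-type words would echo the gender associations the paper reports.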

Links to more cautionary posts about AI,

Aug 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016:  Accountability for artificial intelligence decision-making

Oct. 25, 2016 Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current use of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Internet of toys, the robotification of childhood, and privacy issues

Leave it to the European Commission’s (EC) Joint Research Centre (JRC) to look into the future of toys. As far as I’m aware there are no such moves in either Canada or the US despite the ubiquity of robot toys and other such devices. From a March 23, 2017 EC JRC  press release (also on EurekAlert),

Action is needed to monitor and control the emerging Internet of Toys, concludes a new JRC report. Privacy and security are highlighted as main areas of concern.

Large numbers of connected toys have been put on the market over the past few years, and the turnover is expected to reach €10 billion by 2020 – up from just €2.6 billion in 2015.

Connected toys come in many different forms, from smart watches to teddy bears that interact with their users. They are connected to the internet and together with other connected appliances they form the Internet of Things, which is bringing technology into our daily lives more than ever.

However, the toys’ ability to record, store and share information about their young users raises concerns about children’s safety, privacy and social development.

A team of JRC scientists and international experts looked at the safety, security, privacy and societal questions emerging from the rise of the Internet of Toys. The report invites policymakers, industry, parents and teachers to study connected toys more in depth in order to provide a framework which ensures that these toys are safe and beneficial for children.

Robotification of childhood

Robots are no longer only used in industry to carry out repetitive or potentially dangerous tasks. In the past years, robots have entered our everyday lives and also children are more and more likely to encounter robotic or artificial intelligence-enhanced toys.

We still know relatively little about the consequences of children’s interaction with robotic toys. However, it is conceivable that they represent both opportunities and risks for children’s cognitive, socio-emotional and moral-behavioural development.

For example, social robots may further the acquisition of foreign language skills by compensating for the lack of native speakers as language tutors or by removing the barriers and peer pressure encountered in the classroom. There is also evidence about the benefits of child-robot interaction for children with developmental problems, such as autism or learning difficulties, who may find human interaction difficult.

However, the internet-based personalization of children’s education via filtering algorithms may also increase the risk of ‘educational bubbles’ where children only receive information that fits their pre-existing knowledge and interest – similar to adult interaction on social media networks.

Safety and security considerations

The rapid rise in internet connected toys also raises concerns about children’s safety and privacy. In particular, the way that data gathered by connected toys is analysed, manipulated and stored is not transparent, which poses an emerging threat to children’s privacy.

The data provided by children while they play, i.e. the sounds, images and movements recorded by connected toys, is personal data protected by the EU data protection framework, as well as by the new General Data Protection Regulation (GDPR). However, information on how this data is stored, analysed and shared might be hidden in long privacy statements or policies and often goes unnoticed by parents.

Whilst children’s right to privacy is the most immediate concern linked to connected toys, there is also a long term concern: growing up in a culture where the tracking, recording and analysing of children’s everyday choices becomes a normal part of life is also likely to shape children’s behaviour and development.

Usage framework to guide the use of connected toys

The report calls for industry and policymakers to create a connected toys usage framework to act as a guide for their design and use.

This would also help toymakers to meet the challenge of complying with the new European General Data Protection Regulation (GDPR) which comes into force in May 2018, which will increase citizens’ control over their personal data.

The report also calls for the connected toy industry and academic researchers to work together to produce better designed and safer products.

Advice for parents

The report concludes that it is paramount that we understand how children interact with connected toys and which risks and opportunities they entail for children’s development.

“These devices come with really interesting possibilities and the more we use them, the more we will learn about how to best manage them. Locking them up in a cupboard is not the way to go. We as adults have to understand how they work – and how they might ‘misbehave’ – so that we can provide the right tools and the right opportunities for our children to grow up happy in a secure digital world,” said Stéphane Chaudron, the report’s lead researcher at the Joint Research Centre (JRC).

The authors of the report encourage parents to get informed about the capabilities, functions, security measures and privacy settings of toys before buying them. They also urge parents to focus on the quality of play by observing their children, talking to them about their experiences and playing alongside and with their children.

Protecting and empowering children

Through the Alliance to better protect minors online and with the support of UNICEF, NGOs, Toy Industries Europe and other industry and stakeholder groups, European and global ICT and media companies  are working to improve the protection and empowerment of children when using connected toys. This self-regulatory initiative is facilitated by the European Commission and aims to create a safer and more stimulating digital environment for children.

There’s an engaging video accompanying this press release,

You can find the report (Kaleidoscope on the Internet of Toys: Safety, security, privacy and societal insights) here and both the PDF and print versions are free (although I imagine you’ll have to pay postage for the print version). This report was published in 2016; the authors are Stéphane Chaudron, Rosanna Di Gioia, Monica Gemo, Donell Holloway, Jackie Marsh, Giovanna Mascheroni, Jochen Peter, and Dylan Yamada-Rice; organizations involved include European Cooperation in Science and Technology (COST), Digital Literacy and Multimodal Practices of Young Children (DigiLitEY), and COST Action IS1410. DigiLitEY is a European network of 33 countries focusing on research in this area (2015-2019).

3D bioprinting: a conference about the latest trends (May 3 – 5, 2017 at the University of British Columbia, Vancouver)

The University of British Columbia’s (UBC) Peter Wall Institute for Advanced Studies (PWIAS) is hosting along with local biotech firm, Aspect Biosystems, a May 3 -5, 2017 international research roundtable known as ‘Printing the Future of Therapeutics in 3D‘.

A May 1, 2017 UBC news release (received via email) offers some insight into the field of bioprinting from one of the roundtable organizers,

This week, global experts will gather at the University of British Columbia to discuss the latest trends in 3D bioprinting—a technology used to create living tissues and organs.

In this Q&A, UBC chemical and biological engineering professor Vikramaditya Yadav, who is also with the Regenerative Medicine Cluster Initiative in B.C., explains how bioprinting could potentially transform healthcare and drug development, and highlights Canadian innovations in this field.

WHY IS 3D BIOPRINTING SIGNIFICANT?

Bioprinted tissues or organs could allow scientists to predict beforehand how a drug will interact within the body. For every life-saving therapeutic drug that makes its way into our medicine cabinets, Health Canada blocks the entry of nine drugs because they are proven unsafe or ineffective. Eliminating poor-quality drug candidates to reduce development costs—and therefore the cost to consumers—has never been more urgent.

In Canada alone, nearly 4,500 individuals are waiting to be matched with organ donors. If and when bioprinters evolve to the point where they can manufacture implantable organs, the concept of an organ transplant waiting list would cease to exist. And bioprinted tissues and organs from a patient’s own healthy cells could potentially reduce the risk of transplant rejection and related challenges.

HOW IS THIS TECHNOLOGY CURRENTLY BEING USED?

Skin, cartilage and bone, and blood vessels are some of the tissue types that have been successfully constructed using bioprinting. Two of the most active players are the Wake Forest Institute for Regenerative Medicine in North Carolina, which reports that its bioprinters can make enough replacement skin to cover a burn with 10 times less healthy tissue than is usually needed, and California-based Organovo, which makes its kidney and liver tissue commercially available to pharmaceutical companies for drug testing.

Beyond medicine, bioprinting has already been commercialized to print meat and artificial leather. It’s been estimated that the global bioprinting market will hit $2 billion by 2021.

HOW IS CANADA INVOLVED IN THIS FIELD?

Canada is home to some of the most innovative research clusters and start-up companies in the field. The UBC spin-off Aspect Biosystems has pioneered a bioprinting paradigm that rapidly prints on-demand tissues. It has successfully generated tissues found in human lungs.

Many initiatives at Canadian universities are laying strong foundations for the translation of bioprinting and tissue engineering into mainstream medical technologies. These include the Regenerative Medicine Cluster Initiative in B.C., which is headed by UBC, and the University of Toronto’s Institute of Biomaterials and Biomedical Engineering.

WHAT ETHICAL ISSUES DOES BIOPRINTING CREATE?

There are concerns about the quality of the printed tissues. It’s important to note that the U.S. Food and Drug Administration and Health Canada are dedicating entire divisions to regulation of biomanufactured products and biomedical devices, and the FDA also has a special division that focuses on regulation of additive manufacturing – another name for 3D printing.

These regulatory bodies have an impressive track record that should assuage concerns about the marketing of substandard tissue. But cost and pricing are arguably much more complex issues.

Some ethicists have also raised questions about whether society is not too far away from creating Replicants, à la Blade Runner. The idea is fascinating, scary and ethically grey. In theory, if one could replace the extracellular matrix of bones and muscles with a stronger substitute and use cells that are viable for longer, it is not too far-fetched to create bones or muscles that are stronger and more durable than their natural counterparts.

WILL DOCTORS BE PRINTING REPLACEMENT BODY PARTS IN 20 YEARS’ TIME?

This is still some way off. Optimistically, patients could see the technology in certain clinical environments within the next decade. However, some technical challenges must be addressed in order for this to occur, beginning with faithful replication of the correct 3D architecture and vascularity of tissues and organs. The bioprinters themselves need to be improved in order to increase cell viability after printing.

These developments are happening as we speak. Regulation, though, will be the biggest challenge for the field in the coming years.

There are some events open to the public (from the international research roundtable homepage),

OPEN EVENTS

You’re invited to attend the open events associated with Printing the Future of Therapeutics in 3D.

Café Scientifique

Thursday, May 4, 2017
Telus World of Science
5:30 – 8:00pm [all tickets have been claimed as of May 2, 2017 at 3:15 pm PT]

3D Bioprinting: Shaping the Future of Health

Imagine a world where drugs are developed without the use of animals, where doctors know how a patient will react to a drug before prescribing it and where patients can have a replacement organ 3D-printed using their own cells, without dealing with long donor waiting lists or organ rejection. 3D bioprinting could enable this world. Join us for lively discussion and dessert as experts in the field discuss the exciting potential of 3D bioprinting and the ethical issues raised when you can print human tissues on demand. This is also a rare opportunity to see a bioprinter live in action!

Open Session

Friday, May 5, 2017
Peter Wall Institute for Advanced Studies
2:00 – 7:00pm

A Scientific Discussion on the Promise of 3D Bioprinting

The medical industry is struggling to keep our ageing population healthy. Developing effective and safe drugs is too expensive and time-consuming to continue unchanged. We cannot meet the current demand for transplant organs, and people are dying on the donor waiting list every day.  We invite you to join an open session where four of the most influential academic and industry professionals in the field discuss how 3D bioprinting is being used to shape the future of health and what ethical challenges may be involved if you are able to print your own organs.

ROUNDTABLE INFORMATION

The University of British Columbia and the award-winning bioprinting company Aspect Biosystems are proud to be co-organizing the first “Printing the Future of Therapeutics in 3D” International Research Roundtable. This event will congregate global leaders in tissue engineering research and pharmaceutical industry experts to discuss the rapidly emerging and potentially game-changing technology of 3D-printing living human tissues (bioprinting). The goals are to:

Highlight the state-of-the-art in 3D bioprinting research
Ideate on disruptive innovations that will transform bioprinting from a novel research tool to a broadly adopted systematic practice
Formulate an actionable strategy for industry engagement, clinical translation and societal impact
Present, in a public forum, key messages to educate and stimulate discussion on the promises of bioprinting technology

The Roundtable will bring together a unique collection of industry experts and academic leaders to define a guiding vision to efficiently deploy bioprinting technology for the discovery and development of new therapeutics. As the novel technology of 3D bioprinting is more broadly adopted, we envision this Roundtable will become a key annual meeting to help guide the development of the technology both in Canada and globally.

We thank you for your involvement in this ground-breaking event and look forward to you all joining us in Vancouver for this unique research roundtable.

Kind Regards,
The Organizing Committee
Christian Naus, Professor, Cellular & Physiological Sciences, UBC
Vikram Yadav, Assistant Professor, Chemical & Biological Engineering, UBC
Tamer Mohamed, CEO, Aspect Biosystems
Sam Wadsworth, CSO, Aspect Biosystems
Natalie Korenic, Business Coordinator, Aspect Biosystems

I’m glad to see this event is taking place—and with public events too! (Wish I’d seen the Café Scientifique announcement earlier; I first checked for tickets yesterday and was hoping there’d been some cancellations today.) Finally, for the interested, you can find Aspect Biosystems here.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much-lauded Hinton has been living in the US and working for Google. These latest announcements (the Pan-Canadian AI Strategy and the Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article, coming up shortly, mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at Université de Montréal) testified at the US Presidential Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the Canadian AI scene: Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president and engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and by smartphones with voice-assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people away from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s (March 31, 2017) earlier posting: China, US, and the race for artificial intelligence research domination.

Robots, Dallas (US), ethics, and killing

I’ve waited a while before posting this piece in the hope that the situation would calm. Sadly, it took longer than hoped as there was an additional shooting of police officers, this time in Baton Rouge on July 17, 2016. (There’s more about that shooting in a July 18, 2016 news posting by Steve Visser for CNN.)

Finally: Robots, Dallas, ethics, and killing: In the wake of the Thursday, July 7, 2016 shooting in Dallas (Texas, US) and the subsequent use of a robot armed with a bomb to kill the suspect, a discussion about ethics has arisen.

This discussion comes at a difficult period. In the same week as the targeted shooting of white police officers in Dallas, two African-American males were shot and killed in two apparently unprovoked shootings by police. The victims were Alton Sterling in Baton Rouge, Louisiana on Tuesday, July 5, 2016 and Philando Castile in Minnesota on Wednesday, July 6, 2016. (There’s more detail about the shootings prior to Dallas in a July 7, 2016 news item on CNN.) The suspect in Dallas, Micah Xavier Johnson, a 25-year-old African-American male, had served in the US Army Reserve and been deployed in Afghanistan (there’s more in a July 9, 2016 news item by Emily Shapiro, Julia Jacobo, and Stephanie Wash for abcnews.go.com). All of this has taken place within the context of a movement started in 2013 in the US, Black Lives Matter.

Getting back to robots, most of the material I’ve seen about ‘killing or killer’ robots has so far involved industrial accidents (very few to date) and ethical issues for self-driving cars (see a May 31, 2016 posting by Noah J. Goodall on the IEEE [Institute of Electrical and Electronics Engineers] Spectrum website).

The incident in Dallas is apparently the first time a US police organization has used a robot as a bomb, although it has been an occasional practice by US Armed Forces in combat situations. Rob Lever in a July 8, 2016 Agence France-Presse piece on phys.org focuses on the technology aspect,

The “bomb robot” killing of a suspected Dallas shooter may be the first lethal use of an automated device by American police, and underscores the growing role of technology in law enforcement.

Regardless of the methods in Dallas, the use of robots is expected to grow, to handle potentially dangerous missions in law enforcement and the military.


Researchers at Florida International University meanwhile have been working on a TeleBot that would allow disabled police officers to control a humanoid robot.

The robot, described in some reports as similar to the “RoboCop” in films from 1987 and 2014, was designed “to look intimidating and authoritative enough for citizens to obey the commands,” but with a “friendly appearance” that makes it “approachable to citizens of all ages,” according to a research paper.

Robot developers downplay the potential for the use of automated lethal force by the devices, but some analysts say debate on this is needed, both for policing and the military.

A July 9, 2016 Associated Press piece by Michael Liedtke and Bree Fowler on phys.org focuses more closely on ethical issues raised by the Dallas incident,

When Dallas police used a bomb-carrying robot to kill a sniper, they also kicked off an ethical debate about technology’s use as a crime-fighting weapon.

The strategy opens a new chapter in the escalating use of remote and semi-autonomous devices to fight crime and protect lives. It also raises new questions over when it’s appropriate to dispatch a robot to kill dangerous suspects instead of continuing to negotiate their surrender.

“If lethally equipped robots can be used in this situation, when else can they be used?” says Elizabeth Joh, a University of California at Davis law professor who has followed U.S. law enforcement’s use of technology. “Extreme emergencies shouldn’t define the scope of more ordinary situations where police may want to use robots that are capable of harm.”

In approaching the question of ethics, Mike Masnick’s July 8, 2016 posting on Techdirt provides a surprisingly sympathetic reading of the Dallas Police Department’s actions, as well as asking some provocative questions about how robots might be better employed by police organizations (Note: Links have been removed),

The Dallas Police, who have a long history of engaging in community policing designed to de-escalate situations rather than encourage antagonism between police and the community, have been handling all of this with astounding restraint, frankly. Many other police departments would be lashing out, and yet the Dallas Police Dept, while obviously grieving for a horrible situation, appear to be handling this tragic situation professionally. And it appears that they did everything they could in a reasonable manner. They first tried to negotiate with Johnson, but after that failed and they feared more lives would be lost, they went with the robot + bomb option. And, obviously, considering he had already shot many police officers, I don’t think anyone would question the police justification if they had shot Johnson.

But, still, at the very least, the whole situation raises a lot of questions about the legality of police using a bomb offensively to blow someone up. And, it raises some serious questions about how other police departments might use this kind of technology in the future. The situation here appears to be one where people reasonably concluded that this was the most effective way to stop further bloodshed. And this is a police department with a strong track record of reasonable behavior. But what about other police departments where they don’t have that kind of history? What are the protocols for sending in a robot or drone to kill someone? Are there any rules at all?

Furthermore, it actually makes you wonder, why isn’t there a focus on using robots to de-escalate these situations? What if, instead of buying military surplus bomb robots, there were robots being designed to disarm a shooter, or detain him in a manner that would make it easier for the police to capture him alive? Why should the focus of remote robotic devices be to kill him? This isn’t faulting the Dallas Police Department for its actions last night. But, rather, if we’re going to enter the age of robocop, shouldn’t we be looking for ways to use such robotic devices in a manner that would help capture suspects alive, rather than dead?

Gordon Corera’s July 12, 2016 article on the BBC’s (British Broadcasting Corporation) news website provides an overview of the use of automation and of ‘killing/killer robots’,

Remote killing is not new in warfare. Technology has always been driven by military application, including allowing killing to be carried out at distance – prior examples might be the introduction of the longbow by the English at Crecy in 1346, then later the Nazi V1 and V2 rockets.

More recently, unmanned aerial vehicles (UAVs) or drones such as the Predator and the Reaper have been used by the US outside of traditional military battlefields.

Since 2009, the official US estimate is that about 2,500 “combatants” have been killed in 473 strikes, along with perhaps more than 100 non-combatants. Critics dispute those figures as being too low.

Back in 2008, I visited the Creech Air Force Base in the Nevada desert, where drones are flown from.

During our visit, the British pilots from the RAF deployed their weapons for the first time.

One of the pilots visibly bristled when I asked him if it ever felt like playing a video game – a question that many ask.

The military uses encrypted channels to control its ordnance disposal robots, but – as any hacker will tell you – there is almost always a flaw somewhere that a determined opponent can find and exploit.

We have already seen cars being taken control of remotely while people are driving them, and the nightmare of the future might be someone taking control of a robot and sending a weapon in the wrong direction.

The military is at the cutting edge of developing robotics, but domestic policing is also a different context in which greater separation from the community being policed risks compounding problems.

The balance between risks and benefits of robots, remote control and automation remain unclear.

But Dallas suggests that the future may be creeping up on us faster than we can debate it.

The excerpts here do not do justice to the articles; if you’re interested in this topic and have the time, I encourage you to read all the articles cited here in their entirety.

*(ETA: July 25, 2016 at 1405 hours PDT: There is a July 25, 2016 essay by Carrie Sheffield for Salon.com which may provide some insight into the Black Lives Matter movement and some of the generational issues within the US African-American community as revealed by the movement.)*

A human user manual—for robots

Researchers from the Georgia Institute of Technology (Georgia Tech), funded by the US Office of Naval Research (ONR), have developed a program that teaches robots to read stories and more in an effort to educate them about humans. From a June 16, 2016 ONR news release by Warren Duffie Jr. (also on EurekAlert),

With support from the Office of Naval Research (ONR), researchers at the Georgia Institute of Technology have created an artificial intelligence software program named Quixote to teach robots to read stories, learn acceptable behavior and understand successful ways to conduct themselves in diverse social situations.

“For years, researchers have debated how to teach robots to act in ways that are appropriate, non-intrusive and trustworthy,” said Marc Steinberg, an ONR program manager who oversees the research. “One important question is how to explain complex concepts such as policies, values or ethics to robots. Humans are really good at using narrative stories to make sense of the world and communicate to other people. This could one day be an effective way to interact with robots.”

The rapid pace of artificial intelligence has stirred fears by some that robots could act unethically or harm humans. Dr. Mark Riedl, an associate professor and director of Georgia Tech’s Entertainment Intelligence Lab, hopes to ease concerns by having Quixote serve as a “human user manual” by teaching robots values through simple stories. After all, stories inform, educate and entertain–reflecting shared cultural knowledge, social mores and protocols.

For example, if a robot is tasked with picking up a pharmacy prescription for a human as quickly as possible, it could: a) take the medicine and leave, b) interact politely with pharmacists, or c) wait in line. Without value alignment and positive reinforcement, the robot might logically deduce robbery is the fastest, cheapest way to accomplish its task. However, with value alignment from Quixote, it would be rewarded for waiting patiently in line and paying for the prescription.

For their research, Riedl and his team crowdsourced stories from the Internet. Each tale needed to highlight daily social interactions–going to a pharmacy or restaurant, for example–as well as socially appropriate behaviors (e.g., paying for meals or medicine) within each setting.

The team plugged the data into Quixote to create a virtual agent–in this case, a video game character placed into various game-like scenarios mirroring the stories. As the virtual agent completed a game, it earned points and positive reinforcement for emulating the actions of protagonists in the stories.

Riedl’s team ran the agent through 500,000 simulations, and it displayed proper social interactions more than 90 percent of the time.

“These games are still fairly simple,” said Riedl, “more like ‘Pac-Man’ instead of ‘Halo.’ However, Quixote enables these artificial intelligence agents to immerse themselves in a story, learn the proper sequence of events and be encoded with acceptable behavior patterns. This type of artificial intelligence can be adapted to robots, offering a variety of applications.”

Within the next six months, Riedl’s team hopes to upgrade Quixote’s games from “old-school” to more modern and complex styles like those found in Minecraft–in which players use blocks to build elaborate structures and societies.

Riedl believes Quixote could one day make it easier for humans to train robots to perform diverse tasks. Steinberg notes that robotic and artificial intelligence systems may one day be a much larger part of military life. This could involve mine detection and deactivation, equipment transport and humanitarian and rescue operations.

“Within a decade, there will be more robots in society, rubbing elbows with us,” said Riedl. “Social conventions grease the wheels of society, and robots will need to understand the nuances of how humans do things. That’s where Quixote can serve as a valuable tool. We’re already seeing it with virtual agents like Siri and Cortana, which are programmed not to say hurtful or insulting things to users.”
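For readers who want a more concrete sense of how the ‘positive reinforcement’ in Quixote works, here is a minimal sketch of story-based reward shaping. To be clear, this is my own illustration, not the Georgia Tech code: the pharmacy errand, the action names, and the reward values are all hypothetical. The point is simply that adding a bonus for following the story’s sequence of events makes the polite route score higher than the technically faster robbery.

```python
# A minimal, hypothetical sketch of story-based reward shaping (not the
# Georgia Tech Quixote code). The scenario, action names, and reward
# values are invented for illustration.

# A socially acceptable sequence of events, as it might be distilled from
# crowdsourced stories about picking up a prescription.
STORY_TRAJECTORY = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]


def task_reward(history):
    """Base reward: the errand counts as done once the agent has the medicine
    and leaves, however it got the medicine. On its own, this reward makes
    robbery look like the cheapest solution."""
    has_medicine = ("pay_for_medicine" in history) or ("grab_medicine_and_run" in history)
    finished = bool(history) and history[-1] == "leave"
    return 10.0 if (has_medicine and finished) else 0.0


def story_shaping(history):
    """Shaping term: a small bonus for each action that matches the next
    expected event in the story trajectory, a small penalty otherwise."""
    bonus, expected = 0.0, 0
    for action in history:
        if expected < len(STORY_TRAJECTORY) and action == STORY_TRAJECTORY[expected]:
            bonus += 1.0
            expected += 1
        else:
            bonus -= 1.0
    return bonus


def evaluate(history):
    """Total reward a learning agent would receive for a completed episode."""
    return task_reward(history) + story_shaping(history)


if __name__ == "__main__":
    polite = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]
    robbery = ["enter_pharmacy", "grab_medicine_and_run", "leave"]
    print("polite errand:", evaluate(polite))   # 10.0 + 4.0 = 14.0
    print("robbery:     ", evaluate(robbery))   # 10.0 - 1.0 = 9.0
```

In a full system the shaping signal would be distilled from many crowdsourced stories and would feed a reinforcement learner running thousands of simulated episodes, as the news release describes; the toy version above only shows why the reward structure steers the agent toward the socially acceptable sequence.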

This story brought to mind two other projects: RoboEarth (an internet for robots only), mentioned in my Jan. 14, 2014 posting, which was an update on the project featuring its use in hospitals; and RoboBrain, a robot learning project (sourcing the internet, YouTube, and more for information to teach robots), mentioned in my Sept. 2, 2014 posting.

BRAIN and ethics in the US with some Canucks (not the hockey team) participating (part two of five)

The Brain research, ethics, and nanotechnology (part one of five) May 19, 2014 post kicked off a series titled ‘Brains, prostheses, nanotechnology, and human enhancement’ which brings together a number of developments in the worlds of neuroscience*, prosthetics, and, incidentally, nanotechnology in the field of interest called human enhancement. Parts one through four are an attempt to draw together a number of new developments, mostly in the US and in Europe. Due to my language skills which extend to English and, more tenuously, French, I can’t provide a more ‘global perspective’. Part five features a summary.

Before further discussing the US Presidential Commission for the Study of Bioethical Issues ‘brain’ meetings mentioned in part one, I have some background information.

The US launched its self-explanatory BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative (originally called BAM; Brain Activity Map) in 2013. (You can find more about the history and details in this Wikipedia entry.)

From the beginning there has been discussion about how nanotechnology will be of fundamental use in the US BRAIN initiative and the European Union’s 10-year Human Brain Project (there’s more about that in my Jan. 28, 2013 posting). There’s also a 2013 book (Nanotechnology, the Brain, and the Future) from Springer, which, according to the table of contents, presents an exciting (to me) range of ideas about nanotechnology and brain research,

I. Introduction and key resources

1. Nanotechnology, the brain, and the future: Anticipatory governance via end-to-end real-time technology assessment by Jason Scott Robert, Ira Bennett, and Clark A. Miller
2. The complex cognitive systems manifesto by Richard P. W. Loosemore
3. Analysis of bibliometric data for research at the intersection of nanotechnology and neuroscience by Christina Nulle, Clark A. Miller, Harmeet Singh, and Alan Porter
4. Public attitudes toward nanotechnology-enabled human enhancement in the United States by Sean Hays, Michael Cobb, and Clark A. Miller
5. U.S. news coverage of neuroscience nanotechnology: How U.S. newspapers have covered neuroscience nanotechnology during the last decade by Doo-Hun Choi, Anthony Dudo, and Dietram Scheufele
6. Nanoethics and the brain by Valerye Milleson
7. Nanotechnology and religion: A dialogue by Tobie Milford

II. Brain repair

8. The age of neuroelectronics by Adam Keiper
9. Cochlear implants and Deaf culture by Derrick Anderson
10. Healing the blind: Attitudes of blind people toward technologies to cure blindness by Arielle Silverman
11. Ethical, legal and social aspects of brain-implants using nano-scale materials and techniques by Francois Berger et al.
12. Nanotechnology, the brain, and personal identity by Stephanie Naufel

III. Brain enhancement

13. Narratives of intelligence: the sociotechnical context of cognitive enhancement by Sean Hays
14. Towards responsible use of cognitive-enhancing drugs by the healthy by Henry T. Greeley et al.
15. The opposite of human enhancement: Nanotechnology and the blind chicken debate by Paul B. Thompson
16. Anticipatory governance of human enhancement: The National Citizens’ Technology Forum by Patrick Hamlett, Michael Cobb, and David Guston
a. Arizona site report
b. California site report
c. Colorado site report
d. Georgia site report
e. New Hampshire site report
f. Wisconsin site report

IV. Brain damage

17. A review of nanoparticle functionality and toxicity on the central nervous system by Yang et al.
18. Recommendations for a municipal health and safety policy for nanomaterials: A Report to the City of Cambridge City Manager by Sam Lipson
19. Museum of Science Nanotechnology Forum lets participants be the judge by Mark Griffin
20. Nanotechnology policy and citizen engagement in Cambridge, Massachusetts: Local reflexive governance by Shannon Conley

Thanks to David Bruggeman’s May 13, 2014 posting on his Pasco Phronesis blog, I stumbled across both a future meeting notice and documentation of the Feb. 2014 meeting of the Presidential Commission for the Study of Bioethical Issues (Note: Links have been removed),

Continuing from its last meeting (in February 2014), the Presidential Commission for the Study of Bioethical Issues will continue working on the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative in its June 9-10 meeting in Atlanta, Georgia.  An agenda is still forthcoming, …

In other developments, Commission staff are apparently going to examine some efforts to engage bioethical issues through plays.  I’d be very excited to see some of this happen during a Commission meeting, but any little bit is interesting.  The authors of these plays, Karen H. Rothenburg and Lynn W. Bush, have published excerpts in their book The Drama of DNA: Narrative Genomics.  …

The Commission also has a YouTube channel …

Integrating a theatrical experience into the reams of public engagement exercises that technologies such as stem cells, GMOs (genetically modified organisms), and nanotechnology tend to spawn seems a delightful idea.

Interestingly, the meeting in June 2014 will coincide with the book’s release date. I dug further and found these snippets of information. The book is being published by Oxford University Press and is available in both paperback and e-book formats. The authors are not playwrights, as one might assume. From the Author Information page,

Lynn Bush, PhD, MS, MA is on the faculty of Pediatric Clinical Genetics at Columbia University Medical Center, a faculty associate at their Center for Bioethics, and serves as an ethicist on pediatric and genomic advisory committees for numerous academic medical centers and professional organizations. Dr. Bush has an interdisciplinary graduate background in clinical and developmental psychology, bioethics, genomics, public health, and neuroscience that informs her research, writing, and teaching on the ethical, psychological, and policy challenges of genomic medicine and clinical research with children, and prenatal-newborn screening and sequencing.

Karen H. Rothenberg, JD, MPA serves as Senior Advisor on Genomics and Society to the Director, National Human Genome Research Institute and Visiting Scholar, Department of Bioethics, Clinical Center, National Institutes of Health. She is the Marjorie Cook Professor of Law, Founding Director, Law & Health Care Program and former Dean at the University of Maryland Francis King Carey School of Law and Visiting Professor, Johns Hopkins Berman Institute of Bioethics. Professor Rothenberg has served as Chair of the Maryland Stem Cell Research Commission, President of the American Society of Law, Medicine and Ethics, and has been on many NIH expert committees, including the NIH Recombinant DNA Advisory Committee.

It is possible to get a table of contents for the book, but I notice not a single playwright is mentioned in any of the promotional material. While I like the idea in principle, it seems a bit odd and suggests that these are purpose-written plays. I have not had good experiences with purpose-written plays, which tend to be didactic and dull, especially when they’re not devised by a professional storyteller.

You can find out more about the upcoming ‘bioethics’ June 9 – 10, 2014 meeting here. As for the Feb. 10 – 11, 2014 meeting, the Brain research, ethics, and nanotechnology (part one of five) May 19, 2014 post featured only the participation of Barbara Herr Harthorn (director of the Center for Nanotechnology in Society at the University of California at Santa Barbara).

It turns out there are some Canadian tidbits. From the Meeting Sixteen: Feb. 10-11, 2014 webcasts page (each presenter is featured in their own webcast of approximately 11 mins.),

Timothy Caulfield, LL.M., F.R.S.C., F.C.A.H.S.

Canada Research Chair in Health Law and Policy
Professor in the Faculty of Law
and the School of Public Health
University of Alberta

Eric Racine, Ph.D.

Director, Neuroethics Research Unit
Associate Research Professor
Institut de Recherches Cliniques de Montréal
Associate Research Professor,
Department of Medicine
Université de Montréal
Adjunct Professor, Department of Medicine and Department of Neurology and Neurosurgery,
McGill University

It was a surprise to see a couple of Canucks listed as presenters, and I’m grateful that the Presidential Commission for the Study of Bioethical Issues is so generous with information. In addition to the webcasts, there is the Federal Register Notice of the meeting, an agenda, transcripts, and presentation materials. By the way, Caulfield discussed hype and Racine discussed public understanding of science with regard to neuroscience, both fitting into the overall theme of communication. I’ll have to look more thoroughly, but it seems to me there’s no mention of pop culture as a means of communicating about science and technology.

Links to other posts in the Brains, prostheses, nanotechnology, and human enhancement five-part series:

Part one: Brain research, ethics, and nanotechnology (May 19, 2014 post)

Part three: Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society issued May 2014 by US Presidential Bioethics Commission (May 20, 2014)

Part four: Brazil, the 2014 World Cup kickoff, and a mind-controlled exoskeleton (May 20, 2014)

Part five: Brains, prostheses, nanotechnology, and human enhancement: summary (May 20, 2014)

* ‘neursocience’ corrected to ‘neuroscience’ on May 20, 2014.

Brain research, ethics, and nanotechnology (part one of five)

This post kicks off a series titled ‘Brains, prostheses, nanotechnology, and human enhancement’ which brings together a number of developments in the worlds of neuroscience*, prosthetics, and, incidentally, nanotechnology in the field of interest called human enhancement. Parts one through four are an attempt to draw together a number of new developments, mostly in the US and in Europe. Due to my language skills which extend to English and, more tenuously, French, I can’t provide a more ‘global perspective’. Part five features a summary.

Barbara Herr Harthorn, head of UCSB’s (University of California at Santa Barbara) Center for Nanotechnology in Society (CNS), one of two such centers in the US (the other is at Arizona State University), was featured in a May 12, 2014 article by Lyz Hoffman for the [Santa Barbara] Independent.com,

… Barbara Harthorn has spent the past eight-plus years leading a team of researchers in studying people’s perceptions of the small-scale science with big-scale implications. Sponsored by the National Science Foundation, CNS enjoys national and worldwide recognition for the social science lens it holds up to physical and life sciences.

Earlier this year, Harthorn attended a meeting hosted by the Presidential Commission for the Study of Bioethical Issues. The commission’s chief focus was on the intersection of ethics and brain research, but Harthorn was invited to share her thoughts on the relationship between ethics and nanotechnology.

(You can find Harthorn’s February 2014 presentation to the Presidential Commission for the Study of Bioethical Issues here on their webcasts page.)

I have excerpted part of the Q&A (questions and answers) from Hoffman’s May 12, 2014 article but encourage you to read the piece in its entirety as it provides both a brief beginners’ introduction to nanotechnology and an insight into some of the more complex social impact issues presented by nano and other emerging technologies vis à vis neuroscience and human enhancement,

So there are some environmental concerns with nanomaterials. What are the ethical concerns? What came across at the Presidential Commission meeting?

They’re talking about treatment of Alzheimer’s and neurological brain disorders, where the issue of loss of self is a fairly integral part of the disease. There are complicated issues about patients’ decision-making. Nanomaterials could be used to grow new tissues and potentially new organs in the future.

What could that mean for us?

Human enhancement is very interesting. It provokes really fascinating discussions. In our view, the discussions are not much at all about the technologies but very much about the social implications. People feel enthusiastic initially, but when reflecting, the issues of equitable access and justice immediately rise to the surface. We [at CNS] are talking about imagined futures and trying to get at the moral and ethical sort of citizen ideas about the risks and benefits of such technologies. Before they are in the marketplace, [the goal is to] understand and find a way to integrate the public’s ideas in the development process.

Here again is a link to the article.

Links to other posts in the Brains, prostheses, nanotechnology, and human enhancement five-part series:

Part two: BRAIN and ethics in the US with some Canucks (not the hockey team) participating (May 19, 2014)

Part three: Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society issued May 2014 by US Presidential Bioethics Commission (May 20, 2014)

Part four: Brazil, the 2014 World Cup kickoff, and a mind-controlled exoskeleton (May 20, 2014)

Part five: Brains, prostheses, nanotechnology, and human enhancement: summary (May 20, 2014)

* ‘neursocience’ corrected to ‘neuroscience’ on May 20, 2014.