
Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference. The first of two days coincides with IROS 2017 – one of the premier robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert, a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier (Note: Links have been removed),

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: If you’re in Vancouver on Oct. 26, 2017 and are interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.
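For readers who like to see ideas in code, the teleoperation-as-training process Braga describes is essentially what roboticists call learning from demonstration. Here is a deliberately tiny sketch of the idea, in Python. To be clear, this is my own toy illustration, not Kindred’s actual system; the situations, actions, and function names are all invented for the example,

```python
# Toy sketch of learning from demonstration (NOT Kindred's actual system):
# when the robot hits a scenario it can't handle, a human pilot takes over,
# and the robot records what the pilot did so it can reuse that action later.

demonstrations = []  # (situation, action) pairs recorded during teleoperation

def pilot_takes_over(situation, action):
    """Record what the human pilot did in a situation the robot couldn't handle."""
    demonstrations.append((situation, action))

def robot_policy(situation):
    """Reuse a demonstrated action if the situation was seen before; else escalate."""
    for seen, action in demonstrations:
        if seen == situation:
            return action
    return "request human pilot"

# A pilot demonstrates how to handle a tricky warehouse item once...
pilot_takes_over("item fallen over", "stand item upright, then grasp")

print(robot_policy("item fallen over"))    # learned from the pilot
print(robot_policy("item missing label"))  # novel situation: escalates to a human
```

Even this toy version shows where Gildert’s quandary comes from: whatever the pilot does, good habits or bad, goes straight into the robot’s repertoire.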

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release, (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’) or, as Jack Stilgoe and Toby Walsh note, technological determinism will hold sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Robots in Vancouver and in Canada (one of two)

This piece just started growing. It started with robot ethics, moved on to sexbots and news of an upcoming Canadian robotics roadmap. Then, it became a two-part posting with the robotics strategy (roadmap) moving to part two along with robots and popular culture and a further exploration of robot and AI ethics issues.

What is a robot?

There are lots of robots, some are macroscale and others are at the micro and nanoscales (see my Sept. 22, 2017 posting for the latest nanobot). Here’s a definition from the Robot Wikipedia entry that covers all the scales. (Note: Links have been removed),

A robot is a machine—especially one programmable by a computer— capable of carrying out a complex series of actions automatically.[2] Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.

Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. [emphasis mine] By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.

We may think we’ve invented robots but the idea has been around for a very long time (from the Robot Wikipedia entry; Note: Links have been removed),

Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus[18] (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the Cretan island of Europa from pirates.

In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) “applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures.”[19][20] In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called “The Pigeon”. Hero of Alexandria (10–70 AD), a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.[21]

The 11th century Lokapannatti tells of how the Buddha’s relics were protected by mechanical robots (bhuta vahana yanta) from the kingdom of Roma visaya (Rome), until they were disarmed by King Ashoka.[22][23]

In ancient China, the 3rd century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an ‘artificer’. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical ‘handiwork’ made of leather, wood, and artificial organs.[14] There are also accounts of flying automata in the Han Fei Zi and other texts, which attributes the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds (ma yuan) that could successfully fly.[17] In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours.

The beginning of automata is associated with early inventions such as Su Song’s astronomical clock tower, which featured mechanical figurines that chimed the hours.[24][25][26] His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.[26]

In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci’s notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo’s robot, able to sit up, wave its arms and move its head and jaw.[28] The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.

In Japan, complex animal and human automata were built between the 17th to 19th centuries, with many described in the 18th century Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet.[29] Different variations of the karakuri existed: the Butai karakuri, which were used in theatre, the Zashiki karakuri, which were small and used in homes, and the Dashi karakuri which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.

The term robot was coined by a Czech writer (from the Robot Wikipedia entry; Note: Links have been removed)

‘Robot’ was first applied as a term for artificial automata in a 1920 play R.U.R. by the Czech writer, Karel Čapek. However, Josef Čapek was named by his brother Karel as the true inventor of the term robot.[6][7] The word ‘robot’ itself was not new, having been in Slavic language as robota (forced laborer), a term which classified those peasants obligated to compulsory service under the feudal system widespread in 19th century Europe (see: Robot Patent).[37][38] Čapek’s fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal robota class eloquently fit the imagination of a new class of manufactured, artificial workers.

I’m particularly fascinated by how long humans have been imagining and creating robots.

Robot ethics in Vancouver

The Westender has run what I believe is the first article by a local (Vancouver, Canada) mainstream media outlet on the topic of robots and ethics. Tessa Vikander’s Sept. 14, 2017 article highlights two local researchers, Ajung Moon and Mark Schmidt, and a local social media company’s (Hootsuite) analytics director, Nik Pai. Vikander opens her piece with an ethical dilemma (Note: Links have been removed),

Emma is 68, in poor health and an alcoholic who has been told by her doctor to stop drinking. She lives with a care robot, which helps her with household tasks.

Unable to fix herself a drink, she asks the robot to do it for her. What should the robot do? Would the answer be different if Emma owns the robot, or if she’s borrowing it from the hospital?

This is the type of hypothetical, ethical question that Ajung Moon, director of the Open Roboethics Initiative [ORI], is trying to answer.

According to an ORI study, half of respondents said ownership should make a difference, and half said it shouldn’t. With society so torn on the question, Moon is trying to figure out how engineers should be programming this type of robot.

A Vancouver resident, Moon is dedicating her life to helping those in the decision-chair make the right choice. The question of the care robot is but one ethical dilemma in the quickly advancing world of artificial intelligence.

At the most sensationalist end of the scale, one form of AI that’s recently made headlines is the sex robot, which has a human-like appearance. A report from the Foundation for Responsible Robotics says that intimacy with sex robots could lead to greater social isolation [emphasis mine] because they desensitize people to the empathy learned through human interaction and mutually consenting relationships.

I’ll get back to the impact that robots might have on us in part two but first,

Sexbots, could they kill?

For more about sexbots in general, Alessandra Maldonado wrote an Aug. 10, 2017 article for salon.com about them (Note: A link has been removed),

Artificial intelligence has given people the ability to have conversations with machines like never before, such as speaking to Amazon’s personal assistant Alexa or asking Siri for directions on your iPhone. But now, one company has widened the scope of what it means to connect with a technological device and created a whole new breed of A.I. — specifically for sex-bots.

Abyss Creations has been in the business of making hyperrealistic dolls for 20 years, and by the end of 2017, they’ll unveil their newest product, an anatomically correct robotic sex toy. Matt McMullen, the company’s founder and CEO, explains the goal of sex robots is companionship, not only a physical partnership. “Imagine if you were completely lonely and you just wanted someone to talk to, and yes, someone to be intimate with,” he said in a video depicting the sculpting process of the dolls. “What is so wrong with that? It doesn’t hurt anybody.”

Maldonado also embedded this video into her piece,

A friend of mine described it as creepy. Specifically, we were discussing why someone would want to programme ‘insecurity’ as a desirable trait in a sexbot.

Marc Beaulieu’s concept of a desirable trait in a sexbot is one that won’t kill him, according to his Sept. 25, 2017 article on Canadian Broadcasting Corporation (CBC) News online (Note: Links have been removed),

Harmony has a charming Scottish lilt, albeit a bit staccato and canny. Her eyes dart around the room, her chin dips as her eyebrows raise in coquettish fashion. Her face manages expressions that are impressively lifelike. That face comes in 31 different shapes and 5 skin tones, with or without freckles and it sticks to her cyber-skull with magnets. Just peel it off and switch it out at will. In fact, you can choose Harmony’s eye colour, body shape (in great detail) and change her hair too. Harmony, of course, is a sex bot. A very advanced one. How advanced is she? Well, if you have $12,332 CAD to put towards a talkative new home appliance, REALBOTIX says you could be having a “conversation” and relations with her come January. Happy New Year.

Caveat emptor though: one novel bonus feature you might also get with Harmony is her ability to eventually murder you in your sleep. And not because she wants to.

Dr Nick Patterson, faculty of Science Engineering and Built Technology at Deakin University in Australia is lending his voice to a slew of others warning us to slow down and be cautious as we steadily approach Westworldian levels of human verisimilitude with AI tech. Surprisingly, Patterson didn’t regurgitate the narrative we recognize from the popular sci-fi (increasingly non-fi actually) trope of a dystopian society’s futile resistance to a robocalypse. He doesn’t think Harmony will want to kill you. He thinks she’ll be hacked by a code savvy ne’er-do-well who’ll want to snuff you out instead. …

Embedded in Beaulieu’s article is another video of the same sexbot profiled earlier. Her programmer seems to have learned a thing or two (he no longer inputs any traits as you’re watching),

I guess you could get one for Christmas this year if you’re willing to wait for an early 2018 delivery and aren’t worried about hackers turning your sexbot into a killer. While the killer aspect might seem farfetched, it turns out it’s not the only sexbot/hacker issue.

Sexbots as spies

This Oct. 5, 2017 story by Karl Bode for Techdirt points out that sex toys that are ‘smart’ can easily be hacked for any reason including some mischief (Note: Links have been removed),

One “smart dildo” manufacturer was recently forced to shell out $3.75 million after it was caught collecting, err, “usage habits” of the company’s customers. According to the lawsuit, Standard Innovation’s We-Vibe vibrator collected sensitive data about customer usage, including “selected vibration settings,” the device’s battery life, and even the vibrator’s “temperature.” At no point did the company apparently think it was a good idea to clearly inform users of this data collection.

But security is also lacking elsewhere in the world of internet-connected sex toys. Alex Lomas of Pentest Partners recently took a look at the security in many internet-connected sex toys, and walked away arguably unimpressed. Using a Bluetooth “dongle” and antenna, Lomas drove around Berlin looking for openly accessible sex toys (he calls it “screwdriving,” in a riff off of wardriving). He subsequently found it’s relatively trivial to discover and hijack everything from vibrators to smart butt plugs — thanks to the way Bluetooth Low Energy (BLE) connectivity works:

“The only protection you have is that BLE devices will generally only pair with one device at a time, but range is limited and if the user walks out of range of their smartphone or the phone battery dies, the adult toy will become available for others to connect to without any authentication. I should say at this point that this is purely passive reconnaissance based on the BLE advertisements the device sends out – attempting to connect to the device and actually control it without consent is not something I or you should do. But now one could drive the Hush’s motor to full speed, and as long as the attacker remains connected over BLE and not the victim, there is no way they can stop the vibrations.”
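Lomas’s point boils down to a very simple failure mode: the only ‘lock’ on these toys is that a BLE device holds one connection at a time, with no authentication at all, so whoever connects first when the owner drops off the link gains control. A toy model of that behaviour, in Python (this is purely illustrative, not real Bluetooth code; the class and names are invented for the example),

```python
# Toy model of the BLE behaviour Lomas describes (NOT real Bluetooth code):
# the device accepts one connection at a time and performs no authentication,
# so the only "protection" is that someone else got there first.

class UnauthenticatedBleToy:
    def __init__(self):
        self.connected_to = None  # at most one BLE connection at a time

    def connect(self, who):
        """No pairing PIN, no bonding: first come, first served."""
        if self.connected_to is None:
            self.connected_to = who
            return True
        return False  # slot already claimed by another connection

    def disconnect(self, who):
        if self.connected_to == who:
            self.connected_to = None

toy = UnauthenticatedBleToy()
toy.connect("owner's phone")
print(toy.connect("attacker"))   # False: blocked only while the owner is connected
toy.disconnect("owner's phone")  # owner walks out of range, or phone battery dies
print(toy.connect("attacker"))   # True: anyone nearby can now take control
```

The ‘fix’ is equally obvious in the model: require some form of authentication in `connect()` rather than relying on the single-connection limit.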

Does that make you think twice about a sexbot?

Robots and artificial intelligence

Getting back to the Vikander article (Sept. 14, 2017), Moon or Vikander or both seem to have conflated artificial intelligence with robots in this section of the article,

As for the building blocks that have thrust these questions [care robot quandary mentioned earlier] into the spotlight, Moon explains that AI in its basic form is when a machine uses data sets or an algorithm to make a decision.

“It’s essentially a piece of output that either affects your decision, or replaces a particular decision, or supports you in making a decision.” With AI, we are delegating decision-making skills or thinking to a machine, she says.

Although we’re not currently surrounded by walking, talking, independently thinking robots, the use of AI [emphasis mine] in our daily lives has become widespread.

For Vikander, the conflation may have been a matter of word count; for Moon, it may have been one of convenience, or a consequence of how the jargon is evolving, with ‘robot’ sometimes meaning a machine specifically, sometimes a machine with AI, and sometimes AI alone.

To be precise, not all robots have AI and not all AI is found in robots. The distinction may matter most to people developing robots and/or AI, but it also seems to make a difference where funding is concerned. In a March 24, 2017 posting about the 2017 Canadian federal budget I noticed this,

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

This brings me to a recent set of meetings held in Vancouver to devise a Canadian robotics roadmap, which suggests the robotics folks feel they need specific representation and funding.

See: part two for the rest.

May/June 2017 scienceish events in Canada (mostly in Vancouver)

I have five* events for this posting

(1) Science and You (Montréal)

The latest iteration of the Science and You conference took place May 4 – 6, 2017 at McGill University (Montréal, Québec). That’s the sad news; the good news is that they have recorded the sessions and released them on YouTube. (This is the first time the conference has been held outside of Europe; in fact, it’s usually held in France.) Here’s why you might be interested (from the 2017 conference page),

The animator of the conference will be Véronique Morin:

Véronique Morin is a science journalist and communicator, the first president of the World Federation of Science Journalists (WFSJ), and a judge for science communication awards. She has worked for a science program on Quebec’s public TV network, for CBC/Radio-Canada and TVOntario, and, as a freelancer, contributes to, among others, The Canadian Medical Journal, University Affairs magazine and NewsDeeply, while pursuing documentary projects.

Let’s talk about S …

Holding the attention of an audience full of teenagers may seem impossible… particularly on topics that might be seen as boring, like science! Yet it’s essential to demystify science in order to make it accessible, even appealing, in the eyes of future citizens.
How can we encourage young adults to ask themselves questions about the surrounding world, nature and science? How can we make them discover science with and without digital tools?

Find out tips and tricks used by our speakers Kristin Alford and Amanda Tyndall.

Kristin Alford
Dr Kristin Alford is a futurist and the inaugural Director of MOD., a futuristic museum of discovery at the University of South Australia. Her mind is presently occupied by the future of work and provoking young adults to ask questions about the role of science at the intersection of art and innovation.


Amanda Tyndall
Over 20 years of  science communication experience with organisations such as Café Scientifique, The Royal Institution of Great Britain (and Australia’s Science Exchange), the Science Museum in London and now with the Edinburgh International Science Festival. Particularly interested in engaging new audiences through linkages with the arts and digital/creative industries.


A troll in the room

Increasingly used by politicians, social media can reach thousands of people in a few seconds. Endlessly relayed, a message can seem truthful, but is it really? At a time of fake news and alternative facts, how can we, as communicators or journalists, take up the challenge of disinformation?
Discover the traps and tricks of disinformation in the age of digital technologies with our two fact-checking experts, Shawn Otto and Vanessa Schipani, who will offer concrete solutions for separating the true from the false.


Shawn Otto
Shawn Otto was awarded the IEEE-USA (“I-Triple-E”) National Distinguished Public Service Award for his work elevating science in America’s national public dialogue. He is cofounder and producer of the US presidential science debates at ScienceDebate.org. He is also an award-winning screenwriter and novelist, best known for writing and co-producing the Academy Award-nominated movie House of Sand and Fog.

Vanessa Schipani
Vanessa is a science journalist at FactCheck.org, which monitors U.S. politicians’ claims for accuracy. Previously, she wrote for outlets in the U.S., Europe and Japan, covering topics from quantum mechanics to neuroscience. She has bachelor’s degrees in zoology and philosophy and a master’s in the history and philosophy of science.

At 20,000 clicks from the extreme

Sharing daily life from a space station, a ship or a submarine: examples of social media use in extreme conditions are multiplying, and the public is asking for more. How can public tools be used to highlight practices and discoveries? How does a large organisation manage its use of social networks? What pitfalls should be avoided? What does this mean for citizens and researchers?
Find out with Philippe Archambault and Leslie Elliott, experts in extreme conditions.

Philippe Archambault

Professor Philippe Archambault is a marine ecologist at Laval University, the director of the Notre Golfe network and president of the 4th World Conference on Marine Biodiversity. His research on the influence of global changes on biodiversity and the functioning of ecosystems has taken him to all four corners of our oceans, from the Arctic to the Antarctic, through Papua New Guinea and French Polynesia.


Leslie Elliott

Leslie Elliott leads a team of communicators at Ocean Networks Canada in Victoria, British Columbia, home to Canada’s world-leading ocean observatories in the Pacific and Arctic Oceans. Audiences can join robots equipped with high definition cameras via #livedive to discover more about our ocean.


Science is not a joke!

Science and humour are two disciplines that might seem incompatible… and yet, as the Ig Nobels show, humour can prove to be an excellent way to communicate a scientific message. It can also prove quite challenging, since one needs the right tone and language to captivate the audience while communicating complex topics.

Patrick Baud and Brian Malow, both renowned science communicators, will give you the tools you need to capture your audience and convey a proper scientific message. You will be surprised how, even in science, a good dose of humour can make you laugh and think.

Patrick Baud
Patrick Baud is a French author who was born on June 30, 1979, in Avignon. He has been sharing for many years his passion for tales of fantasy, and the marvels and curiosities of the world, through different media: radio, web, novels, comic strips, conferences, and videos. His YouTube channel “Axolot”, was created in 2013, and now has over 420,000 followers.


Brian Malow
Brian Malow is Earth’s Premier Science Comedian (self-proclaimed).  Brian has made science videos for Time Magazine and contributed to Neil deGrasse Tyson’s radio show.  He worked in science communications at a museum, blogged for Scientific American, and trains scientists to be better communicators.


I don’t think they’ve managed to get everything up on YouTube yet but the material I’ve found has been subtitled (into French or English, depending on which language the speaker used).

Here are the opening day’s talks on YouTube with English subtitles or French subtitles when appropriate. You can also find some abstracts for the panel presentations here. I was particularly interested in this panel (S3 – The Importance of Reaching Out to Adults in Scientific Culture). Note: I have searched out the French-language descriptions for those unavailable in English,

Organized by Coeur des sciences, Université du Québec à Montréal (UQAM)
Animator: Valérie Borde, Freelance Science Journalist

Anouk Gingras, Musée de la civilisation, Québec
Official English text not available; translated from the French:

[Science at the Musée de la civilisation means:
• Some fifty exhibitions and discovery spaces
• Topical themes, tied to social issues, for exhibitions often aimed at adults
• Potential new audiences drawn by the Museum’s other (often non-scientific) themes
The exhibition Nanotechnologies : l’invisible révolution:
• A topical theme prompting reflection
• A sensitive subject that led to a polarized exhibition path: a choice between “yes” or “no” to the development of nanotechnologies for the future
• The use of various elements to bring the subject closer to the visitor

  • Nanotechnologies in science fiction
  • Everyday objects containing nanoparticles
  • Historical objects that make use of nanotechnologies
  • Various microscopes retracing the history of nanotechnologies

• A form of interaction prompting visitors’ reflection via a friendly object: a yellow plastic duck fitted with an RFID chip

  • Seven consultation stations inviting visitors to take a position and reflect on ethical questions tied to the development of nanotechnologies
  • Real-time compilation of the data
  • Personalized delivery of the results
  • A measure of how many visitors changed their opinion after visiting the exhibition

Attendance results:
• A young-adult audience was reached (51%)
• More men than women visited the exhibition
• The duck-based path prompts reflection and increases attention
• 3 out of 4 visitors take the duck; 92% complete the whole activity]

Marie Lambert-Chan, Québec Science
Capturing the attention of an adult readership: challenging mission, possible mission
Since 1962, Québec Science Magazine has been the only science magazine aimed at an adult readership in Québec. Our mission: covering topical subjects related to science and technology, as well as social issues from a scientific point of view. Each year, we print eight issues, with a circulation of 22,000 copies. The magazine has also received several awards and accolades. In 2017, Québec Science Magazine was honored at the Canadian Magazine Awards/Grands Prix du Magazine, named Best Magazine in the Science, Business and Politics category.
Although we have maintained a solid reputation among scientists and the media industry, our magazine is still relatively unknown to the general public. Why is that? How is it that, through all those years, we haven’t found the right angle to engage a broader readership?
We are still searching for definitive answers, but here are our observations:
Speaking science to adults is much more challenging than it is with children, who can marvel endlessly at the smallest things. Unfortunately, adults lose this capacity to marvel and wonder for various reasons: they have specific interests, they failed high-school science, or they don’t feel competent enough to understand scientific phenomena. How do we bring the wonder back? This is our mission. Not impossible, and hopefully soon to be accomplished. One noticeable example is the number of renowned scientists interviewed on the popular talk show Tout le monde en parle, leading us to believe the general public may have an interest in science.
However, to accomplish our mission, we have to recount science. According to the Bulgarian writer and blogger Maria Popova, great science writing should explain, elucidate and enchant. To explain: to make the information clear and comprehensible. To elucidate: to reveal all the interconnections between the pieces of information. To enchant: to go beyond the scientific terms and information and tell a story, thus giving a kaleidoscopic vision of the subject. This is how we intend to capture our readership’s attention.
Our team aims to meet this challenge, although, to be perfectly honest, it would be much easier with more resources, financial or human. We don’t lack ideas, however. We dream of major scientific investigations, conferences organized around themes from the magazine’s issues, web documentaries, podcasts… Such initiatives would give us the visibility we desperately crave.
That said, even in the best conditions, would we have more subscribers? Perhaps, but it isn’t assured. Even if our magazine is aimed at an adult readership, we are convinced that childhood and science go hand in hand, and that this early exposure can even be decisive for children’s futures. At the moment, school programs are not in place for continuous scientific development. It is possible to develop an interest in scientific culture as an adult, but it is much easier to achieve this level of curiosity if it was previously fostered.

Robert Lamontagne, Université de Montréal
Since the beginning of my career as an astrophysicist, I have been interested in scientific communication for non-specialist audiences. I have presented hundreds of lectures describing the phenomena of the cosmos. Initially, these were mainly offered in amateur astronomers’ clubs or in high schools and Cégeps. Over the last few years, I have migrated to more general adult audiences in the context of cultural activities such as the “Festival des Laurentides”, the Arts, Culture and Society activities in Repentigny, and the Université du troisième âge (UTA), or Seniors’ University.
The Quebec branch of the UTA, sponsored by the Université de Sherbrooke (UdeS), has existed since 1976. Seniors’ universities, created in Toulouse, France, are part of a worldwide movement. The UdeS and its seniors’ university antennas are members of the International Association of Universities of the Third Age (AIUTA). The UTA is made up of 28 antennas located in 10 regions and reaches more than 10,000 people per year. Antenna volunteers prepare educational programming by drawing on a catalog of courses, seminars and lectures covering subjects ranging from history and politics to health, science and the environment.
The UTA is aimed at people aged 50 and over who wish to continue their training and learn throughout their lives. It is an attentive, inquisitive, educated public and, given the demographics in Canada, its numbers are growing rapidly. This segment of the population is often well off and very involved in society.
I usually use a two-pronged approach:
• While remaining rigorous, the content is articulated around a few ideas, avoiding analytical expressions in favor of a qualitative description.
• The narrative framework, the story, allows me to contextualize the scientific content and forge links with the audience.

Sophie Malavoy, Coeur des sciences – UQAM

Many obstacles need to be overcome in order to reach out to adults, especially those who aren’t, in principle, interested in science:
• Competition from cultural activities such as theatre, movies, etc.
• The idea that science is complex and dull
• A feeling of incompetence: “I’ve always been bad at math and physics”
• A funding shortfall for activities that target adults
How to reach out to those adults?
• To put science into perspective: bring out its relevance by making links with current events and big issues (economy, health, environment, politics), and promote a transdisciplinary approach that includes the humanities and social sciences.
• To stake out originality by offering uncommon and playful experiences (scientific walks in the city, street performances, etc.)
• To build bridges between science and activities popular with the public (science/music; science/dance; science/theatre; science/sports; science/gastronomy; science/literature)
• To reach people through emotion, without sensationalism; to boost their curiosity and capacity for wonder.
• To put a human face on science by insisting not only on the results of research but on its process; to share the adventure lived by researchers.
• To bolster people’s sense of competence; to insist on the scientific method.
• To invite non-scientists (citizens’ groups, communities, consumers, etc.) into reflections on science issues (debates, etc.); to move from the dissemination of science to dialogue.

Didier Pourquery, The Conversation France
Official English text not available; translated from the French:

[Since its launch in September 2015, The Conversation France platform (2 million page views per month) has steadily grown its audience. According to a study conducted one year after launch, the readership structure was as follows
To hook adults and seniors, two angles are worthwhile; we use them on our site as well as in our daily newsletter (26,000 subscribers) and on our Facebook page (11,500 followers):
1/ Explain the news: give people the keys to understanding the scientific debates animating society; put science into the discussion (the site’s mission is to “nourish citizen debate with university expertise and research”). The idea is to pose simple comprehension questions at the moment they arise in public debate (during an election period, for example: what is populism? Explained by unimpeachable researchers from Sciences Po.)
Examples: understanding the climate conferences (COP21, COP22); understanding social debates (surrogacy); understanding the economy (universal basic income); understanding neurodegenerative diseases (Alzheimer’s), etc.
2/ Pique curiosity: apply classic formulas (did you know?) to surprising subjects (for example, “What does a dog see when it watches TV?” drew 96,000 page views), then play with these articles on social networks. Ask simple, surprising questions. For example: do you look like your first name? This very serious academic article tallied 95,000 page views in French and 171,000 in English.
3/ Generate engagement: do simple, useful citizen science. For example, calling on our readers to monitor the tiger-mosquito invasion across the country; that article drew 112,000 page views and was widely republished on other sites. Another example: calling on readers to photograph the bugs (punaises) in their environment.]

Here are my very brief and very rough translations. (1) Anouk Gingras is focused largely on a nanotechnology exhibit and whether or not visitors went through it and participated in various activities. She doesn’t seem specifically focused on science communication for adults but they are doing some very interesting and related work at Québec’s Museum of Civilization. (2) Didier Pourquery is describing an online initiative known as ‘The Conversation France’ (strange—why not La conversation France?). Moving on, there’s a website with a daily newsletter (blog?) and a Facebook page. They have two main projects, one is a discussion of current science issues in society, which is informed with and by experts but is not exclusive to experts, and more curiosity-based science questions and discussion such as What does a dog see when it watches television?

Serendipity! I hadn’t stumbled across this conference when I posted my May 12, 2017 piece on the ‘insanity’ of science outreach in Canada. It’s good to see I’m not the only one focused on science outreach for adults and that there is some action, although it seems to be a Québec-only effort.

(2) Ingenious—a book launch in Vancouver

The book will be launched on Thursday, June 1, 2017 at the Vancouver Public Library’s Central Branch (from the Ingenious: An Evening of Canadian Innovation event page)

Ingenious: An Evening of Canadian Innovation
Thursday, June 1, 2017 (6:30 pm – 8:00 pm)
Central Branch
Description

Gov. Gen. David Johnston and OpenText Corp. chair Tom Jenkins discuss Canadian innovation and their book Ingenious: How Canadian Innovators Made the World Smarter, Smaller, Kinder, Safer, Healthier, Wealthier and Happier.

Books will be available for purchase and signing.

Doors open at 6 p.m.

INGENIOUS : HOW CANADIAN INNOVATORS MADE THE WORLD SMARTER, SMALLER, KINDER, SAFER, HEALTHIER, WEALTHIER, AND HAPPIER

Address:

350 West Georgia St.
Vancouver, BC V6B 6B1


Location Details:

Alice MacKay Room, Lower Level

I do have a few more details about the authors and their book. First, there’s this from the Ottawa Writer’s Festival March 28, 2017 event page,

To celebrate Canada’s 150th birthday, Governor General David Johnston and Tom Jenkins have crafted a richly illustrated volume of brilliant Canadian innovations whose widespread adoption has made the world a better place. From Bovril to BlackBerrys, lightbulbs to liquid helium, peanut butter to Pablum, this is a surprising and incredibly varied collection to make Canadians proud, and a testament to our unique entrepreneurial spirit.

Successful innovation is always inspired by at least one of three forces — insight, necessity, and simple luck. Ingenious moves through history to explore what circumstances, incidents, coincidences, and collaborations motivated each great Canadian idea, and what twist of fate then brought that idea into public acceptance. Above all, the book explores what goes on in the mind of an innovator, and maps the incredible spectrum of personalities that have struggled to improve the lot of their neighbours, their fellow citizens, and their species.

From the marvels of aboriginal invention such as the canoe, snowshoe, igloo, dogsled, lifejacket, and bunk bed to the latest pioneering advances in medicine, education, philanthropy, science, engineering, community development, business, the arts, and the media, Canadians have improvised and collaborated their way to international admiration. …

Then, there’s this April 5, 2017 item on Canadian Broadcasting Corporation’s (CBC) news online,

From peanut butter to the electric wheelchair, the stories behind numerous life-changing Canadian innovations are detailed in a new book.

Gov. Gen. David Johnston and Tom Jenkins, chair of the National Research Council and former CEO of OpenText, are the authors of Ingenious: How Canadian Innovators Made the World Smarter, Smaller, Kinder, Safer, Healthier, Wealthier and Happier. The authors hope their book reinforces and extends the culture of innovation in Canada.

“We started wanting to tell 50 stories of Canadian innovators, and what has amazed Tom and myself is how many there are,” Johnston told The Homestretch on Wednesday. The duo ultimately chronicled 297 innovations in the book, including the pacemaker, life jacket and chocolate bars.

“Innovations are not just technological, not just business, but they’re social innovations as well,” Johnston said.

Many of those innovations, and the stories behind them, are not well known.

“We’re sort of a humble people,” Jenkins said. “We’re pretty quiet. We don’t brag, we don’t talk about ourselves very much, and so we then lead ourselves to believe as a culture that we’re not really good inventors, the Americans are. And yet we knew that Canadians were actually great inventors and innovators.”

‘Opportunities and challenges’

For Johnston, his favourite story in the book is on the light bulb.

“It’s such a symbol of both our opportunities and challenges,” he said. “The light bulb was invented in Canada, not the United States. It was two inventors back in the 1870s that realized that if you passed an electric current through a resistant metal it would glow, and they patented that, but then they didn’t have the money to commercialize it.”

American inventor Thomas Edison went on to purchase that patent and made changes to the original design.

Johnston and Jenkins are also inviting readers to share their own innovation stories, on the book’s website.

I’m looking forward to the talk and wondering if they’ve included the botox and cellulose nanocrystal (CNC) stories in the book. BTW, Tom Jenkins was the chair of a panel examining Canadian research and development and lead author of the panel’s report (Innovation Canada: A Call to Action) for the then Conservative government (it’s also known as the Jenkins report). You can find out more about it in my Oct. 21, 2011 posting.

(3) Made in Canada (Vancouver)

This is either fortuitous or there’s some very high-level planning involved in the ‘Made in Canada: Inspiring Creativity and Innovation’ show, which runs from April 21 – Sept. 4, 2017 at Vancouver’s Science World (also known as the Telus World of Science). From the Made in Canada: Inspiring Creativity and Innovation exhibition page,

Celebrate Canadian creativity and innovation, with Science World’s original exhibition, Made in Canada, presented by YVR [Vancouver International Airport] — where you drive the creative process! Get hands-on and build the fastest bobsled, construct a stunning piece of Vancouver architecture and create your own Canadian sound mashup, to share with friends.

Vote for your favourite Canadian inventions and test fly a plane of your design. Discover famous (and not-so-famous, but super neat) Canadian inventions. Learn about amazing, local innovations like robots that teach themselves, one-person electric cars and a computer that uses parallel universes.

Imagine what you can create here, eh!!

You can find more information here.

One quick question, why would Vancouver International Airport be presenting this show? I asked that question of Science World’s Communications Coordinator, Jason Bosher, and received this response,

YVR is the presenting sponsor. They donated money to the exhibition and they also contributed an exhibit for the “We Move” themed zone in the Made in Canada exhibition. The YVR exhibit details the history of the YVR airport, its geographic advantage and some of the planes they have seen there.

I also asked if there was any connection between this show and the ‘Ingenious’ book launch,

Some folks here are aware of the book launch. It has to do with the Canada 150 initiative and nothing to do with the Made in Canada exhibition, which was developed here at Science World. It is our own original exhibition.

So there you have it.

(4) Robotics, AI, and the future of work (Ottawa)

I’m glad to finally stumble across a Canadian event focusing on the topic of artificial intelligence (AI), robotics, and the future of work. Sadly (for me), this is taking place in Ottawa. Here are more details from the May 25, 2017 notice (received via email) from the Canadian Science Policy Centre (CSPC),

CSPC is Partnering with CIFAR [Canadian Institute for Advanced Research]
The Second Annual David Dodge Lecture

Join CIFAR and Senior Fellow Daron Acemoglu for
the Second Annual David Dodge CIFAR Lecture in Ottawa on June 13.
June 13, 2017 | 12 – 2 PM [emphasis mine]
Fairmont Château Laurier, Drawing Room | 1 Rideau St, Ottawa, ON
Along with the backlash against globalization and the outsourcing of jobs, concern is also growing about the effect that robotics and artificial intelligence will have on the labour force in advanced industrial nations. World-renowned economist Acemoglu, author of the best-selling book Why Nations Fail, will discuss how technology is changing the face of work and the composition of labour markets. Drawing on decades of data, Acemoglu explores the effects of widespread automation on manufacturing jobs, the changes we can expect from artificial intelligence technologies, and what responses to these changes might look like. This timely discussion will provide valuable insights for current and future leaders across government, civil society, and the private sector.

Daron Acemoglu is a Senior Fellow in CIFAR’s Institutions, Organizations & Growth program, and the Elizabeth and James Killian Professor of Economics at the Massachusetts Institute of Technology.

Tickets: $15 (A light lunch will be served.)

You can find a registration link here. Also, if you’re interested in the Canadian efforts in the field of artificial intelligence you can find more in my March 24, 2017 posting (scroll down about 25% of the way and then about 40% of the way) on the 2017 Canadian federal budget and science where I first noted the $93.7M allocated to CIFAR for launching a Pan-Canadian Artificial Intelligence Strategy.

(5) June 2017 edition of the Curiosity Collider Café (Vancouver)

This is an art/science (also known as art/sci and SciArt) event series that has taken place in Vancouver every few months since April 2015. Here’s more about the June 2017 edition (from the Curiosity Collider events page),

Collider Cafe

When
8:00pm on Wednesday, June 21st, 2017. Door opens at 7:30pm.

Where
Café Deux Soleils. 2096 Commercial Drive, Vancouver, BC (Google Map).

Cost
$5.00-10.00 cover at the door (sliding scale). Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events. Curiosity Collider is a registered BC non-profit organization.

***

#ColliderCafe is a space for artists, scientists, makers, and anyone interested in art+science. Meet, discover, connect, create. How do you explore curiosity in your life? Join us and discover how our speakers explore their own curiosity at the intersection of art & science.

The event will start promptly at 8pm (doors open at 7:30pm).

Enjoy!

*I changed ‘three’ events to ‘five’ events and added a number to each event for greater reading ease on May 31, 2017.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and a vice-president and engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article, coming up shortly, mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at Université de Montréal) testified at the US Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, Canada’s AI scene, Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president and engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge, graduating in 1970 with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s (March 31, 2017) earlier posting: China, US, and the race for artificial intelligence research domination.

A demonstration of quantum surrealism

The Canadian Institute for Advanced Research (CIFAR) has announced some intriguing new research results. A Feb. 19, 2016 news item on ScienceDaily gets the ball rolling,

New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there’s a catch — the tracks the particles follow do not always behave as one would expect from “realistic” trajectories, but often in a fashion that has been termed “surrealistic.”

A Feb. 19, 2016 CIFAR news release by Kurt Kleiner, which originated the news item, offers the kind of explanation that allows an amateur such as myself to understand the principles (while I’m reading it), thank you Kurt Kleiner,

In a new version of an old experiment, CIFAR Senior Fellow Aephraim Steinberg (University of Toronto) and colleagues tracked the trajectories of photons as the particles traced a path through one of two slits and onto a screen. But the researchers went further, and observed the “nonlocal” influence of another photon that the first photon had been entangled with.

The results counter a long-standing criticism of an interpretation of quantum mechanics called the De Broglie-Bohm theory. Detractors of this interpretation had faulted it for failing to explain the behaviour of entangled photons realistically. For Steinberg, the results are important because they give us a way of visualizing quantum mechanics that’s just as valid as the standard interpretation, and perhaps more intuitive.

“I’m less interested in focusing on the philosophical question of what’s ‘really’ out there. I think the fruitful question is more down to earth. Rather than thinking about different metaphysical interpretations, I would phrase it in terms of having different pictures. Different pictures can be useful. They can help shape better intuitions.”

At stake is what is “really” happening at the quantum level. The uncertainty principle tells us that we can never know both a particle’s position and momentum with complete certainty. And when we do interact with a quantum system, for instance by measuring it, we disturb the system. So if we fire a photon at a screen and want to know where it will hit, we’ll never know for sure exactly where it will hit or what path it will take to get there.

The standard interpretation of quantum mechanics holds that this uncertainty means that there is no “real” trajectory between the light source and the screen. The best we can do is to calculate a “wave function” that shows the odds of the photon being in any one place at any time, but won’t tell us where it is until we make a measurement.
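Stepping outside the quotation for a moment, the interference pattern the standard picture predicts is easy to sketch numerically: the complex amplitudes from the two slits add, and the squared magnitude gives the probability density on the screen. This is a toy model with invented numbers (wavelength, slit separation, screen distance), not the parameters of the actual experiment:

```python
import cmath
import math

# Toy two-slit model: complex amplitudes from the two slits add, and the
# squared magnitude of the sum gives the relative probability density on
# the screen. All numbers below are illustrative, not experimental values.
WAVELENGTH = 0.5      # arbitrary units
SLIT_SEP = 5.0
SCREEN_DIST = 1000.0

def intensity(x):
    """Relative probability density at screen position x."""
    k = 2 * math.pi / WAVELENGTH
    r1 = math.hypot(SCREEN_DIST, x - SLIT_SEP / 2)   # path length from slit 1
    r2 = math.hypot(SCREEN_DIST, x + SLIT_SEP / 2)   # path length from slit 2
    amplitude = cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)
    return abs(amplitude) ** 2

# At the centre the two paths are equal, so the amplitudes add in phase
# (a bright fringe); scanning toward the first dark fringe they nearly cancel.
print(intensity(0.0))
print(min(intensity(x / 100) for x in range(5000)))
```

The point of the toy is that the fringes come from adding amplitudes before squaring; squaring each slit’s amplitude separately and adding would wash the pattern out, which is what happens when a “which-slit” measurement is made.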

Yet another interpretation, called the De Broglie-Bohm theory, says that the photons do have real trajectories that are guided by a “pilot wave” that accompanies the particle. The wave is still probabilistic, but the particle takes a real trajectory from source to target. It doesn’t simply “collapse” into a particular location once it’s measured.

In 2011 Steinberg and his colleagues showed that they could follow trajectories for photons by subjecting many identical particles to measurements so weak that the particles were barely disturbed, and then averaging out the information. This method showed trajectories that looked similar to classical ones — say, those of balls flying through the air.

But critics had pointed out a problem with this viewpoint. Quantum mechanics also tells us that two particles can be entangled, so that a measurement of one particle affects the other. The critics complained that in some cases, a measurement of one particle would lead to an incorrect prediction of the trajectory of the entangled particle. They coined the term “surreal trajectories” to describe them.

In the most recent experiment, Steinberg and colleagues showed that the surrealism was a consequence of non-locality — the fact that the particles were able to influence one another instantaneously at a distance. In fact, the “incorrect” predictions of trajectories by the entangled photon were actually a consequence of where in their course the entangled particles were measured. Considering both particles together, the measurements made sense and were consistent with real trajectories.

Steinberg points out that both the standard interpretation of quantum mechanics and the De Broglie-Bohm interpretation are consistent with experimental evidence, and are mathematically equivalent. But it is helpful in some circumstances to visualize real trajectories, rather than wave function collapses, he says.

An image illustrating the work has been provided,


On the left, a still image from an animation of reconstructed trajectories for photons going through a double-slit. A second photon “measures” which slit each photon traversed, so no interference results on the screen. The image on the right shows the polarisation of this second, “probe.” Credit: Dylan Mahler Courtesy: CIFAR

Here’s a link to and a citation for the paper,

Experimental nonlocal and surreal Bohmian trajectories by Dylan H. Mahler, Lee Rozema, Kent Fisher, Lydia Vermeyden, Kevin J. Resch, Howard M. Wiseman, and Aephraim Steinberg. Science Advances 19 Feb 2016: Vol. 2, no. 2, e1501466 DOI: 10.1126/sciadv.1501466

This article appears to be open access.