Category Archives: robots

May/June 2017 scienceish events in Canada (mostly in Vancouver)

I have five* events for this posting

(1) Science and You (Montréal)

The latest iteration of the Science and You conference took place May 4 – 6, 2017 at McGill University (Montréal, Québec). That’s the sad news; the good news is that the sessions were recorded and released on YouTube. (This is the first time the conference has been held outside of Europe; in fact, it’s usually held in France.) Here’s why you might be interested (from the 2017 conference page),

The conference moderator will be Véronique Morin:

Véronique Morin is a science journalist and communicator, the first president of the World Federation of Science Journalists (WFSJ), and a judge for science communication awards. She has worked for a science program on Quebec’s public TV network, for CBC/Radio-Canada and TVOntario, and, as a freelancer, is a contributor to, among others, The Canadian Medical Journal, University Affairs magazine, and NewsDeeply, while pursuing documentary projects.

Let’s talk about S …

Holding the attention of an audience full of teenagers may seem impossible… particularly on topics that might be seen as boring, like science! Yet, it’s essential to demystify science in order to make it accessible, even appealing, in the eyes of future citizens.
How can we encourage young adults to ask themselves questions about the surrounding world, nature and science? How can we make them discover science with and without digital tools?

Find out the tips and tricks used by our speakers Kristin Alford and Amanda Tyndall.

Kristin Alford
Dr Kristin Alford is a futurist and the inaugural Director of MOD., a futuristic museum of discovery at the University of South Australia. Her mind is presently occupied by the future of work and provoking young adults to ask questions about the role of science at the intersection of art and innovation.

Internet Website

Amanda Tyndall
Amanda Tyndall has over 20 years of science communication experience with organisations such as Café Scientifique, The Royal Institution of Great Britain (and Australia’s Science Exchange), the Science Museum in London and now the Edinburgh International Science Festival. She is particularly interested in engaging new audiences through links with the arts and digital/creative industries.

Internet Website

A troll in the room

Increasingly used by politicians, social media can reach thousands of people in a few seconds. Endlessly relayed, the message seems truthful, but is it really? At a time of fake news and alternative facts, how can we, as communicators or journalists, take up the challenge of disinformation?
Discover the traps and tricks of disinformation in the age of digital technologies with our two fact-checking experts, Shawn Otto and Vanessa Schipani, who will offer concrete solutions for untangling the true from the false.

 

Shawn Otto
Shawn Otto was awarded the IEEE-USA (“I-Triple-E”) National Distinguished Public Service Award for his work elevating science in America’s national public dialogue. He is cofounder and producer of the US presidential science debates at ScienceDebate.org. He is also an award-winning screenwriter and novelist, best known for writing and co-producing the Academy Award-nominated movie House of Sand and Fog.

Vanessa Schipani
Vanessa is a science journalist at FactCheck.org, which monitors U.S. politicians’ claims for accuracy. Previously, she wrote for outlets in the U.S., Europe and Japan, covering topics from quantum mechanics to neuroscience. She has bachelor’s degrees in zoology and philosophy and a master’s in the history and philosophy of science.

At 20,000 clicks from the extreme

Sharing daily life from a space station, a ship or a submarine: examples of social media use in extreme conditions are multiplying, and the public is asking for more. How can public tools be used to highlight practices and discoveries? How should a large organisation manage its use of social networks? What pitfalls should be avoided? What does this mean for citizens and researchers?
Find out with Philippe Archambault and Leslie Elliott, experts in extreme conditions.

Philippe Archambault

Professor Philippe Archambault is a marine ecologist at Laval University, the director of the Notre Golfe network and president of the 4th World Conference on Marine Biodiversity. His research on the influence of global change on biodiversity and ecosystem functioning has taken him to all four corners of our oceans, from the Arctic to the Antarctic, by way of Papua New Guinea and French Polynesia.

Website

Leslie Elliott

Leslie Elliott leads a team of communicators at Ocean Networks Canada in Victoria, British Columbia, home to Canada’s world-leading ocean observatories in the Pacific and Arctic Oceans. Audiences can join robots equipped with high definition cameras via #livedive to discover more about our ocean.

Website

Science is not a joke!

Science and humour are two disciplines that might seem incompatible… and yet, as the Ig Nobels show, humour can prove to be an excellent way to communicate a scientific message. This, however, can be quite challenging, since one needs to employ the right tone and language to captivate the audience while simultaneously communicating complex topics.

Patrick Baud and Brian Malow, both renowned science communicators, will give you the tools you need to capture your audience and convey a proper scientific message. You will be surprised how, even in science, a good dose of humour can make you laugh and think.

Patrick Baud
Patrick Baud is a French author who was born on June 30, 1979, in Avignon. For many years he has been sharing his passion for tales of fantasy and for the marvels and curiosities of the world through different media: radio, web, novels, comic strips, conferences, and videos. His YouTube channel, “Axolot,” created in 2013, now has over 420,000 followers.

Internet Website
Youtube

Brian Malow
Brian Malow is Earth’s Premier Science Comedian (self-proclaimed).  Brian has made science videos for Time Magazine and contributed to Neil deGrasse Tyson’s radio show.  He worked in science communications at a museum, blogged for Scientific American, and trains scientists to be better communicators.

Internet Website
YouTube

I don’t think they’ve managed to get everything up on YouTube yet but the material I’ve found has been subtitled (into French or English, depending on which language the speaker used).

Here are the opening day’s talks on YouTube with English or French subtitles as appropriate. You can also find some abstracts for the panel presentations here. I was particularly interested in this panel (S3 – The Importance of Reaching Out to Adults in Scientific Culture). Note: I have searched out the French language descriptions for those unavailable in English,

Organized by Coeur des sciences, Université du Québec à Montréal (UQAM)
Moderator: Valérie Borde, Freelance Science Journalist

Anouk Gingras, Musée de la civilisation, Québec
Text not available in English; translated from the French:

[Science at the Musée de la civilisation means:
• Some fifty exhibitions and discovery spaces
• Topical themes, tied to social issues, for exhibitions often aimed at adults
• A potential for new audiences, linked to the Museum’s other (often non-scientific) themes
The exhibition Nanotechnologies: l’invisible révolution (Nanotechnologies: the invisible revolution):
• A topical theme prompting reflection
• A sensitive subject, leading to a polarized exhibition route: visitors choose between “yes” and “no” to the development of nanotechnologies for the future
• The use of various elements to bring the subject closer to the visitor

  • Nanotechnologies in science fiction
  • Everyday objects containing nanoparticles
  • Historical objects that used nanotechnologies
  • Various microscopes retracing the history of nanotechnologies

• A form of interaction prompting visitor reflection via a friendly object: a yellow plastic duck fitted with an RFID chip

  • Seven consultation stations inviting visitors to take a position and reflect on ethical questions raised by the development of nanotechnologies
  • Real-time compilation of the data
  • Personalized delivery of the results
  • A measure of how many visitors changed their opinion after visiting the exhibition

Attendance results:
• A young adult audience was reached (51%)
• More men than women visited the exhibition
• The duck route encourages reflection and increases attention
• 3 visitors out of 4 take the duck; 92% complete the entire activity]

Marie Lambert-Chan, Québec Science
Capturing the attention of an adult readership: challenging mission, possible mission
Since 1962, Québec Science Magazine has been the only science magazine aimed at an adult readership in Québec. Our mission: covering topical subjects related to science and technology, as well as social issues from a scientific point of view. Each year, we print eight issues, with a circulation of 22,000 copies. The magazine has received several awards and accolades; in 2017, Québec Science Magazine was honored by the Canadian Magazine Awards/Grands Prix du Magazine as Best Magazine in the Science, Business and Politics category.
Although we have maintained a solid reputation among scientists and the media industry, our magazine is still relatively unknown to the general public. Why is that? How is it that, through all those years, we haven’t found the right angle to engage a broader readership?
We are still searching for definitive answers, but here are our observations:
Speaking science to adults is much more challenging than it is with children, who can marvel endlessly at the smallest things. Unfortunately, adults lose this capacity to marvel and wonder for various reasons: they have specific interests, they failed high-school science, they don’t feel competent enough to understand scientific phenomena. How do we bring the wonder back? This is our mission. Not impossible, and hopefully soon to be accomplished. One noticeable example is the number of renowned scientists interviewed on the popular talk show Tout le monde en parle, leading us to believe the general public may have an interest in science.
However, to accomplish our mission, we have to recount science. According to the Bulgarian writer and blogger Maria Popova, great science writing should explain, elucidate and enchant. To explain: to make the information clear and comprehensible. To elucidate: to reveal all the interconnections between the pieces of information. To enchant: to go beyond the scientific terms and information and tell a story, thus giving a kaleidoscopic vision of the subject. This is how we intend to capture our readership’s attention.
Our team aims to meet this challenge. Although, to be perfectly honest, it would be much easier if we had more resources, financial or human. We don’t lack ideas, however. We dream of major scientific investigations, conferences organized around themes from the magazine’s issues, Web documentaries, podcasts… Such initiatives would give us the visibility we desperately crave.
That said, even in the best conditions, would we have more subscribers? Perhaps. But it isn’t assured. Even if our magazine is aimed at an adult readership, we are convinced that childhood and science go hand in hand, and that an early start is even decisive for children’s future. At the moment, school programs do not provide for continuous scientific development. It is possible to develop an interest in scientific culture as an adult, but it is much easier to achieve this level of curiosity if it was previously fostered.

Robert Lamontagne, Université de Montréal
Since the beginning of my career as an astrophysicist, I have been interested in communicating science to non-specialist audiences. I have presented hundreds of lectures describing the phenomena of the cosmos. Initially, these were mainly offered in amateur astronomers’ clubs or in high schools and Cégeps. Over the last few years, I have migrated to more general adult audiences in the context of cultural activities such as the “Festival des Laurentides”, the Arts, Culture and Society activities in Repentigny, and the Université du troisième âge (UTA), or Seniors’ University.
The Quebec branch of the UTA, sponsored by the Université de Sherbrooke (UdeS), has existed since 1976. Seniors’ universities, first created in Toulouse, France, are part of a worldwide movement. The UdeS and its seniors’ university antennas are members of the International Association of Universities of the Third Age (AIUTA). The UTA comprises 28 antennas located in 10 regions and reaches more than 10,000 people per year. Antenna volunteers prepare educational programming by drawing on a catalog of courses, seminars and lectures covering a diverse range of subjects, from history and politics to health, science and the environment.
The UTA is aimed at people aged 50 and over who wish to continue their training and learn throughout their lives. It is an attentive, inquisitive, educated public and, given Canadian demographics, its numbers are growing rapidly. This segment of the population is often well off and very involved in society.
I usually use a two-pronged approach:
• While remaining rigorous, the content is articulated around a few key ideas, avoiding analytical expressions in favor of a qualitative description.
• A narrative framework, the story, contextualizes the scientific content and forges links with the audience.

Sophie Malavoy, Coeur des sciences – UQAM

Many obstacles need to be overcome in order to reach out to adults, especially those who aren’t, in principle, interested in science:
• Competition from cultural activities such as theater, movies, etc.
• The idea that science is complex and dull
• A feeling of incompetence: “I’ve always been bad at math and physics”
• A funding shortfall for activities that target adults
How to reach out to those adults?
• To put science into perspective. To bring out its relevance by making links with current events and big issues (economic, health, environmental, political). To promote a transdisciplinary approach that includes the humanities and social sciences.
• To bet on originality by offering uncommon and playful experiences (scientific walks in the city, street performances, etc.)
• To build bridges between science and activities already popular with the public (science/music; science/dance; science/theater; science/sports; science/gastronomy; science/literature)
• To reach people through emotion, without sensationalism. To boost their curiosity and ability to wonder.
• To put a human face on science, by insisting not only on the results of research but on its process. To share the adventure lived by the researchers.
• To revive people’s feeling of competence. To insist on the scientific method.
• To invite non-scientists (citizens’ groups, communities, consumers, etc.) into reflections on science issues (debates, etc.). To move from dissemination of science to dialogue

Didier Pourquery, The Conversation France
Text not available in English; translated from the French:

[Since its launch in September 2015, The Conversation France platform (2 million page views per month) has steadily grown its audience. According to a study conducted one year after launch, the readership structure was as follows
To hook adults and seniors, two angles are interesting; we use them on our site as well as in our daily newsletter (26,000 subscribers) and our Facebook page (11,500 followers):
1/ Explain the news: give readers the keys to understanding the scientific debates animating society; put science into the discussion (the site’s mission is to “nourish citizen debate with academic and research expertise”). The idea is to pose simple comprehension questions at the moment they arise in public debate (during an election campaign, for example: what is populism? Explained by unimpeachable researchers from Sciences Po.)
Examples: understanding the climate conferences (COP21, COP22); understanding social debates (surrogacy); understanding the economy (universal basic income); understanding neurodegenerative diseases (Alzheimer’s), etc.
2/ Pique curiosity: apply classic formulas (did you know?) to surprising subjects (for example, “What does a dog see when it watches TV?” drew 96,000 page views); then play with these articles on social networks. Ask simple, surprising questions. For example: do you look like your first name? This very serious academic article racked up 95,000 page views in French and 171,000 in English.
3/ Spark engagement: do simple, useful citizen science. For example, we called on our readers to track the tiger mosquito invasion across the country; that article drew 112,000 page views and was widely republished on other sites. Another example: calling on readers to photograph the punaises (bugs) in their environment.]

Here are my very brief and very rough translations. (1) Anouk Gingras is focused largely on a nanotechnology exhibit and whether or not visitors went through it and participated in various activities. She doesn’t seem specifically focused on science communication for adults, but they are doing some very interesting and related work at Québec’s Museum of Civilization. (2) Didier Pourquery is describing an online initiative known as ‘The Conversation France’ (strange—why not La conversation France?). Moving on, there’s a website with a daily newsletter (blog?) and a Facebook page. They have two main projects: one is a discussion of current science issues in society, which is informed with and by experts but is not exclusive to experts; the other is more curiosity-based science questions and discussion, such as: What does a dog see when it watches television?

Serendipity! I hadn’t stumbled across this conference when I posted my May 12, 2017 piece on the ‘insanity’ of science outreach in Canada. It’s good to see I’m not the only one focused on science outreach for adults and that there is some action, although it seems to be a Québec-only effort.

(2) Ingenious—a book launch in Vancouver

The book will be launched on Thursday, June 1, 2017 at the Vancouver Public Library’s Central Branch (from the Ingenious: An Evening of Canadian Innovation event page)

Ingenious: An Evening of Canadian Innovation
Thursday, June 1, 2017 (6:30 pm – 8:00 pm)
Central Branch
Description

Gov. Gen. David Johnston and OpenText Corp. chair Tom Jenkins discuss Canadian innovation and their book Ingenious: How Canadian Innovators Made the World Smarter, Smaller, Kinder, Safer, Healthier, Wealthier and Happier.

Books will be available for purchase and signing.

Doors open at 6 p.m.

Address:

350 West Georgia St.
Vancouver, BC V6B 6B1

Location Details:

Alice MacKay Room, Lower Level

I do have a few more details about the authors and their book. First, there’s this from the Ottawa Writer’s Festival March 28, 2017 event page,

To celebrate Canada’s 150th birthday, Governor General David Johnston and Tom Jenkins have crafted a richly illustrated volume of brilliant Canadian innovations whose widespread adoption has made the world a better place. From Bovril to BlackBerrys, lightbulbs to liquid helium, peanut butter to Pablum, this is a surprising and incredibly varied collection to make Canadians proud, and a tribute to our unique entrepreneurial spirit.

Successful innovation is always inspired by at least one of three forces — insight, necessity, and simple luck. Ingenious moves through history to explore what circumstances, incidents, coincidences, and collaborations motivated each great Canadian idea, and what twist of fate then brought that idea into public acceptance. Above all, the book explores what goes on in the mind of an innovator, and maps the incredible spectrum of personalities that have struggled to improve the lot of their neighbours, their fellow citizens, and their species.

From the marvels of aboriginal invention such as the canoe, snowshoe, igloo, dogsled, lifejacket, and bunk bed to the latest pioneering advances in medicine, education, philanthropy, science, engineering, community development, business, the arts, and the media, Canadians have improvised and collaborated their way to international admiration. …

Then, there’s this April 5, 2017 item on Canadian Broadcasting Corporation’s (CBC) news online,

From peanut butter to the electric wheelchair, the stories behind numerous life-changing Canadian innovations are detailed in a new book.

Gov. Gen. David Johnston and Tom Jenkins, chair of the National Research Council and former CEO of OpenText, are the authors of Ingenious: How Canadian Innovators Made the World Smarter, Smaller, Kinder, Safer, Healthier, Wealthier and Happier. The authors hope their book reinforces and extends the culture of innovation in Canada.

“We started wanting to tell 50 stories of Canadian innovators, and what has amazed Tom and myself is how many there are,” Johnston told The Homestretch on Wednesday. The duo ultimately chronicled 297 innovations in the book, including the pacemaker, life jacket and chocolate bars.

“Innovations are not just technological, not just business, but they’re social innovations as well,” Johnston said.

Many of those innovations, and the stories behind them, are not well known.

“We’re sort of a humble people,” Jenkins said. “We’re pretty quiet. We don’t brag, we don’t talk about ourselves very much, and so we then lead ourselves to believe as a culture that we’re not really good inventors, the Americans are. And yet we knew that Canadians were actually great inventors and innovators.”

‘Opportunities and challenges’

For Johnston, his favourite story in the book is on the light bulb.

“It’s such a symbol of both our opportunities and challenges,” he said. “The light bulb was invented in Canada, not the United States. It was two inventors back in the 1870s that realized that if you passed an electric current through a resistant metal it would glow, and they patented that, but then they didn’t have the money to commercialize it.”

American inventor Thomas Edison went on to purchase that patent and made changes to the original design.

Johnston and Jenkins are also inviting readers to share their own innovation stories, on the book’s website.

I’m looking forward to the talk and am wondering if they’ve included the botox and cellulose nanocrystal (CNC) stories in the book. BTW, Tom Jenkins was the chair of a panel examining Canadian research and development and lead author of the panel’s report (Innovation Canada: A Call to Action) for the then Conservative government (it’s also known as the Jenkins report). You can find out more about it in my Oct. 21, 2011 posting.

(3) Made in Canada (Vancouver)

This is either fortuitous or there’s some very high level planning involved in the ‘Made in Canada; Inspiring Creativity and Innovation’ show which runs from April 21 – Sept. 4, 2017 at Vancouver’s Science World (also known as the Telus World of Science). From the Made in Canada; Inspiring Creativity and Innovation exhibition page,

Celebrate Canadian creativity and innovation, with Science World’s original exhibition, Made in Canada, presented by YVR [Vancouver International Airport] — where you drive the creative process! Get hands-on and build the fastest bobsled, construct a stunning piece of Vancouver architecture and create your own Canadian sound mashup, to share with friends.

Vote for your favourite Canadian inventions and test fly a plane of your design. Discover famous (and not-so-famous, but super neat) Canadian inventions. Learn about amazing, local innovations like robots that teach themselves, one-person electric cars and a computer that uses parallel universes.

Imagine what you can create here, eh!!

You can find more information here.

One quick question, why would Vancouver International Airport be presenting this show? I asked that question of Science World’s Communications Coordinator, Jason Bosher, and received this response,

 YVR is the presenting sponsor. They donated money to the exhibition and they also contributed an exhibit for the “We Move” themed zone in the Made in Canada exhibition. The YVR exhibit details the history of the YVR airport, its geographic advantage and some of the planes they have seen there.

I also asked if there was any connection between this show and the ‘Ingenious’ book launch,

Some folks here are aware of the book launch. It has to do with the Canada 150 initiative and nothing to do with the Made in Canada exhibition, which was developed here at Science World. It is our own original exhibition.

So there you have it.

(4) Robotics, AI, and the future of work (Ottawa)

I’m glad to finally stumble across a Canadian event focusing on the topic of artificial intelligence (AI), robotics and the future of work. Sadly (for me), this is taking place in Ottawa. Here are more details from the May 25, 2017 notice (received via email) from the Canadian Science Policy Centre (CSPC),

CSPC is Partnering with CIFAR [Canadian Institute for Advanced Research]
The Second Annual David Dodge Lecture

Join CIFAR and Senior Fellow Daron Acemoglu for
the Second Annual David Dodge CIFAR Lecture in Ottawa on June 13.
June 13, 2017 | 12 – 2 PM [emphasis mine]
Fairmont Château Laurier, Drawing Room | 1 Rideau St, Ottawa, ON
Along with the backlash against globalization and the outsourcing of jobs, concern is also growing about the effect that robotics and artificial intelligence will have on the labour force in advanced industrial nations. World-renowned economist Acemoglu, author of the best-selling book Why Nations Fail, will discuss how technology is changing the face of work and the composition of labour markets. Drawing on decades of data, Acemoglu explores the effects of widespread automation on manufacturing jobs, the changes we can expect from artificial intelligence technologies, and what responses to these changes might look like. This timely discussion will provide valuable insights for current and future leaders across government, civil society, and the private sector.

Daron Acemoglu is a Senior Fellow in CIFAR’s Institutions, Organizations & Growth program, and the Elizabeth and James Killian Professor of Economics at the Massachusetts Institute of Technology.

Tickets: $15 (A light lunch will be served.)

You can find a registration link here. Also, if you’re interested in the Canadian efforts in the field of artificial intelligence you can find more in my March 24, 2017 posting (scroll down about 25% of the way and then about 40% of the way) on the 2017 Canadian federal budget and science where I first noted the $93.7M allocated to CIFAR for launching a Pan-Canadian Artificial Intelligence Strategy.

(5) June 2017 edition of the Curiosity Collider Café (Vancouver)

This is an art/science event series (also called art/sci or SciArt) that has taken place in Vancouver every few months since April 2015. Here’s more about the June 2017 edition (from the Curiosity Collider events page),

Collider Cafe

When
8:00pm on Wednesday, June 21st, 2017. Door opens at 7:30pm.

Where
Café Deux Soleils. 2096 Commercial Drive, Vancouver, BC (Google Map).

Cost
$5.00-10.00 cover at the door (sliding scale). Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events. Curiosity Collider is a registered BC non-profit organization.

***

#ColliderCafe is a space for artists, scientists, makers, and anyone interested in art+science. Meet, discover, connect, create. How do you explore curiosity in your life? Join us and discover how our speakers explore their own curiosity at the intersection of art & science.

The event will start promptly at 8pm (doors open at 7:30pm).

Enjoy!

*I changed ‘three’ events to ‘five’ events and added a number to each event for greater reading ease on May 31, 2017.

An explanation of neural networks from the Massachusetts Institute of Technology (MIT)

I always enjoy the MIT ‘explainers’ and have been a little sad that I haven’t stumbled across one in a while. Until now, that is. Here’s an April 14, 2017 neural network ‘explainer’ (in its entirety) by Larry Hardesty (?),

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1944 by Warren McCullough and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

Weighty matters

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
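The weighted-sum-and-threshold behaviour of a single node, as described above, can be sketched in a few lines of Python (an illustrative sketch, not code from the article; the function name and values are my own):

```python
# A single neural-net node: weight each input, sum, then apply a hard threshold.
def node_output(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    # The node "fires" only if the weighted sum exceeds the threshold,
    # passing the sum along its outgoing connections; otherwise it sends 0.
    return total if total > threshold else 0.0

# A node with three incoming connections: the weighted sum is 0.37,
# which exceeds the 0.2 threshold, so the node fires.
out = node_output([1.0, 0.5, -0.2], [0.4, 0.3, 0.9], threshold=0.2)
```

A full feed-forward layer is just this computation repeated for every node, with each node holding its own weights and threshold.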

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
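
As a hedged illustration of that adjustment process, here is the classic perceptron learning rule. It has only a single layer of adjustable weights (modern nets adjust many layers via backpropagation), but it shows the same idea: start from arbitrary values and nudge the weights whenever the output disagrees with the training label.

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w = [0.0, 0.0]  # weights, initialised to zero for reproducibility
    b = 0.0         # bias, which plays the role of a negated threshold
    for _ in range(epochs):
        for (x1, x2), label in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - out       # +1, 0, or -1
            w[0] += lr * err * x1   # nudge each weight toward a correct answer
            w[1] += lr * err * x2
            b += lr * err           # nudge the threshold too
    return w, b

# Learn the logical AND function from four labelled examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])  # [0, 0, 0, 1]
```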

Minds and machines

The neural nets described by McCulloch and Pitts in 1943 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCulloch and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.

Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron’s design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.

Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled “Perceptrons,” which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.

“Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated — like, two layers,” Poggio says. But at the time, the book had a chilling effect on neural-net research.

“You have to put these things in historical context,” Poggio says. “They were arguing for programming — for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it’s not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing.”

Periodicity

By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.

But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.

In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.

The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.

Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.

Under the hood

The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.

The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.

There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.

This image from MIT illustrates a ‘modern’ neural network,

Most applications of deep learning use “convolutional” neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes (orange and green) of the next layer. Image: Jose-Luis Olivares/MIT

h/t phys.org April 17, 2017

One final note: I wish the folks at MIT had an ‘explainer’ archive. I’m not sure how to find any more ‘explainers’ on MIT’s website.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment with a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is the sort a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
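
The windowed co-occurrence counting described here can be sketched as follows. This is a toy illustration only; the actual GloVe implementation does considerably more, including training vector representations from the counts.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often each pair of words appears within `window`
    positions of one another; each pair is counted once, unordered."""
    counts = Counter()
    for i, word in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[tuple(sorted((word, tokens[j])))] += 1
    return counts

tokens = "the nurse spoke to the doctor and the doctor replied".split()
counts = cooccurrence_counts(tokens, window=2)
print(counts[("doctor", "the")])  # 3: the pair falls inside the window three times
```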

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
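
The association measure at work can be sketched with invented two-dimensional vectors (real experiments use high-dimensional embeddings trained on web text): a target word's bias score is its mean cosine similarity to one attribute set minus its mean similarity to the other.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(target, set_a, set_b, vectors):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    sim_a = sum(cosine(vectors[target], vectors[w]) for w in set_a) / len(set_a)
    sim_b = sum(cosine(vectors[target], vectors[w]) for w in set_b) / len(set_b)
    return sim_a - sim_b

vectors = {  # hypothetical 2-d embeddings, invented for illustration
    "flower": (0.9, 0.1), "insect": (0.1, 0.9),
    "pleasant": (0.8, 0.2), "unpleasant": (0.2, 0.8),
}
print(association("flower", ["pleasant"], ["unpleasant"], vectors) > 0)  # True
print(association("insect", ["pleasant"], ["unpleasant"], vectors) < 0)  # True
```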

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example arises when foreign languages are naively processed by machine learning programs, leading to gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science  14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186 DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016:  Accountability for artificial intelligence decision-making

Oct. 25, 2016 Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current uses of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Internet of toys, the robotification of childhood, and privacy issues

Leave it to the European Commission’s (EC) Joint Research Centre (JRC) to look into the future of toys. As far as I’m aware there are no such moves in either Canada or the US despite the ubiquity of robot toys and other such devices. From a March 23, 2017 EC JRC  press release (also on EurekAlert),

Action is needed to monitor and control the emerging Internet of Toys, concludes a new JRC report. Privacy and security are highlighted as main areas of concern.

Large numbers of connected toys have been put on the market over the past few years, and the turnover is expected to reach €10 billion by 2020 – up from just €2.6 billion in 2015.

Connected toys come in many different forms, from smart watches to teddy bears that interact with their users. They are connected to the internet and together with other connected appliances they form the Internet of Things, which is bringing technology into our daily lives more than ever.

However, the toys’ ability to record, store and share information about their young users raises concerns about children’s safety, privacy and social development.

A team of JRC scientists and international experts looked at the safety, security, privacy and societal questions emerging from the rise of the Internet of Toys. The report invites policymakers, industry, parents and teachers to study connected toys more in depth in order to provide a framework which ensures that these toys are safe and beneficial for children.

Robotification of childhood

Robots are no longer used only in industry to carry out repetitive or potentially dangerous tasks. In recent years, robots have entered our everyday lives, and children are increasingly likely to encounter robotic or artificial intelligence-enhanced toys.

We still know relatively little about the consequences of children’s interaction with robotic toys. However, it is conceivable that they represent both opportunities and risks for children’s cognitive, socio-emotional and moral-behavioural development.

For example, social robots may further the acquisition of foreign language skills by compensating for the lack of native speakers as language tutors or by removing the barriers and peer pressure encountered in the classroom. There is also evidence about the benefits of child-robot interaction for children with developmental problems, such as autism or learning difficulties, who may find human interaction difficult.

However, the internet-based personalization of children’s education via filtering algorithms may also increase the risk of ‘educational bubbles’ where children only receive information that fits their pre-existing knowledge and interest – similar to adult interaction on social media networks.

Safety and security considerations

The rapid rise in internet connected toys also raises concerns about children’s safety and privacy. In particular, the way that data gathered by connected toys is analysed, manipulated and stored is not transparent, which poses an emerging threat to children’s privacy.

The data provided by children while they play, i.e. the sounds, images and movements recorded by connected toys, is personal data protected by the EU data protection framework, as well as by the new General Data Protection Regulation (GDPR). However, information on how this data is stored, analysed and shared might be hidden in long privacy statements or policies and often goes unnoticed by parents.

Whilst children’s right to privacy is the most immediate concern linked to connected toys, there is also a long term concern: growing up in a culture where the tracking, recording and analysing of children’s everyday choices becomes a normal part of life is also likely to shape children’s behaviour and development.

Usage framework to guide the use of connected toys

The report calls for industry and policymakers to create a connected toys usage framework to act as a guide for their design and use.

This would also help toymakers to meet the challenge of complying with the new European General Data Protection Regulation (GDPR) which comes into force in May 2018, which will increase citizens’ control over their personal data.

The report also calls for the connected toy industry and academic researchers to work together to produce better designed and safer products.

Advice for parents

The report concludes that it is paramount that we understand how children interact with connected toys and which risks and opportunities they entail for children’s development.

“These devices come with really interesting possibilities and the more we use them, the more we will learn about how to best manage them. Locking them up in a cupboard is not the way to go. We as adults have to understand how they work – and how they might ‘misbehave’ – so that we can provide the right tools and the right opportunities for our children to grow up happy in a secure digital world,” said Stéphane Chaudron, the report’s lead researcher at the Joint Research Centre (JRC).

The authors of the report encourage parents to get informed about the capabilities, functions, security measures and privacy settings of toys before buying them. They also urge parents to focus on the quality of play by observing their children, talking to them about their experiences and playing alongside and with their children.

Protecting and empowering children

Through the Alliance to better protect minors online and with the support of UNICEF, NGOs, Toy Industries Europe and other industry and stakeholder groups, European and global ICT and media companies  are working to improve the protection and empowerment of children when using connected toys. This self-regulatory initiative is facilitated by the European Commission and aims to create a safer and more stimulating digital environment for children.

There’s an engaging video accompanying this press release,

You can find the report (Kaleidoscope on the Internet of Toys: Safety, security, privacy and societal insights) here and both the PDF and print versions are free (although I imagine you’ll have to pay postage for the print version). This report was published in 2016; the authors are Stéphane Chaudron, Rosanna Di Gioia, Monica Gemo, Donell Holloway, Jackie Marsh, Giovanna Mascheroni, Jochen Peter, Dylan Yamada-Rice and organizations involved include European Cooperation in Science and Technology (COST), Digital Literacy and Multimodal Practices of Young Children (DigiLitEY), and COST Action IS1410. DigiLitEY is a European network of 33 countries focusing on research in this area (2015-2019).

Solar-powered graphene skin for more feeling in your prosthetics

A March 23, 2017 news item on Nanowerk highlights research that could put feeling into a prosthetic limb,

A new way of harnessing the sun’s rays to power ‘synthetic skin’ could help to create advanced prosthetic limbs capable of returning the sense of touch to amputees.

Engineers from the University of Glasgow, who have previously developed an ‘electronic skin’ covering for prosthetic hands made from graphene, have found a way to use some of graphene’s remarkable physical properties to use energy from the sun to power the skin.

Graphene is a highly flexible form of graphite which, despite being just a single atom thick, is stronger than steel, electrically conductive, and transparent. It is graphene’s optical transparency, which allows around 98% of the light which strikes its surface to pass directly through it, which makes it ideal for gathering energy from the sun to generate power.

A March 23, 2017 University of Glasgow press release, which originated the news item, details more about the research,

Dr Ravinder Dahiya

A new research paper, published today in the journal Advanced Functional Materials, describes how Dr Dahiya and colleagues from his Bendable Electronics and Sensing Technologies (BEST) group have integrated power-generating photovoltaic cells into their electronic skin for the first time.

Dr Dahiya, from the University of Glasgow’s School of Engineering, said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors which carry signals from the skin to the brain.

“My colleagues and I have already made significant steps in creating prosthetic prototypes which integrate synthetic skin and are capable of making very sensitive pressure measurements. Those measurements mean the prosthetic hand is capable of performing challenging tasks like properly gripping soft materials, which other prosthetics can struggle with. We are also using innovative 3D printing strategies to build more affordable sensitive prosthetic limbs, including the formation of a very active student club called ‘Helping Hands’.

“Skin capable of touch sensitivity also opens the possibility of creating robots capable of making better decisions about human safety. A robot working on a construction line, for example, is much less likely to accidentally injure a human if it can feel that a person has unexpectedly entered their area of movement and stop before an injury can occur.”

The new skin requires just 20 nanowatts of power per square centimetre, which is easily met even by the poorest-quality photovoltaic cells currently available on the market. And although currently energy generated by the skin’s photovoltaic cells cannot be stored, the team are already looking into ways to divert unused energy into batteries, allowing the energy to be used as and when it is required.

Dr Dahiya added: “The other next step for us is to further develop the power-generation technology which underpins this research and use it to power the motors which drive the prosthetic hand itself. This could allow the creation of an entirely energy-autonomous prosthetic limb.

“We’ve already made some encouraging progress in this direction and we’re looking forward to presenting those results soon. We are also exploring the possibility of building on these exciting results to develop wearable systems for affordable healthcare. In this direction, recently we also got small funds from Scottish Funding Council.”

For more information about this advance and others in the field of prosthetics you may want to check out Megan Scudellari’s March 30, 2017 article for the IEEE’s (Institute of Electrical and Electronics Engineers) Spectrum (Note: Links have been removed),

Cochlear implants can restore hearing to individuals with some types of hearing loss. Retinal implants are now on the market to restore sight to the blind. But there are no commercially available prosthetics that restore a sense of touch to those who have lost a limb.

Several products are in development, including this haptic system at Case Western Reserve University, which would enable upper-limb prosthetic users to, say, pluck a grape off a stem or pull a potato chip out of a bag. It sounds simple, but such tasks are virtually impossible without a sense of touch and pressure.

Now, a team at the University of Glasgow that previously developed a flexible ‘electronic skin’ capable of making sensitive pressure measurements, has figured out how to power their skin with sunlight. …

Here’s a link to and a citation for the paper,

Energy-Autonomous, Flexible, and Transparent Tactile Skin by Carlos García Núñez, William Taube Navaraj, Emre O. Polat and Ravinder Dahiya. Advanced Functional Materials DOI: 10.1002/adfm.201606287 Version of Record online: 22 MAR 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.
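
That correction loop can be reduced to a one-weight sketch: compute the predictive error (actual output minus expected) and nudge the parameter to shrink it. Real networks repeat this across millions of weights via backpropagation; the names below are illustrative only.

```python
def fit_weight(pairs, lr=0.1, steps=100):
    """Fit y ≈ w * x by repeatedly correcting the prediction error."""
    w = 0.0
    for _ in range(steps):
        for x, y in pairs:
            error = w * x - y    # actual output minus expected output
            w -= lr * error * x  # gradient step that shrinks the squared error
    return w

w = fit_weight([(1.0, 2.0), (2.0, 4.0)])  # data generated by y = 2x
print(round(w, 3))  # 2.0
```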

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand, and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNN creations could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution on what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability Director, Risk Innovation Lab, School for the Future of Innovation in Society Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law, Professor of Law Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others listed on the conference homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5. DOI: 10.1186/s40504-017-0050-1. Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Does understanding your pet mean understanding artificial intelligence better?

Heather Roff’s take on artificial intelligence features an approach I haven’t seen before. From her March 30, 2017 essay for The Conversation (h/t March 31, 2017 news item on phys.org),

It turns out, though, that we already have a concept we can use when we think about AI: It’s how we think about animals. As a former animal trainer (albeit briefly) who now studies how people use AI, I know that animals and animal training can teach us quite a lot about how we ought to think about, approach and interact with artificial intelligence, both now and in the future.

Using animal analogies can help regular people understand many of the complex aspects of artificial intelligence. It can also help us think about how best to teach these systems new skills and, perhaps most importantly, how we can properly conceive of their limitations, even as we celebrate AI’s new possibilities.
Looking at constraints

As AI expert Maggie Boden explains, “Artificial intelligence seeks to make computers do the sorts of things that minds can do.” AI researchers are working on teaching computers to reason, perceive, plan, move and make associations. AI can see patterns in large data sets, predict the likelihood of an event occurring, plan a route, manage a person’s meeting schedule and even play war-game scenarios.

Many of these capabilities are, in themselves, unsurprising: Of course a robot can roll around a space and not collide with anything. But somehow AI seems more magical when the computer starts to put these skills together to accomplish tasks.

Thinking of AI as a trainable animal isn’t just useful for explaining it to the general public. It is also helpful for the researchers and engineers building the technology. If an AI scholar is trying to teach a system a new skill, thinking of the process from the perspective of an animal trainer could help identify potential problems or complications.

For instance, if I try to train my dog to sit, and every time I say “sit” the buzzer to the oven goes off, then my dog will begin to associate sitting not only with my command, but also with the sound of the oven’s buzzer. In essence, the buzzer becomes another signal telling the dog to sit, which is called an “accidental reinforcement.” If we look for accidental reinforcements or signals in AI systems that are not working properly, then we’ll know better not only what’s going wrong, but also what specific retraining will be most effective.

This requires us to understand what messages we are giving during AI training, as well as what the AI might be observing in the surrounding environment. The oven buzzer is a simple example; in the real world it will be far more complicated.
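Roff’s accidental-reinforcement point can be made concrete with a toy learner (my own illustration, not anything from her essay): train a plain logistic-regression model on data in which the “sit” command and the oven buzzer always co-occur, and the learner has no way to tell which signal matters.

```python
import numpy as np

# Training data mimics the dog example: the "sit" command (feature 0) is
# always accompanied by the oven buzzer (feature 1), so the two signals
# are perfectly confounded from the learner's point of view.
X = np.array([[1.0, 1.0]] * 50 + [[0.0, 0.0]] * 50)   # command+buzzer / neither
y = np.array([1.0] * 50 + [0.0] * 50)                 # 1 = the dog sits

# Plain logistic regression by gradient descent (a stand-in "learner").
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

# The learner credits the buzzer exactly as much as the command: the two
# learned weights are identical and positive.
print(w[0], w[1])
```

Because the two signals are perfectly confounded, the learner assigns them equal weight, so the buzzer alone raises its inclination to “sit”; spotting that spurious weight is exactly the kind of diagnosis the animal-trainer framing suggests.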

Before we welcome our AI overlords and hand over our lives and jobs to robots, we ought to pause and think about the kind of intelligences we are creating. …

Source: pixabay.com

It’s just last year (2016) that an AI system beat a human Go master player. Here’s how a March 17, 2016 article by John Russell for TechCrunch described the feat (Note: Links have been removed),

Much was written of an historic moment for artificial intelligence last week when a Google-developed AI beat one of the planet’s most sophisticated players of Go, an East Asia strategy game renowned for its deep thinking and strategy.

Go is viewed as one of the ultimate tests for an AI given the sheer possibilities on hand. “There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions [in the game] — that’s more than the number of atoms in the universe, and more than a googol times larger than chess,” Google said earlier this year.

If you missed the series — which AlphaGo, the AI, won 4-1 — or were unsure of exactly why it was so significant, Google summed the general importance up in a post this week.

Far from just being a game, Demis Hassabis, CEO and Co-Founder of DeepMind — the Google-owned company behind AlphaGo — said the AI’s development is proof that it can be used to solve problems in ways that humans may be not be accustomed or able to do:

We’ve learned two important things from this experience. First, this test bodes well for AI’s potential in solving other problems. AlphaGo has the ability to look “globally” across a board—and find solutions that humans either have been trained not to play or would not consider. This has huge potential for using AlphaGo-like technology to find solutions that humans don’t necessarily see in other areas.
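The quoted figure is easy to sanity-check: each of Go’s 361 points can be empty, black, or white, so 3^361 bounds the number of board configurations (the commonly cited count of legal positions, about 10^170, is somewhat smaller; the atoms and chess estimates below are the usual order-of-magnitude figures).

```python
# Upper bound on Go board configurations: 3 states per point, 361 points.
go_positions_bound = 3 ** 361
atoms_in_universe = 10 ** 80    # common order-of-magnitude estimate
chess_positions = 10 ** 47      # rough estimate of legal chess positions
googol = 10 ** 100

assert go_positions_bound > atoms_in_universe
assert go_positions_bound > googol * chess_positions
print(len(str(go_positions_bound)))   # -> 173 (digits, i.e. about 1.7e172)
```

Even this crude bound comfortably supports both of Google’s claims.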

I find Roff’s thesis intriguing and likely applicable in the short term, but in the longer term, given the attempts to create devices that mimic neural plasticity and the work in neuromorphic engineering, I don’t find it convincing.

Worm-inspired gel material and soft robots

The Nereis virens worm inspired new research out of the MIT Laboratory for Atomistic and Molecular Mechanics. Its jaw is made of soft organic material, but is as strong as harder materials such as human dentin. Photo: Alexander Semenov/Wikimedia Commons

What an amazing worm! Here’s more about robots inspired by the Nereis virens worm in a March 20, 2017 news item on Nanowerk,

A new material that naturally adapts to changing environments was inspired by the strength, stability, and mechanical performance of the jaw of a marine worm. The protein material, which was designed and modeled by researchers from the Laboratory for Atomistic and Molecular Mechanics (LAMM) in the Department of Civil and Environmental Engineering (CEE) [at the Massachusetts Institute of Technology {MIT}], and synthesized in collaboration with the Air Force Research Lab (AFRL) at Wright-Patterson Air Force Base, Ohio, expands and contracts based on changing pH levels and ion concentrations. It was developed by studying how the jaw of Nereis virens, a sand worm, forms and adapts in different environments.

The resulting pH- and ion-sensitive material is able to respond and react to its environment. Understanding this naturally-occurring process can be particularly helpful for active control of the motion or deformation of actuators for soft robotics and sensors without using external power supply or complex electronic controlling devices. It could also be used to build autonomous structures.

A March 20, 2017 MIT news release, which originated the news item, provides more detail,

“The ability of dramatically altering the material properties, by changing its hierarchical structure starting at the chemical level, offers exciting new opportunities to tune the material, and to build upon the natural material design towards new engineering applications,” wrote Markus J. Buehler, the McAfee Professor of Engineering, head of CEE, and senior author of the paper.

The research, recently published in ACS Nano, shows that depending on the ions and pH levels in the environment, the protein material expands and contracts into different geometric patterns. When the conditions change again, the material reverts back to its original shape. This makes it particularly useful for smart composite materials with tunable mechanics and self-powered robotics that use pH values and ion conditions to change material stiffness or generate functional deformations.

Finding inspiration in the strong, stable jaw of a marine worm

In order to create bio-inspired materials that can be used for soft robotics, sensors, and other uses — such as that inspired by the Nereis — engineers and scientists at LAMM and AFRL needed to first understand how these materials form in the Nereis worm, and how they ultimately behave in various environments. This understanding involved the development of a model that encompasses all different length scales from the atomic level, and is able to predict the material behavior. This model helps to fully understand the Nereis worm and its exceptional strength.

“Working with AFRL gave us the opportunity to pair our atomistic simulations with experiments,” said CEE research scientist Francisco Martin-Martinez. AFRL experimentally synthesized a hydrogel, a gel-like material made mostly of water, which is composed of recombinant Nvjp-1 protein responsible for the structural stability and impressive mechanical performance of the Nereis jaw. The hydrogel was used to test how the protein shrinks and changes behavior based on pH and ions in the environment.

The Nereis jaw is mostly made of organic matter, meaning it is a soft protein material with a consistency similar to gelatin. In spite of this, its hardness, reported to range between 0.4 and 0.8 gigapascals (GPa), is similar to that of harder materials like human dentin. “It’s quite remarkable that this soft protein material, with a consistency akin to Jell-O, can be as strong as calcified minerals that are found in human dentin and harder materials such as bones,” Buehler said.

At MIT, the researchers looked at the makeup of the Nereis jaw on a molecular scale to see what makes the jaw so strong and adaptive. At this scale, the metal-coordinated crosslinks, the presence of metal in its molecular structure, provide a molecular network that makes the material stronger and at the same time make the molecular bond more dynamic, and ultimately able to respond to changing conditions. At the macroscopic scale, these dynamic metal-protein bonds result in an expansion/contraction behavior.

Combining the protein structural studies from AFRL with the molecular understanding from LAMM, Buehler, Martin-Martinez, CEE Research Scientist Zhao Qin, and former PhD student Chia-Ching Chou ’15, created a multiscale model that is able to predict the mechanical behavior of materials that contain this protein in various environments. “These atomistic simulations help us to visualize the atomic arrangements and molecular conformations that underlay the mechanical performance of these materials,” Martin-Martinez said.

Specifically, using this model the research team was able to design, test, and visualize how different molecular networks change and adapt to various pH levels, taking into account the biological and mechanical properties.

By looking at the molecular and biological makeup of the Nereis virens and using the predictive model of the mechanical behavior of the resulting protein material, the LAMM researchers were able to more fully understand the protein material at different scales and provide a comprehensive understanding of how such protein materials form and behave in differing pH settings. This understanding guides new material designs for soft robots and sensors.

Identifying the link between environmental properties and movement in the material

The predictive model explained how the pH-sensitive materials change shape and behavior, which the researchers used for designing new pH-responsive geometric structures. Depending on the original geometric shape tested in the protein material and the properties surrounding it, the LAMM researchers found that the material either spirals or takes a Cypraea shell-like shape when the pH levels are changed. These are only some examples of the potential that this new material could have for developing soft robots, sensors, and autonomous structures.

Using the predictive model, the research team found that the material not only changes form, but it also reverts back to its original shape when the pH levels change. At the molecular level, histidine amino acids present in the protein bind strongly to the ions in the environment. This very local chemical reaction between amino acids and metal ions has an effect in the overall conformation of the protein at a larger scale. When environmental conditions change, the histidine-metal interactions change accordingly, which affect the protein conformation and in turn the material response.
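The pH “switch” in the histidine-metal story can be illustrated with the textbook Henderson-Hasselbalch relation (a back-of-the-envelope sketch, not the paper’s multiscale model; the pKa used is the standard value for histidine’s side chain, which in a real protein shifts with its local environment):

```python
# Histidine's imidazole side chain has a textbook pKa near 6.0; only the
# deprotonated form is free to coordinate metal ions such as Zn2+.
PKA_HIS = 6.0

def deprotonated_fraction(ph):
    """Henderson-Hasselbalch: fraction of side chains available to bind metal."""
    return 1.0 / (1.0 + 10.0 ** (PKA_HIS - ph))

for ph in (4.0, 6.0, 8.0):
    print(f"pH {ph}: {deprotonated_fraction(ph):.1%} available for metal binding")
    # roughly 1% at pH 4, 50% at pH 6, 99% at pH 8
```

A two-unit swing in pH moves the metal-binding population from about 1% to about 99%, which is why a modest environmental change can flip the material between conformations.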

“Changing the pH or changing the ions is like flipping a switch. You switch it on or off, depending on what environment you select, and the hydrogel expands or contracts,” said Martin-Martinez.

LAMM found that at the molecular level, the structure of the protein material is strengthened when the environment contains zinc ions and certain pH levels. This creates more stable metal-coordinated crosslinks in the material’s molecular structure, which makes the molecules more dynamic and flexible.

This insight into the material’s design and its flexibility is extremely useful for environments with changing pH levels. Its response of changing its figure to changing acidity levels could be used for soft robotics. “Most soft robotics require power supply to drive the motion and to be controlled by complex electronic devices. Our work toward designing of multifunctional material may provide another pathway to directly control the material property and deformation without electronic devices,” said Qin.

By studying and modeling the molecular makeup and the behavior of the primary protein responsible for the mechanical properties ideal for Nereis jaw performance, the LAMM researchers are able to link environmental properties to movement in the material and have a more comprehensive understanding of the strength of the Nereis jaw.

Here’s a link to and a citation for the paper,

Ion Effect and Metal-Coordinated Cross-Linking for Multiscale Design of Nereis Jaw Inspired Mechanomutable Materials by Chia-Ching Chou, Francisco J. Martin-Martinez, Zhao Qin, Patrick B. Dennis, Maneesh K. Gupta, Rajesh R. Naik, and Markus J. Buehler. ACS Nano, 2017, 11 (2), pp 1858–1868 DOI: 10.1021/acsnano.6b07878 Publication Date (Web): February 6, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Tree-on-a-chip

It’s usually organ-on-a-chip or lab-on-a-chip or human-on-a-chip; this is my first tree-on-a-chip.

Engineers have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and other plants. Courtesy: MIT

From a March 20, 2017 news item on phys.org,

Trees and other plants, from towering redwoods to diminutive daisies, are nature’s hydraulic pumps. They are constantly pulling water up from their roots to the topmost leaves, and pumping sugars produced by their leaves back down to the roots. This constant stream of nutrients is shuttled through a system of tissues called xylem and phloem, which are packed together in woody, parallel conduits.

Now engineers at MIT [Massachusetts Institute of Technology] and their collaborators have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and plants. Like its natural counterparts, the chip operates passively, requiring no moving parts or external pumps. It is able to pump water and sugars through the chip at a steady flow rate for several days. The results are published this week in Nature Plants.

A March 20, 2017 MIT news release by Jennifer Chu, which originated the news item, describes the work in more detail,

Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, says the chip’s passive pumping may be leveraged as a simple hydraulic actuator for small robots. Engineers have found it difficult and expensive to make tiny, movable parts and pumps to power complex movements in small robots. The team’s new pumping mechanism may enable robots whose motions are propelled by inexpensive, sugar-powered pumps.

“The goal of this work is cheap complexity, like one sees in nature,” Hosoi says. “It’s easy to add another leaf or xylem channel in a tree. In small robotics, everything is hard, from manufacturing, to integration, to actuation. If we could make the building blocks that enable cheap complexity, that would be super exciting. I think these [microfluidic pumps] are a step in that direction.”

Hosoi’s co-authors on the paper are lead author Jean Comtet, a former graduate student in MIT’s Department of Mechanical Engineering; Kaare Jensen of the Technical University of Denmark; and Robert Turgeon and Abraham Stroock, both of Cornell University.

A hydraulic lift

The group’s tree-inspired work grew out of a project on hydraulic robots powered by pumping fluids. Hosoi was interested in designing hydraulic robots at the small scale that could perform actions similar to much bigger robots like Boston Dynamics’ BigDog, a four-legged, Saint Bernard-sized robot that runs and jumps over rough terrain, powered by hydraulic actuators.

“For small systems, it’s often expensive to manufacture tiny moving pieces,” Hosoi says. “So we thought, ‘What if we could make a small-scale hydraulic system that could generate large pressures, with no moving parts?’ And then we asked, ‘Does anything do this in nature?’ It turns out that trees do.”

The general understanding among biologists has been that water, propelled by surface tension, travels up a tree’s channels of xylem, then diffuses through a semipermeable membrane and down into channels of phloem that contain sugar and other nutrients.

The more sugar there is in the phloem, the more water flows from xylem to phloem to balance out the sugar-to-water gradient, in a passive process known as osmosis. The resulting water flow flushes nutrients down to the roots. Trees and plants are thought to maintain this pumping process as more water is drawn up from their roots.

“This simple model of xylem and phloem has been well-known for decades,” Hosoi says. “From a qualitative point of view, this makes sense. But when you actually run the numbers, you realize this simple model does not allow for steady flow.”

In fact, engineers have previously attempted to design tree-inspired microfluidic pumps, fabricating parts that mimic xylem and phloem. But they found that these designs quickly stopped pumping within minutes.

It was Hosoi’s student Comtet who identified a third essential part to a tree’s pumping system: its leaves, which produce sugars through photosynthesis. Comtet’s model includes this additional source of sugars that diffuse from the leaves into a plant’s phloem, increasing the sugar-to-water gradient, which in turn maintains a constant osmotic pressure, circulating water and nutrients continuously throughout a tree.
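The role of the sugar source can be reproduced with a crude lumped model (my own sketch with made-up parameter values, not the authors’ model): water crosses the membrane at a rate proportional to the osmotic pressure of the sugar solution (van ’t Hoff), the outflow washes sugar away, and a leaf-like source replenishes it.

```python
R, T = 8.314, 298.0   # gas constant (J/mol/K) and temperature (K)
k = 1e-12             # membrane permeability coefficient (illustrative)
V = 1e-6              # phloem volume in m^3 (illustrative)
S0 = 1e-4             # initial moles of sugar in the phloem (illustrative)

def run(source, steps=1000, dt=1.0):
    """Euler-step the lumped pump; return the osmotic inflow at each step."""
    sugar, flows = S0, []
    for _ in range(steps):
        c = sugar / V                      # sugar concentration, mol/m^3
        flow = k * c * R * T               # van 't Hoff: inflow ~ osmotic pressure
        flows.append(flow)
        sugar += (source - c * flow) * dt  # leaf-like source vs. washout
    return flows

steady_source = k * R * T * (S0 / V) ** 2  # source that exactly balances washout
with_leaves = run(steady_source)           # steady flow, like the sugar cube
without_leaves = run(0.0)                  # decaying flow, like earlier designs
```

With the source present the flow holds steady indefinitely, as in the chip; without it, outflow dilutes the sugar and the flow dies away, as in the earlier xylem-and-phloem-only designs that stopped pumping within minutes.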

Running on sugar

With Comtet’s hypothesis in mind, Hosoi and her team designed their tree-on-a-chip, a microfluidic pump that mimics a tree’s xylem, phloem, and most importantly, its sugar-producing leaves.

To make the chip, the researchers sandwiched together two plastic slides, through which they drilled small channels to represent xylem and phloem. They filled the xylem channel with water, and the phloem channel with water and sugar, then separated the two slides with a semipermeable material to mimic the membrane between xylem and phloem. They placed another membrane over the slide containing the phloem channel, and set a sugar cube on top to represent the additional source of sugar diffusing from a tree’s leaves into the phloem. They hooked the chip up to a tube, which fed water from a tank into the chip.

With this simple setup, the chip was able to passively pump water from the tank through the chip and out into a beaker, at a constant flow rate for several days, as opposed to previous designs that only pumped for several minutes.

“As soon as we put this sugar source in, we had it running for days at a steady state,” Hosoi says. “That’s exactly what we need. We want a device we can actually put in a robot.”

Hosoi envisions that the tree-on-a-chip pump may be built into a small robot to produce hydraulically powered motions, without requiring active pumps or parts.

“If you design your robot in a smart way, you could absolutely stick a sugar cube on it and let it go,” Hosoi says.

This research was supported, in part, by the Defense Advanced Research Projects Agency [DARPA].

This research’s funding connection to DARPA reminded me that MIT has an Institute of Soldier Nanotechnologies.

Getting back to the tree-on-a-chip, here’s a link to and a citation for the paper,

Passive phloem loading and long-distance transport in a synthetic tree-on-a-chip by Jean Comtet, Kaare H. Jensen, Robert Turgeon, Abraham D. Stroock & A. E. Hosoi. Nature Plants 3, Article number: 17032 (2017). doi:10.1038/nplants.2017.32. Published online: 20 March 2017

This paper is behind a paywall.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence, (Note:  A link has been removed)

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are from Iran, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article coming up shortly mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at Université de Montréal) testified at the US Commission for the Study of Bioethical Issues Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the Canadian AI scene, Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president of engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and by smartphones with voice-assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)
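For the curious, the “learning” the BBC piece describes can be made a little more concrete with a toy example: a tiny two-layer network trained by gradient descent on XOR, the classic demonstration problem. This is my own minimal sketch; the layer sizes, learning rate, and iteration count are arbitrary choices, and it bears no resemblance in scale to the production systems mentioned above.

```python
# Toy neural network: two layers of sigmoid units trained by gradient
# descent (backpropagation) to fit XOR. Purely illustrative; deep-learning
# systems apply the same principle at vastly larger scale.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer, 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: chain rule on the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # weight updates, learning rate 1.0
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # trained outputs, should be close to [0, 1, 1, 0]
```

The network starts with random weights and, by repeatedly nudging them downhill on its prediction error, ends up computing a function none of its individual “neurons” could: that weight-nudging loop, scaled up enormously, is what powers the recommendation and speech systems the article mentions.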

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative taking place between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s (March 31, 2017) earlier posting: China, US, and the race for artificial intelligence research domination.