Category Archives: robots

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although neither the news item nor the news release explains how it was produced. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
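To make that training loop concrete, here is a minimal sketch in Python (using numpy) of a tiny two-layer network: inputs are fed through the layers, actual outputs are compared to expected ones, and the predictive error is corrected through repetition. The task, network sizes and learning rate are illustrative assumptions; the art-generating systems discussed in the release are vastly larger, but the underlying loop is the same.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # expected outputs (XOR)

W1 = rng.normal(size=(2, 8))   # first layer: simple features of the input
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # deeper layer: a more abstract combination
b2 = np.zeros(1)

for step in range(10_000):
    h = sigmoid(X @ W1 + b1)                 # layer 1 activations
    out = sigmoid(h @ W2 + b2)               # actual outputs
    error = out - y                          # predictive error vs expected outputs
    grad_out = error * out * (1 - out)       # backpropagate the error ...
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out               # ... and nudge the weights (repetition
    b2 -= 0.5 * grad_out.sum(axis=0)         #     and optimization)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2))   # outputs move toward the expected 0, 1, 1, 0
```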

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand, and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to their work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics will be held at the new Beus Center for Law & Society in Phoenix, AZ, May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 BioMed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy, 2017, 13:5. DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Does understanding your pet mean understanding artificial intelligence better?

Heather Roff’s take on artificial intelligence features an approach I haven’t seen before. From her March 30, 2017 essay for The Conversation (h/t March 31, 2017 news item on phys.org),

It turns out, though, that we already have a concept we can use when we think about AI: It’s how we think about animals. As a former animal trainer (albeit briefly) who now studies how people use AI, I know that animals and animal training can teach us quite a lot about how we ought to think about, approach and interact with artificial intelligence, both now and in the future.

Using animal analogies can help regular people understand many of the complex aspects of artificial intelligence. It can also help us think about how best to teach these systems new skills and, perhaps most importantly, how we can properly conceive of their limitations, even as we celebrate AI’s new possibilities.
Looking at constraints

As AI expert Maggie Boden explains, “Artificial intelligence seeks to make computers do the sorts of things that minds can do.” AI researchers are working on teaching computers to reason, perceive, plan, move and make associations. AI can see patterns in large data sets, predict the likelihood of an event occurring, plan a route, manage a person’s meeting schedule and even play war-game scenarios.

Many of these capabilities are, in themselves, unsurprising: Of course a robot can roll around a space and not collide with anything. But somehow AI seems more magical when the computer starts to put these skills together to accomplish tasks.

Thinking of AI as a trainable animal isn’t just useful for explaining it to the general public. It is also helpful for the researchers and engineers building the technology. If an AI scholar is trying to teach a system a new skill, thinking of the process from the perspective of an animal trainer could help identify potential problems or complications.

For instance, if I try to train my dog to sit, and every time I say “sit” the buzzer to the oven goes off, then my dog will begin to associate sitting not only with my command, but also with the sound of the oven’s buzzer. In essence, the buzzer becomes another signal telling the dog to sit, which is called an “accidental reinforcement.” If we look for accidental reinforcements or signals in AI systems that are not working properly, then we’ll know better not only what’s going wrong, but also what specific retraining will be most effective.

This requires us to understand what messages we are giving during AI training, as well as what the AI might be observing in the surrounding environment. The oven buzzer is a simple example; in the real world it will be far more complicated.
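Here is a toy sketch of “accidental reinforcement” in a statistical learner, with every detail assumed purely for illustration: during training, a spurious “buzzer” feature always accompanies the “sit” command, so the model spreads its learned weight across both signals and responds to the buzzer alone at test time.

```python
import numpy as np

# Toy "accidental reinforcement": during training, a spurious signal
# (the oven buzzer) always co-occurs with the intended command ("sit"),
# so the learner cannot tell which of the two signals actually matters.
rng = np.random.default_rng(1)
n = 200
command = rng.integers(0, 2, n)          # 1 = "sit" was said
buzzer = command.copy()                  # the buzzer happens to fire exactly when "sit" is said
X = np.column_stack([command, buzzer]).astype(float)
y = command.astype(float)                # desired behaviour: sit if and only if commanded

w = np.zeros(2)
for _ in range(500):                     # plain logistic-regression training loop
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

print(w)                                 # the learned weight is split between command and buzzer

# At "test" time the buzzer goes off with no command at all:
test = np.array([[0.0, 1.0]])
print(1 / (1 + np.exp(-test @ w)))       # well above 0.5: the model "sits" for the buzzer alone
```

Spotting that split weight is the retraining cue: presenting the command without the buzzer (and vice versa) during training would break the spurious association.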

Before we welcome our AI overlords and hand over our lives and jobs to robots, we ought to pause and think about the kind of intelligences we are creating. …

Source: pixabay.com

It was just last year (2016) that an AI system beat a human Go master. Here’s how a March 17, 2016 article by John Russell for TechCrunch described the feat (Note: Links have been removed),

Much was written of an historic moment for artificial intelligence last week when a Google-developed AI beat one of the planet’s most sophisticated players of Go, an East Asia strategy game renowned for its deep thinking and strategy.

Go is viewed as one of the ultimate tests for an AI given the sheer possibilities on hand. “There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions [in the game] — that’s more than the number of atoms in the universe, and more than a googol times larger than chess,” Google said earlier this year.

If you missed the series — which AlphaGo, the AI, won 4-1 — or were unsure of exactly why it was so significant, Google summed the general importance up in a post this week.

Far from just being a game, Demis Hassabis, CEO and Co-Founder of DeepMind — the Google-owned company behind AlphaGo — said the AI’s development is proof that it can be used to solve problems in ways that humans may not be accustomed or able to do:

We’ve learned two important things from this experience. First, this test bodes well for AI’s potential in solving other problems. AlphaGo has the ability to look “globally” across a board—and find solutions that humans either have been trained not to play or would not consider. This has huge potential for using AlphaGo-like technology to find solutions that humans don’t necessarily see in other areas.
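For a rough sense of where the enormous figure quoted above comes from: each of the 361 points on a 19×19 Go board can be empty, black or white, giving 3^361 board configurations as an upper bound (the number of legal positions is somewhat smaller but of a similar order). A few lines of back-of-the-envelope arithmetic, with the comparison values as rough assumptions of mine, reproduce the claims in the quote.

```python
# Back-of-the-envelope arithmetic behind the quoted Go figure.
# Assumption: the figure reflects the 3^361 upper bound on board
# configurations; the count of strictly legal positions is smaller
# but of a similar magnitude.
go_upper_bound = 3 ** (19 * 19)
print(f"3^361 is roughly 10^{len(str(go_upper_bound)) - 1}")   # about 10^172

atoms_in_universe = 10 ** 80     # common order-of-magnitude estimate
googol = 10 ** 100
chess_positions = 10 ** 47       # often-cited rough estimate for chess

print(go_upper_bound > atoms_in_universe)          # True
print(go_upper_bound > googol * chess_positions)   # True: "a googol times larger than chess"
```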

I find Roff’s thesis intriguing and likely applicable in the short term, but over the longer term, and in light of attempts to create devices that mimic neural plasticity through neuromorphic engineering, I don’t find it convincing.

Worm-inspired gel material and soft robots

The Nereis virens worm inspired new research out of the MIT Laboratory for Atomistic and Molecular Mechanics. Its jaw is made of soft organic material, but is as strong as harder materials such as human dentin. Photo: Alexander Semenov/Wikimedia Commons

What an amazing worm! Here’s more about robots inspired by the Nereis virens worm in a March 20, 2017 news item on Nanowerk,

A new material that naturally adapts to changing environments was inspired by the strength, stability, and mechanical performance of the jaw of a marine worm. The protein material, which was designed and modeled by researchers from the Laboratory for Atomistic and Molecular Mechanics (LAMM) in the Department of Civil and Environmental Engineering (CEE) [at the Massachusetts Institute of Technology {MIT}], and synthesized in collaboration with the Air Force Research Lab (AFRL) at Wright-Patterson Air Force Base, Ohio, expands and contracts based on changing pH levels and ion concentrations. It was developed by studying how the jaw of Nereis virens, a sand worm, forms and adapts in different environments.

The resulting pH- and ion-sensitive material is able to respond and react to its environment. Understanding this naturally-occurring process can be particularly helpful for active control of the motion or deformation of actuators for soft robotics and sensors without using external power supply or complex electronic controlling devices. It could also be used to build autonomous structures.

A March 20, 2017 MIT news release, which originated the news item, provides more detail,

“The ability of dramatically altering the material properties, by changing its hierarchical structure starting at the chemical level, offers exciting new opportunities to tune the material, and to build upon the natural material design towards new engineering applications,” wrote Markus J. Buehler, the McAfee Professor of Engineering, head of CEE, and senior author of the paper.

The research, recently published in ACS Nano, shows that depending on the ions and pH levels in the environment, the protein material expands and contracts into different geometric patterns. When the conditions change again, the material reverts to its original shape. This makes it particularly useful for smart composite materials with tunable mechanics and self-powered robotics that use pH value and ion condition to change the material stiffness or generate functional deformations.

Finding inspiration in the strong, stable jaw of a marine worm

In order to create bio-inspired materials that can be used for soft robotics, sensors, and other uses — such as that inspired by the Nereis — engineers and scientists at LAMM and AFRL needed to first understand how these materials form in the Nereis worm, and how they ultimately behave in various environments. This understanding involved the development of a model that encompasses all different length scales from the atomic level, and is able to predict the material behavior. This model helps to fully understand the Nereis worm and its exceptional strength.

“Working with AFRL gave us the opportunity to pair our atomistic simulations with experiments,” said CEE research scientist Francisco Martin-Martinez. AFRL experimentally synthesized a hydrogel, a gel-like material made mostly of water, which is composed of recombinant Nvjp-1 protein responsible for the structural stability and impressive mechanical performance of the Nereis jaw. The hydrogel was used to test how the protein shrinks and changes behavior based on pH and ions in the environment.

The Nereis jaw is mostly made of organic matter, meaning it is a soft protein material with a consistency similar to gelatin. In spite of this, its strength, which has been reported to have a hardness ranging between 0.4 and 0.8 gigapascals (GPa), is similar to that of harder materials like human dentin. “It’s quite remarkable that this soft protein material, with a consistency akin to Jell-O, can be as strong as calcified minerals that are found in human dentin and harder materials such as bones,” Buehler said.

At MIT, the researchers looked at the makeup of the Nereis jaw on a molecular scale to see what makes the jaw so strong and adaptive. At this scale, the metal-coordinated crosslinks, the presence of metal in its molecular structure, provide a molecular network that makes the material stronger and at the same time make the molecular bond more dynamic, and ultimately able to respond to changing conditions. At the macroscopic scale, these dynamic metal-protein bonds result in an expansion/contraction behavior.

Combining the protein structural studies from AFRL with the molecular understanding from LAMM, Buehler, Martin-Martinez, CEE Research Scientist Zhao Qin, and former PhD student Chia-Ching Chou ’15, created a multiscale model that is able to predict the mechanical behavior of materials that contain this protein in various environments. “These atomistic simulations help us to visualize the atomic arrangements and molecular conformations that underlay the mechanical performance of these materials,” Martin-Martinez said.

Specifically, using this model the research team was able to design, test, and visualize how different molecular networks change and adapt to various pH levels, taking into account the biological and mechanical properties.

By looking at the molecular and biological makeup of the Nereis virens and using the predictive model of the mechanical behavior of the resulting protein material, the LAMM researchers were able to more fully understand the protein material at different scales and provide a comprehensive understanding of how such protein materials form and behave in differing pH settings. This understanding guides new material designs for soft robots and sensors.

Identifying the link between environmental properties and movement in the material

The predictive model explained how the pH-sensitive materials change shape and behavior, which the researchers used for designing new pH-changing geometric structures. Depending on the original geometric shape tested in the protein material and the properties surrounding it, the LAMM researchers found that the material either spirals or takes a Cypraea shell-like shape when the pH levels are changed. These are only some examples of the potential that this new material could have for developing soft robots, sensors, and autonomous structures.

Using the predictive model, the research team found that the material not only changes form, but also reverts to its original shape when the pH levels change. At the molecular level, histidine amino acids present in the protein bind strongly to the ions in the environment. This very local chemical reaction between amino acids and metal ions has an effect on the overall conformation of the protein at a larger scale. When environmental conditions change, the histidine-metal interactions change accordingly, which affects the protein conformation and in turn the material response.

“Changing the pH or changing the ions is like flipping a switch. You switch it on or off, depending on what environment you select, and the hydrogel expands or contracts,” said Martin-Martinez.

LAMM found that at the molecular level, the structure of the protein material is strengthened when the environment contains zinc ions and certain pH levels. This creates more stable metal-coordinated crosslinks in the material’s molecular structure, which makes the molecules more dynamic and flexible.
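As a purely illustrative toy model of this switch-like behaviour (not the LAMM multiscale model; every functional form and parameter below is an assumption of mine), one can imagine stiffness scaling with the fraction of metal-coordinated crosslinks, which rises with zinc concentration inside a favourable pH window:

```python
import numpy as np

# Toy model only: stiffness is assumed to scale with the fraction of
# metal-coordinated crosslinks, taken to rise with zinc concentration
# (Hill-type binding) inside an assumed favourable pH window.
def crosslink_fraction(ph, zinc_mM, ph_opt=7.5, ph_width=1.0, k_half=1.0, hill=2.0):
    ph_factor = np.exp(-((ph - ph_opt) / ph_width) ** 2)                 # assumed pH window
    zinc_factor = zinc_mM ** hill / (k_half ** hill + zinc_mM ** hill)   # assumed Zn binding
    return ph_factor * zinc_factor

def stiffness_gpa(ph, zinc_mM, soft=0.05, hard=0.8):
    # interpolate between a soft swollen state and the upper end of the
    # jaw hardness reported above (~0.8 GPa); the mapping itself is assumed
    return soft + (hard - soft) * crosslink_fraction(ph, zinc_mM)

for ph in (4.0, 7.5, 10.0):
    print(ph, round(stiffness_gpa(ph, zinc_mM=2.0), 2))   # stiff only near the favourable pH
```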

This insight into the material’s design and its flexibility is extremely useful for environments with changing pH levels. Its response of changing shape with changing acidity levels could be used for soft robotics. “Most soft robotics require power supply to drive the motion and to be controlled by complex electronic devices. Our work toward designing of multifunctional material may provide another pathway to directly control the material property and deformation without electronic devices,” said Qin.

By studying and modeling the molecular makeup and the behavior of the primary protein responsible for the mechanical properties ideal for Nereis jaw performance, the LAMM researchers are able to link environmental properties to movement in the material and have a more comprehensive understanding of the strength of the Nereis jaw.

Here’s a link to and a citation for the paper,

Ion Effect and Metal-Coordinated Cross-Linking for Multiscale Design of Nereis Jaw Inspired Mechanomutable Materials by Chia-Ching Chou, Francisco J. Martin-Martinez, Zhao Qin, Patrick B. Dennis, Maneesh K. Gupta, Rajesh R. Naik, and Markus J. Buehler. ACS Nano, 2017, 11 (2), pp 1858–1868 DOI: 10.1021/acsnano.6b07878 Publication Date (Web): February 6, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Tree-on-a-chip

It’s usually organ-on-a-chip or lab-on-a-chip or human-on-a-chip; this is my first tree-on-a-chip.

Engineers have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and other plants. Courtesy: MIT

From a March 20, 2017 news item on phys.org,

Trees and other plants, from towering redwoods to diminutive daisies, are nature’s hydraulic pumps. They are constantly pulling water up from their roots to the topmost leaves, and pumping sugars produced by their leaves back down to the roots. This constant stream of nutrients is shuttled through a system of tissues called xylem and phloem, which are packed together in woody, parallel conduits.

Now engineers at MIT [Massachusetts Institute of Technology] and their collaborators have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and plants. Like its natural counterparts, the chip operates passively, requiring no moving parts or external pumps. It is able to pump water and sugars through the chip at a steady flow rate for several days. The results are published this week in Nature Plants.

A March 20, 2017 MIT news release by Jennifer Chu, which originated the news item, describes the work in more detail,

Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, says the chip’s passive pumping may be leveraged as a simple hydraulic actuator for small robots. Engineers have found it difficult and expensive to make tiny, movable parts and pumps to power complex movements in small robots. The team’s new pumping mechanism may enable robots whose motions are propelled by inexpensive, sugar-powered pumps.

“The goal of this work is cheap complexity, like one sees in nature,” Hosoi says. “It’s easy to add another leaf or xylem channel in a tree. In small robotics, everything is hard, from manufacturing, to integration, to actuation. If we could make the building blocks that enable cheap complexity, that would be super exciting. I think these [microfluidic pumps] are a step in that direction.”

Hosoi’s co-authors on the paper are lead author Jean Comtet, a former graduate student in MIT’s Department of Mechanical Engineering; Kaare Jensen of the Technical University of Denmark; and Robert Turgeon and Abraham Stroock, both of Cornell University.

A hydraulic lift

The group’s tree-inspired work grew out of a project on hydraulic robots powered by pumping fluids. Hosoi was interested in designing hydraulic robots at the small scale that could perform actions similar to much bigger robots like Boston Dynamics’ Big Dog, a four-legged, Saint Bernard-sized robot that runs and jumps over rough terrain, powered by hydraulic actuators.

“For small systems, it’s often expensive to manufacture tiny moving pieces,” Hosoi says. “So we thought, ‘What if we could make a small-scale hydraulic system that could generate large pressures, with no moving parts?’ And then we asked, ‘Does anything do this in nature?’ It turns out that trees do.”

The general understanding among biologists has been that water, propelled by surface tension, travels up a tree’s channels of xylem, then diffuses through a semipermeable membrane and down into channels of phloem that contain sugar and other nutrients.

The more sugar there is in the phloem, the more water flows from xylem to phloem to balance out the sugar-to-water gradient, in a passive process known as osmosis. The resulting water flow flushes nutrients down to the roots. Trees and plants are thought to maintain this pumping process as more water is drawn up from their roots.

“This simple model of xylem and phloem has been well-known for decades,” Hosoi says. “From a qualitative point of view, this makes sense. But when you actually run the numbers, you realize this simple model does not allow for steady flow.”

In fact, engineers have previously attempted to design tree-inspired microfluidic pumps, fabricating parts that mimic xylem and phloem. But they found that these designs quickly stopped pumping within minutes.

It was Hosoi’s student Comtet who identified a third essential part to a tree’s pumping system: its leaves, which produce sugars through photosynthesis. Comtet’s model includes this additional source of sugars that diffuse from the leaves into a plant’s phloem, increasing the sugar-to-water gradient, which in turn maintains a constant osmotic pressure, circulating water and nutrients continuously throughout a tree.
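A minimal sketch of Comtet’s point, under lumped-model assumptions of my own (ideal van ’t Hoff osmotic pressure, membrane flow proportional to that pressure, sugar flushed out with the flow): without a sugar source the phloem dilutes and pumping decays, while a constant “leaf” supply keeps the concentration, and hence the flow, steady.

```python
# Lumped toy model of the osmotic xylem -> phloem pump described above.
# Assumptions (mine, not the paper's): van 't Hoff osmotic pressure pi = c*R*T
# drives water across the membrane, the outflow flushes sugar toward the
# "roots", and the leaf is a constant sugar source. Parameter values are
# arbitrary but dimensionally plausible.
R_T = 8.314 * 295            # Pa * m^3 / mol
LpA = 1e-18                  # membrane permeability x area, m^3 / (s * Pa)
V = 1e-9                     # phloem compartment volume, m^3
dt, steps = 1.0, 20_000      # explicit Euler time-stepping

def flow_history(leaf_supply):        # leaf_supply: mol of sugar added per second
    n = 500.0 * V                     # start at 500 mol/m^3 of sugar in the phloem
    flows = []
    for _ in range(steps):
        c = n / V                     # sugar concentration in the phloem
        Q = LpA * R_T * c             # osmotic water influx (m^3/s)
        n += (leaf_supply - Q * c) * dt   # sugar in from the leaf, out with the flow
        flows.append(Q)
    return flows

no_leaf = flow_history(0.0)           # xylem + phloem only: the flow dies away
with_leaf = flow_history(6e-10)       # add the sugar source: the flow holds steady
print(no_leaf[-1] / no_leaf[0])       # << 1, pumping has largely stopped
print(with_leaf[-1] / with_leaf[0])   # close to 1, steady pumping
```

In the chip described below, the sugar cube sitting on the phloem channel plays exactly the role of that constant supply term, which is why the device keeps pumping for days rather than minutes.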

Running on sugar

With Comtet’s hypothesis in mind, Hosoi and her team designed their tree-on-a-chip, a microfluidic pump that mimics a tree’s xylem, phloem, and most importantly, its sugar-producing leaves.

To make the chip, the researchers sandwiched together two plastic slides, through which they drilled small channels to represent xylem and phloem. They filled the xylem channel with water, and the phloem channel with water and sugar, then separated the two slides with a semipermeable material to mimic the membrane between xylem and phloem. They placed another membrane over the slide containing the phloem channel, and set a sugar cube on top to represent the additional source of sugar diffusing from a tree’s leaves into the phloem. They hooked the chip up to a tube, which fed water from a tank into the chip.

With this simple setup, the chip was able to passively pump water from the tank through the chip and out into a beaker, at a constant flow rate for several days, as opposed to previous designs that only pumped for several minutes.

“As soon as we put this sugar source in, we had it running for days at a steady state,” Hosoi says. “That’s exactly what we need. We want a device we can actually put in a robot.”

Hosoi envisions that the tree-on-a-chip pump may be built into a small robot to produce hydraulically powered motions, without requiring active pumps or parts.

“If you design your robot in a smart way, you could absolutely stick a sugar cube on it and let it go,” Hosoi says.

This research was supported, in part, by the Defense Advanced Research Projects Agency [DARPA].

This research’s funding connection to DARPA reminded me that MIT has an Institute of Soldier Nanotechnologies.

Getting back to the tree-on-a-chip, here’s a link to and a citation for the paper,

Passive phloem loading and long-distance transport in a synthetic tree-on-a-chip by Jean Comtet, Kaare H. Jensen, Robert Turgeon, Abraham D. Stroock & A. E. Hosoi. Nature Plants 3, Article number: 17032 (2017)  doi:10.1038/nplants.2017.32 Published online: 20 March 2017

This paper is behind a paywall.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence, (Note:  A link has been removed)

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics does seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article coming up shortly mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at the Université de Montréal) testified at the US Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the Canadian AI scene, Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president of engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience  ‘brain drains’.

Finally, I wrote at length about a recent initiative taking place between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to scare people away from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s earlier posting (March 31, 2017): China, US, and the race for artificial intelligence research domination.

China, US, and the race for artificial intelligence research domination

John Markoff and Matthew Rosenberg have written a fascinating analysis of the competition between US and China regarding technological advances, specifically in the field of artificial intelligence. While the focus of the Feb. 3, 2017 NY Times article is military, the authors make it easy to extrapolate and apply the concepts to other sectors,

Robert O. Work, the veteran defense official retained as deputy secretary by President Trump, calls them his “A.I. dudes.” The breezy moniker belies their serious task: The dudes have been a kitchen cabinet of sorts, and have advised Mr. Work as he has sought to reshape warfare by bringing artificial intelligence to the battlefield.

Last spring, he asked, “O.K., you guys are the smartest guys in A.I., right?”

No, the dudes told him, “the smartest guys are at Facebook and Google,” Mr. Work recalled in an interview.

Now, increasingly, they’re also in China. The United States no longer has a strategic monopoly on the technology, which is widely seen as the key factor in the next generation of warfare.

The Pentagon’s plan to bring A.I. to the military is taking shape as Chinese researchers assert themselves in the nascent technology field. And that shift is reflected in surprising commercial advances in artificial intelligence among Chinese companies. [emphasis mine]

Having read Marshall McLuhan (de rigueur for any Canadian pursuing a degree in communications [sociology-based] anytime from the 1960s into the late 1980s [at least]), I took the movement of technology from military research to consumer applications as a standard. Television is a classic example but there are many others including modern plastic surgery. The first time I encountered the reverse (consumer-based technology being adopted by the military) was in a 2004 exhibition “Massive Change: The Future of Global Design” produced by Bruce Mau for the Vancouver (Canada) Art Gallery.

Markoff and Rosenberg develop their thesis further (Note: Links have been removed),

Last year, for example, Microsoft researchers proclaimed that the company had created software capable of matching human skills in understanding speech.

Although they boasted that they had outperformed their United States competitors, a well-known A.I. researcher who leads a Silicon Valley laboratory for the Chinese web services company Baidu gently taunted Microsoft, noting that Baidu had achieved similar accuracy with the Chinese language two years earlier.

That, in a nutshell, is the challenge the United States faces as it embarks on a new military strategy founded on the assumption of its continued superiority in technologies such as robotics and artificial intelligence.

First announced last year by Ashton B. Carter, President Barack Obama’s defense secretary, the “Third Offset” strategy provides a formula for maintaining a military advantage in the face of a renewed rivalry with China and Russia.

As consumer electronics manufacturing has moved to Asia, both Chinese companies and the nation’s government laboratories are making major investments in artificial intelligence.

The advance of the Chinese was underscored last month when Qi Lu, a veteran Microsoft artificial intelligence specialist, left the company to become chief operating officer at Baidu, where he will oversee the company’s ambitious plan to become a global leader in A.I.

The authors note some recent military moves (Note: Links have been removed),

In August [2016], the state-run China Daily reported that the country had embarked on the development of a cruise missile system with a “high level” of artificial intelligence. The new system appears to be a response to a missile the United States Navy is expected to deploy in 2018 to counter growing Chinese military influence in the Pacific.

Known as the Long Range Anti-Ship Missile, or L.R.A.S.M., it is described as a “semiautonomous” weapon. According to the Pentagon, this means that though targets are chosen by human soldiers, the missile uses artificial intelligence technology to avoid defenses and make final targeting decisions.

The new Chinese weapon typifies a strategy known as “remote warfare,” said John Arquilla, a military strategist at the Naval Postgraduate School in Monterey, Calif. The idea is to build large fleets of small ships that deploy missiles, to attack an enemy with larger ships, like aircraft carriers.

“They are making their machines more creative,” he said. “A little bit of automation gives the machines a tremendous boost.”

Whether or not the Chinese will quickly catch the United States in artificial intelligence and robotics technologies is a matter of intense discussion and disagreement in the United States.

Markoff and Rosenberg return to the world of consumer electronics as they finish their article on AI and the military (Note: Links have been removed),

Moreover, while there appear to be relatively cozy relationships between the Chinese government and commercial technology efforts, the same cannot be said about the United States. The Pentagon recently restarted its beachhead in Silicon Valley, known as the Defense Innovation Unit Experimental facility, or DIUx. It is an attempt to rethink bureaucratic United States government contracting practices in terms of the faster and more fluid style of Silicon Valley.

The government has not yet undone the damage to its relationship with the Valley brought about by Edward J. Snowden’s revelations about the National Security Agency’s surveillance practices. Many Silicon Valley firms remain hesitant to be seen as working too closely with the Pentagon out of fear of losing access to China’s market.

“There are smaller companies, the companies who sort of decided that they’re going to be in the defense business, like a Palantir,” said Peter W. Singer, an expert in the future of war at New America, a think tank in Washington, referring to the Palo Alto, Calif., start-up founded in part by the venture capitalist Peter Thiel. “But if you’re thinking about the big, iconic tech companies, they can’t become defense contractors and still expect to get access to the Chinese market.”

Those concerns are real for Silicon Valley.

If you have the time, I recommend reading the article in its entirety.

Impact of the US regime on thinking about AI?

A March 24, 2017 article by Daniel Gross for Slate.com hints that at least one high-level official in the Trump administration may be a little naïve in his understanding of AI and its impending impact on US society (Note: Links have been removed),

Treasury Secretary Steven Mnuchin is a sharp guy. He’s a (legacy) alumnus of Yale and Goldman Sachs, did well on Wall Street, and was a successful movie producer and bank investor. He’s good at, and willing to, put other people’s money at risk alongside some of his own. While he isn’t the least qualified person to hold the post of treasury secretary in 2017, he’s far from the best qualified. For in his 54 years on this planet, he hasn’t expressed or displayed much interest in economic policy, or in grappling with the big picture macroeconomic issues that are affecting our world. It’s not that he is intellectually incapable of grasping them; they just haven’t been in his orbit.

Which accounts for the inanity he uttered at an Axios breakfast Friday morning about the impact of artificial intelligence on jobs.

“it’s not even on our radar screen…. 50-100 more years” away, he said. “I’m not worried at all” about robots displacing humans in the near future, he said, adding: “In fact I’m optimistic.”

A.I. is already affecting the way people work, and the work they do. (In fact, I’ve long suspected that Mike Allen, Mnuchin’s Axios interlocutor, is powered by A.I.) I doubt Mnuchin has spent much time in factories, for example. But if he did, he’d see that machines and software are increasingly doing the work that people used to do. They’re not just moving goods through an assembly line, they’re soldering, coating, packaging, and checking for quality. Whether you’re visiting a GE turbine plant in South Carolina, or a cable-modem factory in Shanghai, the thing you’ll notice is just how few people there actually are. It’s why, in the U.S., manufacturing output rises every year while manufacturing employment is essentially stagnant. It’s why it is becoming conventional wisdom that automation is destroying more manufacturing jobs than trade. And now the prospect of dark factories, which can run without lights because there are no people in them, is starting to become a reality. The integration of A.I. into factories is one of the reasons Trump’s promise to bring back manufacturing employment is absurd. You’d think his treasury secretary would know something about that.

It goes far beyond manufacturing, of course. Programmatic advertising buying, Spotify’s recommendation engines, chatbots on customer service websites, Uber’s dispatching system—all of these are examples of A.I. doing the work that people used to do. …

Adding to Mnuchin’s lack of credibility on the topic of jobs and robots/AI, Matthew Rozsa’s March 28, 2017 article for Salon.com features a study from the US National Bureau of Economic Research (Note: Links have been removed),

A new study by the National Bureau of Economic Research shows that every fully autonomous robot added to an American factory has reduced employment by an average of 6.2 workers, according to a report by BuzzFeed. The study also found that for every fully autonomous robot per thousand workers, the employment rate dropped by 0.18 to 0.34 percentage points and wages fell by 0.25 to 0.5 percentage points.
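To make those coefficients a little more concrete, here is a quick back-of-the-envelope calculation in Python. The workforce size and robot count are my own hypothetical numbers, not figures from the study; the snippet simply applies the reported ranges at face value.

```python
# Back-of-the-envelope illustration (my own hypothetical numbers, not the
# study's) of what the reported coefficients imply for a local labour market.
workers = 500_000          # hypothetical commuting-zone workforce
new_robots = 1_000         # hypothetical number of robots added

# Headline figure: roughly 6.2 jobs lost per additional robot on average.
print(f"~{6.2 * new_robots:,.0f} jobs displaced (at 6.2 workers per robot)")

# Per-exposure coefficients: for each additional robot per thousand workers,
# the employment-to-population ratio falls 0.18-0.34 percentage points and
# wages fall 0.25-0.5 percent.
exposure = new_robots / (workers / 1_000)   # robots per thousand workers
print(f"exposure: {exposure:.1f} robots per thousand workers")
print(f"employment rate: -{0.18 * exposure:.2f} to -{0.34 * exposure:.2f} points")
print(f"wages:           -{0.25 * exposure:.2f}% to -{0.50 * exposure:.2f}%")
```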

I can’t help wondering whether the US Secretary of the Treasury’s obliviousness to what is going on in the workplace is representative of other top-tier officials such as the Secretary of Defense, the Secretary of Labor, etc. What is going to happen to US research in fields such as robotics and AI?

I have two more questions: in future, what happens to research which contradicts or makes a top-tier Trump government official look foolish? Will it be suppressed?

You can find the report, “Robots and Jobs: Evidence from US Labor Markets” by Daron Acemoglu and Pascual Restrepo (NBER [US National Bureau of Economic Research] Working Paper Series, Working Paper 23285, released March 2017), here. The introduction featured some new information for me; the term ‘technological unemployment’ was introduced in 1930 by John Maynard Keynes.

Moving from a wholly US-centric view of AI

Naturally, in a discussion about AI, it’s all about the US and the country considered its chief science rival, China, with a mention of its old rival, Russia. Europe did rate a mention, albeit as a totality. Having recently found out that Canadians were pioneers in a very important aspect of AI, machine learning, I feel obliged to mention it. You can find more about Canadian AI efforts in my March 24, 2017 posting (scroll down about 40% of the way) where you’ll find a very brief history and a mention of the funding for the newly launched Pan-Canadian Artificial Intelligence Strategy.

If any of my readers have information about AI research efforts in other parts of the world, please feel free to write them up in the comments.

Ishiguro’s robots and Swiss scientist question artificial intelligence at SXSW (South by Southwest) 2017

It seems unexpected to stumble across presentations on robots and on artificial intelligence at an entertainment conference such as South by Southwest (SXSW). Here’s why I thought so, from the SXSW Wikipedia entry (Note: Links have been removed),

South by Southwest (abbreviated as SXSW) is an annual conglomerate of film, interactive media, and music festivals and conferences that take place in mid-March in Austin, Texas, United States. It began in 1987, and has continued to grow in both scope and size every year. In 2011, the conference lasted for 10 days with SXSW Interactive lasting for 5 days, Music for 6 days, and Film running concurrently for 9 days.

Lifelike robots

The 2017 SXSW Interactive featured separate presentations by Japanese roboticist Hiroshi Ishiguro (mentioned here a few times) and by EPFL (École Polytechnique Fédérale de Lausanne; Switzerland) artificial intelligence expert Marcel Salathé.

Ishiguro’s work is the subject of Harry McCracken’s March 14, 2017 article for Fast Company (Note: Links have been removed),

I’m sitting in the Japan Factory pavilion at SXSW in Austin, Texas, talking to two other attendees about whether human beings are more valuable than robots. I say that I believe human life to be uniquely precious, whereupon one of the others rebuts me by stating that humans allow cars to exist even though they kill humans.

It’s a reasonable point. But my fellow conventioneer has a bias: It’s a robot itself, with an ivory-colored, mask-like face and visible innards. So is the third participant in the conversation, a much more human automaton modeled on a Japanese woman and wearing a black-and-white blouse and a blue scarf.

We’re chatting as part of a demo of technologies developed by the robotics lab of Hiroshi Ishiguro, based at Osaka University, and Japanese telecommunications company NTT. Ishiguro has gained fame in the field by creating increasingly humanlike robots—that is, androids—with the ultimate goal of eliminating the uncanny valley that exists between people and robotic people.

I also caught up with Ishiguro himself at the conference—his second SXSW—to talk about his work. He’s a champion of the notion that people will respond best to robots who simulate humanity, thereby creating “a feeling of presence,” as he describes it. That gives him and his researchers a challenge that encompasses everything from technology to psychology. “Our approach is quite interdisciplinary,” he says, which is what prompted him to bring his work to SXSW.

A SXSW attendee talks about robots with two robots.

If you have the time, do read McCracken’s piece in its entirety.

You can find out more about the ‘uncanny valley’ in my March 10, 2011 posting about Ishiguro’s work if you scroll down about 70% of the way to find the ‘uncanny valley’ diagram and Masahiro Mori’s description of the concept he developed.

You can read more about Ishiguro and his colleague, Ryuichiro Higashinaka, on their SXSW biography page.

Artificial intelligence (AI)

In a March 15, 2017 EPFL press release by Hilary Sanctuary, scientist Marcel Salathé poses the question: Is Reliable Artificial Intelligence Possible?,

In the quest for reliable artificial intelligence, EPFL scientist Marcel Salathé argues that AI technology should be openly available. He will be discussing the topic at this year’s edition of South by South West on March 14th in Austin, Texas.

Will artificial intelligence (AI) change the nature of work? For EPFL theoretical biologist Marcel Salathé, the answer is invariably yes. To him, a more fundamental question that needs to be addressed is who owns that artificial intelligence?

“We have to hold AI accountable, and the only way to do this is to verify it for biases and make sure there is no deliberate misinformation,” says Salathé. “This is not possible if the AI is privatized.”

AI is both the algorithm and the data

So what exactly is AI? It is generally regarded as “intelligence exhibited by machines”. Today, it is highly task specific, specially designed to beat humans at strategic games like Chess and Go, or diagnose skin disease on par with doctors’ skills.

On a practical level, AI is implemented through what scientists call “machine learning”, which means using a computer to run specifically designed software that can be “trained”, i.e. process data with the help of algorithms and to correctly identify certain features from that data set. Like human cognition, AI learns by trial and error. Unlike humans, however, AI can process and recall large quantities of data, giving it a tremendous advantage over us.

Crucial to AI learning, therefore, is the underlying data. For Salathé, AI is defined by both the algorithm and the data, and as such, both should be publicly available.

Deep learning algorithms can be perturbed

Last year, Salathé created an algorithm to recognize plant diseases. With more than 50,000 photos of healthy and diseased plants in the database, the algorithm uses artificial intelligence to diagnose plant diseases with the help of your smartphone. As for human disease, a recent study by a Stanford Group on cancer showed that AI can be trained to recognize skin cancer slightly better than a group of doctors. The consequences are far-reaching: AI may one day diagnose our diseases instead of doctors. If so, will we really be able to trust its diagnosis?

These diagnostic tools use data sets of images to train and learn. But visual data sets can be perturbed in ways that prevent deep learning algorithms from correctly classifying images. Deep neural networks are highly vulnerable to visual perturbations that are practically impossible to detect with the naked eye, yet cause the AI to misclassify images.

In future implementations of AI-assisted medical diagnostic tools, these perturbations pose a serious threat. More generally, the perturbations are real and may already be affecting the filtered information that reaches us every day. These vulnerabilities underscore the importance of certifying AI technology and monitoring its reliability.

h/t phys.org March 15, 2017 news item
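For readers who want a feel for what such a perturbation looks like in practice, here is a minimal, self-contained sketch in Python (NumPy only). It is not the EPFL group’s code, and it uses a toy logistic-regression classifier on synthetic data rather than a deep network, but it illustrates the general mechanism behind the gradient-sign attacks described in the research literature: train a model by trial and error, then nudge each input a small step in the direction that increases the model’s error, and watch accuracy collapse even though no single feature changes by much.

```python
# Toy illustration (not the EPFL code) of an adversarial perturbation against
# a simple linear classifier. The "images" here are synthetic feature vectors.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: 300 weakly informative features per example.
n, d = 2000, 300
X = np.vstack([rng.normal(-0.1, 1.0, (n // 2, d)),    # class 0
               rng.normal(+0.1, 1.0, (n // 2, d))])   # class 1
y = np.array([0] * (n // 2) + [1] * (n // 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression by gradient descent ("trial and error": compare
# predictions with labels, then nudge the weights to reduce the error).
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * np.mean(p - y)

# Accuracy is measured on the same data the model was trained on -- good
# enough for a toy demonstration.
clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"accuracy on clean inputs:     {clean_acc:.1%}")

# Gradient-sign perturbation: shift every input a small step (epsilon) in the
# direction that most increases its loss. Per feature the change is tiny
# compared with the noise in the data -- the analogue of changes that are
# "practically impossible to detect with the naked eye".
epsilon = 0.2
p = sigmoid(X @ w + b)
input_gradient = np.outer(p - y, w)        # d(loss)/d(input) for each example
X_adv = X + epsilon * np.sign(input_gradient)

adv_acc = np.mean((sigmoid(X_adv @ w + b) > 0.5) == y)
print(f"accuracy on perturbed inputs: {adv_acc:.1%}")
```

On a typical run the classifier’s accuracy falls from the mid-90s on the clean data to only a few percent on the perturbed copies, even though each feature has been shifted by just 0.2 against a noise level of 1.0. Deep networks operating on real images show the same kind of fragility, which is the vulnerability Salathé is pointing to.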

As I noted earlier, these are not the kind of presentations you’d expect at an ‘entertainment’ festival.

From flubber to thubber

Flubber (flying rubber) is an imaginary material that provided a plot point for two Disney science fiction comedies: The Absent-Minded Professor (1961), which was remade in 1997 as Flubber. By contrast, ‘thubber’ (thermally conductive rubber) is a real-life new material developed at Carnegie Mellon University (US).

A Feb. 13, 2017 news item on phys.org makes the announcement (Note: A link has been removed),

Carmel Majidi and Jonathan Malen of Carnegie Mellon University have developed a thermally conductive rubber material that represents a breakthrough for creating soft, stretchable machines and electronics. The findings were published in Proceedings of the National Academy of Sciences this week.

The new material, nicknamed “thubber,” is an electrically insulating composite that exhibits an unprecedented combination of metal-like thermal conductivity and elasticity similar to soft biological tissue, and it can stretch to over six times its initial length.

A Feb. 13, 2017 Carnegie Mellon University news release (also on EurekAlert), which originated the news item, provides more detail (Note: A link has been removed),

“Our combination of high thermal conductivity and elasticity is especially critical for rapid heat dissipation in applications such as wearable computing and soft robotics, which require mechanical compliance and stretchable functionality,” said Majidi, an associate professor of mechanical engineering.

Applications could extend to industries like athletic wear and sports medicine—think of lighted clothing for runners and heated garments for injury therapy. Advanced manufacturing, energy, and transportation are other areas where stretchable electronic material could have an impact.

“Until now, high power devices have had to be affixed to rigid, inflexible mounts that were the only technology able to dissipate heat efficiently,” said Malen, an associate professor of mechanical engineering. “Now, we can create stretchable mounts for LED lights or computer processors that enable high performance without overheating in applications that demand flexibility, such as light-up fabrics and iPads that fold into your wallet.”

The key ingredient in “thubber” is a suspension of non-toxic, liquid metal microdroplets. The liquid state allows the metal to deform with the surrounding rubber at room temperature. When the rubber is pre-stretched, the droplets form elongated pathways that are efficient for heat travel. Despite the amount of metal, the material is also electrically insulating.

To demonstrate these findings, the team mounted an LED light onto a strip of the material to create a safety lamp worn around a jogger’s leg. The “thubber” dissipated the heat from the LED, which would have otherwise burned the jogger. The researchers also created a soft robotic fish that swims with a “thubber” tail, without using conventional motors or gears.

“As the field of flexible electronics grows, there will be a greater need for materials like ours,” said Majidi. “We can also see it used for artificial muscles that power bio-inspired robots.”

Majidi and Malen acknowledge the efforts of lead authors Michael Bartlett, Navid Kazem, and Matthew Powell-Palm in performing this multidisciplinary work. They also acknowledge funding from the Air Force, NASA, and the Army Research Office.

Here’s a link to and a citation for the paper,

High thermal conductivity in soft elastomers with elongated liquid metal inclusions by Michael D. Bartlett, Navid Kazem, Matthew J. Powell-Palm, Xiaonan Huang, Wenhuan Sun, Jonathan A. Malen, and Carmel Majidi. Proceedings of the National Academy of Sciences (PNAS) DOI: 10.1073/pnas.1616377114

This paper is open access.
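As an aside, the effect of those elongated liquid-metal pathways can be roughed out with textbook composite bounds. The sketch below is my own illustration, not the authors’ model: it compares the Maxwell-Garnett estimate for isolated spherical droplets with the parallel rule-of-mixtures upper bound that elongated, aligned pathways approach. The conductivity values and volume fraction are assumed, rough literature-style figures, not numbers from the paper.

```python
# Rough, illustrative estimate (not the authors' model) of why elongating the
# liquid-metal droplets matters in a rubber/liquid-metal composite.
k_rubber = 0.2    # W/(m*K), typical silicone elastomer (assumed value)
k_metal = 25.0    # W/(m*K), rough value for a gallium-based liquid metal (assumed)
phi = 0.5         # volume fraction of liquid metal (assumed)

# Maxwell-Garnett estimate: isolated spherical droplets in a rubber matrix.
# (Strictly a dilute approximation; used here only for illustration.)
def maxwell_garnett(k_m, k_p, phi):
    return k_m * (k_p + 2 * k_m + 2 * phi * (k_p - k_m)) / \
                 (k_p + 2 * k_m - phi * (k_p - k_m))

# Parallel rule of mixtures: the upper bound, approached when inclusions are
# elongated into continuous pathways aligned with the heat flow.
def rule_of_mixtures(k_m, k_p, phi):
    return phi * k_p + (1 - phi) * k_m

print(f"spherical droplets : ~{maxwell_garnett(k_rubber, k_metal, phi):.2f} W/(m*K)")
print(f"aligned pathways   : ~{rule_of_mixtures(k_rubber, k_metal, phi):.2f} W/(m*K)")
```

The point is only directional: once the metal phase forms continuous pathways along the heat-flow direction, the composite’s conductivity moves from roughly rubber-like values toward a metal-weighted average, which is consistent with the pre-stretching effect described in the news release.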

New principles for AI (artificial intelligence) research along with some history and a plea for a democratic discussion

For almost a month I’ve been meaning to get to this Feb. 1, 2017 essay by Andrew Maynard (director of Risk Innovation Lab at Arizona State University) and Jack Stilgoe (science policy lecturer at University College London [UCL]) on the topic of artificial intelligence and principles (Note: Links have been removed). First, a walk down memory lane,

Today [Feb. 1, 2017] in Washington DC, leading US and UK scientists are meeting to share dispatches from the frontiers of machine learning – an area of research that is creating new breakthroughs in artificial intelligence (AI). Their meeting follows the publication of a set of principles for beneficial AI that emerged from a conference earlier this year at a place with an important history.

In February 1975, 140 people – mostly scientists, with a few assorted lawyers, journalists and others – gathered at a conference centre on the California coast. A magazine article from the time by Michael Rogers, one of the few journalists allowed in, reported that most of the four days’ discussion was about the scientific possibilities of genetic modification. Two years earlier, scientists had begun using recombinant DNA to genetically modify viruses. The Promethean nature of this new tool prompted scientists to impose a moratorium on such experiments until they had worked out the risks. By the time of the Asilomar conference, the pent-up excitement was ready to burst. It was only towards the end of the conference when a lawyer stood up to raise the possibility of a multimillion-dollar lawsuit that the scientists focussed on the task at hand – creating a set of principles to govern their experiments.

The 1975 Asilomar meeting is still held up as a beacon of scientific responsibility. However, the story told by Rogers, and subsequently by historians, is of scientists motivated by a desire to head-off top down regulation with a promise of self-governance. Geneticist Stanley Cohen said at the time, ‘If the collected wisdom of this group doesn’t result in recommendations, the recommendations may come from other groups less well qualified’. The mayor of Cambridge, Massachusetts was a prominent critic of the biotechnology experiments then taking place in his city. He said, ‘I don’t think these scientists are thinking about mankind at all. I think that they’re getting the thrills and the excitement and the passion to dig in and keep digging to see what the hell they can do’.

The concern in 1975 was with safety and containment in research, not with the futures that biotechnology might bring about. A year after Asilomar, Cohen’s colleague Herbert Boyer founded Genentech, one of the first biotechnology companies. Corporate interests barely figured in the conversations of the mainly university scientists.

Fast-forward 42 years and it is clear that machine learning, natural language processing and other technologies that come under the AI umbrella are becoming big business. The cast list of the 2017 Asilomar meeting included corporate wunderkinds from Google, Facebook and Tesla as well as researchers, philosophers, and other academics. The group was more intellectually diverse than their 1975 equivalents, but there were some notable absences – no public and their concerns, no journalists, and few experts in the responsible development of new technologies.

Maynard and Stilgoe offer a critique of the latest principles,

The principles that came out of the meeting are, at least at first glance, a comforting affirmation that AI should be ‘for the people’, and not to be developed in ways that could cause harm. They promote the idea of beneficial and secure AI, development for the common good, and the importance of upholding human values and shared prosperity.

This is good stuff. But it’s all rather Motherhood and Apple Pie: comforting and hard to argue against, but lacking substance. The principles are short on accountability, and there are notable absences, including the need to engage with a broader set of stakeholders and the public. At the early stages of developing new technologies, public concerns are often seen as an inconvenience. In a world in which populism appears to be trampling expertise into the dirt, it is easy to understand why scientists may be defensive.

I encourage you to read this thoughtful essay in its entirety although I do have one nit to pick: why only US and UK scientists? I imagine the answer may lie in funding and logistics issues but I find it surprising that the critique makes no mention of the international community as a nod to inclusion.

For anyone interested in the Asilomar AI principles (2017), you can find them here. You can also find videos of the two-day workshop The Frontiers of Machine Learning (a Raymond and Beverly Sackler USA-UK Scientific Forum hosted by the US National Academy of Sciences, Jan. 31 – Feb. 1, 2017) here; videos for each session are available on YouTube.

500-year history of robots exhibition at London’s (UK) Science Museum

Thanks to a Feb. 7, 2017 article by Benjamin Wheelock for Salon.com for the heads-up regarding the ‘Robots’ exhibit at the UK’s Science Museum in London.

Prior to the exhibition’s opening on Feb. 8, 2017, The Guardian has published a preview (more about that in a minute), a photo essay, and this video about the show,

I find the robot baby to be endlessly fascinating.

The Science Museum announced its then upcoming Feb. 8  – Sept. 3, 2017 exhibition on robots in a May ?, 2016 press release,

8 February – 3 September 2017, Science Museum, London
Admission: £15 adults, £13 concessions (Free entry for under 7s; family tickets available)
Tickets available in the Museum or via sciencemuseum.org.uk/robots
Supported by the Heritage Lottery Fund


Throughout history, artists and scientists have sought to understand what it means to be human. The Science Museum’s new Robots exhibition, opening in February 2017, will explore this very human obsession to recreate ourselves, revealing the remarkable 500-year story of humanoid robots.

Featuring a unique collection of over 100 robots, from a 16th-century mechanical monk to robots from science fiction and modern-day research labs, this exhibition will enable visitors to discover the cultural, historical and technological context of humanoid robots. Visitors will be able to interact with some of the 12 working robots on display. Among many other highlights will be an articulated iron manikin from the 1500s, Cygan, a 2.4m tall 1950s robot with a glamorous past, and one of the first walking bipedal robots.

Robots have been at the heart of popular culture since the word ‘robot’ was first used in 1920, but their fascinating story dates back many centuries. Set in five different periods and places, this exhibition will explore how robots and society have been shaped by religious belief, the industrial revolution, 20th century popular culture and dreams about the future.

The quest to build ever more complex robots has transformed our understanding of the human body, and today robots are becoming increasingly human, learning from mistakes and expressing emotions. In the exhibition, visitors will go behind the scenes to glimpse recent developments from robotics research, exploring how roboticists are building robots that resemble us and interact in human-like ways. The exhibition will end by asking visitors to imagine what a shared future with robots might be like. Robots has been generously supported by the Heritage Lottery Fund, with a £100,000 grant from the Collecting Cultures programme.

Ian Blatchford, Director of the Science Museum Group said: ‘This exhibition explores the uniquely human obsession of recreating ourselves, not through paint or marble but in metal. Seeing robots through the eyes of those who built or gazed in awe at them reveals much about humanity’s hopes, fears and dreams.’

‘The latest in our series of ambitious, blockbuster exhibitions, Robots explores the wondrously rich culture, history and technology of humanoid robotics. Last year we moved gigantic spacecraft from Moscow to the Museum, but this year we will bring a robot back to life.’

Today [May ?, 2016] the Science Museum launched a Kickstarter campaign to rebuild Eric, the UK’s first robot. Originally built in 1928 by Captain Richards & A.H. Reffell, Eric was one of the world’s first robots. Built less than a decade after the word robot was first used, he travelled the globe with his makers and amazed crowds in the UK, US and Europe, before disappearing forever.

[The campaign was successful.]

You can find out more about Eric on the museum’s ‘Eric: The UK’s first robot’ webpage,

Getting back to the exhibition, the Guardian’s Ian Sample has written up a Feb. 7, 2017 preview (Note: Links have been removed),

Eric the robot wowed the crowds. He stood and bowed and answered questions as blue sparks shot from his metallic teeth. The British creation was such a hit he went on tour around the world. When he arrived in New York, in 1929, a theatre nightwatchman was so alarmed he pulled out a gun and shot at him.

The curators at London’s Science Museum hope for a less extreme reaction when they open Robots, their latest exhibition, on Wednesday [Feb. 8, 2017]. The collection of more than 100 objects is a treasure trove of delights: a miniature iron man with moving joints; a robotic swan that enthralled Mark Twain; a tiny metal woman with a wager cup who is propelled by a mechanism hidden up her skirt.

The pieces are striking and must have dazzled in their day. Ben Russell, the lead curator, points out that most people would not have seen a clock when they first clapped eyes on one exhibit, a 16th century automaton of a monk [emphasis mine], who trundled along, moved his lips, and beat his chest in contrition. It was surely mesmerising to the audiences of 1560. “Arthur C Clarke once said that any sufficiently advanced technology is indistinguishable from magic,” Russell says. “Well, this is where it all started.”

In every chapter of the 500-year story, robots have held a mirror to human society. Some of the earliest devices brought the Bible to life. One model of Christ on the cross rolls his head and oozes wooden blood from his side as four figures reach up. The mechanisation of faith must have drawn the congregations as much as any sermon.

But faith was not the only focus. Through clockwork animals and human figurines, model makers explored whether humans were simply conscious machines. They brought order to the universe with orreries and astrolabes. The machines became more lighthearted in the enlightened 18th century, when automatons of a flute player, a writer, and a defecating duck all made an appearance. A century later, the style was downright rowdy, with drunken aristocrats, preening dandies and the disturbing life of a sausage from farm to mouth all being recreated as automata.

That reference to an automaton of a monk reminded me of a July 22, 2009 posting where I excerpted a passage (from another blog) about a robot priest and a robot monk,

Since 1993 Robo-Priest has been on call 24-hours a day at Yokohama Central Cemetery. The bearded robot is programmed to perform funerary rites for several Buddhist sects, as well as for Protestants and Catholics. Meanwhile, Robo-Monk chants sutras, beats a religious drum and welcomes the faithful to Hotoku-ji, a Buddhist temple in Kakogawa city, Hyogo Prefecture. More recently, in 2005, a robot dressed in full samurai armour received blessings at a Shinto shrine on the Japanese island of Kyushu. Kiyomori, named after a famous 12th-century military general, prayed for the souls of all robots in the world before walking quietly out of Munakata Shrine.

Sample’s preview takes the reader up to our own age and contemporary robots. And there is another Guardian article offering a behind-the-scenes look at the then upcoming exhibition, a Jan. 28, 2016 piece by Jonathan Jones,

An android toddler lies on a pallet, its doll-like face staring at the ceiling. On a shelf rests a much more grisly creation that mixes imitation human bones and muscles, with wires instead of arteries and microchips in place of organs. It has no lower body, and a single Cyclopean eye. This store room is an eerie place, then it gets more creepy, as I glimpse behind the anatomical robot a hulking thing staring at me with glowing red eyes. Its plastic skin has been burned off to reveal a metal skeleton with pistons and plates of merciless strength. It is the Terminator, sent back in time by the machines who will rule the future to ensure humanity’s doom.

Backstage at the Science Museum, London, where these real experiments and a full-scale model from the Terminator films are gathered to be installed in the exhibition Robots, it occurs to me that our fascination with mechanical replacements for ourselves is so intense that science struggles to match it. We think of robots as artificial humans that can not only walk and talk but possess digital personalities, even a moral code. In short we accord them agency. Today, the real age of robots is coming, and yet even as these machines promise to transform work or make it obsolete, few possess anything like the charisma of the androids of our dreams and nightmares.

That’s why, although the robotic toddler sleeping in the store room is an impressive piece of tech, my heart leaps in another way at the sight of the Terminator. For this is a bad robot, a scary robot, a robot of remorseless malevolence. It has character, in other words. Its programmed persona (which in later films becomes much more helpful and supportive) is just one of those frightening, funny or touching personalities that science fiction has imagined for robots.

Can the real life – well, real simulated life – robots in the Science Museum’s new exhibition live up to these characters? The most impressively interactive robot in the show will be RoboThespian, who acts as compere for its final gallery displaying the latest advances in robotics. He stands at human height, with a white plastic face and metal arms and legs, and can answer questions about the value of pi and the nature of free will. “I’m a very clever robot,” RoboThespian claims, plausibly, if a little obnoxiously.

Except not quite as clever as all that. A human operator at a computer screen connected with Robothespian by wifi is looking through its video camera eyes and speaking with its digital voice. The result is huge fun – the droid moves in very lifelike ways as it speaks, and its interactions don’t need a live operator as they can be preprogrammed. But a freethinking, free-acting robot with a mind and personality of its own, Robothespian is not.

Our fascination with synthetic humans goes back to the human urge to recreate life itself – to reproduce the mystery of our origins. Artists have aspired to simulate human life since ancient times. The ancient Greek myth of Pygmalion, who made a statue so beautiful he fell in love with it and prayed for it to come to life, is a mythic version of Greek artists such as Pheidias and Praxiteles whose statues, with their superb imitation of muscles and movement, seem vividly alive. The sculptures of centaurs carved for the Parthenon in Athens still possess that uncanny lifelike power.

Most of the finest Greek statues were bronze, and mythology tells of metal robots that sound very much like statues come to life, including the bronze giant Talos, who was to become one of cinema’s greatest robotic monsters thanks to the special effects genius of Ray Harryhausen in Jason and the Argonauts.

Renaissance art took the quest to simulate life to new heights, with awed admirers of Michelangelo’s David claiming it even seemed to breathe (as it really does almost appear to when soft daylight casts mobile shadow on superbly sculpted ribs). So it is oddly inevitable that one of the first recorded inventors of robots was Leonardo da Vinci, consummate artist and pioneering engineer. Leonardo apparently made, or at least designed, a robot knight to amuse the court of Milan. It worked with pulleys and was capable of simple movements. Documents of this invention are frustratingly sparse, but there is a reliable eyewitness account of another of Leonardo’s automata. In 1515 he delighted Francois I, king of France, with a robot lion that walked forward towards the monarch, then released a bunch of lilies, the royal flower, from a panel that opened in its back.

One of the most uncanny androids in the Science Museum show is from Japan, a freakily lifelike female robot called Kodomoroid, the world’s first robot newscaster. With her modest downcast gaze and fine artificial complexion, she has the same fetishised femininity you might see in a Manga comic and appears to reflect a specific social construction of gender. Whether you read that as vulnerability or subservience, presumably the idea is to make us feel we are encountering a robot with real personhood. Here is a robot that combines engineering and art just as Da Vinci dreamed – it has the mechanical genius of his knight and the synthetic humanity of his perfect portrait.

Here’s a link to the Science Museum’s ‘Robots’ exhibition webspace and a link to a Guardian ‘Robots’ photo essay.

All this makes me wish I had plans to visit London, UK in the next few months.