Tag Archives: AI

Artificial synapse based on tantalum oxide from Korean researchers

This memristor story comes from South Korea as we progress on the way to neuromorphic computing (brainlike computing). A Sept. 7, 2018 news item on ScienceDaily makes the announcement,

A research team led by Director Myoung-Jae Lee from the Intelligent Devices and Systems Research Group at DGIST (Daegu Gyeongbuk Institute of Science and Technology) has succeeded in developing an artificial synaptic device that mimics the function of the nerve cells (neurons) and synapses that are response for memory in human brains. [sic]

Synapses are where axons and dendrites meet so that neurons in the human brain can send and receive nerve signals; there are known to be hundreds of trillions of synapses in the human brain.

This chemical synapse information transfer system, which transfers information from the brain, can handle high-level parallel arithmetic with very little energy, so research on artificial synaptic devices, which mimic the biological function of a synapse, is under way worldwide.

Dr. Lee’s research team, through joint research with teams led by Professor Gyeong-Su Park from Seoul National University; Professor Sung Kyu Park from Chung-Ang University; and Professor Hyunsang Hwang from Pohang University of Science and Technology (POSTECH), developed a high-reliability artificial synaptic device with multiple values by structuring tantalum oxide — a trans-metallic material — into two layers of Ta2O5-x and TaO2-x and by controlling its surface.

A September 7, 2018 DGIST press release (also on EurekAlert), which originated the news item, delves further into the work,

The artificial synaptic device developed by the research team is an electrical synaptic device that simulates the function of synapses in the brain as the resistance of the tantalum oxide layer gradually increases or decreases depending on the strength of the electric signals. It has succeeded in overcoming durability limitations of current devices by allowing current control only on one layer of Ta2O5-x.

In addition, the research team successfully implemented an experiment that realized synapse plasticity [or synaptic plasticity], which is the process of creating, storing, and deleting memories, such as long-term strengthening of memory and long-term suppression of memory deleting by adjusting the strength of the synapse connection between neurons.

The non-volatile multiple-value data storage method applied by the research team has the technological advantage of having a small area of an artificial synaptic device system, reducing circuit connection complexity, and reducing power consumption by more than one-thousandth compared to data storage methods based on digital signals using 0 and 1 such as volatile CMOS (Complementary Metal Oxide Semiconductor).

The high-reliability artificial synaptic device developed by the research team can be used in ultra-low-power devices or circuits for processing massive amounts of big data due to its capability of low-power parallel arithmetic. It is expected to be applied to next-generation intelligent semiconductor device technologies such as development of artificial intelligence (AI) including machine learning and deep learning and brain-mimicking semiconductors.

Dr. Lee said, “This research secured the reliability of existing artificial synaptic devices and improved the areas pointed out as disadvantages. We expect to contribute to the development of AI based on the neuromorphic system that mimics the human brain by creating a circuit that imitates the function of neurons.”
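For readers who like to see an idea in code, here’s a minimal sketch (my own illustration in Python, not the researchers’ software) of what a non-volatile, multi-valued synaptic device does differently from a binary memory cell: its analog conductance is nudged up or down by pulses and the value persists. The conductance range, number of levels, and pulse scheme below are invented for the example, not taken from the paper.

```python
# Toy model of an analog memristive synapse: the conductance moves up or
# down in small steps (potentiation/depression) and persists between
# updates, unlike a volatile binary CMOS cell. Values are illustrative,
# not the Ta2O5-x/TaO2-x device parameters from the paper.

G_MIN, G_MAX = 1e-6, 1e-4   # conductance bounds in siemens (assumed)
N_LEVELS = 64               # distinguishable analog states (assumed)
STEP = (G_MAX - G_MIN) / N_LEVELS

def apply_pulse(g, polarity):
    """Nudge the conductance one level up (+1) or down (-1), with clamping."""
    return min(G_MAX, max(G_MIN, g + polarity * STEP))

g = G_MIN
for _ in range(10):   # ten potentiating pulses: long-term strengthening
    g = apply_pulse(g, +1)
for _ in range(4):    # four depressing pulses: long-term suppression
    g = apply_pulse(g, -1)
print(f"stored synaptic weight ~ {g:.2e} S")   # the state is non-volatile
```

Potentiation and depression in the sketch map onto the long-term strengthening and suppression of memory mentioned in the press release.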

Here’s a link to and a citation for the paper,

Reliable Multivalued Conductance States in TaOx Memristors through Oxygen Plasma-Assisted Electrode Deposition with in Situ-Biased Conductance State Transmission Electron Microscopy Analysis by Myoung-Jae Lee, Gyeong-Su Park, David H. Seo, Sung Min Kwon, Hyeon-Jun Lee, June-Seo Kim, MinKyung Jung, Chun-Yeol You, Hyangsook Lee, Hee-Goo Kim, Su-Been Pang, Sunae Seo, Hyunsang Hwang, and Sung Kyu Park. ACS Appl. Mater. Interfaces, 2018, 10 (35), pp 29757–29765 DOI: 10.1021/acsami.8b09046 Publication Date (Web): July 23, 2018

Copyright © 2018 American Chemical Society

This paper is open access.

You can find other memristor and neuromorphic computing stories here by using the search terms I’ve highlighted. My latest (more or less) is an April 19, 2018 posting titled, New path to viable memristor/neuristor?

Finally, here’s an image from the Korean researchers that accompanied their work,

Caption: Representation of neurons and synapses in the human brain. The magnified synapse represents the portion mimicked using solid-state devices. Credit: Daegu Gyeongbuk Institute of Science and Technology (DGIST)

If only AI had a brain (a Wizard of Oz reference?)

The title, which I’ve borrowed from the news release, is the only Wizard of Oz reference that I can find, but it works so well you don’t really need anything more.

Moving on to the news, a July 23, 2018 news item on phys.org announces new work on developing an artificial synapse (Note: A link has been removed),

Digital computation has rendered nearly all forms of analog computation obsolete since as far back as the 1950s. However, there is one major exception that rivals the computational power of the most advanced digital devices: the human brain.

The human brain is a dense network of neurons. Each neuron is connected to tens of thousands of others, and they use synapses to fire information back and forth constantly. With each exchange, the brain modulates these connections to create efficient pathways in direct response to the surrounding environment. Digital computers live in a world of ones and zeros. They perform tasks sequentially, following each step of their algorithms in a fixed order.

A team of researchers from Pitt’s [University of Pittsburgh] Swanson School of Engineering has developed an “artificial synapse” that does not process information like a digital computer but rather mimics the analog way the human brain completes tasks. Led by Feng Xiong, assistant professor of electrical and computer engineering, the researchers published their results in the recent issue of the journal Advanced Materials (DOI: 10.1002/adma.201802353). His Pitt co-authors include Mohammad Sharbati (first author), Yanhao Du, Jorge Torres, Nolan Ardolino, and Minhee Yun.

A July 23, 2018 University of Pittsburgh Swanson School of Engineering news release (also on EurekAlert), which originated the news item, provides further information,

“The analog nature and massive parallelism of the brain are partly why humans can outperform even the most powerful computers when it comes to higher order cognitive functions such as voice recognition or pattern recognition in complex and varied data sets,” explains Dr. Xiong.

An emerging field called “neuromorphic computing” focuses on the design of computational hardware inspired by the human brain. Dr. Xiong and his team built graphene-based artificial synapses in a two-dimensional honeycomb configuration of carbon atoms. Graphene’s conductive properties allowed the researchers to finely tune its electrical conductance, which is the strength of the synaptic connection or the synaptic weight. The graphene synapse demonstrated excellent energy efficiency, just like biological synapses.

In the recent resurgence of artificial intelligence, computers can already replicate the brain in certain ways, but it takes about a dozen digital devices to mimic one analog synapse. The human brain has hundreds of trillions of synapses for transmitting information, so building a brain with digital devices is seemingly impossible, or at the very least, not scalable. Xiong Lab’s approach provides a possible route for the hardware implementation of large-scale artificial neural networks.

According to Dr. Xiong, artificial neural networks based on the current CMOS (complementary metal-oxide semiconductor) technology will always have limited functionality in terms of energy efficiency, scalability, and packing density. “It is really important we develop new device concepts for synaptic electronics that are analog in nature, energy-efficient, scalable, and suitable for large-scale integrations,” he says. “Our graphene synapse seems to check all the boxes on these requirements so far.”

With graphene’s inherent flexibility and excellent mechanical properties, these graphene-based neural networks can be employed in flexible and wearable electronics to enable computation at the “edge of the internet”–places where computing devices such as sensors make contact with the physical world.

“By empowering even a rudimentary level of intelligence in wearable electronics and sensors, we can track our health with smart sensors, provide preventive care and timely diagnostics, monitor plants growth and identify possible pest issues, and regulate and optimize the manufacturing process–significantly improving the overall productivity and quality of life in our society,” Dr. Xiong says.

The development of an artificial brain that functions like the analog human brain still requires a number of breakthroughs. Researchers need to find the right configurations to optimize these new artificial synapses. They will need to make them compatible with an array of other devices to form neural networks, and they will need to ensure that all of the artificial synapses in a large-scale neural network behave in the same exact manner. Despite the challenges, Dr. Xiong says he’s optimistic about the direction they’re headed.

“We are pretty excited about this progress since it can potentially lead to the energy-efficient, hardware implementation of neuromorphic computing, which is currently carried out in power-intensive GPU clusters. The low-power trait of our artificial synapse and its flexible nature make it a suitable candidate for any kind of A.I. device, which would revolutionize our lives, perhaps even more than the digital revolution we’ve seen over the past few decades,” Dr. Xiong says.

There is a visual representation of this artificial synapse,

Caption: Pitt engineers built a graphene-based artificial synapse in a two-dimensional, honeycomb configuration of carbon atoms that demonstrated excellent energy efficiency comparable to biological synapses. Credit: Swanson School of Engineering

Here’s a link to and a citation for the paper,

Low‐Power, Electrochemically Tunable Graphene Synapses for Neuromorphic Computing by Mohammad Taghi Sharbati, Yanhao Du, Jorge Torres, Nolan D. Ardolino, Minhee Yun, Feng Xiong. Advanced Materials DOI: https://doi.org/10.1002/adma.201802353 First published [online]: 23 July 2018

This paper is behind a paywall.

I did look at the paper and if I understand it rightly, this approach is different from the memristor-based approaches that I have so often featured here. More than that I cannot say.

Finally, the Wizard of Oz song ‘If I Only Had a Brain’,

Brainy and brainier: a novel synaptic architecture and a neuromorphic computing platform called SpiNNaker

I have two items about brainlike computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns, just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

A July 10, 2018 NJIT news release (also on EurekAlert) by Tracey Regan, which originated the news item, adds more details,

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only chose one of them to be updated at each step based on the neuronal activity.”
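Nandakumar’s description is compact enough to caricature in code. Here’s my own toy sketch (Python; as I understand it, the paper uses a counter-based arbitration scheme to pick which device gets programmed, which I’ve simplified to a round-robin rule, and all device numbers here are invented):

```python
import random

class MultiMemristiveSynapse:
    """Several imperfect devices in parallel; only one is programmed per update."""
    def __init__(self, n_devices=4, g_min=0.0, g_max=1.0, step=0.1, noise=0.05):
        self.g = [g_min] * n_devices          # one conductance per device
        self.g_min, self.g_max = g_min, g_max
        self.step, self.noise = step, noise
        self.counter = 0                      # round-robin device selector

    def weight(self):
        return sum(self.g)                    # effective synaptic weight

    def update(self, direction):
        """Apply one potentiation (+1) or depression (-1) event."""
        i = self.counter % len(self.g)        # choose exactly one device
        self.counter += 1
        dg = direction * self.step * (1 + random.gauss(0, self.noise))
        self.g[i] = min(self.g_max, max(self.g_min, self.g[i] + dg))

syn = MultiMemristiveSynapse()
for _ in range(12):                           # twelve potentiation events
    syn.update(+1)
print(round(syn.weight(), 3))                 # updates spread across devices
```

Spreading the updates across several noisy devices is what buys the improved learning accuracy: no single device has to be precise.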

Here’s a link to and a citation for the paper,

Neuromorphic computing with multi-memristive synapses by Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian, & Evangelos Eleftheriou. Nature Communications volume 9, Article number: 2514 (2018) DOI: https://doi.org/10.1038/s41467-018-04933-y Published 28 June 2018

This is an open access paper.

Also they’ve got a couple of very nice introductory paragraphs which I’m including here (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas, such as computer vision, speech recognition, and complex strategic games [1]. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms [2,3,4,5]. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history [6,7,8,9]. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …

It gets more complicated from there.
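Still, the central trick, performing the computation ‘in place’, fits in a few lines. If synaptic weights are stored as conductances in a crossbar array, Ohm’s law plus Kirchhoff’s current law deliver a matrix-vector product without moving the weights anywhere. A numpy sketch (mine, with made-up values):

```python
import numpy as np

# Drive each row of a memristor crossbar with a voltage; the current
# collected on each column is I = G^T V (Ohm's law, summed by Kirchhoff's
# current law). The weights stay put: the memory does the computing.

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances = stored weights (S)
V = np.array([0.2, 0.0, 0.1, 0.3])         # input voltages on the rows (V)

I = G.T @ V                                # column currents = the result (A)
print(I)
```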

Now onto the next bit.

SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

A July 11, 2018 Frontiers Publishing news release on EurekAlert, which originated the news item, expands on the latest work,

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other and on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time, are currently out of reach,” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with that of NEST–a specialist supercomputer software currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”
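One practical detail worth knowing: models in this line of work can be written against the simulator-independent PyNN interface, which is what makes an apples-to-apples comparison between a software simulator and neuromorphic hardware possible in the first place. Here’s a minimal sketch (my own, assuming a PyNN 0.9-style API; the paper’s actual model is a full-scale cortical microcircuit, not this toy):

```python
# Minimal PyNN sketch. Swapping the import to
# 'import pyNN.spiNNaker as sim' targets the SpiNNaker hardware instead;
# the rest of the script is unchanged.
import pyNN.nest as sim

sim.setup(timestep=0.1)                             # ms
neurons = sim.Population(100, sim.IF_curr_exp(tau_m=10.0))
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=8.0))
sim.Projection(noise, neurons, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))
neurons.record("spikes")

sim.run(1000.0)                                     # one second of model time
data = neurons.get_data().segments[0]
print(sum(len(st) for st in data.spiketrains), "spikes recorded")
sim.end()
```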

Before getting to the link and citation for the paper, here’s a description of SpiNNaker’s hardware from the ‘Spiking neural network’ Wikipedia entry (Note: Links have been removed),

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]

Now for the link and citation,

Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model by Sacha J. van Albada, Andrew G. Rowley, Johanna Senk, Michael Hopkins, Maximilian Schmidt, Alan B. Stokes, David R. Lester, Markus Diesmann, and Steve B. Furber. Front. Neurosci. 12:291. doi: 10.3389/fnins.2018.00291 Published: 23 May 2018

As noted earlier, this is an open access paper.

‘One health in the 21st century’ event and internship opportunities at the Woodrow Wilson Center

One health

This event at the Woodrow Wilson International Center for Scholars (Wilson Center) is the first that I’ve seen of its kind (from a November 2, 2018 Wilson Center Science and Technology Innovation Program [STIP] announcement received via email; Note: Logistics such as date and location follow directly after),

One Health in the 21st Century Workshop

The One Health in the 21st Century workshop will serve as a snapshot of government, intergovernmental organization and non-governmental organization innovation as it pertains to the expanding paradigm of One Health. One Health being the umbrella term for addressing animal, human, and environmental health issues as inextricably linked [emphasis mine], each informing the other, rather than as distinct disciplines.

This snapshot, facilitated by a partnership between the Wilson Center, World Bank, and EcoHealth Alliance, aims to bridge professional silos represented at the workshop to address the current gaps and future solutions in the operationalization and institutionalization of One Health across sectors. With an initial emphasis on environmental resource management and assessment as well as federal cooperation, the One Health in the 21st Century Workshop is a launching point for upcoming events, convenings, and products, sparked by the partnership between the hosting organizations. RSVP today.

Agenda:

1:00pm — 1:15pm: Introductory Remarks

1:15pm — 2:30pm: Keynote and Panel: Putting One Health into Practice

Larry Madoff — Director of Emerging Disease Surveillance; Editor, ProMED-mail
Lance Brooks — Chief, Biological Threat Reduction Department at DoD
Further panelists TBA

2:30pm — 2:40pm: Break

2:40pm — 3:50pm: Keynote and Panel: Adding Seats at the One Health Table: Promoting the Environmental Backbone at Home and Abroad

Assaf Anyamba — NASA Research Scientist
Jonathan Sleeman — Center Director for the U.S. Geological Survey’s National Wildlife Health Center
Jennifer Orme-Zavaleta — Principal Deputy Assistant Administrator for Science for the Office of Research and Development and the EPA Science Advisor
Further panelists TBA

3:50pm — 4:50pm: Breakout Discussions and Report Back Panel

4:50pm — 5:00pm: Closing Remarks

5:00pm — 6:00pm: Networking Happy Hour

Co-Hosts: the Wilson Center, World Bank, and EcoHealth Alliance

You can register/RSVP here.

Logistics are:

November 26
1:00pm – 5:00pm
Reception to follow
5:00pm – 6:00pm

Flom Auditorium, 6th floor

Directions

Wilson Center
Ronald Reagan Building and
International Trade Center
One Woodrow Wilson Plaza
1300 Pennsylvania Ave. NW
Washington, D.C. 20004

Phone: 202.691.4000

stip@wilsoncenter.org


Internships

The Woodrow Wilson Center is gearing up for 2019, although the deadline for Spring 2019 internship applications is November 15, 2018. (You can find my previous announcement for internships in a July 23, 2018 posting.) From a November 5, 2018 Wilson Center STIP announcement (received via email),

Internships in DC for Science and Technology Policy

Deadline for Fall Applicants November 15

The Science and Technology Innovation Program (STIP) at the Wilson Center welcomes applicants for spring 2019 internships. STIP focuses on understanding bottom-up, public innovation; top-down, policy innovation; and, on supporting responsible and equitable practices at the point where new technology and existing political, social, and cultural processes converge. We recommend exploring our blog and website first to determine if your research interests align with current STIP programming.

We offer two types of internships: research (open to law and graduate students only) and a social media and blogging internship (open to undergraduates, recent graduates, and graduate students). Research internships might deal with one of the following key objectives:

  • Artificial Intelligence
  • Citizen Science
  • Cybersecurity
  • One Health
  • Public Communication of Science
  • Serious Games Initiative
  • Science and Technology Policy

Additionally, we are offering specific internships for focused projects, such as for our Earth Challenge 2020 initiative.

Special Project Intern: Earth Challenge 2020

Citizen science involves members of the public in scientific research to meet real world goals.  In celebration of the 50th anniversary of Earth Day, Earth Day Network (EDN), The U.S. Department of State, and the Wilson Center are launching Earth Challenge 2020 (EC2020) as the world’s largest ever coordinated citizen science campaign.  EC2020 will collaborate with existing citizen science projects as well as build capacity for new ones as part of a larger effort to grow citizen science worldwide.  We will become a nexus for collecting billions of observations in areas including air quality, water quality, biodiversity, and human health to strengthen the links between science, the environment, and public citizens.

We are seeking a research intern with a specialty in topics including citizen science, crowdsourcing, making, hacking, sensor development, and other relevant topics.

This intern will scope and implement a semester-long project related to Earth Challenge 2020 deliverables. In addition to this the intern may:

  • Conduct ad hoc research on a range of topics in science and technology innovation to learn while supporting department priorities.
  • Write or edit articles and blog posts on topics of interest or local events.
  • Support meetings, conferences, and other events, gaining valuable event management experience.
  • Provide general logistical support.

This is a paid position available for 15-20 hours a week.  Applicants from all backgrounds will be considered, though experience conducting cross and trans-disciplinary research is an asset.  Ability to work independently is critical.

Interested applicants should submit a resume, cover letter describing their interest in Earth Challenge 2020 and outlining relevant skills, and two writing samples. One writing sample should be formal (e.g., a class paper); the other, informal (e.g., a blog post or similar).

For all internships, non-degree seeking students are ineligible. All internships must be served in Washington, D.C. and cannot be done remotely.

Full application process outlined on our internship website.

I don’t see a specific application deadline for the special project (Earth Challenge 2020) internship. In any event, good luck with all your applications.

Media registration is open for the 2018 ITU (International Telecommunication Union) Plenipotentiary Conference (PP-18) being held 29 October – 16 November 2018 in Dubai

I’m a little late with this but there’s still time to register should you happen to be in or able to get to Dubai easily. From an October 18, 2018 International Telecommunication Union (ITU) Media Advisory (received via email),

Media registration is open for the 2018 ITU Plenipotentiary Conference (PP-18) – the highest policy-making body of the International Telecommunication Union (ITU), the United Nations’ specialized agency for information and communication technology. This will be closing soon, so all media intending to attend the event MUST register as soon as possible here.

Held every four years, it is the key event at which ITU’s 193 Member States decide on the future role of the organization, thereby determining ITU’s ability to influence and affect the development of information and communication technologies (ICTs) worldwide. It is expected to attract around 3,000 participants, including Heads of State and an estimated 130 VIPs from more than 193 Member States and more than 800 private companies, academic institutions and national, regional and international bodies.

ITU plays an integral role in enabling the development and implementation of ICTs worldwide through its mandate to: coordinate the shared global use of the radio spectrum, promote international cooperation in assigning satellite orbits, work to improve communication infrastructure in the developing world, and establish worldwide standards that foster seamless interconnection of a vast range of communications systems.

Delegates will tackle a number of pressing issues, from strategies to promote digital inclusion and bridge the digital divide, to ways to leverage such emerging technologies as the Internet of Things, Artificial Intelligence, 5G, and others, to improve the way all of us, everywhere, live and work.

The conference also sets ITU’s Financial Plan and elects its five top executives – Secretary-General, Deputy Secretary-General, and the Directors of the Radiocommunication, Telecommunication Standardization and Telecommunication Development Bureaux – who will guide its work over the next four years.

What: ITU Plenipotentiary Conference 2018 (PP-18) sets the next four-year strategy, budget and leadership of ITU.

Why: Finance, Business, Tech, Development and Foreign Affairs reporters will find PP-18 relevant to their newsgathering. Decisions made at PP-18 are designed to create an enabling ICT environment where the benefits of digital connectivity can reach all people and economies, everywhere. As such, these decisions can have an impact on the telecommunication and technology sectors as well as developed and developing countries alike.

When: 29 October – 16 November 2018: With several Press Conferences planned during the event.

* Historically the Opening, Closing and Plenary sessions of this conference are open to media. Confirmation of those sessions open to media, and Press Conference times, will be made closer to the event date.

Where: Dubai World Trade Center, Dubai, United Arab Emirates

More Information:

REGISTER FOR ACCREDITATION

I visited the ‘ITU Events Registration and Accreditation Process for Media’ webpage and found these tidbits,

Accreditation eligibility & credentials 

1. Journalists* should provide an official letter of assignment from the Editor-in-Chief (or the News Editor for radio/TV). One letter per crew/editorial team will suffice. Editors-in-Chief and Bureau Chiefs should submit a letter from their Director. Please email this to pressreg@itu.int, along with the required supporting credentials below:​

    • print and online publications should be available to the general public and published at least 6 times a year by an organization whose principal business activity is publishing and which generally carries paid advertising;

      o 2 copies of recent byline articles published within the last 4 months.
    • news wire services should provide news coverage to subscribers, including newspapers, periodicals and/or television networks;

      o 2 copies of recent byline articles or broadcasting material published within the last 4 months.
    • broadcast should provide news and information programmes to the general public. Independent film and video production companies can only be accredited if officially mandated by a broadcast station via a letter of assignment;

      o broadcasting material published within the last 4 months.
    • freelance journalists, including photographers, must provide clear documentation that they are on assignment from a specific news organization or publication. Evidence that they regularly supply journalistic content to recognized media may be acceptable in the absence of an assignment letter at the discretion of the ITU Media Relations Service.

      o a valid assignment letter from the news organization or publication.

 2. Bloggers may be granted accreditation if blog content is deemed relevant to the industry, contains news commentary, is regularly updated and made publicly available. Corporate bloggers are invited to register as participants. Please see Guidelines for Blogger Accreditation below for more details.

Guidelines for Blogger Accreditation

ITU is committed to working with independent ‘new media’ reporters and columnists who reach their audiences via blogs, podcasts, video blogs and other online media. These are the guidelines we use to determine whether to issue official media accreditation to independent online media representatives: 

ITU reserves the right to request traffic data from a third party (Sitemeter, Technorati, Feedburner, iTunes or equivalent) when considering your application. While the decision to grant access is not based solely on traffic/subscriber data, we ask that applicants provide sufficient transparency into their operations to help us make a fair and timely decision. 

Obtaining media accreditation for ITU events is an opportunity to meet and interact with key industry and political figures. While continued accreditation for ITU events is not directly contingent on producing coverage, owing to space limitations we may take this into consideration when processing future accreditation requests. Following any ITU event for which you are accredited, we therefore kindly request that you forward a link to your post/podcast/video blog to pressreg@itu.int. 

Bloggers who are granted access to ITU events are expected to act professionally. Those who do not maintain the standards expected of professional media representatives run the risk of having their accreditation withdrawn. 

If you can’t find answers to your questions on the ‘ITU Events Registration and Accreditation Process for Media’ webpage, you can contact,

For media accreditation inquiries:


Rita Soraya Abino-Quintana
Media Accreditation Officer
ITU Corporate Communications

Tel: +41 22 730 5424

For anything else, contact,

For general media inquiries:


Jennifer Ferguson-Mitchell
Senior Media and Communications Officer
ITU Corporate Communications

Tel: +41 22 730 5469

Mobile: +41 79 337 4615

There you have it.

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), I’m following up with a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots), the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but, not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and does not mimic a human or other biological organism such that you might, under some circumstances, mistake the robot for a biological organism.

As for what precipitated this feature (in part), it seems there’s been a United Nations meeting in Geneva, Switzerland, held from August 27 – 31, 2018, about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety and for anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country with its makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robots, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software only story.

AI fashion designer better than Balenciaga?

Despite the title of Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga, but from the pictures I’ve seen, the designs are as good, and the work does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barrat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barrat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barrat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barrat writes on Twitter.
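For the technically inclined: pix2pix is a conditional generative adversarial network, i.e., a generator learns to translate an input image into an output image while a discriminator learns to tell real (input, output) pairs from generated ones. Here’s a deliberately tiny PyTorch sketch of that two-part training step (mine, not Barrat’s code: random tensors stand in for the scraped lookbook images, and small ConvNets stand in for the real U-Net generator and PatchGAN discriminator):

```python
# Toy conditional GAN in the spirit of pix2pix (not Barrat's code).
import torch
import torch.nn as nn

G = nn.Sequential(                      # generator: input image -> output image
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(                      # discriminator scores (input, output) pairs
    nn.Conv2d(6, 16, 3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 3, padding=1))     # per-patch real/fake logits

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

x = torch.rand(8, 3, 64, 64)            # conditioning images (stand-in data)
y = torch.rand(8, 3, 64, 64)            # target designs (stand-in data)
real, fake = torch.ones(8, 1, 64, 64), torch.zeros(8, 1, 64, 64)

for step in range(5):
    # discriminator step: real pairs -> 1, generated pairs -> 0
    d_loss = (bce(D(torch.cat([x, y], 1)), real) +
              bce(D(torch.cat([x, G(x).detach()], 1)), fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator step: fool the discriminator and stay close to the target
    out = G(x)
    g_loss = bce(D(torch.cat([x, out], 1)), real) + 100 * l1(out, y)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```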

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barrat

In contrast to the previous two stories, this one is all about algorithms; no machinery with independent movement (robot hardware) is needed.

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have been featured here many, many times before, most recently in a March 27, 2017 posting about his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
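The news release stays at the descriptive level, but the backchanneling pipeline is easy to caricature in code: extract timing, lexical, and prosodic features from the user’s speech, then train a classifier on an annotated corpus to predict when an ‘uh-huh’ is due. Here’s a toy version (entirely my own construction; the feature set, the synthetic data, and the logistic-regression model are all stand-ins for the team’s actual system):

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
# features per chunk of user speech: [pause length (s), pitch slope, energy]
X = rng.normal(size=(200, 3))
# synthetic labeling rule standing in for the annotated counseling corpus:
# a long pause with falling pitch means a backchannel is appropriate
y = ((X[:, 0] > 0.3) & (X[:, 1] < 0.0)).astype(int)

model = LogisticRegression().fit(X, y)
chunk = np.array([[0.8, -1.2, 0.1]])   # a long pause with falling pitch
print(model.predict(chunk))            # likely [1]: emit "uh-huh" now
```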

Erica seems to have been first introduced publicly in Spring 2017; from an April 2017 ‘Erica: Man Made’ webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as technological one.

Erica is interviewed about her hopes and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser’s are safer from automation than those of, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it, let alone attempt to get a feeling for where all this might be headed. When you add the fact that the terms ‘robot’ and ‘artificial intelligence’ are often used interchangeably, and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.
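Wong’s definition of machine learning is compact enough to boil down to a toy program. Here’s a minimal sketch (in Python, with every number invented for illustration, so a guess at the flavour rather than anyone’s actual system) of a computer repeatedly modifying its own parameters based on data,

```python
# Toy version of "the computer keeps on modifying its algorithm
# based on the information provided": fit y = w*x + b to data.
# All numbers here are invented for illustration.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, target) pairs

w, b = 0.0, 0.0        # the model's adjustable parameters
learning_rate = 0.01

for step in range(1000):             # "training": repeat many times
    for x, target in data:
        prediction = w * x + b       # the model's current guess
        error = prediction - target
        # Nudge the parameters to shrink the error -- this is the
        # self-adjustment Wong's definition describes.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned model: y = {w:.2f}*x + {b:.2f}")  # ends up close to y = 2x
```

No human told the program that the answer was roughly ‘multiply by two’; it arrived there from the data. Back to Wong’s definitions,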

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.
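The ‘network of brain cells interconnecting’ image can also be sketched in a few lines: layers of simple units, each one summing the outputs of the layer before it. The weights below are made up; in a real deep learning system they would be learned from data, as in the previous sketch,

```python
import math

def layer(inputs, weights):
    # Each output unit sums over all inputs (the interconnections),
    # then applies a non-linearity, loosely analogous to a neuron firing.
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

hidden_weights = [[0.5, -0.2], [0.1, 0.8], [-0.6, 0.3]]  # 2 inputs -> 3 hidden units
output_weights = [[0.7, -0.4, 0.2]]                      # 3 hidden units -> 1 output

x = [1.0, 0.5]                  # an input signal
h = layer(x, hidden_weights)    # first layer of "cells"
y = layer(h, output_weights)    # second layer, fed by the first
print(y)
```

Stack more layers and you have ‘deep’ learning. Wong’s posting continues,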

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments section of this blog if you have any insights on the matter.

Being smart about using artificial intelligence in the field of medicine

Since my August 20, 2018 post featured an opinion piece about the possibly imminent replacement of radiologists with artificial intelligence systems and the latest research about employing them for diagnosing eye diseases, it seems like a good time to examine some of the mythology embedded in the discussion about AI and medicine.

Imperfections in medical AI systems

An August 15, 2018 article for Slate.com by W. Nicholson Price II (who teaches at the University of Michigan School of Law; in addition to his law degree he has a PhD in Biological Sciences from Columbia University) begins with the peppy, optimistic view before veering into more critical territory (Note: Links have been removed),

For millions of people suffering from diabetes, new technology enabled by artificial intelligence promises to make management much easier. Medtronic’s Guardian Connect system promises to alert users 10 to 60 minutes before they hit high or low blood sugar level thresholds, thanks to IBM Watson, “the same supercomputer technology that can predict global weather patterns.” Startup Beta Bionics goes even further: In May, it received Food and Drug Administration approval to start clinical trials on what it calls a “bionic pancreas system” powered by artificial intelligence, capable of “automatically and autonomously managing blood sugar levels 24/7.”

An artificial pancreas powered by artificial intelligence represents a huge step forward for the treatment of diabetes—but getting it right will be hard. Artificial intelligence (also known in various iterations as deep learning and machine learning) promises to automatically learn from patterns in medical data to help us do everything from managing diabetes to finding tumors in an MRI to predicting how long patients will live. But the artificial intelligence techniques involved are typically opaque. We often don’t know how the algorithm makes the eventual decision. And they may change and learn from new data—indeed, that’s a big part of the promise. But when the technology is complicated, opaque, changing, and absolutely vital to the health of a patient, how do we make sure it works as promised?

Price describes how a ‘closed loop’ artificial pancreas with AI would automate insulin levels for diabetic patients, flaws in the automated system, and how companies like to maintain a competitive advantage (Note: Links have been removed),

[…] a “closed loop” artificial pancreas, where software handles the whole issue, receiving and interpreting signals from the monitor, deciding when and how much insulin is needed, and directing the insulin pump to provide the right amount. The first closed-loop system was approved in late 2016. The system should take as much of the issue off the mind of the patient as possible (though, of course, that has limits). Running a closed-loop artificial pancreas is challenging. The way people respond to changing levels of carbohydrates is complicated, as is their response to insulin; it’s hard to model accurately. Making it even more complicated, each individual’s body reacts a little differently.

Here’s where artificial intelligence comes into play. Rather than trying explicitly to figure out the exact model for how bodies react to insulin and to carbohydrates, machine learning methods, given a lot of data, can find patterns and make predictions. And existing continuous glucose monitors (and insulin pumps) are excellent at generating a lot of data. The idea is to train artificial intelligence algorithms on vast amounts of data from diabetic patients, and to use the resulting trained algorithms to run a closed-loop artificial pancreas. Even more exciting, because the system will keep measuring blood glucose, it can learn from the new data and each patient’s artificial pancreas can customize itself over time as it acquires new data from that patient’s particular reactions.

Here’s the tough question: How will we know how well the system works? Diabetes software doesn’t exactly have the best track record when it comes to accuracy. A 2015 study found that among smartphone apps for calculating insulin doses, two-thirds of the apps risked giving incorrect results, often substantially so. … And companies like to keep their algorithms proprietary for a competitive advantage, which makes it hard to know how they work and what flaws might have gone unnoticed in the development process.

There’s more,

These issues aren’t unique to diabetes care—other A.I. algorithms will also be complicated, opaque, and maybe kept secret by their developers. The potential for problems multiplies when an algorithm is learning from data from an entire hospital, or hospital system, or the collected data from an entire state or nation, not just a single patient. …

The [US Food and Drug Administration] FDA is working on this problem. The head of the agency has expressed his enthusiasm for bringing A.I. safely into medical practice, and the agency has a new Digital Health Innovation Action Plan to try to tackle some of these issues. But they’re not easy, and one thing making it harder is a general desire to keep the algorithmic sauce secret. The example of IBM Watson for Oncology has given the field a bit of a recent black eye—it turns out that the company knew the algorithm gave poor recommendations for cancer treatment but kept that secret for more than a year. …

While Price focuses on problems with algorithms and with developers and their business interests, he also hints at some of the body’s complexities.
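For the code-minded, the ‘closed loop’ Price describes reduces to a simple cycle: read the monitor, let a model decide a dose, drive the pump, repeat. Here’s a bare-bones sketch; the dose rule, thresholds, and simulated devices below are all invented for illustration, and stand in for the opaque, learned model a real system would use,

```python
import random

TARGET = 110  # mg/dL, an illustrative set point

def decide_dose(glucose, sensitivity=0.02):
    """Stand-in for the learned model: more insulin when glucose runs high."""
    return max(0.0, (glucose - TARGET) * sensitivity)

# Simulated monitor and pump, purely for illustration.
glucose = 180.0
for step in range(10):
    reading = glucose + random.uniform(-5, 5)   # noisy monitor reading
    dose = decide_dose(reading)                 # the opaque decision step
    glucose -= dose * 15                        # crude stand-in for the dose's effect
    print(f"step {step}: glucose = {reading:.0f} mg/dL, dose = {dose:.2f} units")
```

Price’s point is that in a real device the `decide_dose` step is a trained algorithm nobody can fully inspect, and it may keep changing as it learns from new data.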

Can AI systems be like people?

Susan Baxter, a medical writer with over 20 years experience, a PhD in health economics, and author of countless magazine articles and several books, offers a more person-centered approach to the discussion in her July 6, 2018 posting on susanbaxter.com,

The fascination with AI continues to irk, given that every second thing I read seems to be extolling the magic of AI and medicine and how It Will Change Everything. Which it will not, trust me. The essential issue of illness remains perennial and revolves around an individual for whom no amount of technology will solve anything without human contact. …

But in this world, or so we are told by AI proponents, radiologists will soon be obsolete. [my August 20, 2018 post] The adaptational learning capacities of AI mean that reading a scan or x-ray will soon be more ably done by machines than humans. The presupposition here is that we, the original programmers of this artificial intelligence, understand the vagaries of real life (and real disease) so wonderfully that we can deconstruct these much as we do the game of chess (where, let’s face it, Big Blue [Deep Blue] ate our lunch) and that analyzing a two-dimensional image of a three-dimensional body, already problematic, can be reduced to a series of algorithms.

Attempting to extrapolate what some “shadow” on a scan might mean in a flesh and blood human isn’t really quite the same as bishop to knight seven. Never mind the false positive/negatives that are considered an acceptable risk or the very real human misery they create.

Moravec called it

It’s called Moravec’s paradox, the inability of humans to realize just how complex basic physical tasks are – and the corresponding inability of AI to mimic it. As you walk across the room, carrying a glass of water, talking to your spouse/friend/cat/child; place the glass on the counter and open the dishwasher door with your foot as you open a jar of pickles at the same time, take a moment to consider just how many concurrent tasks you are doing and just how enormous the computational power these ostensibly simple moves would require.

Researchers in Singapore taught industrial robots to assemble an Ikea chair. Essentially, screw in the legs. A person could probably do this in a minute. Maybe two. The preprogrammed robots took nearly half an hour. And I suspect programming those robots took considerably longer than that.

Ironically, even Elon Musk, who has had major production problems with the Tesla cars rolling out of his high tech factory, has conceded (in a tweet) that “Humans are underrated.”

I wouldn’t necessarily go that far given the political shenanigans of Trump & Co. but in the grand scheme of things I tend to agree. …

Is AI going the way of gene therapy?

Susan draws a parallel between the AI and medicine discussion with the discussion about genetics and medicine (Note: Links have been removed),

On a somewhat similar note – given the extent to which genetics discourse has that same linear, mechanistic  tone [as AI and medicine] – it turns out all this fine talk of using genetics to determine health risk and whatnot is based on nothing more than clever marketing, since a lot of companies are making a lot of money off our belief in DNA. Truth is half the time we don’t even know what a gene is never mind what it actually does;  geneticists still can’t agree on how many genes there are in a human genome, as this article in Nature points out.

Along the same lines, I was most amused to read about something called the Super Seniors Study, research following a group of individuals in their 80’s, 90’s and 100’s who seem to be doing really well. Launched in 2002 and headed by Angela Brooks Wilson, a geneticist at the BC [British Columbia] Cancer Agency and SFU [Simon Fraser University] Chair of biomedical physiology and kinesiology, this longitudinal work is examining possible factors involved in healthy ageing.

Turns out genes had nothing to do with it, the title of the Globe and Mail article notwithstanding. (“Could the DNA of these super seniors hold the secret to healthy aging?” The answer, a resounding “no”, well hidden at the very [end], the part most people wouldn’t even get to.) All of these individuals who were racing about exercising and working part time and living the kind of life that makes one tired just reading about it all had the same “multiple (genetic) factors linked to a high probability of disease”. You know, the gene markers they tell us are “linked” to cancer, heart disease, etc., etc. But these super seniors had all those markers but none of the diseases, demonstrating (pretty strongly) that the so-called genetic links to disease are a load of bunkum. Which (she said modestly) I have been saying for more years than I care to remember. You’re welcome.

The fundamental error in this type of linear thinking is in allowing our metaphors (genes are the “blueprint” of life) and propensity towards social ideas of determinism to overtake common sense. Biological and physiological systems are not static; they respond to and change to life in its entirety, whether it’s diet and nutrition to toxic or traumatic insults. Immunity alters, endocrinology changes, – even how we think and feel affects the efficiency and effectiveness of physiology. Which explains why as we age we become increasingly dissimilar.

If you have the time, I encourage you to read Susan’s comments in their entirety.

Scientific certainties

Following on with genetics, gene therapy dreams, and the complexity of biology, the June 19, 2018 Nature article by Cassandra Willyard (mentioned in Susan’s posting) highlights an aspect of scientific research not often mentioned in public,

One of the earliest attempts to estimate the number of genes in the human genome involved tipsy geneticists, a bar in Cold Spring Harbor, New York, and pure guesswork.

That was in 2000, when a draft human genome sequence was still in the works; geneticists were running a sweepstake on how many genes humans have, and wagers ranged from tens of thousands to hundreds of thousands. Almost two decades later, scientists armed with real data still can’t agree on the number — a knowledge gap that they say hampers efforts to spot disease-related mutations.

In 2000, with the genomics community abuzz over the question of how many human genes would be found, Ewan Birney launched the GeneSweep contest. Birney, now co-director of the European Bioinformatics Institute (EBI) in Hinxton, UK, took the first bets at a bar during an annual genetics meeting, and the contest eventually attracted more than 1,000 entries and a US$3,000 jackpot. Bets on the number of genes ranged from more than 312,000 to just under 26,000, with an average of around 40,000. These days, the span of estimates has shrunk — with most now between 19,000 and 22,000 — but there is still disagreement (See ‘Gene Tally’).

… the inconsistencies in the number of genes from database to database are problematic for researchers, Pruitt says. “People want one answer,” she [Kim Pruitt, a genome researcher at the US National Center for Biotechnology Information (NCBI) in Bethesda, Maryland] adds, “but biology is complex.”

I wanted to note that scientists do make guesses and not just with genetics. For example, Gina Mallet’s 2005 book ‘Last Chance to Eat: The Fate of Taste in a Fast Food World’ recounts the story of how good and bad levels of cholesterol were established—the experts made some guesses based on their experience. That said, Willyard’s article details the continuing effort to nail down the number of genes almost 20 years after the human genome project was completed and delves into the problems the scientists have uncovered.

Final comments

In addition to opaque processes with developers/entrepreneurs wanting to maintain their secrets for competitive advantages and in addition to our own poor understanding of the human body (how many genes are there anyway?), there are some major gaps (reflected in AI) in our understanding of various diseases. Angela Lashbrook’s August 16, 2018 article for The Atlantic highlights some issues with skin cancer and the shade of your skin (Note: Links have been removed),

… While fair-skinned people are at the highest risk for contracting skin cancer, the mortality rate for African Americans is considerably higher: Their five-year survival rate is 73 percent, compared with 90 percent for white Americans, according to the American Academy of Dermatology.

As the rates of melanoma for all Americans continue a 30-year climb, dermatologists have begun exploring new technologies to try to reverse this deadly trend—including artificial intelligence. There’s been a growing hope in the field that using machine-learning algorithms to diagnose skin cancers and other skin issues could make for more efficient doctor visits and increased, reliable diagnoses. The earliest results are promising—but also potentially dangerous for darker-skinned patients.

… Avery Smith, … a software engineer in Baltimore, Maryland, co-authored a paper in JAMA [Journal of the American Medical Association] Dermatology that warns of the potential racial disparities that could come from relying on machine learning for skin-cancer screenings. Smith’s co-author, Adewole Adamson of the University of Texas at Austin, has conducted multiple studies on demographic imbalances in dermatology. “African Americans have the highest mortality rate [for skin cancer], and doctors aren’t trained on that particular skin type,” Smith told me over the phone. “When I came across the machine-learning software, one of the first things I thought was how it will perform on black people.”

Recently, a study that tested machine-learning software in dermatology, conducted by a group of researchers primarily out of Germany, found that “deep-learning convolutional neural networks,” or CNN, detected potentially cancerous skin lesions better than the 58 dermatologists included in the study group. The data used for the study come from the International Skin Imaging Collaboration, or ISIC, an open-source repository of skin images to be used by machine-learning algorithms. Given the rise in melanoma cases in the United States, a machine-learning algorithm that assists dermatologists in diagnosing skin cancer earlier could conceivably save thousands of lives each year.

… Chief among the prohibitive issues, according to Smith and Adamson, is that the data the CNN relies on come from primarily fair-skinned populations in the United States, Australia, and Europe. If the algorithm is basing most of its knowledge on how skin lesions appear on fair skin, then theoretically, lesions on patients of color are less likely to be diagnosed. “If you don’t teach the algorithm with a diverse set of images, then that algorithm won’t work out in the public that is diverse,” says Adamson. “So there’s risk, then, for people with skin of color to fall through the cracks.”

As Adamson and Smith’s paper points out, racial disparities in artificial intelligence and machine learning are not a new issue. Algorithms have mistaken images of black people for gorillas, misunderstood Asians to be blinking when they weren’t, and “judged” only white people to be attractive. An even more dangerous issue, according to the paper, is that decades of clinical research have focused primarily on people with light skin, leaving out marginalized communities whose symptoms may present differently.

The reasons for this exclusion are complex. According to Andrew Alexis, a dermatologist at Mount Sinai, in New York City, and the director of the Skin of Color Center, compounding factors include a lack of medical professionals from marginalized communities, inadequate information about those communities, and socioeconomic barriers to participating in research. “In the absence of a diverse study population that reflects that of the U.S. population, potential safety or efficacy considerations could be missed,” he says.

Adamson agrees, elaborating that with inadequate data, machine learning could misdiagnose people of color with nonexistent skin cancers—or miss them entirely. But he understands why the field of dermatology would surge ahead without demographically complete data. “Part of the problem is that people are in such a rush. This happens with any new tech, whether it’s a new drug or test. Folks see how it can be useful and they go full steam ahead without thinking of potential clinical consequences. …

Improving machine-learning algorithms is far from the only method to ensure that people with darker skin tones are protected against the sun and receive diagnoses earlier, when many cancers are more survivable. According to the Skin Cancer Foundation, 63 percent of African Americans don’t wear sunscreen; both they and many dermatologists are more likely to delay diagnosis and treatment because of the belief that dark skin is adequate protection from the sun’s harmful rays. And due to racial disparities in access to health care in America, African Americans are less likely to get treatment in time.
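Since the worry here is measurable, it’s worth noting what the fix looks like in practice: evaluate the algorithm’s accuracy separately for each skin type rather than reporting a single overall number. Here’s a minimal sketch of that kind of audit; the records below are fabricated, and a real audit would use labelled images (e.g., with Fitzpatrick skin-type annotations) and a real model’s predictions,

```python
from collections import defaultdict

records = [
    # (skin_type, true_label, model_prediction) -- fabricated examples
    ("I-II", "malignant", "malignant"),
    ("I-II", "benign",    "benign"),
    ("I-II", "malignant", "malignant"),
    ("V-VI", "malignant", "benign"),     # the dangerous miss
    ("V-VI", "benign",    "benign"),
    ("V-VI", "malignant", "benign"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for skin_type, truth, prediction in records:
    totals[skin_type] += 1
    hits[skin_type] += (truth == prediction)

for skin_type in sorted(totals):
    print(f"skin type {skin_type}: accuracy {hits[skin_type] / totals[skin_type]:.0%}")
```

A large gap between the groups is exactly the failure mode Smith and Adamson warn about, and it stays invisible if only the overall accuracy is published.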

Happy endings

I’ll add one thing to Price’s article, Susan’s posting, and Lashbrook’s article about the issues with AI, certainty, gene therapy, and medicine—the desire for a happy ending prefaced with an easy solution. If the easy solution isn’t possible, accommodations will be made, but that happy ending is a must. All disease will disappear and there will be peace on earth. (Nod to Susan Baxter and her many discussions with me about disease processes and happy endings.)

The solutions, for the most part, are seen as technological despite the mountain of evidence suggesting that technology reflects our own imperfect understanding of health and disease, and therefore provides what is, at best, an imperfect solution.

Also, we tend to underestimate just how complex humans are not only in terms of disease and health but also with regard to our skills, understanding, and, perhaps not often enough, our ability to respond appropriately in the moment.

There is much to celebrate in what has been accomplished: no more black death, no more smallpox, hip replacements, pacemakers, organ transplants, and much more. Yes, we should try to improve our medicine. But, maybe alongside the celebration we can welcome AI and other technologies with a lot less hype and a lot more skepticism.

Robot radiologists (artificially intelligent doctors)

Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eye-opening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),

Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Andrew Ng, founder of online learning platform Coursera and former CTO of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.

Musa has offered a compelling argument with lots of links to supporting evidence.

[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]

And the evidence keeps mounting. I just stumbled across this June 30, 2018 news item on Xinhuanet.com,

An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.

The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.

The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.

The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.

To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.

All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.

Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.

“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.

Dr. Lin Yi who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]

AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.

Bian Xiuwu, an academician with the Chinese Academy of Science and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]

Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]

Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.

Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.

China has introduced a series of plans in developing AI applications in recent years.

In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”

The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.

I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

Radiology isn’t the only area where experts might find themselves displaced.

Eye experts

It seems inroads have been made by artificial intelligence systems (AI) into the diagnosis of eye diseases. It got the ‘Fast Company’ treatment (exciting new tech, learn all about it) as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),

An artificial intelligence (AI) system, which can recommend the correct referral decision for more than 50 eye diseases as accurately as experts, has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].

The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,

More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.

Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”

“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.

Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.

To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.

The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.

Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.
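In outline, then, the design chains two networks: the first turns the raw OCT scan into a map of disease features, the second turns those features into a referral recommendation with a confidence percentage. Here’s a structural sketch of that hand-off; the functions and scores below are stand-ins of my own invention, not the actual system,

```python
def feature_network(oct_scan):
    """Stand-in for network #1: scan -> scores for features of eye disease."""
    # A real network would analyze the scan; here the scores are invented.
    return {"macular_edema": 0.82, "drusen": 0.10}

def referral_network(features):
    """Stand-in for network #2: features -> referral decision + confidence."""
    if features["macular_edema"] > 0.5:
        return "urgent referral", features["macular_edema"]
    return "observation only", 1 - max(features.values())

features = feature_network(oct_scan=None)        # no real scan here
decision, confidence = referral_network(features)
print(f"recommendation: {decision} (confidence {confidence:.0%})")
```

Exposing the intermediate feature scores is what gives clinicians something to scrutinize, per the first of the two design points above. The press release continues,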

The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.

If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.

The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.

Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.

Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”

Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research that can be carried out in the UK combining world leading industry and NIHR/NHS hospital/university partnerships.”

Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”

Here’s a link to and a citation for the study,

Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018

This paper is behind a paywall.

And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human.

The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!

I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.

The Royal Bank of Canada reports ‘Humans wanted’ and some thoughts on the future of work, robots, and artificial intelligence

It seems the Royal Bank of Canada (RBC or Royal Bank) wants to weigh in on, and influence, what new technologies will bring us and how they will affect our working lives. (I will be offering my critiques of the whole thing.)

Launch yourself into the future (if you’re a youth)

“I’m not planning on being replaced by a robot.” That’s the first line of text you’ll see if you go to the Royal Bank of Canada’s new Future Launch web space and latest marketing campaign and investment.

This whole endeavour is aimed at ‘youth’ and represents a $500M investment. Of course, that money will be invested over a 10-year period which works out to $50M per year and doesn’t seem quite so munificent given how much money Canadian banks make (from a March 1, 2017 article by Don Pittis for the Canadian Broadcasting Corporation [CBC] news website),

Yesterday [February 28, 2017] the Bank of Montreal [BMO] said it had made about $1.5 billion in three months.

That may be hard to put in context until you hear that it is an increase in profit of nearly 40 per cent from the same period last year and dramatically higher than stock watchers had been expecting.

Not all the banks have done as well as BMO this time. The Royal Bank’s profits were up 24 per cent at $3 billion. [emphasis mine] CIBC [Canadian Imperial Bank of Commerce] profits were up 13 per cent. TD [Toronto Dominion] releases its numbers tomorrow.

Those numbers would put the RBC on track to a profit of roughly $12B in 2017. This means $500M represents approximately 4.2% of a single year’s profits; disbursed over a 10-year period, the investment works out to approximately 0.42% of annual profits per year, or less than half of one percent. Paradoxically, it’s a lot of money and it’s not that much money.
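For the skeptical, the arithmetic,

```python
# Checking the figures above (all rounded, as in the post).
quarterly_profit = 3.0                    # $B, RBC, one quarter
annual_profit = quarterly_profit * 4      # roughly $12B per year
commitment = 0.5                          # $B, spread over 10 years
years = 10

share_of_one_year = commitment / annual_profit
share_per_year = share_of_one_year / years
print(f"{share_of_one_year:.1%} of one year's profit")    # ~4.2%
print(f"{share_per_year:.2%} of annual profit, per year") # ~0.42%
```

In other words, less than half a cent of each year’s profit dollar, each year, for ten years.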

Advertising awareness

First, there was some advertising (in Vancouver at least),

[downloaded from http://flinflononline.com/local-news/356505]

You’ll notice she has what could be described as a ‘halo’. Is she an angel or, perhaps, she’s an RBC angel? After all, yellow and gold are closely associated as colours and RBC sports a partially yellow logo. As well, the model is wearing a blue denim jacket, RBC’s other logo colour.

Her ‘halo’ is intact but those bands of colour bend a bit and could be described as ‘rainbow-like’ bringing to mind ‘pots of gold’ at the end of the rainbow.  Free association is great fun and allows people to ascribe multiple and/or overlapping ideas and stories to the advertising. For example, people who might not approve of imagery that hearkens to religious art might have an easier time with rainbows and pots of gold. At any rate, none of the elements in images/ads are likely to be happy accidents or coincidence. They are intended to evoke certain associations, e.g., anyone associated with RBC will be blessed with riches.

The timing is deliberate, too, just before Easter 2018 (April 1), suggesting to some of us that even when the robots arrive destroying the past, youth will rise up (resurrection) for a new future. Or, if you prefer, Passover and its attendant themes of being spared and moving to the Promised Land.

Enough with the semiotic analysis and onto campaign details.

Humans Wanted: an RBC report

It seems the precursor to Future Launch is an RBC report, ‘Humans Wanted’, which itself is the outcome of still earlier work such as this Brookfield Institute for Innovation + Entrepreneurship (BII+E) report, Future-proof: Preparing young Canadians for the future of work, March 2017 (authors: Creig Lamb and Sarah Doyle), which features a quote from RBC’s President and CEO (Chief Executive Officer) David McKay,

“Canada’s future prosperity and success will rely on us harnessing the innovation of our entire talent pool. A huge part of our success will depend on how well we integrate this next generation of Canadians into the workforce. Their confidence, optimism and inspiration could be the key to helping us reimagine traditional business models, products and ways of working.”  David McKay, President and CEO, RBC

There are a number of major trends that have the potential to shape the future of work, from climate change and resource scarcity to demographic shifts resulting from an aging population and immigration. This report focuses on the need to prepare Canada’s youth for a future where a great number of jobs will be rapidly created, altered or made obsolete by technology.

Successive waves of technological advancements have rocked global economies for centuries, reconfiguring the labour force and giving rise to new economic opportunities with each wave. Modern advances, including artificial intelligence and robotics, once again have the potential to transform the economy, perhaps more rapidly and more dramatically than ever before. As past pillars of Canada’s economic growth become less reliable, harnessing technology and innovation will become increasingly important in driving productivity and growth. 1, 2, 3

… (p. 2 print; p. 4 PDF)

The Brookfield Institute (at Ryerson University in Toronto, Ontario, Canada) report is worth reading if for no other reason than its Endnotes. Unlike the RBC materials, you can find the source for the information in the Brookfield report.

After Brookfield, there was the RBC Future Launch Youth Forums 2017: What We Learned  document (October 13, 2017 according to ‘View Page Info’),

In this rapidly changing world, there’s a new reality when it comes to work. A degree or diploma no longer guarantees a job, and some of the positions, skills and trades of today won’t exist – or be relevant – in the future.

Through an unprecedented 10-year, $500 million commitment, RBC Future Launch™ is focused on driving real change and preparing today’s young people for the future world of work, helping them access the skills, job experience and networks that will enable their success.

At the beginning of this 10-year journey RBC® wanted to go beyond research and expert reports to better understand the regional issues facing youth across Canada and to hear directly from young people and organizations that work with them. From November 2016 to May 2017, the RBC Future Launch team held 15 youth forums across the country, bringing together over 430 partners, including young people, to uncover ideas and talk through solutions to address the workforce gaps Canada’s youth face today.

Finally,  a March 26, 2018 RBC news release announces the RBC report: ‘Humans Wanted – How Canadian youth can thrive in the age of disruption’,

Automation to impact at least 50% of Canadian jobs in the next decade: RBC research

Human intelligence and intuition critical for young people and jobs of the future

  • Being ‘human’ will ensure resiliency in an era of disruption and artificial intelligence
  • Skills mobility – the ability to move from one job to another – will become a new competitive advantage

TORONTO, March 26, 2018 – A new RBC research paper, Humans Wanted – How Canadian youth can thrive in the age of disruption, has revealed that 50% of Canadian jobs will be disrupted by automation in the next 10 years.

As a result of this disruption, Canada’s Gen Mobile – young people who are currently transitioning from education to employment – are unprepared for the rapidly changing workplace. With 4 million Canadian youth entering the workforce over the next decade, and the shift from a jobs economy to a skills economy, the research indicates young people will need a portfolio of “human skills” to remain competitive and resilient in the labour market.

“Canada is at a historic cross-roads – we have the largest generation of young people coming into the workforce at the very same time technology is starting to impact most jobs in the country,” said Dave McKay, President and CEO, RBC. “Canada is on the brink of a skills revolution and we have a responsibility to prepare young people for the opportunities and ambiguities of the future.”

“There is a changing demand for skills,” said John Stackhouse, Senior Vice-President, RBC. “According to our findings, if employers and the next generation of employees focus on foundational ‘human skills’, they’ll be better able to navigate a new age of career mobility as technology continues to reshape every aspect of the world around us.”

Key Findings:

  • Canada’s economy is on target to add 2.4 million jobs over the next four years, virtually all of which will require a different mix of skills.
  • A growing demand for “human skills” will grow across all job sectors and include: critical thinking, co-ordination, social perceptiveness, active listening and complex problem solving.
  • Rather than a nation of coders, digital literacy – the ability to understand digital items, digital technologies or the Internet fluently – will be necessary for all new jobs.
  • Canada’s education system, training programs and labour market initiatives are inadequately designed to help Canadian youth navigate the new skills economy, resulting in roughly half a million 15-29 year olds who are unemployed and another quarter of a million who are working part-time involuntarily.
  • Canadian employers are generally not prepared, through hiring, training or retraining, to recruit and develop the skills needed to ensure their organizations remain competitive in the digital economy.

“As digital and machine technology advances, the next generation of Canadians will need to be more adaptive, creative and collaborative, adding and refining skills to keep pace with a world of work undergoing profound change,” said McKay. “Canada’s future prosperity depends on getting a few big things right and that’s why we’ve introduced RBC Future Launch.”

RBC Future Launch is a decade-long commitment to help Canadian youth prepare for the jobs of tomorrow. RBC is committed to acting as a catalyst for change, bringing government, educators, public sector and not-for-profits together to co-create solutions to help young people better prepare for the future of the work through “human skills” development, networking and work experience.

Top recommendations from the report include:

  • A national review of post-secondary education programs to assess their focus on “human skills” including global competencies
  • A national target of 100% work-integrated learning, to ensure every undergraduate student has the opportunity for an apprenticeship, internship, co-op placement or other meaningful experiential placement
  • Standardization of labour market information across all provinces and regions, and a partnership with the private sector to move skills and jobs information to real-time, interactive platforms
  • The introduction of a national initiative to help employers measure foundational skills and incorporate them in recruiting, hiring and training practices

Join the conversation with Dave McKay and John Stackhouse on Wednesday, March 28 [2018] at 9:00 a.m. to 10:00 a.m. EDT at RBC Disruptors on Facebook Live.

Click here to read: Humans Wanted – How Canadian youth can thrive in the age of disruption.

About the Report
RBC Economics amassed a database of 300 occupations and drilled into the skills required to perform them now and projected into the future. The study groups the Canadian economy into six major clusters based on skillsets as opposed to traditional classifications and sectors. This cluster model is designed to illustrate the ease of transition between dissimilar jobs as well as the relevance of current skills to jobs of the future.

Six Clusters
Doers: Emphasis on basic skills
Transition: Greenhouse worker to crane operator
High Probability of Disruption

Crafters: Medium technical skills; low in management skills
Transition: Farmer to plumber
Very High Probability of Disruption

Technicians: High in technical skills
Transition: Car mechanic to electrician
Moderate Probability of Disruption

Facilitators: Emphasis on emotional intelligence
Transition: Dental assistant to graphic designer
Moderate Probability of Disruption

Providers: High in Analytical Skills
Transition: Real estate agent to police officer
Low Probability of Disruption

Solvers: Emphasis on management skills and critical thinking
Transition: Mathematician to software engineer
Minimal Probability of Disruption

About RBC
Royal Bank of Canada is a global financial institution with a purpose-driven, principles-led approach to delivering leading performance. Our success comes from the 81,000+ employees who bring our vision, values and strategy to life so we can help our clients thrive and communities prosper. As Canada’s biggest bank, and one of the largest in the world based on market capitalization, we have a diversified business model with a focus on innovation and providing exceptional experiences to our 16 million clients in Canada, the U.S. and 34 other countries. Learn more at rbc.com.‎

We are proud to support a broad range of community initiatives through donations, community investments and employee volunteer activities. See how at http://www.rbc.com/community-sustainability/.

– 30 – 

The report features a lot of bulleted points, airy text (large fonts and lots of space between the lines), inoffensive graphics, and human interest stories illustrating the points made elsewhere in the text.

There is no bibliography or any form of note telling you where to find the sources for the information in the report. The 2.4M jobs mentioned in the news release are also mentioned in the report on p. 16 (PDF) and are credited in the main body of the text to the EDSC. I’m not up-to-date on my abbreviations but I’m pretty sure it does not stand for East Doncaster Secondary College or East Duplin Soccer Club. I’m betting it stands for Employment and Social Development Canada. All that led to visiting the EDSC website and trying (unsuccessfully) to find the report or data sheet used to supply the figures RBC quoted in their report and news release.

Also, I’m not sure who came up with or how they developed the ‘crafters, ‘doers’, ‘technicians’, etc. categories.
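Since RBC doesn’t explain its method, here’s my guess at the general shape of it: represent each occupation as a vector of skill-importance scores (the kind of data O*NET publishes) and run a clustering algorithm over those vectors, so that occupations with similar skill profiles land in the same group. A bare-bones k-means sketch, with invented two-skill vectors, purely to illustrate the idea,

```python
import random

# Invented (technical skill, management skill) scores per occupation.
occupations = {
    "greenhouse worker": (0.2, 0.1),
    "crane operator":    (0.3, 0.1),
    "car mechanic":      (0.7, 0.2),
    "electrician":       (0.8, 0.3),
    "mathematician":     (0.6, 0.8),
    "software engineer": (0.7, 0.9),
}

def kmeans(points, k, steps=20):
    centroids = random.sample(points, k)
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each occupation to its nearest centroid.
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

print(kmeans(list(occupations.values()), k=3))  # occupations grouped by skill profile
```

Something along these lines would also explain the ‘ease of transition’ framing: two jobs in the same cluster are, by construction, close together in skill space. Whether that’s what RBC Economics actually did, the report doesn’t say.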

Here’s more from p. 2 of their report,

CANADA, WE HAVE A PROBLEM. [emphasis mine] We’re hurtling towards the 2020s with perfect hindsight, not seeing what’s clearly before us. The next generation is entering the workforce at a time of profound economic, social and technological change. We know it. [emphasis mine] Canada’s youth know it. And we’re not doing enough about it.

RBC wants to change the conversation, [emphasis mine] to help Canadian youth own the 2020s — and beyond. RBC Future Launch is our 10-year commitment to that cause, to help young people prepare for and navigate a new world of work that, we believe, will fundamentally reshape Canada. For the better. If we get a few big things right.

This report, based on a year-long research project, is designed to help that conversation. Our team conducted one of the biggest labour force data projects [emphasis mine] in Canada, and crisscrossed the country to speak with students and workers in their early careers, with educators and policymakers, and with employers in every sector.

We discovered a quiet crisis — of recent graduates who are overqualified for the jobs they’re in, of unemployed youth who weren’t trained for the jobs that are out there, and young Canadians everywhere who feel they aren’t ready for the future of work.

Sarcasm ahead

There’s nothing like starting your remarks with a paraphrased quote from a US movie about the Apollo 13 spacecraft crisis as in, “Houston, we have a problem.” I’ve always preferred Trudeau (senior) and his comment about ‘keeping our noses out of the nation’s bedrooms’. It’s not applicable but it’s more amusing and a Canadian quote to boot.

So, we know we’re having a crisis which we know about but RBC wants to tell us about it anyway (?) and RBC wants to ‘change the conversation’. OK. So how does presenting the RBC Future Launch change the conversation? Especially in light of the fact that the conversation has already been held, “a year-long research project … Our team conducted one of the biggest labour force data projects [emphasis mine] in Canada, and crisscrossed the country to speak with students and workers in their early careers, with educators and policymakers, and with employers in every sector.” Is the proposed change something along the lines of ‘Don’t worry, be happy; RBC has six categories (Doers, Crafters, Technicians, Facilitators, Providers, Solvers) for you.’ (Yes, for those who recognized it, I’m referencing Bobby McFerrin’s hit song, Don’t Worry, Be Happy.)

Also, what data did RBC collect and how do they collect it? Could Facebook and other forms of social media have been involved? (My March 29, 2018 posting mentions the latest Facebook data scandal; scroll down about 80% of the way.)

These are the people leading the way and ‘changing the conversation’, as it were, and they can’t present logical, coherent points. What kind of conversation could they possibly have with youth (or anyone else for that matter)?

And, if part of the problem is that employers are not planning for the future, how does Future Launch ‘change that part of the conversation’?

RBC Future Launch

Days after the report’s release, there’s the Future Launch announcement in an RBC March 28, 2018 news release,

TORONTO, March 28, 2017 – In an era of unprecedented economic and technological change, RBC is today unveiling its largest-ever commitment to Canada’s future. RBC Future Launch is a 10-year, $500-million initiative to help young people gain access and opportunity to the skills, job experience and career networks needed for the future world of work.

“Tomorrow’s prosperity will depend on today’s young people and their ability to take on a future that’s equally inspiring and unnerving,” said Dave McKay, RBC president and CEO. “We’re sitting at an intersection of history, as a massive generational shift and unprecedented technological revolution come together. And we need to ensure young Canadians are prepared to help take us forward.”

Future Launch is a core part of RBC’s celebration of Canada 150, and is the result of two years of conversations with young Canadians from coast to coast to coast.

“Young people – Canada’s future – have the confidence, optimism and inspiration to reimagine the way our country works,” McKay said. “They just need access to the capabilities and connections to make the 21st century, and their place in it, all it should be.”

Working together with young people, RBC will bring community leaders, industry experts, governments, educators and employers to help design solutions and harness resources for young Canadians to chart a more prosperous and inclusive future.

Over 10 years, RBC Future Launch will invest in areas that help young people learn skills, experience jobs, share knowledge and build resilience. The initiative will address the following critical gaps:

  • A lack of relevant experience. Too many young Canadians miss critical early opportunities because they’re stuck in a cycle of “no experience, no job.” According to the consulting firm McKinsey & Co., 83 per cent of educators believe youth are prepared for the workforce, but only 34 per cent of employers and 44 per cent of young people agree. RBC will continue to help educators and employers develop quality work-integrated learning programs to build a more dynamic bridge between school and work.
  • A lack of relevant skills. Increasingly, young people entering the workforce require a complex set of technical, entrepreneurial and social skills that cannot be attained solely through a formal education. A 2016 report from the World Economic Forum states that by 2020, more than a third of the desired core skill-sets of most occupations will be different from today — if that job still exists. RBC will help ensure young Canadians gain the skills, from critical thinking to coding to creative design, that will help them integrate into the workplace of today, and be more competitive for the jobs of tomorrow.
  • A lack of knowledge networks. Young people are at a disadvantage in the job market if they don’t have an opportunity to learn from others and discover the realities of jobs they’re considering. Many have told RBC that there isn’t enough information on the spectrum of jobs that are available. From social networks to mentoring programs, RBC will harness the vast knowledge and goodwill of Canadians in guiding young people to the opportunities that exist and will exist, across Canada.
  • A lack of future readiness. Many young Canadians know their future will be defined by disruption. A new report, Future-proof: Preparing young Canadians for the future of work, by the Brookfield Institute for Innovation + Entrepreneurship, found that 42 per cent of the Canadian labour force is at a high risk of being affected by automation in the next 10 to 20 years. Young Canadians are okay with that: they want to be the disruptors and make the future workforce more creative and productive. RBC will help to create opportunities, through our education system, workplaces and communities at large to help young Canadians retool, rethink and rebuild as the age of disruption takes hold.

By helping young people unlock their potential and launch their careers, RBC can assist them with building a stronger future for themselves, and a more prosperous Canada for all. RBC created The Launching Careers Playbook, an interactive, digital resource focused on enabling young people to reach their full potential through three distinct modules: I am starting my career; I manage interns and I create internship programs. The Playbook shares the design principles, practices, and learnings captured from the RBC Career Launch Program over three years, as well as the research and feedback RBC has received from young people and their managers.

More information on RBC Future Launch can be found at www.rbc.com/futurelaunch.

Weirdly, this news release is the only document that gives you sources for some of RBC’s information. If you’re so inclined, you can check the original reports cited in the news release and decide whether you agree with the conclusions the RBC people drew from them.

Cynicism ahead

They are planning to change the conversation, are they? I can’t help wondering what return RBC is expecting to make on its investment ($500M over 10 years). The RBC brand is prominently displayed not only on the launch page but in several of the subtopics listed on the page.

There appears to be some very good and helpful information, although much of it leads you to use a bank for one reason or another. For example, if you’re planning to become an entrepreneur (and there is serious pressure from the government of Canada on this generation to become precisely that), then it’s very handy that you have easy access to RBC from any of the Future Launch pages. As well, you can easily apply for a job at, or get a loan from, RBC after you’ve done some of the exercises on the website and possibly given RBC a lot of data about yourself.

For anyone who believes I’m being harsh about the bank, you might want to check out a March 15, 2017 article by Erica Johnson for the Canadian Broadcasting Corporation’s Go Public website. It highlights just how ruthless Canadian banks can be,

Employees from all five of Canada’s big banks have flooded Go Public with stories of how they feel pressured to upsell, trick and even lie to customers to meet unrealistic sales targets and keep their jobs.

The deluge is fuelling multiple calls for a parliamentary inquiry, even as the banks claim they’re acting in customers’ best interests.

In nearly 1,000 emails, employees from RBC, BMO, CIBC, TD and Scotiabank locations across Canada describe the pressures to hit targets that are monitored weekly, daily and in some cases hourly.

“Management is down your throat all the time,” said a Scotiabank financial adviser. “They want you to hit your numbers and it doesn’t matter how.”

CBC has agreed to protect their identities because the workers are concerned about current and future employment.

An RBC teller from Thunder Bay, Ont., said even when customers don’t need or want anything, “we need to upgrade their Visa card, increase their Visa limits or get them to open up a credit line.”

“It’s not what’s important to our clients anymore,” she said. “The bank wants more and more money. And it’s leading everyone into debt.”

A CIBC teller said, “I am expected to aggressively sell products, especially Visa. Hit those targets, who cares if it’s hurting customers.”

….

Many bank employees described pressure tactics used by managers to try to increase sales.

An RBC certified financial planner in Guelph, Ont., said she’s been threatened with pay cuts and losing her job if she doesn’t upsell enough customers.

“Managers belittle you,” she said. “We get weekly emails that highlight in red the people who are not hitting those sales targets. It’s bullying.”

Employees at several RBC branches in Calgary said there are white boards posted in the staff room that list which financial advisers are meeting their sales targets and which advisers are coming up short.

A CIBC small business associate who quit in January after nine years on the job said her district branch manager wasn’t pleased with her sales results when she was pregnant.

While working in Waterloo, Ont., she says her manager also instructed staff to tell all new international students looking to open a chequing account that they had to open a “student package,” which also included a savings account, credit card and overdraft.

“That is unfair and not the law, but we were told to do it for all of them.”

Go Public requested interviews with the CEOs of the five big banks — BMO, CIBC, RBC, Scotiabank and TD — but all declined.

If you have the time, it’s worth reading Johnson’s article in its entirety as it provides some fascinating insight into Canadian banking practices.

Final comments and an actual ‘conversation’ about the future of work

I’m torn. It’s good to see an attempt to grapple with the extraordinary changes we are likely to see in the not-so-distant future. At the same time, it’s hard to believe that this Future Launch initiative is anything other than a self-interested means of profiting from fears about the future and a massive public relations campaign designed to engender goodwill. Doubly so given the very bad publicity the banks, including RBC, garnered in 2017, as described in Johnson’s article.

Also, RBC and who knows how many other vested interests appear to have gathered data and information, which they’ve used to draw any number of conclusions. First, I can’t find any information about what data RBC is gathering, who else might have access to it, and what plans, if any, RBC has for using it. Second, RBC seems to have predetermined how this ‘future of work’ conversation needs to be changed.

I suggest treading as lightly as possible and keeping in mind other ‘conversations’ are possible. For example, Mike Masnick at Techdirt has an April 3, 2018 posting about a new ‘future of work’ initiative,

For the past few years, there have been plenty of discussions about “the future of work,” but they tend to fall into one of two camps. You have the pessimists, who insist that the coming changes wrought by automation and artificial intelligence will lead to fewer and fewer jobs, as all of the jobs of today are automated out of existence. Then, there are the optimists who point to basically every single past similar prediction of doom and gloom due to innovation, which have always turned out to be incorrect. People in this camp point out that technology is more likely to augment than replace human-based work, and vaguely insist that “the jobs will come.” Whether you fall into one of those two camps — or somewhere in between or somewhere else entirely — one thing I’d hope most people can agree on is that the future of work will be… different.

Separately, we’re also living in an age where it is increasingly clear that those in and around the technology industry must take more responsibility in thinking through the possible consequences of the innovations they’re bringing to life, and exploring ways to minimize the harmful results (and hopefully maximizing the beneficial ones).

That brings us to the project we’re announcing today, Working Futures, which is an attempt to explore what the future of work might really look like in the next ten to fifteen years. We’re doing this project in partnership with two organizations that we’ve worked with multiple times in the past: Scout.ai and R Street.

….

The key point of this project: rather than just worry about the bad stuff or hand-wave around the idea of good stuff magically appearing, we want to really dig in — figure out what new jobs may actually appear, look into what benefits may accrue as well as what harms may be dished out — and see if there are ways to minimize the negative consequences, while pushing the world towards the beneficial consequences.

To do that, we’re kicking off a variation on the classic concept of scenario planning, bringing together a wide variety of individuals with different backgrounds, perspectives and ideas to run through a fun and creative exercise to imagine the future, while staying based in reality. We’re adding in some fun game-like mechanisms to push people to think about where the future might head. We’re also updating the output side of traditional scenario planning by involving science fiction authors, who obviously have a long history of thinking up the future, and who will participate in this process and help to craft short stories out of the scenarios we build, making them entertaining, readable and perhaps a little less “wonky” than the output of more traditional scenario plans.

There you have it; the Royal Bank is changing the conversation and Techdirt is inviting you to join in scenario planning and more.

AI fairytale and April 25, 2018 AI event at Canada Science and Technology Museum*** in Ottawa

These days it’s all about artificial intelligence (AI) or robots and often, it’s both. They’re everywhere and they will take everyone’s jobs, or not, depending on how you view them. Today, I’ve got two artificial intelligence items, the first of which may provoke writers’ anxieties.

Fairytales

The Princess and the Fox is a new fairytale by the Brothers Grimm, or rather, by their artificially intelligent surrogate, according to an April 18, 2018 article on the British Broadcasting Corporation’s online news website,

It was recently reported that the meditation app Calm had published a “new” fairytale by the Brothers Grimm.

However, The Princess and the Fox was written not by the brothers, who died over 150 years ago, but by humans using an artificial intelligence (AI) tool.

It’s the first fairy tale written by an AI, claims Calm, and is the result of a collaboration with Botnik Studios – a community of writers, artists and developers. Calm says the technique could be referred to as “literary cloning”.

Botnik employees used a predictive-text program to generate words and phrases that might be found in the original Grimm fairytales. Human writers then pieced together sentences to form “the rough shape of a story”, according to Jamie Brew, chief executive of Botnik.

The full version is available to paying customers of Calm, but here’s a short extract:

“Once upon a time, there was a golden horse with a golden saddle and a beautiful purple flower in its hair. The horse would carry the flower to the village where the princess danced for joy at the thought of looking so beautiful and good.

Advertising for a meditation app?

Of course, it’s advertising and it’s ‘smart’ advertising (wordplay intended).

Blair Marnell’s April 18, 2018 article for SyFy Wire provides a bit more detail,

“You might call it a form of literary cloning,” said Calm co-founder Michael Acton Smith. Calm commissioned Botnik to use its predictive text program, Voicebox, to create a new Brothers Grimm story. But first, Voicebox was given the entire collected works of the Brothers Grimm to analyze, before it suggested phrases and sentences based upon those stories. Of course, human writers gave the program an assist when it came to laying out the plot. …

“The Brothers Grimm definitely have a reputation for darkness and many of their best-known tales are undoubtedly scary,” Peter Freedman told SYFY WIRE. Freedman is a spokesperson for Calm who was a part of the team behind the creation of this story. “In the process of machine-human collaboration that generated The Princess and The Fox, we did gently steer the story towards something with a more soothing, calm plot and vibe, that would make it work both as a new Grimm fairy tale and simultaneously as a Sleep Story on Calm.” [emphasis mine]

….

If Marnell’s article is to be believed, Peter Freedman doesn’t hold much hope for writers in the long-term future, although we don’t need to start ‘battening down the hatches’ yet.
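
For readers curious about the mechanics, predictive-text ‘literary cloning’ of this sort can be approximated with a simple statistical model: tally which words follow which in a training corpus, then offer plausible continuations for a human writer to accept, reject, or rearrange. Here’s a minimal sketch in Python; it’s my own illustration of the general technique, not Botnik’s Voicebox (whose internals haven’t been published), and the toy corpus is a hypothetical stand-in for the collected Grimm tales.

import random
from collections import defaultdict

def build_model(corpus_text, n=2):
    """Map each n-word prefix to the words observed to follow it in the corpus."""
    words = corpus_text.split()
    model = defaultdict(list)
    for i in range(len(words) - n):
        model[tuple(words[i:i + n])].append(words[i + n])
    return model

def suggest(model, prefix, count=3):
    """Offer up to `count` distinct candidate next words, phone-keyboard style."""
    candidates = list(model.get(tuple(prefix), []))
    random.shuffle(candidates)
    picks = []
    for word in candidates:
        if word not in picks:
            picks.append(word)
        if len(picks) == count:
            break
    return picks

# Toy stand-in corpus (hypothetical text, not the actual Grimm corpus).
corpus = ("once upon a time there was a golden horse and the horse carried "
          "a golden saddle to the village where the princess danced for joy")
model = build_model(corpus, n=2)
print(suggest(model, ["a", "golden"]))  # e.g. ['horse', 'saddle']

A human writer would then stitch the suggested fragments into sentences, which fits Jamie Brew’s description of piecing together “the rough shape of a story”.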

You can find Calm here.

You can find Botnik here and Botnik Studios here.

AI at Ingenium [Canada Science and Technology Museum] on April 25, 2018

Formerly known (I believe) [*Read the comments for the clarification] as the Canada Science and Technology Museum, Ingenium is hosting a ‘sold out but there will be a livestream’ Google event. From Ingenium’s ‘Curiosity on Stage Evening Edition with Google – The AI Revolution‘ event page,

Join Google, Inc. and the Canada Science and Technology Museum for an evening of thought-provoking discussions about artificial intelligence.

[April 25, 2018
7:00 p.m. – 10:00 p.m. {ET}
Fees: Free]

Invited speakers from industry leaders Google, Facebook, Element AI and Deepmind will explore the intersection of artificial intelligence with robotics, arts, social impact and healthcare. The session will end with a panel discussion and question-and-answer period. Following the event, there will be a reception along with light refreshments and networking opportunities.

The event will be simultaneously translated into both official languages as well as available via livestream from the Museum’s YouTube channel.

Seating is limited

THIS EVENT IS NOW SOLD OUT. Please join us for the livestream from the Museum’s YouTube channel. https://www.youtube.com/cstmweb *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 from someone at Ingenium.***

Speakers

David Usher (Moderator)

David Usher is an artist, best-selling author, entrepreneur and keynote speaker. As a musician he has sold more than 1.4 million albums, won 4 Junos and has had #1 singles singing in English, French and Thai. When David is not making music, he is equally passionate about his other life, as a Geek. He is the founder of Reimagine AI, an artificial intelligence creative studio working at the intersection of art and artificial intelligence. David is also the founder and creative director of the non-profit, the Human Impact Lab at Concordia University [located in Montréal, Québec]. The Lab uses interactive storytelling to revisualize the story of climate change. David is the co-creator, with Dr. Damon Matthews, of the Climate Clock. Climate Clock has been presented all over the world including the United Nations COP 23 Climate Conference and is presently on a three-year tour with the Canada Museum of Science and Innovation’s Climate Change Exhibit.

Joelle Pineau (Facebook)

The AI Revolution:  From Ideas and Models to Building Smart Robots
Joelle Pineau is head of the Facebook AI Research Lab Montreal, and an Associate Professor and William Dawson Scholar at McGill University. Dr. Pineau’s research focuses on developing new models and algorithms for automatic planning and learning in partially-observable domains. She also applies these algorithms to complex problems in robotics, health-care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is a AAAI Fellow, a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.

Pablo Samuel Castro (Google)

Building an Intelligent Assistant for Music Creators
Pablo was born and raised in Quito, Ecuador, and moved to Montreal after high school to study at McGill. He stayed in Montreal for the next 10 years, finished his bachelors, worked at a flight simulator company, and then eventually obtained his masters and PhD at McGill, focusing on Reinforcement Learning. After his PhD Pablo did a 10-month postdoc in Paris before moving to Pittsburgh to join Google. He has worked at Google for almost 6 years, and is currently a research Software Engineer in Google Brain in Montreal, focusing on fundamental Reinforcement Learning research, as well as Machine Learning and Music. Aside from his interest in coding/AI/math, Pablo is an active musician (https://www.psctrio.com), loves running (5 marathons so far, including Boston!), and discussing politics and activism.

Philippe Beaudoin (Element AI)

Concrete AI-for-Good initiatives at Element AI
Philippe cofounded Element AI in 2016 and currently leads its applied lab and AI-for-Good initiatives. His team has helped tackle some of the biggest and most interesting business challenges using machine learning. Philippe holds a Ph.D in Computer Science and taught virtual bipeds to walk by themselves during his postdoc at UBC. He spent five years at Google as a Senior Developer and Technical Lead Manager, partly with the Chrome Machine Learning team. Philippe also founded ArcBees, specializing in cloud-based development. Prior to that he worked in the videogame and graphics hardware industries. When he has some free time, Philippe likes to invent new boardgames — the kind of games where he can still beat the AI!

Doina Precup (Deepmind)

Challenges and opportunities for the AI revolution in health care
Doina Precup splits her time between McGill University, where she co-directs the Reasoning and Learning Lab in the School of Computer Science, and DeepMind Montreal, where she has led the newly formed research team since October 2017. She got her BSc degree in computer science from the Technical University Cluj-Napoca, Romania, and her MSc and PhD degrees from the University of Massachusetts-Amherst, where she was a Fulbright fellow. Her research interests are in the areas of reinforcement learning, deep learning, time series analysis, and diverse applications of machine learning in health care, automated control and other fields. She became a senior member of AAAI in 2015, a Canada Research Chair in Machine Learning in 2016 and a Senior Fellow of CIFAR in 2017.

Interesting, oui? Not a single expert from Ottawa or Toronto. Well, Element AI has an office in Toronto. Still, I wonder why this singular focus on AI in Montréal. After all, much of the foundational work on deep learning, currently the darling of machine learning, was done at the University of Toronto, and Toronto is home to the Canadian Institute for Advanced Research (CIFAR), the institution in charge of the Pan-Canadian Artificial Intelligence Strategy, and to the Vector Institute (more about that in my March 31, 2017 posting).

Enough with my musing: for those of us on the West Coast, there’s an opportunity to attend via livestream from 4 pm to 7 pm (PT) on April 25, 2018 on xxxxxxxxx. *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 and clarification as to the relationship between Ingenium and the Canada Science and Technology Museum from someone at Ingenium.***

For more about Element AI, go here; for more about DeepMind, go here for information about the parent company in the UK (the most I dug up about their Montréal office was this job posting); and, finally, Reimagine.AI is here.