Tag Archives: artificial intelligence

Artificial synapse based on tantalum oxide from Korean researchers

This memristor story comes from South Korea as we progress on the way to neuromorphic computing (brainlike computing). A Sept. 7, 2018 news item on ScienceDaily makes the announcement,

A research team led by Director Myoung-Jae Lee from the Intelligent Devices and Systems Research Group at DGIST (Daegu Gyeongbuk Institute of Science and Technology) has succeeded in developing an artificial synaptic device that mimics the function of the nerve cells (neurons) and synapses that are response for memory in human brains. [sic]

Synapses are where axons and dendrites meet so that neurons in the human brain can send and receive nerve signals; there are known to be hundreds of trillions of synapses in the human brain.

This chemical synapse information transfer system, which transfers information from the brain, can handle high-level parallel arithmetic with very little energy, so research on artificial synaptic devices, which mimic the biological function of a synapse, is under way worldwide.

Dr. Lee’s research team, through joint research with teams led by Professor Gyeong-Su Park from Seoul National University; Professor Sung Kyu Park from Chung-Ang University; and Professor Hyunsang Hwang from Pohang University of Science and Technology (POSTECH), developed a high-reliability artificial synaptic device with multiple values by structuring tantalum oxide — a trans-metallic material — into two layers of Ta2O5-x and TaO2-x and by controlling its surface.

A September 7, 2018 DGIST press release (also on EurekAlert), which originated the news item, delves further into the work,

The artificial synaptic device developed by the research team is an electrical synaptic device that simulates the function of synapses in the brain as the resistance of the tantalum oxide layer gradually increases or decreases depending on the strength of the electric signals. It has succeeded in overcoming durability limitations of current devices by allowing current control only on one layer of Ta2O5-x.

In addition, the research team successfully implemented an experiment that realized synapse plasticity [or synaptic plasticity], which is the process of creating, storing, and deleting memories, such as long-term strengthening of memory and long-term suppression of memory deleting by adjusting the strength of the synapse connection between neurons.

The non-volatile multiple-value data storage method applied by the research team has the technological advantage of having a small area of an artificial synaptic device system, reducing circuit connection complexity, and reducing power consumption by more than one-thousandth compared to data storage methods based on digital signals using 0 and 1 such as volatile CMOS (Complementary Metal Oxide Semiconductor).

The high-reliability artificial synaptic device developed by the research team can be used in ultra-low-power devices or circuits for processing massive amounts of big data due to its capability of low-power parallel arithmetic. It is expected to be applied to next-generation intelligent semiconductor device technologies such as development of artificial intelligence (AI) including machine learning and deep learning and brain-mimicking semiconductors.

Dr. Lee said, “This research secured the reliability of existing artificial synaptic devices and improved the areas pointed out as disadvantages. We expect to contribute to the development of AI based on the neuromorphic system that mimics the human brain by creating a circuit that imitates the function of neurons.”
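The long-term potentiation and depression the press release describes can be illustrated with a toy conductance model. What follows is a minimal sketch of the general analog-synapse idea, assuming a soft-bounded update rule with made-up parameters; it is not the DGIST team's device physics.

```python
# Toy model of an analog memristive synapse: conductance (the synaptic
# weight) is nudged up by potentiating pulses and down by depressing
# pulses, with soft saturation at the bounds. All names and constants
# are illustrative assumptions, not values from the paper.

G_MIN, G_MAX = 0.0, 1.0   # conductance bounds (arbitrary units)
ALPHA = 0.1               # fraction of the remaining range moved per pulse

def apply_pulse(g, polarity):
    """Apply one programming pulse; polarity +1 potentiates (LTP),
    -1 depresses (LTD)."""
    if polarity > 0:
        g += ALPHA * (G_MAX - g)   # step shrinks as g nears G_MAX
    else:
        g -= ALPHA * (g - G_MIN)   # step shrinks as g nears G_MIN
    return min(max(g, G_MIN), G_MAX)

g = 0.5
for _ in range(20):   # repeated potentiation: a 'memory' is strengthened
    g = apply_pulse(g, +1)
print(g)              # conductance has drifted close to G_MAX

for _ in range(20):   # repeated depression: the 'memory' is weakened
    g = apply_pulse(g, -1)
print(g)              # conductance has drifted back down near G_MIN
```

The soft-bounded update (smaller steps near the limits) is one common way of modelling the gradual, non-linear conductance changes these devices show; real devices are noisier and more asymmetric than this.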

Here’s a link to and a citation for the paper,

Reliable Multivalued Conductance States in TaOx Memristors through Oxygen Plasma-Assisted Electrode Deposition with in Situ-Biased Conductance State Transmission Electron Microscopy Analysis by Myoung-Jae Lee, Gyeong-Su Park, David H. Seo, Sung Min Kwon, Hyeon-Jun Lee, June-Seo Kim, MinKyung Jung, Chun-Yeol You, Hyangsook Lee, Hee-Goo Kim, Su-Been Pang, Sunae Seo, Hyunsang Hwang, and Sung Kyu Park. ACS Appl. Mater. Interfaces, 2018, 10 (35), pp 29757–29765 DOI: 10.1021/acsami.8b09046 Publication Date (Web): July 23, 2018

Copyright © 2018 American Chemical Society

This paper is open access.

You can find other memristor and neuromorphic computing stories here by using the search terms I’ve highlighted. My latest (more or less) is an April 19, 2018 posting titled, New path to viable memristor/neuristor?

Finally, here’s an image from the Korean researchers that accompanied their work,

Caption: Representation of neurons and synapses in the human brain. The magnified synapse represents the portion mimicked using solid-state devices. Credit: Daegu Gyeongbuk Institute of Science and Technology(DGIST)

If only AI had a brain (a Wizard of Oz reference?)

The title, which I’ve borrowed from the news release, is the only Wizard of Oz reference that I can find but it works so well, you don’t really need anything more.

Moving on to the news, a July 23, 2018 news item on phys.org announces new work on developing an artificial synapse (Note: A link has been removed),

Digital computation has rendered nearly all forms of analog computation obsolete since as far back as the 1950s. However, there is one major exception that rivals the computational power of the most advanced digital devices: the human brain.

The human brain is a dense network of neurons. Each neuron is connected to tens of thousands of others, and they use synapses to fire information back and forth constantly. With each exchange, the brain modulates these connections to create efficient pathways in direct response to the surrounding environment. Digital computers live in a world of ones and zeros. They perform tasks sequentially, following each step of their algorithms in a fixed order.

A team of researchers from Pitt’s [University of Pittsburgh] Swanson School of Engineering have developed an “artificial synapse” that does not process information like a digital computer but rather mimics the analog way the human brain completes tasks. Led by Feng Xiong, assistant professor of electrical and computer engineering, the researchers published their results in the recent issue of the journal Advanced Materials (DOI: 10.1002/adma.201802353). His Pitt co-authors include Mohammad Sharbati (first author), Yanhao Du, Jorge Torres, Nolan Ardolino, and Minhee Yun.

A July 23, 2018 University of Pittsburgh Swanson School of Engineering news release (also on EurekAlert), which originated the news item, provides further information,

“The analog nature and massive parallelism of the brain are partly why humans can outperform even the most powerful computers when it comes to higher order cognitive functions such as voice recognition or pattern recognition in complex and varied data sets,” explains Dr. Xiong.

An emerging field called “neuromorphic computing” focuses on the design of computational hardware inspired by the human brain. Dr. Xiong and his team built graphene-based artificial synapses in a two-dimensional honeycomb configuration of carbon atoms. Graphene’s conductive properties allowed the researchers to finely tune its electrical conductance, which is the strength of the synaptic connection or the synaptic weight. The graphene synapse demonstrated excellent energy efficiency, just like biological synapses.

In the recent resurgence of artificial intelligence, computers can already replicate the brain in certain ways, but it takes about a dozen digital devices to mimic one analog synapse. The human brain has hundreds of trillions of synapses for transmitting information, so building a brain with digital devices is seemingly impossible, or at the very least, not scalable. Xiong Lab’s approach provides a possible route for the hardware implementation of large-scale artificial neural networks.

According to Dr. Xiong, artificial neural networks based on the current CMOS (complementary metal-oxide semiconductor) technology will always have limited functionality in terms of energy efficiency, scalability, and packing density. “It is really important we develop new device concepts for synaptic electronics that are analog in nature, energy-efficient, scalable, and suitable for large-scale integrations,” he says. “Our graphene synapse seems to check all the boxes on these requirements so far.”

With graphene’s inherent flexibility and excellent mechanical properties, these graphene-based neural networks can be employed in flexible and wearable electronics to enable computation at the “edge of the internet”–places where computing devices such as sensors make contact with the physical world.

“By empowering even a rudimentary level of intelligence in wearable electronics and sensors, we can track our health with smart sensors, provide preventive care and timely diagnostics, monitor plants growth and identify possible pest issues, and regulate and optimize the manufacturing process–significantly improving the overall productivity and quality of life in our society,” Dr. Xiong says.

The development of an artificial brain that functions like the analog human brain still requires a number of breakthroughs. Researchers need to find the right configurations to optimize these new artificial synapses. They will need to make them compatible with an array of other devices to form neural networks, and they will need to ensure that all of the artificial synapses in a large-scale neural network behave in the same exact manner. Despite the challenges, Dr. Xiong says he’s optimistic about the direction they’re headed.

“We are pretty excited about this progress since it can potentially lead to the energy-efficient, hardware implementation of neuromorphic computing, which is currently carried out in power-intensive GPU clusters. The low-power trait of our artificial synapse and its flexible nature make it a suitable candidate for any kind of A.I. device, which would revolutionize our lives, perhaps even more than the digital revolution we’ve seen over the past few decades,” Dr. Xiong says.

There is a visual representation of this artificial synapse,

Caption: Pitt engineers built a graphene-based artificial synapse in a two-dimensional, honeycomb configuration of carbon atoms that demonstrated excellent energy efficiency comparable to biological synapses Credit: Swanson School of Engineering

Here’s a link to and a citation for the paper,

Low‐Power, Electrochemically Tunable Graphene Synapses for Neuromorphic Computing by Mohammad Taghi Sharbati, Yanhao Du, Jorge Torres, Nolan D. Ardolino, Minhee Yun, Feng Xiong. Advanced Materials DOI: https://doi.org/10.1002/adma.201802353 First published [online]: 23 July 2018

This paper is behind a paywall.

I did look at the paper and if I understand it rightly, this approach is different from the memristor-based approaches that I have so often featured here. More than that I cannot say.

Finally, the Wizard of Oz song ‘If I Only Had a Brain’,

A solar, self-charging supercapacitor for wearable technology

Ravinder Dahiya, Carlos García Núñez, and their colleagues at the University of Glasgow (Scotland) strike again (see my May 10, 2017 posting for their first ‘solar-powered graphene skin’ research announcement). Last time it was all about robots and prosthetics, this time they’ve focused on wearable technology according to a July 18, 2018 news item on phys.org,

A new form of solar-powered supercapacitor could help make future wearable technologies lighter and more energy-efficient, scientists say.

In a paper published in the journal Nano Energy, researchers from the University of Glasgow’s Bendable Electronics and Sensing Technologies (BEST) group describe how they have developed a promising new type of graphene supercapacitor, which could be used in the next generation of wearable health sensors.

A July 18, 2018 University of Glasgow press release, which originated the news item, explains further,

Currently, wearable systems generally rely on relatively heavy, inflexible batteries, which can be uncomfortable for long-term users. The BEST team, led by Professor Ravinder Dahiya, have built on their previous success in developing flexible sensors by developing a supercapacitor which could power health sensors capable of conforming to wearers’ bodies, offering more comfort and a more consistent contact with skin to better collect health data.

Their new supercapacitor uses layers of flexible, three-dimensional porous foam formed from graphene and silver to produce a device capable of storing and releasing around three times more power than any similar flexible supercapacitor. The team demonstrated the durability of the supercapacitor, showing that it provided power consistently across 25,000 charging and discharging cycles.

They have also found a way to charge the system by integrating it with flexible solar powered skin already developed by the BEST group, effectively creating an entirely self-charging system, as well as a pH sensor which uses wearer’s sweat to monitor their health.

Professor Dahiya said: “We’re very pleased by the progress this new form of solar-powered supercapacitor represents. A flexible, wearable health monitoring system which only requires exposure to sunlight to charge has a lot of obvious commercial appeal, but the underlying technology has a great deal of additional potential.

“This research could take the wearable systems for health monitoring to remote parts of the world where solar power is often the most reliable source of energy, and it could also increase the efficiency of hybrid electric vehicles. We’re already looking at further integrating the technology into flexible synthetic skin which we’re developing for use in advanced prosthetics.” [emphasis mine]

In addition to the team’s work on robots, prosthetics, and graphene ‘skin’ mentioned in the May 10, 2017 posting, the team is working on a synthetic ‘brainy’ skin for which they have just received £1.5m funding from the Engineering and Physical Science Research Council (EPSRC).

Brainy skin

A July 3, 2018 University of Glasgow press release discusses the proposed work in more detail,

A robotic hand covered in ‘brainy skin’ that mimics the human sense of touch is being developed by scientists.

University of Glasgow’s Professor Ravinder Dahiya has plans to develop ultra-flexible, synthetic Brainy Skin that ‘thinks for itself’.

The super-flexible, hypersensitive skin may one day be used to make more responsive prosthetics for amputees, or to build robots with a sense of touch.

Brainy Skin reacts like human skin, which has its own neurons that respond immediately to touch rather than having to relay the whole message to the brain.

This electronic ‘thinking skin’ is made from silicon based printed neural transistors and graphene – an ultra-thin form of carbon that is only an atom thick, but stronger than steel.

The new version is more powerful, less cumbersome and would work better than earlier prototypes, also developed by Professor Dahiya and his Bendable Electronics and Sensing Technologies (BEST) team at the University’s School of Engineering.

His futuristic research, called neuPRINTSKIN (Neuromorphic Printed Tactile Skin), has just received another £1.5m funding from the Engineering and Physical Science Research Council (EPSRC).

Professor Dahiya said: “Human skin is an incredibly complex system capable of detecting pressure, temperature and texture through an array of neural sensors that carry signals from the skin to the brain.

“Inspired by real skin, this project will harness the technological advances in electronic engineering to mimic some features of human skin, such as softness, bendability and now, also sense of touch. This skin will not just mimic the morphology of the skin but also its functionality.

“Brainy Skin is critical for the autonomy of robots and for a safe human-robot interaction to meet emerging societal needs such as helping the elderly.”

Synthetic ‘Brainy Skin’ with sense of touch gets £1.5m funding. Photo of Professor Ravinder Dahiya

This latest advance means tactile data is gathered over large areas by the synthetic skin’s computing system rather than sent to the brain for interpretation.

With additional EPSRC funding, which extends Professor Dahiya’s fellowship by another three years, he plans to introduce tactile skin with neuron-like processing. This breakthrough in the tactile sensing research will lead to the first neuromorphic tactile skin, or ‘brainy skin.’

To achieve this, Professor Dahiya will add a new neural layer to the e-skin that he has already developed using printed silicon nanowires.

Professor Dahiya added: “By adding a neural layer underneath the current tactile skin, neuPRINTSKIN will add significant new perspective to the e-skin research, and trigger transformations in several areas such as robotics, prosthetics, artificial intelligence, wearable systems, next-generation computing, and flexible and printed electronics.”

The Engineering and Physical Sciences Research Council (EPSRC) is part of UK Research and Innovation, a non-departmental public body funded by a grant-in-aid from the UK government.

EPSRC is the main funding body for engineering and physical sciences research in the UK. By investing in research and postgraduate training, the EPSRC is building the knowledge and skills base needed to address the scientific and technological challenges facing the nation.

Its portfolio covers a vast range of fields from healthcare technologies to structural engineering, manufacturing to mathematics, advanced materials to chemistry. The research funded by EPSRC has impact across all sectors. It provides a platform for future UK prosperity by contributing to a healthy, connected, resilient, productive nation.

It’s fascinating to note how these pieces of research fit together for wearable technology and health monitoring and creating more responsive robot ‘skin’ and, possibly, prosthetic devices that would allow someone to feel again.

The latest research paper

Getting back to the solar-charging supercapacitors mentioned in the opening, here’s a link to and a citation for the team’s latest research paper,

Flexible self-charging supercapacitor based on graphene-Ag-3D graphene foam electrodes by Libu Manjakkal, Carlos García Núñez, Wenting Dang, Ravinder Dahiya. Nano Energy Volume 51, September 2018, Pages 604-612 DOI: https://doi.org/10.1016/j.nanoen.2018.06.072

This paper is open access.

Brainy and brainy: a novel synaptic architecture and a neuromorphic computing platform called SpiNNaker

I have two items about brainlike computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns, just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

A July 10, 2018 NJIT news release (also on EurekAlert) by Tracey Regan, which originated the news item, adds more details,

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only chose one of them to be updated at each step based on the neuronal activity.”
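The scheme Nandakumar describes, several memristive devices in parallel representing one synaptic weight with only one device updated at each step, can be sketched in a few lines. The device model, the constants, and the global round-robin counter used for arbitration below are all illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of a multi-memristive synapse: the weight is the SUM of several
# device conductances, and each update programs only ONE device, picked
# by a global round-robin counter (one simple arbitration scheme).
# Constants and the noise model are illustrative, not from the paper.
import random

N_DEVICES = 4
G_MIN, G_MAX = 0.0, 1.0
STEP = 0.05               # nominal conductance change per pulse
NOISE = 0.01              # nanoscale devices program non-deterministically

class MultiMemristiveSynapse:
    counter = 0           # global arbitration counter shared by all synapses

    def __init__(self):
        self.g = [0.5] * N_DEVICES

    def weight(self):
        return sum(self.g)   # effective synaptic weight

    def update(self, polarity):
        i = MultiMemristiveSynapse.counter % N_DEVICES
        MultiMemristiveSynapse.counter += 1
        delta = polarity * STEP + random.gauss(0.0, NOISE)
        self.g[i] = min(max(self.g[i] + delta, G_MIN), G_MAX)

random.seed(0)
s = MultiMemristiveSynapse()
for _ in range(8):        # eight potentiating events spread across devices
    s.update(+1)
print(s.weight())         # above the initial 2.0; per-device noise averages out
```

Spreading the weight across several devices is what buys the reliability: each device only needs a few coarse, noisy conductance levels, yet their sum behaves like a finer-grained, less noisy synapse.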

Here’s a link to and a citation for the paper,

Neuromorphic computing with multi-memristive synapses by Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian, & Evangelos Eleftheriou. Nature Communications volume 9, Article number: 2514 (2018) DOI: https://doi.org/10.1038/s41467-018-04933-y Published 28 June 2018

This is an open access paper.

The paper also has a couple of very nice introductory paragraphs, which I’m including here (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas, such as computer vision, speech recognition, and complex strategic games [1]. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms [2,3,4,5]. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history [6,7,8,9]. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …

It gets more complicated from there.
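One part is easy to illustrate, though: the "compute in place" idea from the excerpt, where weights are stored as conductances G[i][j] and applying input voltages V[j] reads out the row currents I[i] = Σj G[i][j]·V[j] directly, courtesy of Ohm's and Kirchhoff's laws. The values and function name below are purely illustrative:

```python
# Sketch of in-memory matrix-vector multiplication on a memristive
# crossbar: weights live as conductances G[i][j]; driving the columns
# with voltages V[j] produces row currents I[i] = sum_j G[i][j] * V[j]
# (Ohm's law for each device, Kirchhoff's current law on each row), so
# the multiply-accumulate happens where the data is stored.

G = [                      # conductances in arbitrary units (the weights)
    [0.2, 0.8, 0.1],
    [0.5, 0.3, 0.9],
]
V = [1.0, 0.5, 0.25]       # input voltages applied to the columns

def crossbar_mvm(G, V):
    """Currents read on each row: one analog multiply-accumulate per row."""
    return [sum(g * v for g, v in zip(row, V)) for row in G]

I = crossbar_mvm(G, V)
print(I)                   # approximately [0.625, 0.875]
```

In hardware this whole product is one parallel analog read, which is why the approach promises such large energy savings over shuttling weights between separate memory and processing units.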

Now onto the next bit.

SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer, named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

A July 11, 2018 Frontiers Publishing news release on EurekAlert, which originated the news item, expands on the latest work,

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other and on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time are currently out of reach.” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with that of NEST–a specialist supercomputer software currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”

Before getting to the link and citation for the paper, here’s a description of SpiNNaker’s hardware from the ‘Spiking neural network’ Wikipedia entry (Note: Links have been removed),

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]
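For a sense of what NEST and SpiNNaker actually simulate, here is a minimal leaky integrate-and-fire (LIF) neuron, the workhorse model of spiking-network simulation: the membrane potential leaks toward rest, integrates input current, and emits a spike (then resets) when it crosses threshold. The constants below are illustrative and are not the parameters of the cortical microcircuit model used in the study.

```python
# Minimal leaky integrate-and-fire neuron, simulated with a forward
# Euler update. Constants are typical textbook values, chosen for
# illustration only.

V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0   # membrane potentials, mV
TAU_M = 10.0                                      # membrane time constant, ms
DT = 0.1                                          # simulation step, ms

def simulate_lif(input_current, steps):
    """Return the times (in ms) at which the neuron spiked."""
    v, spikes = V_REST, []
    for step in range(steps):
        # Euler update: leak toward rest plus the injected current
        dv = (-(v - V_REST) + input_current) / TAU_M
        v += DT * dv
        if v >= V_THRESH:
            spikes.append(step * DT)   # record the spike time
            v = V_RESET                # and reset the membrane
    return spikes

spikes = simulate_lif(input_current=20.0, steps=1000)  # 100 ms of activity
print(len(spikes))   # fires regularly under constant suprathreshold drive
```

A full cortical microcircuit simulation is essentially tens of thousands of such equations plus the synaptic traffic between them, which is exactly the event-driven communication SpiNNaker's half a million small processors are built to handle.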

Now for the link and citation,

Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model by Sacha J. van Albada, Andrew G. Rowley, Johanna Senk, Michael Hopkins, Maximilian Schmidt, Alan B. Stokes, David R. Lester, Markus Diesmann, and Steve B. Furber. Front. Neurosci. 12:291. DOI: 10.3389/fnins.2018.00291 Published: 23 May 2018

As noted earlier, this is an open access paper.

I found it at the movies: a commentary on/review of “Films from the Future”

Kudos to anyone who recognized the reference to Pauline Kael (she changed film criticism forever) and her book “I Lost it at the Movies.” Of course, her book title was a bit of sexual innuendo, quite risqué for an important film critic in 1965 but appropriate for a period (the 1960s) associated with a sexual revolution. (There’s more about the 1960s sexual revolution in the US, along with mention of a prior sexual revolution in the 1920s, in this Wikipedia entry.)

The title for this commentary is based on an anecdote from Dr. Andrew Maynard’s (director of the Arizona State University [ASU] Risk Innovation Lab) popular science and technology book, “Films from the Future: The Technology and Morality of Sci-Fi Movies.”

The ‘title-inspiring’ anecdote concerns Maynard’s first viewing of ‘2001: A Space Odyssey’, when, as a rather “bratty” 16-year-old who preferred to read science fiction, he discovered new ways of seeing and imagining the world. Maynard isn’t explicit about when he became a ‘techno nerd’ or how movies gave him an experience books couldn’t, but presumably at 16 he was already gearing up for a career in the sciences. That ‘movie’ revelation, received in front of a black-and-white television on January 1, 1982, eventually led him to write “Films from the Future.” (He has a PhD in physics, which he is now applying to the field of risk innovation. For a more detailed description of Dr. Maynard and his work, there’s his ASU profile webpage and, of course, the introduction to his book.)

The book is quite timely. I don’t know how many people have noticed, but science and scientific innovation are being covered more frequently in the media than they have been in many years. Science fairs and festivals are being founded on what seems to be a daily basis, and you can now find science in art galleries. (Not to mention the movies and television, where science topics are covered in comic book adaptations, in comedy, and in standard science fiction style.) Much of this activity is centered on what are called ’emerging technologies’. These technologies are why people argue for what’s known as ‘blue sky’ or ‘basic’ or ‘fundamental’ science, for without that science there would be no emerging technologies.

Films from the Future

Isn’t reading the Table of Contents (ToC) the best way to approach a book? (From Films from the Future; Note: The formatting has been altered),

Table of Contents
Chapter One
In the Beginning 14
Beginnings 14
Welcome to the Future 16
The Power of Convergence 18
Socially Responsible Innovation 21
A Common Point of Focus 25
Spoiler Alert 26
Chapter Two
Jurassic Park: The Rise of Resurrection Biology 27
When Dinosaurs Ruled the World 27
De-Extinction 31
Could We, Should We? 36
The Butterfly Effect 39
Visions of Power 43
Chapter Three
Never Let Me Go: A Cautionary Tale of Human Cloning 46
Sins of Futures Past 46
Cloning 51
Genuinely Human? 56
Too Valuable to Fail? 62
Chapter Four
Minority Report: Predicting Criminal Intent 64
Criminal Intent 64
The “Science” of Predicting Bad Behavior 69
Criminal Brain Scans 74
Machine Learning-Based Precognition 77
Big Brother, Meet Big Data 79
Chapter Five
Limitless: Pharmaceutically-enhanced Intelligence 86
A Pill for Everything 86
The Seduction of Self-Enhancement 89
Nootropics 91
If You Could, Would You? 97
Privileged Technology 101
Our Obsession with Intelligence 105
Chapter Six
Elysium: Social Inequity in an Age of Technological Extremes 110
The Poor Shall Inherit the Earth 110
Bioprinting Our Future Bodies 115
The Disposable Workforce 119
Living in an Automated Future 124
Chapter Seven
Ghost in the Shell: Being Human in an Augmented Future 129
Through a Glass Darkly 129
Body Hacking 135
More than “Human”? 137
Plugged In, Hacked Out 142
Your Corporate Body 147
Chapter Eight
Ex Machina: AI and the Art of Manipulation 154
Plato’s Cave 154
The Lure of Permissionless Innovation 160
Technologies of Hubris 164
Superintelligence 169
Defining Artificial Intelligence 172
Artificial Manipulation 175
Chapter Nine
Transcendence: Welcome to the Singularity 180
Visions of the Future 180
Technological Convergence 184
Enter the Neo-Luddites 190
Techno-Terrorism 194
Exponential Extrapolation 200
Make-Believe in the Age of the Singularity 203
Chapter Ten
The Man in the White Suit: Living in a Material World 208
There’s Plenty of Room at the Bottom 208
Mastering the Material World 213
Myopically Benevolent Science 220
Never Underestimate the Status Quo 224
It’s Good to Talk 227
Chapter Eleven
Inferno: Immoral Logic in an Age of Genetic Manipulation 231
Decoding Make-Believe 231
Weaponizing the Genome 234
Immoral Logic? 238
The Honest Broker 242
Dictating the Future 248
Chapter Twelve
The Day After Tomorrow: Riding the Wave of Climate Change 251
Our Changing Climate 251
Fragile States 255
A Planetary “Microbiome” 258
The Rise of the Anthropocene 260
Building Resiliency 262
Geoengineering the Future 266
Chapter Thirteen
Contact: Living by More than Science Alone 272
An Awful Waste of Space 272
More than Science Alone 277
Occam’s Razor 280
What If We’re Not Alone? 283
Chapter Fourteen
Looking to the Future 288
Acknowledgments 293

The ToC gives the reader a pretty good clue as to where the author is going with his book, and Maynard explains how he chose his movies in his introductory chapter (from Films from the Future),

“There are some quite wonderful science fiction movies that didn’t make the cut because they didn’t fit the overarching narrative (Blade Runner and its sequel Blade Runner 2049, for instance, and the first of the Matrix trilogy). There are also movies that bombed with the critics, but were included because they ably fill a gap in the bigger story around emerging and converging technologies. Ultimately, the movies that made the cut were chosen because, together, they create an overarching narrative around emerging trends in biotechnologies, cybertechnologies, and materials-based technologies, and they illuminate a broader landscape around our evolving relationship with science and technology. And, to be honest, they are all movies that I get a kick out of watching.” (p. 17)

Jurassic Park (Chapter Two)

Dinosaurs do not interest me—they never have. Despite my profound indifference I did see the movie, Jurassic Park, when it was first released (someone talked me into going). And, I am still profoundly indifferent. Thankfully, Dr. Maynard finds meaning and a connection to current trends in biotechnology,

Jurassic Park is unabashedly a movie about dinosaurs. But it’s also a movie about greed, ambition, genetic engineering, and human folly—all rich pickings for thinking about the future, and what could possibly go wrong. (p. 28)

What really stands out with Jurassic Park, over twenty-five years later, is how it reveals a very human side of science and technology. This comes out in questions around when we should tinker with technology and when we should leave well enough alone. But there is also a narrative here that appears time and time again with the movies in this book, and that is how we get our heads around the sometimes oversized roles mega-entrepreneurs play in dictating how new tech is used, and possibly abused. These are all issues that are just as relevant now as they were in 1993, and are front and center of ensuring that the technology-enabled future we’re building is one where we want to live, and not one where we’re constantly fighting for our lives. (pp. 30-1)

He goes on to describe those current trends in biotechnology,

De-Extinction

In a far corner of Siberia, two Russians—Sergey Zimov and his son Nikita—are attempting to recreate the Ice Age. More precisely, their vision is to reconstruct the landscape and ecosystem of northern Siberia in the Pleistocene, a period in Earth’s history that stretches from around two and a half million years ago to eleven thousand years ago. This was a time when the environment was much colder than now, with huge glaciers and ice sheets flowing over much of the Earth’s northern hemisphere. It was also a time when humans coexisted with animals that are long extinct, including saber-tooth cats, giant ground sloths, and woolly mammoths.

The Zimovs’ ambitions are an extreme example of “Pleistocene rewilding,” a movement to reintroduce relatively recently extinct large animals, or their close modern-day equivalents, to regions where they were once common. In the case of the Zimovs, the father-and-son team believe that, by reconstructing the Pleistocene ecosystem in the Siberian steppes and elsewhere, they can slow down the impacts of climate change on these regions. These areas are dominated by permafrost, ground that never thaws through the year. Permafrost ecosystems have developed and survived over millennia, but a warming global climate (a theme we’ll come back to in chapter twelve and the movie The Day After Tomorrow) threatens to catastrophically disrupt them, and as this happens, the impacts on biodiversity could be devastating. But what gets climate scientists even more worried is potentially massive releases of trapped methane as the permafrost disappears.

Methane is a powerful greenhouse gas—some eighty times more effective at exacerbating global warming than carbon dioxide—and large-scale releases from warming permafrost could trigger catastrophic changes in climate. As a result, finding ways to keep it in the ground is important. And here the Zimovs came up with a rather unusual idea: maintaining the stability of the environment by reintroducing long-extinct species that could help prevent its destruction, even in a warmer world. It’s a wild idea, but one that has some merit.8 As a proof of concept, though, the Zimovs needed somewhere to start. And so they set out to create a park for de-extinct Siberian animals: Pleistocene Park.9

Pleistocene Park is by no stretch of the imagination a modern-day Jurassic Park. The dinosaurs in Hammond’s park date back to the Mesozoic period, from around 250 million years ago to sixty-five million years ago. By comparison, the Pleistocene is relatively modern history, ending a mere eleven and a half thousand years ago. And the vision behind Pleistocene Park is not thrills, spills, and profit, but the serious use of science and technology to stabilize an increasingly unstable environment. Yet there is one thread that ties them together, and that’s using genetic engineering to reintroduce extinct species. In this case, the species in question is warm-blooded and furry: the woolly mammoth.

The idea of de-extinction, or bringing back species from extinction (it’s even called “resurrection biology” in some circles), has been around for a while. It’s a controversial idea, and it raises a lot of tough ethical questions. But proponents of de-extinction argue that we’re losing species and ecosystems at such a rate that we can’t afford not to explore technological interventions to help stem the flow.

Early approaches to bringing species back from the dead have involved selective breeding. The idea was simple—if you have modern ancestors of a recently extinct species, selectively breeding specimens that have a higher genetic similarity to their forebears can potentially help reconstruct their genome in living animals. This approach is being used in attempts to bring back the aurochs, an ancestor of modern cattle.10 But it’s slow, and it depends on the fragmented genome of the extinct species still surviving in its modern-day equivalents.

An alternative to selective breeding is cloning. This involves finding a viable cell, or cell nucleus, in an extinct but well-preserved animal and growing a new living clone from it. It’s definitely a more appealing route for impatient resurrection biologists, but it does mean getting your hands on intact cells from long-dead animals and devising ways to “resurrect” these, which is no mean feat. Cloning has potential when it comes to recently extinct species whose cells have been well preserved—for instance, where the whole animal has become frozen in ice. But it’s still a slow and extremely limited option.

Which is where advances in genetic engineering come in.

The technological premise of Jurassic Park is that scientists can reconstruct the genome of long-dead animals from preserved DNA fragments. It’s a compelling idea, if you think of DNA as a massively long and complex instruction set that tells a group of biological molecules how to build an animal. In principle, if we could reconstruct the genome of an extinct species, we would have the basic instruction set—the biological software—to reconstruct individual members of it.

The bad news is that DNA-reconstruction-based de-extinction is far more complex than this. First you need intact fragments of DNA, which is not easy, as DNA degrades easily (and is pretty much impossible to obtain, as far as we know, for dinosaurs). Then you need to be able to stitch all of your fragments together, which is akin to completing a billion-piece jigsaw puzzle without knowing what the final picture looks like. This is a Herculean task, although with breakthroughs in data manipulation and machine learning, scientists are getting better at it. But even when you have your reconstructed genome, you need the biological “wetware”—all the stuff that’s needed to create, incubate, and nurture a new living thing, like eggs, nutrients, a safe space to grow and mature, and so on. Within all this complexity, it turns out that getting your DNA sequence right is just the beginning of translating that genetic code into a living, breathing entity. But in some cases, it might be possible.

In 2013, Sergey Zimov was introduced to the geneticist George Church at a conference on de-extinction. Church is an accomplished scientist in the field of DNA analysis and reconstruction, and a thought leader in the field of synthetic biology (which we’ll come back to in chapter nine). It was a match made in resurrection biology heaven. Zimov wanted to populate his Pleistocene Park with mammoths, and Church thought he could see a way of achieving this.

What resulted was an ambitious project to de-extinct the woolly mammoth. Church and others who are working on this have faced plenty of hurdles. But the technology has been advancing so fast that, as of 2017, scientists were predicting they would be able to reproduce the woolly mammoth within the next two years.

One of those hurdles was the lack of solid DNA sequences to work from. Frustratingly, although there are many instances of well preserved woolly mammoths, their DNA rarely survives being frozen for tens of thousands of years. To overcome this, Church and others have taken a different tack: Take a modern, living relative of the mammoth, and engineer into it traits that would allow it to live on the Siberian tundra, just like its woolly ancestors.

Church’s team’s starting point has been the Asian elephant. This is their source of base DNA for their “woolly mammoth 2.0”—their starting source code, if you like. So far, they’ve identified fifty-plus gene sequences they think they can play with to give their modern-day woolly mammoth the traits it would need to thrive in Pleistocene Park, including a coat of hair, smaller ears, and a constitution adapted to cold.

The next hurdle they face is how to translate the code embedded in their new woolly mammoth genome into a living, breathing animal. The most obvious route would be to impregnate a female Asian elephant with a fertilized egg containing the new code. But Asian elephants are endangered, and no one’s likely to allow such cutting edge experimentation on the precious few that are still around, so scientists are working on an artificial womb for their reinvented woolly mammoth. They’re making progress with mice and hope to crack the motherless mammoth challenge relatively soon.

It’s perhaps a stretch to call this creative approach to recreating a species (or “reanimation” as Church refers to it) “de-extinction,” as what is being formed is a new species. … (pp. 31-4)

This selection illustrates what Maynard does so very well throughout the book: he uses each film as a launching pad for a clear, readable description of relevant bits of science, so you understand why the premise was likely, unlikely, or pure fantasy, while linking it to contemporary practices, efforts, and issues. In the context of Jurassic Park, Maynard goes on to raise some fascinating questions, such as: Should we revive animals rendered extinct (due to obsolescence or inability to adapt to new conditions) when we could develop new animals?

General thoughts

‘Films from the Future’ offers readable (to non-scientific types) science, lively writing, and the occasional ‘memoirish’ anecdote. As well, Dr. Maynard raises the curtain on aspects of the scientific enterprise that most of us do not get to see. For example, the meeting between Sergey Zimov and George Church and how it led to new ‘de-extinction’ work. He also describes the problems that the scientists encountered and are encountering. This is in direct contrast to how scientific work is usually presented in the news media as one glorious breakthrough after the next.

Maynard does discuss the issues of social inequality and power and ownership. For example, who owns your transplant or data? Puzzlingly, he doesn’t touch on the current environment where scientists in the US and elsewhere are encouraged/pressured to start up companies commercializing their work.

Nor is there any mention of how universities are participating in this grand business experiment often called ‘innovation’. (My March 15, 2017 posting describes an outcome for the CRISPR [gene editing system] patent fight taking place between Harvard University’s & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley, and my Sept. 11, 2018 posting about an art/science exhibit in Vancouver [Canada] provides an update for round 2 of the Broad Institute vs. UC Berkeley patent fight [scroll down about 65% of the way].) *To read about how my ‘cultural blindness’ shows up here scroll down to the single asterisk at the end.*

There’s a foray through machine learning and big data as applied to predictive policing in Maynard’s ‘Minority Report’ chapter (my November 23, 2017 posting describes Vancouver’s predictive policing initiative [no psychics involved], the first such in Canada). There’s no mention of surveillance technology, which, if I recall properly, was part of the movie’s future environment, used both by the state and by corporations. (Mia Armstrong’s November 15, 2018 article for Slate on Chinese surveillance being exported to Venezuela provides interesting insight.)

The gaps are interesting and various. This of course points to a problem all science writers have when attempting an overview of science. (Carl Zimmer’s latest, ‘She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity’, a doorstopping 574 pages, also has some gaps despite his focus on heredity.)

Maynard has worked hard to give a comprehensive overview in a remarkably compact 279 pages while developing his theme about science and the human element. In other words, science is not monolithic; it’s created by human beings and subject to all the flaws and benefits that humanity’s efforts are always subject to—scientists are people too.

The readership for ‘Films from the Future’ spans from the mildly interested science reader to someone like me who’s been writing/blogging about these topics (more or less) for about 10 years. I learned a lot reading this book.

Next time, and I’m hopeful there’ll be a next time, Maynard might want to describe the parameters he’s set for his book in more detail than is possible in his chapter headings. He could have mentioned that he’s not a cinéaste, so his descriptions of the movies are very much focused on the story as conveyed through words. He doesn’t mention colour palettes, camera angles, or, even, cultural lenses.

Take, for example, his chapter on ‘Ghost in the Shell’. Focused on the Japanese animation film and not the live-action Hollywood version, he talks about human enhancement and cyborgs. The Japanese have a different take on robots, inanimate objects, and, I assume, cyborgs than is found in Canada or the US or Great Britain, for that matter (according to a colleague of mine, an Englishwoman who lived in Japan for ten or more years). There’s also the chapter on the Ealing comedy, The Man in The White Suit, an English film from the 1950s. That too has a cultural (as well as historical) flavour but, since Maynard is from England, he may take that cultural flavour for granted. ‘Never Let Me Go’ in Chapter Three was also a UK production, albeit far more recent than the Ealing comedy, and it’s interesting to consider how a UK production about cloning might differ from a US or Chinese or … production on the topic. I am hearkening back to Maynard’s anecdote about movies giving him new ways of seeing and imagining the world.

There’s a corrective: a couple of sentences in Maynard’s introductory chapter cautioning that an in-depth exploration of ‘cultural lenses’ was not possible without expanding the book to an unreadable size, followed by a sentence in each of the two chapters noting that there are cultural differences, would have sufficed.

One area where I had a significant problem was with regard to being “programmed” and having “instinctual” behaviour,

As a species, we are embarrassingly programmed to see “different” as “threatening,” and to take instinctive action against it. It’s a trait that’s exploited in many science fiction novels and movies, including those in this book. If we want to see the rise of increasingly augmented individuals, we need to be prepared for some social strife. (p. 136)

These concepts are much debated in the social sciences, and there are arguments for and against ‘instincts regarding strangers and their possible differences’. I gather Dr. Maynard hews to the ‘instinct to defend/attack’ school of thought.

One final quandary: there was no sex, and I was expecting it in the Ex Machina chapter, especially now that sexbots are about to take over the world (I exaggerate). Certainly, if you’re talking about “social strife,” then sexbots would seem to be a fruitful line of inquiry, especially when there’s talk of how they could benefit families (my August 29, 2018 posting). Again, there could have been a sentence explaining why Maynard focused almost exclusively in this chapter on the discussions about artificial intelligence and superintelligence.

Taken in the context of the book, these are trifling issues and shouldn’t stop you from reading Films from the Future. What Maynard has accomplished here is impressive and I hope it’s just the beginning.

Final note

Bravo Andrew! (Note: We’ve been ‘internet acquaintances/friends’ since the first year I started blogging. When I’m referring to him in his professional capacity, he’s Dr. Maynard, and when it’s not strictly in his professional capacity, it’s Andrew. For this commentary/review I wanted to emphasize his professional status.)

If you need to see a few more samples of Andrew’s writing, there’s a Nov. 15, 2018 essay on The Conversation, Sci-fi movies are the secret weapon that could help Silicon Valley grow up and a Nov. 21, 2018 article on slate.com, The True Cost of Stain-Resistant Pants; The 1951 British comedy The Man in the White Suit anticipated our fears about nanotechnology. Enjoy.

****Added at 1700 hours on Nov. 22, 2018: You can purchase Films from the Future here.

*Nov. 23, 2018: I should have been more specific and said ‘academic scientists’. In Canada, the great percentage of scientists are academic. It’s to the point where the OECD (Organization for Economic Cooperation and Development) has noted that amongst industrialized countries, Canada has very few industrial scientists in comparison to the others.

‘One health in the 21st century’ event and internship opportunities at the Woodrow Wilson Center

One health

This event at the Woodrow Wilson International Center for Scholars (Wilson Center) is the first that I’ve seen of its kind (from a November 2, 2018 Wilson Center Science and Technology Innovation Program [STIP] announcement received via email; Note: Logistics such as date and location follow directly after),

One Health in the 21st Century Workshop

The One Health in the 21st Century workshop will serve as a snapshot of government, intergovernmental organization and non-governmental organization innovation as it pertains to the expanding paradigm of One Health. One Health being the umbrella term for addressing animal, human, and environmental health issues as inextricably linked [emphasis mine], each informing the other, rather than as distinct disciplines.

This snapshot, facilitated by a partnership between the Wilson Center, World Bank, and EcoHealth Alliance, aims to bridge professional silos represented at the workshop to address the current gaps and future solutions in the operationalization and institutionalization of One Health across sectors. With an initial emphasis on environmental resource management and assessment as well as federal cooperation, the One Health in the 21st Century Workshop is a launching point for upcoming events, convenings, and products, sparked by the partnership between the hosting organizations. RSVP today.

Agenda:

1:00pm — 1:15pm: Introductory Remarks

1:15pm — 2:30pm: Keynote and Panel: Putting One Health into Practice

Larry Madoff — Director of Emerging Disease Surveillance; Editor, ProMED-mail
Lance Brooks — Chief, Biological Threat Reduction Department at DoD
Further panelists TBA

2:30pm — 2:40pm: Break

2:40pm — 3:50pm: Keynote and Panel: Adding Seats at the One Health Table: Promoting the Environmental Backbone at Home and Abroad

Assaf Anyamba — NASA Research Scientist
Jonathan Sleeman — Center Director for the U.S. Geological Survey’s National Wildlife Health Center
Jennifer Orme-Zavaleta — Principal Deputy Assistant Administrator for Science for the Office of Research and Development and the EPA Science Advisor
Further panelists TBA

3:50pm — 4:50pm: Breakout Discussions and Report Back Panel

4:50pm — 5:00pm: Closing Remarks

5:00pm — 6:00pm: Networking Happy Hour

Co-Hosts:

Sponsor Logos

You can register/RSVP here.

Logistics are:

November 26
1:00pm – 5:00pm
Reception to follow
5:00pm – 6:00pm

Flom Auditorium, 6th floor

Directions

Wilson Center
Ronald Reagan Building and
International Trade Center
One Woodrow Wilson Plaza
1300 Pennsylvania, Ave., NW
Washington, D.C. 20004

Phone: 202.691.4000

stip@wilsoncenter.org

Privacy Policy

Internships

The Woodrow Wilson Center is gearing up for 2019, although the deadline for Spring 2019 internship applications is November 15, 2018. (You can find my previous announcement for internships in a July 23, 2018 posting.) From a November 5, 2018 Wilson Center STIP announcement (received via email),

Internships in DC for Science and Technology Policy

Deadline for Fall Applicants November 15

The Science and Technology Innovation Program (STIP) at the Wilson Center welcomes applicants for spring 2019 internships. STIP focuses on understanding bottom-up, public innovation; top-down, policy innovation; and, on supporting responsible and equitable practices at the point where new technology and existing political, social, and cultural processes converge. We recommend exploring our blog and website first to determine if your research interests align with current STIP programming.

We offer two types of internships: research (open to law and graduate students only) and a social media and blogging internship (open to undergraduates, recent graduates, and graduate students). Research internships might deal with one of the following key objectives:

  • Artificial Intelligence
  • Citizen Science
  • Cybersecurity
  • One Health
  • Public Communication of Science
  • Serious Games Initiative
  • Science and Technology Policy

Additionally, we are offering specific internships for focused projects, such as for our Earth Challenge 2020 initiative.

Special Project Intern: Earth Challenge 2020

Citizen science involves members of the public in scientific research to meet real world goals.  In celebration of the 50th anniversary of Earth Day, Earth Day Network (EDN), The U.S. Department of State, and the Wilson Center are launching Earth Challenge 2020 (EC2020) as the world’s largest ever coordinated citizen science campaign.  EC2020 will collaborate with existing citizen science projects as well as build capacity for new ones as part of a larger effort to grow citizen science worldwide.  We will become a nexus for collecting billions of observations in areas including air quality, water quality, biodiversity, and human health to strengthen the links between science, the environment, and public citizens.

We are seeking a research intern with a specialty in topics including citizen science, crowdsourcing, making, hacking, sensor development, and other relevant topics.

This intern will scope and implement a semester-long project related to Earth Challenge 2020 deliverables. In addition to this the intern may:

  • Conduct ad hoc research on a range of topics in science and technology innovation to learn while supporting department priorities.
  • Write or edit articles and blog posts on topics of interest or local events.
  • Support meetings, conferences, and other events, gaining valuable event management experience.
  • Provide general logistical support.

This is a paid position available for 15-20 hours a week.  Applicants from all backgrounds will be considered, though experience conducting cross and trans-disciplinary research is an asset.  Ability to work independently is critical.

Interested applicants should submit a resume, cover letter describing their interest in Earth Challenge 2020 and outlining relevant skills, and two writing samples. One writing sample should be formal (e.g., a class paper); the other, informal (e.g., a blog post or similar).

For all internships, non-degree seeking students are ineligible. All internships must be served in Washington, D.C. and cannot be done remotely.

Full application process outlined on our internship website.

I don’t see a specific application deadline for the special project (Earth Challenge 2020) internship. In any event, good luck with all your applications.

Media registration is open for the 2018 ITU (International Telecommunication Union) Plenipotentiary Conference (PP-18) being held 29 October – 16 November 2018 in Dubai

I’m a little late with this but there’s still time to register should you happen to be in or able to get to Dubai easily. From an October 18, 2018 International Telecommunication Union (ITU) Media Advisory (received via email),

Media registration is open for the 2018 ITU Plenipotentiary Conference (PP-18) – the highest policy-making body of the International Telecommunication Union (ITU), the United Nations’ specialized agency for information and communication technology. This will be closing soon, so all media intending to attend the event MUST register as soon as possible here.

Held every four years, it is the key event at which ITU’s 193 Member States decide on the future role of the organization, thereby determining ITU’s ability to influence and affect the development of information and communication technologies (ICTs) worldwide. It is expected to attract around 3,000 participants, including Heads of State and an estimated 130 VIPs from more than 193 Member States and more than 800 private companies, academic institutions and national, regional and international bodies.

ITU plays an integral role in enabling the development and implementation of ICTs worldwide through its mandate to: coordinate the shared global use of the radio spectrum, promote international cooperation in assigning satellite orbits, work to improve communication infrastructure in the developing world, and establish worldwide standards that foster seamless interconnection of a vast range of communications systems.

Delegates will tackle a number of pressing issues, from strategies to promote digital inclusion and bridge the digital divide, to ways to leverage such emerging technologies as the Internet of Things, Artificial Intelligence, 5G, and others, to improve the way all of us, everywhere, live and work.

The conference also sets ITU’s Financial Plan and elects its five top executives – Secretary-General, Deputy Secretary-General, and the Directors of the Radiocommunication, Telecommunication Standardization and Telecommunication Development Bureaux – who will guide its work over the next four years.

What: ITU Plenipotentiary Conference 2018 (PP-18) sets the next four-year strategy, budget and leadership of ITU.

Why: Finance, Business, Tech, Development and Foreign Affairs reporters will find PP-18 relevant to their newsgathering. Decisions made at PP-18 are designed to create an enabling ICT environment where the benefits of digital connectivity can reach all people and economies, everywhere. As such, these decisions can have an impact on the telecommunication and technology sectors as well as developed and developing countries alike.

When: 29 October – 16 November 2018: With several Press Conferences planned during the event.

* Historically the Opening, Closing and Plenary sessions of this conference are open to media. Confirmation of those sessions open to media, and Press Conference times, will be made closer to the event date.

Where: Dubai World Trade Center, Dubai, United Arab Emirates


I visited the ‘ITU Events Registration and Accreditation Process for Media‘ webpage and found these tidbits,

Accreditation eligibility & credentials 

1. Journalists* should provide an official letter of assignment from the Editor-in-Chief (or the News Editor for radio/TV). One letter per crew/editorial team will suffice. Editors-in-Chief and Bureau Chiefs should submit a letter from their Director. Please email this to pressreg@itu.int, along with the required supporting credentials below:​

    • print and online publications should be available to the general public and published at least 6 times a year by an organization whose principal business activity is publishing and which generally carries paid advertising;

      o 2 copies of recent byline articles published within the last 4 months.
    • news wire services should provide news coverage to subscribers, including newspapers, periodicals and/or television networks;

      o 2 copies of recent byline articles or broadcasting material published within the last 4 months.
    • broadcast should provide news and information programmes to the general public. Independent film and video production companies can only be accredited if officially mandated by a broadcast station via a letter of assignment;

      o broadcasting material published within the last 4 months.
    • freelance journalists including photographers, must provide clear documentation that they are on assignment from a specific news organization or publication. Evidence that they regularly supply journalistic content to recognized media may be acceptable in the absence of an assignment letter at the discretion of the ITU Media Relations Service.

      o a valid assignment letter from the news organization or publication.

 2. Bloggers may be granted accreditation if blog content is deemed relevant to the industry, contains news commentary, is regularly updated and made publicly available. Corporate bloggers are invited to register as participants. Please see Guidelines for Blogger Accreditation below for more details.

Guidelines for Blogger Accreditation

ITU is committed to working with independent ‘new media’ reporters and columnists who reach their audiences via blogs, podcasts, video blogs and other online media. These are the guidelines we use to determine whether to issue official media accreditation to independent online media representatives: 

ITU reserves the right to request traffic data from a third party (Sitemeter, Technorati, Feedburner, iTunes or equivalent) when considering your application. While the decision to grant access is not based solely on traffic/subscriber data, we ask that applicants provide sufficient transparency into their operations to help us make a fair and timely decision. 

Obtaining media accreditation for ITU events is an opportunity to meet and interact with key industry and political figures. While continued accreditation for ITU events is not directly contingent on producing coverage, owing to space limitations we may take this into consideration when processing future accreditation requests. Following any ITU event for which you are accredited, we therefore kindly request that you forward a link to your post/podcast/video blog to pressreg@itu.int. 

Bloggers who are granted access to ITU events are expected to act professionally. Those who do not maintain the standards expected of professional media representatives run the risk of having their accreditation withdrawn. 

If you can’t find answers to your questions on the ‘ITU Events Registration and Accreditation Process for Media‘ webpage, you can contact,

For media accreditation inquiries:


Rita Soraya Abino-Quintana
Media Accreditation Officer
ITU Corporate Communications

Tel: +41 22 730 5424

For anything else, contact,

For general media inquiries:


Jennifer Ferguson-Mitchell
Senior Media and Communications Officer
ITU Corporate Communications

Tel: +41 22 730 5469

Mobile: +41 79 337 4615

There you have it.

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on from my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), here’s a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots): the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and does not mimic a human or other biological organism such that you might, under some circumstances, mistake the robot for a biological organism.

As for what precipitated this feature (in part), it seems there’s been a United Nations meeting in Geneva, Switzerland held from August 27 – 31, 2018 about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as, outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety and for anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country with its makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robots, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software only story.

AI fashion designer better than Balenciaga?

Despite the title for Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga, but from the pictures I’ve seen, the designs are as good, and it does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barrat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barrat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barrat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barrat writes on Twitter.

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barrat

In contrast to the previous two stories, this is all about algorithms; no machinery with independent movement (robot hardware) needed.

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have featured here many, many times before. Most recently, a March 27, 2017 posting covered his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
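The three backchanneling qualities Kawahara lists — timing (when a response happens), lexical form (what is said), and prosody (how it is said) — can be made concrete with a toy sketch. To be clear, ERICA’s actual responses are generated by machine learning over a counseling dialogue corpus; the function, thresholds, and word list below are invented purely for illustration:

```python
# Toy backchannel chooser: decide *when* to respond (timing),
# *what* to say (lexical form), and *how* (prosody).
# All thresholds and word choices here are invented for illustration.

def backchannel(pause_sec, last_word, speaker_pitch):
    """Return (response, prosody) or None if the listener should stay quiet."""
    # Timing: only respond once the speaker has paused long enough.
    if pause_sec < 0.4:
        return None
    # Lexical form: a partial repeat ('attentive listening') is more
    # engaging than "uh-huh" when the last word carries content.
    filler_words = {"the", "a", "and", "so", "um"}
    if last_word.lower() not in filler_words:
        response = last_word + "?"   # partial repeat of the speaker
    else:
        response = "uh-huh"          # plain acknowledgement
    # Prosody: mirror the speaker's pitch contour.
    prosody = "rising" if speaker_pitch == "rising" else "flat"
    return (response, prosody)

print(backchannel(0.2, "work", "flat"))    # too soon -> None
print(backchannel(0.6, "work", "rising"))  # -> ('work?', 'rising')
print(backchannel(0.6, "um", "flat"))      # -> ('uh-huh', 'flat')
```

A learned system replaces each of these hand-written rules with a model trained on recorded conversations, which is what makes the team’s data collection with distance sensors, cameras, and microphone arrays necessary.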

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as a technological one.

Erica is interviewed about her hope and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser are safer from automation than those of, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt better enhance the treatments provided by doctors and nurses the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it all, let alone get a feeling for where this might be headed. When you add the fact that the terms ‘robot’ and ‘artificial intelligence’ are often used interchangeably, and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.
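The machine-learning loop Wong describes above — a computer repeatedly modifying its own algorithm based on data until it reaches a desired goal — can be sketched in a few lines of plain Python. The data, model, and learning rate here are all invented for illustration; real systems differ only in scale:

```python
# Toy "machine learning": fit y = w*x + b to example data by
# repeatedly nudging w and b to reduce the prediction error.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # points on y = 2x + 1

w, b = 0.0, 0.0   # the model starts knowing nothing
lr = 0.05         # learning rate: how big each adjustment is

for step in range(2000):
    # average gradient of the squared error over the data
    gw = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    gb = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    w -= lr * gw  # modify the model based on the data...
    b -= lr * gb  # ...exactly the loop Wong describes

print(round(w, 2), round(b, 2))  # converges close to the true 2 and 1
```

Deep learning keeps this same loop but replaces the two-parameter line with millions of parameters arranged in layers, which is what the brain-cell-network analogy in the excerpt refers to.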

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments section of this blog if you have any insights on the matter.

Robot radiologists (artificially intelligent doctors)

Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eye-opening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),

Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Andrew Ng, founder of online learning platform Coursera and former CTO of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.

Musa has offered a compelling argument with lots of links to supporting evidence.

[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]

And evidence keeps mounting, I just stumbled across this June 30, 2018 news item on Xinhuanet.com,

An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.

The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.

The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.

The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.

To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.

All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.

Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.

“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.

Dr. Lin Yi, who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]

AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.

Bian Xiuwu, an academician with the Chinese Academy of Sciences and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]

Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]

Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.

Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.

China has introduced a series of plans in developing AI applications in recent years.

In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”

The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.

I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

Radiology isn’t the only area where experts might find themselves displaced.

Eye experts

It seems artificial intelligence (AI) systems have made inroads into the diagnosis of eye diseases. The story got the ‘Fast Company’ treatment (exciting new tech, learn all about it) as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),

An artificial intelligence (AI) system that can recommend the correct referral decision for more than 50 eye diseases as accurately as experts has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].

The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,

More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.

Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”

“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.

Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.

To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.

The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.

Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.

The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.

If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.

The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.

Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.

Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”

Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research that can be carried out in the UK combining world-leading industry and NIHR/NHS hospital/university partnerships.”

Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”
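One detail in the press release worth unpacking: the two “types of neural network” work in sequence. According to the Nature Medicine paper, a segmentation network first converts the raw OCT scan into a device-independent tissue map, and a classification network then turns that map into a referral recommendation. Here is a minimal, purely illustrative sketch of that two-stage shape in plain Python; the feature names, thresholds, and simple rules below are made up, standing in for the trained networks, and are not from the paper.

```python
# Toy sketch (NOT the DeepMind implementation) of a two-stage pipeline:
# stage 1 turns a raw scan into a device-independent "tissue map",
# stage 2 maps that tissue map to a referral decision.
# Feature names, thresholds, and rules are illustrative only.

def segment(scan):
    """Stage 1: label each voxel with a tissue feature (stand-in for
    the segmentation network). `scan` is a list of intensity values;
    fixed thresholds stand in for learned weights."""
    labels = []
    for v in scan:
        if v > 0.8:
            labels.append("haemorrhage")
        elif v > 0.5:
            labels.append("fluid")
        elif v > 0.2:
            labels.append("drusen")
        else:
            labels.append("healthy")
    return labels

def refer(tissue_map):
    """Stage 2: map the tissue map to a referral urgency (stand-in
    for the classification network)."""
    if "haemorrhage" in tissue_map:
        return "urgent"
    if tissue_map.count("fluid") > 2:
        return "semi-urgent"
    if "drusen" in tissue_map:
        return "routine"
    return "observation"

scan = [0.1, 0.3, 0.6, 0.9, 0.2]
print(refer(segment(scan)))  # "urgent": a haemorrhage-like voxel is present
```

The scanner independence the press release highlights falls out of this split: only stage one ever sees raw scanner output, so adapting to a new OCT device means retraining the segmentation stage while the referral stage is untouched.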

Here’s a link to and a citation for the study,

Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018

This paper is behind a paywall.

And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human.

The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!

I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.