Tag Archives: artificial intelligence

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, performs its calculations using artificial neural networks, a software architecture that mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1957, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network, the perceptron. Once implemented on a computer, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This design scales to networks stacked many layers deep, and it was this depth of architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone receives input values, which it combines using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the electrical potential across the neurone’s membrane) reaches a certain level. In the artificial model, the outputs of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on.
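For readers who want to see the mechanics, here is a minimal sketch in Python of that weighted-sum-and-threshold computation (my own illustration with made-up sizes and random weights, not code from any of the groups mentioned),

```python
# A minimal sketch of the layered computation described above: each neurone
# weights its inputs, sums them, and fires only above a threshold; a layer's
# outputs become the next layer's inputs.
import numpy as np

def layer(inputs, weights, biases, threshold=0.0):
    """One layer of thresholded neurones."""
    pre_activation = weights @ inputs + biases          # weighted, summed inputs
    return np.where(pre_activation > threshold, pre_activation, 0.0)

rng = np.random.default_rng(0)
x = rng.random(4)                                       # e.g., four pixel-colour values
h = layer(x, rng.standard_normal((3, 4)), np.zeros(3))  # first layer: 4 inputs, 3 neurones
y = layer(h, rng.standard_normal((2, 3)), np.zeros(2))  # deeper layer: 3 inputs, 2 neurones
print(y)
```

In training, the weights would be adjusted until the final layer’s outputs match the desired labels; here they are random, which is all a sketch needs.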

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.

Video games to the rescue

For decades, the limits of computing power held back more complex applications, even at the cutting edge. Industry walked away, and deep learning survived only thanks to the video games sector, which eventually began producing graphics chips, or GPUs, with unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a capability well suited to executing the innumerable simultaneous operations required by neural networks.
Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and preceding data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short-Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings.
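Here is a toy Python illustration of why that looped architecture matters (my own sketch, far simpler than a real LSTM, which adds gates to control its memory): the hidden state carries a trace of earlier characters, so ‘boat’ and ‘float’ end in different states even though both end in ‘oat’,

```python
# Toy recurrent network: the hidden state h is fed back in at every step,
# so the final state still encodes the prefix ('b' vs 'fl') after 'oat'.
import numpy as np

rng = np.random.default_rng(42)
chars = sorted(set("boatfl"))
onehot = {c: np.eye(len(chars))[i] for i, c in enumerate(chars)}
W_in = rng.standard_normal((8, len(chars)))   # input-to-hidden weights
W_rec = rng.standard_normal((8, 8)) * 0.5     # hidden-to-hidden weights (the loop)

def final_state(word):
    h = np.zeros(8)
    for c in word:                            # feed one character at a time
        h = np.tanh(W_in @ onehot[c] + W_rec @ h)
    return h

# False: the two words are distinguishable from their final states alone.
print(np.allclose(final_state("boat"), final_state("float")))
```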

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

Korea Advanced Institute of Science and Technology (KAIST) at summer 2016 World Economic Forum in China

KAIST has gone from hosting an Ideas Lab at the 2016 World Economic Forum in Davos to offering expertise at the 2016 World Economic Forum in Tianjin, China, taking place from June 26 – 28, 2016.

Here’s more from a June 24, 2016 KAIST news release on EurekAlert,

Scientific and technological breakthroughs are more important than ever as a key agent to drive social, economic, and political changes and advancements in today’s world. The World Economic Forum (WEF), an international organization that provides one of the broadest engagement platforms to address issues of major concern to the global community, will discuss the effects of these breakthroughs at its 10th Annual Meeting of the New Champions, a.k.a., the Summer Davos Forum, in Tianjin, China, June 26-28, 2016.

Three professors from the Korea Advanced Institute of Science and Technology (KAIST) will join the Annual Meeting and offer their expertise in the fields of biotechnology, artificial intelligence, and robotics to explore the conference theme, “The Fourth Industrial Revolution and Its Transformational Impact.” The Fourth Industrial Revolution, a term coined by WEF founder, Klaus Schwab, is characterized by a range of new technologies that fuse the physical, digital, and biological worlds, such as the Internet of Things, cloud computing, and automation.

Distinguished Professor Sang Yup Lee of the Chemical and Biomolecular Engineering Department will speak at the Experts Reception to be held on June 25, 2016 on the topic of “The Summer Davos Forum and Science and Technology in Asia.” On June 27, 2016, he will participate in two separate discussion sessions.

In the first session, entitled “What If Drugs Are Printed from the Internet?”, Professor Lee will discuss how advances in biotechnology and 3D printing are shaping the future of medicine with Nita A. Farahany, a Duke University professor, under the moderation of Clare Matterson, the Director of Strategy at Wellcome Trust in the United Kingdom. The discussants will note recent developments in the way patients receive their medicine, for example, downloading drugs directly from the internet and producing opioids for pain treatment from engineered yeast strains via systems metabolic engineering, and will predict how these emerging technologies will transform the landscape of the pharmaceutical industry in the years to come.

In the second session, “Lessons for Life,” Professor Lee will talk about how to nurture life-long learning and creativity to support personal and professional growth necessary in an era of the new industrial revolution.

During the Annual Meeting, Professors Jong-Hwan Kim of the Electrical Engineering School and David Hyunchul Shim of the Aerospace Department will host, together with researchers from Carnegie Mellon University and AnthroTronix, an engineering research and development company, a technological exhibition on robotics. Professor Kim, the founder of the internationally renowned Robot World Cup, will showcase his humanoid micro-robots that play soccer, displaying cutting-edge technologies such as image processing, artificial intelligence, walking, and balancing. Professor Shim will present a human-like robotic piloting system, PIBOT, which autonomously operates a simulated flight program, grabbing the control sticks and guiding an airplane from takeoff to landing.

In addition, the two professors will join Professor Lee, who is also a moderator, to host a KAIST-led session on June 26, 2016, entitled “Science in Depth: From Deep Learning to Autonomous Machines.” Professors Kim and Shim will explore new opportunities and challenges in their fields from machine learning to autonomous robotics including unmanned vehicles and drones.

Since 2011, KAIST has been participating in the World Economic Forum’s two flagship conferences, the January and June Davos Forums, to introduce outstanding talents, share their latest research achievements, and interact with global leaders.

KAIST President Steve Kang said, “It is important for KAIST to be involved in global talks that identify issues critical to humanity and seek answers to solve them, where our skills and knowledge in science and technology could play a meaningful role. The Annual Meeting in China will become another venue to accomplish this.”

I mentioned KAIST and the Ideas Lab at the 2016 Davos meeting in this Nov. 20, 2015 posting and was able to clear up my (and possibly other people’s) confusion as to what the Fourth Industrial Revolution might be in my Dec. 3, 2015 posting.

Artificial synapse rivals biological synapse in energy consumption

How can we make computers more like biological brains, which do so much work while using so little power? It’s a question scientists from many countries are trying to answer, and it seems South Korean scientists are proposing an answer. From a June 20, 2016 news item on Nanowerk,

Creation of an artificial intelligence system that fully emulates the functions of a human brain has long been a dream of scientists. A brain has many functions superior to those of supercomputers, even though it is lightweight, small in volume, and consumes extremely little energy. Emulating it requires the construction of an artificial neural network with a huge number (about 10^14) of synapses.

Most recently, great efforts have been made to realize synaptic functions in single electronic devices, using technologies such as resistive random access memory (RRAM), phase change memory (PCM), conductive bridges, and synaptic transistors. However, artificial synapses based on highly aligned nanostructures are still needed for the construction of a highly integrated artificial neural network.

Prof. Tae-Woo Lee, research professor Wentao Xu, and Dr. Sung-Yong Min with the Dept. of Materials Science and Engineering at POSTECH [Pohang University of Science & Technology, South Korea] have succeeded in fabricating an organic nanofiber (ONF) electronic device that emulates not only the important working principles and energy consumption of biological synapses but also the morphology. …

A June 20, 2016 Pohang University of Science & Technology (POSTECH) news release on EurekAlert, which originated the news item, describes the work in more detail,

The morphology of ONFs is very similar to that of nerve fibers, which form crisscrossing grids to enable the high memory density of a human brain. In particular, the e-Nanowire printing technique allows highly aligned ONFs to be mass-produced with precise control over alignment and dimension. This morphology potentially enables the future construction of the high-density memory of a neuromorphic system.

Important working principles of a biological synapse have been emulated, such as paired-pulse facilitation (PPF), short-term plasticity (STP), long-term plasticity (LTP), spike-timing dependent plasticity (STDP), and spike-rate dependent plasticity (SRDP). Most notably, the energy consumption of the device can be reduced to the femtojoule level per synaptic event, a value orders of magnitude lower than in previous reports and one that rivals that of a biological synapse. In addition, the organic artificial synapse devices not only provide a new research direction in neuromorphic electronics but also open a new era of organic electronics.

This technology will lead to the leap of brain-inspired electronics in both memory density and energy consumption aspects. The artificial synapse developed by Prof. Lee’s research team will provide important potential applications to neuromorphic computing systems and artificial intelligence systems for autonomous cars (or self-driving cars), analysis of big data, cognitive systems, robot control, medical diagnosis, stock trading analysis, remote sensing, and other smart human-interactive systems and machines in the future.

Here’s a link to and a citation for the paper,

Organic core-sheath nanowire artificial synapses with femtojoule energy consumption by Wentao Xu, Sung-Yong Min, Hyunsang Hwang, and Tae-Woo Lee. Science Advances Vol. 2, no. 6, e1501326 (17 June 2016) DOI: 10.1126/sciadv.1501326

This paper is open access.

A human user manual—for robots

Researchers from the Georgia Institute of Technology (Georgia Tech), funded by the US Office of Naval Research (ONR), have developed a program that teaches robots to read stories and more in an effort to educate them about humans. From a June 16, 2016 ONR news release by Warren Duffie Jr. (also on EurekAlert),

With support from the Office of Naval Research (ONR), researchers at the Georgia Institute of Technology have created an artificial intelligence software program named Quixote to teach robots to read stories, learn acceptable behavior and understand successful ways to conduct themselves in diverse social situations.

“For years, researchers have debated how to teach robots to act in ways that are appropriate, non-intrusive and trustworthy,” said Marc Steinberg, an ONR program manager who oversees the research. “One important question is how to explain complex concepts such as policies, values or ethics to robots. Humans are really good at using narrative stories to make sense of the world and communicate to other people. This could one day be an effective way to interact with robots.”

The rapid pace of artificial intelligence has stirred fears by some that robots could act unethically or harm humans. Dr. Mark Riedl, an associate professor and director of Georgia Tech’s Entertainment Intelligence Lab, hopes to ease concerns by having Quixote serve as a “human user manual” by teaching robots values through simple stories. After all, stories inform, educate and entertain–reflecting shared cultural knowledge, social mores and protocols.

For example, if a robot is tasked with picking up a pharmacy prescription for a human as quickly as possible, it could: a) take the medicine and leave, b) interact politely with pharmacists, or c) wait in line. Without value alignment and positive reinforcement, the robot might logically deduce that robbery is the fastest, cheapest way to accomplish its task. However, with value alignment from Quixote, it would be rewarded for waiting patiently in line and paying for the prescription.

For their research, Riedl and his team crowdsourced stories from the Internet. Each tale needed to highlight daily social interactions–going to a pharmacy or restaurant, for example–as well as socially appropriate behaviors (e.g., paying for meals or medicine) within each setting.

The team plugged the data into Quixote to create a virtual agent–in this case, a video game character placed into various game-like scenarios mirroring the stories. As the virtual agent completed a game, it earned points and positive reinforcement for emulating the actions of protagonists in the stories.
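A drastically simplified sketch of that reward-shaping idea, using the pharmacy example (my own illustration; the actions, numbers and function names are hypothetical, not Quixote’s),

```python
# Hypothetical reward shaping: a raw task reward would favour the fastest
# action, but a story-derived bonus makes the socially acceptable action win.
ACTIONS = ["grab_and_leave", "wait_in_line_and_pay"]

def task_reward(action):
    # Both actions obtain the medicine; grabbing is faster, so a time-only
    # objective would prefer it.
    return 10 if action == "grab_and_leave" else 8

def story_bonus(action):
    # Value-alignment signal learned from stories: protagonists wait and pay.
    return 20 if action == "wait_in_line_and_pay" else -50

best = max(ACTIONS, key=lambda a: task_reward(a) + story_bonus(a))
print(best)  # wait_in_line_and_pay
```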

Riedl’s team ran the agent through 500,000 simulations, and it displayed proper social interactions more than 90 percent of the time.

“These games are still fairly simple,” said Riedl, “more like ‘Pac-Man’ instead of ‘Halo.’ However, Quixote enables these artificial intelligence agents to immerse themselves in a story, learn the proper sequence of events and be encoded with acceptable behavior patterns. This type of artificial intelligence can be adapted to robots, offering a variety of applications.”

Within the next six months, Riedl’s team hopes to upgrade Quixote’s games from “old-school” to more modern and complex styles like those found in Minecraft–in which players use blocks to build elaborate structures and societies.

Riedl believes Quixote could one day make it easier for humans to train robots to perform diverse tasks. Steinberg notes that robotic and artificial intelligence systems may one day be a much larger part of military life. This could involve mine detection and deactivation, equipment transport and humanitarian and rescue operations.

“Within a decade, there will be more robots in society, rubbing elbows with us,” said Riedl. “Social conventions grease the wheels of society, and robots will need to understand the nuances of how humans do things. That’s where Quixote can serve as a valuable tool. We’re already seeing it with virtual agents like Siri and Cortana, which are programmed not to say hurtful or insulting things to users.”

This story brought to mind two other projects: RoboEarth (an internet for robots only), mentioned in my Jan. 14, 2014 posting, which was an update on the project featuring its use in hospitals, and RoboBrain, a robot learning project (sourcing the internet, YouTube, and more for information to teach robots), mentioned in my Sept. 2, 2014 posting.

Accountability for artificial intelligence decision-making

How does an artificial intelligence program arrive at its decisions? It’s a question that’s no longer academic, as these programs take on more decision-making chores, according to a May 25, 2016 Carnegie Mellon University news release (also on EurekAlert) by Byron Spice (Note: Links have been removed),

Machine-learning algorithms increasingly make decisions about credit, medical diagnoses, personalized recommendations, advertising and job opportunities, among other things, but exactly how usually remains a mystery. Now, new measurement methods developed by Carnegie Mellon University [CMU] researchers could provide important insights into this process.

Was it a person’s age, gender or education level that had the most influence on a decision? Was it a particular combination of factors? CMU’s Quantitative Input Influence (QII) measures can provide the relative weight of each factor in the final decision, said Anupam Datta, associate professor of computer science and electrical and computer engineering.

It’s reassuring to know that more requests for transparency of the decision-making process are being made. After all, it’s disconcerting that someone with the life experience of a gnat and/or possibly some issues might be developing an algorithm that could affect your life in some fundamental ways. Here’s more from the news release (Note: Links have been removed),

“Demands for algorithmic transparency are increasing as the use of algorithmic decision-making systems grows and as people realize the potential of these systems to introduce or perpetuate racial or sex discrimination or other social harms,” Datta said.

“Some companies are already beginning to provide transparency reports, but work on the computational foundations for these reports has been limited,” he continued. “Our goal was to develop measures of the degree of influence of each factor considered by a system, which could be used to generate transparency reports.”

These reports might be generated in response to a particular incident — why an individual’s loan application was rejected, or why police targeted an individual for scrutiny, or what prompted a particular medical diagnosis or treatment. Or they might be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to see whether a decision-making system inappropriately discriminated between groups of people.

Datta, along with Shayak Sen, a Ph.D. student in computer science, and Yair Zick, a post-doctoral researcher in the Computer Science Department, will present their report on QII at the IEEE Symposium on Security and Privacy, May 23–25 [2016], in San Jose, Calif.

Generating these QII measures requires access to the system, but doesn’t necessitate analyzing the code or other inner workings of the system, Datta said. It also requires some knowledge of the input dataset that was initially used to train the machine-learning system.

A distinctive feature of QII measures is that they can explain decisions of a large class of existing machine-learning systems. A significant body of prior work takes a complementary approach, redesigning machine-learning systems to make their decisions more interpretable and sometimes losing prediction accuracy in the process.

QII measures carefully account for correlated inputs while measuring influence. For example, consider a system that assists in hiring decisions for a moving company. Two inputs, gender and the ability to lift heavy weights, are positively correlated with each other and with hiring decisions. Yet transparency into whether the system uses weight-lifting ability or gender in making its decisions has substantive implications for determining if it is engaging in discrimination.

“That’s why we incorporate ideas for causal measurement in defining QII,” Sen said. “Roughly, to measure the influence of gender for a specific individual in the example above, we keep the weight-lifting ability fixed, vary gender and check whether there is a difference in the decision.”

Observing that single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs, such as age and income, on outcomes and the marginal influence of each input within the set. Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using principled game-theoretic aggregation measures previously applied to measure influence in revenue division and voting.
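To make the causal idea concrete, here is a toy sketch of my own (not the CMU code; the hiring rule and the data are invented) of a unary influence measure: resample one input while holding the others fixed and count how often the decision flips,

```python
# Toy QII-style intervention: influence of a feature = fraction of random
# substitutions of that feature (others held fixed) that flip the decision.
import random

def hiring_rule(gender, lifting_ability):
    # Hypothetical classifier that, under the hood, uses only lifting ability.
    return lifting_ability >= 50

def unary_influence(feature, individual, population, trials=5000):
    base = hiring_rule(**individual)
    flips = 0
    for _ in range(trials):
        intervened = dict(individual)
        donor = random.choice(population)      # resample just this one input
        intervened[feature] = donor[feature]
        flips += hiring_rule(**intervened) != base
    return flips / trials

random.seed(0)
population = [{"gender": random.choice([0, 1]),
               "lifting_ability": random.uniform(0, 100)} for _ in range(1000)]
person = {"gender": 1, "lifting_ability": 80}
for feature in ("gender", "lifting_ability"):
    print(feature, unary_influence(feature, person, population))
# gender: 0.0 (no intervention flips the decision); lifting_ability: > 0.
# The rule keys on ability, not gender, even if the two correlate in the data.
```

The set and marginal influences described above extend this by intervening on groups of inputs and averaging, Shapley-style, over the sets a feature belongs to.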

“To get a sense of these influence measures, consider the U.S. presidential election,” Zick said. “California and Texas have influence because they have many voters, whereas Pennsylvania and Ohio have power because they are often swing states. The influence aggregation measures we employ account for both kinds of power.”

The researchers tested their approach against some standard machine-learning algorithms that they used to train decision-making systems on real data sets. They found that the QII provided better explanations than standard associative measures for a host of scenarios they considered, including sample applications for predictive policing and income prediction.

Now, they are seeking collaboration with industrial partners so that they can employ QII at scale on operational machine-learning systems.

Here’s a link to and a citation for a PDF of the paper presented at the May 2016 conference,

Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems by Anupam Datta, Shayak Sen, and Yair Zick. Presented at the IEEE Symposium on Security and Privacy, May 23–25, 2016, in San Jose, Calif.

I’ve also embedded the paper here,

CarnegieMellon_AlgorithmicTransparency

AI (artificial intelligence) and logical dialogue in Japanese

Hitachi Corporation has been exciting some interest with its announcement of the latest iteration of its artificial intelligence programme and its new ability to handle Japanese (from a June 5, 2016 news item on Nanotechnology Now),

Today, the social landscape changes rapidly and customer needs are becoming increasingly diversified. Companies are expected to continuously create new services and values. Further, driven by recent advancements in information & telecommunication and analytics technologies, interest is growing in technology that can extract valuable insight from big data which is generated on a daily basis.

Hitachi has been developing a basic AI technology that analyzes huge volumes of English text data and presents opinions in English to help enterprises make business decisions. The original technology required rules of grammar specific to the English language to be programmed, to extract sentences representing reasons and grounds for opinions. This process represented a hurdle in applying the system to Japanese or any other language, as it required dedicated programs correlated to the linguistic rules of the target language.

Applying deep learning eliminated this issue, enabling the new technology to recognize sentences that have a high probability of being reasons and grounds without relying on linguistic rules. More specifically, the AI system is presented with sentences representing reasons and grounds extracted from thousands of articles; learning from these patterns, the system becomes able to discriminate sentences that represent reasons and grounds in new articles. Hitachi also added an “attention mechanism”, which supports the deep learning by estimating which words and phrases are worthy of attention in texts such as news articles and research reports. The “attention mechanism” helps the system to grasp the points that require attention, including words and phrases related to topics and values. This method enables the system to distinguish sentences that have a high probability of being reasons and grounds from text data in any language.
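For the curious, here is an illustrative sketch of the attention idea (my own toy example in Python, not Hitachi’s system; the word vectors and the scoring vector stand in for quantities that would be learned during training),

```python
# Attention pooling: score each word, softmax the scores into weights, and
# summarise the sentence as the weighted average of its word vectors.
import numpy as np

rng = np.random.default_rng(1)
dim = 16
words = ["profits", "rose", "because", "demand", "increased"]
word_vecs = {w: rng.standard_normal(dim) for w in words}  # stand-in embeddings
attention_query = rng.standard_normal(dim)                # learned in training

vecs = np.stack([word_vecs[w] for w in words])
scores = vecs @ attention_query                  # relevance score per word
weights = np.exp(scores) / np.exp(scores).sum()  # softmax over the sentence
sentence_vec = weights @ vecs                    # would feed a reason/ground classifier

for w, a in sorted(zip(words, weights), key=lambda t: -t[1]):
    print(f"{w:10s} {a:.2f}")                    # which words the model 'attends' to
```

Because the attention weights are learned from labelled examples rather than hand-coded grammar rules, nothing in this scheme is specific to English, which is the point of Hitachi’s announcement.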

They have plans for this technology,

The technology developed will be core technology in achieving a multi-lingual AI system capable of offering opinion. Hitachi will pursue further research to realize AI systems supporting business decision making by enterprises worldwide.

The June 2, 2016 Hitachi news release which originated the news item can be found here.

Deep learning for cosmetics

Deep learning seems to be a synonym for artificial intelligence, if a May 24, 2016 Insilico Medicine news release on EurekAlert about its use in cosmetics and as an alternative to animal testing is to be believed (Note: Links have been removed),

In addition to heading Insilico Medicine, Inc, a big data analytics company focused on applying advanced signaling pathway activation analysis and deep learning methods to biomarker and drug discovery in cancer and age-related diseases, Alex Zhavoronkov, PhD is the co-founder and principal scientist of Youth Laboratories, a company focusing on applying machine learning methods to evaluating the condition of human skin and general health status using multimodal inputs. The company developed an app called RYNKL, a mobile app for evaluating the effectiveness of various anti-aging interventions by analyzing “wrinkleness” and other parameters. The app was developed using funds from a Kickstarter crowdfunding campaign and is now being extensively tested and improved. The company also developed a platform for running online beauty competitions, where humans are evaluated by a panel of robot judges. Teams of programmers also compete on the development of most innovative algorithms to evaluate humans.

“One of my goals in life is to minimize unnecessary animal testing in areas, where computer simulations can be even more relevant to humans. Serendipitously, some of our approaches find surprising new applications in the beauty industry, which has moved away from human testing and is moving towards personalizing cosmetics and beauty products. We are happy to present our research results to a very relevant audience at this major industry event”, said Alex Zhavoronkov, CEO of Insilico Medicine, Inc.

“Artificial intelligence is entering every aspect of our daily life. Deep learning systems are already outperforming humans in image and text recognition and we would like to bring some of the most innovative players like Insilico Medicine, who dare to work with gene expression, imaging and drug data to find novel ways to keep us healthy, young and beautiful”, said Irina Kremlin, director of INNOCOS.

Here’s a link to and a citation for the paper,

Deep biomarkers of human aging: Application of deep neural networks to biomarker development by Evgeny Putin, Polina Mamoshina, Alexander Aliper, Mikhail Korzinkin, Alexey Moskalev, Alexey Kolosov, Alexander Ostrovskiy, Charles Cantor, Jan Vijg, and Alex Zhavoronkov. Aging May 2016 vol. 8, no. 5

This is an open access paper.

You can find out more about Insilico Medicine here and RYNKL here. I was not able to find a website for Youth Laboratories.

Will AI ‘artists’ be able to fool a panel judging entries to the Neukom Institute Prizes in Computational Arts?

There’s an intriguing competition taking place at Dartmouth College (US) according to a May 2, 2016 piece on phys.org (Note: Links have been removed),

Algorithms help us to choose which films to watch, which music to stream and which literature to read. But what if algorithms went beyond their jobs as mediators of human culture and started to create culture themselves?

In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.

Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”

On May 18 [2016] at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.

The piece on phys.org is a crossposting of a May 2, 2016 article by Michael Casey and Daniel N. Rockmore for The Conversation. The article goes on to describe the competitions,

The dance music competition (“Algorhythms”) requires participants to construct an enjoyable (fun, cool, rad, choose your favorite modifier for having an excellent time on the dance floor) dance set from a predefined library of dance music. In this case the initial random “seed” is a single track from the database. The software package should be able to use this as inspiration to create a 15-minute set, mixing and modifying choices from the library, which includes standard annotations of more than 20 features, such as genre, tempo (bpm), beat locations, chroma (pitch) and brightness (timbre).
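Here is one naive strategy a participant might code up (a hypothetical sketch of mine; the competition does not prescribe any algorithm, and real entries would also mix and modify tracks rather than merely sequence them),

```python
# Greedy playlist builder: starting from the seed, repeatedly append the
# unused track closest in feature space until the set reaches 15 minutes.
import math

library = [  # toy stand-ins for the annotated library (bpm, brightness, length)
    {"title": "Seed", "bpm": 124, "brightness": 0.60, "minutes": 3.5},
    {"title": "A",    "bpm": 126, "brightness": 0.55, "minutes": 4.0},
    {"title": "B",    "bpm": 98,  "brightness": 0.20, "minutes": 5.0},
    {"title": "C",    "bpm": 128, "brightness": 0.70, "minutes": 4.5},
    {"title": "D",    "bpm": 125, "brightness": 0.65, "minutes": 3.5},
]

def distance(a, b):
    # Similarity over two of the 20+ annotated features; brightness is scaled
    # so that both features matter on roughly the same scale.
    return math.hypot(a["bpm"] - b["bpm"], 50 * (a["brightness"] - b["brightness"]))

def build_set(seed, tracks, target_minutes=15):
    dance_set, current = [seed], seed
    pool = [t for t in tracks if t is not seed]
    total = seed["minutes"]
    while pool and total < target_minutes:
        current = min(pool, key=lambda t: distance(current, t))
        pool.remove(current)
        dance_set.append(current)
        total += current["minutes"]
    return dance_set

print([t["title"] for t in build_set(library[0], library)])  # ['Seed', 'D', 'C', 'A']
```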

In what might seem a stiffer challenge, the sonnet and short story competitions (“PoeTix” and “DigiLit,” respectively) require participants to submit self-contained software packages that upon the “seed” or input of a (common) noun phrase (such as “dog” or “cheese grater”) are able to generate the desired literary output. Moreover, the code should ideally be able to generate an infinite number of different works from a single given prompt.

To perform the test, we will screen the computer-made entries to eliminate obvious machine-made creations. We’ll mix human-generated work with the rest, and ask a panel of judges to say whether they think each entry is human- or machine-generated. For the dance music competition, scoring will be left to a group of students, dancing to both human- and machine-generated music sets. A “winning” entry will be one that is statistically indistinguishable from the human-generated work.

The competitions are open to any and all comers [competition is now closed; the deadline was April 15, 2016]. To date, entrants include academics as well as nonacademics. As best we can tell, no companies have officially thrown their hats into the ring. This is somewhat of a surprise to us, as in the literary realm companies are already springing up around machine generation of more formulaic kinds of “literature,” such as earnings reports and sports summaries, and there is of course a good deal of AI automation around streaming music playlists, most famously Pandora.

The authors discuss issues with judging the entries,

Evaluation of the entries will not be entirely straightforward. Even in the initial Imitation Game, the question was whether conversing with men and women over time would reveal their gender differences. (It’s striking that this question was posed by a closeted gay man [Alan Turing].) The Turing Test, similarly, asks whether the machine’s conversation reveals its lack of humanity not in any single interaction but in many over time.

It’s also worth considering the context of the test/game. Is the probability of winning the Imitation Game independent of time, culture and social class? Arguably, as we in the West approach a time of more fluid definitions of gender, that original Imitation Game would be more difficult to win. Similarly, what of the Turing Test? In the 21st century, our communications are increasingly with machines (whether we like it or not). Texting and messaging have dramatically changed the form and expectations of our communications. For example, abbreviations, misspellings and dropped words are now almost the norm. The same considerations apply to art forms as well.

The authors also pose the question: Who is the artist?

Thinking about art forms leads naturally to another question: who is the artist? Is the person who writes the computer code that creates sonnets a poet? Is the programmer of an algorithm to generate short stories a writer? Is the coder of a music-mixing machine a DJ?

Where is the divide between the artist and the computational assistant and how does the drawing of this line affect the classification of the output? The sonnet form was constructed as a high-level algorithm for creative work – though one that’s executed by humans. Today, when the Microsoft Office Assistant “corrects” your grammar or “questions” your word choice and you adapt to it (either happily or out of sheer laziness), is the creative work still “yours” or is it now a human-machine collaborative work?

That’s an interesting question and one I asked in the context of two ‘mashup’ art exhibitions in Vancouver (Canada) in my March 8, 2016 posting.

Getting back to Dartmouth College and its Neukom Institute Prizes in Computational Arts, here’s a list of the competition judges from the competition homepage,

David Cope (Composer, Algorithmic Music Pioneer, UCSC Music Professor)
David Krakauer (President, the Santa Fe Institute)
Louis Menand (Pulitzer Prize winning author and Professor at Harvard University)
Ray Monk (Author, Biographer, Professor of Philosophy)
Lynn Neary (NPR: Correspondent, Arts Desk and Guest Host)
Joe Palca (NPR: Correspondent, Science Desk)
Robert Siegel (NPR: Senior Host, All Things Considered)

The announcements will be made Wednesday, May 18, 2016. I can hardly wait!

Addendum

Martin Robbins has written a rather amusing May 6, 2016 post for the Guardian science blogs on AI and art critics where he also notes that the question: What is art? is unanswerable (Note: Links have been removed),

Jonathan Jones is unhappy about artificial intelligence. It might be hard to tell from a casual glance at the art critic’s recent column, “The digital Rembrandt: a new way to mock art, made by fools,” but if you look carefully the subtle clues are there. His use of the adjectives “horrible, tasteless, insensitive and soulless” in a single sentence, for example.

The source of Jones’s ire is a new piece of software that puts… I’m so sorry… the ‘art’ into ‘artificial intelligence’. By analyzing a subset of Rembrandt paintings that featured ‘bearded white men in their 40s looking to the right’, its algorithms were able to extract the key features that defined the Dutchman’s style. …

Of course an artificial intelligence is the worst possible enemy of a critic, because it has no ego and literally does not give a crap what you think. An arts critic trying to deal with an AI is like an old school mechanic trying to replace the battery in an iPhone – lost, possessing all the wrong tools and ultimately irrelevant. I’m not surprised Jones is angry. If I were in his shoes, a computer painting a Rembrandt would bring me out in hives.

Can a computer really produce art? We can’t answer that without dealing with another question: what exactly is art? …

I wonder what either Robbins or Jones will make of the Dartmouth competition?

Are they just computer games or are we in a race with technology?

This story poses some interesting questions that touch on the uneasiness being felt as computers get ‘smarter’. From an April 13, 2016 news item on ScienceDaily,

Philosopher René Descartes’ formulation of what makes humans unique is beginning to sound hollow. ‘I think, therefore soon I am obsolete’ seems more appropriate. When a computer routinely beats us at chess and we can barely navigate without the help of a GPS, have we outlived our place in the world? Not quite. Welcome to the front line of research in cognitive skills, quantum computers and gaming.

Today there is an on-going battle between man and machine. While genuine machine consciousness is still years into the future, we are beginning to see computers make choices that previously demanded a human’s input. Recently, the world held its breath as Google’s algorithm AlphaGo beat a professional player in the game Go–an achievement demonstrating the explosive speed of development in machine capabilities.

An April 13, 2016 Aarhus University press release (also on EurekAlert) by Rasmus Rørbæk, which originated the news item, further develops the point,

But we are not beaten yet — human skills are still superior in some areas. This is one of the conclusions of a recent study by Danish physicist Jacob Sherson, published in the journal Nature.

“It may sound dramatic, but we are currently in a race with technology — and steadily being overtaken in many areas. Features that used to be uniquely human are fully captured by contemporary algorithms. Our results are here to demonstrate that there is still a difference between the abilities of a man and a machine,” explains Jacob Sherson.

At the interface between quantum physics and computer games, Sherson and his research group at Aarhus University have identified one of the abilities that still makes us unique compared to a computer’s enormous processing power: our skill in approaching problems heuristically and solving them intuitively. The discovery was made at the AU Ideas Centre CODER, where an interdisciplinary team of researchers work to transfer some human traits to the way computer algorithms work.

Quantum physics holds the promise of immense technological advances in areas ranging from computing to high-precision measurements. However, the problems that need to be solved to get there are so complex that even the most powerful supercomputers struggle with them. This is where the core idea behind CODER — combining the processing power of computers with human ingenuity — becomes clear.

Our common intuition

Like Columbus in QuantumLand, the CODER research group mapped out how the human brain is able to make decisions based on intuition and accumulated experience. This is done using the online game “Quantum Moves.” Over 10,000 people have played the game, which allows everyone to contribute to basic research in quantum physics.

“The map we created gives us insight into the strategies formed by the human brain. We behave intuitively when we need to solve an unknown problem, whereas for a computer this is incomprehensible. A computer churns through enormous amounts of information, but we can choose not to do this by basing our decision on experience or intuition. It is these intuitive insights that we discovered by analysing the Quantum Moves player solutions,” explains Jacob Sherson.

The laws of quantum physics dictate an upper speed limit for data manipulation, which in turn sets the ultimate limit to the processing power of quantum computers — the Quantum Speed Limit. Until now a computer algorithm has been used to identify this limit. It turns out that with human input researchers can find much better solutions than the algorithm.

“The players solve a very complex problem by creating simple strategies. Where a computer goes through all available options, players automatically search for a solution that intuitively feels right. Through our analysis we found that there are common features in the players’ solutions, providing a glimpse into the shared intuition of humanity. If we can teach computers to recognise these good solutions, calculations will be much faster. In a sense, we are downloading our common intuition to the computer,” says Jacob Sherson.

And it works. The group has shown that we can break the Quantum Speed Limit by combining the cerebral cortex and computer chips. This is the new powerful tool in the development of quantum computers and other quantum technologies.

After the buildup, the press release focuses on citizen science and computer games,

Science is often perceived as something distant and exclusive, conducted behind closed doors. To enter you have to go through years of education, and preferably have a doctorate or two. Now a completely different reality is materialising.

In recent years, a new phenomenon has appeared–citizen science breaks down the walls of the laboratory and invites in everyone who wants to contribute. The team at Aarhus University uses games to engage people in voluntary science research. Every week people around the world spend 3 billion hours playing games. Games are entering almost all areas of our daily life and have the potential to become an invaluable resource for science.

“Who needs a supercomputer if we can access even a small fraction of this computing power? By turning science into games, anyone can do research in quantum physics. We have shown that games break down the barriers between quantum physicists and people of all backgrounds, providing phenomenal insights into state-of-the-art research. Our project combines the best of both worlds and helps challenge established paradigms in computational research,” explains Jacob Sherson.

The difference between the machine and us, figuratively speaking, is that we intuitively reach for the needle in a haystack without knowing exactly where it is. We ‘guess’ based on experience and thereby skip a whole series of bad options. For Quantum Moves, intuitive human actions have been shown to be compatible with the best computer solutions. In the future it will be exciting to explore many other problems with the aid of human intuition.

“We are at the borderline of what we as humans can understand when faced with the problems of quantum physics. With the problem underlying Quantum Moves we give the computer every chance to beat us. Yet, over and over again we see that players are more efficient than machines at solving the problem. While Hollywood blockbusters on artificial intelligence are starting to seem increasingly realistic, our results demonstrate that the comparison between man and machine still sometimes favours us. We are very far from computers with human-type cognition,” says Jacob Sherson and continues:

“Our work is first and foremost a big step towards the understanding of quantum physical challenges. We do not know if this can be transferred to other challenging problems, but it is definitely something that we will work hard to resolve in the coming years.”

Here’s a link to and a citation for the paper,

Exploring the quantum speed limit with computer games by Jens Jakob W. H. Sørensen, Mads Kock Pedersen, Michael Munch, Pinja Haikka, Jesper Halkjær Jensen, Tilo Planke, Morten Ginnerup Andreasen, Miroslav Gajdacz, Klaus Mølmer, Andreas Lieberoth, & Jacob F. Sherson. Nature 532, 210–213  (14 April 2016) doi:10.1038/nature17620 Published online 13 April 2016

This paper is behind a paywall.

Managing risks in a world of converging technology (the fourth industrial revolution)

Finally there’s an answer to the question: What (!!!) is the fourth industrial revolution? (I took a guess [wrongish] in my Nov. 20, 2015 post about a special presentation at the 2016 World Economic Forum’s IdeasLab.)

Andrew Maynard in a Dec. 3, 2015 think piece (also called a ‘thesis’) for Nature Nanotechnology answers the question,

… an approach that focuses on combining technologies such as additive manufacturing, automation, digital services and the Internet of Things, and … is part of a growing movement towards exploiting the convergence between emerging technologies. This technological convergence is increasingly being referred to as the ‘fourth industrial revolution’, and like its predecessors, it promises to transform the ways we live and the environments we live in. (While there is no universal agreement on what constitutes an ‘industrial revolution’, proponents of the fourth industrial revolution suggest that the first involved harnessing steam power to mechanize production; the second, the use of electricity in mass production; and the third, the use of electronics and information technology to automate production.)

In anticipation of the 2016 World Economic Forum (WEF), which has the fourth industrial revolution as its theme, Andrew explains how he sees the situation we are sliding into (from Andrew Maynard’s think piece),

As more people get closer to gaining access to increasingly powerful converging technologies, a complex risk landscape is emerging that lies dangerously far beyond the ken of current regulations and governance frameworks. As a result, we are in danger of creating a global ‘wild west’ of technology innovation, where our good intentions may be among the first casualties.

There are many other examples where converging technologies are increasing the gap between what we can do and our understanding of how to do it responsibly. The convergence between robotics, nanotechnology and cognitive augmentation, for instance, and that between artificial intelligence, gene editing and maker communities both push us into uncertain territory. Yet despite the vulnerabilities inherent with fast-evolving technological capabilities that are tightly coupled, complex and poorly regulated, we lack even the beginnings of national or international conceptual frameworks to think about responsible decision-making and responsive governance.

He also lists some recommendations,

Fostering effective multi-stakeholder dialogues.

Encouraging actionable empathy.

Providing educational opportunities for current and future stakeholders.

Developing next-generation foresight capabilities.

Transforming approaches to risk.

Investing in public–private partnerships.

Andrew concludes with this,

… The good news is that, in fields such as nanotechnology and synthetic biology, we have already begun to develop the skills to do this — albeit in a small way. We now need to learn how to scale up our efforts, so that our convergence in working together to build a better future mirrors the convergence of the technologies that will help achieve this.

It’s always a pleasure to read Andrew’s work as it’s thoughtful. I was surprised (since Andrew is a physicist by training) and happy to see the recommendation for “actionable empathy.”

Although I don’t always agree with him, on this occasion I don’t have any particular disagreements, but I think that including a recommendation or two to cover the certainty that we will get something wrong and have to work quickly to right things would be a good idea. I’m thinking primarily of governments, which are notoriously slow to respond to new developments with legislation and equally slow to change that legislation when the situation changes.

The technological environment Andrew is describing is dynamic, that is, fast-moving and changing at a pace we have yet to properly conceptualize. Governments will need to change so they can respond in an agile fashion. My suggestion is:

Develop policy task forces that can be convened in hours and given the authority to respond to an immediate situation with oversight after the fact

Getting back to Andrew Maynard, you can find his think piece in its entirety via this link and citation,

Navigating the fourth industrial revolution by Andrew D. Maynard. Nature Nanotechnology 10, 1005–1006 (2015) doi:10.1038/nnano.2015.286 Published online 03 December 2015

This paper is behind a paywall.