
Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, uses artificial neural networks to process algorithmic calculations. This software architecture therefore mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. Fed thousands of accurately tagged images of cats, for example, a machine learns first to recognise those cats and later any image of a cat, including ones it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1957, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once implemented on a computer, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This layered design can support calculations spread across thousands of layers, and it is this depth of architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.
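To make that description concrete, here is a minimal sketch in Python of the weighted-sum-and-threshold behaviour Marchand-Maillet describes. The weights, inputs and the two-layer structure are purely illustrative and not taken from any system mentioned in the article.

```python
def neuron(inputs, weights, bias, threshold=0.0):
    # Weighted sum of the inputs; the neuron "fires" (outputs a signal)
    # only if that sum exceeds a pre-defined threshold.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return total if total > threshold else 0.0

def layer(inputs, weight_rows, biases):
    # Every neuron in a layer sees the same inputs; the layer's outputs
    # are weighted and summed again when fed to the next layer.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Toy forward pass through two layers (all numbers are arbitrary).
pixels = [0.2, 0.8, 0.5]                      # stand-in for pixel values
hidden = layer(pixels, [[0.4, -0.6, 0.1],
                        [0.7,  0.2, -0.3]], biases=[0.0, 0.1])
output = layer(hidden, [[0.5, -0.5]], biases=[0.0])
print(output)
```

In real networks the hard threshold is usually replaced by a smooth activation function, and the weights are learned from training examples rather than written by hand.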

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.

Video games to the rescue

For decades, limited computing power held back more complex applications, even at the cutting edge of research. Industry walked away, and deep learning only survived thanks to the video-games sector, which eventually began producing graphics chips, or GPUs, with unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful function for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short-Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
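As a rough illustration of why loops help with sequences, here is a bare-bones recurrent cell in Python. It is not Schmidhuber’s LSTM (which adds gating so information survives over long spans); it only shows how a hidden state carries earlier context forward, so the same ending can be interpreted differently depending on what came before it. The numbers and letter encodings are invented for the example.

```python
import math

def rnn_step(x, h_prev, w_x=0.8, w_h=0.5, b=0.0):
    # The new hidden state depends on the current input AND on the
    # previous hidden state, so earlier sounds keep influencing later ones.
    return math.tanh(w_x * x + w_h * h_prev + b)

def run_sequence(inputs):
    h = 0.0                      # empty memory to start
    for x in inputs:             # feed the sequence one element at a time
        h = rnn_step(x, h)
    return h                     # final state reflects the whole sequence

# Two toy "words" with the same ending but different beginnings.
boat = [0.2, 0.9, 0.1, 0.7]         # invented codes for b-o-a-t
float_ = [0.6, 0.4, 0.9, 0.1, 0.7]  # invented codes for f-l-o-a-t
print(run_sequence(boat), run_sequence(float_))  # the final states differ
```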

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

Environmental impacts and graphene

Researchers at the University of California at Riverside (UCR) have published the results of what they claim is the first study of the environmental impact of graphene use. From the April 29, 2014 news item on ScienceDaily,

In a first-of-its-kind study of how a material some think could transform the electronics industry moves in water, researchers at the University of California, Riverside Bourns College of Engineering found graphene oxide nanoparticles are very mobile in lakes or streams and therefore may well cause negative environmental impacts if released.

Graphene oxide nanoparticles are an oxidized form of graphene, a single layer of carbon atoms prized for its strength, conductivity and flexibility. Applications for graphene include everything from cell phones and tablet computers to biomedical devices and solar panels.

The use of graphene and other carbon-based nanomaterials, such as carbon nanotubes, is growing rapidly. At the same time, recent studies have suggested graphene oxide may be toxic to humans. [emphasis mine]

As production of these nanomaterials increases, it is important for regulators, such as the Environmental Protection Agency, to understand their potential environmental impacts, said Jacob D. Lanphere, a UC Riverside graduate student who co-authored a just-published paper about graphene oxide nanoparticle transport in ground and surface water environments.

I wish they had cited the studies suggesting graphene oxide (GO) may be toxic. After a quick search I found: Internalization and cytotoxicity of graphene oxide and carboxyl graphene nanoplatelets in the human hepatocellular carcinoma cell line Hep G2 by Tobias Lammel, Paul Boisseaux, Maria-Luisa Fernández-Cruz, and José M Navas (free access paper in Particle and Fibre Toxicology 2013, 10:27 http://www.particleandfibretoxicology.com/content/10/1/27). From what I can tell, this was a highly specialized investigation conducted in a laboratory. While the results seem concerning, it’s difficult to draw conclusions from this study or others that may have been conducted.

Dexter Johnson in a May 1, 2014 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides more relevant citations and some answers (Note: Links have been removed),

While the UC Riverside team did not look at the toxicity of GO in their study, researchers in the Hersam group at Northwestern University did report in a paper published in the journal Nano Letters (“Minimizing Oxidation and Stable Nanoscale Dispersion Improves the Biocompatibility of Graphene in the Lung”) that GO was the most toxic form of the graphene-based materials tested in mice lungs. In other research published in the Journal of Hazardous Materials (“Investigation of acute effects of graphene oxide on wastewater microbial community: A case study”), investigators determined that the toxicity of GO was dose dependent and that it was toxic in the range of 50 to 300 mg/L. So, below 50 mg/L there appear to be no toxic effects from GO. To give you some context, arsenic is considered toxic at 0.01 mg/L.

Dexter also contrasts graphene oxide with graphene (from his May 1, 2014 post; Note: A link has been removed),

While GO is quite different from graphene in terms of its properties (GO is an insulator while graphene is a conductor), there are many applications that are similar for both GO and graphene. This is the result of GO’s functional groups allowing for different derivatives to be made on the surface of GO, which in turn allows for additional chemical modification. Some have suggested that GO would make a great material to be deposited on additional substrates for thin conductive films where the surface could be tuned for use in optical data storage, sensors, or even biomedical applications.

Getting back to the UCR research, an April 28, 2014 UCR news release (also on EurekAlert but dated April 29, 2014) describes it in more detail,

Walker’s [Sharon L. Walker, an associate professor and the John Babbage Chair in Environmental Engineering at UC Riverside] lab is one of only a few in the country studying the environmental impact of graphene oxide. The research that led to the Environmental Engineering Science paper focused on understanding graphene oxide nanoparticles’ stability, or how well they hold together, and movement in groundwater versus surface water.

The researchers found significant differences.

In groundwater, which typically has a higher degree of hardness and a lower concentration of natural organic matter, the graphene oxide nanoparticles tended to become less stable and eventually settle out or be removed in subsurface environments.

In surface waters, where there is more organic material and less hardness, the nanoparticles remained stable and moved farther, especially in the subsurface layers of the water bodies.

The researchers also found that graphene oxide nanoparticles, despite being nearly flat (as opposed to spherical, like many other engineered nanoparticles), follow the same theories of stability and transport.

I don’t know what conclusions to draw from the information that the graphene oxide nanoparticles remained stable and moved farther in the water. Is a potential buildup of graphene oxide nanoparticles considered a problem because they could end up in our water supply and we would be poisoned by these particles? Dexter provides an answer (from his May 1, 2014 post),

Ultimately, the question of danger of any material or chemical comes down to the simple equation: Hazard x Exposure=Risk. To determine what the real risk is of GO reaching concentrations equal to those that have been found to be toxic (50-300 mg/L) is the key question.

The results of this latest study don’t really answer that question, but only offer a tool by which to measure the level of exposure to groundwater if there was a sudden spill of GO at a manufacturing facility.

While I was focused on ingestion by humans, it seems this research was more focused on the natural environment and possible future poisoning by graphene oxide.
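Purely to make the arithmetic in Dexter’s Hazard x Exposure = Risk framing concrete, here is a trivial Python sketch comparing a hypothetical measured concentration against the thresholds quoted above; the spill figure is invented and the comparison is not part of either study.

```python
def reaches_toxic_level(concentration_mg_per_l, threshold_mg_per_l):
    # A hazard only translates into risk if exposure actually reaches
    # the concentration at which toxic effects were observed.
    return concentration_mg_per_l >= threshold_mg_per_l

GO_THRESHOLD = 50.0       # mg/L, lower end of the range quoted above
ARSENIC_THRESHOLD = 0.01  # mg/L, quoted for comparison

measured = 2.0            # hypothetical concentration after a spill, mg/L
print(reaches_toxic_level(measured, GO_THRESHOLD))       # False: below 50 mg/L
print(reaches_toxic_level(measured, ARSENIC_THRESHOLD))  # True, were it arsenic
```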

Here’s a link to and a citation for the paper,

Stability and Transport of Graphene Oxide Nanoparticles in Groundwater and Surface Water by Jacob D. Lanphere, Brandon Rogers, Corey Luth, Carl H. Bolster, and Sharon L. Walker. Environmental Engineering Science, ahead of print. doi:10.1089/ees.2013.0392.

Online Ahead of Print: March 17, 2014

If available online, this is behind a paywall.