
Where’s the science? Stephen Hawking’s Brave New World debuts Nov. 15, 2013

Yesterday, Nov. 14, 2013, I happened to catch Dr. Carin Bondar being interviewed on a local (Vancouver, Canada) television programme about her upcoming appearances as one of the hosts of season two of Stephen Hawking’s Brave New World, which debuts tonight (Nov. 15, 2013). While enthusiastic about this latest venture, Dr. Bondar didn’t offer much science information during the interview; she focused on her adventures as part of a virtual military team and her surprise at some of the work being done in the field of prosthetics. There’s a bit more detail about the programme (though not the science) in Bondar’s Nov. 12, 2013 blog entry on the Huffington Post website,

One of the highlights of my career thus far was being involved in a groundbreaking television series Stephen Hawking’s Brave New World premiering on Discovery World. A co-operative project between Handel Productions (Canada) and IWC (England), the series showcases some of the most mind-blowing new technologies that will impact our daily lives in the not-too-distant future.

Each of the six one-hour episodes is narrated by Professor Stephen Hawking, world-renowned physicist and author of the best-seller A Brief History of Time, and comprises the investigations of a team of five scientists who travel the world — myself and Professor Chris Eliasmith from Canada, Dr. Daniel Kraft from the US, and Professor Jim Al-Khalili and Dr. Aarathi Prasad from the UK.

The premiere episode, called Inspired by Nature, is all about how we need only to look to the natural world for some of the most awe-inspiring inventions. Millions of years of evolution have resulted in some highly complex and innovative strategies for life across the animal kingdom…and this episode shows us how humans are attempting to re-create them for our own purposes.

Stephen Hawking’s Brave New World premieres Friday, November 15 at 8 p.m. ET/10 p.m. PT on Discovery World.

Bondar’s personal blog offers very little more, from a Nov. 1, 2013 posting,

Hi Everyone! I’m thrilled to be one of the presenters on season two of ‘Brave New World with Stephen Hawking’, which will premiere on November 15th. Shooting took place last spring all over the states. It was a crazy, exhausting whirlwind from Atlanta to San Diego, LA, Houston, Pittsburgh and Boston, but it was one of the coolest experiences of my life. I love this promo image of me in a Faraday (bird) cage at the Boston Museum of Science.

The Discovery World website’s programme webpage provides a bit more detail (where’s the science?) about the first three shows in the series,

STEPHEN HAWKING’S BRAVE NEW WORLD: “Inspired by Nature”
Hawking and his team investigate groundbreaking innovations in science inspired by nature. Aarathi Prasad road tests two of the most advanced all-terrain robots in the world designed to go where humans and vehicles can’t; Chris Eliasmith examines an extraordinary new fabric that mimics the adhesive ability of gecko feet and bonds to any surface; Daniel Kraft visits Vancouver-based Nuytco Research where underwater subs are used to simulate zero gravity to train astronauts for deep space exploration; Jim Al-Khalili examines how re-engineering a virus can prevent pandemics; and Carin Bondar discovers how Nikola Tesla’s remarkable dream of wireless power is finally being realized.

STEPHEN HAWKING’S BRAVE NEW WORLD: “Code Red”
Hawking and his team examine new inventions that will change how humans deal with crises in the future. Chris Eliasmith looks into a revolutionary pilotless helicopter (the K-Max) that can fly and perform complex manoeuvres on its own; Daniel Kraft tests out the latest high-tech bomb disposal robot; Jim Al-Khalili checks out a sniper rifle equipped with jet fighter target tracking technology; Carin Bondar examines face recognition binoculars that can identify criminals within 15 seconds; then, Aarathi Prasad examines a lifesaving breakthrough that allows oxygen to be injected directly into the bloodstream.

STEPHEN HAWKING’S BRAVE NEW WORLD: “Virtual World”
Hawking and his team investigate technology transforming the idea of reality. Carin Bondar takes part in a remarkable 3D virtual training program created for the military; Aarathi Prasad tests a new system that maps locations inaccessible by GPS; Daniel Kraft investigates 3D bio-printing where computer designs can be turned into living tissue; Chris Eliasmith tests the latest in gaming technology – a breakthrough in virtual reality that promises the most immersive experience yet; and Jim Al-Khalili tests a computer that can read the human mind.

It would have been nice to find out a little more about the science and a little less about the exciting aspects of these adventures. Perhaps the producers thought it best to confine the science to the broadcast.

The local tv programme where Dr. Bondar was interviewed is called The Rush and while the Nov. 14, 2013 interview has yet (as of Nov. 15, 2013, 13h30 or 1:30 pm PST) to be posted online, you should be able to find it shortly.

I have mentioned Chris Eliasmith (University of Waterloo, Ontario, Canada) here before, notably in my November 29, 2012 posting about his work simulating neurons in the virtual world.

Computer simulation errors and corrections

In addition to being a news release, this is a really good piece of science writing by Paul Preuss for the Lawrence Berkeley National Laboratory (Berkeley Lab). From the Jan. 3, 2013 news release,

Because modern computers have to depict the real world with digital representations of numbers instead of physical analogues, to simulate the continuous passage of time they have to digitize time into small slices. This kind of simulation is essential in disciplines from medical and biological research, to new materials, to fundamental considerations of quantum mechanics, and the fact that it inevitably introduces errors is an ongoing problem for scientists.

Scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have now identified and characterized the source of tenacious errors and come up with a way to separate the realistic aspects of a simulation from the artifacts of the computer method. …

Here’s more detail about the problem and solution,

How biological molecules move is hardly the only field where computer simulations of molecular-scale motion are essential. The need to use computers to test theories and model experiments that can’t be done on a lab bench is ubiquitous, and the problems that Sivak and his colleagues encountered weren’t new.

“A simulation of a physical process on a computer cannot use the exact, continuous equations of motion; the calculations must use approximations over discrete intervals of time,” says Sivak. “It’s well known that standard algorithms that use discrete time steps don’t conserve energy exactly in these calculations.”

One workhorse method for modeling molecular systems is Langevin dynamics, based on equations first developed by the French physicist Paul Langevin over a century ago to model Brownian motion. Brownian motion is the random movement of particles in a fluid (originally pollen grains on water) as they collide with the fluid’s molecules – particle paths resembling a “drunkard’s walk,” which Albert Einstein had used just a few years earlier to establish the reality of atoms and molecules. Instead of impractical-to-calculate velocity, momentum, and acceleration for every molecule in the fluid, Langevin’s method substituted an effective friction to damp the motion of the particle, plus a series of random jolts.
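As an aside from me (not the news release): Langevin’s recipe — a friction term damping the force plus random jolts — takes only a few lines of code. Here’s a toy Python sketch of an overdamped Langevin integrator for a single particle in a harmonic well; the potential and all parameter values are my own illustrative assumptions, not anything from the paper.

```python
import numpy as np

# Toy overdamped Langevin (Brownian dynamics) integrator: one particle
# in a harmonic well U(x) = k*x^2/2. Friction damps the force; random
# "jolts" stand in for collisions with the fluid's molecules.
rng = np.random.default_rng(42)
k, gamma, kT, dt = 1.0, 1.0, 1.0, 0.01  # spring constant, friction, temperature, time step

x, samples = 0.0, []
for _ in range(100_000):
    drift = -(k / gamma) * x * dt                                  # friction-damped force
    jolt = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal()  # random kick
    x += drift + jolt
    samples.append(x)

# For the continuous equations, equipartition predicts <x^2> = kT/k = 1.0.
print(f"sampled <x^2> = {np.mean(np.square(samples[10_000:])):.3f}")
```

(A second sketch further down shows how the bias from discretizing time grows as the time step gets larger.)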

When Sivak and his colleagues used Langevin dynamics to model the behavior of molecular machines, they saw significant differences between what their exact theories predicted and what their simulations produced. They tried to come up with a physical picture of what it would take to produce these wrong answers.

“It was as if extra work were being done to push our molecules around,” Sivak says. “In the real world, this would be a driven physical process, but it existed only in the simulation, so we called it ‘shadow work.’ It took exactly the form of a nonequilibrium driving force.”

They first tested this insight with “toy” models having only a single degree of freedom, and found that when they ignored the shadow work, the calculations were systematically biased. But when they accounted for the shadow work, accurate calculations could be recovered.

“Next we looked at systems with hundreds or thousands of simple molecules,” says Sivak. Using models of water molecules in a box, they simulated the state of the system over time, starting from a given thermal energy but with no “pushing” from outside. “We wanted to know how far the water simulation would be pushed by the shadow work alone.”

The result confirmed that even in the absence of an explicit driving force, the finite-time-step Langevin dynamics simulation acted by itself as a driving nonequilibrium process. Systematic errors resulted from failing to separate this shadow work from the actual “protocol work” that they explicitly modeled in their simulations. For the first time, Sivak and his colleagues were able to quantify the magnitude of the deviations in various test systems.

Such simulation errors can be reduced in several ways, for example by dividing the evolution of the system into ever-finer time steps, because the shadow work is larger when the discrete time steps are larger. But doing so increases the computational expense.

The better approach is to use a correction factor that isolates the shadow work from the physically meaningful work, says Sivak. “We can apply results from our calculation in a meaningful way to characterize the error and correct for it, separating the physically realistic aspects of the simulation from the artifacts of the computer method.”
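Here’s a quick way to see that trade-off for yourself: run the same kind of toy Langevin simulation at several time steps and watch the equilibrium average drift away from the exact answer as the step grows. This is purely my own illustration of the time-step bias, not the paper’s correction method.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_square_x(dt, n_steps=200_000, k=1.0, gamma=1.0, kT=1.0):
    """Euler-Maruyama overdamped Langevin in U(x) = k*x^2/2; returns <x^2>."""
    x, total, burn = 0.0, 0.0, n_steps // 10
    sigma = np.sqrt(2.0 * kT * dt / gamma)
    for i in range(n_steps):
        x += -(k / gamma) * x * dt + sigma * rng.standard_normal()
        if i >= burn:
            total += x * x
    return total / (n_steps - burn)

# The exact continuous-time answer is kT/k = 1.0; the discrete-time bias
# (the footprint of the shadow work) shrinks as the time step shrinks.
for dt in (0.5, 0.1, 0.01):
    print(f"dt = {dt:5.2f}   <x^2> = {mean_square_x(dt):.3f}")
```

With these parameters the bias is roughly 30 per cent at dt = 0.5 but well under one per cent at dt = 0.01 — bought at fifty times as many force evaluations per unit of simulated time, which is exactly the expense the correction factor is meant to avoid.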

You can find out more in the Berkeley Lab news release or in the Jan. 3, 2013 news item on Nanowerk (hat tip), or you can read the paper,

“Using nonequilibrium fluctuation theorems to understand and correct errors in equilibrium and nonequilibrium discrete Langevin dynamics simulations,” by David A. Sivak, John D. Chodera, and Gavin E. Crooks, will appear in Physical Review X (http://prx.aps.org/) and is now available as an arXiv preprint at http://arxiv.org/abs/1107.2967.

This casts a new light on the SPAUN (Semantic Pointer Architecture Unified Network) project from Chris Eliasmith’s team at the University of Waterloo, which announced the most successful attempt yet (my Nov. 29, 2012 posting) to simulate a brain using virtual neurons. Given that Eliasmith’s team was probably not aware of this work from the Berkeley Lab, one imagines that once it has been integrated, SPAUN will be capable of even more extraordinary feats.

Synaptic electronics

There’s been a lot on this blog about the memristor, which is being developed at HP Labs, at the University of Michigan, and elsewhere, and significantly less about other approaches to creating nanodevices with neuromorphic properties being pursued by researchers in Japan and in the US. The Dec. 20, 2012 news item on ScienceDaily notes,

Researchers in Japan and the US propose a nanoionic device with a range of neuromorphic and electrical multifunctions that may allow the fabrication of on-demand configurable circuits, analog memories and digital-neural fused networks in one device architecture.

… Now Rui Yang, Kazuya Terabe and colleagues at the National Institute for Materials Science in Japan and the University of California, Los Angeles, in the US have developed two- and three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions.

The originating Dec. 20, 2012 news release from Japan’s International Center for Materials Nanoarchitectonics (MANA) draws a parallel between the device’s properties and neural behaviour, explains the ‘why’ of the process, and mentions what applications the researchers believe could be developed,

The researchers draw similarities between the device properties — volatile and non-volatile states and the current fading process following positive voltage pulses — with models for neural behaviour — that is, short- and long-term memory and forgetting processes. They explain the behaviour as the result of oxygen ions migrating within the device in response to the voltage sweeps. Accumulation of the oxygen ions at the electrode leads to Schottky-like potential barriers and the resulting changes in resistance and rectifying characteristics. The stable bipolar switching behaviour at the Pt/WO3-x interface is attributed to the formation of the electric conductive filament and oxygen absorbability of the Pt electrode.
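To make the short-term/long-term memory analogy concrete, here’s a deliberately crude toy model — my own illustration of volatile versus non-volatile conductance states, with made-up numbers, not the actual WO3-x device physics.

```python
import math

# Toy model: total conductance = a volatile part that decays between
# pulses (short-term memory and "forgetting") plus a non-volatile part
# that sets in only after repeated pulses (long-term memory). All
# values are arbitrary illustrative units.
TAU = 5.0        # decay time constant of the volatile state
THRESHOLD = 3    # pulses of "repetition history" needed for a lasting change

volatile, nonvolatile, pulses = 0.0, 0.0, 0
for t in range(20):
    if t < 8:                      # apply voltage pulses early on, then stop
        volatile += 1.0
        pulses += 1
        if pulses >= THRESHOLD:    # repeated stimulation consolidates
            nonvolatile += 0.2
    volatile *= math.exp(-1.0 / TAU)   # the volatile state fades regardless
    print(f"t={t:2d}  G_total = {volatile + nonvolatile:.2f}")
```

Run it and the conductance climbs while pulses arrive, then sags back — but not all the way — once they stop: the fading volatile component plays short-term memory, and the residue left behind plays long-term memory.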

As the researchers conclude, “These capabilities open a new avenue for circuits, analog memories, and artificially fused digital neural networks using on-demand programming by input pulse polarity, magnitude, and repetition history.”

For those who wish to delve more deeply, here’s the citation (from the ScienceDaily news item),

Rui Yang, Kazuya Terabe, Guangqiang Liu, Tohru Tsuruoka, Tsuyoshi Hasegawa, James K. Gimzewski, Masakazu Aono. On-Demand Nanodevice with Electrical and Neuromorphic Multifunction Realized by Local Ion Migration. ACS Nano, 2012; 6 (11): 9515 DOI: 10.1021/nn302510e

The news release does not state explicitly why this would be considered an on-demand device. The article is behind a paywall.

There was a recent attempt to mimic brain processing based not on nanoelectronics but on virtual neurons. A Canadian team at the University of Waterloo led by Chris Eliasmith made a sensation with SPAUN (Semantic Pointer Architecture Unified Network) in late Nov. 2012 (mentioned in my Nov. 29, 2012 posting).

University of Waterloo researchers use 2.5M (virtual) neurons to simulate a brain

I hinted at some related work at the University of Waterloo earlier this week in my Nov. 26, 2012 posting (Existential risk) about a proposed centre at the University of Cambridge, which would be tasked with examining possible risks associated with ‘ultra intelligent machines’. Today, Science (magazine) published an article [behind a paywall] about SPAUN (Semantic Pointer Architecture Unified Network) and its ability to do simple arithmetic and perform other tasks as well.

Ed Yong, writing for Nature magazine (Simulated brain scores top test marks, Nov. 29, 2012), offers this description,

Spaun sees a series of digits: 1 2 3; 5 6 7; 3 4 ?. Its neurons fire, and it calculates the next logical number in the sequence. It scrawls out a 5, in legible if messy writing.

This is an unremarkable feat for a human, but Spaun is actually a simulated brain. It contains 2.5 million virtual neurons — many fewer than the 86 billion in the average human head, but enough to recognize lists of numbers, do simple arithmetic and solve reasoning problems.
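To be clear about what’s being asked of it, here’s that pattern-completion task reduced to a few lines of ordinary Python — this captures the logic of the puzzle, not Spaun’s spiking-neuron mechanism for solving it.

```python
# The induction task Spaun is shown: 1 2 3; 5 6 7; 3 4 ?
examples = [[1, 2, 3], [5, 6, 7]]
partial = [3, 4]

# Infer the step between consecutive digits from the complete triples.
steps = {b - a for triple in examples for a, b in zip(triple, triple[1:])}
assert len(steps) == 1, "the examples should agree on a single rule"

print(partial[-1] + steps.pop())  # -> 5
```

The point of Spaun, of course, is that it arrives at the same answer through simulated neurons firing: visual processing on the way in, and a motor system that scrawls the digit on the way out.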

Here’s a video demonstration, from the University of Waterloo’s Nengo Neural Simulator home page,

The University of Waterloo’s Nov. 29, 2012 news release offers more technical detail,

… The model captures biological details of each neuron, including which neurotransmitters are used, how voltages are generated in the cell, and how they communicate. Spaun uses this network of neurons to process visual images in order to control an arm that draws Spaun’s answers to perceptual, cognitive and motor tasks. …

“This is the first model that begins to get at how our brains can perform a wide variety of tasks in a flexible manner—how the brain coordinates the flow of information between different areas to exhibit complex behaviour,” said Professor Chris Eliasmith, Director of the Centre for Theoretical Neuroscience at Waterloo. He is Canada Research Chair in Theoretical Neuroscience, and professor in Waterloo’s Department of Philosophy and Department of Systems Design Engineering.

Unlike other large brain models, Spaun can perform several tasks. Researchers can show patterns of digits and letters to the model’s eye, which it then processes, causing it to write its responses to any of eight tasks. And, just like the human brain, it can shift from task to task, recognizing an object one moment and memorizing a list of numbers the next. [emphasis mine] Because of its biological underpinnings, Spaun can also be used to understand how changes to the brain affect changes to behaviour.

“In related work, we have shown how the loss of neurons with aging leads to decreased performance on cognitive tests,” said Eliasmith. “More generally, we can test our hypotheses about how the brain works, resulting in a better understanding of the effects of drugs or damage to the brain.”

In addition, the model provides new insights into the sorts of algorithms that might be useful for improving machine intelligence. [emphasis mine] For instance, it suggests new methods for controlling the flow of information through a large system attempting to solve challenging cognitive tasks.
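Spaun is built with the Nengo neural simulator mentioned above. For a flavour of what model-building in Nengo looks like, here’s a minimal sketch — a toy two-population network written against the Nengo Python API as I understand it, nothing close to Spaun itself.

```python
import numpy as np
import nengo

# Toy network: a sine-wave input is represented by one population of
# spiking neurons, and a second population computes its square.
model = nengo.Network(label="toy network")
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)      # represents the signal
    b = nengo.Ensemble(n_neurons=100, dimensions=1)      # represents the square
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)    # connection weights compute x^2
    probe = nengo.Probe(b, synapse=0.01)                 # filtered readout

with nengo.Simulator(model) as sim:
    sim.run(1.0)

print(sim.data[probe][-5:])   # decoded estimate of sin(2*pi*t)^2 near t = 1 s
```

Scaling that idea up to 2.5 million neurons wired into visual, motor and working-memory systems is what makes Spaun remarkable.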

Laura Sanders’ Nov. 29, 2012 article for ScienceNews suggests that there is some controversy as to whether or not SPAUN does resemble a human brain,

… Henry Markram, who leads a different project to reconstruct the human brain called the Blue Brain, questions whether Spaun really captures human brain behavior. Because Spaun’s design ignores some important neural properties, it’s unlikely to reveal anything about the brain’s mechanics, says Markram, of the Swiss Federal Institute of Technology in Lausanne. “It is not a brain model.”

Personally, I have a little difficulty seeing lines of code as ever being able to truly simulate brain activity. I think moving to something simpler (using fewer neurons, as the Eliasmith team does) is a step in the right direction, but I’m still more interested in devices such as the memristor and the electrochemical atomic switch and their potential.

Blue Brain Project

Memristor and artificial synapses in my April 19, 2012 posting

Atomic or electrochemical atomic switches and neuromorphic engineering briefly mentioned (scroll 1/2 way down) in my Oct. 17, 2011 posting.

ETA Dec. 19, 2012: There was an AMA (ask me anything) session on Reddit with the SPAUN team in early December; if you’re interested, you can still access the questions and answers,

We are the computational neuroscientists behind the world’s largest functional brain model

Existential risk

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question this raises — what if our machines/creations become more intelligent than humans? — is at the heart of what has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge is proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Price, Martin Rees (Emeritus Professor of Cosmology and Astrophysics), and Jaan Tallinn (co-founder of Skype) are the driving forces behind this proposed new centre at Cambridge University. From the Cambridge Project for Existential Risk webpage,

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. …

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind.

Price and Tallinn co-wrote an Aug. 6, 2012 article about their concerns for the Australia-based website The Conversation,

We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”.

He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?

It appears Price, Rees, and Tallinn are not the only concerned parties, from the Nov. 25, 2012 research news piece on the Cambridge University website,

With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”

Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point.

According to the Huffington Post article by Hui, they expect to launch the centre next year (2013). In the meantime, for anyone who’s looking for more information about the ‘intelligence explosion’, or ‘singularity’ as it’s also known, there’s a Wikipedia essay on the topic. Also, you may want to stay tuned to this channel (blog), as I expect to have some news later this week about an artificial intelligence project based at the University of Waterloo (Ontario, Canada) and headed by Chris Eliasmith at the university’s Centre for Theoretical Neuroscience.