Tag Archives: artificial intelligence

University of Toronto, Ebola epidemic, and artificial intelligence applied to chemistry

It’s hard to tell much from the Nov. 5, 2014 University of Toronto news release by Michael Kennedy (also on EurekAlert but dated Nov. 10, 2014) about in silico drug testing focused on finding a treatment for Ebola,

The University of Toronto, Chematria and IBM are combining forces in a quest to find new treatments for the Ebola virus.

Using a virtual research technology invented by Chematria, a startup housed at U of T’s Impact Centre, the team will use software that learns and thinks like a human chemist to search for new medicines. Running on Canada’s most powerful supercomputer, the effort will simulate and analyze the effectiveness of millions of hypothetical drugs in just a matter of weeks.

“What we are attempting would have been considered science fiction, until now,” says Abraham Heifets (PhD), a U of T graduate and the chief executive officer of Chematria. “We are going to explore the possible effectiveness of millions of drugs, something that used to take decades of physical research and tens of millions of dollars, in mere days with our technology.”

The news release makes it all sound quite exciting,

Chematria’s technology is a virtual drug discovery platform based on the science of deep learning neural networks and has previously been used for research on malaria, multiple sclerosis, C. difficile, and leukemia. [emphases mine]

Much like the software used to design airplanes and computer chips in simulation, this new system can predict the possible effectiveness of new medicines, without costly and time-consuming physical synthesis and testing. [emphasis mine] The system is driven by a virtual brain that teaches itself by “studying” millions of datapoints about how drugs have worked in the past. With this vast knowledge, the software can apply the patterns it has learned to predict the effectiveness of hypothetical drugs, and suggest surprising uses for existing drugs, transforming the way medicines are discovered.

My understanding is that Chematria’s platform is not the only “virtual drug discovery platform based on the science of deep learning neural networks,” as the next paragraph acknowledges. In fact, there’s widespread interest in the medical research community, as evidenced by such projects as Seurat-1’s NOTOX* and others. Regarding the research on “malaria, multiple sclerosis, C. difficile, and leukemia,” more details would be welcome, e.g., what happened?

A Nov. 4, 2014 article for Mashable by Anita Li does offer a new detail about the technology,

Now, a team of Canadian researchers are hunting for new Ebola treatments, using “groundbreaking” artificial-intelligence technology that they claim can predict the effectiveness of new medicines 150 times faster than current methods.

By putting quotation marks around “groundbreaking,” Li suggests a little skepticism about the claim.

Here’s more from Li where she seems to have found some company literature,

Chematria describes its technology as a virtual drug-discovery platform that helps pharmaceutical companies “determine which molecules can become medicines.” Here’s how it works, according to the company:

The system is driven by a virtual brain, modeled on the human visual cortex, that teaches itself by “studying” millions of datapoints about how drugs have worked in the past. With this vast knowledge, Chematria’s brain can apply the patterns it perceives, to predict the effectiveness of hypothetical drugs, and suggest surprising uses for existing drugs, transforming the way medicines are discovered.
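Neither the news release nor the company literature gets more specific, but the broad recipe for this kind of deep-learning screening is well known. Here’s a minimal sketch, assuming a fingerprint-style molecular encoding and a small feed-forward network; the descriptor length, network shape, and training data below are my own stand-ins, not Chematria’s:

import numpy as np

# Illustrative-only sketch of deep-learning drug screening: molecules are
# encoded as fixed-length binary "fingerprints" of structural features, and
# a small neural network learns to map fingerprint -> activity from past
# experimental results. Every number here is an invented stand-in.

rng = np.random.default_rng(0)
N_BITS = 512     # fingerprint length (assumed)
N_KNOWN = 5000   # molecules with known activity (assumed)

X = rng.integers(0, 2, (N_KNOWN, N_BITS)).astype(float)
y = (X[:, :16].sum(axis=1) > 8).astype(float)   # toy "active" label

# One hidden layer, trained by plain gradient descent on cross-entropy.
W1 = rng.normal(0, 0.1, (N_BITS, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, 64); b2 = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    p = sigmoid(h @ W2 + b2)                 # predicted activity
    grad = (p - y) / N_KNOWN                 # d(loss)/d(logit)
    W2 -= 0.5 * (h.T @ grad); b2 -= 0.5 * grad.sum()
    dh = np.outer(grad, W2) * (1.0 - h**2)   # backprop through tanh
    W1 -= 0.5 * (X.T @ dh); b1 -= 0.5 * dh.sum(axis=0)

# "Virtual screening": score a pile of untested fingerprints by predicted
# activity instead of synthesizing and testing each one in a wet lab.
candidates = rng.integers(0, 2, (10_000, N_BITS)).astype(float)
scores = sigmoid(np.tanh(candidates @ W1 + b1) @ W2 + b2)
print("most promising candidates:", np.argsort(scores)[-5:])

In a real system the fingerprints would encode actual molecular substructures and the training set would come from experimental assay databases, but the key point survives the simplification: once trained, scoring millions of candidate molecules is just arithmetic, which is what makes the “millions of drugs in days” claim plausible on a supercomputer.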

I was not able to find a Chematria website or anything much more than this brief description on the University of Toronto website (from the Impact Centre’s Current Companies webpage),

Chematria makes software that helps pharmaceutical companies determine which molecules can become medicines. With Chematria’s proprietary approach to molecular docking simulations, pharmaceutical researchers can confidently predict potent molecules for novel biological targets, thereby enabling faster drug development for a fraction of the price of wet-lab experiments.

Chematria’s Ebola project is focused on drugs that are already available but could be put to a new use (from Li’s article),

In response to the outbreak, Chematria recently launched an Ebola project, using its algorithm to evaluate molecules that have already gone through clinical trials, and have proven to be safe. “That means we can expedite the process of getting the treatment to the people who need it,” Heifets said. “In a pandemic situation, you’re under serious time pressure.”

He cited Aspirin as an example of proven medicine that has more than one purpose: People take it for headaches, but it’s also helpful for heart disease. Similarly, a drug that’s already out there may also hold the cure for Ebola.

I recommend reading Li’s article in its entirety.

The University of Toronto news release provides more detail about the partners involved in this Ebola project,

… The unprecedented speed and scale of this investigation is enabled by the unique strengths of the three partners: Chematria is offering the core artificial intelligence technology that performs the drug research, U of T is contributing biological insights about Ebola that the system will use to search for new treatments and IBM is providing access to Canada’s fastest supercomputer, Blue Gene/Q.

“Our team is focusing on the mechanism Ebola uses to latch on to the cells it infects,” said Dr. Jeffrey Lee of the University of Toronto. “If we can interrupt that process with a new drug, it could prevent the virus from replicating, and potentially work against other viruses like Marburg and HIV that use the same mechanism.”

The initiative may also demonstrate an alternative approach to high-speed medical research. While giving drugs to patients will always require thorough clinical testing, zeroing in on the best drug candidates can take years using today’s most common methods. Critics say this slow and prohibitively expensive process is one of the key reasons that finding treatments for rare and emerging diseases is difficult.

“If we can find promising drug candidates for Ebola using computers alone,” said Heifets, “it will be a milestone for how we develop cures.”

I hope this effort, along with all the others being made around the world, proves helpful against Ebola. It’s good to see research into drugs (chemical formulations) that are familiar to the medical community and can be used for a different purpose than originally intended. ‘Repurposed’ drugs should be cheaper than new ones, and we already have data about their side effects.

As for the “milestone for how we develop cures,” this team’s work, along with all the international research on this front and on how we assess toxicity, should certainly make that milestone possible.

* Full disclosure: I came across Seurat-1’s NOTOX project when I attended (at Seurat-1’s expense) the 9th World Congress on Alternatives to Animal Testing held in Aug. 2014 in Prague.

Getting neuromorphic with a synaptic transistor

Scientists at Harvard University (Massachusetts, US) have devised a transistor that simulates the synapses found in brains. From a Nov. 2, 2013 news item on ScienceDaily,

It doesn’t take a Watson to realize that even the world’s best supercomputers are staggeringly inefficient and energy-intensive machines.

Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits; they continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.

Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.

Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. [emphasis mine]

There are two other projects that I know of (and I imagine there are others) focused on intelligence that’s embedded rather than algorithmic. My December 24, 2012 posting focused on a joint project (between Japan’s National Institute for Materials Science and the University of California, Los Angeles) where researchers developed a nanoionic device with a range of neuromorphic and electrical properties. There’s also the memristor, mentioned in my Feb. 26, 2013 posting (and many other times on this blog), which features a proposal to create an artificial brain.

Getting back to Harvard’s synaptic transistor (from the Nov. 1, 2013 Harvard University news release which originated the news item),

The human mind, for all its phenomenal computing power, runs on roughly 20 Watts of energy (less than a household light bulb), so it offers a natural model for engineers.

“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.

Here’s an image of synaptic transistors that the researchers from Harvard’s School of Engineering and Applied Science (SEAS) have supplied,

Several prototypes of the synaptic transistor are visible on this silicon chip. (Photo by Eliza Grinnell, SEAS Communications.)

The news release provides a description of the synaptic transistor and how it works,

While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum “axon” and “dendrite” terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the time delay in the electrical signal.

Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a magnitude of voltage which it applies to the ionic liquid, creating an electric field that either drives ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.

The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.

“This system changes its conductance in an analog way, continuously, as the composition of the material changes,” explains Shi. “It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just ‘on’ or ‘off.’”

The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.

Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.

“We exploit the extreme sensitivity of this material,” says Ramanathan [principal investigator and associate professor of materials science at Harvard SEAS]. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”

The nickelate system is also well positioned for seamless integration into existing silicon-based systems.

“In this paper, we demonstrate high-temperature operation, but the beauty of this type of a device is that the ‘learning’ behavior is more or less temperature insensitive, and that’s a big advantage,” says Ramanathan. “We can operate this anywhere from about room temperature up to at least 160 degrees Celsius.”

For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.

“In our proof-of-concept device, the time constant is really set by our experimental geometry,” says Ramanathan. “In other words, to really make a super-fast device, all you’d have to do is confine the liquid and position the gate electrode closer to it.”

In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this “ultimate fluidic transistor.”
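The paper is the place to go for the real device physics, but as a rough mental model of the behavior described above, here’s a toy numerical sketch; the conductance bounds, timing window, and step size are all invented for illustration and are not taken from the paper:

import numpy as np

# Toy model of the synaptic transistor described above: the channel
# conductance G is analog and non-volatile, and each pair of "axon" and
# "dendrite" pulses nudges it by an amount set by their time delay
# (shorter delay -> bigger change), loosely mimicking synaptic
# plasticity. The real device's oxygen-ion physics is far richer.

G_MIN, G_MAX = 0.1, 1.0   # arbitrary conductance bounds (assumed units)
TAU = 5e-3                # 5 ms timing window (assumed)

def update_conductance(G, delay_s):
    """Strengthen the connection, weighted by exp(-delay/tau)."""
    dG = 0.05 * np.exp(-abs(delay_s) / TAU)
    return float(np.clip(G + dG, G_MIN, G_MAX))

G = 0.2   # starts weak; the state persists between events (non-volatile)
for delay in (1e-3, 1e-3, 2e-3, 10e-3):    # four pulse pairs
    G = update_conductance(G, delay)
    print(f"delay {delay * 1e3:4.1f} ms -> G = {G:.3f}")
# Closely spaced pulses strengthen the connection quickly; widely spaced
# ones barely move it. The state is a continuum, not a 1 or a 0.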

Here’s a link to and a citation for the researchers’ paper,

A correlated nickelate synaptic transistor by Jian Shi, Sieu D. Ha, You Zhou, Frank Schoofs, & Shriram Ramanathan. Nature Communications 4, Article number: 2676. doi: 10.1038/ncomms3676. Published 31 October 2013.

This article is behind a paywall.

Brain-to-brain communication, organic computers, and BAM (brain activity map), the connectome

Miguel Nicolelis, a professor at Duke University, has been making international headlines lately with two brain projects. The first, about implanting a brain chip that allows rats to perceive infrared light, was mentioned in my Feb. 15, 2013 posting. The latest project is a brain-to-brain (rats) communication project, as per a Feb. 28, 2013 news release on *EurekAlert,

Researchers have electronically linked the brains of pairs of rats for the first time, enabling them to communicate directly to solve simple behavioral puzzles. A further test of this work successfully linked the brains of two animals thousands of miles apart—one in Durham, N.C., and one in Natal, Brazil.

The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an “organic computer,” which could allow sharing of motor and sensory information among groups of animals. The study was published Feb. 28, 2013, in the journal Scientific Reports.

“Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought,” said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. “In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?'”

Ben Schiller in a Mar. 1, 2013 article for Fast Company describes both the latest experiment and the work leading up to it,

First, two rats were trained to press a lever when a light went on in their cage. Press the right lever, and they would get a reward–a sip of water. The animals were then split in two: one cage had a lever with a light, while another had a lever without a light. When the first rat pressed the lever, the researchers sent electrical activity from its brain to the second rat. It pressed the right lever 70% of the time (more than half).

In another experiment, the rats seemed to collaborate. When the second rat didn’t push the right lever, the first rat was denied a drink. That seemed to encourage the first to improve its signals, raising the second rat’s lever-pushing success rate.

Finally, to show that brain-communication would work at a distance, the researchers put one rat in a cage in North Carolina, and another in Natal, Brazil. Despite noise on the Internet connection, the brain-link worked just as well–the rate at which the second rat pushed the lever was similar to the experiment conducted solely in the U.S.

The Duke University Feb. 28, 2013 news release, which originated the EurekAlert news release, provides more specific details about the experiments and the rats’ training,

To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals’ brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.

One of the two rodents was designated as the “encoder” animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the “decoder” animal.

The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.

The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable based on success rates of sending signals directly to the decoder rat’s brain.

Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. The result of this peculiar contingency, said Nicolelis, led to the establishment of a “behavioral collaboration” between the pair of rats.

“We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right,” Nicolelis said. “The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward.”

In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow or wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.

The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.

To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.

“So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate,” said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. “This tells us that it could be possible to create a workable network of animal brains distributed in many different locations.”
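Stripped of the neuroscience, the information flow in these experiments is a noisy one-bit channel. Here’s a toy simulation (my own abstraction; the trial counts are invented, and only the ~70 percent figure comes from the coverage) showing why that hit rate on a two-lever task is meaningful against the 50 percent chance baseline:

import numpy as np

# Model the encoder-to-decoder link as a noisy one-bit channel and ask
# how often pure guessing would match the reported performance. An
# abstraction of the experiment's logic, not of the neural recordings.

rng = np.random.default_rng(42)
P_DECODE = 0.70    # per-trial decoding accuracy (reported figure)
N_TRIALS = 1000    # invented trial count

cues = rng.integers(0, 2, N_TRIALS)          # which lever is correct
flips = rng.random(N_TRIALS) < P_DECODE      # did the cue get through?
received = np.where(flips, cues, 1 - cues)   # noisy transmission
success_rate = (received == cues).mean()

# Chance baseline: 10,000 runs of pure guessing on the same cues.
guesses = rng.integers(0, 2, (10_000, N_TRIALS))
chance_rates = (guesses == cues).mean(axis=1)

print(f"decoder success: {success_rate:.1%}")   # ~70%
print(f"guessing does as well in "
      f"{(chance_rates >= success_rate).mean():.2%} of runs")
# Guessing hovers around 50% +/- ~1.6%, so ~70% over many trials is
# far outside anything chance produces.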

Will Oremus in his Feb. 28, 2013 article for Slate seems a little less buoyant about the implications of this work,

Nicolelis believes this opens the possibility of building an “organic computer” that links the brains of multiple animals into a single central nervous system, which he calls a “brain-net.” Are you a little creeped out yet? In a statement, Nicolelis adds:

We cannot even predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves.

That sounds far-fetched. But Nicolelis’ lab is developing quite the track record of “taking science fiction and turning it into science,” says Ron Frostig, a neurobiologist at UC-Irvine who was not involved in the rat study. “He’s the most imaginative neuroscientist right now.” (Frostig made it clear he meant this as a compliment, though skeptics might interpret the word less charitably.)

The most extensive coverage I’ve given Nicolelis and his work (including the Walk Again project) was in a March 16, 2012 post titled, Monkeys, mind control, robots, prosthetics, and the 2014 World Cup (soccer/football), although there are other mentions, including in this Oct. 6, 2011 posting titled, Advertising for the 21st Century: B-Reel, ‘storytelling’, and mind control. By the way, Nicolelis hopes to have a paraplegic individual (using technology Nicolelis is developing for the Walk Again project) kick the opening ball of the 2014 World Cup games in Brazil.

While there’s much excitement about Nicolelis and his work, there are other ‘brain’ projects being developed in the US including the Brain Activity Map (BAM), which James Lewis notes in his Mar. 1, 2013 posting on the Foresight Institute blog,

A proposal alluded to by President Obama in his State of the Union address [Feb. 2013] to construct a dynamic “functional connectome” Brain Activity Map (BAM) would leverage current progress in neuroscience, synthetic biology, and nanotechnology to develop a map of each firing of every neuron in the human brain—a hundred billion neurons sampled on millisecond time scales. Although not the intended goal of this effort, a project on this scale, if it is funded, should also indirectly advance efforts to develop artificial intelligence and atomically precise manufacturing.

As Lewis notes in his posting, there’s an excellent description of BAM and other brain projects, as well as a discussion about how these ideas are linked (not necessarily by individuals but by the overall direction of work being done in many labs and in many countries across the globe) in Robert Blum’s Feb. (??), 2013 posting titled, BAM: Brain Activity Map Every Spike from Every Neuron, on his eponymous blog. Blum also offers an extensive set of links to the reports and stories about BAM. From Blum’s posting,

The essence of the BAM proposal is to create the technology over the coming decade to be able to record every spike from every neuron in the brain of a behaving organism. While this notion seems insanely ambitious, coming from a group of top investigators, the paper deserves scrutiny. At minimum it shows what might be achieved in the future by the combination of nanotechnology and neuroscience.

In 2013, as I write this, two European Flagship projects have just received funding for one billion euro each (1.3 billion dollars each). The Human Brain Project is an outgrowth of the Blue Brain Project, directed by Prof. Henry Markram in Lausanne, which seeks to create a detailed simulation of the human brain. The Graphene Flagship, based in Sweden, will explore uses of graphene for, among others, creation of nanotech-based supercomputers. The potential synergy between these projects is a source of great optimism.

The goal of the BAM Project is to elaborate the functional connectome of a live organism: that is, not only the static (axo-dendritic) connections but how they function in real-time as thinking and action unfold.

The European Flagship Human Brain Project will create the computational capability to simulate large, realistic neural networks. But to compare the model with reality, a real-time, functional, brain-wide connectome must also be created. Nanotech and neuroscience are mature enough to justify funding this proposal.
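Blum’s phrase “every spike from every neuron” invites a quick scale check. Here’s my own back-of-the-envelope arithmetic, not a figure from the BAM proposal:

# Back-of-the-envelope scale estimate for recording "every spike from
# every neuron" -- my own illustrative arithmetic, not a figure from
# the BAM proposal itself.

NEURONS = 86e9            # ~86 billion neurons in a human brain
SAMPLE_RATE_HZ = 1_000    # "millisecond time scales" ~ 1 kHz sampling
BITS_PER_SAMPLE = 1       # 1 bit per neuron per sample: spike / no spike

bytes_per_second = NEURONS * SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 8
print(f"{bytes_per_second / 1e12:.1f} TB/s")             # ~10.8 TB/s raw
print(f"{bytes_per_second * 86400 / 1e18:.2f} EB/day")   # ~0.93 exabytes/day
# Real spike trains are sparse, so event-based encoding would cut this
# dramatically, but the raw numbers show the scale of the challenge.

Even at a single bit per neuron per millisecond, the raw stream is roughly ten terabytes a second, which helps explain why the proposal leans so heavily on nanotechnology for recording and on new storage and analysis infrastructure.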

I highly recommend reading Blum’s technical description of neural spikes; understanding that concept, or any other in his post, doesn’t require an advanced degree. Note: Blum holds a number of degrees and diplomas, including an MD (neuroscience) from the University of California at San Francisco and a PhD in computer science and biostatistics from California’s Stanford University.

The Human Brain Project has been mentioned here previously. The most recent mention is in a Jan. 28, 2013 posting about its newly gained status as one of two European Flagship initiatives (the other is the Graphene initiative), each meriting one billion euros of research funding over 10 years. Today, however, is the first time I’ve encountered the BAM project and I’m fascinated. Luckily, John Markoff’s Feb. 17, 2013 article for The New York Times provides some insight into this US initiative (Note: I have removed some links),

The Obama administration is planning a decade-long scientific effort to examine the workings of the human brain and build a comprehensive map of its activity, seeking to do for the brain what the Human Genome Project did for genetics.

The project, which the administration has been looking to unveil as early as March, will include federal agencies, private foundations and teams of neuroscientists and nanoscientists in a concerted effort to advance the knowledge of the brain’s billions of neurons and gain greater insights into perception, actions and, ultimately, consciousness.

Moreover, the project holds the potential of paving the way for advances in artificial intelligence.

What I find particularly interesting is the reference back to the Human Genome Project, which may explain why BAM is also referred to as a ‘connectome’.

ETA Mar.6.13: I have found a Human Connectome Project Mar. 6, 2013 news release on EurekAlert, which leaves me confused. This does not seem to be related to BAM, although the articles about BAM did reference a ‘connectome’. At this point, I’m guessing that BAM and the ‘Human Connectome Project’ are two related but different projects and the reference to a ‘connectome’ in the BAM material is meant generically.  I previously mentioned the Human Connectome Project panel discussion held at the AAAS (American Association for the Advancement of Science) 2013 meeting in my Feb. 7, 2013 posting.

* Corrected EurkAlert to EurekAlert on June 14, 2013.

Existential risk

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘What if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Price, Martin Rees (Emeritus Professor of Cosmology and Astrophysics), and Jaan Tallinn (co-founder of Skype) are the driving forces behind this proposed new centre at Cambridge University. From the Cambridge Project for Existential Risk webpage,

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. …

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind.

Price and Tallinn co-wrote an Aug. 6, 2012 article for the Australia-based website The Conversation about their concerns,

We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”.

He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?

It appears Price, Rees, and Tallinn are not the only concerned parties, from the Nov. 25, 2012 research news piece on the Cambridge University website,

With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”

Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point.

According to the Huffington Post article by Hui, they expect to launch the centre next year (2013). In the meantime, for anyone who’s looking for more information about the ‘intelligence explosion’ or ‘singularity’ as it’s also known, there’s a Wikipedia essay on the topic. Also, you may want to stay tuned to this channel (blog) as I expect to have some news about an artificial intelligence project based at the University of Waterloo (Ontario, Canada) and headed by Chris Eliasmith at the university’s Centre for Theoretical Neuroscience, later this week.

Study AI at Stanford online and for free

I exaggerated a little with the headline. In fact, you’ll be studying the same materials, getting the same lectures, answering the same quizzes, and doing the same assignments as first-year students in the introductory course on artificial intelligence taught by Stanford professors Sebastian Thrun and Peter Norvig, but you won’t be attending officially as a Stanford student.

I first came across this item at the Robot Shop blog. From there I went to the AI class website to find more information. From the AI class home page,

The class runs from October 10 through December 16, 2011. While this class is being offered online, it is also taught at Stanford University, where it continues to be a popular intro-level class on AI. For the online version, the instructors aim to offer identical materials, assignments, and exams, and to use the same grading criteria. Both instructors will be available for online discussions.

A high speed internet connection is recommended as most of the course content will be video based. Access to a copy of Artificial Intelligence: A Modern Approach is also suggested. Peter Norvig is co-author of this text and is donating all royalties to charity.

Here’s a little more about the two instructors,

Sebastian Thrun is a Research Professor of Computer Science at Stanford University, a Google Fellow, a member of the National Academy of Engineering and the German Academy of Sciences. Thrun is best known for his research in robotics and machine learning.

Fast Company Magazine selected him as the fifth most creative person in business, the UK Telegraph included him in their list of 100 living geniuses, and Popular Science included him in their list of Brilliant Ten. His self-driving car was named one of the 50 best inventions of 2010 by Time Magazine, and Scientific American named Thrun one of the 50 business and technology leaders. …

Peter Norvig is Director of Research at Google Inc. He is also a Fellow of the American Association for Artificial Intelligence and the Association for Computing Machinery.

Norvig co-authored Artificial Intelligence: A Modern Approach, which is the world’s most popular text book on Artificial Intelligence. Artificial Intelligence: A Modern Approach is used in over 1,200 universities in over 100 countries, and it has been translated into 12 languages. Prior to joining Google, Norvig was the head of the Computational Sciences Division at NASA Ames Research Center, making him NASA’s senior computer scientist. …

Here’s a video about the course,

ETA Aug. 17, 2011: According to an Aug. 17, 2011 news item on physorg.com, the course has attracted 58,000 registrants so far,

Demand has been enormous. Already more than 58,000 people have expressed interest in the artificial intelligence course taught by Sebastian Thrun, a Stanford research professor of computer science and a Google Fellow, and Google Director of Research Peter Norvig.

In fact, there are two other free online courses also being offered: Machine Learning and Introduction to Databases.

Sept. 19, 2012 Note: I have removed what appeared to be some sort of excerpt which had been left blank.

Science in the British election and CASE; memristor and artificial intelligence; The Secret in Their Eyes, an allegory for post-Junta Argentina?

I’ve been meaning to mention the upcoming (May 6, 2010) British election for the last while as I’ve seen notices of party manifestos that mention science (!) but it was one of Dave Bruggeman’s postings on Pasco Fhronesis that tipped the balance for me. From his posting,

CaSE [Campaign for Science and Engineering] sent each party leader a letter asking for their positions with respect to science and technology issues. The Conservatives and the Liberal Democrats have responded so far (while the Conservative leader kept mum on science before the campaign, now it’s the Prime Minister who has yet to speak on it). Of the two letters, the Liberal Democrats have offered more detailed proposals than the Conservatives, and the Liberal Democrats have also addressed issues of specific interest to the U.K. scientific community to a much greater degree.

(These letters are in addition to the party manifestos which each mention science.) I strongly recommend the post as Bruggeman goes on to give a more detailed analysis and offer a few speculations.

The Liberal Democrats offer a more comprehensive statement but they are a third party that gained an unexpected burst of support after the first national debate. As anyone knows, the second debate (to be held around noon PT today), or something else for that matter, could change all that.

I did look at the CaSE site, which provides an impressive portfolio of materials related to this election on its home page. Before getting to the organization’s mission, you might find its history instructive,

CaSE was launched in March 2005, evolving out of its predecessor Save British Science [SBS]. …

SBS was founded in 1986, following the placement of an advertisement in The Times newspaper. The idea came from a small group of university scientists brought together by a common concern about the difficulties they were facing in obtaining the funds for first class research.

The original plan was simply to buy a half-page advertisement in The Times to make the point, and the request for funds was spread via friends and colleagues in other universities. The response was overwhelming. Within a few weeks about 1500 contributors, including over 100 Fellows of the Royal Society and most of the British Nobel prize winners, had sent more than twice the sum needed. The advertisement appeared on 13th January 1986, and the balance of the money raised was used to found the Society, taking as its name the title of the advertisement.

Now for their mission statement,

CaSE is now an established feature of the science and technology policy scene, supported among universities and the learned societies, and able to attract media attention. We are accepted by Government as an organisation able to speak for a wide section of the science and engineering community in a constructive but also critical and forceful manner. We are free to speak without the restraints felt by learned societies and similar bodies, and it is good for Government to know someone is watching closely.

I especially like the bit where they feel it’s “good for Government” to know someone is watching.

The folks at the Canadian Science Policy Centre (CSPC) are also providing information about the British election and science. As you’d expect it’s not nearly as comprehensive but, if you’re interested, you can check out the CSPC home page.

I haven’t had a chance to read the manifestos and other materials closely enough to be able to offer much comment. It is refreshing to see the issue mentioned by all the parties during the election, as opposed to having science dismissed as a ’boutique issue’, as an assistant to my local (Canadian) Member of Parliament described it to me.

Memristors and artificial intelligence

The memristor story has ‘legs’, as they say. This morning I found an in-depth story by Michael Berger on Nanowerk titled, Nanotechnology’s Road to Artificial Brains, where he interviews Dr. Wei Lu about his work with memristors and neural synapses (mentioned previously on this blog here). Coincidentally, I received a comment yesterday from Blaise Mouttet about an article he’d posted on Google in September 2009 titled, Memistors, Memristors, and the Rise of Strong Artificial Intelligence.

Berger’s story focuses on a specific piece of research and possible future applications. From the Nanowerk story,

If you think that building an artificial human brain is science fiction, you are probably right – for now. But don’t think for a moment that researchers are not working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.

One of the key components of any neuromorphic effort is the design of artificial synapses. The human brain contains vastly more synapses than neurons – by a factor of about 10,000 – and therefore it is necessary to develop a nanoscale, low power, synapse-like device if scientists want to scale neuromorphic circuits towards the human brain level.

Berger goes on to explain how Lu’s work with memristors relates to this larger enterprise which is being pursued by many scientists around the world.

By contrast, Mouttet offers an historical context for the work on memristors, along with a precise technical explanation of why it is applicable to work in artificial intelligence. From Mouttet’s essay,

… memristive systems integrate data storage and data processing capabilities in a single device which offers the potential to more closely emulate the capabilities of biological intelligence.

If you are interested in exploring further, I suggest starting with Mouttet’s article first as it lays the groundwork for better understanding memristors and also Berger’s story about artificial neural synapses.
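For readers who want something concrete to play with, here’s a minimal numerical sketch of the linear-drift memristor model popularized by the 2008 HP Labs paper; it’s a textbook toy model, not Dr. Lu’s device, but it shows the store-and-process property Mouttet describes: the resistance depends on the history of the charge that has flowed through the device.

import numpy as np

# Linear-drift memristor toy model (after the 2008 HP Labs paper): the
# memristance M depends on a state variable w (the doped-region width),
# and w drifts with the current -- so the device "remembers" the charge
# that has passed through it. Parameter values are the commonly cited
# textbook ones, not measurements of any particular device.

R_ON, R_OFF = 100.0, 16e3   # fully doped / undoped resistance (ohms)
D = 10e-9                   # film thickness (m)
MU = 1e-14                  # dopant mobility (m^2 s^-1 V^-1)

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
v = np.sin(2 * np.pi * 2 * t)   # slow 2 Hz sinusoidal drive, 1 V amplitude

w = 0.1 * D                     # initial state: mostly undoped (high R)
current = np.zeros_like(t)
for k, vk in enumerate(v):
    M = R_ON * (w / D) + R_OFF * (1.0 - w / D)   # memristance M(w)
    i = vk / M
    current[k] = i
    w += MU * (R_ON / D) * i * dt                # state drifts with charge
    w = min(max(w, 0.0), D)                      # clamp to physical bounds

# Plot current vs. voltage and you get the pinched hysteresis loop that
# is the memristor's signature: same voltage, different current,
# depending on the device's history.

That history dependence, scaled down to nanoscale devices and packed by the billions, is what makes memristors attractive as artificial synapses.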

The Secret in Their Eyes (movie review)

I woke up at 6 am the other morning thinking about a movie I saw this past Sunday (April 18, 2010). That doesn’t often happen to me, especially as I get more jaded with time, but something about ‘The Secret in Their Eyes‘, the Argentinean movie that won this year’s Oscar for Best Foreign Language Film, woke me up.

Before going further, a précis of the story: a retired man (in his late 50s?) is trying to write a novel based on a rape/homicide case that he investigated in the mid-1970s. He’s haunted by it and spends much of the movie calling back memories of both the case and a love he tried to bury. Writing his ‘novel’ compels him to reinvestigate the case (he was an investigator for the judge) and reestablish contact with the victim’s grief-stricken husband and with the woman he loved, who was his boss (the judge) and who came from a more prestigious social class.

The movie offers some comedy although it can mostly be described as a thriller, a procedural, and a love story. It can also be seen as an allegory. The victim represents Argentina as a country. The criminal’s treatment (he gets rewarded, initially) represents how the military junta controlled Argentina after Juan Peron’s death in 1974. It seemed to me that much of this movie was an investigation of how people cope and recover (or don’t) from a hugely traumatic experience.

I don’t know much about Argentina and I have no Spanish language skills (other than recognizing an occasional word when it sounds like a French one). Consequently, this history is fairly sketchy and derived from secondary and tertiary sources. In the 1950s, Juan Peron (a former member of the military) led a very repressive regime which was eventually pushed out of office. By the 1970s, he was asked to return, which he did. He died there in 1974 and sometime after, a military junta took control of the government. Amongst other measures, they kidnapped thousands of people (usually young and often students, teachers [the victim in the movie is a teacher], political activists/enemies, and countless others) and ‘disappeared’ them.

Much of the population tried to ignore or hide from what was going on. A documentary released in the US in 1985, Las Madres de la Plaza de Mayo, details the story of a group of middle-class women who are moved to protest, after years of trying to endure, when their own children are ‘disappeared’.

In the movie we see what happens when bullies take over control. The criminal gets rewarded, the investigator/writer is sent away for protection after a colleague becomes collateral damage, the judge’s family name protects her, and the grieving husband has to find his own way to deal with the situation.

The movie offers both a gothic twist towards the end and a very moving perspective on how one deals with the guilt for one’s complicity and for one’s survival.

ETA (April 27, 2010): One final insight: the movie suggests that art/creative endeavours such as writing a novel (or making a movie?) can be a means for confession, redemption, and/or healing past wounds.

I think what makes the movie so good is the number of readings that are possible. You can take a look at some of what other reviewers had to say: Katherine Monk at the Vancouver Sun, Curtis Woloschuk at the Westender, and Ken Eisner at the Georgia Straight.

Kudos to the director and screenwriter, Juan José Campanella, and to the leads: Ricardo Darín (investigator/writer), Soledad Villamil (judge), Pablo Rago (husband), Javier Godino (criminal), Guillermo Francella (colleague who becomes collateral damage), and all of the other actors in the company. Even the smallest role was beautifully realized.

One final thing: whoever translated and wrote the subtitles should get an award. I don’t know how the person did it but the use of language is brilliant. I’ve never before seen subtitles that managed to convey the flavour of the verbal exchanges taking place on screen.

I liked the movie, eh?

Public opinion doesn’t shake easily; Wilson talk on Artificial Intelligence

Over on the Framing Science blog, Matthew C. Nisbet has posted about the impact that ClimateGate has not had on public opinion about climate change. From the post,

The full report [by Jon Krosnick, professor at Stanford University, on the most recent public opinion poll about climate change and ClimateGate] should be read, but below I feature several key conclusions. Despite alarm over the presumed impact of ClimateGate, Krosnick’s analysis reveals very little influence for this event. More research is likely to come on this issue and this is just the first systematic analysis to be released.

Yet there is an even more interesting question emerging here than the impact of ClimateGate on public opinion: If communication researchers have difficulty discerning a meaningful impact for ClimateGate, why do so many scientists and advocates continue to misread public opinion on climate change and to misunderstand the influence of the news media? As I argue below, an additional object of study in this case should be the factors shaping the perceptions of scientists and advocates.

Krosnick’s analysis estimates that the percentage of Americans who believe in global warming has only dropped 5% since 2008 and that ClimateGate has had no meaningful impact on trust in climate scientists, which stands at 70% (essentially the same as the 68% level in 2008).

A 5% drop isn’t to be sneezed at but, put into perspective, it is predictable and, assuming these are ‘good’ figures, it means there has been no appreciable short-term impact. Makes sense, doesn’t it? After all, most people don’t change their opinions that easily. Oh, they might have a crisis of confidence or a momentary hysterical response (I confess) but most of our opinions about important issues tend to persist over time and in the face of contradictory evidence.

Nisbet’s post makes reference to some other work, this time on scientists’ ideologies (liberal or conservative [not the Canadian political parties]) done by the Pew Research Center and released in July 2009. (Nisbet’s comments on ideology and scientists here and the Pew Research Center study here) Intriguingly, there’s a larger percentage of scientists (50%) self-identified as liberal than members of the general public (20%).

According to work by Elizabeth Corley (Arizona State University), published shortly after and mentioned on this blog here in a comment about the public’s focus on the benefits of nanotechnology while scientists focus on risks and economic value, this difference in focus may have something to do with ideology. From the news release,

Decision-makers often rely on the input of scientists when setting policies on nanotechnology because of the high degree of scientific uncertainty – and the lack of data – about its risks, Corley says.

“This difference in the way nanoscientists and the public think about regulations is important for policymakers (to take into consideration) if they are planning to include both groups in the policymaking process for nanotechnology,” says Corley.

The study also reveals an interesting divide within the group of nanoscientists. Economically conservative scientists were less likely to support regulations, while economically liberal scientists were more likely to do so.

This suggests that a more nuanced approach to measuring public perception may be emerging, despite the rather disappointing meta-analysis by Dr. Terre Satterfield of public perceptions about nanotechnology benefits and risks (mentioned on this blog here).

On a completely other note, I recently attended a lecture/presentation about artificial intelligence circa the early 1960s, titled “Extravagance of affect: How to build an artificial mind,” given by Elizabeth Wilson, professor of Women’s Studies at Emory University (Atlanta, Georgia, US), at Green College at the University of British Columbia. I’m not sure who this lecture was aimed at. While I was deeply thankful for her detailed explanations of basic concepts, presumably people in the field of Women’s Studies wouldn’t have needed so much explanation. Conversely, her presentation had some gaps where she jumped over things, which you can only do if your audience is well versed in the topic.

I haven’t seen much about emotions and artificial intelligence prior to this talk so maybe Wilson is forging into new territory and over time will get better at presenting her material to audiences who are not familiar with her specialty. In the meantime, I’m not sure what to make of her work.

Later this week, I’m hoping to be publishing an interview with Peter Julian the NDP member of Parliament (Canada) who recently tabled a member’s bill on nanotechnology.