Tag Archives: artificial intelligence

Memristor, memristor, you are popular

Regular readers know I have a long-standing interest in memristors and artificial brains. This post covers three memristor-related pieces of research published in the last month or so.

First, there’s some research into nano memory at RMIT University, Australia, and the University of California at Santa Barbara (UC Santa Barbara). From a May 12, 2015 news item on ScienceDaily,

RMIT University researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell.

Researchers at the MicroNano Research Facility (MNRF) have built one of the world’s first electronic multi-state memory cells, which mirrors the brain’s ability to simultaneously process and store multiple strands of information.

The development brings them closer to imitating key electronic aspects of the human brain — a vital step towards creating a bionic brain — which could help unlock successful treatments for common neurological conditions such as Alzheimer’s and Parkinson’s diseases.

A May 11, 2015 RMIT University news release, which originated the news item, reveals more about the researchers’ excitement and about the research,

“This is the closest we have come to creating a brain-like system with memory that learns and stores analog information and is quick at retrieving this stored information,” Dr Sharath said.

“The human brain is an extremely complex analog computer… its evolution is based on its previous experiences, and up until now this functionality has not been able to be adequately reproduced with digital technology.”

The ability to create highly dense and ultra-fast analog memory cells paves the way for imitating highly sophisticated biological neural networks, he said.

The research builds on RMIT’s previous discovery where ultra-fast nano-scale memories were developed using a functional oxide material in the form of an ultra-thin film – 10,000 times thinner than a human hair.

Dr Hussein Nili, lead author of the study, said: “This new discovery is significant as it allows the multi-state cell to store and process information in the very same way that the brain does.

“Think of an old camera which could only take pictures in black and white. The same analogy applies here, rather than just black and white memories we now have memories in full color with shade, light and texture, it is a major step.”

While these new devices are able to store much more information than conventional digital memories (which store just 0s and 1s), it is their brain-like ability to remember and retain previous information that is exciting.

“We have now introduced controlled faults or defects in the oxide material along with the addition of metallic atoms, which unleashes the full potential of the ‘memristive’ effect – where the memory element’s behaviour is dependent on its past experiences,” Dr Nili said.

Nano-scale memories are precursors to the storage components of the complex artificial intelligence network needed to develop a bionic brain.

Dr Nili said the research had myriad practical applications including the potential for scientists to replicate the human brain outside of the body.

“If you could replicate a brain outside the body, it would minimise ethical issues involved in treating and experimenting on the brain which can lead to better understanding of neurological conditions,” Dr Nili said.

The research, supported by the Australian Research Council, was conducted in collaboration with the University of California Santa Barbara.
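To make the ‘full colour versus black and white’ analogy a little more concrete, here is a rough, hypothetical sketch (in Python) of what a multi-state, history-dependent memory cell looks like in the abstract: an internal state variable is nudged by each programming pulse, the resistance varies continuously with that state, and a read-out quantizes it into one of several levels. The class, parameters, and numbers are my own inventions for illustration; this is not the amorphous SrTiO3 device physics reported in the paper below.

```python
# Toy model of a multi-state, history-dependent ("memristive") memory cell.
# Illustrative only; not the RMIT/UC Santa Barbara device physics.

class MultiStateCell:
    def __init__(self, r_on=1e3, r_off=1e6, levels=8):
        self.r_on, self.r_off, self.levels = r_on, r_off, levels
        self.x = 0.0  # internal state in [0, 1], set by the cell's past history

    def apply_pulse(self, voltage, duration=1e-6, rate=1e5):
        """Each programming pulse nudges the internal state; polarity sets direction."""
        self.x = min(1.0, max(0.0, self.x + rate * voltage * duration))

    def resistance(self):
        """Resistance varies continuously between r_on and r_off (analog memory)."""
        return self.r_on + (1.0 - self.x) * (self.r_off - self.r_on)

    def read_level(self):
        """Quantize the analog state into one of several levels
        ('full colour' shades rather than the 0/1 of a digital cell)."""
        return round(self.x * (self.levels - 1))

cell = MultiStateCell()
for _ in range(3):
    cell.apply_pulse(+2.0)   # three identical write pulses leave the cell partway up its range
print(cell.read_level(), f"{cell.resistance():.0f} ohms")
```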

Here’s a link to and a citation for this memristive nano device,

Donor-Induced Performance Tuning of Amorphous SrTiO3 Memristive Nanodevices: Multistate Resistive Switching and Mechanical Tunability by Hussein Nili, Sumeet Walia, Ahmad Esmaielzadeh Kandjani, Rajesh Ramanathan, Philipp Gutruf, Taimur Ahmed, Sivacarendran Balendhran, Vipul Bansal, Dmitri B. Strukov, Omid Kavehei, Madhu Bhaskaran, and Sharath Sriram. Advanced Functional Materials, DOI: 10.1002/adfm.201501019. Article first published online: 14 APR 2015


This paper is behind a paywall.

The second published piece of memristor-related research comes from a UC Santa Barbara and Stony Brook University (New York state) team but is being publicized by UC Santa Barbara. From a May 11, 2015 news item on Nanowerk (Note: A link has been removed),

In what marks a significant step forward for artificial intelligence, researchers at UC Santa Barbara have demonstrated the functionality of a simple artificial neural circuit (Nature, “Training and operation of an integrated neuromorphic network based on metal-oxide memristors”). For the first time, a circuit of about 100 artificial synapses was proved to perform a simple version of a typical human task: image classification.

A May 11, 2015 UC Santa Barbara news release (also on EurekAlert) by Sonia Fernandez, which originated the news item, situates this development within the ‘artificial brain’ effort while describing it in more detail (Note: A link has been removed),

“It’s a small, but important step,” said Dmitri Strukov, a professor of electrical and computer engineering. With time and further progress, the circuitry may eventually be expanded and scaled to approach something like the human brain’s, which has 10^15 (one quadrillion) synaptic connections.

For all its errors and potential for faultiness, the human brain remains a model of computational power and efficiency for engineers like Strukov and his colleagues, Mirko Prezioso, Farnood Merrikh-Bayat, Brian Hoskins and Gina Adam. That’s because the brain can accomplish in a fraction of a second certain functions that computers would require far more time and energy to perform.

… As you read this, your brain is making countless split-second decisions about the letters and symbols you see, classifying their shapes and relative positions to each other and deriving different levels of meaning through many channels of context, in as little time as it takes you to scan over this print. Change the font, or even the orientation of the letters, and it’s likely you would still be able to read this and derive the same meaning.

In the researchers’ demonstration, the circuit implementing the rudimentary artificial neural network was able to successfully classify three letters (“z”, “v” and “n”) by their images, each letter stylized in different ways or saturated with “noise”. In a process similar to how we humans pick our friends out from a crowd, or find the right key from a ring of similar keys, the simple neural circuitry was able to correctly classify the simple images.
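As an aside on scale: the task the UC Santa Barbara team describes, separating three small letter images even when pixels are flipped by noise, can be sketched in software with a single-layer classifier of a few dozen weights. The NumPy toy below is only meant to illustrate the concept; the 3×3 letter patterns are hypothetical and the training rule is a plain delta rule, not the scheme used to train the memristor crossbar in the Nature paper.

```python
import numpy as np

# Illustrative only: a tiny single-layer classifier for noisy letter images,
# conceptually similar in scale to the ~100-synapse memristive network.
rng = np.random.default_rng(0)

letters = {  # hypothetical 3x3 binary stylizations of 'z', 'v', 'n'
    "z": [1,1,1, 0,1,0, 1,1,1],
    "v": [1,0,1, 1,0,1, 0,1,0],
    "n": [1,0,1, 1,1,1, 1,0,1],
}
X = np.array(list(letters.values()), dtype=float)
labels = np.eye(3)                      # one-hot targets for the three classes
W = np.zeros((3, 10))                   # 3 outputs x (9 pixels + bias) = 30 "synapses"

def with_bias(x):
    return np.append(x, 1.0)

for _ in range(200):                    # simple delta-rule training on clean letters
    for x, t in zip(X, labels):
        xb = with_bias(x)
        W += 0.1 * np.outer(t - W @ xb, xb)

# Test on noisy copies: flip each pixel with 20% probability
correct = 0
for trial in range(300):
    k = rng.integers(3)
    noisy = X[k].copy()
    flips = rng.random(9) < 0.2
    noisy[flips] = 1 - noisy[flips]
    correct += int(np.argmax(W @ with_bias(noisy)) == k)
print(f"accuracy on noisy letters: {correct/300:.0%}")
```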

“While the circuit was very small compared to practical networks, it is big enough to prove the concept of practicality,” said Merrikh-Bayat. According to Gina Adam, as interest grows in the technology, so will research momentum.

“And, as more solutions to the technological challenges are proposed, the technology will be able to make it to the market sooner,” she said.

Key to this technology is the memristor (a combination of “memory” and “resistor”), an electronic component whose resistance changes depending on the direction of the flow of the electrical charge. Unlike conventional transistors, which rely on the drift and diffusion of electrons and their holes through semiconducting material, memristor operation is based on ionic movement, similar to the way human neural cells generate neural electrical signals.

“The memory state is stored as a specific concentration profile of defects that can be moved back and forth within the memristor,” said Strukov. The ionic memory mechanism brings several advantages over purely electron-based memories, which makes it very attractive for artificial neural network implementation, he added.

“For example, many different configurations of ionic profiles result in a continuum of memory states and hence analog memory functionality,” he said. “Ions are also much heavier than electrons and do not tunnel easily, which permits aggressive scaling of memristors without sacrificing analog properties.”
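For readers who want the canonical mathematical picture behind “a continuum of memory states”, Strukov and his then-colleagues at HP formalized the movable-defect idea in 2008 with a simplified linear ion-drift model. It is a textbook idealization offered here for orientation only, not necessarily an accurate description of the devices in this paper:

$$ v(t) = \left[ R_{\mathrm{ON}}\, x(t) + R_{\mathrm{OFF}}\, \bigl(1 - x(t)\bigr) \right] i(t), \qquad \frac{dx}{dt} = \frac{\mu_v R_{\mathrm{ON}}}{D^2}\, i(t), $$

where x = w/D (between 0 and 1) is the normalized width of the ion-rich region, D is the film thickness and μ_v is the ion mobility. Because the state x integrates the current that has passed through the device, its resistance carries a continuous record of its history, which is the analog, experience-dependent memory described above.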

This is where analog memory trumps digital memory: In order to create the same human brain-type functionality with conventional technology, the resulting device would have to be enormous — loaded with multitudes of transistors that would require far more energy.

“Classical computers will always find an ineluctable limit to efficient brain-like computation in their very architecture,” said lead researcher Prezioso. “This memristor-based technology relies on a completely different way, inspired by the biological brain, to carry out computation.”

To be able to approach the functionality of the human brain, however, many more memristors would be required to build more complex neural networks to do the same kinds of things we can do with barely any effort and energy, such as identify different versions of the same thing or infer the presence or identity of an object not based on the object itself but on other things in a scene.

Potential applications already exist for this emerging technology, such as medical imaging, the improvement of navigation systems or even for searches based on images rather than on text. The energy-efficient compact circuitry the researchers are striving to create would also go a long way toward creating the kind of high-performance computers and memory storage devices users will continue to seek long after the proliferation of digital transistors predicted by Moore’s Law becomes too unwieldy for conventional electronics.

Here’s a link to and a citation for the paper,

Training and operation of an integrated neuromorphic network based on metal-oxide memristors by M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, & D. B. Strukov. Nature 521, 61–64 (07 May 2015) doi:10.1038/nature14441

This paper is behind a paywall but a free preview is available through ReadCube Access.

The third and last piece of research, from Rice University, hasn’t received any publicity yet, which is unusual given Rice’s very active communications/media department. Here’s a link to and a citation for their memristor paper,

2D materials: Memristor goes two-dimensional by Jiangtan Yuan & Jun Lou. Nature Nanotechnology 10, 389–390 (2015) doi:10.1038/nnano.2015.94 Published online 07 May 2015

This paper is behind a paywall but a free preview is available through ReadCube Access.

Dexter Johnson has written up the RMIT research (his May 14, 2015 post on the Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website). He linked it to research from Mark Hersam’s team at Northwestern University (my April 10, 2015 posting) on creating a three-terminal memristor enabling its use in complex electronics systems. Dexter strongly hints in his headline that these developments could lead to bionic brains.

For those who’d like more memristor information, this June 26, 2014 posting, which brings together some developments at the University of Michigan and information about developments in the industrial sector, is my suggestion for a starting point. Also, you may want to check out my material on HP Labs, which features prominently in the memristor story due to the company’s 2008 ‘discovery’ of the memristor (described on a page in my Nanotech Mysteries wiki) and the controversy triggered by the company’s terminology (there’s more about the controversy in my April 7, 2010 interview with Forrest H Bennett III).

Self-organizing nanotubes and nonequilibrium systems provide insights into evolution and artificial life

If you’re interested in the second law of thermodynamics, this Feb. 10, 2015 news item on ScienceDaily provides some insight into the second law, self-organized systems, and evolution,

The second law of thermodynamics tells us that all systems evolve toward a state of maximum entropy, wherein all energy is dissipated as heat, and no available energy remains to do work. Since the mid-20th century, research has pointed to an extension of the second law for nonequilibrium systems: the Maximum Entropy Production Principle (MEPP) states that a system away from equilibrium evolves in such a way as to maximize entropy production, given present constraints.

Now, physicists Alexey Bezryadin, Alfred Hubler, and Andrey Belkin from the University of Illinois at Urbana-Champaign, have demonstrated the emergence of self-organized structures that drive the evolution of a non-equilibrium system to a state of maximum entropy production. The authors suggest MEPP underlies the evolution of the artificial system’s self-organization, in the same way that it underlies the evolution of ordered systems (biological life) on Earth. …
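A quick bit of arithmetic makes the principle easier to hold onto. In non-equilibrium thermodynamics the entropy production rate is, roughly, a sum of fluxes multiplied by their conjugate forces,

$$ \dot{S} = \sum_k J_k X_k \ge 0 . $$

If we assume, to keep the estimate simple (the assumption is mine), that the nanotube system described below is held at a fixed applied voltage V, then the electrical power dissipated as heat is P = V^2/R and the entropy production rate is approximately

$$ \dot{S} \approx \frac{V^2}{R\,T} . $$

Self-assembled conducting chains lower the resistance R, which raises the entropy production rate; that, in rough terms, is why MEPP favours their spontaneous emergence.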

A Feb. 10, 2015 University of Illinois College of Engineering news release (also on EurekAlert), which originated the news item, provides more detail about the theory and the research,

MEPP may have profound implications for our understanding of the evolution of biological life on Earth and of the underlying rules that govern the behavior and evolution of all nonequilibrium systems. Life emerged on Earth from the strongly nonequilibrium energy distribution created by the Sun’s hot photons striking a cooler planet. Plants evolved to capture high energy photons and produce heat, generating entropy. Then animals evolved to eat plants increasing the dissipation of heat energy and maximizing entropy production.

In their experiment, the researchers suspended a large number of carbon nanotubes in a non-conducting non-polar fluid and drove the system out of equilibrium by applying a strong electric field. Once electrically charged, the system evolved toward maximum entropy through two distinct intermediate states, with the spontaneous emergence of self-assembled conducting nanotube chains.

In the first state, the “avalanche” regime, the conductive chains aligned themselves according to the polarity of the applied voltage, allowing the system to carry current and thus to dissipate heat and produce entropy. The chains appeared to sprout appendages as nanotubes aligned themselves so as to adjoin adjacent parallel chains, effectively increasing entropy production. But frequently, this self-organization was destroyed through avalanches triggered by the heating and charging that emanates from the emerging electric current streams. (…)

“The avalanches were apparent in the changes of the electric current over time,” said Bezryadin.

“Toward the final stages of this regime, the appendages were not destroyed during the avalanches, but rather retracted until the avalanche ended, then reformed their connection. So it was obvious that the avalanches correspond to the ‘feeding cycle’ of the ‘nanotube insect’,” comments Bezryadin.

In the second relatively stable stage of evolution, the entropy production rate reached maximum or near maximum. This state is quasi-stable in that there were no destructive avalanches.

The study points to a possible classification scheme for evolutionary stages and a criterion for the point at which evolution of the system is irreversible—wherein entropy production in the self-organizing subsystem reaches its maximum possible value. Further experimentation on a larger scale is necessary to affirm these underlying principles, but if they hold true, they will prove a great advantage in predicting behavioral and evolutionary trends in nonequilibrium systems.

The authors draw an analogy between the evolution of intelligent life forms on Earth and the emergence of the wiggling bugs in their experiment. The researchers note that further quantitative studies are needed to round out this comparison. In particular, they would need to demonstrate that their “wiggling bugs” can multiply, which would require the experiment be reproduced on a significantly larger scale.

Such a study, if successful, would have implications for the eventual development of technologies that feature self-organized artificial intelligence, an idea explored elsewhere by co-author Alfred Hubler, funded by the Defense Advanced Research Projects Agency [DARPA]. [emphasis mine]

“The general trend of the evolution of biological systems seems to be this: more advanced life forms tend to dissipate more energy by broadening their access to various forms of stored energy,” Bezryadin proposes. “Thus a common underlying principle can be suggested between our self-organized clouds of nanotubes, which generate more and more heat by reducing their electrical resistance and thus allow more current to flow, and the biological systems which look for new means to find food, either through biological adaptation or by inventing more technologies.

“Extended sources of food allow biological forms to further grow, multiply, consume more food and thus produce more heat and generate entropy. It seems reasonable to say that real life organisms are still far from the absolute maximum of the entropy production rate. In both cases, there are ‘avalanches’ or ‘extinction events’, which set back this evolution. Only if all free energy given by the Sun is consumed, by building a Dyson sphere for example, and converted into heat can a definitely stable phase of the evolution be expected.”

“Intelligence, as far as we know, is inseparable from life,” he adds. “Thus, to achieve artificial life or artificial intelligence, our recommendation would be to study systems which are far from equilibrium, with many degrees of freedom—many building blocks—so that they can self-organize and participate in some evolution. The entropy production criterion appears to be the guiding principle of the evolution efficiency.”

I am fascinated

  • (a) because this piece took an unexpected turn onto the topic of artificial life/artificial intelligence,
  • (b) because of my longstanding interest in artificial life/artificial intelligence,
  • (c) because of the military connection, and
  • (d) because this is the first time I’ve come across something that provides a bridge from fundamental particles to nanoparticles.

Here’s a link to and a citation for the paper,

Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production by A. Belkin, A. Hubler, & A. Bezryadin. Scientific Reports 5, Article number: 8323 doi:10.1038/srep08323 Published 09 February 2015

Adding to my delight, this paper is open access.

‘Eve’ (robot/artificial intelligence) searches for new drugs

Following on today’s (Feb. 5, 2015) earlier post, The future of work during the age of robots and artificial intelligence, here’s a Feb. 3, 2015 news item on ScienceDaily featuring ‘Eve’, a scientist robot,

Eve, an artificially intelligent ‘robot scientist’, could make drug discovery faster and much cheaper, say researchers writing in the Royal Society journal Interface. The team has demonstrated the success of the approach as Eve discovered that a compound shown to have anti-cancer properties might also be used in the fight against malaria.

A Feb. 4, 2015 University of Manchester press release (also on EurekAlert but dated Feb. 3, 2015), which originated the news item, gives a brief introduction to robot scientists,

Robot scientists are a natural extension of the trend of increased involvement of automation in science. They can automatically develop and test hypotheses to explain observations, run experiments using laboratory robotics, interpret the results to amend their hypotheses, and then repeat the cycle, automating high-throughput hypothesis-led research. Robot scientists are also well suited to recording scientific knowledge: as the experiments are conceived and executed automatically by computer, it is possible to completely capture and digitally curate all aspects of the scientific process.

In 2009, Adam, a robot scientist developed by researchers at the Universities of Aberystwyth and Cambridge, became the first machine to autonomously discover new scientific knowledge. The same team has now developed Eve, based at the University of Manchester, whose purpose is to speed up the drug discovery process and make it more economical. In the study published today, they describe how the robot can help identify promising new drug candidates for malaria and neglected tropical diseases such as African sleeping sickness and Chagas’ disease.

“Neglected tropical diseases are a scourge of humanity, infecting hundreds of millions of people, and killing millions of people every year,” says Professor Ross King, from the Manchester Institute of Biotechnology at the University of Manchester. “We know what causes these diseases and that we can, in theory, attack the parasites that cause them using small molecule drugs. But the cost and speed of drug discovery and the economic return make them unattractive to the pharmaceutical industry.

“Eve exploits its artificial intelligence to learn from early successes in her screens and select compounds that have a high probability of being active against the chosen drug target. A smart screening system, based on genetically engineered yeast, is used. This allows Eve to exclude compounds that are toxic to cells and select those that block the action of the parasite protein while leaving any equivalent human protein unscathed. This reduces the costs, uncertainty, and time involved in drug screening, and has the potential to improve the lives of millions of people worldwide.”

The press release goes on to describe how ‘Eve’ works,

Eve is designed to automate early-stage drug design. First, she systematically tests each member from a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are screened against assays (tests) designed to be automatically engineered, and can be generated much faster and more cheaply than the bespoke assays that are currently standard. This enables more types of assay to be applied, more efficient use of screening facilities to be made, and thereby increases the probability of a discovery within a given budget.

Eve’s robotic system is capable of screening over 10,000 compounds per day. However, while simple to automate, mass screening is still relatively slow and wasteful of resources as every compound in the library is tested. It is also unintelligent, as it makes no use of what is learnt during screening.

To improve this process, Eve selects at random a subset of the library to find compounds that pass the first assay; any ‘hits’ are re-tested multiple times to reduce the probability of false positives. Taking this set of confirmed hits, Eve uses statistics and machine learning to predict new structures that might score better against the assays. Although she currently does not have the ability to synthesise such compounds, future versions of the robot could potentially incorporate this feature.
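The screening loop described above (random screening of a subset, repeated confirmation of hits, then machine-learning-guided selection of what to test next) can be sketched schematically. The Python below is a hedged illustration only: the function names, the yeast-assay stand-in, the similarity heuristic used in place of the real machine-learning model, and all thresholds are my own placeholders, not the actual Eve software.

```python
import random

def assay(compound):
    """Stand-in for the automated yeast-based assay: returns a noisy activity score."""
    return compound["true_activity"] + random.gauss(0, 0.1)

def screen(library, budget=200, hit_threshold=0.8, confirm_repeats=3):
    # 1. Screen a random subset of the library
    sampled = random.sample(library, budget)
    candidate_hits = [c for c in sampled if assay(c) > hit_threshold]

    # 2. Re-test each hit several times to weed out false positives
    confirmed = [c for c in candidate_hits
                 if sum(assay(c) > hit_threshold for _ in range(confirm_repeats))
                 >= confirm_repeats - 1]

    # 3. Use confirmed hits to rank the rest of the library by predicted activity
    #    (a trivial similarity heuristic stands in for the machine-learning model)
    #    and test the most promising compounds next.
    untested = [c for c in library if c not in sampled]
    def predicted_activity(c):
        return max(1 - abs(c["feature"] - h["feature"]) for h in confirmed) if confirmed else 0
    next_round = sorted(untested, key=predicted_activity, reverse=True)[:budget]
    confirmed += [c for c in next_round if assay(c) > hit_threshold]
    return confirmed

library = [{"feature": random.random(), "true_activity": random.random()} for _ in range(5000)]
print(len(screen(library)), "confirmed hits")
```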

Steve Oliver from the Cambridge Systems Biology Centre and the Department of Biochemistry at the University of Cambridge says: “Every industry now benefits from automation and science is no exception. Bringing in machine learning to make this process intelligent – rather than just a ‘brute force’ approach – could greatly speed up scientific progress and potentially reap huge rewards.”

To test the viability of the approach, the researchers developed assays targeting key molecules from parasites responsible for diseases such as malaria, Chagas’ disease and schistosomiasis and tested against these a library of approximately 1,500 clinically approved compounds. Through this, Eve showed that a compound that has previously been investigated as an anti-cancer drug inhibits a key molecule known as DHFR in the malaria parasite. Drugs that inhibit this molecule are currently routinely used to protect against malaria, and are given to over a million children; however, the emergence of strains of parasites resistant to existing drugs means that the search for new drugs is becoming increasingly more urgent.

“Despite extensive efforts, no one has been able to find a new antimalarial that targets DHFR and is able to pass clinical trials,” adds Professor Oliver. “Eve’s discovery could be even more significant than just demonstrating a new approach to drug discovery.”

Here’s a link to and a citation for the paper,

Cheaper faster drug development validated by the repositioning of drugs against neglected tropical diseases by Kevin Williams, Elizabeth Bilsland, Andrew Sparkes, Wayne Aubrey, Michael Young, Larisa N. Soldatova, Kurt De Grave, Jan Ramon, Michaela de Clare, Worachart Sirawaraporn, Stephen G. Oliver, and Ross D. King. Journal of the Royal Society Interface March 2015 Volume: 12 Issue: 104 DOI: 10.1098/rsif.2014.1289 Published 4 February 2015

This paper is open access.

The future of work during the age of robots and artificial intelligence

2014 was quite the year for discussions about robots/artificial intelligence (AI) taking over the world of work. There was my July 16, 2014 post titled, Writing and AI or is a robot writing this blog?, where I discussed the implications of algorithms which write news stories (business and sports, so far) in the wake of a deal that Associated Press signed with a company called Automated Insights. A few weeks later, the Pew Research Center released a report titled, AI, Robotics, and the Future of Jobs, which was widely covered. As well, sometime during the year, renowned physicist Stephen Hawking expressed serious concerns about artificial intelligence and our ability to control it.

It seems that 2015 is going to be another banner year for this discussion. Before launching into the latest on this topic, here’s a sampling of the Pew research and the responses to it. From an Aug. 6, 2014 Pew summary about AI, Robotics, and the Future of Jobs by Aaron Smith and Janna Anderson,

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade.

We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts who have been identified by researching those who are widely quoted as technology builders and analysts and those who have made insightful predictions to our previous queries about the future of the Internet. …

I wouldn’t have expected Jeff Bercovici’s Aug. 6, 2014 article for Forbes to be quite so hesitant about the possibilities of our robotic and artificially intelligent future,

As part of a major ongoing project looking at the future of the internet, the Pew Research Internet Project canvassed some 1,896 technologists, futurists and other experts about how they see advances in robotics and artificial intelligence affecting the human workforce in 2025.

The results were not especially reassuring. Nearly half of the respondents (48%) predicted that robots and AI will displace more jobs than they create over the coming decade. While that left a slim majority believing the impact of technology on employment will be neutral or positive, that’s not necessarily grounds for comfort: Many experts told Pew they expect the jobs created by the rise of the machines will be lower paying and less secure than the ones displaced, widening the gap between rich and poor, while others said they simply don’t think the major effects of robots and AI, for better or worse, will be in evidence yet by 2025.

Chris Gayomali’s Aug. 6, 2014 article for Fast Company poses an interesting question about how this brave new future will be financed,

A new study by Pew Internet Research takes a hard look at how innovations in robotics and artificial intelligence will impact the future of work. To reach their conclusions, Pew researchers invited 12,000 experts (academics, researchers, technologists, and the like) to answer two basic questions:

Will networked, automated, artificial intelligence (AI) applications and robotic devices have displaced more jobs than they have created by 2025?
To what degree will AI and robotics be parts of the ordinary landscape of the general population by 2025?

Close to 1,900 experts responded. About half (48%) of the people queried envision a future in which machines have displaced both blue- and white-collar jobs. It won’t be so dissimilar from the fundamental shift we saw in manufacturing, in which fewer (human) bosses oversaw automated assembly lines.

Meanwhile, the other 52% of experts surveyed speculate that while many of the jobs will be “substantially taken over by robots,” humans won’t be displaced outright. Rather, many people will be funneled into new job categories that don’t quite exist yet. …

Some worry that over the next 10 years, we’ll see a large number of middle class jobs disappear, widening the economic gap between the rich and the poor. The shift could be dramatic. As artificial intelligence becomes less artificial, they argue, the worry is that jobs that earn a decent living wage (say, customer service representatives, for example) will no longer be available, putting lots and lots of people out of work, possibly without the requisite skill set to forge new careers for themselves.

How do we avoid this? One revealing thread suggested by experts argues that the responsibility will fall on businesses to protect their employees. “There is a relentless march on the part of commercial interests (businesses) to increase productivity so if the technical advances are reliable and have a positive ROI [return on investment],” writes survey respondent Glenn Edens, a director of research in networking, security, and distributed systems at PARC, which is owned by Xerox. “Ultimately we need a broad and large base of employed population, otherwise there will be no one to pay for all of this new world.” [emphasis mine]

Alex Hern’s Aug. 6, 2014 article for the Guardian reviews the report and comments on the current educational system’s ability to prepare students for the future,

Almost all of the respondents are united on one thing: the displacement of work by robots and AI is going to continue, and accelerate, over the coming decade. Where they split is in the societal response to that displacement.

The optimists predict that the economic boom that would result from vastly reduced costs to businesses would lead to the creation of new jobs in huge numbers, and a newfound premium being placed on the value of work that requires “uniquely human capabilities”. …

But the pessimists worry that the benefits of the labor replacement will accrue to those already wealthy enough to own the automatons, be that in the form of patents for algorithmic workers or the physical form of robots.

The ranks of the unemployed could swell, as people are laid off from work they are qualified in without the ability to retrain for careers where their humanity is a positive. And since this will happen in every economic sector simultaneously, civil unrest could be the result.

One thing many experts agreed on was the need for education to prepare for a post-automation world. “Only the best-educated humans will compete with machines,” said internet sociologist Howard Rheingold.

“And education systems in the US and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorise what is told them, preparing them for life in a 20th century factory.”

Then, Will Oremus’ Aug. 6, 2014 article for Slate suggests we are already experiencing displacement,

… the current jobless recovery, along with a longer-term trend toward income and wealth inequality, has some thinkers wondering whether the latest wave of automation is different from those that preceded it.

Massachusetts Institute of Technology researchers Andrew McAfee and Erik Brynjolfsson, among others, see a “great decoupling” of productivity from wages since about 2000 as technology outpaces human workers’ education and skills. Workers, in other words, are losing the race between education and technology. This may be exacerbating a longer-term trend in which capital has gained the upper hand on labor since the 1970s.

The results of the survey were fascinating. Almost exactly half of the respondents (48 percent) predicted that intelligent software will disrupt more jobs than it can replace. The other half predicted the opposite.

The lack of expert consensus on such a crucial and seemingly straightforward question is startling. It’s even more so given that history and the leading economic models point so clearly to one side of the question: the side that reckons society will adjust, new jobs will emerge, and technology will eventually leave the economy stronger.

More recently, Manish Singh has written about some of his concerns as a writer who could be displaced in a Jan. 31, 2015 (?) article for Beta News (Note: A link has been removed),

Robots are after my job. They’re after yours as well, but let us deal with my problem first. Associated Press, an American multinational nonprofit news agency, revealed on Friday [Jan. 30, 2015] that it published 3,000 articles in the last three months of 2014. The company could previously only publish 300 stories. It didn’t hire more journalists, neither did its existing headcount start writing more, but the actual reason behind this exponential growth is technology. All those stories were written by an algorithm.

The articles produced by the algorithm were accurate, and you won’t be able to separate them from stories written by humans. Good lord, all the stories were written in accordance with the AP Style Guide, something not all journalists follow (but arguably, should).

There has been a growth in the number of such software. Narrative Science, a Chicago-based company offers an automated narrative generator powered by artificial intelligence. The company’s co-founder and CTO, Kristian Hammond, said last year that he believes that by 2030, 90 percent of news could be written by computers. Forbes, a reputable news outlet, has used Narrative’s software. Some news outlets use it to write email newsletters and similar things.

Singh also sounds a note of concern for other jobs by including this video (approximately 16 mins.) in his piece,

This video (Humans Need Not Apply) provides an excellent overview of the situation although it seems C. G. P. Grey, the person who produced and posted the video on YouTube, holds a more pessimistic view of the future than some other futurists.  C. G. P. Grey has a website here and is profiled here on Wikipedia.

One final bit: there’s a robot art critic, which some are suggesting is superior to human art critics, in Thomas Gorton’s Jan. 16, 2015 (?) article ‘This robot reviews art better than most critics’ for Dazed Digital (Note: Links have been removed),

… the Novice Art Blogger, a Tumblr page set up by Matthew Plummer Fernandez. The British-Colombian artist programmed a bot with deep learning algorithms to analyse art; so instead of an overarticulate critic rambling about praxis, you get a review that gets down to the nitty-gritty about what exactly you see in front of you.

The results are charmingly honest: think a round robin of Google Translate text uninhibited by PR fluff, personal favouritism or the whims of a bad mood. We asked Novice Art Blogger to review our most recent Winter 2014 cover with Kendall Jenner. …

Beyond Kendall Jenner, it’s worth reading Gorton’s article for the interview with Plummer Fernandez.

University of Toronto, ebola epidemic, and artificial intelligence applied to chemistry

It’s hard to tell much from the Nov. 5, 2014 University of Toronto news release by Michael Kennedy (also on EurekAlert but dated Nov. 10, 2014) about in silico drug testing focused on finding a treatment for Ebola,

The University of Toronto, Chematria and IBM are combining forces in a quest to find new treatments for the Ebola virus.

Using a virtual research technology invented by Chematria, a startup housed at U of T’s Impact Centre, the team will use software that learns and thinks like a human chemist to search for new medicines. Running on Canada’s most powerful supercomputer, the effort will simulate and analyze the effectiveness of millions of hypothetical drugs in just a matter of weeks.

“What we are attempting would have been considered science fiction, until now,” says Abraham Heifets (PhD), a U of T graduate and the chief executive officer of Chematria. “We are going to explore the possible effectiveness of millions of drugs, something that used to take decades of physical research and tens of millions of dollars, in mere days with our technology.”

The news release makes it all sound quite exciting,

Chematria’s technology is a virtual drug discovery platform based on the science of deep learning neural networks and has previously been used for research on malaria, multiple sclerosis, C. difficile, and leukemia. [emphases mine]

Much like the software used to design airplanes and computer chips in simulation, this new system can predict the possible effectiveness of new medicines, without costly and time-consuming physical synthesis and testing. [emphasis mine] The system is driven by a virtual brain that teaches itself by “studying” millions of datapoints about how drugs have worked in the past. With this vast knowledge, the software can apply the patterns it has learned to predict the effectiveness of hypothetical drugs, and suggest surprising uses for existing drugs, transforming the way medicines are discovered.
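As a rough illustration of what “studying” millions of datapoints about how drugs have worked in the past can mean in practice, here is a hedged sketch of a generic activity predictor: fit a model to (molecular fingerprint, measured activity) pairs, then score and rank a library of hypothetical or repurposable molecules. The binary fingerprints, the ridge-regression model, and the synthetic data are generic stand-ins of my own; Chematria’s actual platform is proprietary and presumably far more sophisticated.

```python
import numpy as np

# Generic stand-in for a "learn from past drug data, score new candidates" workflow.
# The fingerprints, model and data are invented for illustration.
rng = np.random.default_rng(1)

n_known, n_candidates, n_bits = 2000, 500, 128
X_known = rng.integers(0, 2, (n_known, n_bits)).astype(float)   # binary "fingerprints"
w_true = rng.normal(0, 1, n_bits)                               # hidden structure-activity relation
y_known = X_known @ w_true + rng.normal(0, 0.5, n_known)        # past measured activities

# Fit a ridge-regression surrogate model on the historical data
lam = 1.0
A = X_known.T @ X_known + lam * np.eye(n_bits)
w_fit = np.linalg.solve(A, X_known.T @ y_known)

# Score a library of hypothetical / repurposable candidates and rank them
X_candidates = rng.integers(0, 2, (n_candidates, n_bits)).astype(float)
scores = X_candidates @ w_fit
top = np.argsort(scores)[::-1][:10]
print("ten highest-scoring candidates:", top)
```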

My understanding is that Chematria’s is not the only “virtual drug discovery platform based on the science of deep learning neural networks” as is acknowledged in the next paragraph. In fact, there’s widespread interest in the medical research community as evidenced by such projects as Seurat-1’s NOTOX* and others. Regarding the research on “malaria, multiple sclerosis, C. difficile, and leukemia,” more details would be welcome, e.g., what happened?

A Nov. 4, 2014 article for Mashable by Anita Li does offer a new detail about the technology,

Now, a team of Canadian researchers are hunting for new Ebola treatments, using “groundbreaking” artificial-intelligence technology that they claim can predict the effectiveness of new medicines 150 times faster than current methods.

With the quotes around the word, groundbreaking, Li suggests a little skepticism about the claim.

Here’s more from Li where she seems to have found some company literature,

Chematria describes its technology as a virtual drug-discovery platform that helps pharmaceutical companies “determine which molecules can become medicines.” Here’s how it works, according to the company:

The system is driven by a virtual brain, modeled on the human visual cortex, that teaches itself by “studying” millions of datapoints about how drugs have worked in the past. With this vast knowledge, Chematria’s brain can apply the patterns it perceives, to predict the effectiveness of hypothetical drugs, and suggest surprising uses for existing drugs, transforming the way medicines are discovered.

I was not able to find a Chematria website or anything much more than this brief description on the University of Toronto website (from the Impact Centre’s Current Companies webpage),

Chematria makes software that helps pharmaceutical companies determine which molecules can become medicines. With Chematria’s proprietary approach to molecular docking simulations, pharmaceutical researchers can confidently predict potent molecules for novel biological targets, thereby enabling faster drug development for a fraction of the price of wet-lab experiments.

Chematria’s Ebola project is focused on drugs that are already available but could be put to a new use (from Li’s article),

In response to the outbreak, Chematria recently launched an Ebola project, using its algorithm to evaluate molecules that have already gone through clinical trials, and have proven to be safe. “That means we can expedite the process of getting the treatment to the people who need it,” Heifets said. “In a pandemic situation, you’re under serious time pressure.”

He cited Aspirin as an example of proven medicine that has more than one purpose: People take it for headaches, but it’s also helpful for heart disease. Similarly, a drug that’s already out there may also hold the cure for Ebola.

I recommend reading Li’s article in its entirety.

The University of Toronto news release provides more detail about the partners involved in this ebola project,

… The unprecedented speed and scale of this investigation is enabled by the unique strengths of the three partners: Chematria is offering the core artificial intelligence technology that performs the drug research, U of T is contributing biological insights about Ebola that the system will use to search for new treatments and IBM is providing access to Canada’s fastest supercomputer, Blue Gene/Q.

“Our team is focusing on the mechanism Ebola uses to latch on to the cells it infects,” said Dr. Jeffrey Lee of the University of Toronto. “If we can interrupt that process with a new drug, it could prevent the virus from replicating, and potentially work against other viruses like Marburg and HIV that use the same mechanism.”

The initiative may also demonstrate an alternative approach to high-speed medical research. While giving drugs to patients will always require thorough clinical testing, zeroing in on the best drug candidates can take years using today’s most common methods. Critics say this slow and prohibitively expensive process is one of the key reasons that finding treatments for rare and emerging diseases is difficult.

“If we can find promising drug candidates for Ebola using computers alone,” said Heifets, “it will be a milestone for how we develop cures.”

I hope this effort, along with all the others being made around the world, proves helpful with Ebola. It’s good to see research into drugs (chemical formulations) that are familiar to the medical community and can be used for a different purpose than originally intended. Drugs that are ‘repurposed’ should be cheaper than new ones and we already have data about side effects.

As for the “milestone for how we develop cures,” this team’s work along with all the international research on this front and on how we assess toxicity should certainly make that milestone possible.

* Full disclosure: I came across Seurat-1’s NOTOX project when I attended (at Seurat-1’s expense) the 9th World Congress on Alternatives to Animal Testing held in Aug. 2014 in Prague.

Getting neuromorphic with a synaptic transistor

Scientists at Harvard University (Massachusetts, US) have devised a transistor that simulates the synapses found in brains. From a Nov. 2, 2013 news item on ScienceDaily,

It doesn’t take a Watson to realize that even the world’s best supercomputers are staggeringly inefficient and energy-intensive machines.

Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits; they continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.

Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.

Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. [emphasis mine]

There are two other projects that I know of (and I imagine there are others) focused on intelligence that’s embedded rather than algorithmic. My December 24, 2012 posting focused on a joint (National Institute for Materials Science in Japan and the University of California, Los Angeles) project where researchers developed a nanoionic device with a range of neuromorphic and electrical properties. There’s also the memristor, mentioned in my Feb. 26, 2013 posting (and many other times on this blog), which features a proposal to create an artificial brain.

Getting back to Harvard’s synaptic transistor (from the Nov. 1, 2013 Harvard University news release which originated the news item),

The human mind, for all its phenomenal computing power, runs on roughly 20 Watts of energy (less than a household light bulb), so it offers a natural model for engineers.

“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.

Here’s an image of synaptic transistors that the researchers from Harvard’s School of Engineering and Applied Science (SEAS) have supplied,

Several prototypes of the synaptic transistor are visible on this silicon chip. (Photo by Eliza Grinnell, SEAS Communications.)


The news release provides a description of the synaptic transistor and how it works,

While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum “axon” and “dendrite” terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the time delay in the electrical signal.

Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a magnitude of voltage which it applies to the ionic liquid, creating an electric field that either drives ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.
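The delay-to-conductance mapping described above can be captured in a toy behavioural model, sketched below in Python. The exponential dependence on delay, the constants, and the class itself are assumptions of mine made for illustration; they are not the measured physics of the samarium nickelate channel.

```python
import math

# Toy behavioural model of a "synaptic transistor": the gate converts the delay
# between paired spikes into a voltage, which moves ions and nudges the channel
# conductance. Purely illustrative; not the SEAS device physics.

class SynapticTransistor:
    def __init__(self, g=1.0, g_min=0.1, g_max=10.0, tau=20e-3, lr=0.2):
        self.g, self.g_min, self.g_max, self.tau, self.lr = g, g_min, g_max, tau, lr

    def update(self, delay):
        """Shorter delays (tightly correlated spikes) give a larger gate voltage
        and hence a larger, analog change in conductance."""
        gate_voltage = math.exp(-abs(delay) / self.tau)   # delay -> voltage magnitude
        sign = 1.0 if delay >= 0 else -1.0                # spike order sets direction
        self.g = min(self.g_max, max(self.g_min, self.g + sign * self.lr * gate_voltage))
        return self.g

syn = SynapticTransistor()
for delay in (5e-3, 5e-3, 60e-3, -5e-3):                  # seconds between paired spikes
    print(f"delay {delay*1e3:+.0f} ms -> conductance {syn.update(delay):.2f}")
```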

The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.

“This system changes its conductance in an analog way, continuously, as the composition of the material changes,” explains Shi. “It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just ‘on’ or ‘off.’”

The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.

Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.

“We exploit the extreme sensitivity of this material,” says Ramanathan [principal investigator and associate professor of materials science at Harvard SEAS]. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”

The nickelate system is also well positioned for seamless integration into existing silicon-based systems.

“In this paper, we demonstrate high-temperature operation, but the beauty of this type of a device is that the ‘learning’ behavior is more or less temperature insensitive, and that’s a big advantage,” says Ramanathan. “We can operate this anywhere from about room temperature up to at least 160 degrees Celsius.”

For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.

“In our proof-of-concept device, the time constant is really set by our experimental geometry,” says Ramanathan. “In other words, to really make a super-fast device, all you’d have to do is confine the liquid and position the gate electrode closer to it.”

In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this “ultimate fluidic transistor.”

Here’s a link to and a citation for the researchers’ paper,

A correlated nickelate synaptic transistor by Jian Shi, Sieu D. Ha, You Zhou, Frank Schoofs, & Shriram Ramanathan. Nature Communications 4, Article number: 2676 doi:10.1038/ncomms3676 Published 31 October 2013

This article is behind a paywall.

Brain-to-brain communication, organic computers, and BAM (brain activity map), the connectome

Miguel Nicolelis, a professor at Duke University, has been making international headlines lately with two brain projects. The first one, about implanting a brain chip that allows rats to perceive infrared light, was mentioned in my Feb. 15, 2013 posting. The latest project is a brain-to-brain (rats) communication project as per a Feb. 28, 2013 news release on EurekAlert,

Researchers have electronically linked the brains of pairs of rats for the first time, enabling them to communicate directly to solve simple behavioral puzzles. A further test of this work successfully linked the brains of two animals thousands of miles apart—one in Durham, N.C., and one in Natal, Brazil.

The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an “organic computer,” which could allow sharing of motor and sensory information among groups of animals. The study was published Feb. 28, 2013, in the journal Scientific Reports.

“Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought,” said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. “In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?'”

Ben Schiller in a Mar. 1, 2013 article for Fast Company describes both the latest experiment and the work leading up to it,

First, two rats were trained to press a lever when a light went on in their cage. Press the right lever, and they would get a reward–a sip of water. The animals were then split in two: one cage had a lever with a light, while another had a lever without a light. When the first rat pressed the lever, the researchers sent electrical activity from its brain to the second rat. It pressed the right lever 70% of the time (more than half).

In another experiment, the rats seemed to collaborate. When the second rat didn’t push the right lever, the first rat was denied a drink. That seemed to encourage the first to improve its signals, raising the second rat’s lever-pushing success rate.

Finally, to show that brain-communication would work at a distance, the researchers put one rat in a cage in North Carolina, and another in Natal, Brazil. Despite noise on the Internet connection, the brain-link worked just as well–the rate at which the second rat pushed the lever was similar to the experiment conducted solely in the U.S.

The Duke University Feb. 28, 2013 news release, the origin for the news release on EurekAlert, provides more specific details about the experiments and the rats’ training,

To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals’ brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.

One of the two rodents was designated as the “encoder” animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the “decoder” animal.

The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.

The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable based on success rates of sending signals directly to the decoder rat’s brain.

Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. The result of this peculiar contingency, said Nicolelis, led to the establishment of a “behavioral collaboration” between the pair of rats.

“We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right,” Nicolelis said. “The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward.”

In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow or wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.

The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.

To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.

“So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate,” said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. “This tells us that it could be possible to create a workable, network of animal brains distributed in many different locations.”

Will Oremus in his Feb. 28, 2013 article for Slate seems a little less buoyant about the implications of this work,

Nicolelis believes this opens the possibility of building an “organic computer” that links the brains of multiple animals into a single central nervous system, which he calls a “brain-net.” Are you a little creeped out yet? In a statement, Nicolelis adds:

We cannot even predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves.

That sounds far-fetched. But Nicolelis’ lab is developing quite the track record of “taking science fiction and turning it into science,” says Ron Frostig, a neurobiologist at UC-Irvine who was not involved in the rat study. “He’s the most imaginative neuroscientist right now.” (Frostig made it clear he meant this as a compliment, though skeptics might interpret the word less charitably.)

The most extensive coverage I’ve given Nicolelis and his work (including the Walk Again project) was in a March 16, 2012 post titled, Monkeys, mind control, robots, prosthetics, and the 2014 World Cup (soccer/football), although there are other mentions including in this Oct. 6, 2011 posting titled, Advertising for the 21st Century: B-Reel, ‘storytelling’, and mind control. By the way, Nicolelis hopes to have a paraplegic individual (using technology Nicolelis is developing for the Walk Again project) deliver the opening kick at the 2014 World Cup soccer/football games in Brazil.

While there’s much excitement about Nicolelis and his work, other ‘brain’ projects are being developed in the US, including the Brain Activity Map (BAM), which James Lewis describes in his Mar. 1, 2013 posting on the Foresight Institute blog,

A proposal alluded to by President Obama in his State of the Union address [Feb. 2013] to construct a dynamic “functional connectome” Brain Activity Map (BAM) would leverage current progress in neuroscience, synthetic biology, and nanotechnology to develop a map of each firing of every neuron in the human brain—a hundred billion neurons sampled on millisecond time scales. Although not the intended goal of this effort, a project on this scale, if it is funded, should also indirectly advance efforts to develop artificial intelligence and atomically precise manufacturing.

As Lewis notes in his posting, there’s an excellent description of BAM and other brain projects, as well as a discussion about how these ideas are linked (not necessarily by individuals but by the overall direction of work being done in many labs and in many countries across the globe) in Robert Blum’s Feb. (??), 2013 posting titled, BAM: Brain Activity Map Every Spike from Every Neuron, on his eponymous blog. Blum also offers an extensive set of links to the reports and stories about BAM. From Blum’s posting,

The essence of the BAM proposal is to create the technology over the coming decade to be able to record every spike from every neuron in the brain of a behaving organism. While this notion seems insanely ambitious, coming from a group of top investigators, the paper deserves scrutiny. At minimum it shows what might be achieved in the future by the combination of nanotechnology and neuroscience.

In 2013, as I write this, two European Flagship projects have just received funding for one billion euro each (1.3 billion dollars each). The Human Brain Project is an outgrowth of the Blue Brain Project, directed by Prof. Henry Markram in Lausanne, which seeks to create a detailed simulation of the human brain. The Graphene Flagship, based in Sweden, will explore uses of graphene for, among others, creation of nanotech-based supercomputers. The potential synergy between these projects is a source of great optimism.

The goal of the BAM Project is to elaborate the functional connectome of a live organism: that is, not only the static (axo-dendritic) connections but how they function in real-time as thinking and action unfold.

The European Flagship Human Brain Project will create the computational capability to simulate large, realistic neural networks. But to compare the model with reality, a real-time, functional, brain-wide connectome must also be created. Nanotech and neuroscience are mature enough to justify funding this proposal.

I highly recommend reading Blum’s technical description of neural spikes; understanding that concept, or any other in his post, doesn’t require an advanced degree. Note: Blum holds a number of degrees and diplomas, including an MD (neuroscience) from the University of California at San Francisco and a PhD in computer science and biostatistics from California’s Stanford University.
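To get a feel for why Blum calls the idea ‘insanely ambitious’, here’s a rough back-of-envelope calculation of my own (not something from the BAM proposal), using the ‘hundred billion neurons sampled on millisecond time scales’ figure quoted earlier,

neurons = 1e11             # "a hundred billion neurons"
samples_per_second = 1000  # millisecond time scale = 1,000 samples per second
bits_per_sample = 1        # crude lower bound: one bit per neuron per sample (spike or no spike)

bits_per_second = neurons * samples_per_second * bits_per_sample
terabytes_per_second = bits_per_second / 8 / 1e12
print(f"raw data rate: roughly {terabytes_per_second:.0f} terabytes per second, before any compression")

Real neurons fire only a few spikes per second on average, so storing spike times rather than dense samples would shrink the archive enormously, but the recording hardware would still have to watch every one of those channels at millisecond resolution, which is exactly where the nanotechnology comes in.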

The Human Brain Project has been mentioned here previously. The most recent mention is in a Jan. 28, 2013 posting about its newly gained status as one of two European Flagship initiatives (the other is the Graphene initiative), each meriting one billion euros of research funding over 10 years. Today, however, is the first time I’ve encountered the BAM project and I’m fascinated. Luckily, John Markoff’s Feb. 17, 2013 article for The New York Times provides some insight into this US initiative (Note: I have removed some links),

The Obama administration is planning a decade-long scientific effort to examine the workings of the human brain and build a comprehensive map of its activity, seeking to do for the brain what the Human Genome Project did for genetics.

The project, which the administration has been looking to unveil as early as March, will include federal agencies, private foundations and teams of neuroscientists and nanoscientists in a concerted effort to advance the knowledge of the brain’s billions of neurons and gain greater insights into perception, actions and, ultimately, consciousness.

Moreover, the project holds the potential of paving the way for advances in artificial intelligence.

What I find particularly interesting is the reference back to the human genome project, which may explain why BAM is also referred to as a ‘connectome’.

ETA Mar.6.13: I have found a Human Connectome Project Mar. 6, 2013 news release on EurekAlert, which leaves me confused. This does not seem to be related to BAM, although the articles about BAM did reference a ‘connectome’. At this point, I’m guessing that BAM and the ‘Human Connectome Project’ are two related but different projects and the reference to a ‘connectome’ in the BAM material is meant generically.  I previously mentioned the Human Connectome Project panel discussion held at the AAAS (American Association for the Advancement of Science) 2013 meeting in my Feb. 7, 2013 posting.

* Corrected EurkAlert to EurekAlert on June 14, 2013.

Existential risk

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question this raises, ‘what if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Price, Martin Rees (Emeritus Professor of Cosmology and Astrophysics), and Jaan Tallinn (co-founder of Skype) are the driving forces behind this proposed new centre at Cambridge University. From the Cambridge Project for Existential Risk webpage,

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake. …

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind.

Price and Tallinn co-wrote an Aug. 6, 2012 article for the Australia-based, The Conversation website, about their concerns,

We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”.

He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

Was Vinge right, and if so what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?

It appears Price, Rees, and Tallinn are not the only concerned parties, from the Nov. 25, 2012 research news piece on the Cambridge University website,

With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”

Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point.

According to the Huffington Post article by Hui, they expect to launch the centre next year (2013). In the meantime, for anyone looking for more information about the ‘intelligence explosion’, or ‘singularity’ as it’s also known, there’s a Wikipedia entry on the topic. Also, you may want to stay tuned to this channel (blog), as I expect to have some news later this week about an artificial intelligence project based at the University of Waterloo (Ontario, Canada) and headed by Chris Eliasmith at the university’s Centre for Theoretical Neuroscience.

Study AI at Stanford online and for free

I exaggerated a little with the headline. In fact, you’ll be studying the same materials, getting the same lectures, answering the same quizzes, and doing the same assignments as first-year students in the introductory artificial intelligence course taught by Stanford professors Sebastian Thrun and Peter Norvig, but you won’t officially be attending as a Stanford student.

I first came across this item at the Robot Shop blog. From there I went to the AI class website to find more information. From the AI class home page,

The class runs from October 10 through December 16, 2011. While this class is being offered online, it is also taught at Stanford University, where it continues to be a popular intro-level class on AI. For the online version, the instructors aim to offer identical materials, assignments, and exams, and to use the same grading criteria. Both instructors will be available for online discussions.

A high speed internet connection is recommended as most of the course content will be video based. Access to a copy of Artificial Intelligence: A Modern Approach is also suggested. Peter Norvig is co-author of this text and is donating all royalties to charity.

Here’s a little more about the two instructors,

Sebastian Thrun is a Research Professor of Computer Science at Stanford University, a Google Fellow, a member of the National Academy of Engineering and the German Academy of Sciences. Thrun is best known for his research in robotics and machine learning.

Fast Company Magazine selected him as the fifth most creative person in business, the UK Telegraph included him in their list of 100 living geniuses, and Popular Science included him in their list of Brilliant Ten. His self-driving car was named one of the 50 best inventions of 2010 by Time Magazine, and Scientific American named Thrun one of the 50 business and technology leaders. …

Peter Norvig is Director of Research at Google Inc. He is also a Fellow of the American Association for Artificial Intelligence and the Association for Computing Machinery.

Norvig co-authored Artificial Intelligence: A Modern Approach, which is the world’s most popular textbook on Artificial Intelligence. Artificial Intelligence: A Modern Approach is used in over 1,200 universities in over 100 countries, and it has been translated into 12 languages. Prior to joining Google, Norvig was the head of the Computational Sciences Division at NASA Ames Research Center, making him NASA’s senior computer scientist. …

Here’s a video about the course,

ETA Aug. 17, 2011: According to an Aug. 17, 2011 news item on physorg.com, the course has attracted 58,000 registrants so far,

Demand has been enormous. Already more than 58,000 people have expressed interest in the artificial intelligence course taught by Sebastian Thrun, a Stanford research professor of computer science and a Google Fellow, and Google Director of Research Peter Norvig.

In fact, there are two other free online courses also being offered: Machine Learning and Introduction to Databases.

Sept. 19, 2012 Note: I have removed what appeared to be some sort of excerpt which had been left blank.

Science in the British election and CASE; memristor and artificial intelligence; The Secret in Their Eyes, an allegory for post-Junta Argentina?

I’ve been meaning to mention the upcoming (May 6, 2010) British election for a while now, as I’ve seen notices of party manifestos that mention science (!), but it was one of Dave Bruggeman’s postings on Pasco Phronesis that tipped the balance for me. From his posting,

CaSE [Campaign for Science and Engineering] sent each party leader a letter asking for their positions with respect to science and technology issues. The Conservatives and the Liberal Democrats have responded so far (while the Conservative leader kept mum on science before the campaign, now it’s the Prime Minister who has yet to speak on it). Of the two letters, the Liberal Democrats have offered more detailed proposals than the Conservatives, and the Liberal Democrats have also addressed issues of specific interest to the U.K. scientific community to a much greater degree.

(These letters are in addition to the party manifestos which each mention science.) I strongly recommend the post as Bruggeman goes on to give a more detailed analysis and offer a few speculations.

The Liberal Democrats offer a more comprehensive statement, but they are a third party that gained an unexpected burst of support after the first national debate. Of course, the second debate (to be held around noon PT today), or something else for that matter, could change all that.

I did look at the CaSE site, which provides an impressive portfolio of election-related materials on its home page. Before getting to the organization’s mission statement, you might find its history instructive,

CaSE was launched in March 2005, evolving out of its predecessor Save British Science [SBS]. …

SBS was founded in 1986, following the placement of an advertisement in The Times newspaper. The idea came from a small group of university scientists brought together by a common concern about the difficulties they were facing in obtaining the funds for first class research.

The original plan was simply to buy a half-page advertisement in The Times to make the point, and the request for funds was spread via friends and colleagues in other universities. The response was overwhelming. Within a few weeks about 1500 contributors, including over 100 Fellows of the Royal Society and most of the British Nobel prize winners, had sent more than twice the sum needed. The advertisement appeared on 13th January 1986, and the balance of the money raised was used to found the Society, taking as its name the title of the advertisement.

Now for their mission statement,

CaSE is now an established feature of the science and technology policy scene, supported among universities and the learned societies, and able to attract media attention. We are accepted by Government as an organisation able to speak for a wide section of the science and engineering community in a constructive but also critical and forceful manner. We are free to speak without the restraints felt by learned societies and similar bodies, and it is good for Government to know someone is watching closely.

I especially like the bit where they feel it’s “good for Government” to know someone is watching.

The folks at the Canadian Science Policy Centre (CSPC) are also providing information about the British election and science. As you’d expect it’s not nearly as comprehensive but, if you’re interested, you can check out the CSPC home page.

I haven’t had a chance to read the manifestos and other materials closely enough to offer much comment. It is refreshing, though, to see science mentioned by all the parties during the election, as opposed to having it dismissed as a ’boutique issue’, as an assistant to my local (Canadian) Member of Parliament once described it to me.

Memristors and artificial intelligence

The memristor story has ‘legs’, as they say. This morning I found an in-depth story by Michael Berger on Nanowerk titled, Nanotechnology’s Road to Artificial Brains, where he interviews Dr. Wei Lu about his work with memristors and neural synapses (mentioned previously on this blog here). Coincidentally, I received a comment yesterday from Blaise Mouttet about an article he’d posted on Google in September 2009 titled, Memistors, Memristors, and the Rise of Strong Artificial Intelligence.

Berger’s story focuses on a specific piece of research and possible future applications. From the Nanowerk story,

If you think that building an artificial human brain is science fiction, you are probably right – for now. But don’t think for a moment that researchers are not working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.

One of the key components of any neuromorphic effort is the design of artificial synapses. The human brain contains vastly more synapses than neurons – by a factor of about 10,000 – and therefore it is necessary to develop a nanoscale, low power, synapse-like device if scientists want to scale neuromorphic circuits towards the human brain level.

Berger goes on to explain how Lu’s work with memristors relates to this larger enterprise which is being pursued by many scientists around the world.

By contrast, Mouttet offers an historical context for the work on memristors, along with a precise technical explanation of the devices and of why they are applicable to work in artificial intelligence. From Mouttet’s essay,

… memristive systems integrate data storage and data processing capabilities in a single device which offers the potential to more closely emulate the capabilities of biological intelligence.

If you are interested in exploring further, I suggest starting with Mouttet’s article, as it lays the groundwork for a better understanding of memristors, and then moving on to Berger’s story about artificial neural synapses.
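For readers who want something more concrete than prose, here is a toy sketch in Python of the idea in Mouttet’s quote above: a single element that both stores a weight and processes a signal. It is my own illustration, not Dr. Lu’s device physics or Mouttet’s formal treatment; the ‘conductance’ is the stored state, the output current is the processing, and the state drifts with the history of applied voltages,

class ToySynapse:
    """A cartoon memristive synapse: one element that both remembers and computes."""
    def __init__(self, conductance=0.5, rate=0.01):
        self.g = conductance   # stored state (the 'weight'), kept between 0 and 1
        self.rate = rate       # how strongly each pulse reshapes the stored state

    def apply(self, voltage):
        current = self.g * voltage                                   # processing: output depends on stored state
        self.g = min(1.0, max(0.0, self.g + self.rate * voltage))    # memory: state depends on input history
        return current

syn = ToySynapse()
for v in [1, 1, 1, -1, 1]:   # a short history of positive and negative pulses
    print(f"voltage={v:+d}  current={syn.apply(v):+.3f}  conductance afterwards={syn.g:.3f}")

The point of the toy is simply that identical input pulses produce different outputs depending on what came before, which is the storage-plus-processing behaviour Mouttet describes, compressed into a few lines.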

The secret in their eyes (movie review)

I woke up at 6 am the other morning thinking about a movie I saw this past Sunday (April 18, 2010). That doesn’t often happen to me, especially as I get more jaded with time, but something about ‘The Secret in Their Eyes’, the Argentinean movie that won this year’s Oscar for Best Foreign Language Film, woke me up.

Before going further, a précis of the story: a retired man (in his late 50s?) is trying to write a novel based on a rape/homicide case that he investigated in the mid-1970s. He’s haunted by it and spends much of the movie calling back memories of both the case and a love he tried to bury. Writing his ‘novel’ compels him to reinvestigate the case (he was an investigator for the judge) and to reestablish contact with the victim’s grief-stricken husband and with the woman he loved, who was his boss (the judge) and who came from a more prestigious social class.

The movie offers some comedy, although it can mostly be described as a thriller, a procedural, and a love story. It can also be seen as an allegory. The victim represents Argentina as a country. The criminal’s treatment (he gets rewarded, initially) represents how the military junta controlled Argentina after Juan Peron’s death in 1974. It seemed to me that much of this movie was an investigation of how people cope with and recover (or don’t) from a hugely traumatic experience.

I don’t know much about Argentina and I have no Spanish language skills (other than recognizing an occasional word when it sounds like a French one). Consequently, this history is fairly sketchy and derived from secondary and tertiary sources. In the 1950s, Juan Peron (a former member of the military) led a very repressive regime that was eventually pushed out of office. By the 1970s he had been asked to return, which he did. He died in 1974, and sometime afterward a military junta took control of the government. Amongst other measures, they kidnapped thousands of people (usually young and often students, teachers [the victim in the movie is a teacher], political activists/enemies, and countless others) and ‘disappeared’ them.

Much of the population tried to ignore or hide from what was going on. A  documentary released in the US  in 1985, Las Madres de la Plaza de Mayo, details the story of a group of middle-class women who are moved to protest, after years of trying to endure, when their own children are ‘disappeared’.

In the movie we see what happens when bullies take control. The criminal gets rewarded, the investigator/writer is sent away for his own protection after a colleague becomes collateral damage, the judge’s family name protects her, and the grieving husband has to find his own way of dealing with the situation.

The movie offers both a gothic twist towards the end and a very moving perspective on how one deals with the guilt for one’s complicity and for one’s survival.

ETA: (April 27, 2010) One final insight: the movie suggests that art/creative endeavours such as writing a novel (or making a movie?) can be a means of confession, redemption, and/or healing past wounds.

I think what makes the movie so good is the number of readings that are possible. You can take a look at some of what other reviewers had to say: Katherine Monk at the Vancouver Sun, Curtis Woloschuk at the Westender, and Ken Eisner at the Georgia Straight.

Kudos to the director and screenwriter, Juan José Campanella, and to the leads: Ricardo Darín (investigator/writer), Soledad Villamil (judge), Pablo Rago (husband), Javier Godino (criminal), Guillermo Francella (colleague who becomes collateral damage), and all of the other actors in the company. Even the smallest role was beautifully realized.

One final thing: whoever translated and wrote the subtitles should get an award. I don’t know how the person did it, but the use of language is brilliant. I’ve never before seen subtitles that managed to convey the flavour of the verbal exchanges taking place on screen.

I liked the movie, eh?