Tag Archives: artificial intelligence

KAIST (Korea Advanced Institute of Science and Technology) will lead an Ideas Lab at 2016 World Economic Forum

The theme for the 2016 World Economic Forum (WEF) is ‘Mastering the Fourth Industrial Revolution’. I’m losing track of how many industrial revolutions we’ve had and this seems like a vague theme. However, there is enlightenment to be had in this Nov. 17, 2015 Korea Advanced Institute of Science and Technology (KAIST) news release on EurekAlert,

KAIST researchers will lead an IdeasLab on biotechnology for an aging society while HUBO, the winner of the 2015 DARPA Robotics Challenge, will interact with the forum participants, offering an experience of state-of-the-art robotics technology

Moving on from the news release’s subtitle, there’s more enlightenment,

Representatives from the Korea Advanced Institute of Science and Technology (KAIST) will attend the 2016 Annual Meeting of the World Economic Forum to run an IdeasLab and showcase its humanoid robot.

With over 2,500 leaders from business, government, international organizations, civil society, academia, media, and the arts expected to participate, the 2016 Annual Meeting will take place on Jan. 20-23, 2016 in Davos-Klosters, Switzerland. Under the theme of ‘Mastering the Fourth Industrial Revolution,’ [emphasis mine] global leaders will discuss the period of digital transformation [emphasis mine] that will have profound effects on economies, societies, and human behavior.

President Sung-Mo Steve Kang of KAIST will join the Global University Leaders Forum (GULF), a high-level academic meeting to foster collaboration among experts on issues of global concern for the future of higher education and the role of science in society. He will discuss how the emerging revolution in technology will affect the way universities operate and serve society. KAIST is the only Korean university participating in GULF, which is composed of prestigious universities invited from around the world.

Four KAIST professors, including Distinguished Professor Sang Yup Lee of the Chemical and Biomolecular Engineering Department, will lead an IdeasLab on ‘Biotechnology for an Aging Society.’

Professor Lee said, “In recent decades, much attention has been paid to the potential effect of the growth of an aging population and problems posed by it. At our IdeasLab, we will introduce some of our research breakthroughs in biotechnology to address the challenges of an aging society.”

In particular, he will present his latest research in systems biotechnology and metabolic engineering. His research has explained the mechanisms of how traditional Oriental medicine works in our bodies by identifying structural similarities between effective compounds in traditional medicine and human metabolites, and has proposed more effective treatments by employing such compounds.

KAIST will also display its networked mobile medical service system, ‘Dr. M.’ Built upon a ubiquitous and mobile Internet, such as the Internet of Things, wearable electronics, and smart homes and vehicles, Dr. M will provide patients with a more affordable and accessible healthcare service.

In addition, Professor Jun-Ho Oh of the Mechanical Engineering Department will showcase his humanoid robot, ‘HUBO,’ during the Annual Meeting. His research team won the International Humanoid Robotics Challenge hosted by the United States Defense Advanced Research Projects Agency (DARPA), which was held in Pomona, California, on June 5-6, 2015. With 24 international teams participating in the finals, HUBO completed all eight tasks in 44 minutes and 28 seconds, 6 minutes earlier than the runner-up, and almost 11 minutes earlier than the third-place team. Team KAIST walked away with the grand prize of USD 2 million.

Professor Oh said, “Robotics technology will grow exponentially in this century, becoming a real driving force to expedite the Fourth Industrial Revolution. I hope HUBO will offer an opportunity to learn about the current advances in robotics technology.”

President Kang pointed out, “KAIST has participated in the Annual Meeting of the World Economic Forum since 2011 and has engaged with a broad spectrum of global leaders through numerous presentations and demonstrations of our excellence in education and research. Next year, we will choreograph our first robotics exhibition on HUBO and present high-tech research results in biotechnology, which, I believe, epitomizes how science and technology breakthroughs in the Fourth Industrial Revolution will shape our future in an unprecedented way.”

Based on what I’m reading in the KAIST news release, I think the conversation about the ‘Fourth Industrial Revolution’ may veer toward robotics and artificial intelligence (coyly referred to as “digital transformation”) as developments in these fields are likely to affect various economies. Before proceeding with that thought, take a look at this video showcasing HUBO at the DARPA challenge,

I’m quite impressed with how the robot can recalibrate its grasp so it can pick things up and plug an electrical cord into an outlet, and with how it knows whether wheels or legs will be needed to complete a task, all thanks to algorithms which give the robot a type of artificial intelligence. While it may seem more like a machine than anything else, there’s also this version of a HUBO,

English: Photo by David Hanson
Date 26 October 2006 (original upload date)
Source Transferred from en.wikipedia to Commons by Mac.
Author Dayofid at English Wikipedia

It’ll be interesting to see whether the researchers make HUBO seem more humanoid by giving it a face for its interactions with WEF attendees. That would be more engaging but also more threatening, since there is increasing concern over robots taking work away from humans, with implications for various economies. There’s more about HUBO in its Wikipedia entry.

As for the IdeasLab, that’s been in place at the WEF since 2009 according to this WEF July 19, 2011 news release announcing an IdeasLab hub (Note: A link has been removed),

The World Economic Forum is publicly launching its biannual interactive IdeasLab hub on 19 July [2011] at 10.00 CEST. The unique IdeasLab hub features short documentary-style, high-definition (HD) videos of preeminent 21st century ideas and critical insights. The hub also provides dynamic Pecha Kucha presentations and visual IdeaScribes that trace and package complex strategic thinking into engaging and powerful images. All videos are HD broadcast quality.

To share the knowledge captured by the IdeasLab sessions, which have been running since 2009, the Forum is publishing 23 of the latest sessions, seen as the global benchmark of collaborative learning and development.

So while you might not be able to visit an IdeasLab presentation at the WEF meetings, you can still get a chance to see the sessions later.

Getting back to the robotics and artificial intelligence aspect of the 2016 WEF’s ‘digital’ theme, I noticed some reluctance to discuss how the field of robotics is affecting work and jobs in a broadcast of the Canadian television show ‘Conversations with Conrad’.

For those unfamiliar with the interviewer, Conrad Black is somewhat infamous in Canada for a number of reasons (from the Conrad Black Wikipedia entry), Note: Links have been removed,

Conrad Moffat Black, Baron Black of Crossharbour, KSG (born 25 August 1944) is a Canadian-born British former newspaper publisher and author. He is a non-affiliated life peer, and a convicted felon in the United States for fraud.[n 1] Black controlled Hollinger International, once the world’s third-largest English-language newspaper empire,[3] which published The Daily Telegraph (UK), Chicago Sun Times (U.S.), The Jerusalem Post (Israel), National Post (Canada), and hundreds of community newspapers in North America, before he was fired by the board of Hollinger in 2004.[4]

In 2004, a shareholder-initiated prosecution of Black began in the United States. Over $80 million in assets were claimed to have been improperly taken or inappropriately spent by Black.[5] He was convicted of three counts of fraud and one count of obstruction of justice in a U.S. court in 2007 and sentenced to six and a half years’ imprisonment. In 2011 two of the charges were overturned on appeal and he was re-sentenced to 42 months in prison on one count of mail fraud and one count of obstruction of justice.[6] Black was released on 4 May 2012.[7]

Despite or perhaps because of his chequered past, he is often a good interviewer and he definitely attracts interesting guests. In an Oct. 26, 2015 programme, he interviewed both former Canadian astronaut Chris Hadfield and Canadian-American David Frum, currently a senior editor at The Atlantic and a former speechwriter for George W. Bush.

It was Black’s conversation with Frum which surprised me. They discuss robotics without ever once using the word. In a section where Frum notes that manufacturing is returning to the US, he also notes that it doesn’t mean more jobs and cites a newly commissioned plant in the eastern US employing about 40 people where before it would have employed hundreds or thousands. Unfortunately, the video has not been made available as I write this (Nov. 20, 2015) but that situation may change. You can check here.

Final thought: my guess is that economic conditions are fragile and I don’t think anyone wants to set off a panic by mentioning robotics and disappearing jobs.

US White House’s grand computing challenge could mean a boost for research into artificial intelligence and brains

An Oct. 20, 2015 posting by Lynn Bergeson on Nanotechnology Now announces a US White House challenge incorporating nanotechnology, computing, and brain research (Note: A link has been removed),

On October 20, 2015, the White House announced a grand challenge to develop transformational computing capabilities by combining innovations in multiple scientific disciplines. See https://www.whitehouse.gov/blog/2015/10/15/nanotechnology-inspired-grand-challenge-future-computing The Office of Science and Technology Policy (OSTP) states that, after considering over 100 responses to its June 17, 2015, request for information, it “is excited to announce the following grand challenge that addresses three Administration priorities — the National Nanotechnology Initiative, the National Strategic Computing Initiative (NSCI), and the BRAIN initiative.” The grand challenge is to “[c]reate a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.”

Here’s where the Oct. 20, 2015 posting by Lloyd Whitman, Randy Bryant, and Tom Kalil for the US White House blog, which originated the news item, gets interesting,

 While it continues to be a national priority to advance conventional digital computing—which has been the engine of the information technology revolution—current technology falls far short of the human brain in terms of both the brain’s sensing and problem-solving abilities and its low power consumption. Many experts predict that fundamental physical limitations will prevent transistor technology from ever matching these twin characteristics. We are therefore challenging the nanotechnology and computer science communities to look beyond the decades-old approach to computing based on the Von Neumann architecture as implemented with transistor-based processors, and chart a new path that will continue the rapid pace of innovation beyond the next decade.

There are growing problems facing the Nation that the new computing capabilities envisioned in this challenge might address, from delivering individualized treatments for disease, to allowing advanced robots to work safely alongside people, to proactively identifying and blocking cyber intrusions. To meet this challenge, major breakthroughs are needed not only in the basic devices that store and process information and the amount of energy they require, but in the way a computer analyzes images, sounds, and patterns; interprets and learns from data; and identifies and solves problems. [emphases mine]

Many of these breakthroughs will require new kinds of nanoscale devices and materials integrated into three-dimensional systems and may take a decade or more to achieve. These nanotechnology innovations will have to be developed in close coordination with new computer architectures, and will likely be informed by our growing understanding of the brain—a remarkable, fault-tolerant system that consumes less power than an incandescent light bulb.

Recent progress in developing novel, low-power methods of sensing and computation—including neuromorphic, magneto-electronic, and analog systems—combined with dramatic advances in neuroscience and cognitive sciences, lead us to believe that this ambitious challenge is now within our reach. …

This is the first time I’ve come across anything that publicly links the BRAIN initiative to computing, artificial intelligence, and artificial brains. (For my own sake, I make an arbitrary distinction between algorithms [artificial intelligence] and devices that simulate neural plasticity [artificial brains].) The emphasis in the past has always been on new strategies for dealing with Parkinson’s and other neurological diseases and conditions.

D-Wave upgrades Google’s quantum computing capabilities

Vancouver-based (more accurately, Burnaby-based) D-Wave Systems has scored a coup as key customers have upgraded from a 512-qubit system to a system with over 1,000 qubits. (The technical breakthrough and concomitant interest from the business community was mentioned here in a June 26, 2015 posting.) As for the latest business breakthrough, here’s more from a Sept. 28, 2015 D-Wave press release,

D-Wave Systems Inc., the world’s first quantum computing company, announced that it has entered into a new agreement covering the installation of a succession of D-Wave systems located at NASA’s Ames Research Center in Moffett Field, California. This agreement supports collaboration among Google, NASA and USRA (Universities Space Research Association) that is dedicated to studying how quantum computing can advance artificial intelligence and machine learning, and the solution of difficult optimization problems. The new agreement enables Google and its partners to keep their D-Wave system at the state-of-the-art for up to seven years, with new generations of D-Wave systems to be installed at NASA Ames as they become available.

“The new agreement is the largest order in D-Wave’s history, and indicative of the importance of quantum computing in its evolution toward solving problems that are difficult for even the largest supercomputers,” said D-Wave CEO Vern Brownell. “We highly value the commitment that our partners have made to D-Wave and our technology, and are excited about the potential use of our systems for machine learning and complex optimization problems.”

Cade Metz’s Sept. 28, 2015 article for Wired magazine provides some interesting observations about D-Wave computers along with some explanations of quantum computing (Note: Links have been removed),

Though the D-Wave machine is less powerful than many scientists hope quantum computers will one day be, the leap to 1000 qubits represents an exponential improvement in what the machine is capable of. What is it capable of? Google and its partners are still trying to figure that out. But Google has said it’s confident there are situations where the D-Wave can outperform today’s non-quantum machines, and scientists at the University of Southern California [USC] have published research suggesting that the D-Wave exhibits behavior beyond classical physics.

A quantum computer operates according to the principles of quantum mechanics, the physics of very small things, such as electrons and photons. In a classical computer, a transistor stores a single “bit” of information. If the transistor is “on,” it holds a 1, and if it’s “off,” it holds a 0. But in a quantum computer, thanks to what’s called the superposition principle, information is held in a quantum system that can exist in two states at the same time. This “qubit” can store a 0 and 1 simultaneously.

Two qubits, then, can hold four values at any given time (00, 01, 10, and 11). And as you keep increasing the number of qubits, you exponentially increase the power of the system. The problem is that building a qubit is an extremely difficult thing. If you read information from a quantum system, it “decoheres.” Basically, it turns into a classical bit that houses only a single value.
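As a rough illustration of that exponential scaling (my own toy sketch, not anything from D-Wave or the Wired article), simulating n qubits on a classical machine means tracking 2^n complex amplitudes, which is why even a modest number of qubits quickly outgrows conventional memory:

```python
# Toy illustration: a classical simulation of n qubits must track 2**n
# complex amplitudes, so the descriptive power grows exponentially with
# the number of qubits.
import numpy as np

def state_vector_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits."""
    return 2 ** n_qubits

for n in (1, 2, 10, 50):
    amplitudes = state_vector_size(n)
    gigabytes = amplitudes * 16 / 1e9   # 16 bytes per complex128 amplitude
    print(f"{n:>2} qubits -> {amplitudes:.3e} amplitudes (~{gigabytes:.3e} GB)")

# Two qubits: the four basis states 00, 01, 10 and 11 held at once.
two_qubit_state = np.full(4, 0.5, dtype=complex)   # equal superposition
print("Two-qubit amplitudes:", two_qubit_state)
```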

D-Wave claims to have found a solution to the decoherence problem and that appears to be borne out by the USC researchers. Still, it isn’t a general quantum computer (from Metz’s article),

… researchers at USC say that the system appears to display a phenomenon called “quantum annealing” that suggests it’s truly operating in the quantum realm. Regardless, the D-Wave is not a general quantum computer—that is, it’s not a computer for just any task. But D-Wave says the machine is well-suited to “optimization” problems, where you’re facing many, many different ways forward and must pick the best option, and to machine learning, where computers teach themselves tasks by analyzing large amounts of data.

It takes a lot of innovation before you make big strides forward and I think D-Wave is to be congratulated on producing what is to my knowledge the only commercially available form of quantum computing of any sort in the world.

ETA Oct. 6, 2015* at 1230 hours PST: Minutes after publishing about D-Wave I came across this item (h/t Quirks & Quarks twitter) about Australian researchers and their quantum computing breakthrough. From an Oct. 6, 2015 article by Hannah Francis for the Sydney (Australia) Morning Herald,

For decades scientists have been trying to turn quantum computing — which allows for multiple calculations to happen at once, making it immeasurably faster than standard computing — into a practical reality rather than a moonshot theory. Until now, they have largely relied on “exotic” materials to construct quantum computers, making them unsuitable for commercial production.

But researchers at the University of New South Wales have patented a new design, published in the scientific journal Nature on Tuesday, created specifically with computer industry manufacturing standards in mind and using affordable silicon, which is found in regular computer chips like those we use every day in smartphones or tablets.

“Our team at UNSW has just cleared a major hurdle to making quantum computing a reality,” the director of the university’s Australian National Fabrication Facility, Andrew Dzurak, the project’s leader, said.

“As well as demonstrating the first quantum logic gate in silicon, we’ve also designed and patented a way to scale this technology to millions of qubits using standard industrial manufacturing techniques to build the world’s first quantum processor chip.”

According to the article, the university is looking for industrial partners to help it exploit this breakthrough. Francis’s article features an embedded video, as well as more detail.

*It was Oct. 6, 2015 in Australia but Oct. 5, 2015 my side of the international date line.

ETA Oct. 6, 2015 (my side of the international date line): An Oct. 5, 2015 University of New South Wales news release on EurekAlert provides additional details.

Here’s a link to and a citation for the paper,

A two-qubit logic gate in silicon by M. Veldhorst, C. H. Yang, J. C. C. Hwang, W. Huang, J. P. Dehollain, J. T. Muhonen, S. Simmons, A. Laucht, F. E. Hudson, K. M. Itoh, A. Morello & A. S. Dzurak. Nature (2015) doi:10.1038/nature15263 Published online 05 October 2015

This paper is behind a paywall.

AI assistant makes scientific discovery at Tufts University (US)

In light of this latest research from Tufts University, I thought it might be interesting to review the “algorithms, artificial intelligence (AI), robots, and world of work” situation before moving on to Tufts’ latest science discovery. My Feb. 5, 2015 post provides a roundup of sorts regarding work and automation. For those who’d like the latest, there’s a May 29, 2015 article by Sophie Weiner for Fast Company featuring a predictive interactive tool, designed by NPR (US National Public Radio) and based on data from Oxford University researchers, which estimates how likely it is that your job could be automated (no one knows for sure) (Note: A link has been removed),

Paralegals and food service workers: the robots are coming.

So suggests this interactive visualization by NPR. The bare-bones graphic lets you select a profession, from tellers and lawyers to psychologists and authors, to determine who is most at risk of losing their jobs in the coming robot revolution. From there, it spits out a percentage. …

You can find the interactive NPR tool here. I checked out the scientist category (in descending order of danger: Historians [43.9%], Economists, Geographers, Survey Researchers, Epidemiologists, Chemists, Animal Scientists, Sociologists, Astronomers, Social Scientists, Political Scientists, Materials Scientists, Conservation Scientists, and Microbiologists [1.2%]), none of whom seem to be in imminent danger if you consider that bookkeepers are rated at 97.6%.

Here at last is the news from Tufts (from a June 4, 2015 Tufts University news release, also on EurekAlert),

An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria–the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.

The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for over 100 years. The work, published in PLOS Computational Biology, demonstrates how “robot science” can help human scientists in the future.

To mine the fast-growing mountain of published experimental data in regeneration and developmental biology, Daniel Lobo and Michael Levin developed an algorithm that would use evolutionary computation to produce regulatory networks able to “evolve” to accurately predict the results of published laboratory experiments that the researchers entered into a database.

“Our goal was to identify a regulatory network that could be executed in every cell in a virtual worm so that the head-tail patterning outcomes of simulated experiments would match the published data,” Lobo said.

The paper represents a successful application of the growing field of “robot science” – which Levin says can help human researchers by doing much more than crunch enormous datasets quickly.

“While the artificial intelligence in this project did have to do a whole lot of computations, the outcome is a theory of what the worm is doing, and coming up with theories of what’s going on in nature is pretty much the most creative, intuitive aspect of the scientist’s job,” Levin said. “One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data.”
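To make the ‘evolving’ part a little more concrete, here is a minimal sketch of the general technique (evolutionary computation searching for a network that reproduces a database of experiment outcomes). Everything in it, from the toy scoring rule to the variable names, is my own illustrative assumption and not the Lobo-Levin software:

```python
# Minimal sketch of evolutionary computation over candidate "regulatory
# networks", scored by how well they reproduce a database of recorded
# experiment outcomes. Toy problem and toy encoding; an illustration of
# the general technique, not the authors' actual model.
import random

random.seed(42)

N_GENES = 8            # size of each candidate network (toy)
POP_SIZE = 50
GENERATIONS = 200

# Fake "published experiments": (perturbation vector, observed outcome).
experiments = [([random.choice([-1, 0, 1]) for _ in range(N_GENES)],
                random.choice([0, 1])) for _ in range(40)]

def predict(network, perturbation):
    """Toy prediction rule: sign of the weighted perturbation."""
    return 1 if sum(w * p for w, p in zip(network, perturbation)) > 0 else 0

def fitness(network):
    """Fraction of recorded experiments the candidate network reproduces."""
    return sum(predict(network, p) == outcome
               for p, outcome in experiments) / len(experiments)

def mutate(network):
    child = list(network)
    child[random.randrange(N_GENES)] += random.gauss(0, 0.5)
    return child

# Evolutionary loop: keep the best half, refill with mutated copies.
population = [[random.gauss(0, 1) for _ in range(N_GENES)]
              for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(f"Best candidate reproduces {fitness(best):.0%} of the toy experiments")
```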

Here’s a link to and a citation for the paper,

Inferring Regulatory Networks from Experimental Morphological Phenotypes: A Computational Method Reverse-Engineers Planarian Regeneration by Daniel Lobo and Michael Levin. PLOS Computational Biology DOI: 10.1371/journal.pcbi.1004295 Published: June 4, 2015

This paper is open access.

It will be interesting to see if attributing the discovery to an algorithm sets off criticism suggesting that the researchers overstated the role the AI assistant played.

Memristor, memristor, you are popular

Regular readers know I have a long-standing interest in memristors and artificial brains. For this post, I have three memristor-related pieces of research published in the last month or so.

First, there’s some research into nano memory at RMIT University, Australia, and the University of California at Santa Barbara (UC Santa Barbara). From a May 12, 2015 news item on ScienceDaily,

RMIT University researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell.

Researchers at the MicroNano Research Facility (MNRF) have built one of the world’s first electronic multi-state memory cells, which mirrors the brain’s ability to simultaneously process and store multiple strands of information.

The development brings them closer to imitating key electronic aspects of the human brain — a vital step towards creating a bionic brain — which could help unlock successful treatments for common neurological conditions such as Alzheimer’s and Parkinson’s diseases.

A May 11, 2015 RMIT University news release, which originated the news item, reveals more about the researchers’ excitement and about the research,

“This is the closest we have come to creating a brain-like system with memory that learns and stores analog information and is quick at retrieving this stored information,” Dr Sharath said.

“The human brain is an extremely complex analog computer… its evolution is based on its previous experiences, and up until now this functionality has not been able to be adequately reproduced with digital technology.”

The ability to create highly dense and ultra-fast analog memory cells paves the way for imitating highly sophisticated biological neural networks, he said.

The research builds on RMIT’s previous discovery where ultra-fast nano-scale memories were developed using a functional oxide material in the form of an ultra-thin film – 10,000 times thinner than a human hair.

Dr Hussein Nili, lead author of the study, said: “This new discovery is significant as it allows the multi-state cell to store and process information in the very same way that the brain does.

“Think of an old camera which could only take pictures in black and white. The same analogy applies here, rather than just black and white memories we now have memories in full color with shade, light and texture, it is a major step.”

While these new devices are able to store much more information than conventional digital memories (which store just 0s and 1s), it is their brain-like ability to remember and retain previous information that is exciting.

“We have now introduced controlled faults or defects in the oxide material along with the addition of metallic atoms, which unleashes the full potential of the ‘memristive’ effect – where the memory element’s behaviour is dependent on its past experiences,” Dr Nili said.

Nano-scale memories are precursors to the storage components of the complex artificial intelligence network needed to develop a bionic brain.

Dr Nili said the research had myriad practical applications including the potential for scientists to replicate the human brain outside of the body.

“If you could replicate a brain outside the body, it would minimise ethical issues involved in treating and experimenting on the brain which can lead to better understanding of neurological conditions,” Dr Nili said.

The research, supported by the Australian Research Council, was conducted in collaboration with the University of California Santa Barbara.

Here’s a link to and a citation for this memristive nano device,

Donor-Induced Performance Tuning of Amorphous SrTiO3 Memristive Nanodevices: Multistate Resistive Switching and Mechanical Tunability by Hussein Nili, Sumeet Walia, Ahmad Esmaielzadeh Kandjani, Rajesh Ramanathan, Philipp Gutruf, Taimur Ahmed, Sivacarendran Balendhran, Vipul Bansal, Dmitri B. Strukov, Omid Kavehei, Madhu Bhaskaran, and Sharath Sriram. Advanced Functional Materials DOI: 10.1002/adfm.201501019 Article first published online: 14 APR 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

The second published piece of memristor-related research comes from a UC Santa Barbara and Stony Brook University (New York state) team but is being publicized by UC Santa Barbara. From a May 11, 2015 news item on Nanowerk (Note: A link has been removed),

In what marks a significant step forward for artificial intelligence, researchers at UC Santa Barbara have demonstrated the functionality of a simple artificial neural circuit (Nature, “Training and operation of an integrated neuromorphic network based on metal-oxide memristors”). For the first time, a circuit of about 100 artificial synapses was proved to perform a simple version of a typical human task: image classification.

A May 11, 2015 UC Santa Barbara news release (also on EurekAlert) by Sonia Fernandez, which originated the news item, situates this development within the ‘artificial brain’ effort while describing it in more detail (Note: A link has been removed),

“It’s a small, but important step,” said Dmitri Strukov, a professor of electrical and computer engineering. With time and further progress, the circuitry may eventually be expanded and scaled to approach something like the human brain’s, which has 10^15 (one quadrillion) synaptic connections.

For all its errors and potential for faultiness, the human brain remains a model of computational power and efficiency for engineers like Strukov and his colleagues, Mirko Prezioso, Farnood Merrikh-Bayat, Brian Hoskins and Gina Adam. That’s because the brain can accomplish in a fraction of a second certain functions that computers would require far more time and energy to perform.

… As you read this, your brain is making countless split-second decisions about the letters and symbols you see, classifying their shapes and relative positions to each other and deriving different levels of meaning through many channels of context, in as little time as it takes you to scan over this print. Change the font, or even the orientation of the letters, and it’s likely you would still be able to read this and derive the same meaning.

In the researchers’ demonstration, the circuit implementing the rudimentary artificial neural network was able to successfully classify three letters (“z”, “v” and “n”) by their images, each letter stylized in different ways or saturated with “noise”. In a process similar to how we humans pick our friends out from a crowd, or find the right key from a ring of similar keys, the simple neural circuitry was able to correctly classify the simple images.
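To give a sense of the scale of that demonstration, here is a purely software analogue of the same kind of task: a toy single-layer network classifying noisy 3×3 letter patterns. The sketch is entirely my own illustration (made-up templates and training loop); the actual work implemented the network in a crossbar of roughly 100 memristive synapses rather than in code.

```python
# Toy software analogue of the demo: classify small, noisy letter patterns
# with a single-layer network trained by the delta rule. Illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Crude 3x3 binary templates for 'z', 'v' and 'n' (flattened to 9 inputs).
letters = {
    "z": [1, 1, 1,
          0, 1, 0,
          1, 1, 1],
    "v": [1, 0, 1,
          1, 0, 1,
          0, 1, 0],
    "n": [1, 1, 1,
          1, 0, 1,
          1, 0, 1],
}
X = np.array(list(letters.values()), dtype=float)
targets = np.eye(3)                  # one-hot target for each class
weights = rng.normal(0, 0.1, size=(9, 3))

def noisy(pattern, flips=1):
    """Flip a pixel or two to simulate a stylized or noisy version."""
    p = pattern.copy()
    idx = rng.choice(p.size, size=flips, replace=False)
    p[idx] = 1 - p[idx]
    return p

# Training loop: nudge the weights toward the correct one-hot output,
# analogous in spirit to adjusting the memristor conductances.
for _ in range(2000):
    i = rng.integers(3)
    x = noisy(X[i])
    weights += 0.05 * np.outer(x, targets[i] - x @ weights)

# Test on fresh noisy versions of each letter.
names = list(letters)
for i, name in enumerate(names):
    guess = names[int(np.argmax(noisy(X[i]) @ weights))]
    print(f"noisy '{name}' classified as '{guess}'")
```

Back to the UC Santa Barbara news release,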

“While the circuit was very small compared to practical networks, it is big enough to prove the concept of practicality,” said Merrikh-Bayat. According to Gina Adam, as interest grows in the technology, so will research momentum.

“And, as more solutions to the technological challenges are proposed the technology will be able to make it to the market sooner,” she said.

Key to this technology is the memristor (a combination of “memory” and “resistor”), an electronic component whose resistance changes depending on the direction of the flow of the electrical charge. Unlike conventional transistors, which rely on the drift and diffusion of electrons and their holes through semiconducting material, memristor operation is based on ionic movement, similar to the way human neural cells generate neural electrical signals.

“The memory state is stored as a specific concentration profile of defects that can be moved back and forth within the memristor,” said Strukov. The ionic memory mechanism brings several advantages over purely electron-based memories, which makes it very attractive for artificial neural network implementation, he added.

“For example, many different configurations of ionic profiles result in a continuum of memory states and hence analog memory functionality,” he said. “Ions are also much heavier than electrons and do not tunnel easily, which permits aggressive scaling of memristors without sacrificing analog properties.”
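For those who want the ‘memory element whose behaviour depends on its past’ idea in more concrete terms, the sketch below uses the simple linear ion-drift toy model that often appears in textbooks: the resistance depends on an internal state that integrates the current that has flowed through the device. It is my own illustrative assumption, not the device physics of the metal-oxide cells in this paper.

```python
# Toy linear ion-drift memristor model: resistance depends on an internal
# state w (between 0 and 1) that integrates the current that has flowed
# through the device, so the element "remembers" its history. Illustration
# only; not the physics of the actual metal-oxide devices.
import numpy as np

R_ON, R_OFF = 100.0, 16e3    # ohms: fully "on" vs fully "off" resistance
MU = 1e-14                   # drift coefficient (arbitrary toy value)
D = 1e-8                     # device thickness in metres (toy value)

def simulate(voltage, dt=1e-3, w0=0.1):
    """Return the resistance over time for a driving voltage waveform."""
    w = w0
    resistance = []
    for v in voltage:
        r = R_ON * w + R_OFF * (1 - w)    # mixed on/off resistance
        i = v / r
        w += MU * R_ON / D**2 * i * dt    # state drifts with the current
        w = min(max(w, 0.0), 1.0)         # state stays bounded
        resistance.append(r)
    return np.array(resistance)

t = np.linspace(0, 1, 1000)
drive = np.sin(2 * np.pi * 2 * t)         # 2 Hz sinusoidal drive
r = simulate(drive)
print(f"Resistance swings between {r.min():.0f} and {r.max():.0f} ohms;")
print("positive half-cycles push the device toward R_ON, negative ones back,")
print("so the resistance at any moment encodes the history of the drive.")
```

The news release continues,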

This is where analog memory trumps digital memory: In order to create the same human brain-type functionality with conventional technology, the resulting device would have to be enormous — loaded with multitudes of transistors that would require far more energy.

“Classical computers will always find an ineluctable limit to efficient brain-like computation in their very architecture,” said lead researcher Prezioso. “This memristor-based technology relies on a completely different way inspired by biological brain to carry on computation.”

To be able to approach functionality of the human brain, however, many more memristors would be required to build more complex neural networks to do the same kinds of things we can do with barely any effort and energy, such as identify different versions of the same thing or infer the presence or identity of an object not based on the object itself but on other things in a scene.

Potential applications already exist for this emerging technology, such as medical imaging, the improvement of navigation systems or even for searches based on images rather than on text. The energy-efficient compact circuitry the researchers are striving to create would also go a long way toward creating the kind of high-performance computers and memory storage devices users will continue to seek long after the proliferation of digital transistors predicted by Moore’s Law becomes too unwieldy for conventional electronics.

Here’s a link to and a citation for the paper,

Training and operation of an integrated neuromorphic network based on metal-oxide memristors by M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, & D. B. Strukov. Nature 521, 61–64 (07 May 2015) doi:10.1038/nature14441

This paper is behind a paywall but a free preview is available through ReadCube Access.

The third and last piece of research, which is from Rice University, hasn’t received any publicity yet, unusual given Rice’s very active communications/media department. Here’s a link to and a citation for their memristor paper,

2D materials: Memristor goes two-dimensional by Jiangtan Yuan & Jun Lou. Nature Nanotechnology 10, 389–390 (2015) doi:10.1038/nnano.2015.94 Published online 07 May 2015

This paper is behind a paywall but a free preview is available through ReadCube Access.

Dexter Johnson has written up the RMIT research (his May 14, 2015 post on the Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website). He linked it to research from Mark Hersam’s team at Northwestern University (my April 10, 2015 posting) on creating a three-terminal memristor enabling its use in complex electronics systems. Dexter strongly hints in his headline that these developments could lead to bionic brains.

For those who’d like more memristor information, this June 26, 2014 posting which brings together some developments at the University of Michigan and information about developments in the industrial sector is my suggestion for a starting point. Also, you may want to check out my material on HP Labs, especially prominent in the story due to the company’s 2008 ‘discovery’ of the memristor, described on a page in my Nanotech Mysteries wiki, and the controversy triggered by the company’s terminology (there’s more about the controversy in my April 7, 2010 interview with Forrest H Bennett III).

Self-organizing nanotubes and nonequilibrium systems provide insights into evolution and artificial life

If you’re interested in the second law of thermodynamics, this Feb. 10, 2015 news item on ScienceDaily provides some insight into the second law, self-organized systems, and evolution,

The second law of thermodynamics tells us that all systems evolve toward a state of maximum entropy, wherein all energy is dissipated as heat, and no available energy remains to do work. Since the mid-20th century, research has pointed to an extension of the second law for nonequilibrium systems: the Maximum Entropy Production Principle (MEPP) states that a system away from equilibrium evolves in such a way as to maximize entropy production, given present constraints.

Now, physicists Alexey Bezryadin, Alfred Hubler, and Andrey Belkin from the University of Illinois at Urbana-Champaign, have demonstrated the emergence of self-organized structures that drive the evolution of a non-equilibrium system to a state of maximum entropy production. The authors suggest MEPP underlies the evolution of the artificial system’s self-organization, in the same way that it underlies the evolution of ordered systems (biological life) on Earth. …
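For readers who like their thermodynamics in symbols, here is my own shorthand for the two statements above (the notation is mine, not the paper’s):

```latex
% Second law for an isolated system: entropy is non-decreasing.
\frac{dS}{dt} \ge 0
% Maximum Entropy Production Principle (informal): among the evolutions
% compatible with the present constraints, a nonequilibrium system follows
% the one that maximizes the entropy production rate \sigma.
\sigma \equiv \frac{d_i S}{dt} \to \max \quad \text{(subject to constraints)}
```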

A Feb. 10, 2015 University of Illinois College of Engineering news release (also on EurekAlert), which originated the news item, provides more detail about the theory and the research,

MEPP may have profound implications for our understanding of the evolution of biological life on Earth and of the underlying rules that govern the behavior and evolution of all nonequilibrium systems. Life emerged on Earth from the strongly nonequilibrium energy distribution created by the Sun’s hot photons striking a cooler planet. Plants evolved to capture high energy photons and produce heat, generating entropy. Then animals evolved to eat plants increasing the dissipation of heat energy and maximizing entropy production.

In their experiment, the researchers suspended a large number of carbon nanotubes in a non-conducting non-polar fluid and drove the system out of equilibrium by applying a strong electric field. Once electrically charged, the system evolved toward maximum entropy through two distinct intermediate states, with the spontaneous emergence of self-assembled conducting nanotube chains.

In the first state, the “avalanche” regime, the conductive chains aligned themselves according to the polarity of the applied voltage, allowing the system to carry current and thus to dissipate heat and produce entropy. The chains appeared to sprout appendages as nanotubes aligned themselves so as to adjoin adjacent parallel chains, effectively increasing entropy production. But frequently, this self-organization was destroyed through avalanches triggered by the heating and charging that emanates from the emerging electric current streams. (…)

“The avalanches were apparent in the changes of the electric current over time,” said Bezryadin.

“Toward the final stages of this regime, the appendages were not destroyed during the avalanches, but rather retracted until the avalanche ended, then reformed their connection. So it was obvious that the avalanches correspond to the ‘feeding cycle’ of the ‘nanotube insect’,” comments Bezryadin.

In the second relatively stable stage of evolution, the entropy production rate reached maximum or near maximum. This state is quasi-stable in that there were no destructive avalanches.

The study points to a possible classification scheme for evolutionary stages and a criterion for the point at which evolution of the system is irreversible—wherein entropy production in the self-organizing subsystem reaches its maximum possible value. Further experimentation on a larger scale is necessary to affirm these underlying principles, but if they hold true, they will prove a great advantage in predicting behavioral and evolutionary trends in nonequilibrium systems.

The authors draw an analogy between the evolution of intelligent life forms on Earth and the emergence of the wiggling bugs in their experiment. The researchers note that further quantitative studies are needed to round out this comparison. In particular, they would need to demonstrate that their “wiggling bugs” can multiply, which would require the experiment be reproduced on a significantly larger scale.

Such a study, if successful, would have implications for the eventual development of technologies that feature self-organized artificial intelligence, an idea explored elsewhere by co-author Alfred Hubler, funded by the Defense Advanced Research Projects Agency [DARPA]. [emphasis mine]

“The general trend of the evolution of biological systems seems to be this: more advanced life forms tend to dissipate more energy by broadening their access to various forms of stored energy,” Bezryadin proposes. “Thus a common underlying principle can be suggested between our self-organized clouds of nanotubes, which generate more and more heat by reducing their electrical resistance and thus allow more current to flow, and the biological systems which look for new means to find food, either through biological adaptation or by inventing more technologies.

“Extended sources of food allow biological forms to further grow, multiply, consume more food and thus produce more heat and generate entropy. It seems reasonable to say that real life organisms are still far from the absolute maximum of the entropy production rate. In both cases, there are ‘avalanches’ or ‘extinction events’, which set back this evolution. Only if all free energy given by the Sun is consumed, by building a Dyson sphere for example, and converted into heat then a definitely stable phase of the evolution can be expected.”

“Intelligence, as far as we know, is inseparable from life,” he adds. “Thus, to achieve artificial life or artificial intelligence, our recommendation would be to study systems which are far from equilibrium, with many degrees of freedom—many building blocks—so that they can self-organize and participate in some evolution. The entropy production criterium appears to be the guiding principle of the evolution efficiency.”

I am fascinated

  • (a) because this piece took an unexpected turn onto the topic of artificial life/artificial intelligence,
  • (b) because of my longstanding interest in artificial life/artificial intelligence,
  • (c) because of the military connection, and
  • (d) because this is the first time I’ve come across something that provides a bridge from fundamental particles to nanoparticles.

Here’s a link to and a citation for the paper,

Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production by A. Belkin, A. Hubler, & A. Bezryadin. Scientific Reports 5, Article number: 8323 doi:10.1038/srep08323 Published 09 February 2015

Adding to my delight, this paper is open access.

‘Eve’ (robot/artificial intelligence) searches for new drugs

Following on today’s (Feb. 5, 2015) earlier post, The future of work during the age of robots and artificial intelligence, here’s a Feb. 3, 2015 news item on ScienceDaily featuring ‘Eve’, a scientist robot,

Eve, an artificially-intelligent ‘robot scientist’ could make drug discovery faster and much cheaper, say researchers writing in the Royal Society journal Interface. The team has demonstrated the success of the approach as Eve discovered that a compound shown to have anti-cancer properties might also be used in the fight against malaria.

A Feb. 4, 2015 University of Manchester press release (also on EurekAlert but dated Feb. 3, 2015), which originated the news item, gives a brief introduction to robot scientists,

Robot scientists are a natural extension of the trend of increased involvement of automation in science. They can automatically develop and test hypotheses to explain observations, run experiments using laboratory robotics, interpret the results to amend their hypotheses, and then repeat the cycle, automating high-throughput hypothesis-led research. Robot scientists are also well suited to recording scientific knowledge: as the experiments are conceived and executed automatically by computer, it is possible to completely capture and digitally curate all aspects of the scientific process.

In 2009, Adam, a robot scientist developed by researchers at the Universities of Aberystwyth and Cambridge, became the first machine to autonomously discover new scientific knowledge. The same team has now developed Eve, based at the University of Manchester, whose purpose is to speed up the drug discovery process and make it more economical. In the study published today, they describe how the robot can help identify promising new drug candidates for malaria and neglected tropical diseases such as African sleeping sickness and Chagas’ disease.

“Neglected tropical diseases are a scourge of humanity, infecting hundreds of millions of people, and killing millions of people every year,” says Professor Ross King, from the Manchester Institute of Biotechnology at the University of Manchester. “We know what causes these diseases and that we can, in theory, attack the parasites that cause them using small molecule drugs. But the cost and speed of drug discovery and the economic return make them unattractive to the pharmaceutical industry.

“Eve exploits its artificial intelligence to learn from early successes in her screens and select compounds that have a high probability of being active against the chosen drug target. A smart screening system, based on genetically engineered yeast, is used. This allows Eve to exclude compounds that are toxic to cells and select those that block the action of the parasite protein while leaving any equivalent human protein unscathed. This reduces the costs, uncertainty, and time involved in drug screening, and has the potential to improve the lives of millions of people worldwide.”

The press release goes on to describe how ‘Eve’ works,

Eve is designed to automate early-stage drug design. First, she systematically tests each member from a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are screened against assays (tests) designed to be automatically engineered, and can be generated much faster and more cheaply than the bespoke assays that are currently standard. This enables more types of assay to be applied, more efficient use of screening facilities to be made, and thereby increases the probability of a discovery within a given budget.

Eve’s robotic system is capable of screening over 10,000 compounds per day. However, while simple to automate, mass screening is still relatively slow and wasteful of resources as every compound in the library is tested. It is also unintelligent, as it makes no use of what is learnt during screening.

To improve this process, Eve selects at random a subset of the library to find compounds that pass the first assay; any ‘hits’ are re-tested multiple times to reduce the probability of false positives. Taking this set of confirmed hits, Eve uses statistics and machine learning to predict new structures that might score better against the assays. Although she currently does not have the ability to synthesise such compounds, future versions of the robot could potentially incorporate this feature.
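The description above amounts to a simple active-learning loop: screen a random subset, confirm the hits, fit a model to what has been learned, then rank the untested compounds by predicted activity for the next round. Here is a rough sketch of that logic with made-up data; it is an illustration of the strategy, not Eve’s actual software.

```python
# Sketch of an active-learning screening loop: screen a random subset of a
# compound library, confirm hits, learn from them, then prioritise the rest
# by predicted activity. Toy data and toy model throughout.
import numpy as np

rng = np.random.default_rng(1)

LIBRARY_SIZE = 1500      # roughly the size of the library used in the study
N_FEATURES = 16          # pretend chemical descriptors per compound

library = rng.normal(size=(LIBRARY_SIZE, N_FEATURES))
true_weights = rng.normal(size=N_FEATURES)     # hidden "ground truth"

def assay(compounds):
    """Pretend wet-lab assay: a noisy activity measurement."""
    signal = compounds @ true_weights
    return signal + rng.normal(scale=0.5, size=len(compounds)) > 1.0

# 1. Screen a random subset of the library.
first_pass = rng.choice(LIBRARY_SIZE, size=300, replace=False)
hits = first_pass[assay(library[first_pass])]

# 2. Re-test the apparent hits a few times to weed out false positives.
confirmed = [c for c in hits
             if assay(np.repeat(library[[c]], 3, axis=0)).all()]

# 3. Fit a simple linear model to the screened compounds (hit vs non-hit).
screened = library[first_pass]
labels = np.isin(first_pass, confirmed).astype(float)
weights, *_ = np.linalg.lstsq(screened, labels, rcond=None)

# 4. Rank the untested compounds by predicted activity and pick the best
#    candidates for the next cycle of assays.
untested = np.setdiff1d(np.arange(LIBRARY_SIZE), first_pass)
scores = library[untested] @ weights
next_round = untested[np.argsort(scores)[::-1][:50]]
print(f"{len(confirmed)} confirmed hits; "
      f"{len(next_round)} compounds prioritised for the next cycle")
```

Back to the press release,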

Steve Oliver from the Cambridge Systems Biology Centre and the Department of Biochemistry at the University of Cambridge says: “Every industry now benefits from automation and science is no exception. Bringing in machine learning to make this process intelligent – rather than just a ‘brute force’ approach – could greatly speed up scientific progress and potentially reap huge rewards.”

To test the viability of the approach, the researchers developed assays targeting key molecules from parasites responsible for diseases such as malaria, Chagas’ disease and schistosomiasis and tested against these a library of approximately 1,500 clinically approved compounds. Through this, Eve showed that a compound that has previously been investigated as an anti-cancer drug inhibits a key molecule known as DHFR in the malaria parasite. Drugs that inhibit this molecule are currently routinely used to protect against malaria, and are given to over a million children; however, the emergence of strains of parasites resistant to existing drugs means that the search for new drugs is becoming increasingly more urgent.

“Despite extensive efforts, no one has been able to find a new antimalarial that targets DHFR and is able to pass clinical trials,” adds Professor Oliver. “Eve’s discovery could be even more significant than just demonstrating a new approach to drug discovery.”

Here’s a link to and a citation for the paper,

Cheaper faster drug development validated by the repositioning of drugs against neglected tropical diseases by Kevin Williams, Elizabeth Bilsland, Andrew Sparkes, Wayne Aubrey, Michael Young, Larisa N. Soldatova, Kurt De Grave, Jan Ramon, Michaela de Clare, Worachart Sirawaraporn, Stephen G. Oliver, and Ross D. King. Journal of the Royal Society Interface March 2015 Volume: 12 Issue: 104 DOI: 10.1098/rsif.2014.1289 Published 4 February 2015

This paper is open access.

The future of work during the age of robots and artificial intelligence

2014 was quite the year for discussions about robots/artificial intelligence (AI) taking over the world of work. There was my July 16, 2014 post titled, Writing and AI or is a robot writing this blog?, where I discussed the implications of algorithms which write news stories (business and sports, so far) in the wake of a deal that Associated Press signed with a company called Automated Insights. A few weeks later, the Pew Research Center released a report titled, AI, Robotics, and the Future of Jobs, which was widely covered. As well, sometime during the year, renowned physicist Stephen Hawking expressed serious concerns about artificial intelligence and our ability to control it.

It seems that 2015 is going to be another banner year for this discussion. Before launching into the latest on this topic, here’s a sampling of the Pew Research and the response to it. From an Aug. 6, 2014 Pew summary about AI, Robotics, and the Future of Jobs by Aaron Smith and Janna Anderson,

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance. But even as they are largely consistent in their predictions for the evolution of technology itself, they are deeply divided on how advances in AI and robotics will impact the economic and employment picture over the next decade.

We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts who have been identified by researching those who are widely quoted as technology builders and analysts and those who have made insightful predictions to our previous queries about the future of the Internet. …

I wouldn’t have expected Jeff Bercovici’s Aug. 6, 2014 article for Forbes to be quite so hesitant about the possibilities of our robotic and artificially intelligent future,

As part of a major ongoing project looking at the future of the internet, the Pew Research Internet Project canvassed some 1,896 technologists, futurists and other experts about how they see advances in robotics and artificial intelligence affecting the human workforce in 2025.

The results were not especially reassuring. Nearly half of the respondents (48%) predicted that robots and AI will displace more jobs than they create over the coming decade. While that left a slim majority believing the impact of technology on employment will be neutral or positive, that’s not necessarily grounds for comfort: Many experts told Pew they expect the jobs created by the rise of the machines will be lower paying and less secure than the ones displaced, widening the gap between rich and poor, while others said they simply don’t think the major effects of robots and AI, for better or worse, will be in evidence yet by 2025.

Chris Gayomali’s Aug. 6, 2014 article for Fast Company poses an interesting question about how this brave new future will be financed,

A new study by Pew Internet Research takes a hard look at how innovations in robotics and artificial intelligence will impact the future of work. To reach their conclusions, Pew researchers invited 12,000 experts (academics, researchers, technologists, and the like) to answer two basic questions:

Will networked, automated, artificial intelligence (AI) applications and robotic devices have displaced more jobs than they have created by 2025?
To what degree will AI and robotics be parts of the ordinary landscape of the general population by 2025?

Close to 1,900 experts responded. About half (48%) of the people queried envision a future in which machines have displaced both blue- and white-collar jobs. It won’t be so dissimilar from the fundamental shift we saw in manufacturing, in which fewer (human) bosses oversaw automated assembly lines.

Meanwhile, the other 52% of experts surveyed speculate that while many of the jobs will be “substantially taken over by robots,” humans won’t be displaced outright. Rather, many people will be funneled into new job categories that don’t quite exist yet. …

Some worry that over the next 10 years, we’ll see a large number of middle class jobs disappear, widening the economic gap between the rich and the poor. The shift could be dramatic. As artificial intelligence becomes less artificial, they argue, the worry is that jobs that earn a decent living wage (say, customer service representatives, for example) will no longer be available, putting lots and lots of people out of work, possibly without the requisite skill set to forge new careers for themselves.

How do we avoid this? One revealing thread suggested by experts argues that the responsibility will fall on businesses to protect their employees. “There is a relentless march on the part of commercial interests (businesses) to increase productivity so if the technical advances are reliable and have a positive ROI [return on investment],” writes survey respondent Glenn Edens, a director of research in networking, security, and distributed systems at PARC, which is owned by Xerox. “Ultimately we need a broad and large base of employed population, otherwise there will be no one to pay for all of this new world.” [emphasis mine]

Alex Hern’s Aug. 6, 2014 article for the Guardian reviews the report and comments on the current educational system’s ability to prepare students for the future,

Almost all of the respondents are united on one thing: the displacement of work by robots and AI is going to continue, and accelerate, over the coming decade. Where they split is in the societal response to that displacement.

The optimists predict that the economic boom that would result from vastly reduced costs to businesses would lead to the creation of new jobs in huge numbers, and a newfound premium being placed on the value of work that requires “uniquely human capabilities”. …

But the pessimists worry that the benefits of the labor replacement will accrue to those already wealthy enough to own the automatons, be that in the form of patents for algorithmic workers or the physical form of robots.

The ranks of the unemployed could swell, as people are laid off from work they are qualified in without the ability to retrain for careers where their humanity is a positive. And since this will happen in every economic sector simultaneously, civil unrest could be the result.

One thing many experts agreed on was the need for education to prepare for a post-automation world. “Only the best-educated humans will compete with machines,” said internet sociologist Howard Rheingold.

“And education systems in the US and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorise what is told them, preparing them for life in a 20th century factory.”

Then, Will Oremus’ Aug. 6, 2014 article for Slate suggests we are already experiencing displacement,

… the current jobless recovery, along with a longer-term trend toward income and wealth inequality, has some thinkers wondering whether the latest wave of automation is different from those that preceded it.

Massachusetts Institute of Technology researchers Andrew McAfee and Erik Brynjolfsson, among others, see a “great decoupling” of productivity from wages since about 2000 as technology outpaces human workers’ education and skills. Workers, in other words, are losing the race between education and technology. This may be exacerbating a longer-term trend in which capital has gained the upper hand on labor since the 1970s.

The results of the survey were fascinating. Almost exactly half of the respondents (48 percent) predicted that intelligent software will disrupt more jobs than it can replace. The other half predicted the opposite.

The lack of expert consensus on such a crucial and seemingly straightforward question is startling. It’s even more so given that history and the leading economic models point so clearly to one side of the question: the side that reckons society will adjust, new jobs will emerge, and technology will eventually leave the economy stronger.

More recently, Manish Singh has written about some of his concerns as a writer who could be displaced in a Jan. 31, 2015 (?) article for Beta News (Note: A link has been removed),

Robots are after my job. They’re after yours as well, but let us deal with my problem first. Associated Press, an American multinational nonprofit news agency, revealed on Friday [Jan. 30, 2015] that it published 3,000 articles in the last three months of 2014. The company could previously only publish 300 stories. It didn’t hire more journalists, neither did its existing headcount start writing more, but the actual reason behind this exponential growth is technology. All those stories were written by an algorithm.

The articles produced by the algorithm were accurate, and you won’t be able to separate them from stories written by humans. Good lord, all the stories were written in accordance with the AP Style Guide, something not all journalists follow (but arguably, should).

There has been a growth in the number of such software. Narrative Science, a Chicago-based company offers an automated narrative generator powered by artificial intelligence. The company’s co-founder and CTO, Kristian Hammond, said last year that he believes that by 2030, 90 percent of news could be written by computers. Forbes, a reputable news outlet, has used Narrative’s software. Some news outlets use it to write email newsletters and similar things.
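
As an aside, the kind of algorithmic story-writing described above is less mysterious than it might sound. Here’s a toy sketch of the general idea, turning a structured data record into an AP-style sentence with a template and a couple of rules. To be clear, this is my own illustration in Python, not the software AP or Narrative Science actually uses, and the company and figures are invented:

```python
# Toy illustration of template-driven story generation from structured data.
# This is not AP's or Narrative Science's software; it only shows the basic
# idea: a data record goes in, rule-based prose comes out.

def earnings_story(company, quarter, revenue, prior_revenue, eps):
    """Turn one earnings record into a short, AP-style sentence."""
    change = (revenue - prior_revenue) / prior_revenue * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company} said {quarter} revenue {direction} "
        f"{abs(change):.1f} percent to ${revenue / 1e6:.1f} million, "
        f"with earnings of ${eps:.2f} per share."
    )

# Invented example record; a real system would pull thousands of these
# from a data feed and apply many more rules.
print(earnings_story("Acme Corp.", "fourth-quarter", 125_400_000,
                     112_300_000, 0.42))
```

Scale that loop up over a feed of thousands of earnings reports and you get something in the neighbourhood of the 3,000 stories AP mentions.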

Singh also sounds a note of concern for other jobs by including this video (approximately 16 mins.) in his piece,

This video (Humans Need Not Apply) provides an excellent overview of the situation although it seems C. G. P. Grey, the person who produced and posted the video on YouTube, holds a more pessimistic view of the future than some other futurists.  C. G. P. Grey has a website here and is profiled here on Wikipedia.

One final bit: there’s a robot art critic, which some suggest is superior to human art critics, described in Thomas Gorton’s Jan. 16, 2015 (?) article ‘This robot reviews art better than most critics’ for Dazed Digital (Note: Links have been removed),

… the Novice Art Blogger, a Tumblr page set up by Matthew Plummer Fernandez. The British-Colombian artist programmed a bot with deep learning algorithms to analyse art; so instead of an overarticulate critic rambling about praxis, you get a review that gets down to the nitty-gritty about what exactly you see in front of you.

The results are charmingly honest: think a round robin of Google Translate text uninhibited by PR fluff, personal favouritism or the whims of a bad mood. We asked Novice Art Blogger to review our most recent Winter 2014 cover with Kendall Jenner. …

Beyond Kendall Jenner, it’s worth reading Gorton’s article for the interview with Plummer Fernandez.
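
For anyone wondering what “a bot with deep learning algorithms” might look like in practice, here’s a minimal sketch of one plausible approach: a pretrained image classifier guesses what it ‘sees’ in an artwork, and a template turns those guesses into a deadpan review. The model (an off-the-shelf ResNet trained on ImageNet) and the phrasing are my assumptions; Plummer Fernandez hasn’t published his pipeline in the article, so this is not his code:

```python
# A minimal sketch of a deep-learning "art reviewer": a pretrained image
# classifier guesses what an artwork depicts, and a template wraps the
# guesses in review-like prose. Model choice and wording are my assumptions,
# not the Novice Art Blogger's actual pipeline.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()        # resize, crop, normalize
labels = weights.meta["categories"]      # ImageNet class names

def review(image_path, top_k=3):
    """Return a short 'review' built from the classifier's top guesses."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    values, indices = probs.topk(top_k)
    guesses = [f"{labels[int(i)]} ({float(p):.0%})"
               for p, i in zip(values, indices)]
    return "I am reminded of " + ", and also of ".join(guesses) + "."

# Example (requires an image file on disk):
# print(review("artwork.jpg"))
```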

University of Toronto, ebola epidemic, and artificial intelligence applied to chemistry

It’s hard to tell much from the Nov. 5, 2014 University of Toronto news release by Michael Kennedy (also on EurekAlert but dated Nov. 10, 2014) about in silico drug testing focused on finding a treatment for ebola,

The University of Toronto, Chematria and IBM are combining forces in a quest to find new treatments for the Ebola virus.

Using a virtual research technology invented by Chematria, a startup housed at U of T’s Impact Centre, the team will use software that learns and thinks like a human chemist to search for new medicines. Running on Canada’s most powerful supercomputer, the effort will simulate and analyze the effectiveness of millions of hypothetical drugs in just a matter of weeks.

“What we are attempting would have been considered science fiction, until now,” says Abraham Heifets (PhD), a U of T graduate and the chief executive officer of Chematria. “We are going to explore the possible effectiveness of millions of drugs, something that used to take decades of physical research and tens of millions of dollars, in mere days with our technology.”

The news release makes it all sound quite exciting,

Chematria’s technology is a virtual drug discovery platform based on the science of deep learning neural networks and has previously been used for research on malaria, multiple sclerosis, C. difficile, and leukemia. [emphases mine]

Much like the software used to design airplanes and computer chips in simulation, this new system can predict the possible effectiveness of new medicines, without costly and time-consuming physical synthesis and testing. [emphasis mine] The system is driven by a virtual brain that teaches itself by “studying” millions of datapoints about how drugs have worked in the past. With this vast knowledge, the software can apply the patterns it has learned to predict the effectiveness of hypothetical drugs, and suggest surprising uses for existing drugs, transforming the way medicines are discovered.

My understanding is that Chematria’s is not the only “virtual drug discovery platform based on the science of deep learning neural networks” as is acknowledged in the next paragraph. In fact, there’s widespread interest in the medical research community as evidenced by such projects as Seurat-1’s NOTOX* and others. Regarding the research on “malaria, multiple sclerosis, C. difficile, and leukemia,” more details would be welcome, e.g., what happened?

A Nov. 4, 2014 article for Mashable by Anita Li does offer a new detail about the technology,

Now, a team of Canadian researchers are hunting for new Ebola treatments, using “groundbreaking” artificial-intelligence technology that they claim can predict the effectiveness of new medicines 150 times faster than current methods.

With the quotation marks around the word “groundbreaking,” Li suggests a little skepticism about the claim.

Here’s more from Li where she seems to have found some company literature,

Chematria describes its technology as a virtual drug-discovery platform that helps pharmaceutical companies “determine which molecules can become medicines.” Here’s how it works, according to the company:

The system is driven by a virtual brain, modeled on the human visual cortex, that teaches itself by “studying” millions of datapoints about how drugs have worked in the past. With this vast knowledge, Chematria’s brain can apply the patterns it perceives, to predict the effectiveness of hypothetical drugs, and suggest surprising uses for existing drugs, transforming the way medicines are discovered.
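
To make that “virtual brain” description a little more concrete, here’s a stripped-down sketch of the general recipe behind deep-learning virtual screening: encode molecules as fingerprints, fit a neural network on compounds whose outcomes are already known, then score new candidates. It’s a toy with made-up labels and a tiny network, not Chematria’s platform, and it assumes RDKit and scikit-learn are installed:

```python
# A stripped-down sketch of deep-learning-style virtual screening: encode
# molecules as fingerprints, fit a neural network on known outcomes, and
# score an unseen candidate. This is a toy, not Chematria's platform.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.neural_network import MLPClassifier

def fingerprint(smiles, n_bits=2048):
    """Morgan (circular) fingerprint as a numpy vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.float64)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Tiny, invented training set: SMILES strings labeled 1 ("worked") or 0.
train = [("CCO", 0),                                  # ethanol
         ("CC(=O)Oc1ccccc1C(=O)O", 1),                # aspirin
         ("CN1C=NC2=C1C(=O)N(C)C(=O)N2C", 0),         # caffeine
         ("CC(C)Cc1ccc(cc1)C(C)C(=O)O", 1)]           # ibuprofen
X = np.array([fingerprint(s) for s, _ in train])
y = np.array([label for _, label in train])

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(X, y)

# Score a "hypothetical" candidate (paracetamol, purely as a stand-in).
candidate = "CC(=O)Nc1ccc(O)cc1"
print(model.predict_proba([fingerprint(candidate)])[0, 1])
```

Chematria’s system presumably works at a vastly larger scale and with far more sophisticated molecular representations, but the basic learn-from-past-outcomes, score-new-candidates loop is the same shape; screening already-approved drugs for a new use, as described later in Li’s article, is essentially a matter of restricting the candidate list.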

I was not able to find a Chematria website or anything much more than this brief description on the University of Toronto website (from the Impact Centre’s Current Companies webpage),

Chematria makes software that helps pharmaceutical companies determine which molecules can become medicines. With Chematria’s proprietary approach to molecular docking simulations, pharmaceutical researchers can confidently predict potent molecules for novel biological targets, thereby enabling faster drug development for a fraction of the price of wet-lab experiments.

Chematria’s Ebola project is focused on drugs that are already available but could be put to a new use (from Li’s article),

In response to the outbreak, Chematria recently launched an Ebola project, using its algorithm to evaluate molecules that have already gone through clinical trials, and have proven to be safe. “That means we can expedite the process of getting the treatment to the people who need it,” Heifets said. “In a pandemic situation, you’re under serious time pressure.”

He cited Aspirin as an example of proven medicine that has more than one purpose: People take it for headaches, but it’s also helpful for heart disease. Similarly, a drug that’s already out there may also hold the cure for Ebola.

I recommend reading Li’s article in its entirety.

The University of Toronto news release provides more detail about the partners involved in this ebola project,

… The unprecedented speed and scale of this investigation is enabled by the unique strengths of the three partners: Chematria is offering the core artificial intelligence technology that performs the drug research, U of T is contributing biological insights about Ebola that the system will use to search for new treatments and IBM is providing access to Canada’s fastest supercomputer, Blue Gene/Q.

“Our team is focusing on the mechanism Ebola uses to latch on to the cells it infects,” said Dr. Jeffrey Lee of the University of Toronto. “If we can interrupt that process with a new drug, it could prevent the virus from replicating, and potentially work against other viruses like Marburg and HIV that use the same mechanism.”

The initiative may also demonstrate an alternative approach to high-speed medical research. While giving drugs to patients will always require thorough clinical testing, zeroing in on the best drug candidates can take years using today’s most common methods. Critics say this slow and prohibitively expensive process is one of the key reasons that finding treatments for rare and emerging diseases is difficult.

“If we can find promising drug candidates for Ebola using computers alone,” said Heifets, “it will be a milestone for how we develop cures.”

I hope this effort, along with all the others being made around the world, proves helpful with Ebola. It’s good to see research into drugs (chemical formulations) that are already familiar to the medical community and can be used for a different purpose than originally intended. Drugs that are ‘repurposed’ should be cheaper than new ones, and we already have data about their side effects.

As for the “milestone for how we develop cures,” this team’s work, along with all the international research on this front and on how we assess toxicity, should certainly make that milestone possible.

* Full disclosure: I came across Seurat-1’s NOTOX project when I attended (at Seurat-1’s expense) the 9th World Congress on Alternatives to Animal Testing held in Aug. 2014 in Prague.

Getting neuromorphic with a synaptic transistor

Scientists at Harvard University (Massachusetts, US) have devised a transistor that simulates the synapses found in brains. From a Nov. 2, 2013 news item on ScienceDaily,

It doesn’t take a Watson to realize that even the world’s best supercomputers are staggeringly inefficient and energy-intensive machines.

Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits; they continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.

Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.

Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. [emphasis mine]

There are two other projects that I know of (and I imagine there are others) focused on intelligence that’s embedded rather than algorithmic. My December 24, 2012 posting focused on a joint project (between the National Institute for Materials Science in Japan and the University of California, Los Angeles) where researchers developed a nanoionic device with a range of neuromorphic and electrical properties. There’s also the memristor, mentioned in my Feb. 26, 2013 posting (and many other times on this blog), which features a proposal to create an artificial brain.

Getting back to Harvard’s synaptic transistor (from the Nov. 1, 2013 Harvard University news release which originated the news item),

The human mind, for all its phenomenal computing power, runs on roughly 20 Watts of energy (less than a household light bulb), so it offers a natural model for engineers.

“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.

Here’s an image of synaptic transistors that the researchers from Harvard’s School of Engineering and Applied Sciences (SEAS) have supplied,

Several prototypes of the synaptic transistor are visible on this silicon chip. (Photo by Eliza Grinnell, SEAS Communications.)

The news release provides a description of the synaptic transistor and how it works,

While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum “axon” and “dendrite” terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the time delay in the electrical signal.

Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a magnitude of voltage which it applies to the ionic liquid, creating an electric field that either drives ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.

The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.

“This system changes its conductance in an analog way, continuously, as the composition of the material changes,” explains Shi. “It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just ‘on’ or ‘off.’”

The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.

Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.

“We exploit the extreme sensitivity of this material,” says Ramanathan [principal investigator and associate professor of materials science at Harvard SEAS]. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”

The nickelate system is also well positioned for seamless integration into existing silicon-based systems.

“In this paper, we demonstrate high-temperature operation, but the beauty of this type of a device is that the ‘learning’ behavior is more or less temperature insensitive, and that’s a big advantage,” says Ramanathan. “We can operate this anywhere from about room temperature up to at least 160 degrees Celsius.”

For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.

“In our proof-of-concept device, the time constant is really set by our experimental geometry,” says Ramanathan. “In other words, to really make a super-fast device, all you’d have to do is confine the liquid and position the gate electrode closer to it.”

In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this “ultimate fluidic transistor.”
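
Before getting to the paper itself, a quick aside for readers who like to see behaviour in numbers: here’s a toy model (my own illustrative choice of constants and functional forms, not the researchers’ device model) of the plasticity described above, in which the delay between the ‘axon’ and ‘dendrite’ pulses sets a gate voltage, and that voltage nudges the channel conductance up or down, with shorter delays producing larger changes:

```python
# Toy model of the plasticity described above: the delay between two pulses
# is converted into a gate voltage, and that voltage nudges the channel
# conductance up or down. Constants and functional forms are illustrative
# choices of mine, not the published device model.
import math

class SynapticTransistor:
    def __init__(self, conductance=1.0):
        self.g = conductance          # channel conductance (arbitrary units)

    def pulse_pair(self, delay_ms):
        """Apply an 'axon' pulse followed by a 'dendrite' pulse.

        A positive delay (axon first) strengthens the connection; a negative
        delay weakens it. Shorter delays produce larger changes, roughly in
        the spirit of spike-timing-dependent plasticity.
        """
        # Gate voltage shrinks exponentially with the delay and carries its sign.
        v_gate = math.copysign(math.exp(-abs(delay_ms) / 20.0), delay_ms)
        self.g *= 1.0 + 0.1 * v_gate  # "ion motion" shifts the conductance
        return self.g

device = SynapticTransistor()
for delay in (5, 5, 5, -40, 80):      # milliseconds
    print(f"delay {delay:+4d} ms -> conductance {device.pulse_pair(delay):.3f}")
```

Run it and the conductance ratchets up under repeated short-delay pairings, weakens slightly when the pulse order is reversed, and barely moves when the delay is long: the kind of continuous, analog, non-binary behaviour the Harvard team is describing.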

Here’s a link to and a citation for the researchers’ paper,

A correlated nickelate synaptic transistor by Jian Shi, Sieu D. Ha, You Zhou, Frank Schoofs, & Shriram Ramanathan. Nature Communications 4, Article number: 2676 doi:10.1038/ncomms3676 Published 31 October 2013

This article is behind a paywall.