Tag Archives: computers

Artificial synapse courtesy of nanowires

It looks like a popsicle to me,

Caption: Image captured by an electron microscope of a single nanowire memristor (highlighted in colour to distinguish it from other nanowires in the background image). Blue: silver electrode, orange: nanowire, yellow: platinum electrode. Blue bubbles are dispersed over the nanowire. They are made up of silver ions and form a bridge between the electrodes which lowers the resistance. Credit: Forschungszentrum Jülich

Not a popsicle but a representation of a device (memristor) scientists claim mimics a biological nerve cell according to a December 5, 2018 news item on ScienceDaily,

Scientists from Jülich [Germany] together with colleagues from Aachen [Germany] and Turin [Italy] have produced a memristive element made from nanowires that functions in much the same way as a biological nerve cell. The component is able to both save and process information, as well as receive numerous signals in parallel. The resistive switching cell made from oxide crystal nanowires is thus proving to be the ideal candidate for use in building bioinspired “neuromorphic” processors, able to take over the diverse functions of biological synapses and neurons.

A Dec. 5, 2018 Forschungszentrum Jülich press release (also on EurekAlert), which originated the news item, provides more details,

Computers have learned a lot in recent years. Thanks to rapid progress in artificial intelligence they are now able to drive cars, translate texts, defeat world champions at chess, and much more besides. In doing so, one of the greatest challenges lies in the attempt to artificially reproduce the signal processing in the human brain. In neural networks, data are stored and processed to a high degree in parallel. Traditional computers on the other hand rapidly work through tasks in succession and clearly distinguish between the storing and processing of information. As a rule, neural networks can only be simulated in a very cumbersome and inefficient way using conventional hardware.

Systems with neuromorphic chips that imitate the way the human brain works offer significant advantages. Experts in the field describe this type of bioinspired computer as being able to work in a decentralised way, having at its disposal a multitude of processors, which, like neurons in the brain, are connected to each other by networks. If a processor breaks down, another can take over its function. What is more, just like in the brain, where practice leads to improved signal transfer, a bioinspired processor should have the capacity to learn.

“With today’s semiconductor technology, these functions are to some extent already achievable. These systems are however suitable for particular applications and require a lot of space and energy,” says Dr. Ilia Valov from Forschungszentrum Jülich. “Our nanowire devices made from zinc oxide crystals can inherently process and even store information, as well as being extremely small and energy efficient,” explains the researcher from Jülich’s Peter Grünberg Institute.

For years memristive cells have been ascribed the best chances of being capable of taking over the function of neurons and synapses in bioinspired computers. They alter their electrical resistance depending on the intensity and direction of the electric current flowing through them. In contrast to conventional transistors, their last resistance value remains intact even when the electric current is switched off. Memristors are thus fundamentally capable of learning.
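
Since memristors are a recurring topic here, a toy model may help make “capable of learning” concrete. The following Python sketch is my own illustration, not the Jülich device physics: the resistance depends on the history of the current, and the last state survives when the power is removed,

```python
# Toy memristor (illustrative only, not the Julich nanowire's physics):
# the resistance depends on the history of the current through the device,
# and the last resistance value persists when the current is switched off.
class ToyMemristor:
    def __init__(self, r_off=100_000.0, r_on=1_000.0):
        self.r_off = r_off   # high-resistance ("0") state, in ohms
        self.r_on = r_on     # low-resistance ("1") state, in ohms
        self.state = 0.0     # 0.0 = fully insulating, 1.0 = fully conducting

    def apply_current(self, current, dt=1.0, k=0.5):
        # Positive current nudges the device toward conduction,
        # negative current back toward insulation (clamped to [0, 1]).
        self.state = min(1.0, max(0.0, self.state + k * current * dt))

    @property
    def resistance(self):
        return self.r_off + self.state * (self.r_on - self.r_off)

m = ToyMemristor()
for _ in range(3):
    m.apply_current(+1.0)                  # "write" pulses lower the resistance
print(f"after writing: {m.resistance:.0f} ohms")
# No current is flowing now, yet the stored state remains readable:
print(f"power off, still: {m.resistance:.0f} ohms")
```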

In order to create these properties, scientists at Forschungszentrum Jülich and RWTH Aachen University used a single zinc oxide nanowire, produced by their colleagues from the polytechnic university in Turin. Measuring approximately one ten-thousandth of a millimeter in size, this type of nanowire is over a thousand times thinner than a human hair. The resulting memristive component not only takes up a tiny amount of space, but also is able to switch much faster than flash memory.
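
The quoted sizes check out with simple arithmetic (the roughly 100 micrometre hair width is my assumption, a typical figure),

```python
# Quick size check (a typical ~100 micrometre hair width is assumed here).
wire = 1e-3 / 10_000     # one ten-thousandth of a millimetre, in metres
hair = 100e-6            # typical human hair diameter, in metres
print(wire)              # 1e-07 m, i.e. 100 nanometres
print(hair / wire)       # ~1000, so "over a thousand times thinner" checks out
```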

Nanowires offer promising novel physical properties compared to other solids and are used among other things in the development of new types of solar cells, sensors, batteries and computer chips. Their manufacture is comparatively simple. Nanowires result from the evaporation deposition of specified materials onto a suitable substrate, where they practically grow of their own accord.

In order to create a functioning cell, both ends of the nanowire must be attached to suitable metals, in this case platinum and silver. The metals function as electrodes, and in addition, release ions triggered by an appropriate electric current. The metal ions are able to spread over the surface of the wire and build a bridge to alter its conductivity.

Components made from single nanowires are, however, still too isolated to be of practical use in chips. Consequently, the next step being planned by the Jülich and Turin researchers is to produce and study a memristive element, composed of a larger, relatively easy to generate group of several hundred nanowires offering more exciting functionalities.

The Italians have also written about the work in a December 4, 2018 news item for the Politecnico di Torino’s in-house magazine, PoliFlash. I like the image they’ve used better as it offers a bit more detail and looks less like a popsicle. First, the image,

Courtesy: Politecnico di Torino

Now, the news item, which includes some historical information about the memristor (Note: There is some repetition and links have been removed),

Emulating and understanding the human brain is one of the most important challenges for modern technology: on the one hand, the ability to artificially reproduce the processing of brain signals is one of the cornerstones for the development of artificial intelligence, while on the other, an understanding of the cognitive processes underlying the human mind is still far off.

And the research published in the prestigious journal Nature Communications by Gianluca Milano and Carlo Ricciardi, PhD student and professor, respectively, of the Applied Science and Technology Department of the Politecnico di Torino, represents a step forward in these directions. In fact, the study entitled “Self-limited single nanowire systems combining all-in-one memristive and neuromorphic functionalities” shows how it is possible to artificially emulate the activity of synapses, i.e. the connections between neurons that regulate the learning processes in our brain, in a single “nanowire” with a diameter thousands of times smaller than that of a hair.

It is a crystalline nanowire that takes the “memristor”, the electronic device able to artificially reproduce the functions of biological synapses, to a new level of performance. Thanks to the use of nanotechnologies, which allow the manipulation of matter at the atomic level, it was for the first time possible to combine into one single device the synaptic functions that were previously emulated individually through specific devices. For this reason, the nanowire allows an extreme miniaturisation of the “memristor”, significantly reducing the complexity and energy consumption of the electronic circuits necessary for the implementation of learning algorithms.

Starting from the theorisation of the “memristor” in 1971 by Prof. Leon Chua – now visiting professor at the Politecnico di Torino, who was conferred an honorary degree by the University in 2015 – this new technology will not only allow smaller and higher-performing devices to be created for the implementation of increasingly “intelligent” computers, but is also a significant step forward for the emulation and understanding of the functioning of the brain.

“The nanowire memristor – said Carlo Ricciardi – represents a model system for the study of physical and electrochemical phenomena that govern biological synapses at the nanoscale. The work is the result of the collaboration between our research team and the RWTH University of Aachen in Germany, supported by INRiM, the National Institute of Metrological Research, and IIT, the Italian Institute of Technology.”

h/t for the Italian info to Nanowerk’s Dec. 10, 2018 news item.

Here’s a link to and a citation for the paper,

Self-limited single nanowire systems combining all-in-one memristive and neuromorphic functionalities by Gianluca Milano, Michael Luebben, Zheng Ma, Rafal Dunin-Borkowski, Luca Boarino, Candido F. Pirri, Rainer Waser, Carlo Ricciardi, & Ilia Valov. Nature Communications, volume 9, Article number: 5151 (2018) DOI: https://doi.org/10.1038/s41467-018-07330-7 Published: 04 December 2018

This paper is open access.

Just use the search term “memristor” in the blog search engine if you’re curious about the multitudinous number of postings on the topic here.

Light-based computation made better with silver

It’s pretty amazing to imagine a future where computers run on light but according to a May 16, 2017 news item on ScienceDaily the idea is not beyond the realms of possibility,

Tomorrow’s computers will run on light, and gold nanoparticle chains show much promise as light conductors. Now Ludwig-Maximilians-Universitaet (LMU) in Munich scientists have demonstrated how tiny spots of silver could markedly reduce energy consumption in light-based computation.

Today’s computers are faster and smaller than ever before. The latest generation of transistors will have structural features with dimensions of only 10 nanometers. If computers are to become even faster and at the same time more energy efficient at these minuscule scales, they will probably need to process information using light particles instead of electrons. This is referred to as “optical computing.”

The silver serves as a kind of intermediary between the gold particles while not dissipating energy. Credit: Liedl/Hohmann (NIM)

A March 15, 2017 LMU press release (also on EurekAlert), which originated the news item, describes a current use of light in telecommunications technology and this latest research breakthrough (the discrepancy in dates is likely due to when the paper was made available online versus in print),

Fiber-optic networks already use light to transport data over long distances at high speed and with minimum loss. The diameters of the thinnest cables, however, are in the micrometer range, as the light waves — with a wavelength of around one micrometer — must be able to oscillate unhindered. In order to process data on a micro- or even nanochip, an entirely new system is therefore required.

One possibility would be to conduct light signals via so-called plasmon oscillations. This involves a light particle (photon) exciting the electron cloud of a gold nanoparticle so that it starts oscillating. These waves then travel along a chain of nanoparticles at approximately 10% of the speed of light. This approach achieves two goals: nanometer-scale dimensions and enormous speed. What remains, however, is the energy consumption. In a chain composed purely of gold, this would be almost as high as in conventional transistors, due to the considerable heat development in the gold particles.

A tiny spot of silver

Tim Liedl, Professor of Physics at LMU and PI at the cluster of excellence Nanosystems Initiative Munich (NIM), together with colleagues from Ohio University, has now published an article in the journal Nature Physics, which describes how silver nanoparticles can significantly reduce the energy consumption. The physicists built a sort of miniature test track with a length of around 100 nanometers, composed of three nanoparticles: one gold nanoparticle at each end, with a silver nanoparticle right in the middle.

The silver serves as a kind of intermediary between the gold particles while not dissipating energy. To make the silver particle’s plasmon oscillate, more excitation energy is required than for gold. Therefore, the energy just flows “around” the silver particle. “Transport is mediated via the coupling of the electromagnetic fields around the so-called hot spots which are created between each of the two gold particles and the silver particle,” explains Tim Liedl. “This allows the energy to be transported with almost no loss, and on a femtosecond time scale.”
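
That “femtosecond time scale” squares with the numbers quoted earlier. Here’s a back-of-envelope estimate (my arithmetic, assuming the signal crosses the roughly 100 nanometre track at the 10% of light speed quoted above for gold particle chains),

```python
# Order-of-magnitude transit time across the test track (my estimate, using
# the ~100 nm track length and the ~10% of light speed quoted above).
c = 3.0e8                  # speed of light, m/s
track = 100e-9             # track length, ~100 nanometres
speed = 0.10 * c           # ~10% of c, the figure given for gold chains
print(track / speed)       # ~3.3e-15 s: a few femtoseconds, as Liedl says
```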

Textbook quantum model

The decisive precondition for the experiments was the fact that Tim Liedl and his colleagues are experts in the exquisitely exact placement of nanostructures. This is done by the DNA origami method, which allows different crystalline nanoparticles to be placed at precisely defined nanodistances from each other. Similar experiments had previously been conducted using conventional lithography techniques. However, these do not provide the required spatial precision, in particular where different types of metals are involved.

In parallel, the physicists simulated the experimental set-up on the computer – and had their results confirmed. In addition to classical electrodynamic simulations, Alexander Govorov, Professor of Physics at Ohio University, Athens, USA, was able to establish a simple quantum-mechanical model: “In this model, the classical and the quantum-mechanical pictures match very well, which makes it a potential example for the textbooks.”

Here’s a link to and a citation for the paper,

Hotspot-mediated non-dissipative and ultrafast plasmon passage by Eva-Maria Roller, Lucas V. Besteiro, Claudia Pupp, Larousse Khosravi Khorashad, Alexander O. Govorov, & Tim Liedl. Nature Physics (2017) doi:10.1038/nphys4120 Published online 15 May 2017

This paper is behind a paywall.

Are they just computer games or are we in a race with technology?

This story poses some interesting questions that touch on the uneasiness being felt as computers get ‘smarter’. From an April 13, 2016 news item on ScienceDaily,

Philosopher René Descartes’ saying about what makes humans unique is beginning to sound hollow. ‘I think — therefore soon I am obsolete’ seems more appropriate. When a computer routinely beats us at chess and we can barely navigate without the help of a GPS, have we outlived our place in the world? Not quite. Welcome to the front line of research in cognitive skills, quantum computers and gaming.

Today there is an on-going battle between man and machine. While genuine machine consciousness is still years into the future, we are beginning to see computers make choices that previously demanded a human’s input. Recently, the world held its breath as Google’s algorithm AlphaGo beat a professional player in the game Go–an achievement demonstrating the explosive speed of development in machine capabilities.

An April 13, 2016 Aarhus University press release (also on EurekAlert) by Rasmus Rørbæk, which originated the news item, further develops the point,

But we are not beaten yet — human skills are still superior in some areas. This is one of the conclusions of a recent study by Danish physicist Jacob Sherson, published in the journal Nature.

“It may sound dramatic, but we are currently in a race with technology — and steadily being overtaken in many areas. Features that used to be uniquely human are fully captured by contemporary algorithms. Our results are here to demonstrate that there is still a difference between the abilities of a man and a machine,” explains Jacob Sherson.

At the interface between quantum physics and computer games, Sherson and his research group at Aarhus University have identified one of the abilities that still makes us unique compared to a computer’s enormous processing power: our skill in approaching problems heuristically and solving them intuitively. The discovery was made at the AU Ideas Centre CODER, where an interdisciplinary team of researchers works to transfer some human traits to the way computer algorithms work.

Quantum physics holds the promise of immense technological advances in areas ranging from computing to high-precision measurements. However, the problems that need to be solved to get there are so complex that even the most powerful supercomputers struggle with them. This is where the core idea behind CODER – combining the processing power of computers with human ingenuity – becomes clear.

Our common intuition

Like Columbus in QuantumLand, the CODER research group mapped out how the human brain is able to make decisions based on intuition and accumulated experience. This is done using the online game “Quantum Moves.” Over 10,000 people have played the game, which allows everyone to contribute to basic research in quantum physics.

“The map we created gives us insight into the strategies formed by the human brain. We behave intuitively when we need to solve an unknown problem, whereas for a computer this is incomprehensible. A computer churns through enormous amounts of information, but we can choose not to do this by basing our decision on experience or intuition. It is these intuitive insights that we discovered by analysing the Quantum Moves player solutions,” explains Jacob Sherson.

The laws of quantum physics dictate an upper speed limit for data manipulation, which in turn sets the ultimate limit to the processing power of quantum computers — the Quantum Speed Limit. Until now a computer algorithm has been used to identify this limit. It turns out that with human input researchers can find much better solutions than the algorithm.

“The players solve a very complex problem by creating simple strategies. Where a computer goes through all available options, players automatically search for a solution that intuitively feels right. Through our analysis we found that there are common features in the players’ solutions, providing a glimpse into the shared intuition of humanity. If we can teach computers to recognise these good solutions, calculations will be much faster. In a sense we are downloading our common intuition to the computer,” says Jacob Sherson.

And it works. The group has shown that we can break the Quantum Speed Limit by combining the cerebral cortex and computer chips. This is the new powerful tool in the development of quantum computers and other quantum technologies.
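
The core idea, using human guesses as starting points that a machine then refines, is easy to caricature in code. This toy example is mine, not the CODER group’s algorithm: a greedy local search started from a ‘player’s’ good guess lands in a much better solution than the same search started blindly,

```python
# Toy illustration (my sketch, not the CODER group's method): a local search
# seeded with a good human guess beats the same search started at random.
import math, random

def cost(x):
    # A bumpy landscape standing in for a hard quantum-control problem:
    # many local minima, with the global minimum near x = 3.
    return (x - 3.0) ** 2 + math.sin(10.0 * x)

def hill_climb(x, steps=5000, step=0.05):
    # Greedy local search: accept a small random move only if it improves.
    for _ in range(steps):
        cand = x + random.uniform(-step, step)
        if cost(cand) < cost(x):
            x = cand
    return x

random.seed(0)
blind = hill_climb(random.uniform(-10.0, 10.0))  # machine alone: stuck in a dip
seeded = hill_climb(2.8)                         # "player" guess near the answer
print(f"blind start:  cost {cost(blind):.2f}")
print(f"seeded start: cost {cost(seeded):.2f}")
```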

After the buildup, the press release focuses on citizen science and computer games,

Science is often perceived as something distant and exclusive, conducted behind closed doors. To enter you have to go through years of education, and preferably have a doctorate or two. Now a completely different reality is materialising.

In recent years, a new phenomenon has appeared–citizen science breaks down the walls of the laboratory and invites in everyone who wants to contribute. The team at Aarhus University uses games to engage people in voluntary science research. Every week people around the world spend 3 billion hours playing games. Games are entering almost all areas of our daily life and have the potential to become an invaluable resource for science.

“Who needs a supercomputer if we can access even a small fraction of this computing power? By turning science into games, anyone can do research in quantum physics. We have shown that games break down the barriers between quantum physicists and people of all backgrounds, providing phenomenal insights into state-of-the-art research. Our project combines the best of both worlds and helps challenge established paradigms in computational research,” explains Jacob Sherson.

The difference between the machine and us, figuratively speaking, is that we intuitively reach for the needle in a haystack without knowing exactly where it is. We ‘guess’ based on experience and thereby skip a whole series of bad options. For Quantum Moves, intuitive human actions have been shown to be compatible with the best computer solutions. In the future it will be exciting to explore many other problems with the aid of human intuition.

“We are at the borderline of what we as humans can understand when faced with the problems of quantum physics. With the problem underlying Quantum Moves we give the computer every chance to beat us. Yet, over and over again we see that players are more efficient than machines at solving the problem. While Hollywood blockbusters on artificial intelligence are starting to seem increasingly realistic, our results demonstrate that the comparison between man and machine still sometimes favours us. We are very far from computers with human-type cognition,” says Jacob Sherson and continues:

“Our work is first and foremost a big step towards the understanding of quantum physical challenges. We do not know if this can be transferred to other challenging problems, but it is definitely something that we will work hard to resolve in the coming years.”

Here’s a link to and a citation for the paper,

Exploring the quantum speed limit with computer games by Jens Jakob W. H. Sørensen, Mads Kock Pedersen, Michael Munch, Pinja Haikka, Jesper Halkjær Jensen, Tilo Planke, Morten Ginnerup Andreasen, Miroslav Gajdacz, Klaus Mølmer, Andreas Lieberoth, & Jacob F. Sherson. Nature 532, 210–213 (14 April 2016) doi:10.1038/nature17620 Published online 13 April 2016

This paper is behind a paywall.

Plastic memristors for neural networks

There is a very nice explanation of memristors and computing systems from the Moscow Institute of Physics and Technology (MIPT). First their announcement, from a Jan. 27, 2016 news item on ScienceDaily,

A group of scientists has created a neural network based on polymeric memristors — devices that can potentially be used to build fundamentally new computers. These developments will primarily help in creating technologies for machine vision, hearing, and other machine sensory systems, and also for intelligent control systems in various fields of applications, including autonomous robots.

The authors of the new study focused on a promising area in the field of memristive neural networks – polymer-based memristors – and discovered that creating even the simplest perceptron is not that easy. In fact, it is so difficult that up until the publication of their paper in the journal Organic Electronics, there were no reports of any successful experiments (using organic materials). The experiments conducted at the Nano-, Bio-, Information and Cognitive Sciences and Technologies (NBIC) centre at the Kurchatov Institute by a joint team of Russian and Italian scientists demonstrated that it is possible to create very simple polyaniline-based neural networks. Furthermore, these networks are able to learn and perform specified logical operations.

A Jan. 27, 2016 MIPT press release on EurekAlert, which originated the news item, offers an explanation of memristors and a description of the research,

A memristor is an electric element similar to a conventional resistor. The difference between a memristor and a traditional element is that the electric resistance in a memristor is dependent on the charge passing through it, therefore it constantly changes its properties under the influence of an external signal: a memristor has a memory and at the same time is also able to change data encoded by its resistance state! In this sense, a memristor is similar to a synapse – a connection between two neurons in the brain that is able, with a high level of plasticity, to modify the efficiency of signal transmission between neurons under the influence of the transmission itself. A memristor enables scientists to build a “true” neural network, and the physical properties of memristors mean that at the very minimum they can be made as small as conventional chips.

Some estimates indicate that the size of a memristor can be reduced down to ten nanometers, and the technologies used in the manufacture of the experimental prototypes could, in theory, be scaled up to the level of mass production. However, as this is “in theory”, it does not mean that chips of a fundamentally new structure with neural networks will be available on the market any time soon, even in the next five years.

The plastic polyaniline was not chosen by chance. Previous studies demonstrated that it can be used to create individual memristors, so the scientists did not have to go through many different materials. Using a polyaniline solution, a glass substrate, and chromium electrodes, they created a prototype with dimensions that, at present, are much larger than those typically used in conventional microelectronics: the strip of the structure was approximately one millimeter wide (they decided to avoid miniaturization for the moment). All of the memristors were tested for their electrical characteristics: it was found that the current-voltage characteristic of the devices is in fact non-linear, which is in line with expectations. The memristors were then connected to a single neuromorphic network.

A current-voltage characteristic (or IV curve) is a graph where the horizontal axis represents voltage and the vertical axis the current. In a conventional resistor, the IV curve is a straight line; in strict accordance with Ohm’s Law, current is proportional to voltage. For a memristor, however, it is not just the voltage that is important, but the change in voltage: if you begin to gradually increase the voltage supplied to the memristor, it will increase the current passing through it not in a linear fashion, but with a sharp bend in the graph, and at a certain point its resistance will fall sharply.

Then if you begin to reduce the voltage, the memristor will remain in its conducting state for some time, after which it will change its properties rather sharply again to decrease its conductivity. On the increasing sweep, at 0.5 V the experimental samples hardly allowed any current to pass through (around a tenth of a microamp), but when the voltage came back down through the same value, the ammeter registered a figure of 5 microamps. Microamps are of course very small units, but in this case it is the contrast that is most significant: 0.1 μA to 5 μA is a difference of fifty times! This is more than enough to make a clear distinction between the two signals.
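
That switching behaviour is easy to caricature in code. Here’s a cartoon version (mine, not a physical model of the polyaniline cell) that sweeps the voltage up and then down and reproduces the quoted contrast: roughly 0.1 μA at 0.5 V on the way up, about 5 μA at the same voltage on the way back down,

```python
# Cartoon of the hysteresis described above (not a model of the actual cell):
# the device switches ON above a set voltage and stays ON as voltage falls,
# until a much lower reset voltage switches it OFF again.
def sweep(voltages, v_set=0.8, v_reset=0.1, g_off=0.2e-6, g_on=10e-6):
    # Conductances chosen so 0.5 V gives ~0.1 uA when OFF and ~5 uA when ON,
    # the fifty-fold contrast quoted in the press release.
    state, currents = "off", []
    for v in voltages:
        if state == "off" and v >= v_set:
            state = "on"            # the sharp bend: resistance falls
        elif state == "on" and v <= v_reset:
            state = "off"           # conductivity drops again
        currents.append(v * (g_on if state == "on" else g_off))
    return currents

up = [i * 0.1 for i in range(11)]             # 0.0 V up to 1.0 V
down = list(reversed(up))                     # and back down to 0.0 V
currents = sweep(up + down)
print(f"0.5 V going up:    {currents[5] * 1e6:.1f} uA")   # ~0.1 uA
print(f"0.5 V coming down: {currents[16] * 1e6:.1f} uA")  # ~5.0 uA
```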

After checking the basic properties of individual memristors, the physicists conducted experiments to train the neural network. The training (it is a generally accepted term and is therefore written without inverted commas) involves applying electric pulses at random to the inputs of a perceptron. If a certain combination of electric pulses is applied to the inputs of a perceptron (e.g. a logic one and a logic zero at two inputs) and the perceptron gives the wrong answer, a special correcting pulse is applied to it, and after a certain number of repetitions all the internal parameters of the device (namely memristive resistance) reconfigure themselves, i.e. they are “trained” to give the correct answer.

The scientists demonstrated that after about a dozen attempts their new memristive network is capable of performing NAND logical operations, and then it is also able to learn to perform NOR operations. Since it is an operator or a conventional computer that is used to check for the correct answer, this method is called the supervised learning method.
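
The training loop described here is essentially the classic perceptron learning rule, with electrical pulses in place of numerical weight updates. A minimal software version (the textbook algorithm, not the group’s hardware protocol) learns NAND in a handful of epochs,

```python
# Textbook perceptron learning of NAND: a software stand-in for the hardware
# training above, with "correcting pulses" replaced by weight updates.
def train(truth_table, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in truth_table:
            out = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            err = target - out          # a wrong answer triggers a correction
            w0 += lr * err * x0
            w1 += lr * err * x1
            bias += lr * err
    return w0, w1, bias

NAND = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w0, w1, b = train(NAND)
for (x0, x1), target in NAND:
    out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
    print(f"{x0} NAND {x1} -> {out} (expected {target})")
```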

Needless to say, an elementary perceptron of macroscopic dimensions with a characteristic reaction time of tenths or hundredths of a second is not an element that is ready for commercial production. However, as the researchers themselves note, their creation was made using inexpensive materials, and the reaction time will decrease as the size decreases: the first prototype was intentionally enlarged to make the work easier; it is physically possible to manufacture more compact chips. In addition, polyaniline can be used in attempts to make a three-dimensional structure by placing the memristors on top of one another in a multi-tiered structure (e.g. in the form of random intersections of thin polymer fibers), whereas modern silicon microelectronic systems, due to a number of technological limitations, are two-dimensional. The transition to the third dimension would potentially offer many new opportunities.

The press release goes on to explain what the researchers mean when they mention a fundamentally different computer,

The common classification of computers is based either on their casing (desktop/laptop/tablet), or on the type of operating system used (Windows/MacOS/Linux). However, this is only a very simple classification from a user perspective, whereas specialists normally use an entirely different approach – an approach that is based on the principle of organizing computer operations. The computers that we are used to, whether they be tablets, desktop computers, or even on-board computers on spacecraft, are all devices with von Neumann architecture; without going into too much detail, they are devices based on independent processors, random access memory (RAM), and read only memory (ROM).

The memory stores the code of a program that is to be executed. A program is a set of instructions that command certain operations to be performed with data. Data are also stored in the memory* and are retrieved from it (and also written to it) in accordance with the program; the program’s instructions are performed by the processor. There may be several processors, they can work in parallel, data can be stored in a variety of ways – but there is always a fundamental division between the processor and the memory. Even if the computer is integrated into one single chip, it will still have separate elements for processing information and separate units for storing data. At present, all modern microelectronic systems are based on this particular principle and this is partly the reason why most people are not even aware that there may be other types of computer systems – without processors and memory.

*) if physically different elements are used to store data and store a program, the computer is said to be built using Harvard architecture. This method is used in certain microcontrollers, and in small specialized computing devices. The chip that controls the function of a refrigerator, lift, or car engine (in all these cases a “conventional” computer would be redundant) is a microcontroller. However, neither Harvard, nor von Neumann architectures allow the processing and storage of information to be combined into a single element of a computer system.
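
To make the processor/memory split concrete, here’s a toy stored-program machine (my own illustration, not anything from the press release): one list holds both the program and the data, and a separate fetch-execute loop plays the role of the processor,

```python
# A toy von Neumann machine: one memory holds both instructions and data,
# while a separate fetch-execute loop acts as the processor. Nothing here
# computes "in memory"; storage and processing stay strictly divided.
def run(memory):
    acc, pc = 0, 0                             # accumulator, program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]   # fetch from the shared memory
        pc += 2
        if op == "LOAD":
            acc = memory[arg]                  # read a data cell
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc                  # write a data cell
        elif op == "HALT":
            return memory

memory = [
    "LOAD", 8,      # addresses 0-7: the program
    "ADD", 9,
    "STORE", 10,
    "HALT", 0,
    2, 3, 0,        # addresses 8-10: the data (2 + 3 stored into cell 10)
]
print(run(memory)[10])   # prints 5
```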

However, such systems do exist. Furthermore, if you look at the brain itself as a computer system (this is purely hypothetical at the moment: it is not yet known whether the function of the brain is reducible to computations), then you will see that it is not at all built like a computer with von Neumann architecture. Neural networks do not have a specialized computer or separate memory cells. Information is stored and processed in each and every neuron, one element of the computer system, and the human brain has approximately 100 billion of these elements. In addition, almost all of them are able to work in parallel (simultaneously), which is why the brain is able to process information with great efficiency and at such high speed. Artificial neural networks that are currently implemented on von Neumann computers only emulate these processes: emulation, i.e. step by step imitation of functions inevitably leads to a decrease in speed and an increase in energy consumption. In many cases this is not so critical, but in certain cases it can be.

Devices that do not simply imitate the function of neural networks, but are fundamentally the same could be used for a variety of tasks. Most importantly, neural networks are capable of pattern recognition; they are used as a basis for recognising handwritten text for example, or signature verification. When a certain pattern needs to be recognised and classified, such as a sound, an image, or characteristic changes on a graph, neural networks are actively used and it is in these fields where gaining an advantage in terms of speed and energy consumption is critical. In a control system for an autonomous flying robot every milliwatt-hour and every millisecond counts, just in the same way that a real-time system to process data from a collider detector cannot take too long to “think” about highlighting particle tracks that may be of interest to scientists from among a large number of other recorded events.

Bravo to the writer!

Here’s a link to and a citation for the paper,

Hardware elementary perceptron based on polyaniline memristive devices by V.A. Demin, V.V. Erokhin, A.V. Emelyanov, S. Battistoni, G. Baldi, S. Iannotta, P.K. Kashkarov, & M.V. Kovalchuk. Organic Electronics, Volume 25, October 2015, Pages 16–20. doi:10.1016/j.orgel.2015.06.015

This paper is behind a paywall.

Water-fueled computer

A computer fueled by water? A fascinating concept as described in a June 9, 2015 news item on ScienceDaily,

Computers and water typically don’t mix, but in Manu Prakash’s lab, the two are one and the same. Prakash, an assistant professor of bioengineering at Stanford, and his students have built a synchronous computer that operates using the unique physics of moving water droplets.

A June 8, 2015 Stanford University news release by Bjorn Carey, which originated the news item, details the ideas (new applications) and research (open access to the tools for creating water droplet-fueled circuits) further,

The computer is nearly a decade in the making, incubated from an idea that struck Prakash when he was a graduate student. The work combines his expertise in manipulating droplet fluid dynamics with a fundamental element of computer science – an operating clock.

“In this work, we finally demonstrate a synchronous, universal droplet logic and control,” Prakash said.

Because of its universal nature, the droplet computer can theoretically perform any operation that a conventional electronic computer can crunch, although at significantly slower rates. Prakash and his colleagues, however, have a more ambitious application in mind.

“We already have digital computers to process information. Our goal is not to compete with electronic computers or to operate word processors on this,” Prakash said. “Our goal is to build a completely new class of computers that can precisely control and manipulate physical matter. Imagine if when you run a set of computations that not only information is processed but physical matter is algorithmically manipulated as well. We have just made this possible at the mesoscale.”

The ability to precisely control droplets using fluidic computation could have a number of applications in high-throughput biology and chemistry, and possibly new applications in scalable digital manufacturing.

The crucial clock

For nearly a decade since he was in graduate school, an idea has been nagging at Prakash: What if he could use little droplets as bits of information and utilize the precise movement of those drops to process both information and physical materials simultaneously? Eventually, Prakash decided to build a rotating magnetic field that could act as a clock to synchronize all the droplets. The idea showed promise, and in the early stages of the project, Prakash recruited a graduate student, Georgios “Yorgos” Katsikis, who is the first author on the paper.

Computer clocks are responsible for nearly every modern convenience. Smartphones, DVRs, airplanes, the Internet – without a clock, none of these could operate without frequent and serious complications. Nearly every computer program requires several simultaneous operations, each conducted in a perfect step-by-step manner. A clock makes sure that these operations start and stop at the same times, thus ensuring that the information synchronizes.

The results are dire if a clock isn’t present. It’s like soldiers marching in formation: If one person falls dramatically out of time, it won’t be long before the whole group falls apart. The same is true if multiple simultaneous computer operations run without a clock to synchronize them, Prakash explained.

“The reason computers work so precisely is that every operation happens synchronously; it’s what made digital logic so powerful in the first place,” Prakash said.

A magnetic clock

Developing a clock for a fluid-based computer required some creative thinking. It needed to be easy to manipulate, and also able to influence multiple droplets at a time. The system needed to be scalable so that in the future, a large number of droplets could communicate amongst each other without skipping a beat. Prakash realized that a rotating magnetic field might do the trick.

Katsikis and Prakash built arrays of tiny iron bars on glass slides that look something like a Pac-Man maze. They laid a blank glass slide on top and sandwiched a layer of oil in between. Then they carefully injected into the mix individual water droplets that had been infused with tiny magnetic nanoparticles.

Next, they turned on the magnetic field. Every time the field flips, the polarity of the bars reverses, drawing the magnetized droplets in a new, predetermined direction, like slot cars on a track. Every rotation of the field counts as one clock cycle, like a second hand making a full circle on a clock face, and every drop marches exactly one step forward with each cycle.

A camera records the interactions between individual droplets, allowing observation of computation as it occurs in real time. The presence or absence of a droplet represents the 1s and 0s of binary code, and the clock ensures that all the droplets move in perfect synchrony, and thus the system can run virtually forever without any errors.

“Following these rules, we’ve demonstrated that we can make all the universal logic gates used in electronics, simply by changing the layout of the bars on the chip,” said Katsikis. “The actual design space in our platform is incredibly rich. Give us any Boolean logic circuit in the world, and we can build it with these little magnetic droplets moving around.”

The current paper describes the fundamental operating regime of the system and demonstrates building blocks for synchronous logic gates, feedback and cascadability – hallmarks of scalable computation. A simple state machine including 1-bit memory storage (known as a “flip-flop”) is also demonstrated using the above basic building blocks.
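
As a rough software analogue of all this (my abstraction, not the actual chip geometry): droplet presence is a 1, absence is a 0, and every rotation of the field advances each droplet one track position, keeping everything synchronous. The toy below just captures the input/output behaviour of a combined AND/OR junction,

```python
# Toy clocked droplet logic (my abstraction, not the Stanford chip layout):
# a droplet's presence is a 1, its absence a 0, and every clock cycle (one
# field rotation) advances each droplet exactly one position, in sync.
def droplet_gate(a, b, track_len=4):
    track_a = [bool(a)] + [False] * (track_len - 1)   # input droplets enter
    track_b = [bool(b)] + [False] * (track_len - 1)   # at the left end
    for _ in range(track_len - 1):                    # clock the field
        track_a = [False] + track_a[:-1]              # one step per cycle
        track_b = [False] + track_b[:-1]
    arrived_a, arrived_b = track_a[-1], track_b[-1]
    # At the junction two droplets interact; the routing yields an AND
    # output (both present) and an OR output (at least one present).
    return int(arrived_a and arrived_b), int(arrived_a or arrived_b)

for a in (0, 1):
    for b in (0, 1):
        g_and, g_or = droplet_gate(a, b)
        print(f"a={a} b={b} -> AND={g_and} OR={g_or}")
```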

A new way to manipulate matter

The current chips are about half the size of a postage stamp, and the droplets are smaller than poppy seeds, but Katsikis said that the physics of the system suggests it can be made even smaller. Combined with the fact that the magnetic field can control millions of droplets simultaneously, this makes the system exceptionally scalable.

“We can keep making it smaller and smaller so that it can do more operations per time, so that it can work with smaller droplet sizes and do more number of operations on a chip,” said graduate student and co-author Jim Cybulski. “That lends itself very well to a variety of applications.”

Prakash said the most immediate application might involve turning the computer into a high-throughput chemistry and biology laboratory. Instead of running reactions in bulk test tubes, each droplet can carry some chemicals and become its own test tube, and the droplet computer offers unprecedented control over these interactions.

From the perspective of basic science, part of why the work is so exciting, Prakash said, is that it opens up a new way of thinking about computation in the physical world. Although the physics of computation has previously been applied to understand the limits of computation, the physical aspects of bits of information have never been exploited as a new way to manipulate matter at the mesoscale (10 microns to 1 millimeter).

Because the system is extremely robust and the team has uncovered universal design rules, Prakash plans to make a design tool for these droplet circuits available to the public. Any group of people can now cobble together the basic logic blocks and make any complex droplet circuit they desire.

“We’re very interested in engaging anybody and everybody who wants to play, to enable everyone to design new circuits based on building blocks we describe in this paper or discover new blocks. Right now, anyone can put these circuits together to form a complex droplet processor with no external control – something that was a very difficult challenge previously,” Prakash said.

“If you look back at big advances in society, computation takes a special place. We are trying to bring the same kind of exponential scale up because of computation we saw in the digital world into the physical world.”

Here’s a link to and a citation for the paper,

Synchronous universal droplet logic and control by Georgios Katsikis, James S. Cybulski, & Manu Prakash. Nature Physics (2015) doi:10.1038/nphys3341 Published online 08 June 2015

This paper is behind a paywall.

For anyone interested in creating a water-fueled circuit, you could try contacting Manu Prakash, Bioengineering: manup@stanford.edu

Ada Lovelace Day tomorrow: Tuesday, Oct. 14, 2014

Tomorrow you can celebrate Ada Lovelace Day 2014. A remarkable thinker, Lovelace (1815 – 1852) suggested computers could be used to create music and art, as well as for more practical activities. By the way, her father was the ‘mad, bad, and dangerous to know’ poet, Lord Byron, who called her mother, Anne Isabella Milbanke (she had a complex set of names and titles), the ‘princess of parallelograms’ due to her (Milbanke’s) interest in mathematics.

Thanks to David Bruggeman and an Oct. 8, 2014 post on his Pasco Phronesis blog, I’ve found out about some events planned for this year’s Ada Lovelace Day before the fact rather than the ‘day of’ as I did last year (Oct. 15, 2013 post).

Here’s more from David’s Oct. 8, 2014 post (Note: Links have been removed),

In New York City, one of the commemorations of Ada Lovelace Day involves an opera on her life. Called Ada, the opera will have selections performed on October 14 [2014].

You can find out more about the opera and the performance on David’s blog post, which also includes video clips from a rehearsal for the opera and comments from the librettist and the composer.

Ada Lovelace Day was founded in 2009 by Suw Charman-Anderson and it’s been gaining momentum ever since. While Charman-Anderson’s Ada Lovelace website doesn’t offer an up-to-date history of the event, there is this about the 2012 celebration (from the History of Ada Lovelace Day page),

… In all, there were 25 independently-organised grassroots events in the UK, Brazil, Canada, Colombia, Italy, Slovenia, Sweden and the USA, as well as online.

This year’s event includes:

Tuesday 14 October 2014

Ada Lovelace Day is an international celebration of the achievements of women in science, technology, engineering and maths (STEM).

Write about an inspiring woman in STEM

Every year we encourage you to take part, no matter where you are, by writing something about a woman, or women, in STEM whose achievements you admire. When your blog post is ready, you can add it to our list, and once we’re properly underway, you’ll be able to browse our list to see who inspires other people!

Ada Lovelace Day Live!

Tickets are now on sale for our amazing evening event [in London, England], featuring mathematician Dr Hannah Fry, musician Caro C, structural engineer Roma Agrawal, geneticist Dr Turi King, TV presenter Konnie Huq, artist Naomi Kashiwagi, technologist Steph Troeth, and physicist Dr Helen Czerski, hosted by our inimitable ALD [Ada Lovelace Day] Live producer, Helen Arney!

This event is free for Ri [Royal Institution] Members and Fellows, £6 for Ri Associates, £8 for Concessions and £12 for everyone else. Buy your tickets now, find out more about the event, or see accessibility information for the venue.

Ada Lovelace Day for Schools

The support of the Ri has this year allowed us to put together an afternoon event for 11 – 16 year olds, exploring the role and work of women in STEM. Speakers include sustainability innovator Rachel Armstrong, neuroscientist Sophie Scott, mathematician Hannah Fry, roboticist and theremin player Sarah Angliss, engineer Roma Agrawal, and dwarf mammoth expert Victoria Herridge; the event is hosted by our very own Helen Arney! Tickets cost £3 per person, and are on sale now! [London, England] Find out more about the event or see accessibility information for the venue.

The organizers are currently running an indiegogo crowdfunding campaign (Ada Lovelace Day Live! 2014) to raise £2,000 to cover costs for videography and photography of the events in London, England. They have progressed a little over halfway towards their goal. The last day to contribute is Oct. 27, 2014.

One last tidbit, James Essinger’s book, Ada’s Algorithm, is being released on Oct. 14, 2014 in the US. The book was published last year in the UK. Sophia Stuart, in an Oct. 10, 2014 article for PC Magazine about the upcoming US release of Essinger’s book, wrote this,

A natural affinity for computer programming requires an unusual blend of arts and sciences; from appreciating the beauty of mathematics and the architectural composition of language via a vision for engineering, coupled with a meticulous attention to detail (and an ability to subsist on little sleep).

Ada Lovelace, considered to be the world’s first computer programmer, fits the profile perfectly, and is the subject of James Essinger’s book Ada’s Algorithm. Ada’s mother was a gifted mathematician and her father was the poet Lord Byron. In 1828, at the age of 12, Ada was multi-lingual while also teaching herself geometry, sketching plans for self-powered flight by studying birds and their wingspan, and imagining the future of aviation 75 years before the Wright Brothers’ first flight.

“In the form of a horse with a steamengine in the inside so contrived as to move an immense pair of wings,” she wrote in an April 7, 1828 letter to her mother.

Don’t forget, Ada Lovelace Day is tomorrow and perhaps in honour of her you can give your imagination permission to fly free for at least a moment or two.

Happy Thanksgiving today, Oct. 13, 2014 for Canadians of all stripes, those who were born here, those who are citizens (past and present), and those who choose to be Canadian in spirit for a day.

Better RRAM memory devices in the short term

Given my recent spate of posts about computing and the future of the chip (list to follow at the end of this post), this Rice University [Texas, US] research suggests that some improvements to current memory devices might be coming to the market in the near future. From a July 12, 2014 news item on Azonano,

Rice University’s breakthrough silicon oxide technology for high-density, next-generation computer memory is one step closer to mass production, thanks to a refinement that will allow manufacturers to fabricate devices at room temperature with conventional production methods.

A July 10, 2014 Rice University news release, which originated the news item, provides more detail,

Tour [Rice chemist James Tour] and colleagues began work on their breakthrough RRAM technology more than five years ago. The basic concept behind resistive memory devices is the insertion of a dielectric material — one that won’t normally conduct electricity — between two wires. When a sufficiently high voltage is applied across the wires, a narrow conduction path can be formed through the dielectric material.

The presence or absence of these conduction pathways can be used to represent the binary 1s and 0s of digital data. Research with a number of dielectric materials over the past decade has shown that such conduction pathways can be formed, broken and reformed thousands of times, which means RRAM can be used as the basis of rewritable random-access memory.

RRAM is under development worldwide and expected to supplant flash memory technology in the marketplace within a few years because it is faster than flash and can pack far more information into less space. For example, manufacturers have announced plans for RRAM prototype chips that will be capable of storing about one terabyte of data on a device the size of a postage stamp — more than 50 times the data density of current flash memory technology.

The key ingredient of Rice’s RRAM is its dielectric component, silicon oxide. Silicon is the most abundant element on Earth and the basic ingredient in conventional microchips. Microelectronics fabrication technologies based on silicon are widespread and easily understood, but until the 2010 discovery of conductive filament pathways in silicon oxide in Tour’s lab, the material wasn’t considered an option for RRAM.

Since then, Tour’s team has raced to further develop its RRAM and even used it for exotic new devices like transparent flexible memory chips. At the same time, the researchers also conducted countless tests to compare the performance of silicon oxide memories with competing dielectric RRAM technologies.

“Our technology is the only one that satisfies every market requirement, both from a production and a performance standpoint, for nonvolatile memory,” Tour said. “It can be manufactured at room temperature, has an extremely low forming voltage, high on-off ratio, low power consumption, nine-bit capacity per cell, exceptional switching speeds and excellent cycling endurance.”

In the latest study, a team headed by lead author and Rice postdoctoral researcher Gunuk Wang showed that using a porous version of silicon oxide could dramatically improve Rice’s RRAM in several ways. First, the porous material reduced the forming voltage — the power needed to form conduction pathways — to less than two volts, a 13-fold improvement over the team’s previous best and a number that stacks up against competing RRAM technologies. In addition, the porous silicon oxide also allowed Tour’s team to eliminate the need for a “device edge structure.”

“That means we can take a sheet of porous silicon oxide and just drop down electrodes without having to fabricate edges,” Tour said. “When we made our initial announcement about silicon oxide in 2010, one of the first questions I got from industry was whether we could do this without fabricating edges. At the time we could not, but the change to porous silicon oxide finally allows us to do that.”

Wang said, “We also demonstrated that the porous silicon oxide material increased the endurance cycles more than 100 times as compared with previous nonporous silicon oxide memories. Finally, the porous silicon oxide material has a capacity of up to nine bits per cell, which is the highest number among oxide-based memories, and the multibit capacity is unaffected by high temperatures.”
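
“Nine bits per cell” is worth unpacking: it means each cell must hold one of 2^9 = 512 distinguishable resistance levels, and it multiplies the density of a one-bit-per-cell memory ninefold. Simple arithmetic, not device data,

```python
# What "nine bits per cell" means (simple arithmetic, not device data).
bits_per_cell = 9
print(2 ** bits_per_cell)       # 512 distinguishable resistance levels per cell
cells = 1_000_000
print(cells * bits_per_cell)    # a million cells store 9,000,000 bits,
                                # nine times a one-bit-per-cell memory
```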

Tour said the latest developments with porous silicon oxide — reduced forming voltage, elimination of need for edge fabrication, excellent endurance cycling and multi-bit capacity — are extremely appealing to memory companies.

“This is a major accomplishment, and we’ve already been approached by companies interested in licensing this new technology,” he said.

Here’s a link to and a citation for the paper,

Nanoporous Silicon Oxide Memory by Gunuk Wang, Yang Yang, Jae-Hwang Lee, Vera Abramova, Huilong Fei, Gedeng Ruan, Edwin L. Thomas, and James M. Tour. Nano Lett., Article ASAP. DOI: 10.1021/nl501803s. Publication Date (Web): July 3, 2014

Copyright © 2014 American Chemical Society

This paper is behind a paywall.

As for my recent spate of posts on computers and chips, there’s a July 11, 2014 posting about IBM, a 7nm chip, and much more; a July 9, 2014 posting about Intel and its 14nm low-power chip processing and plans for a 10nm chip; and, finally, a June 26, 2014 posting about HP Labs and its plans for memristive-based computing and their project dubbed ‘The Machine’.

Learn to love slime; it may help you to compute in the future

Eeeewww! Slime or slime mold is not well loved and yet scientists seem to retain a certain affection for it, if their efforts at researching ways to make it useful could be termed affection. A March 27, 2014 news item on Nanowerk highlights a project where scientists have used slime and nanoparticles to create logic units (precursors to computers; Note: A link has been removed),

A future computer might be a lot slimier than the solid silicon devices we have today. In a study published in the journal Materials Today (“Slime mold microfluidic logical gates”), European researchers reveal details of logic units built using living slime molds, which might act as the building blocks for computing devices and sensors.

The March 27, 2014 Elsevier press release, which originated the news item, describes the researchers and their work in more detail,

Andrew Adamatzky (University of the West of England, Bristol, UK) and Theresa Schubert (Bauhaus-University Weimar, Germany) have constructed logical circuits that exploit networks of interconnected slime mold tubes to process information.

One is more likely to find the slime mold Physarum polycephalum living somewhere dark and damp rather than in a computer science lab. In its “plasmodium” or vegetative state, the organism spans its environment with a network of tubes that absorb nutrients. The tubes also allow the organism to respond to light and changing environmental conditions that trigger the release of reproductive spores.

In earlier work, the team demonstrated that such a tube network could absorb and transport different colored dyes. They then fed it edible nutrients – oat flakes – to attract tube growth and common salt to repel them, so that they could grow a network with a particular structure. They then demonstrated how this system could mix two dyes to make a third color as an “output”.

Using the dyes with magnetic nanoparticles and tiny fluorescent beads allowed them to use the slime mold network as a biological “lab-on-a-chip” device. This represents a new way to build microfluidic devices for processing environmental or medical samples on the very small scale for testing and diagnostics, the work suggests. The extension to a much larger network of slime mold tubes could process nanoparticles and carry out sophisticated Boolean logic operations of the kind used by computer circuitry. The team has so far demonstrated that a slime mold network can carry out XOR or NOR Boolean operations. Chaining together arrays of such logic gates might allow a slime mold computer to carry out binary operations for computation.
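
The Boolean claims are easy to check on paper. NOR in particular is a universal gate: chained NOR operations alone can reproduce NOT, OR, and AND, which is why demonstrating it in a slime mold network matters. A quick check in code (plain Boolean logic, nothing mold-specific),

```python
# Plain Boolean logic, nothing mold-specific: the two demonstrated gates,
# plus a check that NOR alone can rebuild NOT, OR, and AND (universality).
def XOR(a, b): return a ^ b
def NOR(a, b): return int(not (a or b))

def NOT(a):    return NOR(a, a)
def OR(a, b):  return NOT(NOR(a, b))
def AND(a, b): return NOR(NOT(a), NOT(b))

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}: XOR={XOR(a, b)} NOR={NOR(a, b)} "
              f"OR-from-NOR={OR(a, b)} AND-from-NOR={AND(a, b)}")
```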

“The slime mold based gates are non-electronic, simple and inexpensive, and several gates can be realized simultaneously at the sites where protoplasmic tubes merge,” conclude Adamatzky and Schubert.

Are we entering the age of the biological computer? Stewart Bland, Editor of Materials Today, believes that “although more traditional electronic materials are here to stay, research such as this is helping to push and blur the boundaries of materials science, computer science and biology, and represents an exciting prospect for the future.”

I did look at the researchers’ paper and it is fascinating even to someone (me) who doesn’t understand the science very well. Here’s a link to and a citation for the paper,

Slime mold microfluidic logical gates by Andrew Adamatzky and Theresa Schubert. Materials Today, Volume 17, Issue 2, March 2014, Pages 86–91, published by Elsevier. http://dx.doi.org/10.1016/j.mattod.2014.01.018 The article is available for free at www.materialstoday.com

Yes, it’s an open access paper published by Elsevier, good on them!

Antikythera: ancient computer and a 100-year adventure

This post has been almost two years in the making, which seems laughable when considering that the story starts in 100 BCE (before the common era).

Picture ancient Greece and a Roman sailing ship holding an object we now call the Antikythera mechanism, named after the Greek island near which the ship was wrecked and where it lay undiscovered until 1900. From the Dec. 10, 2010 posting by GrrlScientist on the Guardian science blogs,

Two years ago [2008], a paper was published in Nature describing the function of the oldest known scientific computer, a device built in Greece around 100 BCE. Recovered in 1901 from a shipwreck near the island of Antikythera, this mechanism had been lost and unknown for 2000 years. It took one century for scientists to understand its purpose: it is an astronomical clock that determines the positions of celestial bodies with extraordinary precision. In 2010, a fully-functional replica was constructed out of Lego.

Here’s the video mentioned by GrrlScientist,

As noted in the video, it is a replica that requires twice as many gears as the original to make the same calculations. It seems we still haven’t quite caught up with the past.

Bob Yirka’s April 4, 2011 article for phys.org describes some of the research involved in decoding the mechanism,

If modern research is correct, the device worked by hand cranking a main dial to display a chosen date, causing the wheels and gears inside to display (via tabs on separate dials) the position of the sun, moon, and the five known planets at that time, for that date; a mechanical and technical feat that would not be seen again until the fourteenth century in Europe with precision clocks.

Now James Evans and his colleagues at the University of Puget Sound in Washington State have shown that, instead of trying to use the same kind of gear mechanism to account for the elliptical path the Earth takes around the sun and the subsequent apparent changes in speed, the inventor of the device may have taken a different tack: stretching or distorting the zodiac on the dial face, changing the width of the spaces on the face to make up for the slightly different amount of time represented as the hand moves around it.

In a paper published in the Journal for the History of Astronomy, Evans describes how he and his team examined x-rays taken of the corroded machine (first 69 and later 88 degrees of the circle) and discovered that the two circles used to represent the Zodiac and the Egyptian calendar, respectively, did indeed differ just enough to account for what appeared to be irregular movement during different parts of the year.

Though not all experts agree on the findings, this new evidence does appear to suggest that an attempt was made by the early inventor to take into account the elliptical nature of the Earth orbiting the sun, no small thing.

Jenny Winder’s June 11, 2012 article for Universe Today and republished on phys.org provides more details about the gears and the theories behind the device,

The device is made of bronze and contains 30 gears though it may have had as many as 72 originally. Each gear was meticulously hand cut with between 15 and 223 triangular teeth, which were the key to discovering the mechanism’s various functions. It was based on theories of astronomy and mathematics developed by Greek astronomers who may have drawn from earlier Babylonian astronomical theories and its construction could be attributed to the astronomer Hipparchus or, more likely, Archimedes the famous Greek mathematician, physicist, engineer, inventor and astronomer. … [emphases mine]

I’ve highlighted the verbs which suggest they’re still conjecturing as to where the theories and knowledge to develop this ancient computer came from. Yirka’s article mentions that some folks believe the Antikythera mechanism may be the result of alien visitations, along with the more academic guesses about the Babylonians and the Greeks.
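
The arithmetic those hand-cut gears encode can be checked directly. Published reconstructions (Freeth et al.’s 2006 Nature study) report tooth counts for the lunar gear train whose ratios multiply out to 254/19: 254 sidereal lunar revolutions in 19 years, a relation tied to the Metonic cycle (19 years = 235 synodic months, and 235 + 19 = 254). Treat the specific counts below as literature values I’m quoting from memory, not something verified against the artifact,

```python
from fractions import Fraction

# Lunar gear train tooth counts as reported in published reconstructions
# (Freeth et al., Nature, 2006); literature values, quoted from memory.
# Each pair is (driving gear, driven gear); ratios multiply along the train.
train = [(64, 38), (48, 24), (127, 32)]

ratio = Fraction(1)
for driver, driven in train:
    ratio *= Fraction(driver, driven)

print(ratio)          # 254/19: sidereal lunar revolutions per year
print(float(ratio))   # ~13.368, the Moon's sidereal rate the bronze encodes
```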

I strongly recommend reading the articles and chasing down more videos about the Antikythera mechanism on YouTube as the story is fascinating and, given the plethora of material (including a book and website by Jo Marchant, Decoding the Heavens), I don’t seem to be alone in my fascination.