
2016 Nobel Chemistry Prize for molecular machines

Wednesday, Oct. 5, 2016 was the day three scientists were named winners of the Nobel Prize in Chemistry for their work on molecular machines, according to an Oct. 5, 2016 news item on phys.org,

Three scientists won the Nobel Prize in chemistry on Wednesday [Oct. 5, 2016] for developing the world’s smallest machines, 1,000 times thinner than a human hair but with the potential to revolutionize computer and energy systems.

Frenchman Jean-Pierre Sauvage, Scottish-born Fraser Stoddart and Dutch scientist Bernard “Ben” Feringa share the 8 million kronor ($930,000) prize for the “design and synthesis of molecular machines,” the Royal Swedish Academy of Sciences said.

Machines at the molecular level have taken chemistry to a new dimension and “will most likely be used in the development of things such as new materials, sensors and energy storage systems,” the academy said.

Practical applications are still far away—the academy said molecular motors are at the same stage that electrical motors were in the first half of the 19th century—but the potential is huge.

Dexter Johnson in an Oct. 5, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some insight into the matter (Note: A link has been removed),

In what seems to have come both as a shock to some of the recipients and a confirmation to all those who envision molecular nanotechnology as the true future of nanotechnology, Bernard Feringa, Jean-Pierre Sauvage, and Sir J. Fraser Stoddart have been awarded the 2016 Nobel Prize in Chemistry for their development of molecular machines.

The Nobel Prize was awarded to all three of the scientists based on their complementary work over nearly three decades. First, in 1983, Sauvage (currently at Strasbourg University in France) was able to link two ring-shaped molecules to form a chain. Then, eight years later, Stoddart, a professor at Northwestern University in Evanston, Ill., demonstrated that a molecular ring could turn on a thin molecular axle. Then, eight years after that, Feringa, a professor at the University of Groningen, in the Netherlands, built on Stoddart’s work and fabricated a molecular rotor blade that could spin continually in the same direction.

Speaking of the Nobel committee’s selection, Donna Nelson, a chemist and president of the American Chemical Society told Scientific American: “I think this topic is going to be fabulous for science. When the Nobel Prize is given, it inspires a lot of interest in the topic by other researchers. It will also increase funding.” Nelson added that this line of research will be fascinating for kids. “They can visualize it, and imagine a nanocar. This comes at a great time, when we need to inspire the next generation of scientists.”

The Economist, which appears to be previewing an article about the 2016 Nobel prizes ahead of the print version, has this to say in its Oct. 8, 2016 article,

BIGGER is not always better. Anyone who doubts that has only to look at the explosion of computing power which has marked the past half-century. This was made possible by continual shrinkage of the components computers are made from. That success has, in turn, inspired a search for other areas where shrinkage might also yield dividends.

One such, which has been poised delicately between hype and hope since the 1990s, is nanotechnology. What people mean by this term has varied over the years—to the extent that cynics might be forgiven for wondering if it is more than just a fancy rebranding of the word “chemistry”—but nanotechnology did originally have a fairly clear definition. It was the idea that machines with moving parts could be made on a molecular scale. And in recognition of this goal Sweden’s Royal Academy of Science this week decided to award this year’s Nobel prize for chemistry to three researchers, Jean-Pierre Sauvage, Sir Fraser Stoddart and Bernard Feringa, who have never lost sight of nanotechnology’s original objective.

Optimists talk of manufacturing molecule-sized machines ranging from drug-delivery devices to miniature computers. Pessimists recall that nanotechnology is a field that has been puffed up repeatedly by both researchers and investors, only to deflate in the face of practical difficulties.

There is, though, reason to hope it will work in the end. This is because, as is often the case with human inventions, Mother Nature has got there first. One way to think of living cells is as assemblies of nanotechnological machines. For example, the enzyme that produces adenosine triphosphate (ATP)—a molecule used in almost all living cells to fuel biochemical reactions—includes a spinning molecular machine rather like Dr Feringa’s invention. This works well. The ATP generators in a human body turn out so much of the stuff that over the course of a day they create almost a body-weight’s-worth of it. Do something equivalent commercially, and the hype around nanotechnology might prove itself justified.
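The Economist’s “body-weight’s-worth” figure checks out on the back of an envelope. Here is a quick sketch of that arithmetic (the numbers are standard textbook approximations, not taken from the article):

```python
# Back-of-envelope check of daily ATP turnover (approximate textbook values).
DAILY_ENERGY_KJ = 8_400      # ~2,000 kcal of metabolic energy per day
KJ_PER_MOL_ATP = 50          # ~free energy released per mole of ATP hydrolyzed in vivo
ATP_MOLAR_MASS_G = 507       # molar mass of ATP in g/mol

moles_atp_per_day = DAILY_ENERGY_KJ / KJ_PER_MOL_ATP            # ~168 mol
kg_atp_per_day = moles_atp_per_day * ATP_MOLAR_MASS_G / 1000    # ~85 kg

print(round(kg_atp_per_day), "kg of ATP turned over per day")
```

Roughly 85 kg per day, which is indeed on the order of a human body weight (the body recycles a small standing pool of ATP over and over rather than holding it all at once).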

Congratulations to the three winners!

Biological supercomputers (living, breathing supercomputers) and an international collaboration spearheaded by Canadian scientists

A living, breathing supercomputer is a bit mind-boggling but scientists at McGill University (Canada) and their international colleagues have created a working model according to a Feb. 26, 2016 McGill University news release on EurekAlert (and received via email), Note: A link has been removed,

The substance that provides energy to all the cells in our bodies, adenosine triphosphate (ATP), may also be able to power the next generation of supercomputers. That is what an international team of researchers led by Prof. Nicolau, the Chair of the Department of Bioengineering at McGill, believes. They published an article on the subject earlier this week in the Proceedings of the National Academy of Sciences (PNAS), in which they describe a model of a biological computer they have created that is able to process information very quickly and accurately using parallel networks, in the same way that massive electronic supercomputers do.

Except that the model bio supercomputer they have created is a whole lot smaller than current supercomputers, uses much less energy, and uses proteins present in all living cells to function.

Doodling on the back of an envelope

“We’ve managed to create a very complex network in a very small area,” says Dan Nicolau, Sr. with a laugh. He began working on the idea with his son, Dan Jr., more than a decade ago and was then joined by colleagues from Germany, Sweden and The Netherlands, some 7 years ago [there is also one collaborator from the US, according to the journal’s [PNAS] list of author affiliations; read on for the link to the paper]. “This started as a back of an envelope idea, after too much rum I think, with drawings of what looked like small worms exploring mazes.”

The model bio-supercomputer that the Nicolaus (father and son) and their colleagues have created came about thanks to a combination of geometrical modelling and engineering know-how (on the nanoscale). It is a first step in showing that this kind of biological supercomputer can actually work.

The circuit the researchers have created looks a bit like a road map of a busy and very organized city as seen from a plane. Just as in a city, cars and trucks of different sizes, powered by motors of different kinds, navigate through channels that have been created for them, consuming the fuel they need to keep moving.

More sustainable computing

But in the case of the biocomputer, the city is a chip measuring about 1.5 cm square in which channels have been etched. Instead of the electrons that are propelled by an electrical charge and move around within a traditional microchip, short strings of proteins (which the researchers call biological agents) travel around the circuit in a controlled way, their movements powered by ATP, the chemical that is, in some ways, the juice of life for everything from plants to politicians.

Because it is run by biological agents, and as a result hardly heats up at all, the model bio-supercomputer that the researchers have developed uses far less energy than standard electronic supercomputers do, making it more sustainable. Traditional supercomputers use so much electricity that they heat up a lot and then need to be cooled down, often requiring their own power plant to function.

Moving from model to reality

Although the model bio supercomputer was able to very efficiently tackle a complex classical mathematical problem by using parallel computing of the kind used by supercomputers, the researchers recognize that there is still a lot of work ahead to move from the model they have created to a full-scale functional computer.

“Now that this model exists as a way of successfully dealing with a single problem, there are going to be many others who will follow up and try to push it further, using different biological agents, for example,” says Nicolau. “It’s hard to say how soon it will be before we see a full scale bio super-computer. One option for dealing with larger and more complex problems may be to combine our device with a conventional computer to form a hybrid device. Right now we’re working on a variety of ways to push the research further.”

What was once the stuff of science fiction is now just science.

The funding for this project is interesting,

This research was funded by: The European Union Seventh Framework Programme; [US] Defense Advanced Research Projects Agency [DARPA]; NanoLund; The Miller Foundation; The Swedish Research Council; The Carl Trygger Foundation; the German Research Foundation; and by Linnaeus University.

I don’t see a single Canadian funding agency listed.

In any event, here’s a link to and a citation for the paper,

Parallel computation with molecular-motor-propelled agents in nanofabricated networks by Dan V. Nicolau, Jr., Mercy Lard, Till Korten, Falco C. M. J. M. van Delft, Malin Persson, Elina Bengtsson, Alf Månsson, Stefan Diez, Heiner Linke, and Dan V. Nicolau. Proceedings of the National Academy of Sciences (PNAS): http://www.pnas.org/content/early/2016/02/17/1510825113

This paper appears to be open access.

Finally, the researchers have provided an image illustrating their work,

Caption: Strands of proteins of different lengths move around the chip in the bio computer in directed patterns, a bit like cars and trucks navigating the streets of a city. Credit: Till Korten

ETA Feb. 29 2016: Technical University Dresden’s Feb. 26, 2016 press release on EurekAlert also announces the bio-computer albeit from a rather different perspective,

The pioneering achievement was developed by researchers from the Technische Universität Dresden and the Max Planck Institute of Molecular Cell Biology and Genetics, Dresden in collaboration with international partners from Canada, England, Sweden, the US, and the Netherlands.

Conventional electronic computers have led to remarkable technological advances in the past decades, but their sequential nature (they process only one computational task at a time) prevents them from solving problems of a combinatorial nature, such as protein design and folding, and optimal network routing. This is because the number of calculations required to solve such problems grows exponentially with the size of the problem, rendering them intractable with sequential computing. Parallel computing approaches can in principle tackle such problems, but the approaches developed so far have suffered from drawbacks that have made up-scaling and practical implementation very difficult. The recently reported parallel-computing approach aims to address these issues by combining well-established nanofabrication technology with molecular motors, which are highly energy efficient and inherently work in parallel.

In this approach, which the researchers demonstrate on a benchmark combinatorial problem that is notoriously hard to solve with sequential computers, the problem to be solved is ‘encoded’ into a network of nanoscale channels (Fig. 1a). This is done, on the one hand by mathematically designing a geometrical network that is capable of representing the problem, and on the other hand by fabricating a physical network based on this design using so-called lithography, a standard chip-manufacturing technique.

The network is then explored in parallel by many protein filaments (here actin filaments or microtubules) that are self-propelled by a molecular layer of motor proteins (here myosin or kinesin) covering the bottom of the channels (Fig. 3a). The design of the network automatically guides the filaments to the correct solutions to the problem (Fig. 1b). This is realized by different types of junctions, which cause the filaments to behave in two different ways. Because the filaments are rather rigid structures, turning to the left or right is only possible at certain angles of the crossing channels. By defining these options (‘split junctions’, Fig. 2a + 3b, and ‘pass junctions’, Fig. 2b + 3c), the scientists achieved an ‘intelligent’ network that gives each filament the option either to pass straight through or to choose between two possible channels with 50/50 probability.

The time to solve combinatorial problems of size N using this parallel-computing approach scales approximately as N^2, which is a dramatic improvement over the exponential (2^N) time scales required by conventional, sequential computers. Importantly, the approach is fully scalable with existing technologies and uses orders of magnitude less energy than conventional computers, thus circumventing the heating issues that are currently limiting the performance of conventional computing.
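The split-junction behaviour described in the press release is enough to sketch how such a network computes. Below is a toy simulation, not the researchers’ method: the function name and agent count are my own, and a software loop stands in for the physically parallel filaments. It uses the subset-sum instance {2, 5, 9} that the PNAS paper itself demonstrates: each agent makes a 50/50 choice at one split junction per set element, and the exit it reaches corresponds to the sum of the elements it picked up along the way.

```python
import random

def run_agents(numbers, n_agents=10_000, seed=1):
    """Toy model of filaments exploring a subset-sum network.

    Each agent meets one 'split junction' per set element and either
    adds that element to its running total (one exit of the junction)
    or skips it (the other exit), with 50/50 probability. 'Pass
    junctions' merely let paths cross, so they need no explicit model.
    """
    random.seed(seed)
    exits = set()
    for _ in range(n_agents):
        total = 0
        for n in numbers:
            if random.random() < 0.5:  # split junction: 50/50 choice
                total += n
        exits.add(total)  # exit column reached = sum of the chosen subset
    return exits

# The instance from the PNAS paper: with enough agents, every subset
# sum of {2, 5, 9} is reached: 0, 2, 5, 7, 9, 11, 14, 16.
print(sorted(run_agents([2, 5, 9])))
```

The point of the physical device is that all agents explore at once, so the answer emerges in time roughly proportional to the network size (hence the N^2 scaling), whereas this sequential loop still pays the exponential cost one agent at a time.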

The diagrams mentioned were not included with the press release.