Tag Archives: James Lewis

Surgical nanobots to be tested in humans in 2015?

Thanks to James Lewis at the Foresight Institute’s* blog and his Jan. 6, 2015 posting about an announcement of human clinical trials for surgical nanobots (Note: Links have been removed),

… as structural DNA nanotechnology rapidly expanded the repertoire of atomically precise nanostructures that can be fabricated, it became possible to fabricate functional DNA nanostructures incorporating logic gates to deliver and release molecular cargo for medical applications, as we reported a couple years ago (DNA nanotechnology-based nanorobot delivers cell suicide message to cancer cells). More recently, DNA nanorobots have been coated with lipid to survive immune attack inside the body.

Lewis then notes this (Note: A link has been removed),

 … “Ido Bachelet announces 2015 human trial of DNA nanobots to fight cancer and soon to repair spinal cords“:

At the British Friends of Bar-Ilan University’s event in Otto Uomo October 2014 Professor Ido Bachelet announced the beginning of the human treatment with nanomedicine. He indicates DNA nanobots can currently identify cells in humans with 12 different types of cancer tumors.

A human patient with late stage leukemia will be given DNA nanobot treatment. Without the DNA nanobot treatment the patient would be expected to die in the summer of 2015. Based upon animal trials they expect to remove the cancer within one month.

The information was excerpted from Brian Wang’s Dec. 27, 2014 post on his Nextbigfuture blog,

One Trillion 50 nanometer nanobots in a syringe will be injected into people to perform cellular surgery.

The DNA nanobots have been tuned to not cause an immune response. They have been adjusted for different kinds of medical procedures. Procedures can be quick or ones that last many days.

Using DNA origami and molecular programming, they are reality. These nanobots can seek and kill cancer cells, mimic social insect behaviors, carry out logical operators like a computer in a living animal, and they can be controlled from an Xbox. Ido Bachelet from the bio-design lab at Bar Ilan University explains this technology and how it will change medicine in the near future.
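For readers wondering what “carry out logical operators like a computer” means in practice, here is a purely conceptual sketch of the idea in code (my own illustration, not anything from Bachelet’s lab): the nanorobot behaves like an AND gate, opening and releasing its cargo only when all of its aptamer “locks” recognize their target molecules on a cell surface. The marker names and cargo below are hypothetical placeholders.

```python
# Conceptual sketch only: an aptamer-gated DNA nanorobot modeled as an AND gate.
# Marker names and the cargo are hypothetical placeholders, not the actual
# targets or payloads used in Bachelet's experiments.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LogicGatedNanorobot:
    required_markers: frozenset   # the aptamer "locks" on the robot's clasp
    cargo: str                    # the payload to release

    def encounter(self, cell_surface_markers: set) -> Optional[str]:
        """Release the cargo only if the cell displays every required marker."""
        if self.required_markers <= cell_surface_markers:
            return self.cargo
        return None

robot = LogicGatedNanorobot(
    required_markers=frozenset({"marker_A", "marker_B"}),  # hypothetical keys
    cargo="cell-death signal",
)

print(robot.encounter({"marker_A"}))              # healthy cell: None (stays shut)
print(robot.encounter({"marker_A", "marker_B"}))  # target cell: releases cargo
```

The real robots implement this logic structurally, in the DNA clasp itself, rather than in software, but the input/output behaviour is roughly this AND-gate idea.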

I advise reading both Wang’s and Lewis’ posts in their entirety. To give you a sense of how their posts differ (Lewis is more technical), I solicited information from the websites hosting their blog postings.

Here’s more about Wang from the About page on the Nextbigfuture blog,

Brian L. Wang, M.B.A. is a long time futurist. A lecturer at the Singularity University and Nextbigfuture.com author. He worked on the most recent ten year plan for the Institute for the Future and at a two day Institute for the Future workshop with Universities and City planners in Hong Kong (advising the city of Hong Kong on their future plans). He had a TEDx lecture on Energy. Brian is available as a speaker for corporations and organizations that value accurate and detailed insight into the development of technology global trends.

Lewis provides a contrast (from the About page listing Lewis on the Foresight Institute website),

Jim received a B.A. in chemistry from the University of Pennsylvania in 1967, an M.A. in chemistry from Harvard University in 1968, and a Ph.D. in chemistry, from Harvard University in 1972. After doing postdoctoral research at the Swiss Institute for Experimental Cancer Research, Lausanne, Switzerland, from 1971-1973, Jim did research in the molecular biology of tumor viruses at Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, from 1973-1980, first as a postdoctoral researcher, and then as a Staff Investigator and Senior Staff Investigator. He continued his research as an Associate Member, Basic Sciences Division, Fred Hutchinson Cancer Research Center, Seattle, WA, from 1980-1988, and then joined the Bristol-Myers Squibb Pharmaceutical Research Institute in Seattle, WA, as a Senior Research Investigator from 1988-1996. Since 1996 he has been working as a consultant on nanotechnology.

Getting back to Bachelet, his team’s work, a precursor for this latest initiative, has been featured here before in an April 11, 2014 post,

This latest cockroach item, which concerns new therapeutic approaches, comes from an April 8, 2014 article by Sarah Spickernell for New Scientist (Note: A link has been removed),

It’s a computer – inside a cockroach. Nano-sized entities made of DNA that are able to perform the same kind of logic operations as a silicon-based computer have been introduced into a living animal.

Ido Bachelet can be seen in this February 2014 video describing the proposed surgical nanobots,

Bar-Ilan University, where Bachelet works, is located in Israel. You can find more information about this and other work on the Research group for Bio-Design website.

*The possessive was moved from Foresight to Institute as in Institute’s on Nov. 11, 2015.

Richard Jones and soft nanotechnology

One of the first posts on this blog was about Richard Jones’ nanotechnology book, ‘Soft Machines’. I have a ‘soft’ spot for the book, which I found to be a good introduction to nanotechnology and well written too.

It’s nice to see the book getting some more attention all these years later. In his Aug. 31, 2014 posting on Nanodot (the Foresight Institute’s blog), James Lewis notes that nanomanufacturing has not progressed as some of the early thinkers in this area had hoped,

Long-term readers of Nanodot will be familiar with the work of Richard Jones, a UK physicist and author of Soft Machines: Nanotechnology and Life, reviewed in Foresight Update Number 55 (2005) page 10. Basically Jones follows Eric Drexler’s lead in Engines of Creation in arguing that the molecular machinery found in nature provides an existence proof of an advanced nanotechnology of enormous capabilities. However, he cites the very different physics governing biomolecular machinery operating in an aqueous environment on the one hand, and macroscopic machine tools of steel and other hard metals, on the other hand. He then argues that rigid diamondoid structures doing atomically precise mechanochemistry, as later presented by Drexler in Nanosystems, although at least theoretically feasible, do not form a practical path to advanced nanotechnology. This stance occasioned several very useful and informative debates on the relative strengths and weaknesses of different approaches to advanced nanotechnology, both on his Soft Machines blog and here on Nanodot (for example “Debate with ‘Soft Machines’ continues”, “Which way(s) to advanced nanotechnology?”, “Recent commentary”). An illuminating interview of Richard Jones over at h+ Magazine not only presents Jones’s current views, but spotlights the lack of substantial effort since 2008 in trying to resolve these issues: “Going Soft on Nanotech.”

Lewis goes on to excerpt parts of the H+ interview which pertain to manufacturing and discusses the implications further. (Note: Eric Drexler not only popularized nanotechnology and introduced us to ‘grey goo’ with his book ‘Engines of Creation’, he also founded the Foresight Institute with his then-wife Christine Peterson. Drexler is no longer formally associated with Foresight.)

In the interests of avoiding duplication, I am focusing on the parts of the H+ interview concerning soft machines, synthetic biology, and topics other than manufacturing. From the Nov. 23, 2013 article by Eddie Germino for H+ magazine,

H+: What are “soft machines”?

RJ: I called my book “Soft Machines” to emphasise that the machines of cell biology work on fundamentally different principles to the human-made machines of the macro-world.  Why “soft”?  As a physicist, one of my biggest intellectual influences was the French theoretical physicist Pierre-Gilles de Gennes (1932-2007, Nobel Prize for Physics 1991).  De Gennes popularised the term “soft matter” for those kinds of materials – polymers, colloids, liquid crystals etc – in which the energies with which molecules interact with each other are comparable with thermal energies, making them soft, mutable and responsive.  These are the characteristics of biological matter, so calling the machines of biology “soft machines” emphasises the different principles on which they operate.  Some people will also recognise the allusion to a William Burroughs novel (for whom a soft machine is a human being).

H+: What kind of work have you done with soft machines?

RJ: In my own lab we’ve been working on a number of “soft machine” related problems.  At the near-term end, we’ve been trying to understand what makes the molecules go where when you manufacture a solar cell from solutions of organic molecules – the idea here is that if you understand the self-assembly processes you can get a well-defined nanostructure that gives you a high conversion efficiency with a process you can use on a very large scale very cheaply. Further away from applications, we’ve been investigating a new mechanism for propelling micro- and nano-scale particles in water.  We use a spatially asymmetric chemical reaction so the particle creates a concentration gradient around itself, as a result of which osmotic pressure pushes it along.

H+: Putting aside MNT [molecular nanotechnology], what other design approaches would be most likely to yield advanced nanomachines?

RJ: If we are going to use the “soft machines” design paradigm to make functional nano machines, we have two choices.  We can co-opt what nature does, modifying biological systems to do what we want.  In essence, this is what is underlying the current enthusiasm for synthetic biology.  Or we can make synthetic molecules and systems that copy the principles that biology uses, possibly thereby widening the range of environments in which it will work.  Top-down methods are still enormously powerful, but they will have limits.

H+: So “synthetic biology” involves the creation of a custom-made microorganism built with the necessary organic parts and DNA to perform a desired function. Even if it is manmade, it only uses recognizable, biological parts in its construction, albeit arranged in ways that don’t occur in nature. But the second approach involving “synthetic molecules and systems that copy the principles that biology uses” is harder to understand. Can you give some clarifying examples?

RJ: If you wanted to make a molecular motor to work in water, you could use the techniques of molecular biology to isolate biological motors from cells, and this approach does work.  Alternatively, you could work out the principles by which the biological motor worked – these involve shape changes in the macromolecules coupled to chemical reactions – and try to make a synthetic molecule which would operate on similar principles.  This is more difficult than hacking out parts from a biological system, but will ultimately be more flexible and powerful.

H+: Why would it be more flexible and powerful?

RJ: The problem with biological macromolecules is that biology has evolved very effective mechanisms for detecting them and eating them.  So although DNA, for example, is a marvellous material for building nanostructures and devices from, it’s going to be difficult to use these directly in medicine simply because our cells are very good at detecting and destroying foreign DNA.  So using synthetic molecules should lead to more robust systems that can be used in a wider range of environments.

H+: In spite of your admiration for nanoscale soft machines, you’ve said that manmade technology has a major advantage because it can make use of electricity in ways living organisms can’t. Will soft machines use electricity in the future somehow?

RJ: Biology uses electrical phenomena quite a lot – e.g. in our nervous system – but generally this relies on ion transport rather than coherent electron transport.  Photosynthesis is an exception, as may be certain electron transporting structures recently discovered in some bacteria.  There’s no reason in principle that the principles of self-assembly shouldn’t be used to connect up electronic circuits in which the individual elements are single conducting or semi-conducting molecules.  This idea – “molecular electronics” – is quite old now, but it’s probably fair to say that as a field it hasn’t progressed as fast as people had hoped.
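A quick aside on the propulsion mechanism Jones describes earlier in the interview (a spatially asymmetric reaction that builds a concentration gradient around the particle): at the level of the particle’s trajectory, this kind of self-propelled swimmer is commonly approximated as an “active Brownian particle”, a constant swim speed along an orientation that itself wanders by rotational diffusion. The toy simulation below is my own generic illustration with made-up parameter values, not code from Jones’s lab.

```python
# Toy 2-D active Brownian particle: constant self-propulsion speed v along an
# orientation that undergoes rotational diffusion. Parameter values are
# illustrative, not fitted to any real experiment.

import math
import random

def simulate(steps=10_000, dt=1e-3, v=5.0, D_t=0.1, D_r=1.0, seed=0):
    """Return the final (x, y) position in arbitrary units (e.g. microns)."""
    rng = random.Random(seed)
    x = y = 0.0
    theta = 0.0
    for _ in range(steps):
        # Self-propulsion along the current orientation plus translational noise.
        x += v * math.cos(theta) * dt + math.sqrt(2 * D_t * dt) * rng.gauss(0, 1)
        y += v * math.sin(theta) * dt + math.sqrt(2 * D_t * dt) * rng.gauss(0, 1)
        # The orientation decorrelates on a timescale of roughly 1 / D_r.
        theta += math.sqrt(2 * D_r * dt) * rng.gauss(0, 1)
    return x, y

print(simulate())
```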

Jones also discusses the term nanotechnology and takes a foray into transhumanism and the singularity (from the Germino article),

H+: What do you think of the label “nanotechnology”? Is it a valid field? What do people most commonly misunderstand about it? 

RJ: Nanotechnology, as the term is used in academia and industry, isn’t really a field in the sense that supramolecular chemistry or surface physics are fields.  It’s more of a socio-political project, which aims to do to physical scientists what the biotech industry did to life scientists – that is, to make them switch their focus from understanding nature to intervening in nature by making gizmos and gadgets, and then to try and make money from that.

What I’ve found, doing quite a lot of work in public engagement around nanotechnology, is that most people don’t have enough awareness of nanotechnology to misunderstand it at all.  Among those who do know something about it, I think the commonest misunderstanding is the belief that it will progress much more rapidly than is actually possible.  It’s a physical technology, not a digital one, so it won’t proceed at the pace we see in digital technologies.  As all laboratory-based nanotechnologists know, the physical world is more cussed than the digital one, and the smaller it gets the more cussed it seems to be…

… 

H+: Your thoughts on picotechnology and femtotechnology?

RJ: There’s a roughly inverse relationship between the energy scales needed to manipulate matter and the distance scale at which that manipulation takes place. Manipulating matter at the picometer scale is essentially a matter of controlling electron energy levels in atoms, which involves electron volt energies.  This is something we’ve got quite good at when we make lasers, for example.  Things are more difficult when we go smaller.  To manipulate matter at the nuclear level – i.e. on femtometer length scales – needs MeV energies, while to manipulate matter at the level of the constituents of hadrons – quarks and gluons – we need GeV energies.  At the moment our technology for manipulating objects at these energy scales is essentially restricted to hurling things at them, which is the business of particle accelerators.  So at the moment we really have no idea how to do femtotechnology of any kind of complexity, nor do we have any idea whether there is anything interesting we could do with it if we could.  I suppose the question is whether there is any scope for complexity within nuclear matter.  Perhaps if we were the sorts of beings that lived inside a neutron star or a quark-gluon plasma we’d know.

H+: What do you think of the transhumanist and Singularity movements?

RJ: These are terms that aren’t always used with clearly understood meanings, by me at least.  If by Transhumanism, we are referring to the systematic use of technology to better the lot of humanity, then I’m all in favour.  After all, the modern Western scientific project began with Francis Bacon, who said its purpose was “an improvement in man’s estate and an enlargement of his power over nature”.  And if the essence of Singularitarianism is to say that there’s something radically unknowable about the future, then I’m strongly in agreement.  On the other hand, if we consider Transhumanism and Singularitarianism as part of a belief package promising transcendence through technology, with a belief in a forthcoming era of material abundance, superhuman wisdom and everlasting life, then it’s interesting as a cultural phenomenon.  In this sense it has deep roots in the eschatologies of the apocalyptic traditions of Christianity and Judaism.  These were secularised by Marx and Trotsky, and technologised through, on the one hand, Fyodorov, Tsiolkovsky and the early Russian ideologues of space exploration, and on the other by the British Marxist scientists J.B.S. Haldane and Desmond Bernal.  Of course, the fact that a set of beliefs has a colourful past doesn’t mean they are necessarily wrong, but we should be aware that the deep tendency of humans to predict that their wishes will imminently be fulfilled is a powerful cognitive bias.
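Jones’s “roughly inverse relationship” between length scale and energy scale (from the picotechnology/femtotechnology answer above) can be checked with a quick uncertainty-principle estimate: confining a particle to a distance d costs a kinetic energy of roughly (ħ/d)²/2m, or ħc/d once things turn relativistic. The back-of-envelope arithmetic below is my own, not taken from the interview.

```python
# Back-of-envelope check of the inverse relationship between length scale and
# the energy needed to manipulate matter at that scale.

HBAR = 1.054_571_817e-34   # J*s
C = 2.997_924_58e8         # m/s
EV = 1.602_176_634e-19     # J
M_ELECTRON = 9.109e-31     # kg
M_NUCLEON = 1.675e-27      # kg

def confinement_energy(d, mass):
    """Non-relativistic kinetic energy (J) of a particle confined to size d."""
    p = HBAR / d                # uncertainty-principle momentum scale
    return p**2 / (2 * mass)

# Atomic scale (~100 pm): electron energies of a few eV.
print(confinement_energy(1e-10, M_ELECTRON) / EV)            # ~3.8 eV

# Nuclear scale (~1 fm): nucleon energies of tens of MeV.
print(confinement_energy(1e-15, M_NUCLEON) / (1e6 * EV))     # ~21 MeV

# Quark/gluon scale (~0.1 fm): relativistic, E ~ hbar*c/d, i.e. a few GeV.
print(HBAR * C / 1e-16 / (1e9 * EV))                         # ~2 GeV
```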

Richard goes into more depth about his views on transhumanism and the singularity in an Aug. 24, 2014 posting on his Soft Machines blog,

Transhumanism has never been modern

Transhumanists are surely futurists, if they are nothing else. Excited by the latest developments in nanotechnology, robotics and computer science, they fearlessly look ahead, projecting consequences from technology that are more transformative, more far-reaching, than the pedestrian imaginations of the mainstream. And yet, their ideas, their motivations, do not come from nowhere. They have deep roots, perhaps surprising roots, and following those intellectual trails can give us some important insights into the nature of transhumanism now. From antecedents in the views of the early 20th century British scientific left-wing, and in the early Russian ideologues of space exploration, we’re led back, not to rationalism, but to a particular strand of religious apocalyptic thinking that’s been a persistent feature of Western thought since the middle ages.

The essay that follows is quite dense (many of the thinkers he cites are new to me) so if you’re a beginner in this area, you may want to set some time aside to read this in depth. Also, you will likely want to read the comments which follow the post.

Toggling atomic switches and other talks at the Foresight Institute’s 2013 technical conference

The conference, which took place almost one year ago (Jan. 11-13, 2013 in Palo Alto, California, US), was formally titled the 2013 Foresight Technical Conference: Illuminating Atomic Precision. In a Dec. 2, 2013 posting by James Lewis, the organizers, the Foresight Institute, announced that a number of conference videos have been made available and provided a transcript of sorts for one of the videos,

A select set of videos from the 2013 Foresight Technical Conference: Illuminating Atomic Precision, held January 11-13, 2013 in Palo Alto, have been made available on vimeo. Videos have been posted of those presentations for which the speakers have consented. Other presentations contained confidential information and will not be posted.

Here’s a listing of the 2013 conference presentations made available (click to access the videos),

  • Larry Millstein: Introductory comments at Foresight Technical Conference 2013
  • J. Fraser Stoddart: Introductory comments at Foresight Technical Conference 2013
  • Leonhard Grill: “Assembly and Manipulation of Molecules at the Atomic Scale”
  • John Randall: “Atomically Precise Manufacturing”
  • Philip Moriarty: “Mechanical Atom Manipulation: Towards a Matter Compiler?”
  • David Soloveichik: “DNA Displacement Cascades”
  • Alex Wissner-Gross: “Bringing Computational Programmability to Nanostructured Surfaces”
  • Joseph Puglisi: “Deciphering the Molecular Choreography of Translation”
  • Feynman Awards Banquet at Foresight Technical Conference 2013
  • Gerhard Klimeck: “Multi-Million Atom Simulations for Single Atom Transistor Structures”
  • William Goddard: “Nanoscale Materials, Devices, and Processing Predicted from First Principles” [Note: He’s wearing a jaunty beret, adding a note of style not usually found at technical conferences.]
  • Gerhard Klimeck: “Mythbusting Knowledge Transfer Mechanisms through Science Gateways”
  • Art Olson: “New Methods of Exploring, Analyzing, and Predicting Molecular Interactions”
  • George Church: “Regenesis: Bionano”
  • Dean Astumian: “Microscopic Reversibility: The Organizing Principle for Molecular Machines”
  • Larry Millstein: Closing comments at Foresight Technical Conference 2013

In his Foresight Institute blog posting, Lewis goes on to offer a description of Philip Moriarty’s presentation, “Mechanical Atom Manipulation: Towards a Matter Compiler?”,

Prof. Moriarty presented his work with the qPlus technique of non-contact AFM of semiconductors, using chemical forces to mechanically move atoms around to structure matter, focusing on the tip of the probe—specifically how to optimize the tip structure, and how to return the tip to a previously known state. He begins with a brief review of how non-contact AFM uses a damped, driven oscillator to measure and manipulate what is happening at the level of single chemical bonds. The tip at the end of the oscillating cantilever measures the frequency shift of the cantilever as it approaches and interacts with the surface, and it maintains a constant amplitude of oscillation by pumping energy into the system. The frequency shift provides information about conservative forces acting on the tip, and the amount of energy pumped in gives a handle on non-conservative, or dissipative, forces. Before diving into the experimental details of his own work, Prof. Moriarty noted that various experimental accomplishments have vindicated Eric Drexler’s assertion that single atom chemistry could be done using purely mechanical force.

I found this description to be a beautiful piece of technical writing, although I do have to admit to being distracted by thoughts of Sherlock Holmes on reading “Prof. Moriarty.” One final note: the reference to Eric Drexler in the last sentence of my excerpt caught my eye, as Drexler was a Foresight Institute founder, amongst his many other accomplishments.
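For anyone who wants to put a rough number on the “frequency shift” Lewis describes: in the small-amplitude limit, a tip-sample force gradient acts like an extra spring and detunes the cantilever by Δf ≈ -(f0/2k)·(dF/dz). The sketch below uses typical, made-up qPlus-style parameters; it is my own illustration, not material from Moriarty’s talk.

```python
# Small-amplitude NC-AFM estimate: a tip-sample force gradient dF/dz shifts the
# cantilever resonance by roughly delta_f = -(f0 / (2 k)) * dF/dz.
# The numbers below are illustrative order-of-magnitude qPlus-sensor values.

def frequency_shift(f0_hz, stiffness_n_per_m, force_gradient_n_per_m):
    """Approximate resonance shift (Hz) in the small-amplitude limit."""
    return -(f0_hz / (2.0 * stiffness_n_per_m)) * force_gradient_n_per_m

f0 = 25_000.0      # Hz, qPlus-style resonance (illustrative)
k = 1_800.0        # N/m, sensor stiffness (illustrative)
k_ts = 2.0         # N/m, tip-sample force gradient in the attractive regime (illustrative)

# An attractive interaction softens the effective spring, so the resonance drops.
print(f"{frequency_shift(f0, k, k_ts):.1f} Hz")   # roughly -14 Hz
```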

Brain-to-brain communication, organic computers, and BAM (brain activity map), the connectome

Miguel Nicolelis, a professor at Duke University, has been making international headlines lately with two brain projects. The first one, about implanting a brain chip that allows rats to perceive infrared light, was mentioned in my Feb. 15, 2013 posting. The latest project is a brain-to-brain (rats) communication project as per a Feb. 28, 2013 news release on *EurekAlert,

Researchers have electronically linked the brains of pairs of rats for the first time, enabling them to communicate directly to solve simple behavioral puzzles. A further test of this work successfully linked the brains of two animals thousands of miles apart—one in Durham, N.C., and one in Natal, Brazil.

The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an “organic computer,” which could allow sharing of motor and sensory information among groups of animals. The study was published Feb. 28, 2013, in the journal Scientific Reports.

“Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought,” said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. “In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?'”

Ben Schiller in a Mar. 1, 2013 article for Fast Company describes both the latest experiment and the work leading up to it,

First, two rats were trained to press a lever when a light went on in their cage. Press the right lever, and they would get a reward–a sip of water. The animals were then split in two: one cage had a lever with a light, while another had a lever without a light. When the first rat pressed the lever, the researchers sent electrical activity from its brain to the second rat. It pressed the right lever 70% of the time (more than half).

In another experiment, the rats seemed to collaborate. When the second rat didn’t push the right lever, the first rat was denied a drink. That seemed to encourage the first to improve its signals, raising the second rat’s lever-pushing success rate.

Finally, to show that brain-communication would work at a distance, the researchers put one rat in a cage in North Carolina, and another in Natal, Brazil. Despite noise on the Internet connection, the brain-link worked just as well–the rate at which the second rat pushed the lever was similar to the experiment conducted solely in the U.S.

The Duke University Feb. 28, 2013 news release, the origin for the news release on EurekAlert, provides more specific details about the experiments and the rats’ training,

To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals’ brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.

One of the two rodents was designated as the “encoder” animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the “decoder” animal.

The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.

The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable based on success rates of sending signals directly to the decoder rat’s brain.

Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. The result of this peculiar contingency, said Nicolelis, led to the establishment of a “behavioral collaboration” between the pair of rats.

“We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right,” Nicolelis said. “The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward.”

In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow or wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.

The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.

To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.

“So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate,” said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. “This tells us that it could be possible to create a workable network of animal brains distributed in many different locations.”
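The news release calls a 65 percent success rate “significantly above chance” without giving the trial counts. For readers who want to see how such a claim is checked, here is a generic one-sided binomial test; the trial number used below is a made-up placeholder, not the actual count from the study.

```python
# Generic one-sided binomial test: probability of seeing at least `successes`
# correct choices in `trials` attempts if the decoder rat were merely guessing
# (chance = 0.5 with two choices). The trial count here is hypothetical.

from math import comb

def p_value_at_least(successes, trials, chance=0.5):
    """P(X >= successes) for X ~ Binomial(trials, chance)."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(successes, trials + 1)
    )

trials = 100        # hypothetical number of trials
successes = 65      # a 65% success rate, as reported in the news release
print(p_value_at_least(successes, trials))   # ~0.002, well below 0.05
```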

Will Oremus in his Feb. 28, 2013 article for Slate seems a little less buoyant about the implications of this work,

Nicolelis believes this opens the possibility of building an “organic computer” that links the brains of multiple animals into a single central nervous system, which he calls a “brain-net.” Are you a little creeped out yet? In a statement, Nicolelis adds:

We cannot even predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves.

That sounds far-fetched. But Nicolelis’ lab is developing quite the track record of “taking science fiction and turning it into science,” says Ron Frostig, a neurobiologist at UC-Irvine who was not involved in the rat study. “He’s the most imaginative neuroscientist right now.” (Frostig made it clear he meant this as a compliment, though skeptics might interpret the word less charitably.)

The most extensive coverage I’ve given Nicolelis and his work (including the Walk Again project) was in a March 16, 2012 post titled, Monkeys, mind control, robots, prosthetics, and the 2014 World Cup (soccer/football), although there are other mentions including in this Oct. 6, 2011 posting titled, Advertising for the 21st Century: B-Reel, ‘storytelling’, and mind control. By the way, Nicolelis hopes to have a paraplegic individual (using technology he is developing for the Walk Again project) deliver the opening kick at the 2014 World Cup (soccer/football) games in Brazil.

While there’s much excitement about Nicolelis and his work, there are other ‘brain’ projects being developed in the US including the Brain Activity Map (BAM), which James Lewis notes in his Mar. 1, 2013 posting on the Foresight Institute blog,

A proposal alluded to by President Obama in his State of the Union address [Feb. 2013] to construct a dynamic “functional connectome” Brain Activity Map (BAM) would leverage current progress in neuroscience, synthetic biology, and nanotechnology to develop a map of each firing of every neuron in the human brain—a hundred billion neurons sampled on millisecond time scales. Although not the intended goal of this effort, a project on this scale, if it is funded, should also indirectly advance efforts to develop artificial intelligence and atomically precise manufacturing.
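To get a feel for why this proposal strikes people as “insanely ambitious” (Blum’s phrase, quoted below), it helps to put rough numbers on recording every spike from every neuron. The arithmetic below is my own back-of-envelope sketch, not figures from Lewis’s post or the BAM proposal; the sampling resolution and bits per sample are assumptions.

```python
# Back-of-envelope data rate for a whole-brain "functional connectome":
# every neuron sampled on millisecond time scales. The sampling rate and
# bits-per-sample values are illustrative assumptions, not from the proposal.

NEURONS = 1e11          # "a hundred billion neurons"
SAMPLE_RATE_HZ = 1e3    # millisecond time scale ~ 1 kHz per neuron
BITS_PER_SAMPLE = 1     # even a bare spike/no-spike bit per sample

bits_per_second = NEURONS * SAMPLE_RATE_HZ * BITS_PER_SAMPLE
petabytes_per_day = bits_per_second * 86_400 / 8 / 1e15

print(f"{bits_per_second:.0e} bits/s")     # 1e14 bits/s, i.e. 100 terabits/s
print(f"{petabytes_per_day:,.0f} PB/day")  # roughly 1,000 petabytes per day
```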

As Lewis notes in his posting, there’s an excellent description of BAM and other brain projects, as well as a discussion about how these ideas are linked (not necessarily by individuals but by the overall direction of work being done in many labs and in many countries across the globe) in Robert Blum’s Feb. (??), 2013 posting titled, BAM: Brain Activity Map Every Spike from Every Neuron, on his eponymous blog. Blum also offers an extensive set of links to the reports and stories about BAM. From Blum’s posting,

The essence of the BAM proposal is to create the technology over the coming decade to be able to record every spike from every neuron in the brain of a behaving organism. While this notion seems insanely ambitious, coming from a group of top investigators, the paper deserves scrutiny. At minimum it shows what might be achieved in the future by the combination of nanotechnology and neuroscience.

In 2013, as I write this, two European Flagship projects have just received funding for one billion euro each (1.3 billion dollars each). The Human Brain Project is an outgrowth of the Blue Brain Project, directed by Prof. Henry Markram in Lausanne, which seeks to create a detailed simulation of the human brain. The Graphene Flagship, based in Sweden, will explore uses of graphene for, among others, creation of nanotech-based supercomputers. The potential synergy between these projects is a source of great optimism.

The goal of the BAM Project is to elaborate the functional connectome of a live organism: that is, not only the static (axo-dendritic) connections but how they function in real-time as thinking and action unfold.

The European Flagship Human Brain Project will create the computational capability to simulate large, realistic neural networks. But to compare the model with reality, a real-time, functional, brain-wide connectome must also be created. Nanotech and neuroscience are mature enough to justify funding this proposal.

I highly recommend reading Blum’s technical description of neural spikes; understanding that concept, or any other in his post, doesn’t require an advanced degree. Note: Blum holds a number of degrees and diplomas including an MD (neuroscience) from the University of California at San Francisco and a PhD in computer science and biostatistics from California’s Stanford University.

The Human Brain Project has been mentioned here previously. The most recent mention is in a Jan. 28, 2013 posting about its newly gained status as one of two European Flagship initiatives (the other is the Graphene initiative) each meriting one billion euros of research funding over 10 years. Today, however, is the first time I’ve encountered the BAM project and I’m fascinated. Luckily, John Markoff’s Feb. 17, 2013 article for The New York Times provides some insight into this US initiative (Note: I have removed some links),

The Obama administration is planning a decade-long scientific effort to examine the workings of the human brain and build a comprehensive map of its activity, seeking to do for the brain what the Human Genome Project did for genetics.

The project, which the administration has been looking to unveil as early as March, will include federal agencies, private foundations and teams of neuroscientists and nanoscientists in a concerted effort to advance the knowledge of the brain’s billions of neurons and gain greater insights into perception, actions and, ultimately, consciousness.

Moreover, the project holds the potential of paving the way for advances in artificial intelligence.

What I find particularly interesting is the reference back to the human genome project, which may explain why BAM is also referred to as a ‘connectome’.

ETA Mar.6.13: I have found a Human Connectome Project Mar. 6, 2013 news release on EurekAlert, which leaves me confused. This does not seem to be related to BAM, although the articles about BAM did reference a ‘connectome’. At this point, I’m guessing that BAM and the ‘Human Connectome Project’ are two related but different projects and the reference to a ‘connectome’ in the BAM material is meant generically.  I previously mentioned the Human Connectome Project panel discussion held at the AAAS (American Association for the Advancement of Science) 2013 meeting in my Feb. 7, 2013 posting.

* Corrected EurkAlert to EurekAlert on June 14, 2013.

University of Windsor (Canada) chemists and molecular machines

Thanks to Instapundit (June 30, 2012 item) for the heads up regarding work being done at the University of Windsor (Ontario, Canada) by a team of chemists led by Nick Vukotic.

The University of Windsor News Daily’s June 16, 2012 item provides more detail (Note: I have removed links),

A graduate student and his team of researchers have turned the chemistry world on its ear by becoming the first ever to prove that tiny interlocked molecules can function inside solid materials, laying the important groundwork for the future creation of molecular machines.

“Until now, this has only ever been done in solution,” explained Chemistry & Biochemistry PhD student Nick Vukotic, lead author on a front page article recently published in the June issue of the journal Nature Chemistry. “We’re the first ones to put this into a solid state material.”

Here’s how they do it (from the UW June 16, 2012 item [links removed]),

The material Vukotic is referring to is UWDM-1, or University of Windsor Dynamic Material, a powdery substance that the team made which contains rotaxane molecules and binuclear copper centers. The rotaxane molecules, which resemble a wheel around the outside of an axle, were synthesised in their lab. The group found that heating of these rotaxane molecules with a copper source resulted in the formation of a crystalline material which contained a structured arrangement of the rotaxane molecules, spaced out by the binuclear copper centers.

“Basically, they self-assemble in to this arrangement,” said Vukotic, who works under the tutelage of chemistry professor Steve Loeb. Other team members include professor Rob Schurko, and post-doctoral fellows Kristopher Harris and Kelong Zhu.

Heating the material causes the wheels to rapidly rotate around the axles, while cooling the material causes the wheels to stop, he said. The entire process can’t be viewed with a microscope, so the motion was confirmed in Dr. Schurko’s lab using a process called nuclear magnetic resonance spectroscopy.

“You can actually measure the motion and you can do it unambiguously by placing an isotopic tag on the ring,” explained Dr. Harris, who helped oversee that verification process.

This image may help you better visualize these molecular machines,

This schematic shows how the various elements assemble themselves into mechanically interlocked molecules. (Courtesy University of Windsor)
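The heating/cooling behaviour described above (wheels spinning rapidly when warm, effectively stopped when cold) is what you would expect from a thermally activated process: an Arrhenius rate, k = A·exp(-Ea/RT), changes by orders of magnitude over a modest temperature range. The barrier and attempt frequency in the sketch below are hypothetical round numbers of my own choosing, not values measured for UWDM-1.

```python
# Arrhenius estimate of a thermally activated ring-rotation rate,
# k = A * exp(-Ea / (R * T)). The barrier height and attempt frequency below
# are hypothetical illustrative values, not measurements on UWDM-1.

import math

R = 8.314            # J/(mol*K)
A = 1e12             # 1/s, typical molecular attempt frequency (illustrative)
EA = 45e3            # J/mol, hypothetical rotation barrier (~45 kJ/mol)

def rotation_rate(temp_k):
    """Approximate rotations per second at temperature temp_k."""
    return A * math.exp(-EA / (R * temp_k))

for T in (200, 300, 400):
    print(T, "K:", f"{rotation_rate(T):.2e} rotations/s")
# 200 K: ~2e+00 /s  (nearly frozen on the NMR timescale)
# 300 K: ~1e+04 /s
# 400 K: ~1e+06 /s  (rapid rotation)
```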

James Lewis over at the Foresight Institute blog, where they have a very strong interest in molecular machines, commented in a June 26, 2012 posting,

A key component of exploratory engineering studies for molecular manufacturing or productive nanosystems is the ability to model molecular systems reliably. Modeling motions of molecules in solution is very difficult. A method to produce molecular machines in a solid state environment is a huge step forward.

DARPA’s Living Foundries and advanced nanotechnology via synthetic biology

This is not a comfortable topic for a lot of people, but James Lewis, in a May 26, 2012 posting on the Foresight Institute blog, comments on some developments in the DARPA (US Defense Advanced Research Projects Agency) Living Foundries program (Note: I have removed a link),

Synthetic biology promises near-term breakthroughs in medicine, materials, and energy, and is also one promising development pathway leading to advanced nanotechnology and a general capability for programmable, atomically-precise manufacturing. Darpa (US Defense Advanced Research Projects Agency) has launched a new program [Living Foundries] that could greatly accelerate progress in synthetic biology by creating a library of standardized, modular biological units that could be used to build new devices and circuits.

If Darpa’s Living Foundries program achieves its ambitious goals, it should create a methodology, toolbox, and a large group of practitioners ready to pursue a synthetic biology pathway to building complex molecular machine systems, and eventually, atomically precise manufacturing systems.

DARPA opened solicitations for this program on Sept. 2, 2011 and issued a series of award notices from May 17, 2012 to May 31, 2012. Here’s a description of the program from the DARPA Living Foundries project webpage,

The Living Foundries Program seeks to create the engineering framework for biology, speeding the biological design-build-test cycle and expanding the complexity of systems that can be engineered. The Program aims to develop new tools, technologies and methodologies to decouple biological design from fabrication, yield design rules and tools, and manage biological complexity through abstraction and standardization.  These foundational tools would enable the rapid development of previously unattainable technologies and products, leveraging biology to solve challenges associated with production of new materials, novel capabilities, fuel and medicines. For example, one motivating, widespread and currently intractable problem is that of corrosion/materials degradation. The DoD must operate in all environments, including some of the most corrosively aggressive on Earth, and do so with increasingly complex heterogeneous materials systems. This multifaceted and ubiquitous problem costs the DoD approximately $23 Billion per year. The ability to truly program and engineer biology, would enable the capability to design and engineer systems to rapidly and dynamically prevent, seek out, identify and repair corrosion/materials degradation.

Accomplishing this vision requires an approach that is more than multidisciplinary – it requires a new engineering discipline built upon the integration of new ideas, approaches and tools from fields spanning computer science and electrical engineering to chemistry and the biological sciences.  The best innovations will introduce new architectures and tools into an open technology platform to rapidly move new designs from conception to execution.

Performers must ensure and demonstrate throughout the program that all methods and demonstrations of capability comply with national guidance for manipulation of genes and organisms and follow all guidance for biological safety and Biosecurity.
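The “standardized, modular biological units” idea is easier to grasp with a toy example of what design abstraction means in practice: genetic parts (promoter, ribosome binding site, coding sequence, terminator) treated as interchangeable components that can be composed into a device according to simple design rules. The sketch below is a generic illustration of the concept, not DARPA’s actual toolchain or any real parts registry; the part names and sequences are hypothetical.

```python
# Toy illustration of "design abstraction" in synthetic biology: standardized
# parts composed into a device, with the design decoupled from any particular
# fabrication (DNA synthesis/assembly) method. Part names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    name: str
    role: str        # "promoter", "rbs", "cds", or "terminator"
    sequence: str    # placeholder DNA sequence

def compose_device(*parts: Part) -> str:
    """Check that parts follow a simple design rule, then concatenate them."""
    expected = ["promoter", "rbs", "cds", "terminator"]
    if [p.role for p in parts] != expected:
        raise ValueError("parts must be ordered promoter -> rbs -> cds -> terminator")
    return "".join(p.sequence for p in parts)

device = compose_device(
    Part("P_hypothetical", "promoter", "TTGACA..."),
    Part("RBS_hypothetical", "rbs", "AGGAGG..."),
    Part("anti_corrosion_enzyme", "cds", "ATG..."),   # made-up payload
    Part("T_hypothetical", "terminator", "TTATT..."),
)
print(device)   # the assembled design, ready to hand off to DNA synthesis
```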

Katie Drummond in her May 22, 2012 posting on the Wired website’s Danger Room blog makes note of the awarded contracts (Note: I have removed the links),

Now, Darpa’s handed out seven research awards worth $15.5 million to six different companies and institutions. Among them are several Darpa favorites, including the University of Texas at Austin and the California Institute of Technology. Two contracts were also issued to the J. Craig Venter Institute. Dr. Venter is something of a biology superstar: He was among the first scientists to sequence a human genome, and his institute was, in 2010, the first to create a cell with entirely synthetic genome.

In total, nine contracts were awarded as of May 31, 2012. MIT (Massachusetts Institute of Technology) was awarded two, while Stanford University, Harvard University, and the Foundation for Applied Molecular Evolution were each awarded one.

The J. Craig Venter Institute received a total of almost $4M for two separate contracts ($964,572 and $3,007,321). Interestingly, Venter has just been profiled in the New York Times magazine in a May 30, 2012 article by Wil S. Hylton with nary a mention of this new project (I realize the print version couldn’t be revised but surely they could have managed a note online). The opening paragraphs sound like a description of the Living Foundries project for people who don’t specialize in reading government documents,

In the menagerie of Craig Venter’s imagination, tiny bugs will save the world. They will be custom bugs, designer bugs — bugs that only Venter can create. He will mix them up in his private laboratory from bits and pieces of DNA, and then he will release them into the air and the water, into smokestacks and oil spills, hospitals and factories and your house.

Each of the bugs will have a mission. Some will be designed to devour things, like pollution. Others will generate food and fuel. There will be bugs to fight global warming, bugs to clean up toxic waste, bugs to manufacture medicine and diagnose disease, and they will all be driven to complete these tasks by the very fibers of their synthetic DNA.

This is not a critical or academic analysis of Venter’s approach to biology, synthetic or otherwise, but it does offer an in-depth profile and, given Venter’s prominence in the field of synthetic biology, it’s a worthwhile read.